\section{Introduction} \label{Intro:sec} Trading bit rate for a better link budget, LPWANs provide long-range wireless connectivity to IoT devices~\cite{centenaro2015long, margelis2015low, vangelista2015long}. Such networks are a promising alternative to traditional cellular or multi-hop networks and are envisioned to provide nationwide connectivity over the industrial, scientific and medical (ISM) bands to battery-powered IoT devices that transmit small amounts of data over long periods of time, \emph{e.g.,} water \& gas meters. Thanks to the long range, the IoT devices can communicate directly with the base stations in a star topology. Random access schemes such as Aloha, in which multiple devices access frequency resources with neither carrier sensing nor contention mechanisms, are commonly used in LPWANs~\cite{Abr09, TanWet11}. This reduces the communication overhead and the packet air time, but it increases the risk of collisions between packets that overlap in the time domain. Multiple works have been dedicated to interference modelling in the time domain~\cite{ware2001modelling, arnbak1987capacity, cheun1998joint, lee2007experimental}. In~\cite{cheun1998joint}, the product of the power and the overlap duration between a transmission of interest and an interfering transmission represents the quantity of interference; the sum over multiple interfering transmissions then gives the total interference. Note that interference modelling of transmissions overlapping in the frequency domain is also well studied in partially overlapped channels (POC) scenarios~\cite{mishra2006partially, villegas2007effect, xing2009multi}, which are commonly used in networks such as IEEE 802.11. The interference factor in the POC case is evaluated as the accumulated energy in the overlapped frequency band~\cite{mishra2006partially}. 
It has been proven that the use of POC can indeed improve the network throughput in comparison to common orthogonal channelization schemes~\cite{xing2009multi, mishra2006partially}. Our work differs from the aforementioned interference models in that it is the first, to the best of our knowledge, to consider joint overlapping in both the time and frequency domains. Our model, based on stochastic geometry, is high-level and flexible and can be adapted to multiple scenarios. In section \ref{sec:Rework}, existing works on LPWAN performance evaluation are introduced. In section \ref{Model:sec}, our interference modelling approach is described and expressions of the $\operatorname{SINR}$, outage probability and network throughput are given. Then, in section \ref{Proba:sec}, we give the results on the probabilistic evaluation of overlapping. Finally, in section \ref{App:sec}, the developed model is used to study the performance of two different LPWA technologies, Sigfox{\textsuperscript \textregistered} and LoRaWAN{\textsuperscript \textregistered}. Section \ref{conclu:sec} concludes the article and introduces some research perspectives. \section{Related Work} \label{sec:Rework} Multiple works exist on the performance evaluation of LPWANs \cite{mikhaylov2016analysis, goursaud2016random, adelantado2016understanding, reynders2016range}. Most of these works use Poisson processes to model the packet arrivals, which we believe is not the best suited to periodic packet sending scenarios in LPWANs. For example, a device reporting on a daily basis would not send more than one message per day. However, as a Poisson process models the intensity of packet arrivals, an intensity of one message per day represents in fact a mean value, {\it i.e.},\xspace one message on average per day, which does not match the described behaviour. In \cite{mikhaylov2016analysis}, a multiple-annuli LoRaWAN{\textsuperscript \textregistered} cell structure is modelled and illustrated with a few applicative scenarios. 
This structure is considered in our article, but channel effects and the capture effect are added to our model, making it more complete and realistic. In \cite{goursaud2016random}, the performance of a random Frequency Division Multiple Access (random FDMA) scheme is studied in the pure Aloha case, but the capture effect in the presence of small overlaps between packets is not considered. In \cite{reynders2016range}, the performance in terms of packet delivery ratio and throughput of LoRaWAN{\textsuperscript \textregistered} and Sigfox{\textsuperscript \textregistered} is simulated. However, the simulation process and the numerous network parameters are not sufficiently exposed, which limits transparency and reusability. In \cite{adelantado2016understanding}, some interesting insights on the limits of LoRaWAN{\textsuperscript \textregistered} are given, but again the model is based on a Poisson process and cannot be extended to account for the capture effect. In this article, in order to give the limit of the performance, we study the outage probability and throughput of LoRaWAN{\textsuperscript \textregistered} and Sigfox{\textsuperscript \textregistered} when every node transmits as frequently as possible, according to either the ISM band duty cycle constraints or technology-related constraints, which results in message sending periods on the order of 1 to 10 minutes. This saturation-throughput scenario corresponds, for instance, to packet and object tracking systems~\cite{li2015internet}. Other less frequent IoT scenarios, such as water \& gas metering, can be evaluated with our model thanks to its flexibility. \section{The ``cards tossing'' model} \label{Model:sec} \subsection{Assumptions} $N$ devices $I_k$, with $k = 0,1 \ldots N-1$, share limited time-frequency resources denoted by $[0 \; T] \times [0 \; F]$ to send packets to a single base station. $F$ is the bandwidth and $T$ the message sending period. $I_0$ is the user of interest. 
We denote by $g_k(t,f)$ the packet sent by user $I_k$. We make the following assumptions: \begin{enumerate} \item\label{limited:as} Messages from all the senders have the same rectangle-shaped time-frequency support, since they are limited in time duration, denoted by $\Delta t$, and bandwidth occupancy, denoted by $\Delta f$. Then $g_k$ takes the form $g_k(t,f) = m_k(t,f) \, \mathbbm{1}_{I_k}(t,f)$, where $\mathbbm{1}_A$ stands for the indicator function of set $A$ and $I_k = [t_k \; t_k + \Delta t] \times [f_k \; f_k + \Delta f]$ denotes the time-frequency support of node $I_k$ (for the sake of simplicity, we use the same notation $I_k$ for the sender itself and the time-frequency support of its message); $t_k$ is the initial time of transmission and $f_k$ the lowest frequency of the packet, with $t_k \in \left[0;T -\Delta t \right]$ and $f_k \in \left[0;F -\Delta f \right]$ (see figure~\ref{Aloha:fig}). \item\label{independent:as} There is no cooperation between the devices, \emph{i.e.,} random access is considered. The couples $(t_k,f_k)$ are thus independent. They are also assumed to be uniformly distributed, as this is likely optimal for dispersing the packets and avoiding collisions. \item\label{uniform:as} Given the support $I_k$, the time-frequency energy of the information packet is uniformly distributed over $I_k$, \emph{i.e.,} $\mathbb{E}\left[|g_k(t,f)|^2 \, | \, I_k\right] = \rho_k \mathbbm{1}_{I_k}(t,f)$, where $\rho_k$ is the energy density. \item\label{noise:as} The channel is affected by an additive time-frequency white noise $\xi(t,f)$ of energy density $\gamma$, \emph{i.e.,} $\mathbb{E}\left[|\xi(t,f)|^2\right] = \gamma$. \end{enumerate} Assumption~\ref{uniform:as} corresponds to the ideal and most efficient way of using time-frequency resources~\cite{mishra2006partially}. In practice, a transmit spectrum mask is usually applied to specify the upper limit of permissible power and attenuate the signal outside the mask. 
An attenuation of $30$ to $50\si{dB}$ is observed in real-world scenarios~\cite{mishra2006partially}, so this assumption can be considered realistic. Notice that the common Aloha scenario is encompassed in this formalism by fixing $\Delta f = F$. In this case, $f_k = 0$ but $t_k$ remains random. \subsection{$\operatorname{SINR}$ expression} \label{QuantitiesInterest:sec} As mentioned in the introduction, interference can be modelled as the accumulated energy in the time-frequency domain, calculated as the sum of the energies coming from the different interfering transmissions. The $\operatorname{SINR}$ is defined as the ratio between the energy of the message of interest, $\displaystyle \int_{I_0} \mathbb{E}\left[|g_0(t,f)|^2\right] \, \mathrm{d}t \,\mathrm{d}f$, and the interference $\displaystyle \sum_{k=1}^{N-1} \int_{I_0 \cap I_k} \mathbb{E}\left[|g_k(t,f)|^2\right] \, \mathrm{d}t \,\mathrm{d}f$ plus noise, \emph{i.e.,} \begin{equation} \operatorname{SINR} = \frac{\rho_0 \, \Delta t \, \Delta f}{\sum_{k=1}^{N-1}\rho_k \, S_{k} + \gamma \, \Delta t \, \Delta f} \end{equation} where $S_k = \mu\left( I_0 \cap I_k \right)$ is the overlapping surface between the transmission of interest $I_0$ and an interfering one $I_k$ ($\mu$ is the surface measure). We can normalize $S_k$ with respect to the surface of the time-frequency support of the transmissions, {\it i.e.},\xspace $X_k = \frac{S_k}{\Delta t \, \Delta f}$. Thus, the $\operatorname{SINR}$ can be recast as \begin{equation} \operatorname{SINR} = \frac{\rho_0 }{\sum_{k=1}^{N-1}\rho_k \, X_{k} + \gamma} \label{SINR_XS:eq} \end{equation} The overlapping phenomenon between packets is similar to a game in which players toss cards onto a table and try to recognize their own cards afterwards (see figure \ref{Aloha:fig}). When there are too many players, the probability of overlapping increases to the point that it becomes highly probable that a card cannot be recognized. 
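The normalized overlap $X_k$ and the resulting $\operatorname{SINR}$ of \eqref{SINR_XS:eq} can be sketched numerically. The following minimal Python sketch uses purely illustrative values of $T$, $F$, $\Delta t$ and $\Delta f$ (not tied to any technology discussed later) and uniform energy densities:

```python
import random

# Illustrative resource block and packet dimensions (hypothetical values).
T, F = 100.0, 40.0   # total time and bandwidth of the resource block
DT, DF = 1.0, 0.1    # packet duration dt and bandwidth df

def overlap(p0, pk):
    """Normalized overlap surface X_k between two rectangular packets.

    Each packet is given by (t, f), the lower-left corner of its
    [t, t+DT] x [f, f+DF] time-frequency support.
    """
    dt = max(0.0, min(p0[0] + DT, pk[0] + DT) - max(p0[0], pk[0]))
    df = max(0.0, min(p0[1] + DF, pk[1] + DF) - max(p0[1], pk[1]))
    return (dt * df) / (DT * DF)

def sinr(rho, x, gamma):
    """SINR = rho_0 / (sum_k rho_k * X_k + gamma), as in the text."""
    return rho[0] / (sum(r * xi for r, xi in zip(rho[1:], x)) + gamma)

random.seed(1)
N = 1000  # number of devices
packets = [(random.uniform(0, T - DT), random.uniform(0, F - DF))
           for _ in range(N)]
x = [overlap(packets[0], p) for p in packets[1:]]
print(sinr([1.0] * N, x, gamma=0.01))
```

The `overlap` helper directly mirrors the definition $X_k = S_k / (\Delta t \, \Delta f)$: two packets interfere only when their supports intersect in both time and frequency.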
In the next subsection, we illustrate, in two different scenarios, the interest of our ``cards tossing'' model for the derivation of the outage probability and throughput of wireless systems as a function of the number of devices $N$. \begin{figure}[ht] \centerline{\includegraphics[width=.85\columnwidth]{Aloha}} \caption{Illustration of the ``cards tossing'' game, where each rectangle represents an information packet $I_k$. The rectangle in bold represents the transmission of interest, \emph{i.e.,} the packet $I_0$, while the gray areas depict the sub-regions in collision (the darker the area, the larger the number of ``cards'' covering the sub-region). The light blue area is defined as the non-border area $\left[\Delta t;T-2\Delta t\right] \times \left[\Delta f;F-2\Delta f\right]$, denoted by $\overline{B}$. The light green area is defined as the border area $\left[0;T-\Delta t\right] \times \left[0;F-\Delta f\right] \backslash \overline{B}$. These two areas constitute $\left[0;T-\Delta t\right] \times \left[0;F-\Delta f\right]$.} \label{Aloha:fig} \end{figure} \subsection{Outage and throughput model} \subsubsection{With multipath fading and path loss} \label{sec:outage1} The first scenario is when the packets from the devices suffer from path loss and fading. $\rho_k$ can thus be expressed as $\rho_k = \rho_{tm} h_k\,l(r_k)$. \begin{itemize} \item $\rho_{tm}$ models the transmission energy density, which is supposed to be identical for all devices. \item $l(r_k)$ models the distance-dependent attenuation, \emph{e.g.,} path loss, where $r_k$ is the Euclidean distance between device $k$ and the base station. We choose the following non-singular model \cite{aljuaid2010investigating}, expressed as $l(r_k) = \alpha \left[\max(r_k, r_c)\right]^{-\beta}$, where $r_c$ is a critical distance that prevents $l(r_k)$ from diverging when $r_k$ tends to 0. Here we fix it to $1 \si{m}$. $\alpha$ is a constant modelling system-level losses and gains, which is fixed to 1 in our study. 
$\beta$ is the path loss exponent, assumed to be greater than 2. \item $h_k$ is a random variable modelling small-scale, large-scale or composite distance-independent fading (we simply write $h$ for the fading $h_0$ of the device of interest). We suppose that $\sqrt{h_k}$ results from Rayleigh multipath fading, which gives $h_k$ an exponential cumulative distribution function (cdf), {\it i.e.},\xspace $P_H(h) = 1 - \exp(-h/\lambda)$ (with $P$ denoting the cdf and $\mathbb{E}(h_k) = \lambda$). The mean value $\lambda$ is fixed to 1 in our study. Note that other forms of fading can be considered with our paradigm. \end{itemize} The $\operatorname{SINR}$ can thus be recast as follows, \begin{equation} \operatorname{SINR} = \frac{h}{\frac{\sum_{k=1}^{N-1}r_k^{-\beta} \, h_k \, X_{k}}{r_0^{-\beta}} + \frac{\gamma}{\rho_{tm} \,r_0^{-\beta}}} \end{equation} In order to study the distance distribution of the devices in the cell, we define $r_{\max}$ as the distance to the base station of the most distant devices, in the sense that their transmissions barely satisfy the target $\operatorname{SINR}$, denoted by $\zeta$, in the presence of path loss only, {\it i.e.},\xspace $\frac{\rho_{tm} r_{\max}^{-\beta}}{\gamma} = \zeta$. This gives us $r_{\max} = \sqrt[\beta]{\frac{\rho_{tm}}{\zeta \gamma}}$. Let us denote by $\Pr$ the probability measure. Suppose that every packet is repeated $n_{\mathrm{rep}}$ times and that the repetitions are independent. The outage probability can then be defined as $\left[\Pr\left[\operatorname{SINR} < \zeta\right]\right]^{n_{\mathrm{rep}}}$, which is further expressed as follows, \begin{equation} \operatorname{OP}_{n_{\mathrm{rep}}}(r_0) = \left[\Pr\left[h < \frac{ r_{\max} ^{-\beta}}{ r_0^{-\beta}} + \frac{\zeta \sum_{k=1}^{N-1}r_k^{-\beta} \, h_k \, X_{k}}{r_0^{-\beta}}\right]\right]^{n_{\mathrm{rep}}} \label{eq:OPr0rep} \end{equation} Note that the repetition mechanism is a commonly used scheme in LPWANs to trade efficiency for transmission robustness. 
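The outage probability \eqref{eq:OPr0rep} can be estimated by Monte Carlo simulation. The sketch below uses purely illustrative parameter values (they are not the Sigfox{\textsuperscript \textregistered} settings used later) and assumes uniformly distributed packet positions and unit-mean exponential fading, as in the text:

```python
import math
import random

random.seed(0)

# Illustrative values (NOT the paper's technology parameters).
beta = 3.6                 # path loss exponent
zeta = 10.0                # target SINR (linear scale)
r_c, r_max = 1.0, 5000.0   # inner / outer cell radius (m)
N = 100                    # number of devices
Nt, Nf = 350, 400          # N_t = T/dt, N_f = F/df
n_rep = 3                  # number of repetitions
trials = 2000              # Monte Carlo trials

def sample_r():
    # Inverse-transform sampling of p_R(r) = 2r / (r_max^2 - r_c^2).
    return math.sqrt(r_c ** 2 + random.random() * (r_max ** 2 - r_c ** 2))

def sample_x():
    # Normalized overlap X_k: tau and phi are scaled |U - U'| differences
    # of two i.i.d. uniform positions.
    tau = (Nt - 1) * abs(random.random() - random.random())
    phi = (Nf - 1) * abs(random.random() - random.random())
    return max(0.0, 1 - tau) * max(0.0, 1 - phi)

def outage(r0):
    """Monte Carlo estimate of OP_{n_rep}(r0), fading + path loss case."""
    fail = 0
    for _ in range(trials):
        interf = sum(sample_r() ** -beta * random.expovariate(1.0) * sample_x()
                     for _ in range(N - 1))
        h = random.expovariate(1.0)  # Rayleigh power fading, mean 1
        if h < (r_max ** -beta + zeta * interf) / r0 ** -beta:
            fail += 1
    return (fail / trials) ** n_rep

print(outage(0.5 * r_max), outage(0.98 * r_max))
```

As expected from the model, the estimate grows sharply as $r_0$ approaches $r_{\max}$, since the fading margin $\Pr[h < (r_0/r_{\max})^{\beta}]$ then dominates.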
Naturally, $\operatorname{OP}_{n_{\mathrm{rep}}}(r_0)$ depends on the position of the device of interest: distant devices with larger $r_0$ suffer from a greater $\operatorname{OP}_{n_{\mathrm{rep}}}(r_0)$. Suppose that the devices are uniformly distributed in the cell of the base station, which is defined as an annulus of inner radius $r_c$ and outer radius $r_{\max}$. The probability density function (pdf) of $r_k$ can then be expressed as $p_R(r) = \frac{2r}{r_{\max}^2 - r_c^2}$ ($p$ is used to denote the pdf; the index $k$ is omitted for the sake of simplicity). The global outage probability $\overline{\operatorname{OP}}_{n_{\mathrm{rep}}}$ is defined as the outage probability averaged over $r_0$, \emph{i.e.,} \begin{equation} \overline{\operatorname{OP}}_{n_{\mathrm{rep}}} = \int_{r_c}^{r_{\max}} \operatorname{OP}_{n_{\mathrm{rep}}}(r_0)\, \frac{2r_0}{r_{\max}^2 - r_c^2} dr_0 \label{eq:OPavg} \end{equation} The effective throughput, denoted by $\it{Th}(n_{\mathrm{rep}})$, is defined as the average number of non-repeated packets received per unit time and can be expressed as follows, \begin{equation} \it{Th}(n_{\mathrm{rep}}) = \frac{N(1 - \overline{\operatorname{OP}}_{n_{\mathrm{rep}}})}{T \, n_{\mathrm{rep}}} \label{eq:Throughput} \end{equation} Recall that $N$ is the number of devices and $T$ the message sending period. In the case of pure Aloha, packets are considered lost whenever they collide in the time or frequency domain, {\it i.e.},\xspace whenever $X_{\Sigma} = \sum_{k=1}^{N-1} X_k \neq 0$. 
Assuming $X_k$ and $r_k$ independent, our outage probability can be recast as, \begin{multline} \operatorname{OPAloha}_{n_{\mathrm{rep}}}(r_0) = \\ \left[\Pr\left[ X_{\Sigma} \neq 0 \right] + \Pr\left[ X_{\Sigma} = 0 \right] \Pr\left[ h < \frac{ r_\max^{-\beta}}{r_0^{-\beta}} \right] \right]^{n_{\mathrm{rep}}} \label{eq:OPAloha} \end{multline} One can observe that \eqref{eq:OPAloha} is greater than \eqref{eq:OPr0rep}, as \eqref{eq:OPr0rep} includes the capture effect, {\it i.e.},\xspace certain packets are not considered lost even in case of collision. $\overline{\operatorname{OPAloha}}_{n_{\mathrm{rep}}}$ and $\it{ThAloha}(n_{\mathrm{rep}})$ can be calculated in a similar way. By definition, $\overline{\operatorname{OPAloha}}_{n_{\mathrm{rep}}} \geq \overline{\operatorname{OP}}_{n_{\mathrm{rep}}}$ and $\it{ThAloha}(n_{\mathrm{rep}}) \leq \it{Th}(n_{\mathrm{rep}})$. \subsubsection{With perfect power control} \label{sec:powerC} We consider another scenario in which the packets of the different devices are supposed to arrive at the base station with identical energy density, {\it i.e.},\xspace $\rho_0 = \rho_k = \rho$, thanks to some power control mechanism. The $\operatorname{SINR}$ can be recast in this case as, \begin{equation} \operatorname{SINR} = \frac{1}{X_{\Sigma} + \operatorname{SNR}^{-1}} \label{eq:SINRTPC} \end{equation} where $\operatorname{SNR} = \frac{\rho}{\gamma}$. The outage probability becomes, \begin{equation} \operatorname{OP}_{n_{\mathrm{rep}}} = \left[\Pr\left[X_{\Sigma} \ge \zeta^{-1} - \operatorname{SNR}^{-1} \right]\right]^{n_{\mathrm{rep}}} \label{eq:OP1D} \end{equation} In this case the unfairness between devices in terms of distance to the base station is resolved: $\operatorname{OP}_{n_{\mathrm{rep}}}$ no longer depends on $r_0$, but only on $\gamma$, the target $\operatorname{SINR}$ $\zeta$ and $\rho$, the energy density resulting from the power control. 
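The power-control outage \eqref{eq:OP1D} is straightforward to estimate by Monte Carlo, since it only involves the sum of overlaps $X_{\Sigma}$. A minimal sketch, with illustrative (hypothetical) values of $N_t$, $N_f$, $N$, $\zeta$ and $\operatorname{SNR}$:

```python
import random

random.seed(1)

# Illustrative values (not from the paper), on a linear scale.
Nt, Nf = 100, 50   # N_t = T/dt, N_f = F/df
N = 200            # number of devices
zeta, snr = 2.0, 100.0
n_rep = 2
trials = 2000

def sample_x():
    # Normalized overlap X_k between the packet of interest and interferer k,
    # with uniform packet positions.
    tau = (Nt - 1) * abs(random.random() - random.random())
    phi = (Nf - 1) * abs(random.random() - random.random())
    return max(0.0, 1 - tau) * max(0.0, 1 - phi)

def outage_power_control():
    """Monte Carlo estimate of OP = Pr[X_sum >= 1/zeta - 1/SNR]^n_rep."""
    thr = 1.0 / zeta - 1.0 / snr
    fail = sum(sum(sample_x() for _ in range(N - 1)) >= thr
               for _ in range(trials))
    return (fail / trials) ** n_rep

print(outage_power_control())
```

Note that outage only occurs when the accumulated overlap exceeds $\zeta^{-1} - \operatorname{SNR}^{-1}$, so with sparse traffic (small collision probability) the estimate is close to zero.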
The average throughput can be recast as $\it{Th}(n_{\mathrm{rep}}) = \frac{N(1 - \operatorname{OP}_{n_{\mathrm{rep}}})}{T \, n_{\mathrm{rep}}}$. The quantities to be simulated are listed in table~\ref{tab:simulQ}. In both scenarios, we first need the probabilistic distributions of $X_k$ and $X_{\Sigma}$, which we derive in the next section. In the case of multipath fading and path loss, the exact distribution of $\sum_{k=1}^{N-1}r_k^{-\beta} \, h_k \, X_{k}$ remains difficult to evaluate, even with the distribution of $X_k$ derived and $r_k$, $h_k$ and $X_k$ assumed to be independent random variables; we evaluate it with the Monte Carlo method. \section{Probabilistic Evaluations} \label{Proba:sec} The results on $X_k$ and $X_{\Sigma}$ differ between the 1D and 2D cases. We first give the results on $X_k$ in the easier 1D case in section \ref{sec:1D}, {\it i.e.},\xspace $\Delta f = F$, so that interference happens only when there is overlap in the time domain. Physical layer technologies such as spread spectrum fall into this case. The results on $X_k$ in the more complicated 2D case are then given in section \ref{sec:2D}, where overlapping can happen in both the time and frequency domains; the random FDMA approach belongs to this case. The results on $X_{\Sigma}$ are given in section \ref{sec:sigma}. We denote by $p_{X_k}$ (resp. $p_{X_{\Sigma}}$) the pdf of $X_k$ (resp. $X_{\Sigma}$), and by $P_{X_k}$ (resp. $P_{X_{\Sigma}}$) the cumulative distribution function (cdf) of $X_k$ (resp. $X_{\Sigma}$). The overlapped surface $X_k$ between two packets is determined by their relative position in $\left[0;T\right] \times \left[0;F\right]$. Recall that $t_k$ and $f_k$ are defined over $\left[ 0 \; T- \Delta t \right]$ and $\left[ 0 \; F- \Delta f \right]$. 
We can thus define $\tau_k = \frac{|t_k-t_0|}{\Delta t}$ and $\varphi_k = \frac{|f_k-f_0|}{\Delta f}$ as the normalized absolute time and frequency differences between emissions $I_0$ and $I_k$, see figure \ref{Surface2:fig}. Denoting $N_t = \frac{T}{\Delta t}$ and $N_f = \frac{F}{\Delta f}$, $\tau_k$ and $\varphi_k$ are defined over $\left[ 0 \;N_t-1 \right]$ and $\left[ 0 \; N_f-1 \right]$ respectively. From assumption \ref{independent:as}, the couples $(\tau_k,\varphi_k)$ have identical distributions. For the sake of brevity, we omit the index $k$ in the expressions, \emph{i.e.,} the pdf of $(\tau_k,\varphi_k)$ (resp. $\tau_k$ and $\varphi_k$) is denoted by $p_{\tau,\varphi}$ (resp. $p_{\tau}$ and $p_{\varphi}$), and the cdf by $P_{\tau,\varphi}$ (resp. $P_{\tau}$ and $P_{\varphi}$). \begin{figure}[ht] \centerline{\includegraphics[width=0.85\columnwidth]{Surface2}} \vspace{-1mm} \caption{When an emission $I_k$ collides with $I_0$, the overlapped surface, represented in gray, is $(\Delta t - |t_k-t_0|) (\Delta f - |f_k-f_0|)$.} \label{Surface2:fig} \end{figure} \subsection{1D ``cards tossing'' game} \label{sec:1D} In the 1D game, $f_0 = f_k = 0$, so that $\varphi_k = 0$; $f_k$ and $\varphi_k$ become deterministic and independent of $t_k$. We have \begin{equation} X_k = (1-\tau_k) \mathbbm{1}_{[0 \; 1)}(\tau_k) \end{equation} One can easily deduce that \begin{equation} \Pr[X_k > x] = \int_0^{1-x} p_{\tau}(u) du \end{equation} which gives $\Pr[X_k > x] = P_{\tau}(1-x)$, and the probability of collision $p_c = \Pr[X_k > 0]$ is given by $P_{\tau}(1)$. In the case where the $t_k$ are assumed uniformly distributed over $\left[ 0 \; T- \Delta t \right]$, the cdf of $X_k$ can be derived as follows, \begin{equation} P_{X_k} = 1 - \Pr[X_k > x] = 1 - \frac{(2 N_t-3 + x) (1-x)}{(N_t-1)^2} \end{equation} where $x \in [0 \; 1)$. The pdf $p_{X_k}$ is obtained by differentiating $P_{X_k}$ for $x \ne 0$, together with the point mass $p_{X_k} (0) = P_{X_k} (0) = 1 - \frac{2 N_t-3}{(N_t-1)^2}$. 
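The closed-form 1D cdf above can be checked empirically. The following sketch, with an illustrative value of $N_t$, draws pairs of uniform packet positions and compares the empirical cdf of $X_k$ with the formula:

```python
import random

random.seed(0)

Nt = 20          # N_t = T / dt (illustrative value)
trials = 200000  # Monte Carlo samples

def cdf_1d(x):
    """Closed-form cdf of X_k in the 1D game, valid for x in [0, 1)."""
    return 1 - (2 * Nt - 3 + x) * (1 - x) / (Nt - 1) ** 2

def sample_x():
    # tau = |t_k - t_0| / dt with t_0, t_k i.i.d. uniform over [0, T - dt],
    # i.e. (Nt - 1) times the absolute difference of two uniforms.
    tau = (Nt - 1) * abs(random.random() - random.random())
    return max(0.0, 1 - tau)

samples = [sample_x() for _ in range(trials)]
for x in (0.0, 0.25, 0.5, 0.75):
    empirical = sum(s <= x for s in samples) / trials
    print(x, round(empirical, 4), round(cdf_1d(x), 4))
```

At $x = 0$ the formula yields $\Pr[X_k = 0] = 1 - (2N_t-3)/(N_t-1)^2$, the probability of no collision, and the empirical values agree to within Monte Carlo noise.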
\subsection{2D ``cards tossing'' game} \label{sec:2D} Recall that $t_k$ and $f_k$ are assumed independent and uniformly distributed over $\left[ 0 \; T- \Delta t \right]$ and $\left[ 0 \; F- \Delta f \right]$, and that $N_t = \frac{T}{\Delta t}$ and $N_f = \frac{F}{\Delta f}$. A simple look at the geometrical configuration plotted in figure~\ref{Surface2:fig} allows us to express the normalized surface as \begin{equation} X_k = (1-\tau_k) (1-\varphi_k) \mathbbm{1}_{[0 \; 1)^2}(\tau_k,\varphi_k) \end{equation} Thus, for $x \in [0 \; 1)$, we have $\Pr[X_k > x] = \Pr[(1-\tau_k) (1-\varphi_k) > x]$, from which we immediately get, \begin{equation} \Pr[X_k > x] = \int_0^{1-x} \! \left( \int_0^{1-\frac{x}{1-u}} \!\! p_{\tau,\varphi}(u,v) \, dv \right) du \label{N2_ccdf:eq} \end{equation} where the bound of the outer integral comes from the fact that when $u \ge 1-x$, $1-\frac{x}{1-u} \le 0$ and the inner integral is zero. The probability of collision is $p_c = P_{\tau,\varphi}(1,1)$. When $t_k$ and $f_k$ are independent and uniformly distributed over $\left[ 0 \; T- \Delta t \right]$ and $\left[ 0 \; F- \Delta f \right]$, some long algebra leads to, \begin{equation}\begin{array}{l} \displaystyle P_{X_k} = 1 - \frac{(a + b \, x) (1-x) + (c + x) x \ln x}{(N_t-1)^2 (N_f-1)^2} \\[2mm] \mbox{with} \\[2mm] \left\{\begin{array}{lll} a & = & (2 N_t-3)(2 N_f-3)\\[2mm] b & = & 9 - 2 N_t - 2 N_f\\[2mm] c & = & 2 \, (N_t-2) (N_f-2) \end{array}\right. \end{array}\end{equation} The same procedure as in the 1D game yields $p_{X_k}$. \subsection{Probabilistic evaluation of $X_{\Sigma}$} \label{sec:sigma} Let us first consider the probabilistic evaluation of $X_{\Sigma}$ in the 2D case. Notice that $\left[0;T-\Delta t\right] \times \left[0;F-\Delta f\right]$ can be divided into two areas, \emph{i.e.,} the non-border area denoted by $\overline{B}$ and the border area denoted by $B$. 
We have $\overline{B} = \left[\Delta t;T-2\Delta t\right] \times \left[\Delta f;F-2\Delta f\right]$ and $B = \left[0;T-\Delta t\right] \times \left[0;F-\Delta f\right] \backslash \overline{B}$, see figure \ref{Aloha:fig}. For the sake of brevity, we also denote by $\overline{B}$ (resp. $B$) the event that $(t_0, f_0)$ falls into the non-border area $\overline{B}$ (resp. the border area $B$). In fact, $p_{X_k \vert \overline{B}} \ne p_{X_k \vert B}$, because a packet in $\overline{B}$ has a greater chance of being corrupted, as interfering packets can come from all directions, whereas a packet in $B$ cannot be interfered with from certain positions of $(t_k, f_k)$ beyond the border, resulting in a smaller probability of being corrupted. In sections \ref{sec:1D} and \ref{sec:2D}, we could have separated the derivations over $\overline{B}$ and $B$ and obtained the same results. For $P_{X_{\Sigma}}$, instead of evaluating it separately over $\overline{B}$ and $B$, we use the following approximation, \begin{align} P_{X_{\Sigma}} & = P_{X_{\Sigma} \vert \overline{B}} \Pr(\overline{B}) + P_{X_{\Sigma} \vert B} \Pr(B) \nonumber \\ & \approx P_{X_{\Sigma} \vert \overline{B}} \nonumber \\ & = P_{X_k \vert \overline{B}} * p_{X_k \vert \overline{B}}^{(N-2)*} \nonumber \\ & \approx P_{X_k} * p_{X_k}^{(N-2)*} \label{eq:somme} \end{align} where $*$ stands for convolution and $(N-2)*$ for the $(N-2)$-fold convolution, so that all $N-1$ interfering overlaps $X_k$ are accounted for. When $\Pr(\overline{B}) \gg \Pr(B)$ (this hypothesis is realistic when $T \gg \Delta t$ and $F \gg \Delta f$, which is verified in most LPWAN scenarios~\cite{centenaro2015long, margelis2015low, vangelista2015long, mikhaylov2016analysis, do2014benefits, adelantado2016understanding, reynders2016range}), the first approximation is clearly valid. The second approximation comes from $P_{X_k} = P_{X_k \vert \overline{B}} \Pr(\overline{B}) + P_{X_k \vert B} \Pr(B) \approx P_{X_k \vert \overline{B}}$, and similarly for $p_{X_k}$. 
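The convolution approximation of $P_{X_{\Sigma}}$ can be evaluated numerically by discretizing the distribution of $X_k$ (including its point mass at zero) and self-convolving the resulting pmf. A minimal sketch with illustrative (hypothetical) $N_t$, $N_f$ and $N$, using the 2D closed-form cdf:

```python
import math

# Illustrative small game (not the paper's parameters).
Nt, Nf = 30, 30   # N_t = T/dt, N_f = F/df
N = 5             # number of devices: X_sum is a sum of N-1 overlaps
nbins = 400       # discretization of [0, 1]
dx = 1.0 / nbins

def cdf_2d(x):
    """Closed-form cdf of X_k in the 2D game (uniform t_k, f_k), x in [0, 1)."""
    a = (2 * Nt - 3) * (2 * Nf - 3)
    b = 9 - 2 * Nt - 2 * Nf
    c = 2 * (Nt - 2) * (Nf - 2)
    log_term = (c + x) * x * math.log(x) if x > 0 else 0.0
    return 1 - ((a + b * x) * (1 - x) + log_term) / ((Nt - 1) ** 2 * (Nf - 1) ** 2)

# Discretized pmf of X_k: bin i carries the mass of (i*dx, (i+1)*dx];
# bin 0 also carries the point mass Pr[X_k = 0].
pmf = [cdf_2d(dx)] + [cdf_2d((i + 1) * dx) - cdf_2d(i * dx)
                      for i in range(1, nbins)]

def convolve(p, q):
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

# pmf of X_sum = X_1 + ... + X_{N-1}: the (N-2)-fold self-convolution.
pmf_sum = pmf
for _ in range(N - 2):
    pmf_sum = convolve(pmf_sum, pmf)

def cdf_sum(x):
    """Approximate cdf of X_sum from the discretized pmf."""
    return sum(pmf_sum[: int(x / dx) + 1])

print(cdf_sum(0.0), cdf_sum(0.5))
```

As a sanity check, `cdf_sum(0.0)` is close to $(1-p_c)^{N-1}$, the probability that none of the $N-1$ interferers collides with the packet of interest.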
We use the $P_{X_k}$ and $p_{X_k}$ obtained in the case of independent and uniform distributions in sections \ref{sec:1D} and \ref{sec:2D} to evaluate (\ref{eq:somme}). The reasoning and the evaluation of $X_{\Sigma}$ in the 1D case are similar and thus omitted. \section{Application} \label{App:sec} Let us now illustrate how our model can be used to evaluate the performance of two LPWA technologies. \subsection{Sigfox{\textsuperscript \textregistered}} \subsubsection{2D ``cards tossing'' parameters} First, we consider Sigfox{\textsuperscript \textregistered}, an LPWA technology based on Ultra Narrow Band (UNB) transmission~\cite{goursaud2016random}. A packet occupies a bandwidth $\Delta f$ of only around 100\si{Hz}. In doing so, the noise power $\gamma \Delta f$ is greatly reduced and the transmission range is thus increased. In the physical layer, binary phase-shift keying (BPSK) is used. In the medium access control (MAC) layer, a random FDMA scheme is adopted~\cite{do2014benefits}: due to the transmitter oscillator's jitter, it is not possible to channelize, so a packet is transmitted at a randomly chosen frequency in the available frequency band of 40\si{kHz}. Sigfox{\textsuperscript \textregistered} also limits the number of messages per node to 140 messages per day, which corresponds to about one message every 617\si{s}~\cite{reynders2016range}. $T$ is fixed to $617\si{s}$, {\it i.e.},\xspace devices transmit as frequently as possible. The maximal allowed payload size per packet is 12 bytes. With the preamble and cyclic redundancy check (CRC) fields, the transmission duration $\Delta t$ of a packet is around 1.76\si{s}~\cite{goursaud2015dedicated}. Note that these $\Delta t$ and $T$ satisfy the European Telecommunications Standards Institute (ETSI) requirement of a 1\% duty cycle in the 868\si{MHz} band~\cite{etsi}. 
The ``cards tossing'' game of Sigfox{\textsuperscript \textregistered} falls into the 2D case described in section \ref{sec:2D}, and its parameters, {\it i.e.},\xspace $\Delta f$, $F$, $\Delta t$ and $T$, are given in table \ref{tab:lora}. \subsubsection{Outage and throughput model for Sigfox{\textsuperscript \textregistered}} To date, no power control mechanism has been reported for Sigfox{\textsuperscript \textregistered}, so the model introduced in section \ref{sec:outage1} is chosen. The parameters $\rho_{tm}$, $\gamma$, $\zeta$, $\beta$ and $r_{\max}$ are also listed in table \ref{tab:lora}; they are derived as follows. The maximum transmission power in the 868\si{MHz} band is fixed to 14\si{dBm}, {\it i.e.},\xspace $\rho_{tm} \Delta f = 14\si{dBm}$~\cite{etsi}. Thanks to UNB, Sigfox{\textsuperscript \textregistered} benefits from a reduced noise floor of around -154\si{dBm}, {\it i.e.},\xspace $\gamma \Delta f = -154\si{dBm}$. This gives a link budget of around 168\si{dB}. Let us consider a reception threshold of 8\si{dB}, a shadow fading margin of 10\si{dB} and a penetration loss of around 15\si{dB} for an urban environment. This gives a target $\operatorname{SINR}$ of around 33\si{dB}, {\it i.e.},\xspace $\zeta(\si{dB}) = 33\si{dB}$. Finally, let us consider a path loss exponent $\beta$ of 3.6 for an urban environment. All of these parameters give an $r_{\max}$ of 5.2\si{km} for the urban scenario. 
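The link-budget derivation above can be reproduced with a few lines of arithmetic. The sketch below plugs in the dB values quoted in the text; because those figures are rounded, the closed form $r_{\max} = \sqrt[\beta]{\rho_{tm}/(\zeta\gamma)}$ lands near, but not exactly on, the 5.2\si{km} quoted for the urban scenario:

```python
# Link-budget sketch using the dB values quoted in the text.
tx_dbm = 14.0            # rho_tm * df: maximum transmission power
noise_dbm = -154.0       # gamma * df: UNB noise floor
rx_threshold_db = 8.0    # reception threshold
shadow_db = 10.0         # shadow fading margin
penetration_db = 15.0    # urban penetration loss
beta = 3.6               # path loss exponent

zeta_db = rx_threshold_db + shadow_db + penetration_db  # target SINR (dB)
budget_db = tx_dbm - noise_dbm                          # link budget (dB)
# r_max solves rho_tm * r^{-beta} / gamma = zeta, i.e.
# beta * 10 * log10(r_max) = budget_db - zeta_db  (with alpha = 1, r in m).
r_max_m = 10 ** ((budget_db - zeta_db) / (10 * beta))
print(zeta_db, budget_db, round(r_max_m))
```

This recovers the 33\si{dB} target $\operatorname{SINR}$ and the 168\si{dB} link budget, and gives an $r_{\max}$ on the order of a few kilometres.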
\begin{table}[ht] \centering \caption{Model parameters for Sigfox{\textsuperscript \textregistered} and LoRaWAN{\textsuperscript \textregistered}} \renewcommand{\arraystretch}{1.2} \begin{tabular}{|l|c|c|} \hline & Sigfox{\textsuperscript \textregistered} & LoRa{\textsuperscript \textregistered} \\ \hline Outage and throughput model & With path loss & Perfect power\\ & and fading & control \\ \hline ``cards tossing'' model & 2D & 1D \\ \hline Transmission power & 14 & See section \ref{sec:PCLoRaWAN} \\ $\rho_{tm} \Delta f$ (\si{dBm}) & & \\ \hline Noise floor $\gamma \Delta f$ (\si{dBm}) & -154 & -117 \\ \hline Target $\operatorname{SINR}$ $\zeta$ (\si{dB}) & 33 & See table \ref{tab:loraSF} \\ \hline Path loss exponent $\beta$ & 3.6 & 3.6 \\ \hline Range $r_{\max}$ (\si{km}) & 5.2 & See table \ref{tab:loraSF} \\ \hline Cell form & Single annulus & Multiple annuli \\ \hline Payload size (bytes) & 12 & See table \ref{tab:loraSF} \\ \hline Application bit rate (\si{bits/s}) & 54.5 & See table \ref{tab:loraSF} \\ \hline $F$ (\si{kHz}) & 40 & 125 \\ \hline $\Delta f$ & 100\si{Hz} & 125\si{kHz} \\ \hline $T$ (\si{s}) & 617 & See table \ref{tab:loraSF} \\ \hline $\Delta t$ (\si{s}) & 1.76 & See table \ref{tab:loraSF} \\ \hline Number of channels & 1 & 3 \\ \hline \end{tabular} \label{tab:lora} \end{table} \subsubsection{Simulation} Taking all the parameters of Sigfox{\textsuperscript \textregistered}, the formulae \eqref{eq:OPr0rep}, \eqref{eq:OPavg}, \eqref{eq:Throughput} and \eqref{eq:OPAloha}, expressing the first six quantities listed in table \ref{tab:simulQ} (for the definition of $\it{SF}$, see section \ref{sec:lora}), are simulated and the results are given in figures \ref{fig:OP12} and \ref{fig:OPSigfox}. 
\begin{table}[ht] \centering \caption{Table of simulated quantities} \renewcommand{\arraystretch}{1.1} \begin{tabular}{|l|l|} \hline $\operatorname{OP}_{n_{\mathrm{rep}}}(r_0)$ & Outage probability as a function of $r_0$ and $n_{\mathrm{rep}}$\\ \hline $\overline{\operatorname{OP}}_{n_{\mathrm{rep}}}$ & Global outage probability \\ & averaged over $r_0$ or $SF$ \\ \hline $\it{Th}(n_{\mathrm{rep}})$ & Average effective throughput as a function of $n_{\mathrm{rep}}$ \\ \hline $\operatorname{OPAloha}_{n_{\mathrm{rep}}}(r_0)$ & Outage probability in the pure Aloha scenario \\ & as a function of $r_0$ and $n_{\mathrm{rep}}$ \\ \hline $\overline{\operatorname{OPAloha}}_{n_{\mathrm{rep}}}$ & Global outage probability in the pure \\ & Aloha scenario, averaged over $r_0$ or $SF$ \\ \hline $\it{ThAloha}(n_{\mathrm{rep}})$ & Average effective throughput as a function of $n_{\mathrm{rep}}$ \\ & in the pure Aloha scenario \\ \hline $\operatorname{OP}_{n_{\mathrm{rep}}}(SF)$ & Outage probability as a function of $SF$ and $n_{\mathrm{rep}}$ \\ \hline $\it{Th}_{SF}(n_{\mathrm{rep}})$ & Average effective throughput for a given $SF$ \\ & as a function of $n_{\mathrm{rep}}$ \\ \hline \end{tabular} \label{tab:simulQ} \end{table} \begin{figure}[h!] \centerline{\includegraphics[width=\columnwidth]{OP12.pdf}} \caption{Outage probability as a function of $r_0$ and $N$. The solid lines represent the pure Aloha case, the dashed lines the case with capture effect. The series of curves in each sub-figure represent, from top to bottom, $N = 30000, 20000, 10000, 1$.} \label{fig:OP12} \end{figure} \begin{figure}[h!] \centerline{\includegraphics[width=\columnwidth]{OPThroughput.pdf}} \caption{Global outage probability and average effective throughput over one hour for the Sigfox{\textsuperscript \textregistered} scenario. The five colors in each sub-figure represent, from top to bottom, $n_{\mathrm{rep}} = 1,3,5,7,9$. 
The solid lines represent the pure Aloha case, the dashed lines the case with capture effect.} \label{fig:OPSigfox} \end{figure} Figure \ref{fig:OP12} shows that the capture effect, represented by the gap between the solid and dashed lines, decreases with the distance $r_0$: naturally, devices nearer to the base station have more chances to benefit from the capture effect. Repetitions do reduce the outage probability. Figure \ref{fig:OPSigfox} shows that $\overline{\operatorname{OP}}_{n_{\mathrm{rep}}}$ is not greatly reduced in the capture case compared to the pure Aloha case. The improvement in $\it{Th}(n_{\mathrm{rep}})$ increases with $N$, as more collisions happen for higher $N$, thus amplifying the capture effect; however, the high-collision regime is not the optimal operating zone for low-power devices. The device density $p_R(r)$ increases with the distance $r_0$ and distant devices barely benefit from the capture effect, so devices at the cell edge are probably the bottleneck of the network performance. One can also observe that repetitions reduce the global outage probability but also result in a lower effective throughput because of the introduced redundancy. Our abacuses make it possible to find the optimal $n_{\mathrm{rep}}$ as a function of the target outage probability, $N$, the throughput and the energy cost. \subsection{LoRaWAN{\textsuperscript \textregistered}} \label{sec:lora} \subsubsection{1D ``cards tossing'' parameters} LoRaWAN{\textsuperscript \textregistered} is another LPWA technology, based on spectrum spreading \cite{lora}. The spreading factor is denoted by $SF$ and can vary from 6 to 12. Every packet is spread over the whole available bandwidth $F$, {\it i.e.},\xspace $\Delta f = F$. In Europe, 3 default channels are used, each with a bandwidth of $125\si{kHz}$~\cite{lora}. Different $SF$ result in different bit rates: the smaller the $SF$, the higher the bit rate. 
Different payload sizes are also specified for different $SF$. $\Delta t_{SF}$ can thus be calculated; the details can be found in~\cite{mikhaylov2016analysis, lora2}. $T_{SF}$ is fixed to $100 \Delta t_{SF}$ according to the ETSI duty cycle constraint of $1\%$~\cite{etsi} in the 868\si{MHz} band. Transmissions with different $SF$ are considered orthogonal and do not interfere with each other~\cite{mikhaylov2016analysis, reynders2016range, lora2}, so the ``cards tossing'' game of LoRaWAN{\textsuperscript \textregistered} can be seen as seven orthogonal and parallel 1D games as described in section \ref{sec:1D}, each having three orthogonal channels of bandwidth $F = 125\si{kHz}$ and its own $\Delta t_{SF}$ and $T_{SF}$. The corresponding game parameters are listed in tables \ref{tab:lora} and \ref{tab:loraSF}~\cite{mikhaylov2016analysis,lora2}. \subsubsection{Multiple annuli cell structure} \label{sec:multiAnnuli} The greater the $SF$, the lower the associated sensitivity threshold~\cite{lora2}. This translates into a smaller required $\zeta_{SF}$ for greater $SF$: transmissions with a greater $SF$ can thus reach further. $r_{SF}$ is defined as the maximum distance that a transmission with a given $SF$ can barely reach, with only path loss considered, {\it i.e.},\xspace $\frac{\rho_{tm} r_{SF}^{-\beta}}{\gamma} = \operatorname{SNR} = \zeta_{SF}$, where $\rho_{tm} \Delta f$ takes the maximum allowed power $14\si{dBm}$~\cite{etsi}. The noise power is around $-117\si{dBm}$. Let us add a shadow margin of 10 \si{dB} as well as a penetration loss of around 15 \si{dB}, which gives the $\zeta_{SF}$ for the different $SF$, see table \ref{tab:loraSF}. The path loss exponent $\beta$ is fixed to the same value 3.6 as in the Sigfox{\textsuperscript \textregistered} scenario. With the maximal allowed transmission power $\rho_t \Delta f$ of 14 \si{dBm}, the communication ranges $r_{SF}$ in terms of path loss for the different $SF$ are thus calculated and listed in table \ref{tab:loraSF}.
These are the ranges at which the reception $\operatorname{SNR} = \frac{\rho_t r_{SF}^{-\beta}}{\gamma}$ barely satisfies $\zeta_{SF}$, in the presence of path loss only and without considering fading and interference, {\it i.e.},\xspace $\zeta_{SF}^{-1}-\operatorname{SNR}^{-1}=0$. Farther devices should use a greater $SF$ simply to reach the base station, while nearer devices can benefit from the higher bit rate of a smaller $SF$. Let us consider an ideally pre-configured network where all the devices located in the annulus defined by $r_{SF}$ and $r_{{SF}-1}$ take spreading factor $SF$ (for $SF \, 6$, it is the annulus between $r_6$ and $r_c$), so that they use the smallest possible $SF$, and thus the highest possible bit rate, while guaranteeing the communication range at the same time~\cite{mikhaylov2016analysis, adelantado2016understanding}. The probability of a node falling into a certain annulus, denoted by $p_{SF}$, is proportional to its surface~\cite{adelantado2016understanding}. The number of devices taking a certain $SF$ is thus simply $N p_{SF}$. $\zeta_{SF}$, $r_{SF}$ and $p_{SF}$ are listed in table \ref{tab:loraSF}. A LoRaWAN{\textsuperscript \textregistered} network supports over-the-air activation of the nodes, which requires a node to open 2 successive downlink windows after an uplink transmission, in order to receive the MAC layer commands from the network~\cite{lora}. Also, the LoRaWAN{\textsuperscript \textregistered} network infrastructure can manage the $SF$ and data rate by means of an ADR (Adaptive Data Rate) scheme, which also requires the node to listen to gateway downlink transmissions. Note that the downlink and uplink in LoRaWAN{\textsuperscript \textregistered} share the same channels, so the collision phenomenon may be aggravated by the use of the downlink. The pre-configured network that we consider is the scenario without nodes listening to the gateway, {\it i.e.},\xspace there are only uplink transmissions.
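The multiple annuli structure can be illustrated numerically. The short Python sketch below (illustrative only, not the simulation code used for the figures) derives the ranges $r_{SF}$ from the thresholds $\zeta_{SF}$: since $\frac{\rho_{tm} r^{-\beta}}{\gamma} = \zeta_{SF}$, the ranges scale as $\zeta_{SF}^{-1/\beta}$, and the path-loss constant, which is not needed here, is absorbed by anchoring $r_{12}$ to $3.16$ \si{km}; the annulus probabilities $p_{SF}$ then follow from the ring surfaces, the innermost region being treated as a full disk ($r_c \approx 0$).

```python
# Illustrative sketch: recompute r_SF and p_SF of the LoRaWAN scenario.
# Assumption: ranges scale as zeta^{-1/beta}; the unspecified path-loss
# constant is absorbed by anchoring r_12 = 3.16 km (table value).

beta = 3.6  # path loss exponent, as in the Sigfox scenario
zeta_db = {6: 21, 7: 18, 8: 15, 9: 12, 10: 9, 11: 7, 12: 5}  # dB
r12 = 3.16  # km, range of SF 12

# r_SF / r_12 = 10 ** ((zeta_12 - zeta_SF) / (10 * beta)) in dB form
r = {sf: r12 * 10 ** ((zeta_db[12] - z) / (10 * beta))
     for sf, z in zeta_db.items()}

# p_SF: surface of the annulus [r_{SF-1}, r_SF] over the cell surface
p, prev = {}, 0.0
for sf in range(6, 13):
    p[sf] = (r[sf] ** 2 - prev ** 2) / r12 ** 2
    prev = r[sf]

for sf in range(6, 13):
    print(f"SF{sf}: r = {r[sf]:.2f} km, p = {p[sf]:.2f}")
```

The values reproduce the $r_{SF}$ and $p_{SF}$ columns of table \ref{tab:loraSF} up to rounding.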
Each node is set to the appropriate $SF$ according to its distance from the gateway. \begin{table}[ht] \centering \caption{Parameters of LoRaWAN{\textsuperscript \textregistered}} \renewcommand{\arraystretch}{1.2} {\setlength{\tabcolsep}{0.1cm}} \begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline SF & Sensitivity & $\zeta_{SF}$& Range & $p_{SF}$ &Payload&$\Delta t_{SF}$ & Bit rate &$T_{SF}$\\ &(\si{dBm}) &(\si{dB}) &$r_{SF}$(\si{km}) & & (bytes)& (\si{s}) & (\si{Kb/s})& (\si{s})\\ \hline 6 & -121 & 21 &1.13 & 0.13 & 242 &0.233&8.309 & 23.3\\ \hline 7 & -124 & 18 &1.37 & 0.06 & 242 & 0.400&4.840 &40.0\\ \hline 8& -127 & 15 &1.67& 0.09 & 242 &0.707&2.738&70.7\\ \hline 9& -130& 12 &2.02 & 0.13 & 115 & 0.677& 1.359 &67.7\\ \hline 10& -133 & 9&2.45 & 0.19 & 51&0.698& 0.585&69.8\\ \hline 11& -135 & 7 &2.78 & 0.17 & 51 &1.561& 0.261&156.1\\ \hline 12& -137 & 5 &3.16 & 0.23 &51 &2.793& 0.146&279.3\\ \hline \end{tabular} \label{tab:loraSF} \end{table} \subsubsection{Outage and throughput model with power control scheme for LoRaWAN{\textsuperscript \textregistered}} \label{sec:PCLoRaWAN} To improve the fairness between devices located in the same annulus and using the same $SF$, we consider the following ideal power allocation strategy: allocate $\rho_{tm} \Delta f = 14\si{dBm}$ to the devices at distance $r_{SF}$ to make sure they are covered, and ensure that the reception power density attenuated by path loss, denoted by $\rho$, is identical for all devices in the same annulus and equal to that of the devices at distance $r_{SF}$, {\it i.e.},\xspace $\rho_{tm} r_{SF}^{-\beta}$. Fading is neglected in this case. This setting falls into the perfect power control paradigm introduced in subsection \ref{sec:powerC}, and $\zeta_{SF}^{-1} - \operatorname{SNR}^{-1} = 0$ according to section \ref{sec:multiAnnuli}. By shrinking all the annuli, our power allocation scheme can in fact result in a nonzero $\zeta_{SF}^{-1} - \operatorname{SNR}^{-1}$.
With the cdf of $X_{\Sigma}$ already given in section~\ref{Proba:sec}, this adaptation poses no evaluation problem. In our scenario, the unfairness between devices with the same $SF$ but different distances $r_0$ is removed, {\it i.e.},\xspace $\operatorname{OP}_{n_{\mathrm{rep}}}$ no longer depends on $r_0$; but unfairness remains between different $SF$, since an $SF$ with a greater $p_{SF}$ has a greater device number $N_{SF} = Np_{SF}$ and thus a greater $\operatorname{OP}_{n_{\mathrm{rep}}}(SF)$, expressed as $\displaystyle \operatorname{OP}_{n_{\mathrm{rep}}}(SF) = \left[\Pr\left[\sum_{k=1}^{N_{SF}} X_k \geq 0\right]\right]^{n_{\mathrm{rep}}}$. $\it{Th}(n_{\mathrm{rep}}) = \sum_{SF = 6}^{12} \it{Th}_{SF}(n_{\mathrm{rep}})$ should be recast as follows, \begin{equation} \it{Th}(n_{\mathrm{rep}}) = \sum_{SF = 6}^{12} \frac{3 N p_{SF}(1-\operatorname{OP}_{n_{\mathrm{rep}}}(SF))}{T_{SF} \, n_{\mathrm{rep}}} \end{equation} where the factor three comes from the three available channels. The outage probability averaged over $SF$ is expressed as $\overline{\operatorname{OP}}_{n_{\mathrm{rep}}} = \sum_{SF = 6}^{12} \operatorname{OP}_{n_{\mathrm{rep}}}(SF)p_{SF}$. The quantities to be simulated are listed in table \ref{tab:simulQ}. Note that in the scenario considered, $\overline{\operatorname{OP}}_{n_{\mathrm{rep}}}$ coincides with $\overline{\operatorname{OPAloha}}_{n_{\mathrm{rep}}}$, and $\it{Th}(n_{\mathrm{rep}})$ with $\it{ThAloha}(n_{\mathrm{rep}})$, as no overlapping is tolerated. \subsubsection{Simulation} The simulation results for the LoRaWAN{\textsuperscript \textregistered} case are given in figures \ref{fig:LoraOP1} and \ref{fig:OPThLoRA}. Figure \ref{fig:LoraOP1} shows the unfairness in terms of outage probability between different $SF$, which is directly related to $p_{SF}$, itself dictated by the non-uniformity of the device density as a function of $r_0$. \begin{figure}[h!]
\centerline{\includegraphics[width=\columnwidth]{LoraOP12.pdf}} \caption{Outage probability of LoRaWAN{\textsuperscript \textregistered} as a function of $N$ and $SF$. From top to bottom, the five curves in each subfigure represent $N = 250, 200, 150, 100, 1$. The left subfigure shows the case $n_{\mathrm{rep}} = 1$, the right one $n_{\mathrm{rep}} = 3$. } \label{fig:LoraOP1} \end{figure} \begin{figure}[h!] \centerline{\includegraphics[width=\columnwidth]{OPThroughputLoRA.pdf}} \caption{The top figure shows the average effective throughput in one hour. The five colors, from top to bottom, represent $n_{\mathrm{rep}} = 1, 3, 5, 7, 9$. The middle figure shows the average effective throughput in one hour for each $SF$, with $n_{\mathrm{rep}}=1$; the numbers on the curves are the $SF$. The bottom figure shows the global outage probability; the colors have the same meaning as above. } \label{fig:OPThLoRA} \end{figure} Several observations can be made from figure \ref{fig:OPThLoRA}. First, the repetition mechanism reduces the overall outage probability but also the average effective throughput, since redundancy is introduced; it increases the energy cost as well. Second, the differences between the $\it{Th}_{SF}(n_{\mathrm{rep}})$ have several causes, dictated by $p_{SF}$ and $T_{SF}$: $p_{SF}$ determines $\operatorname{OP}_{n_{\mathrm{rep}}}(SF)$, while $T_{SF}$ determines the message sending rate. For example, $SF \, 6$ is the fastest of all $SF$, but as $p_6 > p_7$, the curve of $SF \, 6$ reaches its maximum much earlier than that of $SF \,7$, which limits its performance. $SF \, 12$ has the worst performance, as it has the longest $T_{SF}$ and the largest $p_{SF}$, which again confirms that the devices at the cell edge are probably the bottleneck of the network performance.
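To make these throughput trade-offs concrete, the following Python sketch implements the effective throughput formula of section \ref{sec:PCLoRaWAN}. It is illustrative only: the outage probabilities $\operatorname{OP}_{n_{\mathrm{rep}}}(SF)$ are plain inputs here (in our model they come from the time-frequency overlap evaluation), and the sample outage values below are hypothetical.

```python
# Illustrative sketch of Th(n_rep) = sum_SF 3 N p_SF (1 - OP(SF)) / (T_SF n_rep),
# in successful messages per second; p_SF and T_SF are the table values.

p_sf = {6: 0.13, 7: 0.06, 8: 0.09, 9: 0.13, 10: 0.19, 11: 0.17, 12: 0.23}
T_sf = {6: 23.3, 7: 40.0, 8: 70.7, 9: 67.7, 10: 69.8, 11: 156.1, 12: 279.3}  # s

def throughput(N, n_rep, op):
    """op maps SF -> outage probability OP_{n_rep}(SF); 3 = number of channels."""
    return sum(3 * N * p_sf[sf] * (1 - op[sf]) / (T_sf[sf] * n_rep)
               for sf in p_sf)

# Hypothetical outage values, growing with SF (edge devices suffer most).
op = {sf: 0.02 * (sf - 5) for sf in range(6, 13)}
for n_rep in (1, 3):
    print(f"n_rep = {n_rep}: Th = {throughput(100, n_rep, op):.2f} msg/s")
```

With the outage values held fixed, increasing $n_{\mathrm{rep}}$ simply divides the effective throughput; in the full model it also lowers $\operatorname{OP}_{n_{\mathrm{rep}}}(SF)$, which is exactly the trade-off visible in figure \ref{fig:OPThLoRA}.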
Finally, if we compare Sigfox{\textsuperscript \textregistered} and LoRaWAN{\textsuperscript \textregistered}, we see that even though Sigfox{\textsuperscript \textregistered} can support more devices, its throughput is of the same order as that of LoRaWAN{\textsuperscript \textregistered}. Note that the payload sizes in LoRaWAN{\textsuperscript \textregistered} are much larger than those of Sigfox{\textsuperscript \textregistered} (see tables \ref{tab:lora} and \ref{tab:loraSF}). It seems that Sigfox{\textsuperscript \textregistered} is more suitable for applications with many devices generating little traffic, while LoRaWAN{\textsuperscript \textregistered} can support applications with heavier traffic but fewer devices. \section{Conclusion and perspectives } \label{conclu:sec} In this article, we provide a high-level flexible model whose interest is illustrated by the performance evaluation of two LPWA technologies. To the best of our knowledge, this is the first model which considers joint time-frequency interference. Our paradigm can be adapted to other systems to evaluate the relationship between the number of devices, the number of repetitions, the outage probability, the throughput and the energy cost. The capture effect is also taken into account in our model, so further questions, such as how to amplify it intelligently, can be investigated in the future. We believe that our model provides a useful dimensioning tool for future IoT scenarios. Our model can be refined at the mathematical level. First, when the hypotheses $T \gg \Delta t$ and $ F\gg \Delta f$ are not satisfied, an algorithmic approach seems better suited, owing to the difficulty of the probability evaluation in the border area.
Second, the ratio between $\Delta t$ and $\Delta f$ influences the probability distributions of $X_k$ and $X_\Sigma$, which raises the question of the best strategy for apportioning the time duration and frequency occupancy of the information packet, in order to minimize overlapping. Finally, it is possible to formulate a more general problem: finding the best strategy to use a 2D time-frequency resource, again in terms of minimizing the overlapping phenomenon. The options range from orthogonal division of the frequency band, through division into multiple partially overlapping bands (POC), to random FDMA at the extreme. In both LPWAN scenarios considered, cell edge devices seem to be the bottleneck of the global network performance. A possible solution is the densification of the infrastructure, since IoT devices can communicate with multiple base stations, {\it i.e.},\xspace multiple reception or macro-diversity. A study on the $k$-coverage of devices is given in \cite{hal}. By combining the $k$-coverage model for LPWANs and our ``cards tossing'' model, we can jointly design the infrastructure deployment and the MAC layer of the devices, in order to improve the network performance while limiting the cost.
\section{Introduction}\label{intro} Let $X$ be a smooth complex projective variety of maximal Albanese dimension (m.A.d. for short) and of general type. Recall that the tricanonical map of such a variety is birational onto its image (cf. \cite{CH2} and \cite{JMT}). It is therefore interesting to consider the birationality of its bicanonical map. Let us recall the following results. Assume moreover that the bicanonical map of $X$ is not birational. Then \begin{itemize} \item[I]{If $X$ is a surface, then either $X$ is fibered by curves of genus 2 (the standard case), or \begin{enumerate} \item[(i)]{if $q(X) >2$, then $X$ is birationally equivalent to a theta divisor of a principally polarized abelian variety (p.p.a.v. for short) of dimension 3 (cf. \cite{CCM});} \item[(ii)]{if $q(X) = 2$, then $X$ is birational to a double cover of a simple principally polarized abelian surface $A$ branched along a divisor $B \in |2\Theta|$ (cf. \cite{CCM}, \cite{CFM}, \cite{CM}).} \end{enumerate}} \item[II]{\begin{enumerate} \item[(i)]{If $X$ is a primitive variety (cf. Def. \ref{prm}) with $q(X) > \dim X$, then it is birational to a theta divisor of a p.p.a.v. (cf. \cite{BLNP}).} \item[(ii)]{If $X$ is not necessarily primitive, then $\mathrm{gv}(\omega_X) \leq 1$, and the Albanese image is fibred by subvarieties of codimension at most 1 of an abelian subvariety of $\mathrm{Alb}(X)$ (cf. \cite{La}).} \end{enumerate}} \end{itemize} If $X$ has a fibration $f: X \rightarrow Y$ whose general fibers have non-birational bicanonical maps, then the bicanonical map of $X$ is not birational. It is known that a non-primitive variety always admits an irregular fibration, by the generic vanishing theorem (cf. Theorem \ref{gv}). Therefore, it is of special interest to study the bicanonical map of primitive varieties, or of those with simple Albanese varieties.
For the bicanonical map of primitive varieties, when $q(X) > \dim X$, the situation is completely settled by the results I(i) and II(i); when $q(X) = \dim X$, it is not yet clear except in dimension 2 (I(ii)), and it is conjectured that if $\mathrm{Alb} (X)$ is simple, then $\mathrm{Alb}(X)$ is a p.p.a.v. and $X$ is birational to a double cover of $\mathrm{Alb}(X)$ branched along a divisor $B \in |2\Theta|$ (see also \cite{La}). In this paper, we study the case $q(X) = \dim X$; the main result is \begin{Theorem}[Theorem \ref{eun}, \ref{spr2}]\label{main} Let $X$ be a smooth complex projective variety of general type with $q(X) = \dim X$ and maximal Albanese dimension. Suppose that its bicanonical map is not birational and that $\mathrm{Alb}(X)$ is simple. Then $\chi(\omega_X) = 1$, and the linear system $|2K_X|$ separates two distinct points lying over the same general point of $\mathrm{Alb} (X)$ via the Albanese map. \end{Theorem} This paper is organized as follows. In Section \ref{tool}, we list the technical results needed in this paper. In Section \ref{map}, we compare the Euler numbers of two irregular varieties of m.A.d. related by a generically finite surjective morphism. In Section \ref{bicmap}, we study the bicanonical map and prove our main theorem. Finally, in Section \ref{inequ}, as an appendix, we give an inequality on the irregularity of a fibration, and describe a certain fibration attaining the equality. \textbf{Conventions:} All varieties are assumed to be defined over $\mathbb{C}$. ``$\equiv$'' denotes the linear equivalence of line bundles or Cartier divisors, respectively. Let $E$ be a vector bundle on a variety $X$. We denote the projective bundle by $\mathbb{P}_X(E):=\mathrm{Proj}_{\mathcal{O}_X}(\oplus_kS^k(E^*))$ and the tautological line bundle by $\mathcal{O}_{\mathbb{P}_X(E)}(1)$. Let $X$ be a projective variety. We denote by $D^b(X)$ the bounded derived category of coherent sheaves on $X$.
Let $f: X \rightarrow Y$ be a morphism between two smooth projective varieties. We denote by $Rf_*$ and $Lf^*$ the derived functors of $f_*$ and $f^*$ respectively. We say an object $E \in D^b(X)$ is a sheaf if it is quasi-isomorphic to ($\cong$) a sheaf in $D^b(X)$. For a product $X = X_1 \times X_2 \times ... \times X_r$ of $r$ varieties, $p_i$ denotes the projection from $X$ to the $i$\textsuperscript{th} factor $X_i$. For an abelian variety $A$, $\hat{A}$ denotes its dual $\mathrm{Pic}^0 (A)$, $\mathcal{P}$ denotes the Poincar\'{e} line bundle on $A \times \hat{A}$, and the Fourier-Mukai transform $R\Phi_{\mathcal{P}}: D^b(A) \rightarrow D^b(\hat{A})$ w.r.t. $\mathcal{P}$ is defined as $$R\Phi_{\mathcal{P}}(\mathcal{F}) := R(p_2)_*(Lp_1^*\mathcal{F} \otimes \mathcal{P});$$ similarly, $R\Psi_{\mathcal{P}}: D^b(\hat{A}) \rightarrow D^b(A)$ is defined as $$R\Psi_{\mathcal{P}}(\mathcal{F}) := R(p_1)_*(Lp_2^*\mathcal{F} \otimes \mathcal{P}).$$ Since the $p_i, i =1,2$, are flat morphisms, $R\Phi_{\mathcal{P}}$ and $R\Psi_{\mathcal{P}}$ are two right derived functors. If $a: X \rightarrow A$ is a map to an abelian variety, then $\mathcal{P}_a : = (a \times \mathrm{id}_{\hat{A}}) ^*\mathcal{P}$, and for $\mathcal{F} \in D^b(X)$, $R\Phi_{\mathcal{P}_a}(\mathcal{F})$ is defined similarly; if $\alpha \in \hat{A}$, we often denote the line bundle $a^*\alpha \in \mathrm{Pic}^0 (X)$ by $\alpha$ for simplicity. For an irregular variety $X$, we usually denote by $\mathrm{alb}_X: X \rightarrow \mathrm{Alb} (X)$ the Albanese map. {\bf Acknowledgements.} Part of this note appears in the author's doctoral thesis submitted to Peking University (2011). The author expresses his appreciation to Prof. Jinxing Cai and Dr. Wenfei Liu for many useful discussions. He thanks Prof. Meng Chen, Dr. Fan Peng and Ze Xu for their help with the inequality appearing in the appendix, and thanks Olivier Debarre and Yifei Chen for their suggestions for improving the English.
He also thanks Sofia Tirabassi for some suggestions, and thanks the authors of \cite{BLNP} for their stimulating ideas. Finally, the author owes much to an anonymous referee, who shared his or her ideas on improving the result of Theorem \ref{euln} and on simplifying the proof of Corollary \ref{fm} and Theorem \ref{euln}. The author is supported by NSFC (No. 11226075). \section{Definitions and technical results}\label{tool} In this section, we collect some definitions and results needed in the sequel. First, recall \begin{Theorem}[\cite{Mu} Thm. 2.2]\label{Mu} Let $A$ be an abelian variety of dimension $d$. Then $$R\Psi_{\mathcal{P}} \circ R\Phi_{\mathcal{P}} = (-1)_A^*[-d]~\mathrm{and}~R\Phi_{\mathcal{P}} \circ R\Psi_{\mathcal{P}} = (-1)_{\hat{A}}^*[-d].$$ \end{Theorem} \subsection{GV-sheaves, M-regular sheaves and $IT^0$-sheaves} \begin{Definition}[\cite{PP2} Def. 2.1, 2.2, 2.8, 2.10, \cite{CH2} Def. 2.6]\label{defgv} Given a coherent sheaf $\mathcal{F}$ on an abelian variety $A$, its \emph{$i$\textsuperscript{th} cohomological support locus} is defined as $$V^i(\mathcal{F}): = \{\alpha \in \mathrm{Pic}^0 (A)| h^i(\mathcal{F} \otimes \alpha) > 0\}.$$ The number $\mathrm{gv}^i(\mathcal{F}): = \mathrm{codim}_{\mathrm{Pic}^0(A)}V^i(\mathcal{F}) - i$ is called the \emph{$i$\textsuperscript{th} generic vanishing index} of $\mathcal{F}$, and $\mathrm{gv}(\mathcal{F}): = \min_{i>0}\{\mathrm{gv}^i(\mathcal{F})\}$ is called the \emph{generic vanishing index} of $\mathcal{F}$. We say $\mathcal{F}$ is a \emph{GV-sheaf} (resp. an \emph{M-regular sheaf}) if $\mathrm{gv}(\mathcal{F}) \geq 0$ (resp. $>0$), and an \emph{$IT^0$-sheaf} if $V^i(\mathcal{F}) = \emptyset$ for $i>0$. Let $X$ be an irregular variety equipped with a morphism to an abelian variety $a: X \rightarrow A$. Let $\mathcal{F}$ be a sheaf on $X$; its \emph{$i$\textsuperscript{th} cohomological support locus w.r.t.
$a$} is defined as $$V^i(\mathcal{F}, a) := \{\alpha \in \mathrm{Pic}^0(A)| h^i(X, \mathcal{F} \otimes \alpha) > 0\}.$$ We say $\mathcal{F}$ is \emph{continuously globally generated} (\emph{CGG} for short) w.r.t. $a$ if the sum of the evaluation maps $$\mathrm{ev}_U: \oplus_{\alpha \in U}H^0(\mathcal{F} \otimes \alpha) \otimes (\alpha^{-1}) \rightarrow \mathcal{F}$$ is surjective for any non-empty open set $U \subset \hat{A}$. \end{Definition} \begin{Proposition}[\cite{PP2} Thm. 5.1]\label{cgg} An M-regular sheaf on an abelian variety is CGG. \end{Proposition} \begin{Proposition}\label{sjt} Let $\mathcal{F}$ be a sheaf on an abelian variety $A$ of dimension $d$. \begin{itemize} \item[(i)] If $\mathcal{F}$ is M-regular, then there is a natural surjection $$(-1)_A^*R^d\Psi_\mathcal{P}R^0\Phi_\mathcal{P}\mathcal{F} \rightarrow \mathcal{F}.$$ \item[(ii)] $\mathcal{F}$ is $IT^0$ if and only if $R\Phi_{\mathcal{P}}\mathcal{F} \cong R^0\Phi_{\mathcal{P}}\mathcal{F}$. \end{itemize} \end{Proposition} \begin{proof} (i) By assumption, for $j>0$, $\mathrm{codim}_{\hat{A}}\mathrm{Supp} R^j\Phi_\mathcal{P}\mathcal{F} > j$ (\cite{PP2} Prop.
2.1), thus $$\clubsuit: R^i\Psi_\mathcal{P}R^j\Phi_\mathcal{P}\mathcal{F} = 0~\mathrm{if}~j \neq 0~ \mathrm{and} ~i+j \geq d.$$ By $(-1)_A^*R\Psi_\mathcal{P}R\Phi_\mathcal{P}(\mathcal{F})\cong \mathcal{F}[-d]$ (Theorem~\ref{Mu}), applying the Leray spectral sequence gives $$E_2^{i,j}:= (-1)_A^*R^i\Psi_\mathcal{P}R^j\Phi_\mathcal{P}\mathcal{F} \Rightarrow \mathcal{H}^{i+j}(\mathcal{F}[-d]).$$ By $\clubsuit$, we have that \begin{itemize} \item $E_{\infty}^{i,j} \cong 0$ for $i+j =d$ and $(i,j) \neq (d,0)$, thus $E_{\infty}^{d,0} \cong \mathcal{F}$; and \item $d_r^{d,0}: E_{r}^{d,0} \rightarrow E_{r}^{d+r,-r+1} = 0$ is zero for $r\geq 2$, thus there is a surjection $$E_{2}^{d,0} = (-1)_A^*R^d\Psi_\mathcal{P}R^0\Phi_\mathcal{P}\mathcal{F} \rightarrow E_{\infty}^{d,0}.$$ \end{itemize} We conclude that there is a natural surjection $$(-1)_A^*R^d\Psi_\mathcal{P}R^0\Phi_\mathcal{P}\mathcal{F} \rightarrow \mathcal{F}.$$ (ii) The ``only if'' direction follows from applying \cite{ha77} Cor. 12.9. For the other direction, note that for every $\alpha \in \mathrm{Pic}^0 (A)$ and $i>d$ the natural map $R^i\Phi_\mathcal{P}\mathcal{F}\otimes \mathbb{C}(\alpha) \rightarrow H^i(A, \mathcal{F}\otimes \alpha) = 0$ is surjective, and if $R\Phi_{\mathcal{P}}\mathcal{F} \cong R^0\Phi_{\mathcal{P}}\mathcal{F}$, then $R^i\Phi_{\mathcal{P}}\mathcal{F} = 0$ for $i>0$. Applying \cite{ha77} Cor. 12.11 (b), we can then prove the ``if'' direction by induction. \end{proof} \begin{Corollary}\label{fm} Let $a: X \rightarrow A$ be a generically finite morphism from a smooth projective variety to an abelian variety. Suppose that $\dim X = \dim A =d \geq 2$, $a^* : \mathrm{Pic}^0(A) \rightarrow \mathrm{Pic}^0(X)$ is an embedding, and for $i>0$, $V^i(\omega_X, a)$ is composed of at most finitely many points.
Then there exists an exact sequence $$(-1)_A^*R^d\Psi_\mathcal{P}R^0\Phi_\mathcal{P}(a_*\omega_X) \rightarrow a_*\omega_X \rightarrow \omega_A \rightarrow 0.$$ \end{Corollary} \begin{proof} By assumption, we have a splitting (cf. for example \cite{CV} Prop. 1.2) $$a_*\omega_X \cong \omega_A \oplus \mathcal{F}.$$ Then by $R\Phi_\mathcal{P}(a_*\omega_X) \cong R\Phi_\mathcal{P}\omega_A \oplus R\Phi_\mathcal{P} \mathcal{F}$ and $R^d\Phi_\mathcal{P}(a_*\omega_X) \cong R^d\Phi_\mathcal{P}\omega_A \cong \mathbb{C}(\hat{0})$ (Proposition 6.1 in \cite{BLNP}), we find that $$R^i\Phi_\mathcal{P}(a_*\omega_X) \cong R^i\Phi_\mathcal{P} \mathcal{F}~\mathrm{for}~i = 0,1,...,d-1~\mathrm{and}~R^d\Phi_\mathcal{P} \mathcal{F}=0.$$ Therefore, $\mathcal{F}$ is M-regular, and by Proposition \ref{sjt} (i) there is a surjection $$(-1)_A^*R^d\Psi_\mathcal{P}R^0\Phi_\mathcal{P}(a_*\omega_X) \cong (-1)_A^*R^d\Psi_\mathcal{P}R^0\Phi_\mathcal{P}\mathcal{F} \rightarrow \mathcal{F}.$$ This naturally yields the exact sequence $$(-1)_A^*R^d\Psi_\mathcal{P}R^0\Phi_\mathcal{P}(a_*\omega_X) \rightarrow a_*\omega_X \rightarrow \omega_A \rightarrow 0.$$ \end{proof} Applying Theorem \ref{Mu} and Proposition \ref{sjt} (ii), we get (see \cite{Zh} Cor. 2.2 for details) \begin{Corollary}\label{Muc} Let $A$ be an abelian variety of dimension $d$, and $E$ an $IT^0$-vector bundle on $A$. Then $R\Phi_{\mathcal{P}}E$ is a vector bundle on $\hat{A}$, and its dual $(R\Phi_{\mathcal{P}}E)^*$ is an $IT^0$-vector bundle such that $$R\Psi_{\mathcal{P}}((R\Phi_{\mathcal{P}}E)^*) \cong E^* \cong ((-1)_A^*R^d\Psi_{\mathcal{P}}((R^0\Phi_{\mathcal{P}}E)))^*.$$ \end{Corollary} \subsection{Generic vanishing theorem} Recall the generic vanishing theorem due to Green and Lazarsfeld: \begin{Theorem} [\cite{GL1}, \cite{GL2}]\label{gv} Let $X$ be a smooth compact K\"{a}hler manifold with $\dim X =n$ and $\dim \mathrm{alb}_X(X) = k$.
Then \begin{enumerate} \item[(i)]{$\mathrm{codim}_{\mathrm{Pic}^0 (X)}V^i(\omega_X, \mathrm{alb}_X) \geq k-n +i$.} \item[(ii)]{Let $Z$ be a component of $V^i(\omega_X, \mathrm{alb}_X)$ of positive dimension. Then $Z$ is a subtorus of $\mathrm{Pic}^0 (X)$, and there exists an analytic variety $Y$ of dimension $\leq n-i$ and a dominant map $f: X \rightarrow Y$ such that $Z \subset \alpha + f^*\mathrm{Pic}^0 (Y)$ where $\alpha$ is torsion.} \item[(iii)]{$Y$ has maximal Albanese dimension.} \item[(iv)]{Let $\alpha \in V^i(\omega_X, \mathrm{alb}_X)$ and $v \in H^1(\mathcal{O}_X) = T_\alpha \mathrm{Pic}^0(X)$. The \emph{derived complex} below $$\centerline{\xymatrix{ &H^{n-i-1}(\alpha^{-1}) \ar[r]^{\cup v} &H^{n-i}(\alpha^{-1}) \ar[r]^{\cup v} &H^{n-i+1}(\alpha^{-1}) }}$$ is exact if $v$ is not contained in the tangent cone $TC_\alpha V^i(\omega_X, \mathrm{alb}_X)$ to $V^i(\omega_X, \mathrm{alb}_X)$ at $\alpha$.} \end{enumerate} \end{Theorem} \begin{Corollary}\label{rmk} Let $X$ be a smooth projective variety of m.A.d. and of dimension $d$. Then \begin{itemize} \item[(i)]{$h^i(X, \omega_X \otimes \alpha) = h^i(\mathrm{Alb}(X), (\mathrm{alb}_X)_*\omega_X \otimes \alpha)$ for $\alpha \in \mathrm{Pic}^0 (X), i \geq 0$, and $(\mathrm{alb}_X)_*\omega_X $ is a GV-sheaf, thus $$V^0(\omega_X, \mathrm{alb}_X) \supset V^1(\omega_X, \mathrm{alb}_X) \supset \cdot\cdot\cdot \supset V^d(\omega_X, \mathrm{alb}_X);$$} \item[(ii)]{$R\Phi_{\mathcal{P}}((\mathrm{alb}_X)_*\mathcal{O}_X)[d] \cong R^d\Phi_{\mathcal{P}}((\mathrm{alb}_X)_*\mathcal{O}_X)$ is a sheaf, which we denote by $\widehat{\mathcal{O}_X}$;} \item[(iii)]{$R^i\Phi_{\mathcal{P}}((\mathrm{alb}_X)_*\omega_X) \cong (-1)^*_{\mathrm{Pic}^0(X)}\mathcal{E}xt^i(\widehat{\mathcal{O}_X}, \mathcal{O}_{\mathrm{Pic}^0(X)})$;} \item[(iv)]{$p_g(X) > \chi(\omega_X)$.} \end{itemize} \end{Corollary} \begin{proof} By Koll\'{a}r's results (\cite{Ko1} Thm. 2.1, \cite{Ko2} Thm. 
3.1), we have that $$R(\mathrm{alb}_X)_*\omega_X \cong \bigoplus_iR^i(\mathrm{alb}_X)_*\omega_X[-i]$$ and $R^i(\mathrm{alb}_X)_*\omega_X$ is torsion-free when restricted to the Albanese image $\mathrm{alb}_X(X)$. We conclude that $R^i(\mathrm{alb}_X)_*\omega_X = 0$ for $i>0$, since $\mathrm{alb}_X$ is generically finite; hence $R(\mathrm{alb}_X)_*\omega_X \cong (\mathrm{alb}_X)_*\omega_X$. Using Grothendieck duality and the projection formula, the assertions (i), (ii) and (iii) follow from Theorem \ref{gv}. See \cite{Ha} Thm. 1.5, 4.1 and Cor. 3.2 for the details. For (iv), take a general $v \in H^1(\mathcal{O}_X) = T_{\hat{0}} \mathrm{Pic}^0X$. Theorem \ref{gv} (iv) shows that \emph{the derived complex} $D_v$ $$\centerline{\xymatrix{&0 \ar[r] &H^{0}(\mathcal{O}_X) \ar[r]^{\cup v} &H^{1}(\mathcal{O}_X) \ar[r]^{\cup v}&\cdot\cdot\cdot\ar[r]^{\cup v} &H^{d-1}(\mathcal{O}_X) \ar[r]^{\cup v}&H^{d}(\mathcal{O}_X) }}$$ is exact at every term except possibly the last one; hence the cokernel of the right-most map is a linear space of dimension $\sum_{i=0}^{d}(-1)^{d-i}h^i(\mathcal{O}_X) = (-1)^d\chi(\mathcal{O}_X) = \chi(\omega_X)$. Since $X$ is of m.A.d., the right-most map is non-zero. Therefore, $p_g(X) = h^d(\mathcal{O}_X) > \chi(\omega_X)$. \end{proof} \begin{Remark}[\cite{EL} Remark 1.6] If we replace the Albanese map by a generically finite morphism to an abelian variety $a: X \rightarrow A$ and replace $\mathrm{Pic}^0(X)$ by $\mathrm{Pic}^0(A)$, then the evident analogues of the results in Corollary \ref{rmk} hold. \end{Remark} \begin{Proposition}\label{cnm} Let $a: X \rightarrow A$ be a generically finite morphism from a smooth projective variety onto an abelian variety $A$. Suppose that $\chi(\omega_X) > 0$. Then for any $n>0$ the pluri-canonical map $\phi_{nK_X}$ does not factor rationally through $a$. \end{Proposition} \begin{proof} We only need to consider the canonical map. Since $X$ is of m.A.d., we have $p_g(X) > 1$ by Corollary \ref{rmk} (iv), and thus the canonical map $\phi_{K_X}$ is not constant.
Assume to the contrary that $\phi_{K_X} = g \circ a$ where $g: A \dashrightarrow \mathbb{P}^{p_g(X) - 1}$. By blowing up $X$ and $A$, we get a birational model of $g \circ a: X \rightarrow A \dashrightarrow \mathbb{P}^{p_g(X) - 1}$ $$\tilde{g} \circ \tilde{a}: \tilde{X} \rightarrow \tilde{A} \rightarrow \mathbb{P}^{p_g(X) - 1}$$ such that both $\tilde{g}$ and $\tilde{a}$ are morphisms and $$|K_{\tilde{X}}| = (\tilde{g} \circ \tilde{a})^* |\mathcal{O}_{\mathbb{P}^{p_g(X) - 1}}(1)| + F = \tilde{a}^*|M| + F~ \mathrm{where}~ |M| = \tilde{g}^* |\mathcal{O}_{\mathbb{P}^{p_g(X) - 1}}(1)|.$$ Denote by $\tilde{R}$ the ramification divisor of $\tilde{a}: \tilde{X} \rightarrow \tilde{A}$. Then $K_{\tilde{X}} \equiv \tilde{R} + \tilde{a}^*E$ where $E$ is an effective divisor on $\tilde{A}$ exceptional w.r.t. the blowing-up map $\tilde{A} \rightarrow A$. So there exists $M \in |M|$ such that $\tilde{R} + \tilde{a}^*E - \tilde{a}^*M$ is an effective divisor. Notice that $M$ is not contained in $E$. This contradicts the fact that the ramification divisor $\tilde{R}$ cannot contain the full pull-back of a prime divisor on $\tilde{A}$. \end{proof} \begin{Definition}[\cite{Ca} Def. 1.24]\label{prm} Let $X$ be an irregular variety of m.A.d.. It is called \emph{primitive} if $V^i(\omega_X, \mathrm{alb}_X)$ is composed of at most finitely many points for $i>0$. \end{Definition} \subsection{Characterization of a theta divisor} Imitating the proof of \cite{BLNP} Prop. 3.1, we can prove \begin{Proposition}\label{refp} Let $X$ be a smooth projective variety of general type equipped with a generically finite morphism $a: X \rightarrow A$ to an abelian variety $A$. Suppose that \begin{itemize} \item[(i)]{$\dim V^1(\omega_X, a) = 0$;} \item[(ii)]{$\dim X < \dim A$ and $a^*: \mathrm{Pic}^0(A) \rightarrow \mathrm{Pic}^0(X)$ is an embedding; and} \item[(iii)]{$\chi(X, \omega_X) = 1$.} \end{itemize} Then $A$ is a p.p.a.v., and $a: X \rightarrow A$ birationally maps $X$ onto a theta divisor of $A$.
\end{Proposition} \begin{Corollary}\label{ref} Let $X$ and $a: X \rightarrow A$ be as in Proposition \ref{refp}. Assume (i), (ii) of Proposition \ref{refp} and \begin{itemize} \item[(iii)']{for $\alpha \in U_0:=\hat{A} \setminus V^1(\omega_X, a)$, $|\omega_X \otimes \alpha| = |M| + F_\alpha$, where $M$ is the movable part, which is independent of $\alpha$, and $F_\alpha$ is the fixed part.} \end{itemize} Then $A$ is a p.p.a.v., and $a: X \rightarrow A$ birationally maps $X$ onto a theta divisor of $A$. \end{Corollary} \begin{proof} Assumption (iii)' implies that $\mathcal{B}:= \{(x, \alpha) \in X \times U_0| x \in F_\alpha\}$ is a divisor in $X \times U_0$. Denote by $\bar{\mathcal{Y}}$ the closure of $\mathcal{B}$ in $X \times \hat{A}$. Noticing that $\mathrm{codim}_{\hat{A}}V^1(\omega_X, a)\geq 2$, by the see-saw principle we have (\cite{BLNP} Lemma 5.2) $$\bar{\mathcal{Y}} \equiv p_1^* \omega_X(-M) \otimes \mathcal{P}_a \otimes p_2^*\mathcal{O}_{\hat{A}}(\bar{\mathcal{Y}}_p)$$ where $p \in X$ is a point mapped to $0 \in A$ via $a$. With these settings, by a similar argument as in \cite{BLNP} Lemma 5.3, we can show that $\chi(X, \omega_X) = 1$. Then we are done by Proposition \ref{refp}. \end{proof} \subsection{Universal divisors and separation} \label{rzh} Recall the following results from \cite{Zh} Sec. 3. \begin{Theorem}[\cite{Zh} Theorem 2.10]\label{pf} Let $X$ and $Y$ be two normal projective varieties, and $\mathcal{L}$ a line bundle on $X \times Y$. Assume that $E= (p_2)_*\mathcal{L}$ is a vector bundle and put $P = \mathbb{P}_Y(E)$. Note that there exists an open set $U \subset P$ parametrizing the divisors in $|\mathcal{L}_y|, y \in Y$. Denote by $\mathcal{D} \subset X \times U$ the universal family. Then its closure $\bar{\mathcal{D}} \subset X \times P$ is a divisor, and $$\bar{\mathcal{D}} \equiv p^*\mathcal{L} \otimes q^*\mathcal{O}_P(1)$$ where $p,q$ denote the two projections $p: X \times P \rightarrow X \times Y$, $q: X \times P \rightarrow P$.
\end{Theorem} Let $E$ be an $IT^0$-vector bundle on an abelian variety $A$. Then $R\Phi_{\mathcal{P}}E$ is a vector bundle on $\hat{A}$. Its dual $(R\Phi_{\mathcal{P}}E)^*$ is an $IT^0$-vector bundle, and $R\Psi_{\mathcal{P}}(R\Phi_{\mathcal{P}}(E)^*) \cong E^*$ (cf. Corollary \ref{Muc}). Let $P = \mathbb{P}_A(E^*)$, $\hat{P} = \mathbb{P}_{\hat{A}}(R\Phi_{\mathcal{P}}(E))$, and denote by $\pi: P \rightarrow A$ and $\hat{\pi}: \hat{P} \rightarrow \hat{A}$ the natural projections. Note that $$(p_2)_*(p_1^*\mathcal{O}_{P}(1)\otimes (\pi \times \mathrm{id}_{\hat{A}})^*\mathcal{P}) \cong R\Phi_{\mathcal{P}}(E)~\mathrm{and}~(p_1)_*(p_2^*\mathcal{O}_{\hat{P}}(1)\otimes (\mathrm{id}_{A} \times \hat{\pi})^*\mathcal{P}) \cong E^*.$$ We can identify $\hat{P}$ (resp. $P$) with the Hilbert scheme parametrizing the divisors in $\{|\mathcal{O}_{P}(1)\otimes \alpha||\alpha \in \mathrm{Pic}^0(P) = \hat{A}\}$ (resp. $\{|\mathcal{O}_{\hat{P}}(1)\otimes \hat{\alpha}||\hat{\alpha} \in \mathrm{Pic}^0(\hat{P}) = A\}$). Denote by $\mathcal{U} \subset P \times \hat{P}$ the universal family and by $\tilde{\mathcal{P}}$ the pull-back $(\pi \times \hat{\pi})^*\mathcal{P}$ of the Poincar\'{e} bundle on $A \times \hat{A}$. We have \begin{itemize} \item $\mathcal{U} \equiv p_1^* \mathcal{O}_P(1) \otimes \tilde{\mathcal{P}} \otimes p_2^*\mathcal{O}_{\hat{P}}(1) $ (by Theorem \ref{pf}); \item identifying a divisor in $|\mathcal{O}_{P}(1)\otimes \alpha|,\alpha \in \hat{A}$ with a point in $\hat{P}$, for every $x \in P$, the fiber $\mathcal{U}_x$ parametrizes all those divisors passing through $x$; \item for $x,y \in P$, $\mathcal{U}_x \equiv \mathcal{U}_y \Leftrightarrow \pi(x) = \pi(y), ~\mathrm{and}~\mathcal{U}_x = \mathcal{U}_y \Leftrightarrow x = y$.
\end{itemize} We can write that \begin{equation}\label{dec} \mathcal{U}_x = \mathcal{H}_x + \mathcal{V}_x ~\text{and}~ \mathcal{V}_x= \mathcal{V}^1_x + \cdots + \mathcal{V}^r_x \end{equation} where $\mathcal{H}_x$ is the horizontal part (if $\mathrm{rank}(R\Phi_{\mathcal{P}}(E)) = 1$ then $\mathcal{H}_x = \emptyset$), $\mathcal{V}_x = \hat{\pi}^*V_x$ is the vertical part ($\mathcal{V}_x = \emptyset$ if $\mathcal{U}_x$ is irreducible), and the $\mathcal{V}^i_x = \hat{\pi}^*V^i_x$'s are the reduced and irreducible vertical components (two of them may be equal). In fact there is a decomposition $\mathcal{U}= \mathcal{H} + \mathcal{V}$ such that for general $x \in P$, $\mathcal{U}_x = \mathcal{H}_x + \mathcal{V}_x$. \begin{Lemma}[\cite{Zh}, Lemma 3.3]\label{spr} Let $x,y \in P$ be two distinct points. Write that $\mathcal{U}_x = \mathcal{H}_x + \mathcal{V}_x $ and $\mathcal{U}_y = \mathcal{H}_y + \mathcal{V}_y$ as in (\ref{dec}). Then the following conditions are equivalent \begin{itemize} \item[(a)]{$|\mathcal{O}_P(2)|$ fails to separate $x,y$;} \item[(b)]{$\mathcal{H}_x= \mathcal{H}_y$ and $\mathrm{Supp}(V_x +(-1)_{\hat{A}}^*V_x) = \mathrm{Supp}(V_y +(-1)_{\hat{A}}^*V_y)$.} \end{itemize} \end{Lemma} \section{The maps between two irregular varieties}\label{map} Here we give a theorem comparing the Euler numbers of two varieties of m.A.d. equipped with a generically finite surjective morphism. A similar result has been proved by Tirabassi under a stronger assumption (\cite{Ti} Prop. 5.2.4). A weaker version also appeared in \cite{CLZ}, where it is applied to study the automorphism groups inducing trivial actions on cohomology of irregular varieties. \begin{Theorem}\label{euln} Let $\pi: X \rightarrow Z$ be a generically finite surjective morphism between two smooth projective varieties of m.A.d.. Then $\chi(\omega_X) \geq \chi(\omega_Z)$.
If moreover \begin{itemize} \item[(i)]{$\pi$ is not birational;} \item[(ii)]{$\pi^*: \mathrm{Pic}^0 (Z) \rightarrow \mathrm{Pic}^0 (X)$ is an embedding; and} \item[(iii)]{$\mathrm{gv}^i(\omega_X, \mathrm{a}_X)\geq 1~\mathrm{for} ~i=1,2,\cdots,\dim X-1$, where $\mathrm{a}_X:= \mathrm{alb}_Z \circ \pi: X \rightarrow Z \rightarrow \mathrm{Alb} (Z)$,} \end{itemize} then $\chi(X, \omega_X) > \chi(Z, \omega_Z)$. \end{Theorem} \begin{proof} By assumption we have a splitting $\pi_*\omega_X \cong \omega_Z \oplus \mathcal{F}$. Since $(\mathrm{a}_X)_*\omega_X$ is a GV-sheaf on $\mathrm{Alb} (Z)$, the direct summand $(\mathrm{alb}_Z)_*\mathcal{F}$ is also a GV-sheaf. Then $$\chi(Z, \mathcal{F}) = \chi(\mathrm{Alb} (Z), (\mathrm{alb}_Z)_*\mathcal{F}) = h^0(\mathrm{Alb} (Z), (\mathrm{alb}_Z)_*\mathcal{F} \otimes \alpha) \geq 0 ~\mathrm{for~ general}~ \alpha \in \mathrm{Pic}^0 (Z),$$ and it follows that $$\chi(X, \omega_X) = \chi(Z, \omega_Z) + \chi(Z, \mathcal{F}) \geq \chi(Z, \omega_Z).$$ Now assume (i, ii, iii). Note that (i) implies that $\mathcal{F} \neq 0$; (ii) implies that $R^d \Phi_\mathcal{P}((\mathrm{a}_X)_*\omega_X) \cong R^d \Phi_\mathcal{P}((\mathrm{alb}_Z)_*\omega_Z) \cong \mathbb{C}(\hat{0})$ where $d = \dim X$, thus $R^d \Phi_\mathcal{P}((\mathrm{alb}_Z)_*\mathcal{F}) = 0$; (iii) implies that $\mathrm{gv}^i((\mathrm{alb}_Z)_*\mathcal{F}) \geq 1~\mathrm{for} ~i=1,2,\cdots,\dim X-1$. So we conclude that $(\mathrm{alb}_Z)_*\mathcal{F}$ is a non-zero M-regular sheaf. Since $(\mathrm{alb}_Z)_*\mathcal{F}$ is CGG (cf. 
Proposition \ref{cgg}), for general $\alpha \in \mathrm{Pic}^0 (Z)$, we have $$\chi(\mathrm{Alb} (Z), (\mathrm{alb}_Z)_*\mathcal{F}) = h^0(\mathrm{Alb} (Z), (\mathrm{alb}_Z)_*\mathcal{F} \otimes \alpha) > 0.$$ As a consequence we get that $$\chi(X, \omega_X) > \chi(Z, \omega_Z).$$ \end{proof} \section{The bicanonical map}\label{bicmap} \begin{Assumption-Notation}\label{not2} Let $X$ be a smooth projective variety of general type and of m.A.d., with $q(X) = \dim X = d\geq 2$. Denote by $a: X \rightarrow A$ the Albanese map, and assume $A$ is simple, which implies that $X$ is primitive. Suppose that the bicanonical map $\phi: X \dashrightarrow \mathbb{P}^{P_2(X) - 1}$ is not birational. \end{Assumption-Notation} \subsection{The Fourier-Mukai transform of $\omega_X$} \begin{Lemma}\label{pre} $R^0\Phi_{\mathcal{P}_a}(\omega_X) \cong \mathcal{O}_{\hat{A}}(-\hat{D})^{\oplus \chi(\omega_X)}$ where $\hat{D}$ is an ample divisor on $\hat{A}$. \end{Lemma} \begin{proof} Let $U_0 = \hat{A} \setminus V^1(\omega_X, a)$ and $\mathcal{B}_a(x) = \{\alpha \in U_0| x~ \mathrm{is ~a ~ base~ point~ of }~ |\omega_X \otimes \alpha|\}$. Applying \cite{BLNP} Theorem 4.13 gives that $\mathrm{codim}_{\hat{A}}\mathcal{B}_a(x) = 1$ for general $x \in X$. Denote by $\bar{\mathcal{Y}}$ the divisorial part of the closure of $\mathcal{B}:= \{(x, \alpha) \in X \times U_0| \alpha \in \mathcal{B}_a(x)\}$ in $X\times \hat{A}$. We conclude that for $\alpha \in U_0$, $|\omega_X \otimes \alpha| = |M_\alpha| + F_\alpha$, where $|M_\alpha|$ is the movable part and $F_\alpha = \bar{\mathcal{Y}}_\alpha$ is the fixed part. As in \cite{BLNP} Sec. 5.1, we define a map $f: \hat{A} \rightarrow \hat{A}$. Since $\hat{A}$ is simple, we conclude that $f = \mathrm{id}_{\hat{A}}$ by \cite{BLNP} Lemma 5.1 (a).
As a consequence, $|M_\alpha|$ is independent of $\alpha$, i.e., $$|\omega_X \otimes \alpha| = |M| + F_\alpha$$ By \cite{BLNP} Lemma 5.2, we have $$\mathcal{P}_a \cong \mathcal{O}_{X \times \hat{A}}(\bar{\mathcal{Y}}) \otimes p_1^*(\omega_X^{-1} \otimes M) \otimes p_2^*\mathcal{O}_{\hat{A}}(-\hat{D})$$ where $\hat{D}$ is a fiber $\bar{\mathcal{Y}}_p$ for some $p \in X$. Then there is an exact sequence $$0 \rightarrow \mathcal{P}_a^{-1} \rightarrow p_1^*(\omega_X \otimes M^{-1})\otimes p_2^*\mathcal{O}_{\hat{A}}(\hat{D}) \rightarrow p_1^*(\omega_X \otimes M^{-1})\otimes p_2^*\mathcal{O}_{\hat{A}}(\hat{D})|_{\bar{\mathcal{Y}}} \rightarrow 0$$ Applying $R^d(p_2)_*$ to the sequence above, we obtain the following exact sequence \begin{equation}\label{3} 0 \rightarrow \tau \rightarrow (-1)_{\hat{A}}^*\widehat{\mathcal{O}_X} \rightarrow \mathcal{O}_{\hat{A}}(\hat{D})^{\oplus \chi(\omega_X)} \rightarrow \tau' \rightarrow 0 \end{equation} where \begin{enumerate} \item[(a)]{The rank $\chi(\omega_X)$ in the third term appears because $h^d(\omega_X \otimes M^{-1}) = h^0(M) = \chi(\omega_X)$.} \item[(b)]{$\tau'$ is supported at the locus of the $\alpha \in \hat{A}$ such that the fiber $\bar{\mathcal{Y}}_\alpha$ of the projection $p_2: \bar{\mathcal{Y}} \rightarrow \hat{A}$ has dimension $d$. This locus is contained in $V^1(\omega_X,a)$, hence consists of finitely many torsion points.} \item[(c)]{Since $(-1)_{\hat{A}}^*\widehat{\mathcal{O}_X}$ (cf. Corollary \ref{rmk} (ii)) also has rank $\chi(\omega_X)$, the kernel of the map $(-1)_{\hat{A}}^*\widehat{\mathcal{O}_X} \rightarrow \mathcal{O}_{\hat{A}}(\hat{D})^{\oplus \chi(\omega_X)}$ is the torsion part $\tau \cong \mathbb{C}(\hat{0})$ of $(-1)_{\hat{A}}^*\widehat{\mathcal{O}_X}$ (cf. \cite{BLNP} Prop. 6.1). } \item[(d)]{Note that $R\Psi_{\mathcal{P}}(\tau'), R\Psi_{\mathcal{P}}(\tau)~\mathrm{and}~ R\Psi_{\mathcal{P}}((-1)_{\hat{A}}^*\widehat{\mathcal{O}_X}) \cong a_*\mathcal{O}_X$ (by Theorem \ref{Mu}) are all sheaves on $A$.
Applying $R\Psi_{\mathcal{P}}$ to the sequence (\ref{3}) and using a spectral sequence, we conclude that $R\Psi_{\mathcal{P}}(\mathcal{O}_{\hat{A}}(\hat{D})^{\oplus \chi(\omega_X)})$ is also a sheaf, hence $\hat{D}$ is an ample divisor on $\hat{A}$.} \end{enumerate} Note that $$\mathcal{E}xt^i(\mathcal{O}_{\hat{A}}(\hat{D})^{\oplus \chi(\omega_X)}, \mathcal{O}_{\hat{A}}) = 0 ~\mathrm{if}~i\neq 0~\mathrm{and}~ \mathcal{E}xt^i(\tau, \mathcal{O}_{\hat{A}}) = \mathcal{E}xt^i(\tau', \mathcal{O}_{\hat{A}}) = 0~\mathrm{if}~i\neq d.$$ Recall that $R^i\Phi_{\mathcal{P}_a}(\omega_X) \cong (-1)_{\hat{A}}^*\mathcal{E}xt^i(\widehat{\mathcal{O}_X}, \mathcal{O}_{\hat{A}})$ (cf. Corollary \ref{rmk} (iii)). Applying $\mathcal{E}xt(-, \mathcal{O}_{\hat{A}})$ to (\ref{3}), by using a spectral sequence, we conclude that $$R^0\Phi_{\mathcal{P}_a}(\omega_X) \cong (-1)_{\hat{A}}^*\mathcal{E}xt^0(\widehat{\mathcal{O}_X}, \mathcal{O}_{\hat{A}}) \cong \mathcal{O}_{\hat{A}}(-\hat{D})^{\oplus \chi(\omega_X)}$$ \end{proof} \subsection{The universal divisor}\label{ud} Since $\hat{D}$ is an ample divisor on $\hat{A}$, $\mathcal{O}_{\hat{A}}(\hat{D})^{\oplus \chi(\omega_X)}$ is an $IT^0$-vector bundle; using Corollary \ref{Muc}, we have that the sheaf $$E := (-1)_A^*R^d\Psi_{\mathcal{P}}R^{0}\Phi_{\mathcal{P}_a}(\omega_X) \cong (-1)_A^*R^d\Psi_{\mathcal{P}}(\mathcal{O}_{\hat{A}}(-\hat{D})^{\oplus \chi(\omega_X)}) \cong (R\Psi_{\mathcal{P}}(\mathcal{O}_{\hat{A}}(\hat{D})^{\oplus \chi(\omega_X)}))^*$$ is an $IT^0$-vector bundle such that $R^0\Phi_{\mathcal{P}}(E) = R^{0}\Phi_{\mathcal{P}_a}(\omega_X)$. By Corollary \ref{fm}, $E$ fits into the following exact sequence \begin{equation}\label{4} E \rightarrow a_*\omega_X \rightarrow \omega_A \rightarrow 0.
\end{equation} Let $$\hat{P} = \mathbb{P}_{\hat{A}}(R^{0}\Phi_{\mathcal{P}_a}(\omega_X))=\mathbb{P}_{\hat{A}}(R^0\Phi_{\mathcal{P}}(E)) \cong \hat{A} \times \mathbb{P}^{\dim |M|}~\mathrm{and}~P = \mathbb{P}_A(E^*).$$ Then $\hat{P}$ is one component of the Hilbert scheme parametrizing the divisors in $|K_X \otimes \alpha|, \alpha \in \hat{A}$. We denote by $\mathcal{K} \subset X \times \hat{P}$ the universal family (cf. Theorem \ref{pf}). Let the notation $\tilde{\mathcal{P}}$, $\mathcal{U}$, $\pi$ and $\hat{\pi}$ be as in Sec. \ref{rzh}. By the proof of Lemma \ref{pre}, for $\hat{p} = (\alpha, M) \in U_0 \times \mathbb{P}^{\dim |M|} \subset \hat{P}$, where $M\in |M|$, we have $$\mathcal{K}_{\hat{p}} = \bar{\mathcal{Y}}_\alpha + M.$$ So $(\mathrm{id}_X \times \hat{\pi})^*\bar{\mathcal{Y}} \subset \mathcal{K}$, and we can write that \begin{equation} \mathcal{K} = \mathcal{H} + (\mathrm{id}_X \times \hat{\pi})^*\bar{\mathcal{Y}}. \end{equation} \begin{Fact}\label{facts} \begin{itemize} \item[(a)] If $\mathcal{H}$ is non-empty (i.e., $\chi(X, \omega_X) > 1$), then it is dominant over $\hat{P}$, and for $\hat{p} = (\alpha, M) \in \hat{P}$, $\mathcal{H}_{\hat{p}} = M$; \item[(b)] for general $x \in X$, $\mathcal{H}_x = \hat{A} \times H_x \subset \hat{A} \times \mathbb{P}^{\dim |M|}$ where $H_x$ is the hyperplane in $\mathbb{P}^{\dim |M|}$ parametrizing all the divisors in $|M|$ passing through $x$; \item[(c)] for the divisor $\bar{\mathcal{Y}}$, denoting by $\mathcal{V}$ the sum of all the components dominant over $X$, then $\bar{\mathcal{Y}} = \mathcal{V} + p_1^*F$ where $F \subset X$ is the common fixed part of all $F_\alpha, \alpha \in U_0$; \item[(d)] for a general point $x \in X \setminus F$, $\bar{\mathcal{Y}}_x = \mathcal{V}_x \equiv \mathcal{O}_{\hat{A}}(\hat{D}) \otimes \mathcal{P}_{a(x)}$. \end{itemize} \end{Fact} Let $\tilde{\mathcal{P}}_a = (a \times \hat{\pi})^*\mathcal{P}$.
By Theorem \ref{pf}, we have $$\mathcal{K} \equiv p_1^*\omega_X \otimes \tilde{\mathcal{P}}_a \otimes p_2^*\mathcal{O}_{\hat{P}}(1)~~~\mathrm{and}~~~ \mathcal{U} \equiv p_1^* \mathcal{O}_P(1) \otimes \tilde{\mathcal{P}}\otimes p_2^*\mathcal{O}_{\hat{P}}(1).$$ Observe that for a general $x \in X$, the fiber $\mathcal{K}_x$ is a divisor on $\hat{P}$ linearly equivalent to $\mathcal{O}_{\hat{P}}(1) \otimes \hat{\pi}^*\mathcal{P}_{a(x)}$, hence is a fiber of $\mathcal{U} \rightarrow P$. We can define a rational map relative over $A$ $$h: X \dashrightarrow P ~\mathrm{via}~x \mapsto \mathcal{K}_x.$$ We may assume that $h$ is a morphism after blowing up $X$. There exists an open set $U \subset X$ such that the restriction $\mathcal{K}|_{U \times \hat{P}} = (h \times \mathrm{id}_{\hat{P}})^* \mathcal{U}|_{U \times \hat{P}}$. Since $\mathcal{U}$ is flat over $P$, $(h \times \mathrm{id}_{\hat{P}})^* \mathcal{U}$ is the closure of $(h \times \mathrm{id}_{\hat{P}})^* \mathcal{U}|_{U \times \hat{P}}$ in $X \times \hat{P}$, thus $(h \times \mathrm{id}_{\hat{P}})^* \mathcal{U} \subset \mathcal{K}$. \begin{Fact}\label{fact} \begin{itemize} \item[(1)] Using the see-saw principle, $\mathcal{K} = (h \times \mathrm{id}_{\hat{P}})^*\mathcal{U} \otimes p_1^*\mathcal{O}_X(G)$ where $G$ is an effective divisor on $X$ such that $h^*\mathcal{O}_P(1) + G \equiv \omega_X$. \item[(2)] We have a natural homomorphism $\otimes s: h^*\mathcal{O}_P(1) \rightarrow \omega_X$ where $s \in H^0(X, \mathcal{O}_X(G))$ is a section with zero locus $G$; pushing forward gives a homomorphism $a_*h^*\mathcal{O}_P(1) \rightarrow a_*\omega_X$. \item[(3)] The relative map $h: X/A \rightarrow P/A$ is induced by the homomorphism $E =\pi_*\mathcal{O}_P(1) \rightarrow a_*h^*\mathcal{O}_P(1)$. \item[(4)] The composite homomorphism $E \rightarrow a_*h^*\mathcal{O}_P(1) \rightarrow a_*\omega_X$ coincides with the natural homomorphism $E \rightarrow a_*\omega_X$ in (\ref{4}) up to multiplication by a non-zero constant.
\end{itemize} \end{Fact} We explain (4). Since $E$ is CGG, the composite homomorphism $E \rightarrow a_*h^*\mathcal{O}_P(1) \rightarrow a_*\omega_X$ is determined by its Fourier-Mukai transform $\lambda: R^0\Phi_{\mathcal{P}}E \rightarrow R^0\Phi_{\mathcal{P}}(a_*h^*\mathcal{O}_P(1)) \rightarrow R^0\Phi_{\mathcal{P}}(a_*\omega_X)$. By abuse of notation, we also use $\mathcal{U}$ and $\mathcal{K}$ for the line bundles on $P \times \hat{P}$ and $X \times \hat{P}$ linearly equivalent to $\mathcal{U}$ and $\mathcal{K}$ respectively. Then with the corresponding terms being isomorphic, we have that $\lambda$ coincides with the following natural composite homomorphism $$\lambda': \hat{\pi}_*(p_2)_*(\mathcal{U} \otimes p_2^*\mathcal{O}_{\hat{P}}(-1)) \rightarrow \hat{\pi}_*(p_2)_*((h \times \mathrm{id}_{\hat{P}})^*\mathcal{U} \otimes p_2^*\mathcal{O}_{\hat{P}}(-1)) \rightarrow \hat{\pi}_*(p_2)_*(\mathcal{K} \otimes p_2^*\mathcal{O}_{\hat{P}}(-1)).$$ We can see that $\lambda'$ coincides with the Fourier-Mukai transform of $E \rightarrow a_*\omega_X$ in (\ref{4}) up to multiplication by a non-zero constant; then (4) follows. \begin{Lemma}\label{embd} If $\deg(a) > 2$, then $h : X \rightarrow P$ is generically an embedding, which means that $\mathcal{K}_x \neq \mathcal{K}_y$ for two general distinct points $x, y \in X$. \end{Lemma} \begin{proof} By Fact \ref{fact} (3, 4), the degree of the restriction map $\pi|_{h(X)}: h(X) \rightarrow A$ is $\geq \mathrm{rank}(E \rightarrow a_*h^*\mathcal{O}_P(1)) \geq \deg(a)-1$. Then we have $\deg(h) \leq \frac{\deg(a)}{\deg(a) -1}$; since $\deg(a) > 2$, the right hand side is $< 2$, hence $\deg(h) = 1$ and the assertion follows. \end{proof} \subsection{The Euler number $\chi(X, \omega_X)$} \begin{Proposition}\label{ir} $\mathcal{V}$ is irreducible. \end{Proposition} \begin{proof} We argue by contradiction. Suppose that $\mathcal{V} = \mathcal{V}^1 + \mathcal{V}^2$ is reducible. Note that both $\mathcal{V}^1$ and $\mathcal{V}^2$ are dominant over $X$ and $\hat{A}$.
Fixing a general $\alpha_0$, we define two maps $\iota_i: \hat{A} \rightarrow \hat{A}$ via $\alpha \mapsto \mathcal{V}^i_\alpha - \mathcal{V}^i_{\alpha_0}$ with $\mathcal{V}^i_\alpha - \mathcal{V}^i_{\alpha_0}$ identified as an element in $\hat{A} = \mathrm{Pic}^0(X)$, which extend to two morphisms. We claim that neither of the two maps is constant. Indeed, say, if $\iota_1$ is constant, then for a general $\alpha \in \hat{A}$, $\mathcal{V}^1_\alpha \equiv \mathcal{V}^1_{\alpha_0}$ and $\mathcal{V}^1_\alpha \neq \mathcal{V}^1_{\alpha_0}$, so $\mathcal{V}^1_\alpha + \mathcal{V}^2_{\alpha_0} \equiv \mathcal{V}^1_{\alpha_0} + \mathcal{V}^2_{\alpha_0} \in |\mathcal{V}_{\alpha_0}|$, contradicting that $\mathcal{V}_{\alpha_0}$ is contained in the fixed part of $|\omega_X \otimes \alpha_0|$. Therefore, both $\iota_1$ and $\iota_2$ are surjective since $\hat{A}$ is simple. For general $\alpha \in \hat{A}$, there exist $\alpha_1, \alpha_2 \in \hat{A}$ such that $\iota_1(\alpha_1) = \alpha, \iota_2(\alpha_2) = \alpha^{-1}$. Then we conclude that $$\mathcal{V}_{\alpha_0} = \mathcal{V}^1_{\alpha_0} + \mathcal{V}^2_{\alpha_0} \equiv \mathcal{V}^1_{\alpha_1} + \mathcal{V}^2_{\alpha_2}$$ which contradicts that $\mathcal{V}_{\alpha_0}$ is contained in the fixed part of $|\omega_X \otimes \alpha_0|$ again. \end{proof} \begin{Theorem}\label{eun} The Euler number $\chi(\omega_X) = 1$. \end{Theorem} \begin{proof} We argue by contradiction. Suppose that $\chi(\omega_X) \geq 2$. So for general $\alpha \in \mathrm{Pic}^0 (X)$, the movable part $|M|$ of $|\omega_X \otimes \alpha|$ is non-trivial. By taking two different general elements $M_1, M_2 \in |M|$, we define a rational map $f: X \dashrightarrow \mathbb{P}^1$, and assume that $f$ is a morphism after blowing up $X$. Let $f = \pi \circ g: X \rightarrow Y \rightarrow \mathbb{P}^{1}$ be the Stein factorization. Since $A$ is simple and $\dim A \geq 2$, we conclude that $Y$ is a rational curve.
Take a general fiber $M'$ of $g: X \rightarrow Y$. Then $M'$ is smooth, $M \equiv kM'$ for some integer $k>0$, and the restriction map of the bicanonical map $\phi|_{M'}$ is not birational. We claim that $M'$ is not birational to a theta divisor on a p.p.a.v.. Indeed, otherwise we have $q(M') = \dim M' +1 = \dim X$, thus $q(X) = q(M') + q(Y)$. Then $X$ is birational to $M' \times \mathbb{P}^1$ by Theorem \ref{cp}, contradicting that $X$ is of general type. Hence, to obtain a contradiction, it suffices to prove the following: \begin{Claim} $M'$ is birational to a theta divisor on a p.p.a.v.. \end{Claim} \emph{Proof of the claim:} We break the proof into three steps. \underline{Step 1}: Consider the line bundle $\omega_X(M')$. For general $x\in X$, we define the locus $B'_x := \{\alpha \in \hat{A}|x~ \mathrm{is ~a ~base ~ point~of}~|\omega_X(M')\otimes \alpha|\}$. Then $\mathrm{codim}_{\hat{A}}B'_x = 1$. Assume to the contrary that $\mathrm{codim}_{\hat{A}}B'_x > 1$. Then take two general distinct points $x,y \in X$ such that $|2K_X|$ fails to separate them. We can see that every $M \in |M|$ containing $x$ must contain $y$, thus $\mathcal{H}_x = \mathcal{H}_y$ by Fact \ref{facts} (b). We claim that $\mathcal{V}_x = \mathcal{V}_y$; as a consequence, $\mathcal{K}_x = \mathcal{K}_y$ and $a(x) = a(y)$ by Fact \ref{facts} (d). Indeed, if not, we can choose $\alpha \in \hat{A}$ contained in $\mathcal{V}_x$ while not in $\mathcal{V}_y$ such that $-\alpha$ is not contained in $B'_y$ since $\mathrm{codim}_{\hat{A}}(B'_y) > 1$. It follows that $\mathcal{V}_\alpha$ contains $x$ but not $y$, and there exists $D \in |\omega_X(M')\otimes \alpha^{-1}|$ which does not contain $y$. Taking a divisor $M' \in |M'|$ not containing $y$, we see that the divisor $\mathcal{V}_\alpha + D + (k-1)M' + F \in |2K_X|$ contains $x$ but not $y$ (where $F$ is introduced in Fact \ref{facts} (c)), a contradiction. Then we obtain a contradiction by Proposition \ref{cnm} if $\deg (a) = 2$, and by Lemma \ref{embd} if $\deg (a) > 2$.
\underline{Step 2}: $|\omega_X(M')\otimes \alpha| = |H| + F'_{\alpha}$, where the movable part $|H|$ is independent of general $\alpha\in \hat{A}$. Since $\mathrm{codim}_{\hat{A}}B'_x = 1$ for general $x \in X$, arguing as in the proof of Lemma \ref{pre} we get a divisor $\bar{\mathcal{Y}}'$ dominant over $\hat{A}$, such that $|\omega_X(M')\otimes \alpha| = |H_{\alpha}| + F'_{\alpha}$ for general $\alpha \in \hat{A}$, where $|H_{\alpha}|$ is the movable part and $F'_{\alpha} = \bar{\mathcal{Y}}'_\alpha$ is the fixed part. Since $|M'|$ is base point free, we have $F'_{\alpha} \leq F_\alpha$, i.e., $\bar{\mathcal{Y}}'_\alpha \leq \bar{\mathcal{Y}}_\alpha$. There exists a non-empty open set $U \subset \hat{A}$ such that the restriction $\bar{\mathcal{Y}}'|_{X \times U} \leq \bar{\mathcal{Y}}|_{X \times U}$, thus $\bar{\mathcal{Y}}' \leq \bar{\mathcal{Y}}$ because they are the closures of $\bar{\mathcal{Y}}'|_{X \times U}$ and $\bar{\mathcal{Y}}|_{X \times U}$ in $X \times \hat{A}$ respectively. Denote by $\mathcal{V}'$ the sum of the components of $\bar{\mathcal{Y}}'$ dominant over $X$. We have $\mathcal{V}' \leq \mathcal{V}$, thus $\mathcal{V}' = \mathcal{V}$ by Proposition \ref{ir}. Denote by $F'$ the common fixed part of $|\omega_X(M')\otimes \alpha|$. We can see that $H_{\alpha} \equiv \omega_X(M' - \mathcal{V}_\alpha -F')\otimes \alpha$ is independent of general $\alpha\in \hat{A}$. Setting $F'_{\alpha} = \mathcal{V}_\alpha + F'$, we are done. \underline{Step 3}: Tensoring the following exact sequence $$0\rightarrow \omega_X \rightarrow \omega_X(M') \rightarrow \omega_{M'} \rightarrow 0$$ with $\alpha \in U_0$ and taking cohomology, we conclude that the restriction map $H^0(X, \omega_X(M')\otimes \alpha) \rightarrow H^0(M', \omega_{M'}\otimes \alpha)$ is surjective since $H^1(X,\omega_X \otimes \alpha) = 0$.
Then by $|\omega_X(M')\otimes \alpha| = |H| + F'_{\alpha}$, we have that $$|\omega_{M'}\otimes \alpha| = |H||_{M'} + F'_{\alpha}|_{M'}$$ Since $A$ and $\hat{A}$ are simple, they have no proper subtori of positive dimension; we conclude that $A$ is generated by the translates through the origin of $a(M')$, and $\dim V^1(\omega_{M'}, a|_{M'}) = 0$ by the generic vanishing theorem. The restriction morphism $a|_{M'}: M'\rightarrow A$ factors through a morphism to an abelian variety $a_{M'}: M'\rightarrow A_{M'}$, where $A_{M'}$ is the dual of the image of the natural map $(a|_{M'})^*: \mathrm{Pic}^0(A) \rightarrow \mathrm{Pic}^0 (M')$. So $(a_{M'})^*:\mathrm{Pic}^0(A_{M'}) \rightarrow \mathrm{Pic}^0(M')$ is an embedding. Write that $ a|_{M'} =\eta \circ a_{M'}: M'\rightarrow A_{M'} \rightarrow A$. Then $\eta$ is finite, and thus $\eta^*: \mathrm{Pic}^0(A) \rightarrow \mathrm{Pic}^0(A_{M'})$ is an epimorphism. So $V^1(\omega_{M'}, a_{M'}) = \eta^*V^1(\omega_{M'}, a|_{M'})$ is of dimension $0$. Applying Corollary \ref{ref} to $a_{M'}: M'\rightarrow A_{M'}$ shows that $M'$ is birational to a theta divisor. \end{proof} \begin{Remark} To prove $\chi(X, \omega_X) = 1$, the simplicity of $\mathrm{Alb}(X)$ is necessary by Example \ref{eg}. \end{Remark} \subsection{The degree of the bicanonical map}\label{sdeg} $\chi(X, \omega_X) = 1$ implies that $R^0\Phi_{\mathcal{P}_a}(\omega_X) \cong \mathcal{O}_{\hat{A}}(-\hat{D})$ is a line bundle, and $\hat{P} = \hat{A}$. By Fact \ref{facts}, we have that $\mathcal{K} = \bar{\mathcal{Y}}$, and for $x \in X\setminus F$, $\mathcal{K}_x = \bar{\mathcal{Y}}_x= \mathcal{V}_x$. Write that $\mathcal{V}_x = V^1_x + \cdots +V^r_x$ as in Sec. \ref{rzh}. \begin{Theorem}\label{deg} Let the notation be as above. Then $\deg(\phi) \leq 2^r$.
\end{Theorem} \begin{proof} By Lemma \ref{spr}, if $x, y \in P$ are two distinct points such that $|\mathcal{O}_P(2)|$ fails to separate them, then $\mathcal{H}_x = \mathcal{H}_y$ and $$V_y = ((-1)_{\hat{A}}^{\epsilon_1})^*V^1_x + \cdots + ((-1)_{\hat{A}}^{\epsilon_r})^*V^r_x, ~for~some~\epsilon_i \in \{0,1\}, i = 1,2,\cdots,r$$ which has $2^r$ possibilities. If $\deg(h) = 1$, then we are done since $h^*|\mathcal{O}_P(2)| \subset |2K_X|$. If $\deg(h) > 1$, then $\deg(a) = 2$ by Lemma \ref{embd}, and thus $a$ and $h$ are birationally equivalent. The assertion follows by combining the two facts that the restriction of $|\mathcal{O}_P(2)|$ on $h(X)$ defines a map of degree $\leq 2^r$ and that the bicanonical map does not factor through $a$ rationally (cf. Proposition \ref{cnm}). \end{proof} \begin{Theorem}\label{spr2} $|2K_X|$ separates the points over the same general point on $A$. \end{Theorem} \begin{proof} Consider the diagonal map $(a \times \phi): X \dashrightarrow A \times \mathbb{P}^{P_2(X) - 1}$. We can assume this map is a morphism after blowing up $X$, and denote by $Z$ the image. If the theorem is not true, then $X \rightarrow Z$ is not birational. Note that $a$ factors through $(a \times \phi): X \rightarrow Z$, so $(a \times \phi)^*: \mathrm{Pic}^0(Z) \rightarrow \mathrm{Pic}^0(X)$ is an embedding. Since $\chi(\omega_X) = 1$, Theorem \ref{euln} implies that $\chi(\omega_Z) = 0$, so $Z$ is birational to $A$ (\cite{BLNP}, Prop. 4.10). Therefore, $(a \times \phi): X \rightarrow Z$ is birational to $a: X \rightarrow A$, and $\phi$ factors through $a$ rationally. However, this contradicts Proposition \ref{cnm}. \end{proof} \subsection{Remarks and an example}\label{sre} We remark the following: (1) Since $\mathcal{V}$ is irreducible (Proposition \ref{ir}), it is expected that $\mathcal{V}_x$ is irreducible for general $x \in X$. If this is true, then by Theorem \ref{deg}, the bicanonical map $\phi$ is of degree 2.
Precisely, using the idea of \cite{Zh}, we know that $\phi$ factors through an involution $\sigma$, and up to a translate on $A$, the quotient map $X \rightarrow X/(\sigma)$ fits into the following commutative diagram \[\begin{CD} X @> >> X/(\sigma) \\ @Va VV @Va' VV \\ A @> >> A/((-1)_A) \end{CD} \] (2) For a primitive variety $X$, if we do not assume $A= \mathrm{Alb}(X)$ is simple, then $R^0\Phi_{\mathcal{P}_a} \omega_X \cong (\mathcal{E}(\mathcal{D}))^*$ where $\mathcal{E}$ is a vector bundle and $\mathcal{D}$ is a divisor on $\mathrm{Pic}^0 (X)$ (cf. the proof of \cite{BLNP} Lemma 5.3). Assume that $\mathcal{E}(\mathcal{D})$ is an $IT^0$-vector bundle. Then $E = (R^0\Psi_{\mathcal{P}}\mathcal{E}(\mathcal{D}))^*$ is an $IT^0$-vector bundle on $A$. Similarly we can define $P, \hat{P}, \mathcal{K}$ and $\mathcal{V}$. For general $x \in X$, if $\mathcal{V}_x = \hat{\pi}^*(V^1_x + \cdots+V^r_x)$ as before, then by a similar argument we can prove $\deg(\phi) \leq 2^r$. This bound is analogous to \cite{Zh} Corollary 3.4, and is optimal (cf. Example \ref{eg}). Stimulated by \cite{Zh} Theorems 1.2 and 3.5, it is expected that $A$ is decomposable if $r>1$ (Example \ref{eg} provides evidence). So it is possible to give an upper bound to $\deg(\phi)$ depending on the number of factors of $A$. \begin{Example}\label{eg} Let \begin{itemize} \item $(A_i, \Theta_i) , i = 1,2,\cdots,r$ be $r$ simple p.p.a.v.'s and $A_{r+1}$ a simple abelian variety; \item $A = A_1 \times A_2 \times \cdots \times A_r \times A_{r+1}$; \item $B = p_1^*B_1 + \cdots + p_r^*B_r + p_{r+1}^*B_{r+1}$ where $B_i \in |2\Theta_i|$ is a smooth divisor on $A_i$ for $i =1,2,\cdots,r$ and $B_{r+1}\equiv 2D_{r+1}$ is a smooth very ample divisor on $A_{r+1}$; \item $Y \rightarrow A$ the double cover given by the relation $2L \equiv B$ where $L$ is a line bundle linearly equivalent to $p_1^*\Theta_1 + \cdots + p_r^*\Theta_r + p_{r+1}^*D_{r+1}$; \item $X \rightarrow Y$ a smooth resolution.
\end{itemize} Note that $Y$ has at most canonical singularities since $B$ is a reduced simple normal crossing divisor. We denote by $\pi: X \rightarrow Y \rightarrow A$ the composite map, which coincides with the Albanese map. Immediately we have \begin{itemize} \item[(i)] $\omega_X \equiv \pi^*L$, thus $\pi_*\omega_X \cong \omega_A \oplus L$ and $X$ is primitive; \item[(ii)] $\pi_*\omega_X^2 \cong L \oplus \mathcal{O}_A(B)$, and the linear system $|B|$ defines a morphism of degree $2^r$ on $A$; \item[(iii)] $E \cong L$ is a line bundle, $P = A$ and $R\Phi_{\mathcal{P}} E$ is a vector bundle of rank $\chi(X, \omega_X) = \chi(A_{r+1}, \mathcal{O}_{A_{r+1}}(D_{r+1}))$. \end{itemize} We can prove that (with details left to the reader) \begin{itemize} \item[(1)] for general $x \in X$, $\mathcal{K}_x = \mathcal{H}_x + \mathcal{V}_x^1 + \mathcal{V}_x^2 + \cdots + \mathcal{V}_x^r$ where $\mathcal{V}_x^i$ is the pull-back of the divisor on $\hat{A}_i$ parametrizing the theta divisors passing through $p_i (\pi (x))$ via the projection $\hat{P} \rightarrow \hat{A} \rightarrow \hat{A}_i$ for $i = 1,2,\cdots,r$; \item[(2)] the degree of the bicanonical map of $X$ is $2^r$. \end{itemize} \end{Example} \section{Appendix: An inequality on the fibrations of irregular varieties}\label{inequ} Let $f:S\rightarrow C$ be a fibration of a smooth surface and $F$ a general fiber. Beauville proved that $q(S) \leq q(F) + q(C)$; if $q(F) \geq 2$, then the equality is attained if and only if $S$ is birational to $C \times F$ (\cite{Beau}). For the arbitrary dimensional case, we have \begin{Theorem}\label{ineq} Let $f: X \rightarrow Y$ be a fibration between two smooth projective varieties and $F$ a general fiber. Then $q(X) \leq q(Y) + q(F)$, and the kernel of the restriction map $r: \mathrm{Pic}^0(X) \rightarrow \mathrm{Pic}^0(F)$ contains $f^*\mathrm{Pic}^0(Y)$ as the connected component passing through the identity point $\hat{0} \in \mathrm{Pic}^0(X)$.
\end{Theorem} \begin{proof} The inequality has been proved in \cite{CH1} Cor. 3.5 for the Iitaka fibration. We use the notation in \cite{CH1} Lemma 2.6 and Cor. 3.5 for convenience. Noticing that the natural map $\mathrm{A}(X_y) \rightarrow J$ is surjective by the proof of \cite{CH1} Lemma 2.6, the inequality $q(X) \leq q(Y) + q(F)$ is obtained by applying the proof of \cite{CH1} Cor. 3.5 straightforwardly. The remaining assertion follows from \cite{CH1} Lemma 2.6 iii). Another approach uses Beauville's argument (\cite{Beau}) and \cite{Lan} Chap. VIII Theorem 13. \end{proof} \begin{Theorem}\label{cp} Let $f: X \rightarrow Y$ be a fibration between two smooth projective varieties. Suppose that for general $y \in Y$, $\mathrm{Alb}(X_y)$ is a p.p.a.v., the general fiber $X_y$ is birational to a theta divisor $F_y \subset \mathrm{Alb}(X_y)$, and that $q(X) = q(X_y) + q(Y)$. Then $\mathrm{Alb}(X_y)$ is isomorphic to a p.p.a.v. $A$ independent of general $y \in Y$, $F_y\cong F$ where $F$ is a theta divisor on $A$, and $X$ is birational to $F \times Y$. \end{Theorem} \begin{proof} By Theorem \ref{ineq} and the assumption $q(X) = q(X_y) + q(Y)$, the restriction map $\mathrm{Pic}^0(X)/f^*\mathrm{Pic}^0(Y) \rightarrow \mathrm{Pic}^0(X_y)$ is an epimorphism between two abelian varieties of the same dimension, and the kernel consists of finitely many torsion points which are independent of general $y$. Then we can see that $\mathrm{Pic}^0(X_y)$ is independent of general $y$ up to isomorphism, and so is its dual $\mathrm{Alb}(X_y)$. We can assume $\mathrm{Alb}(X_y) \cong A$, and $A$ has a theta divisor $F$ such that $F_y\cong F$. Note that $f: X \rightarrow Y$ has a birational model $f': X' \rightarrow Y$ such that the general fibers are all isomorphic to $F$. Take an equivariant resolution $\tilde{f}: \tilde{X} \rightarrow Y$ of $f': X' \rightarrow Y$ (\cite{Ka} p.14), whose general fibers are smooth and isomorphic to each other. Let $\tilde{F}$ be a general fiber of $\tilde{f}$.
Since $\tilde{F}$ is of general type, using \cite{Le} Proposition 1, we know that $\tilde{f}: \tilde{X} \rightarrow Y$ is birational to $(\tilde{F} \times Z)/G \rightarrow Z/G$, where $G$ is a finite group and the action of $G$ on the product $\tilde{F} \times Z$ is compatible with the actions on each factor. The action of $G$ on $\tilde{F}$ descends to $F$, and since $q(X) = q(F) + q(Y)$, $G$ induces a trivial action on $H^1(F, \mathcal{O}_F)$. If we can show that $G$ acts on $F$ trivially, then we are done. From now on, fix the Albanese map $a: F \rightarrow A$, and take $\sigma \in G$. By the universal property of Albanese maps, we obtain the following commutative diagram $$\centerline{\xymatrix{ &F \ar[d]^{a} \ar[r]^\sigma &F \ar[d]^{a}\\ &A \ar[r]^{\tilde{\sigma}} &A }}$$ i.e., $\sigma$ extends to an isomorphism $\tilde{\sigma}$ of $A$ fixing $F$. Since $\sigma$ acts trivially on $H^1(F, \mathcal{O}_{F})$, $\tilde{\sigma}$ acts trivially on $H^1(A, \mathcal{O}_{A})$, so it is nothing but a translate $t_b$ for some $b \in A$. Since $F$ is a theta divisor, the morphism $\phi_{F}: A \rightarrow \mathrm{Pic}^0(A)$ via $a' \mapsto t_{a'}^*F - F$ is an isomorphism. Then since $\tilde{\sigma}$ fixes $F$, we have $t_b^*F = F$, thus $b = 0$, which means that $\sigma$ is the identity. \end{proof}
\section{Light Localization} The phenomenon of light localization appears in three--dimensional periodic dielectric structures, in which, due to periodicity, an electromagnetic band gap develops. Then spontaneous emission with a frequency inside the band gap can be rigorously forbidden [1--3], because of a severe depression in the photon density of states for those frequencies which remain in the spectral gap between the upper and lower branches. Such samples, in which a photon band gap develops because of the structural periodicity in real space, have been called photonic band--gap materials. The appearance of gaps in the spectrum of photon states, due to real--space periodicity, is similar to the formation of gaps in the spectrum of electrons in a periodic lattice potential [4]. If resonance atoms are incorporated in a band--gap material, so that their transition frequency is inside the gap, then the effect of light localization [2,3] can arise. To formulate explicitly what this effect means, let us consider the average $$ s(t) \equiv < \sigma^z(t) > $$ of the population--difference operator $\sigma^z$. The average here implies the statistical or, at zero temperature, quantum--mechanical average. By localization of light one understands [5,6] that $$ \lim_{t\rightarrow\infty} s(t) > - 1\; . $$ The light localization becomes possible due to the formation of photon--atom bound states [5--7]. If a collective of identical impurity atoms is incorporated into a medium with a photon band gap, so that their transition frequency is inside the gap, and their spacing is much less than the transition wavelength, then a photonic impurity band is formed within the photonic band gap [7]. Electromagnetic coupling of neighboring atoms takes place by means of an effective resonance dipole--dipole interaction. If this interaction is sufficiently strong, then electromagnetic radiation can propagate inside the impurity band [7]. 
The formation of photon band gaps in photonic band--gap materials is similar to the well known polariton effect of the appearance of photon bands due to the interaction of light with collective excitations of a dense medium [8,9]. Physical processes are, actually, the same in both types of materials. The difference is only in the nature of the scatterers with which light interacts. In artificial photonic band--gap materials, a suppression of the photon density of states over a narrow frequency range results from multiple photon scattering by spatially correlated scatterers. In natural dense media, such as dielectrics or semiconductors, a frequency gap for propagating electromagnetic modes develops as a result of the photon interaction with optical collective excitations, such as optical phonons, magnons, excitons, and so on. Photons in a medium, coupled with collective excitations, are called polaritons. When a single resonance atom is placed in a frequency dispersive medium whose polariton spectrum has a gap, and the atomic transition frequency lies inside this gap, then a polariton--atom bound state appears causing a significant suppression of spontaneous emission [10,11]. The physical picture explaining this suppression is as follows. Let us imagine an atom in a medium, with the atomic transition frequency within the polariton gap. If this atom is initially excited, then it tends to become deexcited emitting a photon. However, since the propagation of photons inside the polariton gap is prohibited, the emitted photon is scattered back by collective excitations and is again absorbed by the atom. Thus, the atom cannot get rid of a photon and is doomed to stay excited. Conversely, if the atom is initially in the ground state, it continues to be in that state, since there are no photons around to excite it. The suppression of spontaneous emission of an atom is termed localization of light. 
As is explained above, the effect of light localization can be expressed as the inequality $s(t)>-1$ for the average population difference, valid for all times. The formation of polariton--atom bound states has been studied for the stationary case [10,11]. In the dynamical picture, the population difference of an atom satisfies the equation $$ \frac{ds}{dt} = -\gamma_1( s - \zeta ) \; , $$ in which $\gamma_1$ is a level width and $\zeta$, a stationary value of the population difference defined by a solution to the stationary problem. From the dynamical equation one has $$ s(t) = ( s_0 - \zeta ) e^{-\gamma_1t} + \zeta \; , $$ where $s_0\equiv s(0)$. If the stationary value $\zeta>-1$, then $\lim_{t\rightarrow\infty}s(t) =\zeta>-1$, which implies the localization of light. The complete suppression of emission corresponds to $\zeta=s_0$; then $s(t)=\zeta$. Note that the linewidth $\gamma_1$ is caused by vacuum quantum fluctuations and is always nonzero, irrespective of what medium the atom is placed in. If a collection of resonance impurity atoms is doped into a medium with a polariton band gap, then, in the same way as for photonic band--gap materials [7], an impurity band can be formed within the polariton gap [12,13]. Then electromagnetic radiation can propagate in such an impurity band. In order that such an impurity band be formed, the spacing of the resonance impurity atoms in the medium should be much smaller than the radiation wavelength. If it is so, then for a group of atoms a sufficiently strong effective interaction, caused by photon exchange, can develop. This interaction collectivizes the atoms, which then start radiating coherently [14]. In this way, the suppression of spontaneous emission for a single atom can be overcome by a group of atoms radiating coherently. The situation when a single atom cannot radiate inside the polariton gap but a collective of strongly interacting atoms can radiate recalls the following related case. 
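The relaxation dynamics above can be checked with a short numerical sketch (not part of the original analysis; the parameter values below are illustrative assumptions): an Euler integration of $ds/dt = -\gamma_1(s-\zeta)$ reproduces the closed form $s(t) = (s_0-\zeta)e^{-\gamma_1 t} + \zeta$, and a stationary value $\zeta > -1$ directly exhibits the localization criterion.

```python
import math

def integrate_population(s0, zeta, gamma1, dt=1e-4, t_end=5.0):
    """Euler integration of ds/dt = -gamma1 * (s - zeta)."""
    s, t = s0, 0.0
    while t < t_end:
        s += -gamma1 * (s - zeta) * dt
        t += dt
    return s

# illustrative values (not from the text); time measured in units of 1/gamma1
s0, zeta, gamma1, t_end = 1.0, -0.4, 1.0, 5.0
s_num = integrate_population(s0, zeta, gamma1, t_end=t_end)
s_exact = (s0 - zeta) * math.exp(-gamma1 * t_end) + zeta
```

The agreement between `s_num` and `s_exact` is limited only by the Euler step, and `s_exact` stays above $-1$ whenever $\zeta > -1$.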
If a sample with a polariton band gap is irradiated by a monochromatic electromagnetic wave with a frequency within the polariton gap, then the incident light cannot propagate through this medium because of total reflection. However, if the incident intensity is large enough, the light can penetrate into the dense medium even when propagating inside the polariton gap [15,16]. In such a case, analogously to coherent radiation, the possibility of the radiation propagation inside the band gap is due to nonlinear effects. Here and in what follows we use the term "atom" in the general sense, implying by a resonance atom any two--level object. Depending on radiation frequencies, this could be atoms as such, molecules, nuclei, or quantum dots and wells. The latter case is of special importance for semiconductors. Indeed, the polariton effect is well developed in many semiconductors, for instance, in $CuCl,\; CuBr,\; CdSe,\; ZnSe,\; GaAs,\; GaSb,\; InAs,\; AlAs,\; SiC$. The characteristic frequencies, where the polariton band gap arises, are as follows [4] (see also [12--14]). For example, in $GaAs$ the polariton gap having the width $\Delta\equiv\Omega_2-\Omega_1=4\times 10^{12}s^{-1}$ lies between $\Omega_1=5.1\times 10^{13}s^{-1}$ and $\Omega_2=5.5\times 10^{13}s^{-1}$; the linewidth being $\gamma_1/\Omega_1=1.2\times 10^{-5}$. In $SiC$ the polariton gap $\Delta=3\times 10^{13}s^{-1}$ is between $\Omega_1=1.5\times 10^{14}s^{-1}$ and $\Omega_2=1.8\times 10^{14}s^{-1}$; with the linewidth $\gamma_1/\Omega_1=3\times 10^{-6}$. In all cases, for the relaxation parameter $\gamma_2$ one has $\gamma_2/\Omega_1\sim 10^{-2}$. As is seen, the polariton band gap in such semiconductors is located in the infrared region. Therefore, resonance radiation for this region of frequencies could be represented by quantum dots and wells. Keeping in mind the feasibility of different radiating objects, we continue, for the sake of simplicity, to use the term "resonance atoms". 
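As a trivial numerical cross-check of the quoted band-edge data (a sketch; only the values listed above are used), the gap widths $\Delta = \Omega_2 - \Omega_1$ follow by subtraction:

```python
# Band edges quoted in the text (angular frequencies, s^-1)
materials = {
    "GaAs": (5.1e13, 5.5e13),   # (Omega_1, Omega_2)
    "SiC":  (1.5e14, 1.8e14),
}

# polariton gap widths Delta = Omega_2 - Omega_1
gaps = {name: om2 - om1 for name, (om1, om2) in materials.items()}
for name, delta in gaps.items():
    print(f"{name}: Delta = {delta:.1e} s^-1")
```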
\section{Basic Equations} The total Hamiltonian is given by the sum \begin{equation} \hat H = \hat H_a + \hat H_f + \hat H_m + \hat H_{af} + \hat H_{mf} \; , \end{equation} consisting of atomic, $\hat H_a$, field, $\hat H_f$, matter, $\hat H_m$, atom--field, $\hat H_{af}$, and matter--field, $\hat H_{mf}$, Hamiltonians. In the atomic Hamiltonian \begin{equation} \hat H_a =\frac{1}{2} \sum_{i=1}^N \omega_0 ( 1 +\sigma_i^z ) \end{equation} the index $i$ enumerates the atoms, $\omega_0$ is a transition frequency, and $\sigma_i^z$ is a population difference operator. Here and in what follows we set $\hbar\equiv 1$. The field Hamiltonian \begin{equation} \hat H_f = \frac{1}{8\pi} \int \left [ \stackrel{\rightarrow}{E}^2(\stackrel{\rightarrow}{r}) + \stackrel{\rightarrow}{H}^2(\stackrel{\rightarrow}{r}) \right ] d\stackrel{\rightarrow}{r} \end{equation} has the standard form in which $\stackrel{\rightarrow}{E}$ is the electric field and $\stackrel{\rightarrow}{H}=\stackrel{\rightarrow}{\nabla}\times\stackrel{\rightarrow}{A}$ is the magnetic field, with a vector potential $\stackrel{\rightarrow}{A}$ satisfying the Coulomb gauge condition $\stackrel{\rightarrow}{\nabla}\cdot\stackrel{\rightarrow}{A}=0$. The Hamiltonian of matter represents optical collective excitations and can be modelled by an ensemble of oscillators, \begin{equation} \hat H_m =\sum_{i=1}^{N'} \frac{\stackrel{\rightarrow}{p}_i^2}{2m} + \frac{1}{2}\sum_{ij}^{N'} \sum_{\alpha\beta}^3 D^{\alpha\beta}_{ij} u_i^\alpha u_j^\beta \; , \end{equation} where the index $i=1,2,\ldots,N'$ enumerates lattice sites, $\stackrel{\rightarrow}{p}_i$ and $\stackrel{\rightarrow}{u}_i$ are momentum and displacement operators, respectively, and $D_{ij}^{\alpha\beta}$ is a dynamical matrix. 
The atom--field interaction is described by the Hamiltonian \begin{equation} \hat H_{af} = -\frac{1}{c} \sum_{i=1}^N \stackrel{\rightarrow}{J}_a(\stackrel{\rightarrow}{r}_i)\cdot\stackrel{\rightarrow}{A}(\stackrel{\rightarrow}{r}_i) \; , \end{equation} which corresponds to the dipole approximation with the transition current \begin{equation} \stackrel{\rightarrow}{J}_a(\stackrel{\rightarrow}{r}_i) = i\omega_0\left ( \sigma_i^+ \stackrel{\rightarrow}{d}^* - \sigma_i^- \stackrel{\rightarrow}{d} \right ) \; , \end{equation} where $\sigma_i^+$ and $\sigma_i^-$ are the raising and lowering operators, respectively, and $\stackrel{\rightarrow}{d}$ is a transition dipole. The matter--field interaction can be presented as \begin{equation} \hat H_{mf} = -\frac{1}{c}\sum_{j=1}^{N'}\stackrel{\rightarrow}{J}_m(\stackrel{\rightarrow}{r}_j)\cdot\stackrel{\rightarrow}{A}(\stackrel{\rightarrow}{r}_j) \; , \end{equation} with the matter current \begin{equation} \stackrel{\rightarrow}{J}_m(\stackrel{\rightarrow}{r}_j) = \frac{e}{m}\stackrel{\rightarrow}{p}_j \; , \end{equation} in which $e$ and $m$ are charge and mass, respectively. The commutation relations for the operators introduced above are $$ [ \sigma_i^+,\sigma_j^- ] =\delta_{ij}\sigma_i^z \; , \qquad [ \sigma_i^z,\sigma_j^\pm ] =\pm 2\delta_{ij}\sigma_i^\pm \; , $$ $$ [ E^\alpha(\stackrel{\rightarrow}{r}),A^\beta(\stackrel{\rightarrow}{r}') ] = i4\pi c\delta_{\alpha\beta}\delta(\stackrel{\rightarrow}{r}-\stackrel{\rightarrow}{r}') \; . 
$$ Using these relations and the Heisenberg equations of motion, we get the Maxwell operator equations \begin{equation} \frac{1}{c}\frac{\partial \stackrel{\rightarrow}{A}}{\partial t} = -\stackrel{\rightarrow}{E} \; , \qquad \frac{1}{c}\frac{\partial \stackrel{\rightarrow}{E}}{\partial t} =\stackrel{\rightarrow}{\nabla}\times \stackrel{\rightarrow}{H} - \frac{4\pi}{c}\stackrel{\rightarrow}{J} \; , \end{equation} with the total density of current \begin{equation} \stackrel{\rightarrow}{J}(\stackrel{\rightarrow}{r}) =\sum_{i=1}^N \stackrel{\rightarrow}{J}_a(\stackrel{\rightarrow}{r}_i) \delta(\stackrel{\rightarrow}{r}-\stackrel{\rightarrow}{r}_i) + \sum_{j=1}^{N'} \stackrel{\rightarrow}{J}_m(\stackrel{\rightarrow}{r}_j)\delta(\stackrel{\rightarrow}{r}-\stackrel{\rightarrow}{r}_j) \; . \end{equation} For the atomic variables, we find \begin{equation} \frac{d\sigma_i^-}{dt} = - i\omega_0\sigma_i^- + k_0\sigma_i^z \stackrel{\rightarrow}{d}^*\cdot\stackrel{\rightarrow}{A}_i\; , \end{equation} and \begin{equation} \frac{d\sigma_i^z}{dt} = - 2k_0 ( \sigma_i^+ \stackrel{\rightarrow}{d}^* + \sigma_i^- \stackrel{\rightarrow}{d}) \cdot \stackrel{\rightarrow}{A}_i \; , \end{equation} where the notation $$ \stackrel{\rightarrow}{A}_i \equiv \stackrel{\rightarrow}{A}(\stackrel{\rightarrow}{r}_i,t) \; , \qquad k_0\equiv\frac{\omega_0}{c} $$ is used. 
From (9), with the Coulomb gauge condition, we have the wave equation \begin{equation} \left ( \stackrel{\rightarrow}{\nabla}^2 -\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right )\stackrel{\rightarrow}{A} = -\frac{4\pi}{c}\stackrel{\rightarrow}{J}\; , \end{equation} whose solution reads \begin{equation} \stackrel{\rightarrow}{A}(\stackrel{\rightarrow}{r},t) =\stackrel{\rightarrow}{A}_v(\stackrel{\rightarrow}{r},t) +\frac{1}{c}\int \frac{\stackrel{\rightarrow}{J}(\stackrel{\rightarrow}{r}',t-|\stackrel{\rightarrow}{r}-\stackrel{\rightarrow}{r}'|/c)}{|\stackrel{\rightarrow}{r}-\stackrel{\rightarrow}{r}'|} d\stackrel{\rightarrow}{r}'\; , \end{equation} where $\stackrel{\rightarrow}{A}_v$, being a solution of the related homogeneous equation, corresponds to vacuum fluctuations. Substituting the density of current (10) into (14) yields for the vector potential at the point $\stackrel{\rightarrow}{r}_i$ the expression \begin{equation} \stackrel{\rightarrow}{A}_i(t) = \stackrel{\rightarrow}{A}_v(\stackrel{\rightarrow}{r}_i,t) + \stackrel{\rightarrow}{A}_a(\stackrel{\rightarrow}{r}_i,t) + \stackrel{\rightarrow}{A}_m(\stackrel{\rightarrow}{r}_i,t) \; , \end{equation} in which the first term is caused by vacuum fluctuations, the second term, \begin{equation} \stackrel{\rightarrow}{A}_a(\stackrel{\rightarrow}{r}_i,t) = ik_0\sum_{j(\neq i)}^N \frac{1}{r_{ij}}\left [ \sigma_j^+\left ( t -\frac{r_{ij}}{c}\right )\stackrel{\rightarrow}{d}^* - \sigma_j^-\left ( t -\frac{r_{ij}}{c}\right )\stackrel{\rightarrow}{d} \right ] \; , \end{equation} where $$ r_{ij} \equiv |\stackrel{\rightarrow}{r}_{ij}| \; , \qquad \stackrel{\rightarrow}{r}_{ij}\equiv \stackrel{\rightarrow}{r}_i - \stackrel{\rightarrow}{r}_j \; , $$ is a vector potential generated by the radiating atoms, and the last term, \begin{equation} \stackrel{\rightarrow}{A}_m(\stackrel{\rightarrow}{r}_i,t) = \frac{1}{c} \sum_{j(\neq i)}^{N'} \frac{1}{r_{ij}} \stackrel{\rightarrow}{J}_m \left ( \stackrel{\rightarrow}{r}_j, t
-\frac{r_{ij}}{c}\right ) \; , \end{equation} is due to local electric currents in the medium. In the vector potentials (16) and (17) the self--action parts are excluded. Instead, we shall add to Eqs. (11) and (12) the terms describing the level width and the line width, $$ \gamma_1 =\frac{2}{3} k_0^3 d_0^2 = \frac{1}{T_1} \; , \qquad \gamma_2 =\frac{1}{T_2}\; , $$ where $d_0\equiv |\stackrel{\rightarrow}{d}|$. In this way, introducing the effective electric induction \begin{equation} \stackrel{\rightarrow}{D}_i(t) \equiv k_0\left [ \stackrel{\rightarrow}{A}_v(\stackrel{\rightarrow}{r}_i,t) + \stackrel{\rightarrow}{A}_m(\stackrel{\rightarrow}{r}_i,t)\right ] \; , \end{equation} we come to the equations $$ \frac{d\sigma_i^-}{dt} = - ( i\omega_0 +\gamma_2)\sigma_i^- +\sigma_i^z\stackrel{\rightarrow}{d}^*\cdot \stackrel{\rightarrow}{D}_i + $$ \begin{equation} + ik_0^2\sigma_i^z\stackrel{\rightarrow}{d}^*\cdot \sum_{j(\neq i)}^N \frac{1}{r_{ij}}\left [ \sigma_j^+\left ( t -\frac{r_{ij}}{c}\right )\stackrel{\rightarrow}{d}^* - \sigma_j^-\left ( t -\frac{r_{ij}}{c}\right ) \stackrel{\rightarrow}{d}\right ] \end{equation} and $$ \frac{d\sigma_i^z}{dt} = -\gamma_1 ( \sigma_i^z -\zeta ) - 2 (\sigma_i^+\stackrel{\rightarrow}{d}^* + \sigma_i^- \stackrel{\rightarrow}{d}) \cdot \stackrel{\rightarrow}{D}_i - $$ \begin{equation} - i2k_0^2( \sigma_i^+ \stackrel{\rightarrow}{d}^* +\sigma_i^- \stackrel{\rightarrow}{d}) \cdot \sum_{j(\neq i)}^N \frac{1}{r_{ij}}\left [ \sigma_j^+\left ( t -\frac{r_{ij}}{c}\right )\stackrel{\rightarrow}{d}^* - \sigma_j^-\left ( t -\frac{r_{ij}}{c}\right ) \stackrel{\rightarrow}{d}\right ] \; . \end{equation} The retardation effects in these equations can be treated in the quasirelativistic approximation. This means the following. In the nonrelativistic limit, when $c\rightarrow\infty$ and $k_0\rightarrow 0$, from (19) would follow $\sigma_i^-\sim \exp(-i\omega_0 t)$. 
In the quasirelativistic approximation, we set \begin{equation} \sigma_i^-\left ( t -\frac{r_{ij}}{c}\right ) \simeq \sigma_i^-(t)\exp(ik_0r_{ij}) \; . \end{equation} Define the statistical averages \begin{equation} u_i\equiv < \sigma_i^- > \; , \qquad s_i \equiv < \sigma_i^z > \end{equation} over atomic degrees of freedom. Then from (19) and (20), in the semiclassical approximation, we obtain \begin{equation} \frac{du_i}{dt} = - (i\omega_0 +\gamma_2 ) u_i + s_i (\stackrel{\rightarrow}{d}^*\cdot \stackrel{\rightarrow}{D}_i) + ik_0^3s_i\stackrel{\rightarrow}{d}^*\cdot\sum_{j(\neq i)}^N (\varphi_{ij}^*u_j^*\stackrel{\rightarrow}{d}^* - \varphi_{ij}u_j\stackrel{\rightarrow}{d} ) \end{equation} and $$ \frac{ds_i}{dt} = -\gamma_1 (s_i - \zeta ) - 2 (u_i^*\stackrel{\rightarrow}{d}^* + u_i\stackrel{\rightarrow}{d} )\cdot \stackrel{\rightarrow}{D}_i - $$ \begin{equation} - i2k_0^3 ( u_i^*\stackrel{\rightarrow}{d}^* + u_i\stackrel{\rightarrow}{d}) \cdot \sum_{j(\neq i)}^N ( \varphi_{ij}^* u_j^*\stackrel{\rightarrow}{d}^* -\varphi_{ij} u_j\stackrel{\rightarrow}{d} ) \; , \end{equation} where $$ \varphi_{ij} \equiv\frac{\exp(ik_0r_{ij})}{k_0r_{ij}} \; . $$ The semiclassical approximation is a kind of mean--field approximation. In this spirit, we may make the further mean--field approximation $$ \sum_{j(\neq i)}^N \varphi_{ij} u_j \approx u_i \sum_{j(\neq i)}\varphi_{ij} \equiv u_i\varphi_i \; , $$ where $$ \varphi_i \equiv \sum_{j(\neq i)}^N \varphi_{ij} =\sum_{j(\neq i)}^N \frac{\exp(ik_0r_{ij})}{k_0r_{ij}}\; . $$ The factors $\varphi_{ij}$ and $\varphi_i$ describe local fields. 
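The local--field sum $\varphi_i$ can be evaluated numerically for a hypothetical configuration (a sketch; the chain geometry and spacing are illustrative assumptions, not taken from the text). Its real and imaginary parts are what enter the cooperative Lamb shift and the atom--atom coupling parameter introduced next:

```python
import cmath

def local_field_sum(n_atoms, k0a, i):
    """phi_i = sum_{j != i} exp(i k0 r_ij) / (k0 r_ij) for atom i of a
    1D chain with dimensionless spacing k0*a (k0a << 1: spacing << wavelength)."""
    total = 0j
    for j in range(n_atoms):
        if j != i:
            x = k0a * abs(i - j)   # k0 * r_ij
            total += cmath.exp(1j * x) / x
    return total

phi = local_field_sum(n_atoms=5, k0a=0.1, i=2)   # central atom of the chain
# phi.real enters g' (cooperative Lamb shift), phi.imag enters g,
# both up to the common factor k0^3 d0^2 / gamma2
```

For close spacing both parts are positive and the real part is large, reflecting the strong near-field dipole coupling.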
Introduce the local--field shift \begin{equation} \Delta_L \equiv \gamma_2g's \; , \qquad g' \equiv \frac{k_0^3d_0^2}{\gamma_2} \sum_{j(\neq i)}^N \frac{\cos(k_0r_{ij})}{k_0r_{ij}} \; , \end{equation} also called the cooperative Lamb shift [17], and the effective atom--atom coupling parameter \begin{equation} g \equiv \frac{k_0^3d_0^2}{\gamma_2}\sum_{j(\neq i)}^N \frac{\sin(k_0r_{ij})}{k_0r_{ij}}\; . \end{equation} These quantities enter into the definitions of the effective radiation frequency and radiation width, \begin{equation} \Omega \equiv \omega_0 +\Delta_L \; , \qquad \Gamma \equiv\gamma_2( 1 - gs ) \; , \end{equation} respectively. Involving these notations and keeping in mind that $$ u_i = u(\stackrel{\rightarrow}{r}_i,t) \; , \qquad s_i = s(\stackrel{\rightarrow}{r}_i,t) \; , \qquad \stackrel{\rightarrow}{D}_i = \stackrel{\rightarrow}{D}(\stackrel{\rightarrow}{r}_i,t) , \qquad \varphi_i = \varphi(\stackrel{\rightarrow}{r}_i) \; , $$ we transform Eqs. (23) and (24) to the form \begin{equation} \frac{du}{dt} = - (i\Omega +\Gamma) u + s\stackrel{\rightarrow}{d}^*\cdot \stackrel{\rightarrow}{D} + ik_0^3 s\varphi^*u^*(\stackrel{\rightarrow}{d}^*)^2 \end{equation} and $$ \frac{ds}{dt} = - 4\gamma_2 g |u|^2 -\gamma_1 (s -\zeta ) - 2 (u^*\stackrel{\rightarrow}{d}^* + u\stackrel{\rightarrow}{d} )\cdot \stackrel{\rightarrow}{D} - $$ \begin{equation} - i2k_0^3 \left [ \varphi^*(u^*\stackrel{\rightarrow}{d}^*)^2 -\varphi(u\stackrel{\rightarrow}{d})^2 \right ] \; . \end{equation} Since $u$ is a complex variable, we have to supplement Eqs. (28) and (29) by an equation for either $u^*$ or $|u|^2$. For instance, for $|u|^2$ we get $$ \frac{d|u|^2}{dt} = - 2\Gamma |u|^2 + s (u^*\stackrel{\rightarrow}{d}^* + u\stackrel{\rightarrow}{d})\cdot \stackrel{\rightarrow}{D} + $$ \begin{equation} + ik_0^3 s\left [ \varphi^*(u^*\stackrel{\rightarrow}{d}^*)^2 -\varphi(u\stackrel{\rightarrow}{d})^2\right ] \; . 
\end{equation} Equations (29) and (30) give for the Bloch vector the equation $$ \frac{d}{dt} \left ( s^2 + 4|u|^2\right ) = - 8\gamma_2|u|^2 - 2\gamma_1 ( s -\zeta )s \; . $$ The derived equations (28), (29), and (30) are the basic equations describing nonequilibrium processes in the system of resonance atoms interacting with polariton field. \section{Scale Separation} Equations (28), (29), and (30) can be solved using the scale separation approach [18--20]. To start with, we need to define what small parameters we have. The standard small parameters are related to the relaxation parameters $\gamma_1$ and $\gamma_2$, for which \begin{equation} \frac{\gamma_1}{\omega_0} \ll 1 \; , \qquad \frac{\gamma_2}{\omega_0}\ll 1\; . \end{equation} It is reasonable to suppose that \begin{equation} \left | \frac{\Delta_L}{\Omega}\right | < 1 \; , \qquad \left | \frac{\Gamma}{\Omega}\right | < 1 \; , \end{equation} although $\Gamma$ can become much larger than $\gamma_2$. Assume also that \begin{equation} \left | \frac{\stackrel{\rightarrow}{d}\cdot\stackrel{\rightarrow}{D}}{\Omega}\right | < 1 \; , \end{equation} which means that the interaction of atoms with matter does not change drastically the properties of the atoms. Under the validity of small parameters (31) to (33), the variable $u$ has to be considered as fast, compared to $s$ and $|u|^2$ that are to be treated as slow. Accepting the variables $s$ and $|u|^2$ as quasi--integrals of motion, we keep them fixed when solving Eq. (28). 
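Before solving Eq. (28), the internal consistency of the basic equations can be verified numerically in a scalar--dipole sketch (all numerical values below are arbitrary test values, not physical parameters): evaluating the right--hand sides of (29) and (30) at a single point and forming $2s\,ds/dt + 4\,d|u|^2/dt$ must reproduce the Bloch--vector balance $-8\gamma_2|u|^2 - 2\gamma_1(s-\zeta)s$.

```python
# Scalar (one-component) stand-in for the vector quantities; D taken real,
# as appropriate for the expectation value of a Hermitian field.
gamma1, gamma2, zeta, k0, g = 0.1, 1.0, -0.3, 1.2, 0.7
s = 0.4
u = complex(0.3, -0.5)
d = complex(0.8, 0.2)     # transition dipole
phi = complex(1.5, 0.9)   # local-field factor
D = 0.6                   # effective electric induction
Gamma = gamma2 * (1 - g * s)

R = (u.conjugate() * d.conjugate() + u * d) * D          # (u* d* + u d) . D
X = phi.conjugate() * (u.conjugate() * d.conjugate())**2 - phi * (u * d)**2

ds_dt  = (-4 * gamma2 * g * abs(u)**2 - gamma1 * (s - zeta)
          - 2 * R - 2j * k0**3 * X)                      # eq. (29)
du2_dt = -2 * Gamma * abs(u)**2 + s * R + 1j * k0**3 * s * X   # eq. (30)

lhs = 2 * s * ds_dt + 4 * du2_dt
rhs = -8 * gamma2 * abs(u)**2 - 2 * gamma1 * (s - zeta) * s
```

The field and local-field terms cancel identically between (29) and (30), which is why only the relaxation terms survive in the Bloch-vector equation.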
Then the solution for the fast variable is $$ u(t) = u_0 G_1(t) + u_0^* G_2(t) + $$ \begin{equation} + s\stackrel{\rightarrow}{d}^* \int_0^t\left [ G_1(t-\tau) + G_2(t-\tau)\right ] \stackrel{\rightarrow}{D}(\tau)d\tau \; , \end{equation} where $u_0\equiv u(0)$ and the Green functions are $$ G_1(t) =\left (\frac{\lambda_1 -a^*}{\lambda_1-\lambda_2}\right ) e^{\lambda_1t} - \left ( \frac{\lambda_2 -a^*}{\lambda_1-\lambda_2}\right ) e^{\lambda_2t} \; , $$ $$ G_2(t) =\frac{b}{\lambda_1-\lambda_2}\left ( e^{\lambda_1t} - e^{\lambda_2t}\right )\; , $$ $$ a = - (i\Omega +\Gamma ) \; , \qquad b = -i k_0^3s\varphi^*(\stackrel{\rightarrow}{d}^*)^2 \; , $$ $$ \lambda_{1,2} =\frac{1}{2} \left [ a + a^*\pm \sqrt{(a-a^*)^2 + 4|b|^2}\right ] \; . $$ Taking into account the existence of small parameters, we may write $$ \lambda_{1,2} = \pm i\Omega -\Gamma \; , $$ and (34) can be simplified to \begin{equation} u(t) = e^{-(i\Omega+\Gamma)t}\left [ u_0 + s\stackrel{\rightarrow}{d}^*\int_0^t e^{(i\Omega+\Gamma)\tau}\stackrel{\rightarrow}{D}(\tau)d\tau \right ] \; . \end{equation} The found fast variable (35) is to be substituted into the equations (29) and (30) for the slow variables and the right--hand sides of these equations are to be averaged over time and over the degrees of freedom corresponding to collective excitations of matter [18--20]. Recall that the quantities in (22) were defined as the averages over atomic degrees of freedom. The double averaging, over time and over the matter degrees of freedom, for a function $F(t)$, depending on time and on the operators of collective excitations, is defined as \begin{equation} << F >> \equiv \lim_{\tau\rightarrow\infty} \frac{1}{\tau} \int_0^\tau < F(t)> dt \; , \end{equation} where the angle brackets imply the statistical averaging over the matter degrees of freedom. 
The use of the same angle brackets for the statistical averaging over the atomic and over the matter degrees of freedom should not cause confusion, since at this stage the atomic degrees of freedom no longer appear, having been averaged out earlier. Therefore, in the definition (36) and in what follows the statistical averaging always concerns only the matter degrees of freedom. Let us introduce the parameter \begin{equation} \alpha \equiv <<\left | e^{-\Gamma t}\int_0^t e^{(i\Omega +\Gamma)\tau}\stackrel{\rightarrow}{d}^*\cdot \stackrel{\rightarrow}{D}(\tau) d\tau\right |^2 >> \end{equation} characterizing the strength of interaction between the atoms and matter. Thus, the quantity (37) can be called the atom--medium coupling parameter. When substituting the fast variable (35) into the equations (29) and (30) for the slow variables and averaging, according to (36), the right--hand sides of the latter equations, we may notice the following useful property. The fast variable (35) can be written as the sum $u=u_1+u_2$ of the term $$ u_1 = u_0 \exp\left\{ - (i\Omega + \Gamma) t\right \} \; , $$ not depending on the field $\stackrel{\rightarrow}{D}$ of matter, and of the term $$ u_2 = e^{-(i\Omega+\Gamma)t} s\stackrel{\rightarrow}{d}^*\int_0^t e^{(i\Omega+\Gamma)\tau}\stackrel{\rightarrow}{D}(\tau)d\tau\; , $$ depending on the matter field. Define the function $$ w\equiv |u_1|^2 = |u_0|^2 e^{-2\Gamma t} \; . $$ By this definition, the function $w$ must satisfy an equation that follows from the equation for $|u|^2$ in which the matter field $\stackrel{\rightarrow}{D}$ is set to zero. For $|u|^2$ we have $$ |u|^2 = w + u_1^*u_2 + u_2^*u_1 + |u_2|^2 \; . $$ When averaging, according to (36), we take into account that $$ << u_1^*u_2 + u_2^*u_1 >> = 0 \; . $$ This is for two reasons. First, the terms $u_1$ and $u_2$ oscillate, in general, with different frequencies. Second, the term $u_2$ is a linear combination of operators of collective excitations in matter. 
In this way, averaging $|u|^2$ over fast variables, we get $$ |u|^2 = w + << |u_2|^2 >> \; , \qquad << |u_2|^2 >> =\alpha s^2 \; . $$ This consideration suggests that it is convenient to introduce the slow variable \begin{equation} w \equiv |u|^2 - \alpha s^2 \; , \end{equation} for which the evolution equation should have a form simpler than for $|u|^2$. Indeed, averaging the equations (29) and (30) for the slow variables, we obtain \begin{equation} \frac{ds}{dt} = - 4g\gamma_2 w - \gamma_1 (s-\zeta) \; , \end{equation} where the transformation (38) is used, and \begin{equation} \frac{dw}{dt} = - 2\gamma_2 ( 1 -gs ) w\; . \end{equation} From these two equations one can derive the single equation $$ \frac{d^2s}{dt^2} + ( 2 + \gamma - 2gs )\frac{ds}{dt} - 2\gamma g s^2 + 2\gamma( 1 + g\zeta ) s - 2\gamma\zeta = 0 \; , $$ in which $\gamma=\gamma_1/\gamma_2$ and time is measured in units of $\gamma_2^{-1}$. \section{Models of Matter} Before analysing equations (39) and (40), let us consider some examples defining the matter field $\stackrel{\rightarrow}{D}$ concretely. Suppose, first, that the matter consists of a set of random scatterers, such that \begin{equation} \stackrel{\rightarrow}{d}\cdot\stackrel{\rightarrow}{D} = \xi \; , \end{equation} where $\xi$ is a stochastic field defined by the averages \begin{equation} < \xi > = 0 \; , \qquad < |\xi|^2 > =\gamma^2 \; . \end{equation} Then the coupling parameter (37) is \begin{equation} \alpha =\frac{\gamma^2}{\Omega^2} \; . \end{equation} As another example, consider the matter modelled by white noise, when \begin{equation} \stackrel{\rightarrow}{d}\cdot\stackrel{\rightarrow}{D} =\xi(t) \; , \end{equation} where the white--noise stochastic variable $\xi(t)$ is defined by the averages \begin{equation} < \xi(t) > = 0\; , \qquad < \xi^*(t)\xi(t') > = 2\gamma\delta(t-t') \; , \end{equation} where the angle brackets mean a stochastic averaging. 
Then for the coupling parameter, we get \begin{equation} \alpha =\frac{\gamma}{\Gamma} \; . \end{equation} In the third example, we model the matter by an oscillator, so that \begin{equation} \stackrel{\rightarrow}{d}\cdot\stackrel{\rightarrow}{D} = \gamma\left ( b_\omega e^{-i\omega t} + b_\omega^\dagger e^{i\omega t} \right ) \; , \end{equation} where $b_\omega$ and $b_\omega^\dagger$ are the annihilation and creation operators satisfying the Bose statistics, and for which the statistical averaging gives \begin{equation} < b_\omega^\dagger b_\omega > = n_\omega \; , \qquad < b_\omega b_\omega^\dagger > = 1 + n_\omega \; , \end{equation} with $n_\omega$ being an effective weight of excitations of a frequency $\omega$. Then the coupling parameter (37) is \begin{equation} \alpha =|\gamma|^2\left [ \frac{n_\omega}{(\Omega-\omega)^2 +\Gamma^2} + \frac{1+n_\omega}{(\Omega+\omega)^2 +\Gamma^2}\right ] \; . \end{equation} The strongest coupling between the impurity atoms and matter happens at the resonance, when $\omega = \Omega$, and $\alpha\cong n_\omega|\gamma/\Gamma|^2$. 
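The resonance estimate for the oscillator model can be checked numerically (a sketch; the parameter values are arbitrary, chosen only so that $\Omega \gg \Gamma$, as for the semiconductors quoted earlier):

```python
def alpha_oscillator(gamma_c, Omega, omega, Gamma, n):
    """Atom-matter coupling parameter of the oscillator model (scalar form):
    the n-weighted resonant term plus the (1+n)-weighted antiresonant term."""
    return abs(gamma_c)**2 * (n / ((Omega - omega)**2 + Gamma**2)
                              + (1 + n) / ((Omega + omega)**2 + Gamma**2))

# illustrative values: Omega >> Gamma
Omega, Gamma, n, gamma_c = 1.0e5, 1.0, 3.0, 0.5
alpha_res = alpha_oscillator(gamma_c, Omega, Omega, Gamma, n)  # omega = Omega
alpha_approx = n * abs(gamma_c / Gamma)**2                     # n |gamma/Gamma|^2
```

At resonance the antiresonant term is suppressed by $(2\Omega)^{-2}$ and the full expression collapses to the quoted estimate; away from resonance the coupling drops sharply.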
Finally, we consider a more realistic situation when the effective electric induction of matter is defined by the relations (18), (17), and (8), so that \begin{equation} \stackrel{\rightarrow}{D}_i = \frac{ek_0}{mc} \sum_{j(\neq i)}^{N'} \frac{1}{r_{ij}} \stackrel{\rightarrow}{p}_j\left ( t -\frac{r_{ij}}{c}\right ) \; , \end{equation} with the momentum operator $$ \stackrel{\rightarrow}{p}_j(t) = - i\sum_{ks} \left ( \frac{m\omega_{ks}}{2N'}\right )^{1/2} \stackrel{\rightarrow}{e}_{ks}\times $$ \begin{equation} \times \left [ b_{ks}\exp\{ i(\stackrel{\rightarrow}{k}\cdot\stackrel{\rightarrow}{r}_j - \omega_{ks}t)\} - b_{ks}^\dagger\exp\{ - i(\stackrel{\rightarrow}{k}\cdot\stackrel{\rightarrow}{r}_j -\omega_{ks}t)\} \right ] \; , \end{equation} in which $\omega_{ks}=\omega_{-ks}$ is the spectrum of collective excitations; $\stackrel{\rightarrow}{k}$ being a wave vector; $s=1,2,3$, a polarization index; $\stackrel{\rightarrow}{e}_{ks}$ is a polarization vector; $N'$ is the number of lattice sites. The annihilation and creation operators of collective excitations satisfy the Bose statistics and have the following statistical averages \begin{equation} < b_{ks}^\dagger b_{k's'} > = n_{ks}\delta_{kk'}\delta_{ss'} \; , \qquad < b_{ks}b_{k's'} > = 0 \; . \end{equation} In this case, for the coupling parameter (37), we obtain \begin{equation} \alpha =\frac{k_0r_e}{2N'} \sum_{ks} f_{ks}\gamma_{ks}\omega_{ks} \left [ \frac{n_{ks}}{(\Omega-\omega_{ks})^2+\Gamma^2} + \frac{1+n_{ks}}{(\Omega+\omega_{ks})^2+\Gamma^2}\right ] \; , \end{equation} where \begin{equation} \gamma_{ks}\equiv k_0^3|\stackrel{\rightarrow}{d}\cdot\stackrel{\rightarrow}{e}_{ks}|^2 \; , \qquad r_e\equiv \frac{e^2}{mc^2} \; , \end{equation} and \begin{equation} f_{ks} \equiv\left |\sum_{j(\neq i)}^{N'} \frac{\exp\left\{ i \left (\stackrel{\rightarrow}{k}\cdot\stackrel{\rightarrow}{r}_j +\frac{\omega_{ks}}{c}r_{ij}\right ) \right\}} {k_0r_{ij}}\right |^2\; . 
\end{equation} Again, it is clear that the coupling parameter (53) is the most strongly influenced by resonance collective excitations with $\omega_{ks}\approx\Omega$. \section{Transient Regime} Consider the times shorter than $\gamma^{-1}_1$. Then the term containing $\gamma_1$ in (39) can be omitted. In this case, using the second relation from (27), we have \begin{equation} \frac{d\Gamma}{dt} = 4g^2\gamma_2^2 w\; , \qquad \frac{dw}{dt} = - 2\Gamma w \; . \end{equation} These two equations can be reduced to one, $$ \frac{d^2\Gamma}{dt^2} + 2\Gamma\frac{d\Gamma}{dt} = 0 \; , $$ integrating which we get $$ \frac{d\Gamma}{dt} + \Gamma^2 = \gamma_0^2 \; , $$ $\gamma_0$ being an integration constant. The last equation is a Riccati equation whose solution is \begin{equation} \Gamma = \gamma_0 {\rm tanh}\left (\frac{t-t_0}{\tau_0}\right ) \; , \qquad \gamma_0 \equiv\frac{1}{\tau_0} \; , \end{equation} where $t_0$ is another integration constant. Using relation (27) gives \begin{equation} s = -\frac{\gamma_0}{g\gamma_2}{\rm tanh}\left (\frac{t-t_0}{\tau_0}\right ) + \frac{1}{g} \; . \end{equation} The first equation in (56), together with (57), yields \begin{equation} w=\frac{\gamma_0^2}{4g^2\gamma_2^2}{\rm sech}^2\left ( \frac{t-t_0}{\tau_0}\right ) \; . \end{equation} And from (38) we find \begin{equation} |u|^2 =\frac{\gamma_0^2}{4g^2\gamma_2^2}{\rm sech}^2\left (\frac{t-t_0}{\tau_0} \right ) + \alpha s^2 \; . \end{equation} The integration constants $\gamma_0$ and $t_0$ are to be defined from the initial conditions \begin{equation} u(0) = u_0 \; , \qquad s(0) = s_0 \; . 
\end{equation} From the latter we obtain for the effective radiation width $\gamma_0$ the expression \begin{equation} \gamma_0^2 = \Gamma_0^2 + 4g^2\gamma_2^2 \left ( |u_0|^2 - \alpha_0 s_0^2\right ) \; , \end{equation} in which $\alpha_0$ is $\alpha$ at $t=0$, when $s=s_0$, $$ \Gamma_0\equiv \gamma_2 ( 1 - gs_0 )\; , $$ and the delay time \begin{equation} t_0 =\frac{\tau_0}{2}\ln \left | \frac{\gamma_0 - \Gamma_0}{\gamma_0 +\Gamma_0} \right | \; . \end{equation} The radiation width can be written in the form \begin{equation} \gamma_0 = 2|gs_0|\gamma_2 ( \alpha_c - \alpha_0 )^{1/2} \; , \end{equation} where the critical atom--matter coupling parameter \begin{equation} \alpha_c \equiv \frac{(1-gs_0)^2 + 4g^2|u_0|^2}{4g^2s_0^2} \end{equation} is introduced. Expression (64) is a direct consequence of (62) for all $g$ and $\alpha_0$. It is necessary to stress that the atom--matter coupling parameter (37) cannot surpass the critical value (65). If this happened, then the radiation width (64) would become imaginary and, instead of (57), we would have $$ \Gamma = - |\gamma_0|{\rm tan}\left (\frac{t-t_0}{|\tau_0|}\right ) \; , \qquad |\tau_0| \equiv\frac{1}{|\gamma_0|} \; , $$ $$ |\gamma_0| = 2\gamma_2 |gs_0|\sqrt{\alpha_0 -\alpha_c} \qquad (\alpha_0 > \alpha_c)\; . $$ The delay time (63) would be $$ t_0 = |\tau_0|{\rm arctan}\left ( \frac{1-gs_0}{2|gs_0|\sqrt{\alpha_0-\alpha_c}}\right ) \; . $$ And for the solutions (58) and (59) we would get $$ s = 2s_0{\rm sgn}(gs_0)\sqrt{\alpha_0 -\alpha_c}\; {\rm tan}\left ( \frac{t-t_0}{|\tau_0|}\right ) +\frac{1}{g} \; , $$ $$ w = - s_0^2 (\alpha_0 - \alpha_c){\rm sec}^2\left ( \frac{t-t_0}{|\tau_0|}\right ) \; . $$ The effective width $\Gamma$, as well as the solutions $s$ and $w$, become divergent at $t=t_n$, $$ t_n = t_0 +\frac{\pi}{2} ( 1 + 2n) |\tau_0| \qquad (n=0,1,2\ldots) \; . 
$$ This behaviour is certainly unphysical, which means that some of the conditions, under which the method of scale separation has been used, are probably no longer valid. This is really the case since, when $\Gamma$ and $s$ diverge, conditions (32) do not hold true. Consequently, when $\alpha_0>\alpha_c$, we cannot separate the solutions into fast and slow ones, all of them oscillating equally fast. At the same time, the existence of slow solutions is a characteristic feature of developed coherence. Thus, the absence of slow solutions suggests that coherence cannot emerge in the system. From the physical point of view, all this is quite understandable. There should be a threshold for the strength of the interaction of atoms with matter, beyond which such strong interactions destroy the correlations between atoms, thus destroying their coherence. In this way, the inequality \begin{equation} \alpha_0 < \alpha_c \end{equation} is a necessary condition for the applicability of the scale separation approach and, at the same time, a condition for the possibility of coherent radiation of doped atoms. The maximal level of coherence develops at the time $t=t_0$ when \begin{equation} s(t_0) =\frac{1}{g}\; , \qquad w(t_0) = s_0^2 (\alpha_c -\alpha_0 )\; , \qquad |u(t_0)|^2 = (\alpha_c -\alpha_0) s_0^2 + \frac{\alpha}{g^2} \; . \end{equation} For the times much longer than $t_0$, Eqs. (58) to (60) give $$ s\simeq \frac{1}{g} \left ( 1 -\frac{\gamma_0}{\gamma_2}\right ) \qquad ( t \gg t_0) \; , $$ \begin{equation} w \simeq \frac{\gamma_0^2}{g^2\gamma_2^2}\exp\left ( -2\gamma_0 t\right ) \; , \end{equation} $$ |u|^2 \simeq \frac{\gamma_0^2}{g^2\gamma_2^2}\exp\left ( -2\gamma_0 t\right ) + \frac{\alpha}{g^2}\left ( 1 -\frac{\gamma_0}{\gamma_2}\right )^2 \; . $$ However, the asymptotic behaviour given by (68) is valid only for $t\ll T_1$. Consider the case when both the atom--matter and atom--atom coupling parameters are small, i.e.
\begin{equation} \alpha_0 \ll \alpha_c \; , \qquad |g| \ll 1 \; . \end{equation} Using the first of the inequalities in (69), we have from (64) $$ \gamma_0 \simeq \gamma_2\left [ ( 1 - gs_0 )^2 + 4g^2 |u_0|^2 \right ]^{1/2} \left ( 1 -\frac{\alpha_0}{2\alpha_c}\right ) \; . $$ The latter expression, with the second inequality in (69), reduces to $$ \gamma_0 \simeq \gamma_2 (1 - g s_0)\left ( 1 -\frac{\alpha_0}{2\alpha_c}\right )\; . $$ The critical parameter (65) becomes \begin{equation} \alpha_c \simeq ( 4g^2 s_0^2)^{-1} \qquad ( g\ll 1 ) \; . \end{equation} Employing this, we find \begin{equation} \gamma_0 \simeq \gamma_2 ( 1 -gs_0 - 2 \alpha_0 g^2 s_0^2 ) \; , \end{equation} valid for small coupling parameters as in (69). For the delay time (63), we get \begin{equation} t_0 \simeq \frac{1+gs_0}{2\gamma_2}\ln\left | \alpha_0 g^2 s_0^2\right |\; , \end{equation} which tends to $-\infty$ if either $\alpha_0$ or $g$ tends to zero. This implies that, under conditions (69), an essential level of coherence does not evolve. Let us analyse the case when \begin{equation} \alpha_0 \ll \alpha_c \;, \qquad |g| \gg 1\; . \end{equation} Then the critical parameter (65) is \begin{equation} \alpha_c \simeq \frac{1}{4s_0^2} \left ( s_0^2 + 4|u_0|^2 - 2\frac{s_0}{g} \right ) \; . \end{equation} The radiation width (64) becomes \begin{equation} \gamma_0 \simeq \frac{\gamma_2|g|}{\sqrt{s_0^2 + 4|u_0|^2}}\left ( s_0^2 + 4|u_0|^2 - 2\alpha_0 s_0^2 - \frac{s_0}{g} \right ) \; , \end{equation} with the corresponding radiation time \begin{equation} \tau_0 \simeq \frac{T_2}{|g|\sqrt{s_0^2 + 4|u_0|^2}}\; . \end{equation} For the delay time (63), we find \begin{equation} t_0 \simeq \frac{\tau_0}{2}\ln\left | \frac{|g|(s_0^2 + 4|u_0|^2-2\alpha_0s_0^2) + gs_0\sqrt{s_0^2+4|u_0|^2}} {|g|(s_0^2 + 4|u_0|^2-2\alpha_0s_0^2) - gs_0\sqrt{s_0^2+4|u_0|^2}}\right |\; .
\end{equation} If the process develops from an initially incoherent state, when $u_0=0$, then the radiation width (75) is \begin{equation} \gamma_0 \simeq \gamma_2 |gs_0| \left ( 1 - 2\alpha_0 -\frac{1}{gs_0}\right ) \qquad (u_0 = 0 )\; . \end{equation} Therefore, the delay time (77) becomes \begin{equation} t_0 \simeq\frac{T_2}{2|gs_0|}\ln\left | \frac{1-2\alpha_0 +\varepsilon}{1-2\alpha_0-\varepsilon}\right | \; , \end{equation} where $$ \varepsilon\equiv{\rm sgn}(gs_0) =\pm 1 \; . $$ Since, for $u_0=0$ and $|g|\gg 1$, the critical parameter (74) is $$ \alpha_c \simeq \frac{1}{4} \qquad (|g|\gg 1, \; u_0=0) \; , $$ the inequality $\alpha_0\ll\alpha_c$ implies $\alpha_0\ll 1$. Hence, we may simplify (79) as \begin{equation} t_0 \simeq \frac{T_2}{2gs_0}\left |\ln\alpha_0\right | \; . \end{equation} After the time (80), the population difference tends to \begin{equation} s\simeq -\varepsilon s_0( 1 -2\alpha_0) +\frac{1+\varepsilon}{g} \qquad (t\gg t_0) \; . \end{equation} If $g>0$, then $\varepsilon s_0=|s_0|$, while for $g<0$, one has $\varepsilon s_0=-|s_0|$. Combining both these cases, we get $\varepsilon s_0 ={\rm sgn}(g)|s_0|$. Assume now that the atom--matter coupling parameter is close to its critical value (65), but the atom--atom coupling is arbitrary, \begin{equation} \frac{|\alpha_0-\alpha_c|}{\alpha_c} \ll 1 \qquad (\forall g) \; . \end{equation} Then the radiation width (64) tends to zero, as $\alpha_0\rightarrow\alpha_c$, and, respectively, the radiation time $\tau_0\equiv\gamma_0^{-1}$ tends to infinity. For the delay time (63), we find \begin{equation} t_0\simeq \frac{2|gs_0|T_2}{(1-gs_0)^2} (\alpha_c -\alpha_0)^{1/2} \; , \end{equation} while the radiation time is \begin{equation} \tau_0 =\frac{T_2}{2|gs_0|}(\alpha_c -\alpha_0)^{-1/2} \; .
\end{equation} When $\alpha_0\rightarrow\alpha_c$, then $t_0\rightarrow 0$, and for the functions (58) and (59) we have $$ s\simeq \frac{1}{g} - 2|s_0|{\rm sgn}(g)\sqrt{\alpha_c-\alpha_0}\left ( 1 - 2e^{-2\gamma_0t}\right ) \; , $$ \begin{equation} w\simeq 4s_0^2 (\alpha_c - \alpha_0 ) e^{-2\gamma_0t} \qquad (t > t_0) \; . \end{equation} There is a suppression of self--organized coherence in the system of atoms, their radiation being almost completely due to the pumping by matter excitations $$ s \approx \frac{1}{g}\; , \qquad w\approx 0 \; , \qquad |u|^2 \approx \frac{\alpha}{g^2} \; . $$ Nevertheless, coherent relaxation may still happen, provided that $\tau_0\ll T_2$, that is, $$ |gs_0|(\alpha_c -\alpha_0)^{1/2} \gg 1 \; , $$ which corresponds to superradiant emission. Note that, in analysing the properties of the solutions to equations (39) and (40), we talk about radiation processes keeping in mind the following. The total radiation intensity of atoms can be approximately defined in the usual way as \begin{equation} I(t) = - N\hbar \omega_0\frac{ds}{dt} \; . \end{equation} For a more accurate definition of radiation intensity see e.g. Refs.[21,22]. From (86), using equation (39), we find \begin{equation} I(t) = I_{coh}(t) + I_{inc}(t) \; , \end{equation} where the first term \begin{equation} I_{coh}(t) = 4Ng\hbar\omega_0\gamma_2 w \end{equation} has the meaning of the coherent radiation intensity, and the second, \begin{equation} I_{inc}(t) = N\hbar\omega_0\gamma_1 (s -\zeta ) \; , \end{equation} corresponds to the intensity of incoherent radiation. The latter is always proportional to the number of atoms $N$, while the radiation intensity (88) is proportional to $Ng$. For a concentrated sample, whose linear size is much smaller than the radiation wavelength, we have $g\approx N$, and the radiation intensity (88) becomes proportional to $N^2$, which is typical of superradiance.
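As a quick consistency check, the closed-form transient solutions (57) and (59) can be verified against Eqs. (56) numerically. The following Python sketch (all parameter values are purely illustrative) compares finite-difference derivatives of the analytic expressions with the right-hand sides of (56):

```python
import math

# Illustrative parameters (not taken from any specific material)
g, gamma2, gamma0, t0 = 2.0, 0.5, 1.0, 0.3
tau0 = 1.0 / gamma0

def Gamma(t):
    # Eq. (57): effective radiation width
    return gamma0 * math.tanh((t - t0) / tau0)

def w(t):
    # Eq. (59): coherence function
    return (gamma0**2 / (4 * g**2 * gamma2**2)) / math.cosh((t - t0) / tau0)**2

# Check Eqs. (56): dGamma/dt = 4 g^2 gamma2^2 w  and  dw/dt = -2 Gamma w
h = 1e-6
for t in (0.0, 0.5, 1.5):
    dGamma = (Gamma(t + h) - Gamma(t - h)) / (2 * h)
    dw = (w(t + h) - w(t - h)) / (2 * h)
    assert abs(dGamma - 4 * g**2 * gamma2**2 * w(t)) < 1e-6
    assert abs(dw + 2 * Gamma(t) * w(t)) < 1e-6
```

At $t=t_0$ the width $\Gamma$ vanishes while $w$ reaches its maximum $\gamma_0^2/(4g^2\gamma_2^2)$, consistent with the maximal coherence at the delay time.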
In this way, the solutions $s$ and $w$ define the temporal behaviour of the incoherent radiation intensity (89) and of the coherent radiation intensity (88), respectively. For instance, using the solution $w$ given by (59), with the radiation width (64), we obtain the intensity of coherent radiation \begin{equation} I_{coh}(t) = 4Ng\hbar\omega_0\gamma_2 s_0^2 (\alpha_c -\alpha_0){\rm sech}^2\left ( \frac{t-t_0}{\tau_0}\right ) \; . \end{equation} The latter shows that, if $\alpha_0\rightarrow\alpha_c$, then $I_{coh}\rightarrow 0$. \section{Close--to--Stationary Regime} In the previous section the transient regime is considered, corresponding to times $t\ll T_1$. For the times comparable to or larger than $T_1$, we can no longer neglect the term with $\gamma_1$ in equation (39). In the intermediate stage, when $t\sim T_1$, an exact solution of Eqs.(39) and (40) is not available. Here we have to resort to numerical calculations, which will be the subject of a separate paper. But it is possible to give an analysis for asymptotically large times, when $t\gg T_1$. The following analysis assumes that $g\neq 0$, since, if $g=0$, the solutions to Eqs.(39) and (40) are $$ s =\zeta + ( s_0 -\zeta ) e^{-\gamma_1 t} \; , $$ \begin{equation} w = \left ( |u_0|^2 - \alpha s_0^2 \right ) e^{-2\gamma_2 t} \qquad (g=0) \; , \end{equation} which describe the relaxation process of a single atom. In such a case, if there is the localization of light, then $\zeta=s_0$, and (91) gives $s=s_0$. If $N$ impurity atoms are doped into the matter, then $g\neq 0$. The resonance dipole--dipole interactions of a pair of atoms with a transition frequency inside the photon gap have been studied in several works [7,23,24]. The conclusion of these studies is that two closely spaced atoms, with transition frequencies in the gap, interact with each other by means of virtual photon exchanges much in the same way as atoms in vacuum.
That is, if the atoms are separated from each other by a spacing much larger than the radiation wavelength, then each of them can be considered as a single atom. If the transition frequency of such an atom is inside the gap, then the phenomenon of light localization occurs. However, if the atoms are close to each other, with a spacing much smaller than the radiation wavelength, then they practically do not experience the existence of the gap [7,23,24]. Consider the close--to--stationary regime, when $t\gg T_1$. Equations (39) and (40) can be written as \begin{equation} \frac{ds}{dt} = V_1 \; , \qquad \frac{dw}{dt} = V_2 \; , \end{equation} with the right--hand sides $$ V_1 = - 4g\gamma_2 w - \gamma_1 ( s - \zeta ) \; , $$ \begin{equation} V_2 = -2\gamma_2 ( 1 -gs ) w \; . \end{equation} Stationary points, or fixed points, of Eqs.(92) and (93) are given by the condition $V_1=V_2=0$. This yields two stationary points: \begin{equation} s_1^* =\zeta\; , \qquad w_1^* = 0 \end{equation} and \begin{equation} s_2^*=\frac{1}{g} \; , \qquad w_2^* = -\frac{\gamma_1(1-g\zeta)}{4\gamma_2g^2}\; . \end{equation} The stability of these fixed points can be defined by the Lyapunov analysis. To this end, we need to find the eigenvalues of the Jacobian matrix \begin{eqnarray} \hat J=\left [ \begin{array}{cc} \frac{\partial V_1}{\partial s} & \frac{\partial V_1}{\partial w} \\ \\ \frac{\partial V_2}{\partial s} & \frac{\partial V_2}{\partial w} \end{array} \right ] \; . \end{eqnarray} These eigenvalues are given by the expression \begin{equation} \lambda^\pm = -\frac{1}{2}\left \{ \gamma_1 +2\gamma_2(1 - gs) \pm \left [ \left ( \gamma_1 - 2\gamma_2(1-gs)\right )^2 - 32 \gamma_2^2 g^2 w \right ]^{1/2}\right \} \; . \end{equation} Substituting here the values corresponding to the fixed points yields the Lyapunov exponents.
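As a cross-check, the general expression (97) can be compared with a direct numerical evaluation of the eigenvalues of the $2\times 2$ Jacobian (96) built from the right-hand sides (93). A small Python sketch (parameter values and sample points are illustrative):

```python
import cmath

gamma1, gamma2, g = 0.1, 1.0, 2.0   # illustrative values

def eig_formula(s, w):
    # Eq. (97) for the eigenvalues of the Jacobian (96)
    root = cmath.sqrt((gamma1 - 2 * gamma2 * (1 - g * s))**2
                      - 32 * gamma2**2 * g**2 * w)
    base = gamma1 + 2 * gamma2 * (1 - g * s)
    return (-(base + root) / 2, -(base - root) / 2)

def eig_direct(s, w):
    # Eigenvalues computed directly from the 2x2 Jacobian of V1, V2 in (93)
    a, b = -gamma1, -4 * g * gamma2
    c, d = 2 * gamma2 * g * w, -2 * gamma2 * (1 - g * s)
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return ((tr + disc) / 2, (tr - disc) / 2)

for s, w in [(0.3, 0.0), (0.5, -0.0025), (0.9, 0.01)]:
    lf, ld = eig_formula(s, w), eig_direct(s, w)
    assert all(min(abs(x - y) for y in ld) < 1e-9 for x in lf)
```

Both routes give the same pair of exponents, as they must, since (97) is just the characteristic polynomial of (96) solved in closed form.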
For the stationary point (94), we have \begin{equation} \lambda_1^+ = -\gamma_1 \; , \qquad \lambda_1^- = -2\gamma_2(1 -g\zeta)\; , \end{equation} and for the stationary point (95), we find \begin{equation} \lambda_2^\pm = -\frac{\gamma_1}{2}\left\{ 1 \pm \left [ 1 + 8\frac{\gamma_2}{\gamma_1}(1 -g\zeta)\right ]^{1/2}\right \} \; . \end{equation} The analysis of the Lyapunov exponents (98) and (99) shows that if \begin{equation} g\zeta < 1 \; , \end{equation} then the fixed point (94) is a stable node, and the fixed point (95) is a saddle point. When \begin{equation} g\zeta = 1 \; , \end{equation} both fixed points merge, becoming neutral, since $\lambda_1^-=\lambda_2^-=0$. In this case, the system of equations (39) and (40) is structurally unstable. Equality (101) defines a bifurcation point. For the interval \begin{equation} 1 < g\zeta \leq 1 +\frac{\gamma_1}{8\gamma_2}\; , \end{equation} the fixed point (94) is a saddle point, while point (95) is a stable node. For all $g\zeta > 1$, the point (94) is a saddle point. If \begin{equation} g\zeta > 1 + \frac{\gamma_1}{8\gamma_2} \; , \end{equation} the stationary point (95) becomes a stable focus, since the Lyapunov exponents (99) take the form \begin{equation} \lambda_2^\pm = -\frac{\gamma_1}{2} \mp i\omega_\infty \; , \end{equation} where $$ \omega_\infty \equiv \frac{\gamma_1}{2}\left [ \frac{8\gamma_2}{\gamma_1} (g\zeta -1 ) - 1 \right ]^{1/2} \; . $$ Suppose that for a single atom there occurs the localization of light, so that $\zeta=s_0$. If many resonant atoms are doped into matter, but their interactions through the polariton exchange are not strong enough, so that $gs_0<1$, then the light localization prevails. This means that the light remains confined in the vicinity of the atoms. The confinement of light is demonstrated by the fact that the stationary point (94) is a stable node with $s_1^*=s_0$.
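This classification can be illustrated by integrating Eqs. (92) and (93) numerically with $\zeta=s_0$: for $gs_0<1$ the trajectory relaxes to the point (94), while for $gs_0>1$ it approaches the point (95). A minimal fourth-order Runge--Kutta sketch (all parameter values are illustrative):

```python
def integrate(g, gamma1, gamma2, zeta, s, w, dt=0.01, T=80.0):
    """RK4 integration of Eqs. (92), (93)."""
    def rhs(s, w):
        return (-4 * g * gamma2 * w - gamma1 * (s - zeta),   # V1
                -2 * gamma2 * (1 - g * s) * w)               # V2
    for _ in range(int(T / dt)):
        k1 = rhs(s, w)
        k2 = rhs(s + dt / 2 * k1[0], w + dt / 2 * k1[1])
        k3 = rhs(s + dt / 2 * k2[0], w + dt / 2 * k2[1])
        k4 = rhs(s + dt * k3[0], w + dt * k3[1])
        s += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        w += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return s, w

gamma1, gamma2, s0 = 0.5, 1.0, 0.8     # illustrative values

# Weak coupling, g*s0 = 0.4 < 1: the trajectory relaxes to (94), s -> s0, w -> 0
s, w = integrate(0.5, gamma1, gamma2, s0, s0, 0.02)
assert abs(s - s0) < 1e-3 and abs(w) < 1e-6

# Strong coupling, g*s0 = 1.2 > 1: the trajectory approaches (95), s -> 1/g
g = 1.5
s, w = integrate(g, gamma1, gamma2, s0, s0, 0.02)
assert abs(s - 1.0 / g) < 1e-3
assert abs(w + gamma1 * (1 - g * s0) / (4 * gamma2 * g**2)) < 1e-3
```

With the strong-coupling values above, $gs_0=1.2 > 1+\gamma_1/(8\gamma_2)$, so the approach to the point (95) is an inward spiral, in line with the stable-focus exponents (104).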
But if the resonant interaction between the atoms is sufficiently strong, so that \begin{equation} gs_0 > 1 \; , \end{equation} the deconfinement of light happens. Then the fixed point (95) becomes stable, while the point (94) loses its stability. The deconfinement of light is not complete, since $s_2^*=1/g < s_0$, but partial. The portion of light that remains confined decreases with increasing $g$. The qualitative change of the asymptotic behaviour of solutions to a system of differential equations is called, in the theory of dynamical systems, a dynamical phase transition. In our case, this happens if $g\zeta = 1$. The equality $gs_0=1$ separates the regions where light is localized ($gs_0<1$) and where it is deconfined ($gs_0>1$). Therefore, the dynamical phase transition occurring at $gs_0=1$ corresponds to a transition that may be called the {\it deconfinement of light} or {\it photon deconfinement}. When the resonant interaction between atoms is so strong that inequality (103) holds true, then the stable stationary point (95) is a focus. This means that the solutions to the equations (39) and (40) display an oscillatory regime of motion when approaching the stationary point (95). Such an oscillatory motion is similar to that found for a concentrated system with the resonant frequency near the edge of a photonic band gap [5] and to that for two atoms with transition frequencies inside or slightly outside a photonic band gap [24]. \section{Coupling Parameters} There are several characteristic quantities defining the behaviour of the system. These are the initial conditions $s_0\equiv s(0)$ and $u_0\equiv u(0)$ and the coupling parameters $g,\; g'$, and $\alpha$. Below we study the typical values of the latter. Recall that the coupling parameters $g'$, defined in (25), and $g$, given in (26), have appeared in the evolution equations when treating the retardation effects in the quasirelativistic approximation (21).
Without the latter approximation, we should deal with integral--type equations [25]. Thus, the atom--atom coupling parameters $g'$ and $g$ describe the retardation or local--field effects. The values of these parameters essentially depend on the shape of the sample and on the spacing between atoms [17]. Accepting the equality $\gamma_1=2k_0^3d_0^2/3$, we have from (25) and (26) $$ g=\frac{3\gamma_1}{2\gamma_2}\sum_{j(\neq i)}^N \frac{\sin(k_0r_{ij})}{k_0r_{ij}}\; , \qquad g' =\frac{3\gamma_1}{2\gamma_2} \sum_{j(\neq i)}^N\frac{\cos(k_0r_{ij})}{k_0r_{ij}}\; . $$ If the radiation wavelength $\lambda=2\pi/k_0$ is much smaller than the mean spacing, $a$, between the atoms, then the sums in $g$ and $g'$ can be of any sign but with absolute values less than unity. Since usually $\gamma_1\ll\gamma_2$, the absolute values $|g|$ and $|g'|$ are then small. If $|g|\ll 1$ and $|g'|\ll 1$, the impurity atoms almost do not interact with each other and their behaviour is practically the same as that of a collection of single atoms. In the opposite case, when $\lambda\gg a$, the sums in $g$ and $g'$ can be estimated with a good approximation [17] as $$ \sum_{j(\neq i)}^N \frac{\sin(k_0r_{ij})}{k_0r_{ij}}\approx \sum_{j(\neq i)}^N \frac{\cos(k_0r_{ij})}{k_0r_{ij}}\approx \rho\lambda^3\; , $$ where $\rho\equiv N/V$ is the density of the doped atoms and it is assumed that $\lambda$ is less than the linear sizes of the sample in all directions. Then we have $$ g\approx g' \approx \frac{3\gamma_1}{2\gamma_2}\rho\lambda^3 \; . $$ The value of the atom--matter coupling parameter (37) essentially depends on the peculiarity of the atom--matter interaction. As the models of Section 4 show, one should expect that $\alpha\ll 1$. If the transition frequency of the doped atoms lies outside the polariton band gap, then the atom--matter resonance is possible, when $\omega_{ks}\sim\omega_0$. Moving the atomic frequency into the gap makes such a resonance more and more difficult.
Far inside the gap, where there are no elementary excitations of matter, this resonance becomes impossible. It follows from (53) that the relation between the atom--matter coupling parameter, $\alpha_{out}$, corresponding to the case when the atomic frequency is outside the gap, and the parameter $\alpha_{ins}$, when the frequency is far inside the gap, is, roughly speaking, $$ \frac{\alpha_{out}}{\alpha_{ins}}\sim \frac{\Omega^2}{\Gamma^2}\; , $$ for a sufficiently large polariton band gap. The decrease of $\alpha$ leads, as is clear from (80), to the increase of the delay time $t_0$. One can also notice that the coupling parameters $g$ and $\alpha$ are not independent, but $\alpha$ depends on $g$. This dependence, for $g\gg 1$, is approximately as $\alpha\sim g^{-2}$. Therefore, strong atomic interactions diminish $\alpha$, thus increasing the delay time (80). During the interval $0\leq t < t_0$, there is a temporal localization of light even for rather large parameters $g$, such that $g\zeta > 1$, but then the process of photon deconfinement starts. If $t_0$ becomes comparable with $T_1$, one cannot omit the term containing $\gamma_1$ in Eqs. (39) and (40). Then one should resort to a numerical solution of these equations, which will be considered in a separate paper. \vspace{5mm} {\bf Acknowledgement} \vspace{2mm} I am grateful to M.R. Singh for useful discussions. Financial support from the University of Western Ontario, Canada, is appreciated. \newpage
\section{Introduction} Over the past few years, training neural machine translation (NMT) models from scratch has become excessively burdensome as the training data size and model capacity continue to increase~\cite{ghorbani2021scaling,siddhant2022towards,wang2022deepnet}. Therefore, making an existing NMT model continually learn knowledge of a new domain or language has attracted intensive interest~\cite{neubig-hu-2018-rapid,zeng-etal-2019-iterative,cao2021continual,garcia-etal-2021-towards,Escolano2021,gu-etal-2022-continual}. However, an equally valuable problem remains unexplored: \emph{Is it possible to leverage knowledge from existing models to continually improve the performance of a given model in its domain?} To facilitate the study of this problem, we propose a new task named \emph{knowledge accumulation for NMT\xspace (KA-NMT\xspace)}, which is defined as transferring and accumulating knowledge from an {\it unlimited number} of teacher models into one student model, such that the student model improves in its domain continually. To make the task more realistic, without loss of generality, we assume that the teacher models (1) no longer have access to their training data, (2) arrive sequentially, and (3) are not necessarily beneficial to the student model. Although the proposed task definition does not cover all the aspects of the aforementioned problem, we believe it is a small but significant step towards resolving the entire problem. \begin{figure*} \centering \includegraphics[width=0.88\textwidth]{bigplot6.pdf} \caption{An overview of our proposed task and method.
Our method consists of three components: (1) \emph{Knowledge Detection} for identifying beneficial knowledge, (2) \emph{Cross-Model Knowledge Transfer} for learning from the beneficial knowledge (green arrow) and learning against the other knowledge (orange arrow), and (3) \emph{Cross-stage Knowledge Transfer} for retaining acquired knowledge from previous stages by introducing Distribution Store (DS).} \label{fig:method} \end{figure*} Overall, we argue there are three major challenges for KA-NMT\xspace: (1) {\bf Knowledge detection}: As the knowledge from the teacher model might be useless or even harmful for the student model, it is imperative to detect what knowledge should be accumulated. (2) {\bf Learning efficiency}: As most teacher models may contain only a small amount of definitely beneficial knowledge for the student model in practice, a method which only leverages such knowledge will be extremely inefficient. (3) {\bf Catastrophic forgetting}: As a special case of the continual learning (CL) problem, a KA-NMT\xspace method has to learn new knowledge without forgetting old knowledge, especially when encountering bad teacher models. To address these challenges, we propose a knowledge distillation (KD)~\cite{hinton2015distilling} based KA-NMT\xspace method. As shown in Figure~\ref{fig:method}, we divide the learning process into stages and learn from a single teacher at each stage. Moreover, knowledge accumulation is achieved via three sub-tasks: (1) \emph{Knowledge Detection}: identifying the beneficial knowledge from the teacher model at token-level based on the cross-entropy of the teacher and student models; (2) \emph{Cross-model Knowledge Transfer}: transferring knowledge efficiently from the teacher model to the student model by learning from beneficial knowledge and learning against other knowledge simultaneously.
(3) \emph{Cross-stage Knowledge Transfer}: transferring knowledge from the last-stage student model to the current-stage student model based on the proposed distribution store method to alleviate catastrophic forgetting. Extensive experiments under homogeneous, heterogeneous and malicious model settings on two language pairs show that our method outperforms representative baselines significantly and consistently. Overall, our contributions are three-fold: \begin{itemize} \item We introduce a new task KA-NMT\xspace with corresponding datasets and evaluation metrics to facilitate the study of continuously improving the performance of a model in its domain leveraging knowledge from existing models. \item We propose a novel method for KA-NMT\xspace, which is capable of performing knowledge detection at token-level, learning efficiently from both beneficial and other knowledge, and alleviating catastrophic forgetting by cross-stage knowledge transfer. \item Our approach obtains consistent improvement under three settings covering various typical model types and generalizes well to different language pairs. \end{itemize} \section{Related Work} \paragraph{Continual Learning} Continual learning (CL) for neural machine translation (NMT) aims at extending an NMT model to new domains~\cite{chu-etal-2017-empirical,khayrallah-etal-2018-regularized,thompson-etal-2019-overcoming,zeng-etal-2019-iterative,Liang_Zhao_Wang_Qiu_Li_2021_AAAI,cao2021continual,gu-etal-2021-pruning,gu-etal-2022-continual} or languages~\cite{neubig-hu-2018-rapid,lakew-etal-2018-transfer,lakew-etal-2019-adapting,liu-etal-2021-continual,tang-etal-2021-multilingual,garcia-etal-2021-towards,Escolano2021,huang-etal-2022-continual} without forgetting old knowledge (i.e., catastrophic forgetting).
Our proposed knowledge accumulation for NMT\xspace (KA-NMT\xspace) task can be viewed as a new CL for NMT task which focuses on {\it continually improving the performance on the old domain}, which requires going beyond solely overcoming catastrophic forgetting. KA-NMT\xspace is a multi-stage CL problem, which is less explored in CL for NMT~\cite{cao2021continual,Liang_Zhao_Wang_Qiu_Li_2021_AAAI}. Moreover, as the majority of models may contain little or even only harmful knowledge for the given model, it is essential for a KA-NMT\xspace method to be capable of detecting beneficial knowledge and learning effectively from these not so good models. Unfortunately, the conventional CL methods overlook this problem and it is non-trivial to adapt them to alleviate it. \citet{zeng-etal-2019-iterative} addresses a similar task of adapting from multiple out-of-domain models to a single in-domain model. Nevertheless, they assume the training data for the out-of-domain models are available, which is not true for our task. Besides, leveraging high-resource languages to improve low-resource language translation has also attracted intensive efforts~\cite{neubig-hu-2018-rapid,lakew-etal-2019-adapting,liu-etal-2021-continual,huang-etal-2022-continual}. We leave it as a future extension. \paragraph{Knowledge Distillation} Knowledge distillation (KD) is the most widely used method to transfer knowledge between models~\citep{hinton2015distilling,kim2016sequence}, making it a natural base framework for KA-NMT\xspace as the training data for the teacher models is unavailable. However, KD methods usually implicitly assume that the teacher model is superior or complementary to the student model, which does not hold for KA-NMT\xspace. Recently, knowledge inheritance~\citep{qin2021knowledge} allows a large model to learn from small domain-specific teacher models.
Nevertheless, it is still implicitly assumed that the student model is better than the teacher model for given tasks and datasets. Selective distillation~\citep{wei_online_2019,gu_train_2020,wang_selective_2021,shi_data_2022} filters data and losses to accelerate or enhance robustness rather than detecting useless or harmful knowledge. Therefore, it is necessary to develop novel knowledge detection and selective KD methods to accomplish KA-NMT\xspace. \begin{figure*}[!h] \centering \small \begin{threeparttable} \begin{tabular}{l|l|c|c} \toprule \textbf{Name} & \textbf{Formulation} & \textbf{Granularity} & \textbf{Unsupervised}\\ \midrule Hard Label & $\mathbbm{1}\{y_j=k \}$ & Token & No \\ Token Entropy & $ -\sum_{k=1}^{|V|} p(k|\mathbf{y}_{<j},\mathbf{x})\log p(k|\mathbf{y}_{<j},\mathbf{x})$ & Token & Yes \\ Token-level CE & $-\sum_{k=1}^{|V|} \mathbbm{1}\{y_j=k \}\log p(k|\mathbf{y}_{<j},\mathbf{x})$ & Token & No \\ Sentence-level CE & $\frac{1}{m}\sum_{j=1}^m (-\sum_{k=1}^{|V|} \mathbbm{1}\{y_j=k \}\log p(k|\mathbf{y}_{<j},\mathbf{x}))$ & Sentence & No \\ \bottomrule \end{tabular} \end{threeparttable} \captionof{table}{Candidates for the beneficialness metric $\mathcal{F}$ (Sec.~\ref{sec:detect}). $\mathbbm{1}$ is an indicator function.} \label{tab:metric} \vspace{1em} \centering \includegraphics[width=.93\textwidth]{modicorr.pdf} \captionof{figure}{ Pearson correlation coefficients between metrics and BLEU scores.} \label{fig:corr} \end{figure*} \section{Task Definition} \label{sec:def} Given a sequence of teacher models $[\mathcal{M}^g]=\mathcal{M}_1^g, \cdots, \mathcal{M}_i^g, \cdots$, and a student model $\mathcal{S}$, knowledge accumulation for NMT\xspace (KA-NMT\xspace) aims at improving the performance of $\mathcal{S}$ on $\mathcal{S}$'s domain continually by leveraging knowledge provided by $[\mathcal{M}^g]$. The procedure proceeds in a sequential manner, which means $\mathcal{S}$ learns from one teacher at a time, each such step being referred to as a \emph{stage}.
Note that during the process, the training sets for the teacher models are unavailable. Formally, in stage $i$, given the student model $\mathcal{S}_{\texttt{stage:}i-1}$ obtained in stage $i-1$, the $i$-th teacher model $\mathcal{M}_i^g$, and the training and development sets $\mathcal{D}_\mathrm{trn}$ and $\mathcal{D}_\mathrm{dev}$ for the student model, $\mathcal{S}_{\texttt{stage:}i}$ is obtained via a function $\mathrm{F}_\mathrm{Acc}(\cdot)$ \begin{equation} \resizebox{0.88\hsize}{!}{ $\mathcal{S}_{\texttt{stage:}i} = \mathrm{F}_\mathrm{Acc}(\mathcal{S}_{\texttt{stage:}i-1}, \mathcal{M}^g_i, \mathcal{D}_\mathrm{trn}, \mathcal{D}_\mathrm{dev})$ } \label{eqn:accum} \end{equation} such that \begin{equation} \mathrm{Q}(\mathcal{S}_{\texttt{stage:}i}) - \mathrm{Q}(\mathcal{S}_{\texttt{stage:}i-1}) \geq \delta, \end{equation} where $\mathrm{Q}(\cdot)$ is an evaluation metric of model quality and $\delta$ is the tolerance threshold. For brevity, we will denote the initial student model as $\mathcal{S}_{\texttt{stage:}0}$. During the entire process, we expect to maximize the overall quality gain while minimizing the occasional quality degradation, which are measured by the following two metrics: \noindent (1) \textbf{Accumulative quality gain} (AQG): measuring the overall quality gain of the student model from knowledge accumulation. AQG from stage $1$ to $N$ is defined as \begin{equation} \begin{split} \mathrm{AQG}&=\sum_{i=1}^N \big(\mathrm{Q}(\mathcal{S}_{\texttt{stage:}i}) - \mathrm{Q}(\mathcal{S}_{\texttt{stage:}i-1})\big)\\ &=\mathrm{Q}(\mathcal{S}_{\texttt{stage:}N}) - \mathrm{Q}(\mathcal{S}_{\texttt{stage:}0}). \end{split} \label{eqn:AQG} \end{equation} \noindent (2) \textbf{Accumulative quality degradation} (AQD): measuring the occasional quality degradation during knowledge accumulation, which should be avoided as much as possible.
AQD from stage $1$ to $N$ is defined as \begin{equation} \mathrm{AQD}=\sum_{i=1}^N \min\{0, \mathrm{Q}(\mathcal{S}_{\texttt{stage:}i}) - \mathrm{Q}(\mathcal{S}_{\texttt{stage:}i-1})\}. \label{eqn:AQD} \end{equation} \section{Method} \label{sectionmethod} As shown in Figure~\ref{fig:method}, our method consists of three components: (1) \emph{knowledge detection} which identifies the beneficial subset of the tokens in $\mathcal{D}_\mathrm{trn}$ (Sec.~\ref{sec:detect}), (2) \emph{cross-model knowledge transfer} which transfers knowledge from the teacher to the student model efficiently by learning from beneficial knowledge and against other knowledge simultaneously (Sec.~\ref{sec:distill}), and (3) \emph{cross-stage knowledge transfer} which transfers knowledge between student models across stages to alleviate catastrophic forgetting (Sec.~\ref{sec:store}). Before diving into the details, we will first give a brief overview of knowledge distillation. \subsection{Background} \label{sec:background} As the training data for the teacher models are unavailable, we derive our method from knowledge distillation (KD)~\cite{hinton2015distilling,kim2016sequence}. KD is usually formulated as an equivalent form of the Kullback-Leibler divergence between the output distributions of student and teacher models.
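As a toy illustration of this formulation, the KL divergence between the output distributions of a teacher and a student over a tiny vocabulary can be computed directly (the distributions below are made up for illustration):

```python
import math

def kl_divergence(q, p):
    """KL(q || p) for two discrete distributions over the same vocabulary."""
    return sum(qk * math.log(qk / pk) for qk, pk in zip(q, p) if qk > 0)

# Toy output distributions over a 3-token vocabulary (illustrative only)
teacher = [0.7, 0.2, 0.1]
student = [0.6, 0.3, 0.1]

kl = kl_divergence(teacher, student)
assert kl > 0                                    # positive whenever q != p
assert kl_divergence(teacher, teacher) == 0.0    # zero when they coincide
```

Minimizing this divergence with respect to the student's parameters moves the student's output distribution towards the teacher's, which is the core mechanism reused throughout the method.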
Specifically, given a sentence pair $(\mathbf{x},\mathbf{y})\in\mathcal{D}_\mathrm{trn}$, where $\textbf{x}=(x_1,...,x_n)$ and $\textbf{y}=(y_1,...,y_m)$, the KD loss for the $j$-th target token $y_j$ is usually defined as \begin{equation} \begin{split} \mathcal{L}_{\mathrm{KD}} (y_j) =&\sum_{k=1}^{|V|}q(k|\textbf{y}_{<j},\mathbf{x};\theta_{\mathcal{M}_i^g})\\ &\quad\quad\times \log \frac{q(k|\textbf{y}_{<j},\mathbf{x};\theta_{\mathcal{M}_i^g})}{p(k|\textbf{y}_{<j},\mathbf{x};\theta_{\mathcal{S}})}, \end{split} \label{eqn:distill} \end{equation} where $|V|$ is the size of the target vocabulary $V$, $\textbf{y}_{<j}=(y_1,...,y_{j-1})$, and $q(\cdot|\cdot)$ and $p(\cdot|\cdot)$ are the conditional probabilities of the teacher and student models parameterized by $\theta_{\mathcal{M}_i^g}$ and $\theta_{\mathcal{S}}$, respectively. \subsection{Knowledge Detection} \label{sec:detect} As shown in Eq.~\ref{eqn:distill}, conventional KD methods apply the KD loss to all target tokens. However, the teacher model $\mathcal{M}_i^g$ may perform poorly on $\mathcal{D}_{\mathrm{trn}}$ in KA-NMT\xspace. Therefore, it is essential to identify which part of the supervision from $\mathcal{M}_i^g$ will be beneficial for the student model, referred to as \emph{knowledge detection}. More specifically, given a batch of training data $\mathcal{B}\in\mathcal{D}_{\mathrm{trn}}$, knowledge detection aims at dividing the target tokens in $\mathcal{B}$ into two subsets $\mathcal{B}_+$ and $\mathcal{B}_-$ based on a beneficialness metric $\mathcal{F}$ \begin{equation} \begin{aligned} &\mathcal{B}_+:=\{y_j | \mathcal{F}(y_j, \mathcal{M}_i^g) \geq \mathcal{F}(y_j, \mathcal{S})\},\\ & \mathcal{B}_-:=\{y_j | \mathcal{F}(y_j, \mathcal{M}_i^g) < \mathcal{F}(y_j, \mathcal{S})\}. \end{aligned} \end{equation} A proper beneficialness metric $\mathcal{F}$ should correlate well with model quality and generalize well to various corpus domains.
To this end, we leverage the well-accepted corpus-level BLEU as a proxy for model quality and collect six widely used datasets of different domains and varying sizes as the evaluation datasets. The correlations between the candidate metrics in Table~\ref{tab:metric} and corpus-level BLEU scores on the six datasets are shown in Figure~\ref{fig:corr}. Both hard label and token-level CE are strongly correlated with corpus-level BLEU. However, hard label cannot break a tie when the predictions of the teacher and student models are both correct or both incorrect. Therefore, we adopt token-level CE as $\mathcal{F}$. More analysis on the metrics can be found in Appendix~\ref{appendixmetric}. \subsection{Cross-Model Knowledge Transfer} \label{sec:distill} After obtaining $\mathcal{B}_+$ and $\mathcal{B}_-$, a trivial way to transfer knowledge from the teacher model to the student model while avoiding performance degradation is to conduct KD on $\mathcal{B}_+$ and simply discard the tokens in $\mathcal{B}_-$. However, $\mathcal{B}_+$ is often extremely small in practice, making learning efficiency extraordinarily low. In analogy to humans, teachers can educate students by telling them what not to do. We argue that the student model can learn from $\mathcal{B}_-$ in the same way. Our intuition is that erroneous tokens with a high probability in the teacher model's output distribution are critical because the student is prone to make the same mistakes. Therefore, pushing the output distribution of the student model away from the poor target distribution may transfer certain error-prone knowledge to the student model. As a result, $\mathcal{B}_-$ can be leveraged effectively and the overall learning efficiency will be improved significantly. To this end, we propose a contrastive KD (CKD) loss.
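The knowledge-detection step above can be sketched as follows, reading token-level CE as the probability a model assigns to the gold token at a position (higher is better, matching the $\mathcal{F}$ values in the later case study); names and the toy numbers are illustrative:

```python
def token_ce_metric(model_probs, pos):
    """Token-level CE proxy F: the probability the model assigns to
    the gold token at this position (higher is better)."""
    return model_probs[pos]

def detect(positions, teacher_probs, student_probs):
    """Partition target-token positions into B+ (teacher at least as
    good) and B- (student strictly better)."""
    b_plus = [j for j in positions
              if token_ce_metric(teacher_probs, j)
              >= token_ce_metric(student_probs, j)]
    b_minus = [j for j in positions if j not in b_plus]
    return b_plus, b_minus

# Toy batch of three target positions.
teacher = {0: 0.9, 1: 0.1, 2: 0.5}
student = {0: 0.2, 1: 0.8, 2: 0.5}
b_plus, b_minus = detect([0, 1, 2], teacher, student)  # -> [0, 2], [1]
```

Ties (position 2 here) fall into $\mathcal{B}_+$ because the partition uses $\geq$ for the teacher.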
Formally, for the $j$-th token $y_j$, the CKD loss is defined as \begin{equation} \resizebox{0.85\hsize}{!}{$\mathcal{L}_{\mathrm{CKD}}(y_j)=\left\{ \begin{aligned} &\mathcal{L}_{\mathrm{KD}}(y_j) , & y_j \in \mathcal{B}_+ \\ &\min\{0,\alpha-\mathcal{L}_{\mathrm{KD}}(y_j)\} , & y_j \in \mathcal{B}_- \\ \end{aligned} \right.$} \label{alphalabel} \end{equation} where $\alpha$ is a threshold that prevents the student model from learning from the low-probability tokens predicted by the teacher model. Preliminary experiments show that the CKD loss works best when the ratio of positive and negative samples is appropriate. Thus, to balance the positive and negative samples, for every batch $\mathcal{B}$, we require $\frac{|\mathcal{B}_+|}{|\mathcal{B}_-|}\ge r$, where $r$ is a threshold. Batches that do not meet this requirement are discarded. \begin{table} \centering \small \begin{threeparttable} \begin{tabular}{c|l|r|r|r} \toprule \textbf{Model} & \textbf{Domain} & \textbf{Training} & \textbf{Dev.} & \textbf{Test} \\ \midrule $A$ & News & $1,250,000$ & $4,000$ & $13,000$ \\ $B$ & Oral & $2,500,000$ & $4,000$ & $12,000$ \\ $C$ & Internet & $750,000$ & $4,000$ & $13,000$ \\ $D$ & Speech & $220,000$ & $4,000$ & $5,000$ \\ $E$ & Subtitle & $300,000$ & $4,000$ & $4,000$ \\ \bottomrule \end{tabular} \end{threeparttable} \captionof{table}{The domains and the training and evaluation corpus sizes of the five Transformer-Base models used in the Chinese-to-English experiments. More details of the datasets are provided in Appendix~\ref{sec:appendix:datasets}. } \label{tab:corpora} \end{table} \subsection{Cross-Stage Knowledge Transfer} \label{sec:store} As the student model learns from multiple teacher models sequentially, KA-NMT\xspace is also challenged by catastrophic forgetting. Ideally, in stage $i$, the student model should retain all the knowledge obtained in stages $1$ to $i-1$ to realize continual knowledge accumulation.
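The contrastive KD loss of Eq.~\ref{alphalabel}, together with the batch filter $\frac{|\mathcal{B}_+|}{|\mathcal{B}_-|}\ge r$, can be sketched as follows. This is a literal transcription of the piecewise rule as stated; the default values of $\alpha$ and $r$ are illustrative, not from the paper:

```python
def ckd_loss(kd_value, in_b_plus, alpha=0.1):
    """Per-token contrastive KD loss, following Eq. (alphalabel) as
    stated: plain KD on B+; a thresholded term on B-."""
    if in_b_plus:
        return kd_value
    return min(0.0, alpha - kd_value)

def keep_batch(n_plus, n_minus, r=0.5):
    """Batch filter |B+|/|B-| >= r; batches that fail are discarded.
    (r = 0.5 is an illustrative default.)"""
    return n_minus == 0 or n_plus / n_minus >= r
```

For a $\mathcal{B}_-$ token whose KD divergence already exceeds $\alpha$, the second branch returns a negative value, while a token with a divergence below $\alpha$ contributes zero.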
To retain knowledge, we propose to store the prediction distributions of the student model in previous stages with a method named {\it distribution store}. As a compromise between performance and cost, we store only one prediction distribution for each target token $y_j$ in the transfer set $\mathcal{D}_\mathrm{trn}$. Formally, denoting the probability of a token indexed by $k$ predicted by a student model $\mathcal{S}_{\mathrm{stage}:i-1}$ given source sentence $\mathbf{x}$ and target prefix $\mathbf{y}_{<j}$ as $\mathrm{DS}_{i-1}(k|\textbf{y}_{<j},\mathbf{x})$, the knowledge from $\mathcal{S}_{\mathrm{stage}:i-1}$ can be transferred to $\mathcal{S}_{\mathrm{stage}:i}$ with the loss \begin{equation} \begin{split} \mathcal{L}_{\mathrm{DS}} (y_j) =&\sum_{k=1}^{|V|}\mathrm{DS}_{i-1}(k|\textbf{y}_{<j},\mathbf{x})\\ & \quad\quad\times \log \frac{\mathrm{DS}_{i-1}(k|\textbf{y}_{<j},\mathbf{x})}{p(k|\textbf{y}_{<j},\mathbf{x};\theta_{\mathcal{S}_{\texttt{stage:}i}})}. \end{split} \label{ds_eq} \end{equation} How do we determine which distributions should be stored? If the above method is effective, $\mathcal{S}_{\mathrm{stage}:i-1}$ will have retained all the knowledge, and storing its prediction distributions should suffice, which can be formally denoted as \begin{equation} \begin{split} \resizebox{0.88\hsize}{!}{$\mathrm{DS}_{i-1}(k|\textbf{y}_{<j},\mathbf{x})= p(k|\textbf{y}_{<j},\mathbf{x};\theta_{\mathcal{S}_{\texttt{stage:}i-1}})$}. \label{eqn:DS} \end{split} \end{equation} Although Eq.~\ref{eqn:DS} is simple and relies on a strong assumption, comparison results with more sophisticated distribution store designs (Appendix~\ref{app:dsss}) verify that Eq.~\ref{eqn:DS} is the most effective. Therefore, Eq.~\ref{eqn:DS} is adopted in the rest of the paper.
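The distribution store and the loss of Eq.~\ref{ds_eq} can be sketched as a table holding one stored distribution per target-token position, filled from the previous-stage student (Eq.~\ref{eqn:DS}) and compared against the current student with the same KL form as KD; keys and numbers below are illustrative:

```python
import math

# Distribution store: one stored distribution per target-token position,
# filled from the previous-stage student's predictions (Eq. eqn:DS).
store = {}  # (sentence_id, position) -> list of vocabulary probabilities

def ds_loss(key, p_student, eps=1e-12):
    """KL(DS_{i-1} || p_student) for one stored token, as in Eq. (ds_eq)."""
    return sum(d * math.log((d + eps) / (p + eps))
               for d, p in zip(store[key], p_student))

store[(0, 3)] = [0.6, 0.2, 0.1, 0.1]               # previous student's prediction
match = ds_loss((0, 3), [0.6, 0.2, 0.1, 0.1])      # ~0 when the student matches
drift = ds_loss((0, 3), [0.25, 0.25, 0.25, 0.25])  # > 0 once the student drifts
```

Minimizing this term penalizes the current student for drifting away from its previous-stage predictions, which is how forgetting is counteracted.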
Now, we can replace $\mathcal{L}_{\mathrm{KD}}$ in Eq.~\ref{eqn:distill} with two complementary knowledge sources: (1) $\mathcal{L}_{\mathrm{CKD}}$ to transfer knowledge from teacher models, and (2) $\mathcal{L}_{\mathrm{DS}}$ to transfer knowledge from previous students. The overall training loss is \begin{equation} \mathcal{L}=\mathcal{L}_{\mathrm{CE}} + \lambda \mathcal{L}_{\mathrm{CKD}}+(1-\lambda)\mathcal{L}_{\mathrm{DS}}, \label{equall} \end{equation} where $\mathcal{L}_{\mathrm{CE}}$ is the cross entropy loss of NMT, and $\lambda$ is a hyper-parameter. \section{Experiments} \begin{figure*}[!h] \centering \includegraphics[width=.9\textwidth]{ploter.pdf} \vspace{-1em} \caption{Comparison of our method with the baselines on six sets of experiments.} \label{fig:mainplot} \vspace{1em} \centering \resizebox{0.98\textwidth}{!}{ \begin{threeparttable} \begin{tabular}{l|............|..} \toprule \multirow{2}{*}{\bf Method}& \multicolumn{2}{c}{\bf BCDE$\rightarrow$ A}&\multicolumn{2}{c}{\bf CBDE$\rightarrow$ A}&\multicolumn{2}{c}{\bf ACDE$\rightarrow$ B}&\multicolumn{2}{c}{\bf CADE$\rightarrow$ B}&\multicolumn{2}{c}{\bf ABDE$\rightarrow$ C}&\multicolumn{2}{c}{\bf BADE$\rightarrow$ C}&\multicolumn{2}{|c}{\bf Average}\\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9} \cmidrule(lr){10-11} \cmidrule(lr){12-13} \cmidrule(lr){14-15} &\multicolumn{1}{c}{\bf {AQG} }&\multicolumn{1}{c}{\bf{AQD}} &\multicolumn{1}{c}{\bf {AQG} }&\multicolumn{1}{c}{\bf{AQD}} &\multicolumn{1}{c}{\bf {AQG} }&\multicolumn{1}{c}{\bf{AQD}} &\multicolumn{1}{c}{\bf {AQG} }&\multicolumn{1}{c}{\bf{AQD}} &\multicolumn{1}{c}{\bf {AQG} }&\multicolumn{1}{c}{\bf{AQD}} &\multicolumn{1}{c}{\bf {AQG} }&\multicolumn{1}{c}{\bf{AQD}} & \multicolumn{1}{|c}{\bf {AQG} }&\multicolumn{1}{c}{\bf{AQD}} \\ \midrule\midrule KD & -12.27 & -27.89 & -12.53 & -28.50 & -4.82 & -12.38 & -4.87 & -10.71 & -4.18 & -10.63 & -4.07 & -9.30 & -7.12 & -16.57 \cr EWC & -2.00 & -7.25 & -3.76 &
-10.94 & -1.68 & -4.89 & -2.91 & -6.51 & 0.19 & -1.46 & -1.30 & -3.89 & -1.91 & -5.82 \cr CL-NMT & 0.29 & -3.01 & 0.15 & -2.49 & 0.37 & -3.69 & 0.40 & -1.70 & 0.53 & -1.61 & 0.52 & -0.61 & 0.38 & -2.19 \cr Ours & \Bold 3.05 & \Bold -0.11 & \Bold 3.24 & \Bold 0.00 & \Bold 0.96 & \Bold 0.00 & \Bold 0.98 & \Bold -0.02 & \Bold 1.09 & \Bold -0.03 & \Bold 0.92 & \BoldTwo -0.38 & \Bold 1.71 & \Bold -0.09 \cr \bottomrule \end{tabular} \end{threeparttable}} \captionof{table}{Detailed results under six different configurations. ``$BCDE\rightarrow A$'' denotes that $A$ is the student model and models $B$ to $E$ are the teacher models in stages $1$ to $4$, respectively.} \label{tab:main2} \end{figure*} To evaluate the effectiveness of our method, we conduct experiments on Chinese-to-English (zh-en) translation under three representative settings covering homogeneous, heterogeneous, and malicious models, respectively. Further investigations are also conducted on German-to-English (de-en) translation. \subsection{Setup} \paragraph{Settings} For the experiments under the homogeneous model setting for Chinese-to-English translation, five Transformer-Base models~\cite{vaswani2017attention} $A$, $B$, $C$, $D$, $E$ are trained on the five representative datasets shown in Table~\ref{tab:corpora}. Three models $A$, $B$ and $C$ are combined in different orders to form six groups of experiments. To simulate the most challenging scenario, $D$ and $E$, which have weaker performance, are added at the end of these experiments to test the behavior of our method on poor models. For simplicity, each group is denoted with a string like ``$BCDE\rightarrow A$'', which means accumulating knowledge four times on the student model $\mathcal{S}_{\texttt{stage:}0}=A$. The teacher models are $\mathcal{M}_1^g=B$, $\mathcal{M}_2^g=C$, $\mathcal{M}_3^g=D$ and $\mathcal{M}_4^g=E$ successively.
The six groups of experiments are ``$BCDE\rightarrow A$'', ``$CBDE\rightarrow A$'', ``$ACDE\rightarrow B$'', ``$CADE\rightarrow B$'', ``$ABDE \rightarrow C$'', and ``$BADE\rightarrow C$''. For each group of experiments, the training, development and test sets $\mathcal{D}_{\mathrm{trn}}$, $\mathcal{D}_{\mathrm{dev}}$ and $\mathcal{D}_{\mathrm{tst}}$ are set to those of the corresponding student model. For example, they are the corresponding datasets of $A$ for ``$BCDE\rightarrow A$'' and those of $B$ for ``$ACDE\rightarrow B$''. For clarity, the differences in settings for the other aforementioned experiments will be given in the corresponding sections. \paragraph{Evaluation} We leverage the two metrics \emph{Accumulative Quality Gain (AQG)} and \emph{Accumulative Quality Degradation (AQD)} defined in Sec.~\ref{sec:def} to evaluate the methods. BLEU~\citep{bleu2002papineni} \footnote{BLEU score is computed using \texttt{multi-bleu.perl} on the corresponding test set for each student model.} is used as the model quality metric $\mathrm{Q}(\cdot)$. \paragraph{Baselines} Our method is compared with the following baseline methods: \begin{itemize} \setlength{\itemsep}{0em} \item {\it Knowledge Distillation} (KD)~\cite{khayrallah2018regularized} for NMT applies vanilla knowledge distillation to each token trivially. \item {\it Elastic Weight Consolidation} (EWC)~\citep{saunders2019domain,thompson2019overcoming} is a representative continual learning method that adds an EWC term as a penalty to alleviate catastrophic forgetting. \item {\it Continual Learning for NMT} (CL-NMT) \cite{cao2021continual} is a state-of-the-art work investigating multi-stage continual learning on NMT. \end{itemize} Vanilla ensemble distillation requires unaffordable computational cost and memory for a large number of teacher models, and it is non-trivial to adapt it to our scenarios due to potentially harmful teacher models. Therefore, we do not include it as a major baseline.
More detailed discussions and comparisons can be found in Appendix~\ref{app:ensemble}. \subsection{Implementation Details} Words are tokenized using byte pair encoding (BPE)~\cite{sennrich2015neural}, and the vocabulary size is $32$k. The hyper-parameters of the Transformer-Base models are set mostly following \citet{vaswani2017attention}, where the optimizer is Adam~\cite{kingma2014adam} with $\beta_1=0.9$ and $\beta_2=0.98$, the learning rate is $7\times 10^{-4}$, and the dropout rate is $0.1$. The batch size is $6,000$. $\lambda$ in Eq.~\ref{equall} in stage $i$ is defined as $\lambda=0.999 \frac{1-0.999^{i-1}}{1-0.999^{i}}$ following~\cite{cao2021continual}. More details of the hyper-parameters are provided in Appendix~\ref{sec:appendix:hyper-parameter}. \subsection{Chinese-to-English Translation} \label{sec:zh-en-translation} \paragraph{Homogeneous Models} The performances of our method and the baselines are shown in Figure~\ref{fig:mainplot} and Table~\ref{tab:main2}. From the results, we can observe that: (1) Our method achieves positive AQG values in all six groups of experiments and outperforms all baselines significantly (Table~\ref{tab:main2}), indicating that our method is effective for leveraging existing models to continually improve a given model's performance in its own domain. (2) Our method achieves zero or near-zero AQD values in all six groups of experiments (Table~\ref{tab:main2}), indicating that our method is also effective for alleviating catastrophic forgetting. Especially, when encountering model $D$, nearly all baselines face severe quality degradation while our method even achieves a gain in $ACDE\rightarrow B$ (Figure~\ref{fig:mainplot}), which further justifies the effectiveness of our method. (3) All three baselines perform poorly on KA-NMT\xspace, indicating that it is a challenging task. KD performs the worst, suffering from severe quality degradation (average AQG and AQD are $-7.12$ and $-16.57$, respectively).
We argue that this is because KD implicitly assumes the teacher models are helpful, which makes it prone to absorbing unbeneficial knowledge. EWC is designed to alleviate catastrophic forgetting and achieves better AQG and AQD than KD. However, EWC still fails to achieve knowledge accumulation except in $ABDE\rightarrow C$, with a small AQG of $0.19$. CL-NMT is specially designed for NMT and achieves the best AQG and AQD among the baselines. However, its average AQG is significantly smaller than ours ($0.38$ vs. $1.71$) and its average AQD is significantly worse than ours ($-2.19$ vs. $-0.09$). Overall, KA-NMT\xspace is challenging and our method is remarkably more effective for it than the baselines. (4) Despite the promising results, slight performance degradation is still observed in $BADE \rightarrow C$ (Figure~\ref{fig:mainplot}(f)). Thus, multi-stage knowledge accumulation still needs more exploration to further alleviate catastrophic forgetting. (5) Interestingly, the performance gains in the first stage are usually the highest, and the performance gain seems to depend on the order in which the teacher models are seen, e.g., the AQG of $CBDE\rightarrow A$ is larger than that of $BCDE\rightarrow A$. We failed to find a principled explanation for this phenomenon and leave the research on the order of the teacher models as future work.
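The AQG/AQD numbers discussed above are computed from the per-stage quality scores $\mathrm{Q}(\cdot)$ (BLEU) via Eq.~\ref{eqn:AQD}. A minimal sketch, assuming AQG is the complementary sum of the positive stage-to-stage changes (the AQG definition itself is given earlier in Sec.~\ref{sec:def}); the BLEU numbers are illustrative, not from the paper:

```python
def aqg_aqd(stage_scores):
    """Accumulative quality gain/degradation over stages 0..N, with AQD
    as in Eq. (eqn:AQD); AQG is assumed here to be the complementary sum
    of the positive stage-to-stage changes."""
    deltas = [b - a for a, b in zip(stage_scores, stage_scores[1:])]
    return (sum(max(0.0, d) for d in deltas),
            sum(min(0.0, d) for d in deltas))

# Illustrative BLEU of one student across stages 0..4:
aqg, aqd = aqg_aqd([30.0, 31.2, 31.0, 31.5, 31.4])
```

An ideal method maximizes AQG while keeping AQD at zero, i.e., every stage either improves the student or leaves it unchanged.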
\begin{table}[!t] \resizebox{0.48\textwidth}{!}{ \begin{threeparttable} \begin{tabular}{l|......} \toprule \multirow{2}{*}{\bf Method}& \multicolumn{2}{c}{\bf Transformer-Base}&\multicolumn{2}{c}{\bf RNN}&\multicolumn{2}{c}{\bf Transformer-Big} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} &\multicolumn{1}{c}{\bf {AQG} }&\multicolumn{1}{c}{\bf{AQD}} &\multicolumn{1}{c}{\bf {AQG} }&\multicolumn{1}{c}{\bf{AQD}} &\multicolumn{1}{c}{\bf {AQG} }&\multicolumn{1}{c}{\bf{AQD}} \\ \midrule\midrule KD & 0.10 & -0.93 & -1.19 & -1.21 & -0.27 & -1.17 \cr EWC & 0.15 & -0.90 & -1.65 & -1.65 & -0.09 & -0.32 \cr CL-NMT & 0.09 & -3.59 & -3.59 & -3.59 & -1.48 & -1.91 \cr Ours & \Bold 1.59 & \Bold 0.00 & \Bold -0.04 & \Bold -0.12 & \Bold 1.34 & \Bold 0.00 \cr \bottomrule \end{tabular} \end{threeparttable}} \captionof{table}{Results for different architecture models in stage $1$, averaged over six setting groups.} \label{tab:arc} \end{table} \begin{table}[!t] \resizebox{0.48\textwidth}{!}{ \begin{threeparttable} \begin{tabular}{l|......} \toprule \multirow{2}{*}{\bf Method}& \multicolumn{2}{c}{\bf Transformer-Base}&\multicolumn{2}{c}{\bf RNN}&\multicolumn{2}{c}{\bf Transformer-Big} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} &\multicolumn{1}{c}{\bf {AQG} }&\multicolumn{1}{c}{\bf{AQD}} &\multicolumn{1}{c}{\bf {AQG} }&\multicolumn{1}{c}{\bf{AQD}} &\multicolumn{1}{c}{\bf {AQG} }&\multicolumn{1}{c}{\bf{AQD}} \\ \midrule\midrule KD & -11.12 & -11.12 & -15.60 & -15.60 & -10.95 & -10.95 \cr EWC & -5.35 & -5.35 & -6.64 & -6.64 & -6.44 & -6.44 \cr CL-NMT & -18.34 & -18.34 & -26.32 & -26.32 & -18.58 & -18.58 \cr Ours & \Bold 0.00 & \Bold 0.00 &\Bold 0.00 &\Bold 0.00 &\Bold 0.00 &\Bold 0.00 \cr \bottomrule \end{tabular} \end{threeparttable}} \captionof{table}{Results for malicious models in stage $1$, averaged over six setting groups.} \label{tab:malicious} \end{table} \begin{table}[!t] \resizebox{0.48\textwidth}{!}{ \begin{threeparttable} \begin{tabular}{l|......} 
\toprule \multirow{2}{*}{\bf Method}& \multicolumn{2}{c}{\bf Homogeneous}&\multicolumn{2}{c}{\bf Heterogeneous}&\multicolumn{2}{c}{\bf Malicious } \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} &\multicolumn{1}{c}{\bf {AQG} }&\multicolumn{1}{c}{\bf{AQD}} &\multicolumn{1}{c}{\bf {AQG} }&\multicolumn{1}{c}{\bf{AQD}} &\multicolumn{1}{c}{\bf {AQG} }&\multicolumn{1}{c}{\bf{AQD}} \\ \midrule\midrule KD & 0.06 & -0.13 & -0.43 & -0.44 & -12.75 & -12.75 \cr EWC & 0.04 & -0.18 & -0.23 & -0.25 & -6.36 & -6.36 \cr CL-NMT & 0.23 & -0.10 & -4.42 & -4.42 & -22.13 & -22.13 \cr Ours & 0.57 & 0.00 & 0.23 & -0.05 & 0.00 & 0.00 \cr \bottomrule \end{tabular} \end{threeparttable}} \captionof{table}{Results for German-to-English translation, averaged over all subsets.} \label{tab:ende_main} \end{table} \begin{table*}[!] \centering \small \resizebox{0.93\textwidth}{!}{ \begin{threeparttable} \label{tab:performance_comparison2} \begin{tabular}{l|lllllll} \toprule {\bf Source} &\multicolumn{6}{l}{ \begin{CJK}{UTF8}{gbsn}每棵圣诞树上都挂满琳琅目的装点,但每棵树的顶端必定有一特大的星星 \end{CJK} } \\ {\bf Target} & \multicolumn{6}{l}{\it every christmas tree hung with dazzling \colorbox{yellow}{decorations} \color{gray}{, but the top of each tree must have a tree big stars}} \\ {\bf Teacher} & {\bf Candidates:} & \colorbox{yellow}{decorations}$_{P=0.396}$ & ornaments$_{P=0.125}$ & costumes$_{P=0.033}$ & ar@@$_{P=0.032}$ & jewelry$_{P=0.022}$ \\ {\bf Student} & {\bf Candidates:}& display$_{P=0.023}$ & car@@$_{P=0.0022}$ & displays$_{P=0.019}$ & 's$_{P=0.011}$ & \colorbox{yellow}{decorations}$_{P=0.010}$ \\ {\bf Loss} & \multicolumn{4}{l}{$\mathcal{F}(\cdot,\mathcal{M}^g)=0.396 \textgreater \mathcal{F}(\cdot,\mathcal{S})=0.010 \Longrightarrow y_i \in\mathcal{B}_+ \Longrightarrow \mathcal{L}_{\mathrm{CKD}}=\mathcal{L}_{\mathrm{KD}}=2.542$ } & \multicolumn{2}{l}{// Teacher is informative.} \\ \midrule \midrule {\bf Source} &\multicolumn{6}{l}{ \begin{CJK}{UTF8}{gbsn}无量跌停并非没有前兆,前一交易日它的证券股价表现已经显得出奇地疲弱\end{CJK} } \\ 
{\bf Target} & \multicolumn{6}{l}{\it measureless limit is a \colorbox{yellow}{precursor}\color{gray}{, in the previous session its securities share price performance appears ...}} \\ {\bf Teacher} & {\bf Candidates:}& fore@@$_{P=0.216}$ & sign$_{P=0.144}$ & single$_{P=0.104}$ & good@@$_{P=0.037}$ & major$_{P=0.029}$ \\ {\bf Student} & {\bf Candidates:} & \colorbox{yellow}{pre@@}$_{P=0.533}$ & \colorbox{yellow}{precursor@@}$_{P=0.161}$ & aug@@$_{P=0.030}$ & omen$_{P=0.017}$ & pre$_{P=0.012}$ \\ {\bf Loss} & \multicolumn{4}{l}{ $\mathcal{F}(\cdot,\mathcal{M}^g)=0.003 \textless \mathcal{F}(\cdot,\mathcal{S})=0.161 \Longrightarrow y_i \in\mathcal{B}_- \Longrightarrow \mathcal{L}_{\mathrm{CKD}}=\min\{0, \alpha-(-3.921)\}=0$ } & \multicolumn{2}{l}{// Teacher is too bad, $\alpha=0.1$.} \\ \midrule \midrule {\bf Source} &\multicolumn{6}{l}{ \begin{CJK}{UTF8}{gbsn}这笔钱将提存立即中标人 \end{CJK} } \\ {\bf Target} & \multicolumn{6}{l}{\it money will be escrowed immediately to winning \colorbox{yellow}{bidder}\color{gray}{.} } \\ {\bf Teacher} & {\bf Candidates:} & the@@$_{P=0.394}$ & mark$_{P=0.260}$ & be$_{P=0.034}$ & pay@@$_{P=0.034}$ & save$_{P=0.019}$ \\ {\bf Student} & {\bf Candidates:} & target@@$_{P=0.331}$ & get@@$_{P=0.235}$ & sign@@$_{P=0.022}$ & put$_{P=0.021}$ & get$_{P=0.020}$ \\ {\bf Loss} & \multicolumn{4}{l}{ $\mathcal{F}(\cdot,\mathcal{M}^g)=0.001 \textless \mathcal{F}(\cdot,\mathcal{S})=0.003 \Longrightarrow y_i \in\mathcal{B}_- \Longrightarrow \mathcal{L}_{\mathrm{CKD}}=\min\{0, \alpha-0.085\}=\alpha-0.085$ } & \multicolumn{2}{l}{// Teacher is somewhat informative, $\alpha=0.1$.} \\ \bottomrule \end{tabular} \end{threeparttable} } \caption{ Three representative examples for illustrating the effectiveness of the contrastive KD loss. Golden label $y_j$ and candidates matching with $y_j$ are highlighted in \colorbox{yellow}{yellow}. ``Student'' and ``Teacher'' show the top 5 predicted candidate tokens and their corresponding probabilities.
$\alpha=0.1$ is the threshold here.} \label{tab:casestudy} \end{table*} \paragraph{Heterogeneous Models} Using the prediction distributions as the medium to transfer and accumulate knowledge, our approach is model-agnostic and scalable. To verify this, we replace the Transformer-Base teacher models with RNN~\cite{bahdanau2014neural} and Transformer-Big~\cite{vaswani2017attention} models, and repeat the experiments in Table~\ref{tab:main2} with the other settings remaining identical. Table~\ref{tab:arc} shows results similar to Table~\ref{tab:main2}: our method outperforms all baselines significantly and again achieves zero or near-zero AQD, indicating that our method is not sensitive to model architectures. \paragraph{Malicious Models} Robustness to malicious models is critical in knowledge accumulation, as only the parameters of the teachers, rather than their training data or metadata, are available. We simulate malicious teacher models by shuffling the outputs of a well-trained model within a batch so that the model gives almost completely wrong answers with high confidence. We repeat the experiments in Table~\ref{tab:main2} with the other settings remaining identical. As shown in Table~\ref{tab:malicious}, our approach is far less affected by the malicious model. Moreover, detecting and skipping malicious models outright to save computational resources could be explored further. \subsection{German-to-English Translation} We also conduct experiments on German-to-English datasets. Models are trained on four different datasets: WMT16, TildeMODEL v2018, Tanzil v1, and Europarl. Other settings are similar to the Chinese-to-English experiments and are detailed in Appendix~\ref{app:ende}. The average values among each of the homogeneous, heterogeneous, and malicious model settings are reported in Table~\ref{tab:ende_main}.
Due to the large domain differences of the datasets, only our method consistently obtains non-negative AQG and near-zero AQD, exceeding the baselines by large margins, demonstrating that our approach is effective for different language pairs. \subsection{Ablation Study} \begin{table}[!t] \centering \small \begin{threeparttable} \label{tab:performance_comparison1} \renewcommand\tabcolsep{3.0pt} \begin{tabular}{ll..} \toprule &{\bf Method} & \multicolumn{1}{c}{\bf Stage 1} & \multicolumn{1}{c}{\bf Stage 4} \cr \midrule 1 & Full Model & \Bold 31.07 & \Bold 31.18 \cr 2 & \quad- $\mathcal{L}_\mathrm{DS}$ & 30.74 & 29.94 \cr 3 & \quad - $\mathcal{L}_\mathrm{CKD}$, + $\mathcal{L}_\mathrm{KD}$ on $\mathcal{B}_+$ & 30.69 & 30.54 \cr 4 & \quad - $\mathcal{L}_\mathrm{CKD}$, + $\mathcal{L}_\mathrm{KD}$ on $\mathcal{B}_+\cup\mathcal{B}_-$ & 30.60 & 30.31 \cr \bottomrule \end{tabular} \end{threeparttable} \caption{Ablation study. BLEU scores averaged over six setting groups are reported. ``- $\mathcal{L}_\mathrm{DS}$'' denotes removing the distribution store loss. ``- $\mathcal{L}_\mathrm{CKD}$, + $\mathcal{L}_\mathrm{KD}$ on $\mathcal{B}_+$'' denotes replacing $\mathcal{L}_\mathrm{CKD}$ with KD loss computed on $\mathcal{B}_+$. And ``- $\mathcal{L}_\mathrm{CKD}$, + $\mathcal{L}_\mathrm{KD}$ on $\mathcal{B}_+\cup\mathcal{B}_-$'' denotes replacing $\mathcal{L}_\mathrm{CKD}$ with KD loss computed on the entire batch.} \label{tab:sub2} \end{table} Table~\ref{tab:sub2} shows the effect of the distribution store loss ($\mathcal{L}_{\mathrm{DS}}$) and the contrastive KD loss ($\mathcal{L}_{\mathrm{CKD}}$) at the beginning and later stages of KA-NMT\xspace for Chinese-to-English translation. We can observe that: (1) Removing either $\mathcal{L}_{\mathrm{DS}}$ or $\mathcal{L}_{\mathrm{CKD}}$ hurts the performance, indicating their effectiveness.
(2) Without $\mathcal{L}_{\mathrm{DS}}$, the performance drops severely, especially at later stages, verifying that the distribution store is essential for alleviating catastrophic forgetting. (3) Comparing row 1 with row 3, we can conclude that the negative samples ($\mathcal{B}_-$) also contain valuable nontrivial knowledge. Furthermore, trivially applying the vanilla KD loss $\mathcal{L}_{\mathrm{KD}}$ on $\mathcal{B}_-$ (row 3 vs. 4) brings no gain. Therefore, our proposed contrastive KD loss is effective and essential for leveraging the knowledge in negative samples. \subsection{Case Study} In Table~\ref{tab:casestudy}, we show three examples to demonstrate the principle of knowledge detection and the contrastive KD loss. (1) In the first case, the KD loss is positive because the teacher model assigns a higher probability to the ground truth token ``decorations'' than the student, indicating a better distribution from the former. (2) In the second case, the output of the teacher model is discarded because the negative KD loss exceeds the threshold. This is a reasonable choice since the output of the teacher is too far from the golden label. (3) In the third case, the teacher model has slightly worse predictions than the student, discouraging the student model from making similar mistakes. \section{Conclusion} In this work, we propose the knowledge accumulation task for neural machine translation to accumulate knowledge continually from a sequence of teacher models to a student model. Extensive experiments demonstrate that the proposed knowledge detection and contrastive knowledge distillation components are capable of transferring knowledge efficiently by learning from and against teacher models. Moreover, the distribution store enables retaining the knowledge acquired from previous students to alleviate catastrophic forgetting. Further analysis reveals that our approach is model-agnostic, highly robust, and effective for different languages.
Research on the effect of teacher model order, on overcoming the obstacles introduced by vocabulary differences, and on extending to multiple student models are all promising future research directions. \section*{Limitations} There are some limitations that have yet to be addressed. Since we use the predicted probability distributions of the model output as the medium for knowledge accumulation, the vocabularies of the models need to be consistent. Overcoming this limitation would allow knowledge accumulation to be extended to models with different language pairs and different modalities. Also, although our approach is robust to malicious models, there are more diverse and sophisticated attacks in the real world that require more research on defense. In addition, the teacher and student models must be trained on the same language pair. Further studies can consider more general scenarios without the above limitations.
\section{Introduction} X-rays from non-magnetic massive stars are thought to be produced two ways: via embedded wind shocks in the radiatively driven wind close to the star, and, in massive binaries, via shocks in the wind collision zone between the two stars \citep{1992ApJ...386..265S}. 3D numerical simulations of colliding wind shocks in $\eta$~Carinae and the WC8 binary WR 140 \citep{2010AAS...21542602R, 2011ApJ...726..105P} correctly predict the characteristic rise, rapid decline, and recovery of the X-ray light curve as these highly eccentric, long-period, adiabatic systems approach and emerge from periastron. 3D simulations of O+O binaries by \citet{2010MNRAS.403.1657P} reproduce the overall X-ray luminosity and post-shock temperatures of a number of systems spanning a range of mass-loss rates, orbital periods and eccentricities. In particular, they were able to reproduce the strong, but relatively soft X-ray emission seen in some highly radiative, short-period systems. On the other hand, the 3D model for WR~22 (WN7h + O9 III-V) over-predicts the observed $L_{\rm X}$ by an order of magnitude or more \citep{2011A&A...530A.119P}. The O+O binaries in the {\it Chandra} Carina Complex project showed a wide range of $L_{\rm X}/L_{\rm bol}$ \citep{2011ApJS..194....7N}, and \citet{2011ApJS..194....5G} found that the short-period systems have significantly softer X-ray spectra than the longer-period systems. As Owocki discusses in these proceedings, thin-shell mixing may play an important role in setting the scaling between $L_{\rm X}$ and $L_{\rm bol}$ in the winds of single stars, and could produce significant cooling in the wind collision zone of close, massive binaries. Motivated by these results, we undertook a survey of all known WR and O+O binaries with X-ray fluxes measured with {\it Chandra} or {\it XMM-Newton}. \section{Methodology} To begin, we searched the literature for {\it Chandra} and {\it XMM} analyses of WR and O+O binaries.
To those, we added X-ray sources in the XMM-Newton Serendipitous Source Catalog (2XMMi), in the {\it XMM-Newton} XAssist Source List, or in the {\it Chandra} XAssist Source List, within $15^{\prime\prime}$ of positions in the 7th catalog of WR stars \citep{2001yCat.3215....0V} and O+O binaries in the SB9 catalog of spectroscopic binaries \citep{2009yCat....102020P}. For the O+O binaries with reliable X-ray fluxes, column densities, and distances, we calculated a 0.5--8 keV X-ray flux. These results and the corresponding X-ray, optical, and distance references are reported in Table 1. Reference codes are noted in parentheses in the references section. Similarly, results for the known WR binaries are reported in Table 2. In some cases, WR binary X-ray luminosities were taken from the {\it ROSAT} survey of \citet{2000MNRAS.318..214I}. In cases where stars (e.g., WR~101k) in the {\it Chandra} or {\it XMM} XAssist source lists were detected in many observations, or by multiple cameras on {\it XMM}, we report $L_{\rm X}$ based on a median unabsorbed X-ray flux. \begin{table} \caption{Known X-ray Luminous O+O binaries} \smallskip \begin{center} {\scriptsize \begin{tabular}{lllcccccl} \tableline \noalign{\smallskip} Name & Primary & Second. & Dist. & Period & $kT$ & $L_{\rm X}$ & $L_{\rm X}/L_{\rm bol}$ & Ref. \\ & Sp.Type & Sp.Type & (kpc) & (days) & (keV) & (cgs) & & Note \\ \noalign{\smallskip} \tableline \noalign{\smallskip} HD 215835 & O5.5 V((f)) & O6.5 V((f)) & 3.5 & 2.11 & 0.3 & 33.73 & $-$6.16 & aly \\ Mk33Na & O6.5 V & O3 If* & 51 & 1140 & 1.3 & 33.72 & $-$6.08 & l \\ Cyg OB2 9 & O5 If & O6-7 & 1.2 & 852.9 & 1.2 & 33.52 & $-$6.30 & dmz \\ HD 47129 & O7.5 I & O6 I & 1.5 & 14.4 & 1.3 & 33.36 & $-$6.02 & kq \\ CEN 1B & O4 & ? & 1.6 & $\ldots$ & 2.3 & 33.28 & $-$6.00 & it \\ HD 165052 & O6.5 V & O6.5 V & 1.6 & 2.95 & 0.6 & 33.18 & $-$6.00 & alwyp\AA \\ HD 93250 & O4 III(fc) & O & 2.3 & 250 & 2.3 & 33.18 & $-$6.41 & su1 \\ CEN 1A & O4 & ? 
& 1.6 & $\ldots$ & 6.5 & 33.16 & $-$6.12 & it \\ HD 101436 & O6.5 V & O7 V & 2.3 & 37.37 & $\ldots$ & 33.11 & $-$6.17 & b \\ HD 93403 & O5.5 I & O7 V & 2.3 & 15.093 & 1 & 33.11 & $-$6.41 & sxu \\ HD 159176 & O6 V & O6 V & 0.8 & 3.36677 & 0.3 & 33.07 & $-$5.89 & ael \\ HD 101131 & O6.5 V((f)) & O8.5 V & 2.3 & 9.65 & $\ldots$ & 32.98 & $-$6.33 & b \\ V729 Cyg & O7 f & O6 f & 1.8 & 6.5978 & 0.6 & 32.96 & $-$6.74 & p \\ HD 101205 & O7 IIIn((f)) & O & 2.3 & 2.45 & $\ldots$ & 32.95 & $-$6.54 & b \\ HD 1337 & O9 III & O9 III & 3.9 & 3.52 & $\ldots$ & 32.90 & $-$6.65 & al \\ Cyg OB2 8A & O5.5 I(f) & O6 & 1.2 & 21.907 & 1 & 32.88 & $-$6.75 & mnz \\ MT91 516 & O5.5 V & ? & 1.2 & $\ldots$ & 0.5 & 32.73 & $-$6.79 & mz \\ HD 101413 & O8 V & B3: V & 2.3 & 150 & $\ldots$ & 32.68 & $-$6.09 & b \\ HD 100213 & O7 V & O8 V & 2.1 & 1.38729 & $\ldots$ & 32.63 & $-$5.95 & al \\ HD 93205 & O3 V & O8 V & 2.3 & 6.0803 & 0.3 & 32.55 & $-$6.82 & \$sux \\ QZ Car & O9.7 Ib:(n) & O9 III & 2.3 & 20.72 & 1 & 32.55 & $-$7.26 & sN \\ HD 101190 & O4 V((f+)) & O7-7.5 V & 2.3 & 6.05 & $\ldots$ & 32.52 & $-$6.74 & b \\ CPR2002 A11 & O7.5 Ibf & ? & 1.2 & $\ldots$ & 1.6 & 32.47 & $-$6.64 & mz \\ HD 57060 & O7 f & O7 & 1.5 & 4.39 & 0.7 & 32.43 & $-$7.38 & alx \\ HD 97484 & O7.5 & O8.5 & 3.2 & 3.41428 & 0.6 & 32.41 & $-$6.87 & p \\ HD 206267 & O6.5 V((f)) & O9 & 0.8 & 3.71 & 0.6 & 32.36 & $-$6.89 & alp \\ HD 152218 & O9 IV & O9.7 V & 1.4 & 5.604 & 0.6 & 32.05 & $-$6.73 & pv \\ HD 93161A & O8 V & O9 V & 2.3 & 8.566 & 0.5 & 31.94 & $-$6.92 & osu \\ HD 152218 & O9 IV & O9.7 V & 1.6 & 5.6 & 0.5 & 31.93 & $-6.85$ & pv \\ Wd 1 30 & O9-B0.5 Ia & ? 
& 4.0 & $\ldots$ & 1.3 & 31.90 & $\ldots$ & \O$\hbar$ \\ Tr 16-110 & O7 V & O8 V, O9 V & 2.3 & 3.62864 & 0.6 & 31.74 & $-$7.24 & sxu \\ HD 93343 & O8 V & O7-8.5 V & 2.3 & 44.15 & 3.2 & 31.66 & $-$6.98 & sxu \\ Tr 16-34 & O8 V & O9.5 V & 2.3 & 2.9995 & 0.6 & 31.56 & $-$7.23 & gsu \\ Tr 16-104 & O7 V & O9.5 V, B0.2 & 2.3 & 2.1529 & 0.5 & 31.38 & $-$7.29 & jsxu \\ FO 15 & O5.5 V & O9.5 V & 2.3 & 1.41356 & 0.5 & 31.24 & $-$7.65 & rsu \\ Tr 16-1 & O9.5 V & B0.3 V & 2.3 & 1.4693 & 0.3 & 30.87 & $-$7.30 & sxu \\ \noalign{\smallskip} \tableline \end{tabular} } \end{center} \end{table} \begin{table} \caption{Known X-ray Luminous WR binaries} \smallskip \begin{center} {\scriptsize \begin{tabular}{lllcccccl} \tableline \noalign{\smallskip} Name & Primary & Second. & Dist. & Period & $kT$ & $L_{\rm X}$ & $L_{\rm X}/L_{\rm bol}$ & Ref. \\ & Sp.Type & Sp.Type & (kpc) & (days) & (keV) & (cgs) & & \\ \noalign{\smallskip} \tableline \noalign{\smallskip} WR 48a & WC8ed & ? & 3.8 & 7800 & 2.3 & 35.39 & $-$4.00 & @ \\ Mk34 & WN6(h) & ? & 51 & 1134 & 2.3 & 35.38 & $-$4.72 & 3l \\ $\eta$ Car & LBV & O & 2.3 & 2024 & 4.4 & 35.26 & $-$5.02 & 7 \\ R 140a & WC5 & WN4 & 51 & 880 & 0.9 & 35.25 & $-$4.65 & l \\ WR 25 & WN6h & O4 f & 2.3 & 207.8 & 1.3 & 35.11 & $-$5.49 & 3M \\ R 136c & WN5h & ? & 51 & 998 & 3.0 & 35.04 & $-$4.96 & 3lQ \\ CXO J1745-28 & WN9h & O? & 7.6 & 189 & 2.7 & 35.04 & $-$4.78 & 3WP \\ WR 28 & WN6(h) & OB? & 10.8 & $\ldots$ & $\ldots$ & 34.86 & $\ldots$ & F \\ WR 43c & WN6+abs & ? & 7.6 & 8.89 & $\ldots$ & 34.85 & $-$5.33 & KV3 \\ WR 140 & WC7pd & O4-5 & 1.1 & 2900 & $\ldots$ & 34.68 & $-$4.75 & GHJS \\ Mk33Sa & WC5 & O3 IIIf* & 51 & 1120 & 0.6 & 34.63 & $-$4.87 & l \\ Brey 16 & WN4b & O5: & 51 & 18 & 7.0 & 34.58 & $-$4.30 & BT \\ WR 43a & WN6ha & ? & 10.1 & 3.772 & $\ldots$ & 34.30 & $-$6.30 & I \\ R 136a & WN5 & & 51 & $\ldots$ & 1.8 & 34.28 & $-$5.92 & l \\ WR 29 & WN7h & O & 17.2 & 3.16415 & $\ldots$ & 34.21 & $\ldots$ & FH4 \\ WR 101k & WN9-11 & ? 
& 8.0 & 9.72 & $\ldots$ & 34.12 & $\ldots$ & J \\ Mk39 & WN6 & O3 If & 51 & 92.6 & 1.6 & 34.11 & $-$5.89 & 3l \\ HD 5980 & WN3 & OB & 61 & 19.266 & 7.0 & 34.08 & $-$5.90 & T \\ WR 121a & WN7 & a/OB & 5.6 & $\ldots$ & $\ldots$ & 34.07 & $\ldots$ & J \\ Arches-F6 & WN9h & ? & 8.0 & $\ldots$ & 1.9 & 34.04 & $\ldots$ & RU \\ R 136a3 & WN5h & & 51 & $\ldots$ & 4.2 & 33.93 & $\ldots$ & 3 \\ WR 20a & WN6ha & WN6ha & 8.0 & 3.68 & 0.5 & 33.90 & $-$6.15 & X \\ WR 65 & WC9d & OB & 5.0 & $\ldots$ & $\ldots$ & 33.90 & $\ldots$ & J \\ Arches-F7 & WN9h & ? & 8.0 & $\ldots$ & 2.1 & 33.86 & $\ldots$ & RU \\ WR 20b & WN6ha & ? & 8.0 & $\ldots$ & 3.6 & 33.81 & $-$6.16 & 3X \\ Brey 10a & O3 If*/WN6 & & 51 & 3.23 & $\ldots$ & 33.80 & $\ldots$ & 3 \\ WR 87 & WN7h & OB & 2.9 & $\ldots$ & $\ldots$ & 33.75 & $-$5.75 & FG \\ WR 93 & WC7 & O7-9 & 2.5 & $\ldots$ & $\ldots$ & 33.74 & $\ldots$ & J \\ BAT99-32 & WN6(h) & & 51 & 1.91 & $\ldots$ & 33.70 & $\ldots$ & 3 \\ Arches-F9 & WN9h & & 8.0 & $\ldots$ & 3.3 & 33.66 & $-$6.22 & RU \\ Brey 26 & WN6(h) & ? & 51 & 1.91 & $\ldots$ & 33.65 & $\ldots$ & 0 \\ WR 71 & WN6 & OB? & 9.0 & 7.69 & $\ldots$ & 33.63 & $\ldots$ & AF \\ WR 63 & WN7 & OB & 3.9 & $\ldots$ & $\ldots$ & 33.63 & $\ldots$ & F \\ R 144 & WN6 & ? & 51 & $\ldots$ & $\ldots$ & 33.52 & $\ldots$ & 30Q \\ Av 336a & WN & O6 & 61 & 19.56 & 2.2 & 33.52 & $\ldots$ & T \\ WR 35 & WN6h & OB? & 17.9 & $\ldots$ & $\ldots$ & 33.47 & $-$5.51 & F \\ WR 145 & WN7/WCE & ? & 1.2 & 20 & $\ldots$ & 33.43 & $\ldots$ & J \\ WR 51 & WN4 & OB? & 8.1 & $\ldots$ & $\ldots$ & 33.26 & $-$5.82 & F \\ Brey 56 & WN6 & ? & 51 & $\ldots$ & 2.3 & 33.23 & $-$5.75 & BT \\ WR 66 & WN8(h) & cc? & 3.3 & 3.515 & $\ldots$ & 33.21 & $-$6.17 & F \\ WR 48 & WC6 & O9.5/B0 Iab & 2.2 & 18.341 & $\ldots$ & 33.20 & $\ldots$ & H \\ R 134 & WN6(h) & ? 
& 51 & 786 & 1.1 & 33.18 & $-$6.92 & l \\ WR 11 & WC8 & O7.5 III-V & 0.3 & 78.53 & 1.0 & 33.17 & $-$5.39 & GHJ \\ Mk42 & WN6 & O3 If & 51 & 922 & 1.1 & 33.15 & $-$6.85 & l \\ Mk30 & WN6 & O3 If* & 51 & 4.7 & $\ldots$ & 33.11 & $\ldots$ & 0 \\ R 139 & WN & O6 Iaf & 51 & 952 & 1.8 & 33.08 & $-$6.92 & l \\ Wd 1 WR B & WN7o & ? & 4.0 & 3.52 & 1.4 & 33.05 & $\ldots$ & \O$\hbar$ \\ WR 47 & WN6 & O5 V & 3.8 & 6.2393 & 1.1 & 33.05 & $-$6.55 & AHp2 \\ WR 67 & WN6 & OB? & 3.3 & $\ldots$ & $\ldots$ & 33.04 & $-$5.64 & F \\ R 145 & WN6h & ? & 51 & 158.8 & 1.6 & 33.00 & $\ldots$ & Q6 \\ WR 133 & WN5 & O9 I & 2.1 & 112.4 & $\ldots$ & 33.00 & $-$6.36 & EHp \\ WR 21a & WN6 & O/a & 3.0 & 31.673 & 3.3 & 33.00 & $-$5.78 & OY3 \\ WR 22 & WN7h & O9 III-V & 2.3 & 80.336 & 1.4 & 32.95 & $-$6.90 & 3Hp5 \\ WR 158 & WN7h & Be? & 7.9 & $\ldots$ & $\ldots$ & 32.93 & $-$6.55 & F \\ R 140b & WN6h & ? & 51 & 2.76 & $\ldots$ & 32.90 & $\ldots$ & T \\ WR 89 & WN8h & OB & 2.9 & $\ldots$ & $\ldots$ & 32.90 & $-$6.98 & F \\ WR 46 & WN3p & OB? & 4.1 & 0.2825 & $\ldots$ & 32.87 & $-$6.22 & CZ \\ R 135 & WN7h & ? & 51 & 2.11 & $\ldots$ & 32.78 & $\ldots$ & 0 \\ WR 148 & WN8h & B3 IV/BH & 8.3 & 4.317364 & $\ldots$ & 32.78 & $-$6.80 & FH \\ WR 132 & WC6 & ? & 3.9 & 8.16 & $\ldots$ & 32.75 & $-$5.93 & F \\ WR 36 & WN5-6 & OB? & 8.5 & $\ldots$ & $\ldots$ & 32.74 & $-$6.14 & F \\ WR 146 & WC6 & O8 & 1.2 & 1235 & $\ldots$ & 32.73 & $\ldots$ & J \\ WR 24 & WN6ha & ? & 2.3 & $\ldots$ & 1.7 & 32.71 & $-$6.93 & 8 \\ Brey 65 & WN7ha & ? & 51 & 3 & $\ldots$ & 32.70 & $\ldots$ & 30 \\ \noalign{\smallskip} \tableline \end{tabular} } \end{center} \end{table} \setcounter{table}{1} \begin{table} \caption{Known X-ray Luminous WR binaries (continued)} \smallskip \begin{center} {\scriptsize \begin{tabular}{lllcccccl} \tableline \noalign{\smallskip} Name & Primary & Second. & Dist. & Period & $kT$ & $L_{\rm X}$ & $L_{\rm X}/L_{\rm bol}$ & Ref. 
\\ & SpType & SpType & (kpc) & (days) & (keV) & (cgs) & & \\ \noalign{\smallskip} \tableline \noalign{\smallskip} WR 147N & WN8(h) & B0.5 V & 0.6 & 2880 & 1.8 & 32.67 & $-$7.01 & p9 \\ WR 44 & WN4 & OB? & 10.0 & $\ldots$ & $\ldots$ & 32.64 & $-$6.54 & F \\ WR 155 & WN6 & O9 II-Ib & 2.8 & 1.641244 & $\ldots$ & 32.64 & $-$6.44 & EHp \\ WR 108 & WN9h & OB & 5.6 & $\ldots$ & $\ldots$ & 32.62 & $-$6.76 & F \\ WR 1 & WN4 & ? & 0.7 & 6.1 & $\ldots$ & 32.50 & $\ldots$ & X! \\ WR 139 & WN5 & O6 III-V & 1.9 & 4.212435 & 3.3 & 32.50 & $-$6.40 & EHZ2 \\ WR 125 & WC7ed & O9 III & 3.1 & 6600 & $\ldots$ & 32.49 & $-$6.29 & F \\ WR 12 & WN8h & ? & 5.0 & 23.923 & $\ldots$ & 32.48 & $-$6.90 & 3AF \\ WR 114 & WC5 & OB? & 2.0 & $\ldots$ & $\ldots$ & 32.41 & $-$5.87 & F \\ WR 138 & WN5 & B? & 1.3 & 1538 & $\ldots$ & 32.20 & $-$6.68 & GH \\ WR 79 & WC7 & O5-8 & 2.0 & 8.8908 & $\ldots$ & 32.18 & $\ldots$ & HJ \\ WR 6 & WN4b & ? & 0.9 & 3.765 & $\ldots$ & 32.12 & $\ldots$ & H \\ WR 115 & WN6 & OB? & 2.0 & $\ldots$ & $\ldots$ & 32.08 & $-$6.90 & FG \\ WR 141 & WN5 & O5 V-III & 1.3 & 21.6895 & $\ldots$ & 31.99 & $-$7.98 & EHX \\ WR 39 & WC7 & OB? & 5.5 & $\ldots$ & $\ldots$ & 31.92 & $-$6.76 & F \\ WR 3 & WN3 & O4 & 5.9 & 46.85 & $\ldots$ & 31.91 & $-$7.27 & F \\ WR 14 & WC7 & ? & 2.0 & 2.42 & $\ldots$ & 31.90 & $-$6.58 & F \\ WR 128 & WN4(h) & OB? & 9.4 & 3.56 & $\ldots$ & 31.75 & $-$7.33 & F \\ WR 86 & WC7 & B0 III-I & 2.9 & $\ldots$ & $\ldots$ & 31.72 & $-$7.36 & F \\ WR 136 & WN6h & ? & 1.6 & $\ldots$ & 2.2 & 31.51 & $-$7.47 & 8 \\ WR 121 & WC9d & ? & 1.8 & $\ldots$ & $\ldots$ & 31.49 & $-$7.40 & EL \\ WR 4 & WC5 & ? & 2.4 & 2.4096 & $\ldots$ & 31.37 & $-$7.21 & FH \\ WR 143 & WC4 & OB? & 1.1 & $\ldots$ & $\ldots$ & 31.22 & $-$7.36 & F \\ Wd 1 WR F & WC9 & OB+? & 4.0 & 5.05 & 18 & 31.14 & $\ldots$ & $\hbar$\ss \\ Wd 1 WR L & WN9h: & ? 
& 4.0 & $<10$ & 8 & 30.88 & $\ldots$ & $\hbar$ \\ \noalign{\smallskip} \tableline \end{tabular} } \end{center} \end{table} \section{Results and Discussion} We emphasize that the results presented in Tables 1 and 2 are preliminary. Moreover, mass-loss rates, orbital parameters, and accurate X-ray spectral parameters are needed for a number of systems. Nonetheless, it is clear that the most X-ray luminous WR binaries, like the LBV binary $\eta$~Car, are typically very long-period systems. The exceptions, which include the 8.9-day WN6 binary WR~43c=NGC~3603-A1 with $\log L_{\rm X}/L_{\rm bol} = -5.3$, are remarkable and merit further study. We note that only two WR systems have $kT < 0.9$~keV: Mk33Sa (WC5 + O3 IIIf*) in the LMC and WR 20a (WN6ha + WN6ha) in Westerlund~2. Though the spectral type of the secondary is often not known, WR systems with known early-O and supergiant secondaries often have $\log L_{\rm X} > 33$. Other systems, e.g., WR~101k, which was observed repeatedly as part of the {\it XMM} and {\it Chandra} Galactic center surveys, are variable from observation to observation. WR 48a, the most X-ray luminous WR binary in Table 2, underwent a dramatic decline in X-ray flux in 2011, as observed with the {\it Swift} XRT (A. M. T. Pollock, private communication). Because of their lower mass-loss rates, the O+O binaries in Table 1 have far lower $L_{\rm X}$, with $\log L_{\rm X}/L_{\rm bol}$ in the range $-5.9$ to $-7.7$, than the WR stars in Table 2. Short-period O+O systems ($P < 10$~days) have soft X-ray spectra ($kT < 0.8$~keV), while longer-period systems show harder X-ray spectra ($kT > 1$~keV). This suggests that in close O+O binaries, the higher-density shocks, on average, undergo significant cooling, e.g., as a result of thin-shell mixing. For O+O systems with $\log L_{\rm X}/L_{\rm bol} < -7$, embedded wind shocks may account for a large fraction of the X-ray luminosity. \nocite{*}
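For reference, the tabulated $\log L_{\rm X}$ values follow from the standard conversion of unabsorbed flux and distance, $L_{\rm X} = 4\pi d^{2} F_{\rm X}$. As an illustrative worked example (the flux value here is hypothetical, not taken from the tables), a source at $d = 2.3$~kpc with an unabsorbed 0.5--8~keV flux of $10^{-13}$ erg cm$^{-2}$ s$^{-1}$ has

```latex
\begin{align*}
L_{\rm X} &= 4\pi d^{2} F_{\rm X}
           = 4\pi \left(2.3 \times 3.086\times10^{21}\ {\rm cm}\right)^{2}
             \times 10^{-13}\ {\rm erg\ cm^{-2}\ s^{-1}} \\
          &\approx 6.3\times10^{31}\ {\rm erg\ s^{-1}},
           \qquad \log L_{\rm X} \approx 31.8 .
\end{align*}
```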
\section{Introduction} \label{intro} A task-oriented dialogue system interacts with users in natural language to accomplish tasks such as restaurant reservation or flight booking. The goal of dialogue state tracking is to provide a compact representation of the conversation at each dialogue turn, called the \textit{dialogue state}, which the system uses to decide the next action to take. A typical dialogue state consists of the user's goal, the action of the current user utterance (\texttt{inform}, \texttt{request}, etc.), and the dialogue history \cite{young2013pomdp}. The user's goal, a set of slot-value pairs, is the most important of these. All of them are defined in a specially designed \textit{ontology} that restricts which slots the system can handle and the range of values each slot can take. The \textit{joint goal} is a set that accumulates turn goals up to the current turn. Joint goal accuracy is a rigorous evaluation metric, because each erroneous slot-value pair may propagate to the next few turns. In this paper, we focus on tracking the user's goal and use joint goal accuracy to evaluate our model. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{pictures/coloredexampledialogue} \caption{An example from the WoZ restaurant reservation corpus. Each dialogue turn contains a user utterance (labeled with a circle) and a system utterance (labeled with a rhombus); we annotate the \textit{turn inform}, \textit{turn request} and \textit{joint goal} in rounded rectangles, respectively.} \label{figure1} \end{figure} Considering the example of restaurant reservation, at each turn users can \textit{inform} the system of restrictions on their goals (\textit{e.g.}, \texttt{inform(food = thai)}) or \textit{request} further information they want (\textit{e.g.}, \texttt{request(phone number)}). Figure~\ref{figure1} shows an annotated dialogue from the WoZ 2.0 dataset.
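To make the joint-goal bookkeeping concrete, the following minimal Python sketch (our own illustration, not part of any released tracker code; the turn goals are hypothetical) accumulates per-turn \texttt{inform} goals into the joint goal and scores joint goal accuracy; a turn counts as correct only if every accumulated slot-value pair matches:

```python
def accumulate_joint_goal(turn_goals):
    """Fold per-turn inform goals (dicts of slot -> value) into the
    joint goal: later turns overwrite earlier values for the same slot."""
    joint = {}
    joint_per_turn = []
    for goal in turn_goals:
        joint.update(goal)
        joint_per_turn.append(dict(joint))
    return joint_per_turn

def joint_goal_accuracy(predicted, gold):
    """A turn is correct only if the full set of slot-value pairs
    matches, so a single wrong pair makes the whole turn wrong."""
    correct = sum(p == g for p, g in zip(predicted, gold))
    return correct / len(gold)

# Hypothetical turn goals in the restaurant-reservation style of Figure 1.
turns = [{"food": "thai"}, {"area": "centre"}, {"food": "chinese"}]
joint = accumulate_joint_goal(turns)
# joint[-1] == {"food": "chinese", "area": "centre"}
```

Note that this strictness is exactly why joint goal accuracy is a rigorous metric: one early error lowers every subsequent turn's score until the slot is corrected.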
\begin{figure*}[hbtp] \centering \includegraphics[width=\textwidth]{pictures/coloredoverview.pdf} \caption{The architecture of POGD. The four modules (\textit{i.e.}, Pointer, Generator, Switcher and Classifier) are jointly trained via multi-task learning. Inputs are the user and system utterance $U$, the slot under consideration $s$, and its value set $V^{S}$. Losses are calculated based on $Scores^{P_1}$, $Scores^{P_2}$, the final predicted value $\mathit{v}^{O}$, and the outputs of the Switcher and Classifier.} \label{figure2} \end{figure*} In many task-oriented dialogue system applications we have built in industry, the state tracker faces several challenges. First, as the complexity of tasks increases, more slots and values can be used during the conversation, and the state tracker needs to be more \textbf{accurate}; otherwise the system may fail to accomplish tasks. Second, the number of slots or tasks in the ontology may be large, so the state tracker requires \textbf{scalability} to large-scale multi-domain dialogues. Third, when new values or slots are introduced to the system, it is hard to collect large amounts of labeled data, so the ability of \textbf{few-shot} or even zero-shot learning on newly added data is required. To tackle these challenges, we propose a scalable multi-domain dialogue state tracker, the \textbf{P}oint-\textbf{O}r-\textbf{G}enerate \textbf{D}ialogue State Tracker (POGD), which can handle unseen values and is easy to generalize to new slots. Let us discuss the design of the proposed POGD model. By taking a close look at dialogue data examples, we find that the user's goal can be split into explicit and implicit cases\footnote{In this paper, we consider a value explicitly expressed if it has a Levenshtein distance of less than 3 from its form in the ontology.}. In the explicit case the value is expressed in the same way as in the ontology; otherwise, it is the implicit case.
The explicit case usually requires information searching and matching, while the implicit one requires generation. It thus seems natural to introduce the Pointer-Generator Network (PGNet) \cite{see2017get} to improve accuracy. However, the distribution of the two cases is unbalanced, \textit{i.e.}, matching is used much more than generating. Furthermore, values in the implicit case also follow an imbalanced distribution\footnote{In WoZ 2.0 there are 14.65\% implicit values, and in MultiWoZ 2.0 the number is 10.50\%. Among implicit values, the value \texttt{don't care} accounts for 82.49\% in WoZ 2.0 and 16.64\% in MultiWoZ 2.0, respectively.}. Even in the explicit case, values may not be expressed exactly as in the ontology; we call this rephrasing\footnote{In the explicit case, rephrased values take a proportion of 29.32\% in WoZ 2.0 and 32.1\% in MultiWoZ 2.0, respectively.} (\textit{e.g.}, \texttt{moderately} in the user's utterance and \texttt{moderate} in the ontology). For these reasons, a straightforward application of PGNet might perform poorly. We instead follow the same intuition as PGNet---point out slot values from the user's utterance or generate them based on slot-specific contexts. Figure~\ref{figure2} shows the architecture of POGD, which is designed as follows. A \textbf{Pointer} is trained to pick out values explicitly mentioned in utterances, while a \textbf{Generator} is trained to infer implicitly expressed ones. Different from PGNet, the Pointer and Generator are decomposed into two separate modules, and a \textbf{Switcher}---a binary classifier---is adopted to enable adaptive switching between them to fit the distribution. To handle rephrased values in the explicit case and the imbalanced distribution of implicit values, a lookup in the value set is performed via an attention mechanism in both the Pointer and the Generator. Finally, a \textbf{Classifier} is applied to determine whether a slot is relevant to an utterance or not.
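As a concrete illustration of the explicit/implicit split, the sketch below (our own code, not part of POGD; the threshold follows the footnote above) classifies a surface form as explicit when it lies within Levenshtein distance 3 of the canonical ontology value, so rephrasings such as \texttt{moderately} vs. \texttt{moderate} still count as explicit:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance, row by row."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def is_explicit(surface_form, ontology_value, max_dist=3):
    """Explicit case: surface form within Levenshtein distance < 3
    of the canonical ontology value, so minor rephrasing still matches."""
    return levenshtein(surface_form, ontology_value) < max_dist

# 'moderately' vs 'moderate' differ by two insertions -> explicit,
# while a paraphrase like 'dont mind' vs 'dontcare' falls to the
# implicit case and must be inferred by the Generator.
```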
POGD shares all parameters across all slots to be scalable and to achieve knowledge sharing. The training of the four modules is formulated as a multi-task learning procedure to promote the capability of generalization. POGD achieves state-of-the-art joint goal accuracy on the small-scale WoZ 2.0 \cite{wen2016network} dataset. On the large-scale multi-domain dataset MultiWoZ 2.0 \cite{budzianowski2018multiwoz}, our model obtains 39.15\% joint goal accuracy, outperforming prior work by 3.57\%. The \textbf{contributions} of this paper are threefold: \begin{itemize} \item We address the DST problem by solving two easier subtasks: 1) pointing out explicitly expressed slot values in the user's utterance, and 2) generating implicitly expressed ones based on slot-specific contexts. \item We propose a scalable multi-domain dialogue state tracker, POGD, which obtains state-of-the-art results. \item To our knowledge, this is the first attempt to evaluate the generalization capability of a dialogue state tracker on unseen slot values and new slots. \end{itemize} The rest of this paper is structured as follows: In Section~\ref{relatedwork} we give a brief review of previous works in the field and show their limitations. The architecture and other details of our model are described in Section~\ref{method}. We perform several experiments to show the advantages of POGD in Section~\ref{experiments}, followed by conclusions in Section~\ref{conclusion}. \section{Related Work} \label{relatedwork} Traditional works on DST, such as rule-based models \cite{wang2013simple,sun2014generalized} and statistical models \cite{thomson2010bayesian, young2010hidden, lee2013recipe,lee2013structured,sun2014sjtu,xie2015recurrent}, failed to produce satisfactory performance and are now rarely discussed.
Approaches using a separate Spoken Language Understanding (SLU) module \cite{henderson2012discriminative,zilka2015incremental,mrkvsic2015multi} suffered from error accumulation from the SLU, and relied on hand-crafted semantic dictionaries or delexicalization---replacing the values in the utterance with general tags to achieve generalization. Recently, deep learning has shown its power in the dialogue state tracking challenges (DSTCs) \cite{williams2013dialog,henderson2014second,henderson2014third}. A variety of DL-based methods were proposed: the Neural Belief Tracker (NBT) \cite{mrkvsic2016neural} applied representation learning to learn features as opposed to hand-crafting them; PtrNet \cite{xu2018end} aimed to handle unknown values; GLAD \cite{zhong2018global} and its further improved version GCE \cite{nouri2018toward} addressed the problem of extracting rare slot-value pairs; another improvement of GLAD \cite{sharma2019improving} used the relevant past user utterance to obtain better performance. However, the number of parameters of these models increases with the number of slots. As for scalable approaches, \citet{ramadan2018large} tried to share parameters across slots, but their model needs to iterate over all slots and values in the ontology at each dialogue turn. \citet{rastogi2017scalable} generated a fixed set of candidate values using a separate SLU module, and thus suffered from error accumulation. StateNet \cite{ren2018towards} reduced the dependence on the ontology, but no verification was done in their work. \section{Method} \label{method} In this section, we discuss the details of our model. We formulate state tracking as predicting the turn state, as in GLAD \cite{zhong2018global}, but we do not use the previous system acts in POGD's inputs. At each dialogue turn, POGD's input consists of the utterance $U$, a slot under consideration $s$, and its value set $V^{S}$.
For the utterance $U$, we simply concatenate the system and user utterances ($U^{sys}$ and $U^{usr}$) with a special symbol ${<}$\texttt{usr}${>}$: \begin{equation} U=U^{sys} \oplus {<}\texttt{usr}{>} \oplus U^{usr} \end{equation} \subsection{Pointer} The Pointer is inspired by the previous work PtrNet \cite{xu2018end}, which addressed the DST problem using Pointer Networks \cite{vinyals2015pointer}. We use a bidirectional LSTM \cite{hochreiter1997long} to get the encoding $H^{P}$ of $U$ in the same way as PtrNet: \begin{equation} H^{P} = \mathrm{BiLSTM}^{P}(U)\in \mathbb{R}^{n\times 2d_h} \end{equation} where $n$ is the number of words in $U$ and $d_h$ is the dimension of the LSTM hidden states. Different from PtrNet, the Pointer module is not designed in an encoder-decoder structure; we use two linear layers ($\mathrm{Linear}(X) = WX+b $), $Linear^{P_1}$ and $Linear^{P_2}$, to encode slot information: \begin{equation} S^{P_i} = Linear^{P_i}(s)\in \mathbb{R}^{1 \times 2d_h} \quad i\in \{1,2\} \end{equation} Then, the starting position $span^{p_1}$ and the ending position $span^{p_2}$ of the predicted value are computed via $Attn^{P_i}$ in Figure~\ref{figure2} as follows, \begin{align} \alpha^{P_{i}}_{j} &= v^{\mathrm{T}}\mathrm{tanh}(\mathrm{Linear}(H^{P}_{j} + S^{P_i}))\\ Scores^{P_{i}}_{j} &= \exp{\alpha^{P_{i}}_{j}}/\sum_{k=1}^{n}\exp{\alpha^{P_{i}}_{k}}\\ span^{p_i} &= \mathop{\arg\max}\limits_{j} Scores^{P_{i}}_{j} \end{align} where $i\in \{1,2\}$, $1\le j \le n$, and we define a trainable parameter $v$ and a $\mathrm{Linear}$ in each attention module. Note that there is no guarantee that $span^{p_1} \le span^{p_2}$; when $span^{p_1} > span^{p_2}$, the Pointer's output is not reliable, and we set the Switcher's output to choose the Generator's output as the final predicted value. We obtain $U^{cut}$ by summing the word embeddings between $span^{p_1}$ and $span^{p_2}$ in $U$ and computing its unit vector.
It is used to calculate the cosine similarity with the embeddings of values, as shown in Figure~\ref{figure2}. We find that values and their rephrasings have similar embeddings, which is why the Pointer can overcome rephrased values via the attention mechanism. \begin{align} U^{cut}_{origin} &= \sum_{j=span^{p_1}}^{span^{p_2}}U_j\\ U^{cut} &= U^{cut}_{origin} / \left\|U^{cut}_{origin}\right\|_2 \end{align} The cosine similarity between $U^{cut}$ and the values in the value set $V^{S}$ is calculated using dot attention \cite{luong2015effective}, \begin{gather} v^{P} = \mathop{\arg\max}\limits_{V^{S}_{j}} U^{cut} \cdot V^{S}_{j} \end{gather} where $j$ ranges over the number of values in $V^{S}$; we select the value with the maximum attention score, $v^{P}$, as the output of the Pointer. We also calculate a context $C^{P}$ by the following steps for further use in the Classifier and Switcher. \begin{align} C^{P_{i}} &= \sum_{j=1}^{n} Scores_{j}^{P_{i}} H_{j}^{P}\\ C^{P} &= C^{P_{1}} + C^{P_{2}} \end{align} \subsection{Generator} The Generator works similarly to the Pointer, as described in Figure~\ref{figure2}: after encoding the utterance and slot and computing attention scores in the same way as the Pointer, a slot-specific context $C^{G}$ is computed as follows. \begin{align} Scores^{G} &= Attn^{G}(S^{G}, H^{G})\\ C^{G} &= \sum_{j=1}^{n} Scores_{j}^{G}H_{j}^{G} \end{align} Different from the Pointer, the Generator is designed to infer implicit values based on slot-specific contexts, so it needs to handle more complex similarity than the word-embedding similarity computed in $Attn^{P}_{V}$ in the Pointer. $C^{G}$ is transformed by a multilayer perceptron (\textrm{MLP}) \cite{gardner1998artificial} with ReLU \cite{nair2010rectified} activation. Let $\mathrm{MLP^{i}_{ReLU}}$ denote an $i$-layer MLP with ReLU activation at each hidden layer; the transformed context is computed as follows.
\begin{gather} C^{G^{\prime}} = \mathrm{MLP^{2}_{ReLU}}(C^{G}) \end{gather} The attention $Attn_{V}^{G}$ performed in the Generator deals with the imbalance of implicit values. Traditional sequence-to-sequence models like PGNet \cite{see2017get} use a Linear to select the final output from the vocabulary, which may suffer from an imbalanced distribution of values, because each value in the vocabulary corresponds to a weight in the Linear. So instead of using a Linear, we use an attention mechanism; the trainable parameters $v$ and $Linear$ in $Attn_{V}^{G}$ are shared across all values, which reduces the impact of data imbalance on performance. $Attn_{V}^{G}$ does not perform dot attention like $Attn^{P}_{V}$ in the Pointer, but works the same as $Attn^{G}$. We again choose the value with the maximum score, $v^{G}$, as the final output of the Generator. \begin{align} Scores_{V}^{G} &= Attn_{V}^{G}(C^{G^{\prime}}, V^{S})\\ v^{G} &= \mathop{\arg\max}\limits_{V_{j} ^{S}} Scores_{V}^{G} \end{align} \subsection{Classifier \& Switcher} The Classifier and Switcher are two binary classifiers with the same architecture---a 3-layer perceptron with dropout \cite{srivastava2014dropout} at the hidden layer---and they also have the same inputs except for the encoding of slots. But the goals of the two modules are different: the Switcher is trained to choose the final predicted value from the Pointer's output $v^{P}$ and the Generator's output $v^{G}$, while the Classifier is trained to determine whether the slot under consideration $s$ is relevant to the input utterance $U$ or not. We discuss how the Switcher works; the Classifier acts the same. The Switcher's input consists of three parts: the Pointer's attention context $C^{P}$, the Generator's attention context $C^{G}$, and the encoding of the slot. As shown in Figure~\ref{figure2}, they are simply concatenated together. Let $I_{Switcher}$ denote the input of the Switcher; the Switcher works as follows.
\begin{align} I_{Switcher}&=C^{G} \oplus C^{P} \oplus Linear^{S}(s) \\ h &= \mathrm{MLP^{2}_{ReLU}}(I_{Switcher})\\ h^{\prime} &= \mathrm{dropout}(h)\\ output^{S} &= \mathrm{sigmoid}(\mathrm{Linear}(h^{\prime})) \end{align} where $Linear^{S}$ is the slot encoder of the Switcher, as shown in Figure~\ref{figure2}. The Switcher lets the Pointer and Generator learn simplified tasks---the Pointer learns to point only at pointable values and the Generator learns to generate only implicit values. The final output is produced by the $\mathrm{sigmoid}$ \cite{lecun2012efficient} function. In our experiments, the final predicted value $v^{O}$ is produced as follows. \begin{align} v^{O\prime} &= v^{P} output^{S} + v^{G}(1-output^{S})\\ v^{O} &= v^{O\prime} output^{C} \end{align} Here $output^{S}$ and $output^{C} \in \{0,1\}$ are the outputs of the Switcher and Classifier. \subsection{Multi-Task Learning} The four modules in our model are jointly trained via multi-task learning. In this section we discuss how we separate the tasks and which parameters are shared. We use 5 loss functions to train our model. In the Pointer, we use $Scores^{P_{i}}$, $i\in \{1,2\}$, as outputs and the ground truth starting and ending positions of values as labels to compute two cross-entropy losses $Loss^{P_{1}}$ and $Loss^{P_{2}}$. The Switcher and Classifier use expected outputs generated from the dataset as labels and employ binary cross-entropy losses $Loss^{S}$ and $Loss^{C}$. During the training process, we use the Switcher's labels to produce the final output $v^{O}$, then a cross-entropy loss $Loss^{final}$ is computed between $v^{O}$ and the ground truth output of the tracker. The Generator is a special case: it doesn't have its own loss function. As we described in the last subsection, $v^{O}$ only contains the Pointer's output $v^{P}$ or the Generator's output $v^{G}$, so the error back-propagation of $Loss^{final}$ only goes through either the Pointer or the Generator.
We use cut values in the Pointer, and the indexing operation is not differentiable, so the Pointer needs its own losses. The Classifier needs negative sampling, which brings more difficulty to the joint training process. Negative samples are randomly selected from unrelated slots with some ratio, and we must make them affect only the Classifier. When a negative sample goes through the model, the Switcher's label is set to its own output before back-propagation, and the other modules' labels are set to zero and ignored by the cross-entropy loss functions. The Classifier and Switcher use the Pointer's and Generator's attention contexts as part of their input; \citet{caruana1997multitask} showed that multi-task learning can improve generalization, and our experiments further verify this argument. \section{Experiments} \label{experiments} In this section, we perform several experiments to show the advantages of our model. We report the joint goal accuracy of POGD on two datasets and compare it to several previous approaches. Then we examine our model's generalization to unseen values and its capability of few-shot learning on new slots. \subsection{Dataset} \begin{table}[htbp] \centering \begin{tabular}{l|l|l} \hline \textbf{Metric} & \textbf{WoZ} & \textbf{MultiWoZ} \\ \hline \textrm{dialogues} & 600 & 8,438 \\ \textrm{total turns} & 4,472 & 115,424 \\ \textrm{average tokens per turn} & 11.24 & 13.18 \\ \textrm{inform slots} & 3 & 35 \\ \textrm{total values} & 99 & 4,510 \\ \textrm{unique tokens} & 2,142 & 24,071 \\ \hline \end{tabular} \caption{Comparison of the WoZ 2.0 and MultiWoZ 2.0 datasets.} \label{datasets} \end{table} We mainly use two datasets to evaluate our model. The first is a small-scale dataset, the second version of Wizard-of-Oz (\textbf{WoZ 2.0}) \cite{wen2016network}, in which the user's goal is to find a suitable restaurant around Cambridge.
The second is a large-scale multi-domain dataset, the second version of the multi-domain Wizard-of-Oz dataset (\textbf{MultiWoZ 2.0}) \cite{budzianowski2018multiwoz}, which contains 6 domains, but we only use the 5 domains that occur in its test set, as \citet{ramadan2018large} did. Both of them are human-machine conversations. A comparison of the two datasets is shown in Table~\ref{datasets}. \begin{table*}[!tp] \centering \begin{tabular}{l|l|l} \hline DST models & \begin{tabular}[c]{@{}l@{}}Joint Acc.\\ WoZ 2.0 (\%)\end{tabular} & \begin{tabular}[c]{@{}l@{}}Joint Acc.\\ MultiWoZ 2.0 (\%)\end{tabular} \\ \hline Belief Tracking: CNN \cite{ramadan2018large} & 85.5 & 25.83 \\ Neural Belief Tracker: NBT-DNN \cite{mrkvsic2016neural} & 84.4 & / \\ GLAD \cite{zhong2018global} & 88.1 & 35.57 \\ GCE \cite{nouri2018toward} & 88.5 & 35.58 \\ GLAD + RC + FS \cite{sharma2019improving} & \textbf{89.2} & / \\ StateNet \cite{ren2018towards} & 88.9 & / \\ \hline \textbf{POGD (ours)} & 88.7 & \textbf{39.15} \\ \hline \end{tabular} \caption{Joint goal accuracy on the WoZ 2.0 and MultiWoZ 2.0 test sets vs. various approaches as reported in the literature.} \label{tab:result} \end{table*} \subsection{Implementation Details} We use \textit{semantically specialised} Paragram-SL999 vectors \cite{wieting2015paraphrase}, as in the Neural Belief Tracker \cite{mrkvsic2016neural}, and do not fine-tune the embeddings of utterances and values. Slot embeddings are randomly initialized. The model is trained using Adam \cite{kingma2014adam} with learning rate 1e-3. We apply dropout \cite{srivastava2014dropout} with rate 0.3 to the word embeddings and to the hidden layers of the $\mathrm{MLPs}$ in the Classifier and Switcher. The position labels used to train the Pointer are generated in the same way as in PtrNet \cite{xu2018end}: we use the last occurrence of the reference value in the dialogue history, and if a subsequence of an utterance in the history has a Levenshtein distance of less than 3 from the value, it is treated as a successful match.
The Switcher's labels are generated at the same time---whether the matching is successful or not. The Classifier's labels need negative sampling: we randomly choose unrelated slots as negative samples, with a probability of 13/30 in MultiWoZ 2.0, and in the WoZ 2.0 dataset we use all unrelated slots as negative samples. We train the model for 400 epochs with L2 regularization 2e-7 on WoZ 2.0, and for 50 epochs with L2 regularization 1e-7 on MultiWoZ 2.0. Other details of data generation and the parameters of POGD are described in the Appendix. \subsection{Performance} As described in Section~\ref{intro}, joint goal accuracy is a widely used metric in the DST task. Table~\ref{tab:result} compares the joint goal accuracy of our model against various previously reported baselines on the WoZ 2.0 and MultiWoZ 2.0 test sets. POGD achieves a competitive result of 88.70\% on the small dataset and a state-of-the-art result of 39.15\% on the large-scale dataset. \subsection{Generalization} In this section, we show that the POGD model generalizes in two respects: unseen values and new slots. New values, also called unseen values, are values that do not appear during the training process. \subsubsection{Unseen values} Generalization to unseen values is evaluated on the WoZ 2.0 dataset. We randomly choose 15\% to 55\% of the values of the slot \textit{food}, which contains 76 values in total, and delete the data containing these values from the training set. We report the Precision, Recall and F1 score of these values on the test set, as shown in Table~\ref{tab:unseen}. The experiments here are not intended to obtain the best performance but to prove that our POGD model \textbf{can} handle unseen values, so we only train for 20 epochs to get the results in Table~\ref{tab:unseen}. It is important to emphasize that POGD can generalize to unseen values without using an SLU module or delexicalization like previous works \cite{rastogi2017scalable, ren2018towards}.
\begin{table}[htbp] \centering \begin{tabular}{l|l|l|l} \hline unseen(\%) & Precision & Recall & F1 score \\ \hline 15 (4.15) & 0.9592 & 0.9216 & 0.9400 \\ 25 (10.55) & 0.9746 & 0.8647 & 0.9163 \\ 35 (11.35) & 0.9565 & 0.8333 & 0.8907 \\ 45 (16.31) & 0.9776 & 0.8137 & 0.8881 \\ 55 (24.80) & 0.9132 & 0.8155 & 0.8616 \\ \hline \end{tabular} \caption{POGD's performance on unseen values. Numbers in brackets indicate the proportion of deleted training examples. Unseen values are randomly selected, and each value takes a different proportion of the training data.} \label{tab:unseen} \end{table} \subsubsection{New slots} Generalization to new slots is rarely discussed. Recent approaches like NBT \cite{mrkvsic2016neural} and PtrNet \cite{xu2018end} train a separate model for each slot type; when facing a new slot, they need to train an entirely new model. Scalable approaches \cite{zhong2018global,ren2018towards,ramadan2018large,rastogi2017scalable,nouri2018toward} might generalize to a new slot by fine-tuning their models, but no such experiments were reported in their works. \begin{figure}[tp] \includegraphics[width=0.48\textwidth]{pictures/slot.pdf} \caption{The similarity between slot embeddings. For each edge, the target slot is the one with the highest similarity to the source node, and the thickness of the edge indicates the weight, i.e. the similarity.} \label{figure3} \end{figure} In our POGD model, parameters are shared across all slots to share knowledge; when a new slot is added to the ontology, the model can reach satisfactory performance with little data and few training epochs. Furthermore, we find that during training the similarity between slot embeddings becomes consistent with the similarity of the slot values' \textit{entity types}. We verify this by computing the cosine similarity between slot embeddings. For each slot we keep its most similar slot, use the similarity between their embeddings as the edge weight, and construct a directed graph.
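The construction of this graph can be sketched as follows; this is a minimal numpy sketch with an illustrative function name, not the actual implementation.

```python
import numpy as np

def most_similar_slot_edges(slot_names, slot_embeddings):
    """For each slot, keep only its most similar other slot (cosine
    similarity between the learnt slot embeddings) and use the
    similarity as the edge weight, yielding a directed graph."""
    E = np.asarray(slot_embeddings, dtype=float)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)   # unit-normalize rows
    sim = E @ E.T                                      # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    edges = {}
    for i, name in enumerate(slot_names):
        j = int(np.argmax(sim[i]))
        edges[name] = (slot_names[j], float(sim[i, j]))
    return edges
```

Each slot thus emits exactly one weighted edge to its nearest neighbour in embedding space, which is the directed graph plotted below.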
We plot the graph using Gephi\footnote{https://gephi.org/} with the Fruchterman–Reingold algorithm \cite{fruchterman1991graph}, as shown in Figure~\ref{figure3}. We find that a pair of slots has a higher similarity when their corresponding values have a more similar entity type. Experiments show that this phenomenon can be used to improve the convergence speed of our model. \begin{figure}[tp] \includegraphics[width=0.48\textwidth]{pictures/few_shot.pdf} \caption{Results of few-shot learning. At each point we train the model for 10 epochs and select the best result; \textit{full} at the end of the x-axis means we use the full data (19651 examples) to train the model.} \label{few_shot} \end{figure} \begin{figure}[!hb] \includegraphics[width=0.48\textwidth]{pictures/speed.pdf} \caption{Results of convergence speed tests. The results here are generated by fine-tuning the pretrained model with only 700 of 19651 examples (including negative examples) for 5 epochs.} \label{conv_speed} \end{figure} The following experiments demonstrate POGD's few-shot learning ability and convergence speed. In this subsection, data of 30 slots from 5 domains with a 1:10 negative sampling rate are used. We train a model for 10 epochs on the data of the 29 slots other than \texttt{train departure} as a pretrained model, and use \texttt{train departure}'s data for evaluation. We use this slot because it is complex enough and has fewer values overlapping with its most similar slot, \texttt{restaurant name}, which ensures the validity of the experiments. To show the capability of few-shot learning, we randomly choose training examples from the data of slot \texttt{train departure}, which contains 19651 examples in total (including negative examples); after fine-tuning the pretrained model on these examples for 10 epochs, we report the best performance, with results shown in Figure~\ref{few_shot}.
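The protocol can be outlined as follows; this is only a hypothetical sketch of the experiment loop, where `fine_tune_and_eval` stands in for the actual fine-tuning and evaluation routines.

```python
import random

def few_shot_curve(examples, sizes, fine_tune_and_eval, seed=0):
    """Few-shot protocol sketch: for each subset size, draw a random
    sample of the new slot's data, fine-tune a copy of the pretrained
    model on it, and record the best score over the fine-tuning epochs.
    `fine_tune_and_eval` is a placeholder for the real routine."""
    rng = random.Random(seed)
    return {n: fine_tune_and_eval(rng.sample(examples, n)) for n in sizes}
```

In the experiments above, the subset sizes range from a few hundred examples up to the full 19651.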
We find that with only 700 of 19651 (\textbf{3.56\%}) training examples we can get nearly the same results as using the full data. We compare the convergence speed when the embedding of slot \texttt{train departure} is randomly initialized with that when it is initialized from the embedding of \texttt{restaurant name}; the results shown in Figure~\ref{conv_speed} indicate that initializing with a similar slot's embedding can significantly increase the convergence speed. However, if we fine-tune the pretrained model using only the new slot's data, the pretrained model may degrade after several epochs, so an efficient way to learn a new slot is to fine-tune the pretrained model alternately on the slot-specific data and the full data, with the new slot initialized from a similar slot's embedding. \subsection{Ablation study} \label{ablationstudy} We perform ablation experiments to analyze the effectiveness of the different components of POGD. The results of these experiments are shown in Table~\ref{tab:ablation}. \begin{table}[h] \centering \begin{tabular}{l|l|l} \hline models & \begin{tabular}[c]{@{}l@{}}joint goal\\ WoZ 2.0\end{tabular} & \begin{tabular}[c]{@{}l@{}}joint goal\\ MultiWoZ 2.0\end{tabular} \\ \hline POGD & 0.8870 & 0.3915 \\ POGD+C & 0.9411 & 0.5849 \\ POGD+S & 0.8451 & 0.3206 \\ POGD+C+S & 0.9350 & 0.5942 \\ only Pointer & 0.7357 & 0.5615 \\ only Generator & 0.0334 & 0.0297 \\ \hline \end{tabular} \caption{Results of ablation experiments of POGD. "+C" means we use the labels of the Classifier as its outputs, "+S" means the same for the Switcher.
Only Pointer (Generator) means the Switcher always chooses the output of the Pointer (Generator) and uses the Classifier's labels as its outputs.} \label{tab:ablation} \end{table} \textbf{Combining the Pointer and the Generator yields strong performance at figuring out values given a correct slot.} When the Classifier uses the ground-truth labels, we get a 94.11\% joint goal accuracy on WoZ 2.0 and 58.49\% on MultiWoZ 2.0, outperforming the previous state of the art \cite{sharma2019improving,nouri2018toward} by 4.91\% and 22.91\% respectively. This is because the binary Switcher lets each handle a simplified case: the Pointer only learns to point out values from the user's utterance, and the Generator only learns to infer implicit values. \textbf{The Switcher chooses the better output of the Pointer and the Generator.} From the results we notice that when only the Switcher uses the generated labels as outputs, the performance of our model is slightly worse. After analyzing the output of the model, we found two reasons. One is that the dialogue state does not always update as soon as a value is mentioned by the user, while the Switcher's labels are generated turn by turn; if a value's first occurrence is missed, it is not corrected in the following turns. For example, a user informs the system that he wants to find a restaurant that serves Chinese food, but in some data examples the slot-value pair \texttt{food=chinese} is not added to the dialogue state until the booking is completed. In such cases, the generated labels cause all of these turn states to be wrong, but the Switcher and the Classifier may decide to output this pair when it is first mentioned, so after several turns of wrong states the model may finally reach a correct state (\textit{e.g.}, when the booking is completed). The other reason is that a pointable value is not always produced by the Pointer; it may be generated by the Generator with higher confidence. For these two reasons, the Switcher produces outputs that differ from the generated labels yet perform better.
\begin{table}[ht] \centering \begin{tabular}{l|l|l} \hline & \begin{tabular}[c]{@{}l@{}}Acc\\ WoZ 2.0\end{tabular} & \begin{tabular}[c]{@{}l@{}}Acc\\ MultiWoZ 2.0\end{tabular} \\ \hline Switcher & 0.9782 & 0.9837 \\ Classifier & 0.9793 & 0.9888 \\ \hline \end{tabular} \caption{Performance of the Switcher and the Classifier.} \label{tab:perfsc} \end{table} \textbf{The Classifier works well but suffers considerably from error accumulation.} Even though it has high accuracy, as illustrated in Table~\ref{tab:perfsc}, as the number of slots increases each percentage point of Classifier error causes a greater loss in joint goal accuracy, because at each turn we iterate over all slots and try to figure out their values, although most of them are not relevant to the utterance. This is the biggest limitation of our model, and we plan to redesign this component in a more robust way in the future. \section{Conclusion} \label{conclusion} We propose the Point-Or-Generate Dialogue State Tracker (POGD)---a scalable multi-domain dialogue state tracker. POGD can handle unseen values and reduces the effort required when new slots are added to the ontology. Our model obtains 88.7\% joint goal accuracy on the WoZ 2.0 dataset, and on the large-scale multi-domain dataset MultiWoZ 2.0 it obtains 39.15\% joint goal accuracy, outperforming prior work by 3.57\%.
\section{Introduction} Optimizing the generalization performance of overparametrized neural networks is one of the main challenges in machine learning. A crucial role is played by gradient-based training algorithms, which converge to solutions that generalize well even when no explicit regularization of the model is used \citep{zhang2021understanding}. Mini-batch stochastic gradient descent (SGD) is the workhorse algorithm for training modern neural networks. Yet, key aspects of these algorithms are debated. {\it Effect on performance:} A popular idea has been that mini-batch SGD can generalize better than full-batch gradient descent (GD) \citep{heskes1993, lecun2012, keskar2016, hochreiter1997flat, jastrzkebski2017, entropysgd2019}, yet this view is debated \citep{hoffer2017, dinh2017, shallue2018, zhang2019}. In fact, comparing SGD and GD at a fixed number of training epochs leads to a generalization gap \citep{keskar2016} that can be closed by training longer with a fixed number of training steps \citep{hoffer2017, smith2020}. More generally, the choice of the computational budget can affect which algorithm performs better \citep{shallue2018,smith2020}. {\it Theories for the role of SGD:} Several works have argued that larger SGD stochasticity leads the dynamics toward flatter minima of the loss landscape, and it has been argued that this effect leads to improved performance \citep{hochreiter1997flat, keskar2016, zhang2018energy, smith2018bayesian, wu2018sgd}.
By contrast, other studies suggest that the SGD noise biases the model in a manner similar to initializing the network with small weights, and helps recover sparse predictors \citep{blanc2020implicit, haochen2021, flammarion2021}. \subsection{This work} In this work, we clarify these two debates by performing systematic empirical studies of how performance is affected by the noise magnitude of SGD or temperature $T$ (the ratio between the learning rate $\eta$ and the batch size $B$ \citep{jastrzkebski2017, zhang2019, smith2020}), by the initialization scale $\alpha$, and by the size of the training set $P$. The initialization scale $\alpha$ has rarely been considered in empirical studies so far, yet it governs the training regimes in which nets operate. For large $\alpha$, tiny changes of the weights are sufficient to fit the data: the predictor is approximately linear in its parameters, corresponding to the \textit{kernel} or \textit{lazy} regime \citep{jacot2018, chizat2019lazy}. By contrast, for small initialization, networks can learn the relevant features of the task and the dynamics is non-linear, corresponding to the so-called feature-learning regime \citep{rotskoff2018, mei2018, sirignano2020}. We also deal with the computational budget issue by considering the hinge loss $l(y,\hat{y})=(1-y\hat{y})^+$, allowing us to train networks until the time $t^*$ where the loss is strictly zero, and the dynamics stops. Our central empirical results are: \begin{itemize} \item[(i)] Obtaining phase diagrams for performance in the $(\alpha,T)$ plane. They show that SGD noise can be detrimental or instead useful depending on the training regime, even in the absence of budget constraints. This observation clarifies why different conclusions on the benefits of SGD were previously reached.
\item[(ii)] Although we find that increasing $T$ or decreasing $\alpha$ both allow the net to escape the lazy regime, these changes can have opposite effects on performance, in disagreement with simple models \citep{flammarion2021}. \item[(iii)] Most importantly, we reveal that several observables characterizing the dynamics follow scaling laws. Denote by $\Delta {\bf \omega}$ the relative weights variation accumulated after training, $t^*$ the training time defined as the learning rate times the number of training steps required to bring a hinge loss to zero, and $T_{c}$ the characteristic temperature scale for which performance is affected by SGD. We find that for $\alpha\gg 1$, these quantities do not depend on $\alpha$ and follow: \begin{equation*} \Delta {\bf \omega}\sim T^\delta P^\gamma\ \ t^*\sim T P^b \ \ T_{c}\sim P^{-a} \end{equation*} where $\delta,\gamma,b,a$ are exponents. Assuming that $T_{c}$ is the temperature scale where the network exits the lazy regime, i.e. $\Delta {\bf \omega}={\cal O}(1)$, gives $a=\gamma/\delta$ in agreement with our observations. \item[(iv)] We rationalize these findings using a teacher-student perceptron model, for which $ \Delta {\bf \omega}$ and $t^*$ also display power-law dependence on $T$ and $P$. We show that SGD noise increases weights in directions irrelevant to the task, implying that the correct weights must grow much larger to fit data, thus increasing both $t^*$ and $\Delta {\bf \omega}$. We compute the dependence of these effects on the size of the training set, and show that this dependence varies qualitatively with the distribution of data near the decision boundary. \end{itemize} Overall, instead of a static view where SGD noise would bias networks toward broader minima of the population loss, these results support a dynamical viewpoint where SGD noise delays the end of training. This effect allows the weights to grow more, affecting performance the most when one escapes the lazy regime. 
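For concreteness, exponents of the kind appearing in (iii) can be extracted from measurements by a joint log-log least-squares fit; the sketch below is illustrative (the function name and the synthetic data in the test are assumptions of this example, not the paper's pipeline).

```python
import numpy as np

def fit_exponents(T, P, dw):
    """Least-squares fit of log(dw) = delta*log(T) + gamma*log(P) + c,
    i.e. the exponents of the scaling law dw ~ T^delta P^gamma.
    The predicted crossover-temperature exponent is a = gamma/delta."""
    A = np.stack([np.log(T), np.log(P), np.ones_like(T)], axis=1)
    coef, *_ = np.linalg.lstsq(A, np.log(dw), rcond=None)
    delta, gamma = coef[0], coef[1]
    return delta, gamma, gamma / delta
```

Imposing $\Delta\omega = \mathcal{O}(1)$ on the fitted law then directly yields the crossover temperature $T_c \sim P^{-\gamma/\delta}$.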
\begin{figure*} \centering \includegraphics[width=0.49\textwidth]{FC-5L-mnist-test-phase-horizontal-noH.png} \includegraphics[width=0.49\textwidth]{CNN-cifar-test_phase-horizontal-noH.png} \vspace{-.5 cm} \caption{\textbf{Test error of deep networks on image data-sets, for varying $T$ and $\alpha$ and fixed $P=1024$.} Batch size $B$ is kept fixed and learning rate $\eta$ is varied ($T=\eta/B$). \textbf{(a)} 5-hidden layers fully-connected network (FC) on parity MNIST with $B=16$. \textbf{(b)} 9-hidden layers CNN (MNAS) on CIFAR (animals vs the rest) with $B=64$. Black dots correspond to training runs that do not converge. The black dashed lines indicate the maximal temperatures $T_{max}$. The lowest test error is achieved in the feature regime ($\alpha\ll 1$), for a temperature $T_{opt}\propto T_{max}$. In the lazy regime ($\alpha\gg 1$), performance is best for the highest $T$ for FC on MNIST. Although it is not apparent here for the CNN on CIFAR, this is also the case as the training set increases (see below). In \textit{(a)}, the number of hidden layers is $D=5$ and $T_{max}\sim \alpha^{\frac{D-1}{D+1}} = \alpha^{\frac{2}{3}}$ (black dashed line) when $\alpha\ll1$ as argued in \ref{eq:T_feature}. Similarly in \textit{(b)}, $D=9$ and $T_{max}\sim \alpha^{\frac{D-1}{D+1}} = \alpha^{\frac{4}{5}}$ (black dashed line). } \label{fig:test_phase} \end{figure*} \subsection{Related works} More related works are indicated in Appendix \ref{app:other_works}. \section{Empirical analysis} \label{sec:empiric} \subsection{General setting and notation} \label{sec:definition} We consider binary classification on the data $\{{\bm{x}}_{\mu}\}_{\mu=1,...,P}\in \mathbb{R}^d$ with labels $\{y_\mu\}_{\mu=1,...,P} \in \{-1, +1\}$. $P$ is the size of the training set. Given a predictor $\hat{y}_{\mu}$, the hinge loss on the sample $\mu$ is defined as $l(y_{\mu},\hat{y}_{\mu}) = (1-y_{\mu}\hat{y}_{\mu})^+$, where $(x)^+ = \max(0,x)$.
To control between feature and lazy training, we multiply the model output by $\alpha$ \citep{chizat2019lazy}. For the hinge loss, it is equivalent to changing the loss margin to $1/\alpha$, therefore we study the training loss: \begin{equation} L({\bm{w}}) = \frac{1}{P} \sum_{\mu=1}^P (\alpha^{-1}-y_{\mu} F({\bm{w}},{\bm{x}}_{\mu}))^+ \label{eq:hingeLoss} \end{equation} where $F({\bm{w}},{\bm{x}}_{\mu})$ is the model predictor with weights ${\bm{w}}$ on the datum ${\bm{x}}_{\mu}$. The model predictor at time $t$ corresponds to $F({\bm{w}},{\bm{x}}_{\mu}) = f({\bm{w}}^t,{\bm{x}}_\mu) - f({\bm{w}}^0,{\bm{x}}_\mu)$, where $f({\bm{w}}^t,{\bm{x}}_\mu)$ is the output of a neural net with weights ${\bm{w}}^t$ at time $t$ and ${\bm{w}}^0$ are the weights at initialization. For a network of width $h$, the weights are initialized as Gaussian random numbers with standard deviation $1/\sqrt{h}$ for the hidden layers and $1/h$ for the output layer. Such an initialization ensures that the feature learning limit corresponds to $\alpha\ll1$ while the lazy training limit corresponds to $\alpha\gg 1$, and that every layer has a similar change of weights \citep{geiger2020disentangling,yang2021tensor}.\\ The stochastic gradient descent updating equation is: \begin{equation}\label{eq:SGD} {\bm{w}}^{t+\eta} ={\bm{w}}^t + \frac{\eta}{B}\sum\limits_{\mu \in {\mathbb{B}}_t} \theta\left(\alpha^{-1}-y_{\mu} F({\bm{w}},{\bm{x}}_{\mu})\right) y_{\mu} \nabla_{{\bm{w}}} f({\bm{w}}^t,{\bm{x}}_\mu) \end{equation} where $\theta(x)$ is the Heaviside step function, ${\mathbb{B}}_t \subset \{1,...,P\}$ is the batch at time $t$ and $B$ is its size. The time $t$ corresponds to the number of training steps times the learning rate $\eta$. The batch ${\mathbb{B}}_t$ is randomly selected at each time step among all the $P$ data. The learning rate $\eta$ is kept constant during training. 
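A minimal numpy sketch of one run of this procedure, stopping as soon as the training loss is exactly zero, is given below. Function names are illustrative, and the linear model in the usage is a toy stand-in for the networks described next.

```python
import numpy as np

def sgd_hinge(X, y, F, grad_F, w0, eta, B, alpha, max_steps=100_000, seed=0):
    """Mini-batch SGD on the alpha-rescaled hinge loss.
    F(w, X) returns predictions; grad_F(w, X) returns per-sample
    gradients dF/dw of shape (batch, dim). Batches are drawn at random
    at each step. Training stops at the first step where the loss is
    exactly zero; returns (w, t*) with t* = eta * number_of_steps."""
    rng = np.random.default_rng(seed)
    w, P = w0.astype(float), len(y)
    for step in range(max_steps):
        if np.all(y * F(w, X) >= 1.0 / alpha):   # L(w) = 0: dynamics stops
            return w, eta * step
        batch = rng.choice(P, size=B, replace=False)
        Xb, yb = X[batch], y[batch]
        active = (yb * F(w, Xb)) < 1.0 / alpha   # Heaviside factor in the update
        w += (eta / B) * ((active * yb)[:, None] * grad_F(w, Xb)).sum(axis=0)
    return w, eta * max_steps
```

For a linear model one would pass, e.g., `F = lambda w, X: X @ w / np.sqrt(d)` and `grad_F = lambda w, X: X / np.sqrt(d)`.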
The end of training is reached when $L({\bm{w}}^{t^*})=0$.\\ The batch size $B$ is taken small enough to be in the ``noise dominated'' regime \citep{smith2020, zhang2019}, where the dynamics depends on the SGD temperature $T=\eta/B$. Empirical verification of this fact is provided in Appendix \ref{app:details}.\\ Below we use a 5-hidden-layers fully-connected (FC) network and a 9-hidden-layers convolutional neural network (CNN) (MNAS architecture \citep{mnasnet}). In Appendix \ref{app:details} we report data also for a 3-hidden-layers CNN (simple-CNN). We consider the binary datasets MNIST (even vs odd numbers) and CIFAR10 (animals vs the rest). All the networks use ReLU activation functions. The code with all the details of the experiments is provided at \href{https://tinyurl.com/yh6kay4b}{https://tinyurl.com/yh6kay4b}. \subsection{Performance in the $(\alpha, T)$ phase diagram} Fig. \ref{fig:test_phase}-(a) shows the test error for a FC network trained on MNIST and Fig. \ref{fig:test_phase}-(b) shows the same quantity obtained after training a CNN on CIFAR10. The black dots correspond to the training loss exploding to infinity due to a too large learning rate. Therefore, the dashed black lines indicate the maximal temperature $T_{max}$ for which SGD converges. \\ From Fig. \ref{fig:test_phase} we make the following observations: (i) In the feature regime, both $T_{max}$ and the temperature of optimal performance $T_{opt}$ follow $T_{max}\sim T_{opt}\sim \alpha^k$. In Appendix \ref{app:scaling_feature}, we relate the exponent $k$ to the number $D$ of hidden layers of the network as $k=(D-1)/(D+1)$. In the lazy regime, $T_{max}$ and $T_{opt}$ are independent of $\alpha$. (ii) In Fig. \ref{fig:test_phase}-(a), in the lazy regime (largest $\alpha$), increasing $T$ leads to an initial slight degradation of the test error followed by an improvement just before reaching the instability $T_{max}$. (iii) In Fig.
\ref{fig:test_phase}-(b), in the lazy regime, increasing $T$ leads to a degradation of the test error before reaching the instability $T_{max}$ (for larger $P$, a region of good performance appears near $T_{max}$, see below). In this regime increasing $T$ or decreasing $\alpha$ have opposite effects, showing that in general an increase of SGD noise is not equivalent to making the initialization smaller. \subsection{Role of size of the training set $P$} \label{sec:role_P} \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{FC-5L_mnist-grid_plot.png} \includegraphics[width=1.0\textwidth]{MNAS_cifar-grid_plot.png} \vspace{-.5 cm} \caption{\textbf{Lazy regime, $\alpha=32768$, $B=16$, $T=\eta/B$. (a) FC on MNIST, (b) MNAS on CIFAR.} \textbf{(a-I, b-I): test error $\epsilon$.} \textit{Inset:} $\epsilon$ starts improving at a cross-over temperature $T_{c}$ depending on $P$. The y-axis is rescaled by $P^\beta$, with $\beta$ some fitting exponent, to align $\epsilon$ at $T_c$. \textit{Main:} Rescaling the x-axis by $P^{0.5}$ aligns horizontally the points where $\epsilon$ starts improving, suggesting a dependence $T_c \sim P^{-0.5}$. \textbf{(a-II, b-II): total weights variation at the end of training normalized with respect to their initialization ($\Delta w$).} \textit{Inset:} $\Delta w$ increases with both $T$ and $P$. \textit{Main:} Plotting $\Delta w P^{-\gamma}$ yields a curve increasing approximately as $T^\delta$, suggesting $\Delta w\sim T^\delta P^{\gamma}$, with $\gamma$ and $\delta$ some fitting exponents (FC: $\delta\approx 1$, $\gamma\approx 0.4$. CNN: $\delta\approx 0.8$, $\gamma\approx 0.5$). \textbf{(a-III, b-III): test error vs weights variation.} The point where the test error starts improving shows a better alignment when plotted as a function of the weights variation (\textit{main plots}) rather than temperature alone (\textit{insets}). \textbf{(a-IV, b-IV): training time $t^*$.} \textit{Inset:} $t^*$ increases with both $T$ and $P$. 
\textit{Main:} Plotting $t^* P^{-b}$, with $b$ a fitting exponent ($b\approx 1.3$), yields a curve increasing approximately linearly in $T$, suggesting a dependence $t^*\sim T P^{b}$. } \label{fig:lazy} \end{figure*} {\it Generalization error:} Fig. \ref{fig:test_phase} suggests that increasing $T$ leads to a larger test error in the lazy regime. This is evident for the CNN in Fig. \ref{fig:test_phase}-(b). However, a detailed analysis for larger $P$ reveals that the test error for the CNN has a non-monotonic behaviour in $T$. Fig. \ref{fig:lazy}-(b-I) shows that, as the number of training points increases, the performance of the CNN in the lazy regime, after first degrading, starts improving with increasing $T$. Performance also improves with increasing $T$ for the FC (Fig. \ref{fig:lazy}-(a-I)). In both cases, the improvement in performance corresponds to a cross-over temperature $T_c$ that changes with $P$. In fact, plotting the test error with respect to $T P^a$, with some fitting exponent $a$, aligns the point where the test error starts improving (Fig. \ref{fig:lazy}-(a-I, b-I)). This establishes the existence of a characteristic temperature $T_c$ where SGD affects performance, having an asymptotic dependence on $P$ as \begin{equation} \label{crit} T_{c} \sim P^{-a} \end{equation} with exponent values $ a \simeq 0.5$ as reported in Table \ref{tab:exponents}. {\it Changes of weights:} To rationalize this finding, it is useful to consider how the total weights variation relative to their initialization, $\Delta w = \frac{||{\bm{w}}^{t^*}-{\bm{w}}^{0}||}{||{\bm{w}}^{0}||}$, increases with $T$. In Fig. \ref{fig:lazy}-(II) we observe an empirical scaling \begin{equation} \Delta w \sim T^\delta P^\gamma \label{eq:lazy-weights} \end{equation} with exponent values $\delta\simeq 1$ (slightly lower for CNNs where $\delta\simeq 0.8, 0.9$) and $\gamma\simeq 0.5$. The values are reported in Table \ref{tab:exponents}. The dependence of the weight variations on $T$ apparent in Eq.
\ref{eq:lazy-weights} suggests the following hypothesis: the characteristic temperature $T_{c}$ governing the test error corresponds to the exit from the kernel regime, which occurs when $\Delta w={\cal O}(1)$. We test this hypothesis in two ways. Firstly, if it is true then the test error plotted as a function of $\Delta w$ should be maximum at the same value of this argument, independently of the size of the training set $P$. We confirm this result in Fig. \ref{fig:lazy}-(III). Secondly, imposing that $\Delta w={\cal O}(1)$ and using Eq.\ref{eq:lazy-weights} leads to a characteristic temperature $T_c\sim P^{-{\gamma}/{\delta}}$, yielding Eq.\ref{crit} with $a=\frac{\gamma}{\delta} $. This prediction is approximately verified, as shown in Table \ref{tab:exponents}. {\it Convergence time:} We expect that a larger change of weights requires a longer training time $t^*$. We confirm that indeed the increase of $T$ in the lazy regime is accompanied by an increase of the training time $t^*$ (Fig. \ref{fig:lazy}-IV) and we empirically find the asymptotic behaviour \begin{equation} t^*\sim T P^b \label{eq:lazy-time} \end{equation} with values of $b$ around $1.3$ (see Table \ref{tab:exponents}). In table \ref{tab:exponents} we report the exponents $a$, $b$, $\gamma$ and $\delta$ of $T_c\sim P^{-a}$, $t^*\sim T P^b$ and $||\Delta w||\sim T^\delta P^\gamma$ that we use to align the data in the Figs. \ref{fig:lazy} and Figs. \ref{fig:FC_cifar}, \ref{fig:MNAS_mnist}, \ref{fig:simpleCNN_mnist}, \ref{fig:simpleCNN_cifar} in Appendix \ref{app:details}. We observe that the relationship $a = \gamma/\delta$ is approximately verified. \begin{table}[t] \caption{Exponents $b$, $\gamma$, $\delta$, $a$ of the empirical observations \ref{crit},\ref{eq:lazy-weights},\ref{eq:lazy-time} in the lazy regime of neural networks and for the perceptron with data distribution parameter $\chi$. 
} \label{tab:exponents} \begin{center} \begin{tabular}{llllll} \multicolumn{1}{c}{\bf MODEL} &\multicolumn{1}{c}{$b$} &\multicolumn{1}{c}{$\gamma$} &\multicolumn{1}{c}{$\delta$} &\multicolumn{1}{c}{$\gamma/\delta$} &\multicolumn{1}{c}{$a$} \\ \hline \\ FC on CIFAR & 1.4 & 0.5 & 1 & 0.5 & 0.5\\ FC on MNIST & 1.3 & 0.4 & 1 & 0.4 & 0.5\\ MNAS on CIFAR & 1.3 & 0.5 & 0.8 & 0.6 & 0.5\\ MNAS on MNIST & 1.2 & 0.3 & 0.75 & 0.4 & 0.5\\ simpleCNN on CIFAR & 1.5 & 0.6 & 0.9 & 0.67 & 0.6\\ simpleCNN on MNIST & 1.4 & 0.35 & 0.9 & 0.45 & 0.5\\ perceptron $\chi=1.5$ & 1.8 & 0.4 & 1 & 0.4 & \\ perceptron $\chi=4$ & 1.4 & 0.2 & 1 & 0.2 & \\ \end{tabular} \end{center} \end{table} \section{Interpretation of the observations} \label{sec:toy} In this section we provide an understanding for Eq. \ref{eq:lazy-weights}, which justifies Eq. \ref{crit}, and \ref{eq:lazy-time} based on the local alignment of the model decision boundary with the true one. We then test it in the perceptron model, where relevant quantities can be easily measured. \subsection{Neural networks} \label{sec:interpretation} \paragraph{Local alignment of decision boundaries.} In binary classification, the true decision boundary in data space is the locus of points between ${\bm{x}}$'s with different labels $y({\bm{x}})=\pm 1$, while the decision boundary learnt by the model $F({\bm{x}})$ corresponds to the ${\bm{x}}$'s such that $F({\bm{x}})=0$. 
Considering a point ${\bm{x}}^*$ where the two boundaries cross and its neighbourhood $B_{\epsilon}$ of diameter $\epsilon$, the local alignment of the model boundary with the true one is given by \begin{equation} \frac{|| \partial_{\bm{x}} F_\parallel ||}{||\partial_{\bm{x}} F_{\perp}||} \end{equation} at linear order in $\epsilon$, where $\partial_{\bm{x}} F_\parallel $ is the component of the gradient $\partial_{\bm{x}} F({\bm{x}}^*)$ in the direction perpendicular to the true decision boundary, while $\partial_{\bm{x}} F_\perp = \partial_{\bm{x}} F({\bm{x}}^*) - \partial_{\bm{x}} F_\parallel $ is orthogonal to it (see Fig. \ref{fig:scheme}). The angle between the two boundaries corresponds to $\theta = \text{arccot}\left(\frac{||\partial_{\bm{x}} F_\parallel ||}{||\partial_{\bm{x}} F_\perp||}\right)$ and perfect learning requires that $\frac{||\partial_{\bm{x}} F_\parallel ||}{||\partial_{\bm{x}} F_\perp||}\rightarrow\infty$.\\ $\partial_{\bm{x}} F_\parallel $ identifies the direction that is informative for the task, while $||\partial_{\bm{x}} F_\perp||$ is the component in the non-informative directions, which act as noise. \begin{figure} \centering \def.8\columnwidth{.8\columnwidth} \input{scheme_boundary.pdf_tex} \caption{\textbf{Pictorial representation of a neighbourhood $B_\epsilon$ of the true decision boundary (purple dashed line).} Red (blue) dots are training points with labels $+1$ ($-1$) and the point ${\bm{x}}^{+}$ (${\bm{x}}^-$) is the closest to the true decision boundary. The decision boundary of the trained model $F({\bm{x}})$ corresponds to the ${\bm{x}}$'s such that $F({\bm{x}})=0$. 
The gradients $\partial_{\bm{x}} F$ on it quantify the local alignment between the model boundary and the true one: $\partial_{{\bm{x}}} F_\parallel $ is the component in the direction of correct alignment, while $\partial_{{\bm{x}}} F_{\perp}$ is orthogonal to it.} \label{fig:scheme} \end{figure} \paragraph{Fitting condition.} When considering the hinge loss in Eq. \ref{eq:hingeLoss} with margin $\alpha^{-1}$ defined in Sec. \ref{sec:definition}, a training point $({\bm{x}}^\mu, y^\mu)$ is fitted (i.e. it has zero training loss) when $y^{\mu}\ F({\bm{x}}^\mu)\geq\alpha^{-1}$. Having $P$ training points and calling ${\bm{x}}^{\pm}$ the two of them in $B_\epsilon$ with $y({\bm{x}}^{\pm})=\pm 1$ that have the shortest distances $\delta^{\pm}$ from the true decision boundary, their fitting conditions $\pm F({\bm{x}}^{\pm})\geq \alpha^{-1}$ imply $F({\bm{x}}^{+}) - F({\bm{x}}^{-}) \geq 2\alpha^{-1}$. Assuming $F({\bm{x}})$ is differentiable in $B_\epsilon$, the last inequality can be approximated at linear order in $\epsilon$ as \begin{equation} \partial_{\bm{x}} F({\bm{x}}^{*}) \cdot \left( {\bm{x}}^+ - {\bm{x}}^{-}\right) \geq 2\alpha^{-1}. \label{eq:fit_ineq2} \end{equation} Defining $\delta_\parallel$ and $c$ as $\delta_\parallel = \delta^+ + \delta^-= \frac{\partial_{\bm{x}} F_\parallel}{||\partial_{\bm{x}} F_\parallel ||} \cdot \left( {\bm{x}}^+ - {\bm{x}}^{-}\right)$ and $c = -\frac{\partial_{\bm{x}} F_\perp}{||\partial_{\bm{x}} F_{\perp}||} \cdot \left( {\bm{x}}^+ - {\bm{x}}^{-}\right)$, inequality \ref{eq:fit_ineq2} becomes \begin{equation} \frac{||\partial_{\bm{x}} F_\parallel ||}{||\partial_{\bm{x}} F_\perp||} \geq \frac{1}{\delta_\parallel } \left( \frac{2 \alpha^{-1}}{||\partial_{\bm{x}} F_\perp||} + c \right). \label{eq:fit_ineq3} \end{equation} \paragraph{Role of the training set size $P$ and of the SGD temperature $T$.} Considering Eq. 
\ref{eq:fit_ineq3}: \begin{itemize} \item[(1)] we argue that increasing $P$ corresponds to shorter distances $\delta_{\parallel}$, which require a better alignment of the model decision boundary with the true one, that is a larger $\frac{||\partial_{\bm{x}} F_\parallel ||}{||\partial_{\bm{x}} F_\perp||}$. \item[(2)] Since increasing $T$ makes the training dynamics more noisy, we propose that a larger $T$ increases the non-informative component $||\partial_{\bm{x}} F_\perp||$. This implies, according to Eq. \ref{eq:fit_ineq3}, a larger informative component $||\partial_{\bm{x}} F_\parallel ||$ to fit the training set. \end{itemize} \begin{figure*} \centering \includegraphics[width=.31\textwidth]{boundary-mlp_bias-eta102.4-h4096-P1024.png} \includegraphics[width=.31\textwidth]{boundary-mlp_bias-eta2048.0-h4096-P1024.png} \includegraphics[width=.31\textwidth]{boundary-mlp_bias-eta102.4-h4096-P4096.png} \includegraphics[width=.31\textwidth]{boundary-linear-eta1_P128.png} \includegraphics[width=.31\textwidth]{boundary-linear-eta2_P128.png} \includegraphics[width=.31\textwidth]{boundary-linear-eta1_P1024.png} \vspace{-.5 cm} \caption{\textbf{Decision boundary for binary classification in 2 dimensions: (a) one-hidden-layer FC neural network and (b) perceptron model.} Red (blue) dots are training points with label $+1$ ($-1$) and the purple dashed line is the true decision boundary. The black line is the decision boundary obtained from training the model $F({\bm{x}})$ with SGD. \textbf{(I)-(II).} Increasing the SGD temperature $T$ gives larger gradients $\partial_{\bm{x}} F$ but not a better alignment between the decision boundaries: it increases the non-informative component (${\bm{w}}_\perp$ for the perceptron). \textbf{(I)-(III).} Increasing the number of training points $P$ gives larger gradients $\partial_{\bm{x}} F$ and a better alignment between the decision boundaries. 
} \label{fig:boundary_gradients} \end{figure*} According to (1) and (2), both $T$ and $P$ increase the gradient magnitude $||\partial_{\bm{x}} F({\bm{x}}^*)||$, but only increasing $P$ gives a better boundary alignment, that is a larger $||\partial_{\bm{x}} F_\parallel||/||\partial_{\bm{x}} F_\perp||$. This effect is illustrated in Fig. \ref{fig:boundary_gradients} for two-dimensional data. Overall, both increasing $P$ and $T$ require larger gradient magnitudes $||\partial_{\bm{x}} F({\bm{x}}^*)||$ to fit the training set, which corresponds to a larger relative variation of the weights, in accordance with the observation of Eq. \ref{eq:lazy-weights}. This larger growth of the weights requires a longer training time, in accordance with the observation of Eq. \ref{eq:lazy-time}. In this view, a key effect of increasing $P$ is to diminish the distance between data of different labels, which are the last points to be fitted. We thus expect that changing $P$ affects the dynamics only late in training, as we demonstrate in Fig. \ref{fig:dynamics_TP}. Therefore, the hardest data to fit affect both the growth of the weights and the training time. \begin{figure} \centering \includegraphics[width=.8\columnwidth]{FC-5L_mnist-train_err_vs_time-P.png} \vspace{-.5cm} \caption{\textbf{FC on MNIST: training error in time, fixed $T$, changing $P$.} Increasing the training set size $P$ delays the point when the training error goes to zero, while the first part of the dynamics stays unchanged.} \label{fig:dynamics_TP} \end{figure} \subsection{Perceptron model} \label{sec:perceptron_problem} We consider a linearly-separable classification task on $d$-dimensional data ${\bm{x}} \in \mathbb{R}^d$ with labels $y({\bm{x}})=\pm 1$ given by the sign of the first component: \begin{equation} y({\bm{x}}) = \text{sign}(x_{1}). \end{equation} The true decision boundary in this problem is the hyper-plane $x_1=0$.
We study this problem with a linear classifier, called perceptron: \begin{equation} F({\bm{w}},{\bm{x}}) = \frac{1}{\sqrt{d}} {\bm{w}} \cdot {\bm{x}} \end{equation} initialized with ${\bm{w}}^0=0$.\\ Although the perceptron is always in the lazy regime\footnote{Because it is linear with respect to the weights ${\bm{w}}$.} and does not have a characteristic temperature of SGD controlling performance, it is of interest because the interpretation discussed in Sec. \ref{sec:interpretation} can be tested. In fact, the gradient $\partial_{\bm{x}} F({\bm{x}}^*)$ corresponds to the perceptron's weights ${\bm{w}}/\sqrt{d}$, with the informative and non-informative components respectively $||\partial_{\bm{x}} F_\parallel|| = w_1/\sqrt{d}$ and $||\partial_{\bm{x}} F_\perp|| = ||{\bm{w}}_\perp||/\sqrt{d}$. The alignment of the perceptron decision boundary with the true one is given by the ratio \begin{equation} w_1/||{\bm{w}}_\perp||. \end{equation} The fitting condition on the data point $({\bm{x}}^\mu, y^\mu)$ requires that the weights ${\bm{w}} = [w_1; {\bm{w}}_\perp]$ satisfy \begin{equation} w_1 |x^{\mu}_1| + y^{\mu} {\bm{w}}_\perp \cdot {\bm{x}}^{\mu}_\perp \geq \frac{\sqrt{d}}{\alpha} \label{eq:sat} \end{equation} which, by defining the random quantities $c_\mu = -y^{\mu} \frac{{\bm{w}}_\perp}{||{\bm{w}}_\perp||} \cdot {\bm{x}}^{\mu}_\perp$, can be recast as \begin{equation} \frac{w_1}{||{\bm{w}}_\perp||} \geq \frac{1}{|x^{\mu}_1|} \left(\frac{\sqrt{d}}{\alpha||{\bm{w}}_\perp||} + c_{\mu}\right). \label{eq:sat1} \end{equation} This relationship is a special case of Eq. \ref{eq:fit_ineq3}. In fact, increasing $P$ gives smaller values of $|x^\mu_1|$ which require larger $\frac{w_1}{||{\bm{w}}_\perp||}$ to fit the training set, while increasing $T$ corresponds to increasing $||{\bm{w}}_\perp||$. A qualitative confirmation of this effect is reported in Fig. 
\ref{fig:boundary_gradients}-(b).\\ In the following, we consider the regime of large $T$ and large $\alpha$, corresponding to $\frac{\sqrt{d}}{\alpha||{\bm{w}}_\perp||}\ll |c_\mu|$, for which condition \ref{eq:sat1} becomes \begin{equation} \frac{w_1}{||{\bm{w}}_\perp||}\geq \frac{c_{\mu}}{|x^{\mu}_1|} \left( 1+ o(1)\right). \label{eq:sat2} \end{equation} \paragraph{Data distribution and setting.} To control the density of data near the decision boundary $x_1=0$, we consider a distribution on the first component $x_1$ parametrized by $\chi \geq 0$ (Fig. \ref{fig:perceptron_data}): \begin{equation} \rho(x_1) = |x_1|^\chi e^{-x_1^2/2} / Z, \label{eq:rho_x1} \end{equation} with $Z=2^{\frac{1+\chi}{2}}\Gamma(\frac{1+\chi}{2})$ the normalization constant. The other $d-1$ components ${\bm{x}}_\perp = [x_i]_{i=2,...,d}$ are distributed as standard multivariate Gaussian numbers, i.e. ${\bm{x}}_\perp \sim \mathcal{N}({\bm 0}, {\bm{I}}_{d-1})$. $\chi=0$ corresponds to the Gaussian case. This data distribution was first considered in \citet{tomasini2022failure}. \begin{figure} \centering \includegraphics[width=1\columnwidth]{depleted_sign-data_distribution.png} \vspace{-.8cm} \caption{\textbf{Perceptron model, data distribution on the $x_1$ component.} The sign of $x_1$ determines the class $y=\text{sign}(x_1)$. For $\chi=0$ the distribution is Gaussian.} \label{fig:perceptron_data} \end{figure} The learning setting is defined identically to the one of neural networks in Sec. \ref{sec:definition}. We consider the case $1 \ll d \ll P$, where $d$ is the dimension of the data and the perceptron weights and $P$ is the number of training points. We consider this a realistic limit, given the effective dimension $d_{\text{eff}}$ of real datasets ($d_{\text{eff}}\approx 15$ for MNIST and $d_{\text{eff}}\approx 35$ for CIFAR-10 \citep{spigler2020asymptotic}) relative to the number of training samples $P>10^3$.
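The distribution of Eq. \ref{eq:rho_x1} can be sampled exactly: substituting $u=x_1^2/2$ shows that $|x_1|=\sqrt{2G}$ with $G\sim\mathrm{Gamma}\big(\frac{1+\chi}{2}\big)$, while the sign of $x_1$ is uniform. A minimal sampler sketch (the function name `sample_data` is ours, not from the paper's code):

```python
import numpy as np

def sample_data(P, d, chi, rng):
    """Draw P points with rho(x_1) ~ |x_1|^chi exp(-x_1^2 / 2) and labels y = sign(x_1).

    Substituting u = x_1^2 / 2 shows |x_1| = sqrt(2 G) with
    G ~ Gamma(shape=(1 + chi) / 2, scale=1); the sign of x_1 is uniform.
    The remaining d - 1 components are standard Gaussians.
    """
    g = rng.gamma(shape=(1.0 + chi) / 2.0, scale=1.0, size=P)
    x1 = rng.choice([-1.0, 1.0], size=P) * np.sqrt(2.0 * g)
    x = np.column_stack([x1, rng.standard_normal((P, d - 1))])
    return x, np.sign(x[:, 0])

rng = np.random.default_rng(0)
x, y = sample_data(P=100_000, d=3, chi=3.0, rng=rng)
# For this family E[x_1^2] = 2 E[G] = 1 + chi, i.e. 4.0 for chi = 3.
print(np.mean(x[:, 0] ** 2))
```

The exact Gamma representation avoids rejection sampling and makes the depletion (or accumulation) of points near $x_1=0$ easy to control.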
\textbf{Empirical observations.} A key result is that the perceptron displays asymptotic behaviours in the change of weights and training time similar to those of neural networks. For the considered perceptron initialized with ${\bm{w}}^0=0$, the weights variation $\Delta w$ corresponds to $||{\bm{w}}||$. Since $w_1/||{\bm{w}}_\perp||\gg 1$ for large $P$, we have $\Delta w = ||{\bm{w}}||\simeq w_1$. Eqs. \ref{eq:lazy-weights} and \ref{eq:lazy-time} are verified with exponents reported in Table \ref{tab:exponents}, as shown in Fig. \ref{fig:perceptron_scheme}-(a,c).\\ In addition, we observe that $||{\bm{w}}_\perp||$ at the end of training is proportional to $T$ and independent of $P$ (Fig. \ref{fig:perceptron_scheme}-(b)): \begin{equation} ||{\bm{w}}_\perp||\sim T. \label{eq:wp_T} \end{equation} This observation is a positive test of the effect of $T$ on $||\partial_{\bm{x}} F_\perp||$ proposed in Sec. \ref{sec:interpretation}. \paragraph{Non-universality of the exponents.} Remarkably, the exponents $\gamma$ and $b$ of $P$ for the perceptron depend on the parameter $\chi$ of the data distribution. This finding can be rationalized by considering condition \ref{eq:sat2} at the end of training. In fact, satisfying \ref{eq:sat2} for every training point requires $\frac{w_1}{||{\bm{w}}_\perp||}\geq \underset{\mu}{\text{max}} \frac{c_{\mu}}{|x^{\mu}_1|}$. In Appendix \ref{app:max}, classical extreme value theory is used to show that, for large $P$, the typical value of $\underset{\mu}{\text{max}} \frac{c_{\mu}}{|x^{\mu}_1|}$ behaves asymptotically as $\langle \underset{\mu}{\text{max}} \frac{c_{\mu}}{|x^{\mu}_1|} \rangle = C P^{\frac{1}{1+\chi}} + o\left( P^{\frac{1}{1+\chi}}\right)$ for some constant $C$. Therefore we obtain a prediction for the exponent $\gamma$: \begin{equation} \gamma = \frac{1}{1+\chi}, \label{eq:gamma_chi} \end{equation} in excellent agreement with data (Fig. \ref{fig:perceptron_scheme}-(a)).
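The $P$-dependence can also be checked by direct simulation: since $\mathbb{P}(|x_1|<\epsilon)\sim\epsilon^{1+\chi}$ near the boundary, the smallest $|x_1^\mu|$ among $P$ samples, which dominates $\underset{\mu}{\text{max}}\, c_\mu/|x^\mu_1|$, scales as $P^{-\frac{1}{1+\chi}}$. A Monte-Carlo sketch of this scaling (all names are ours; the sampler uses $|x_1|=\sqrt{2G}$ with $G\sim\mathrm{Gamma}(\frac{1+\chi}{2})$):

```python
import numpy as np

def min_abs_x1(P, chi, rng):
    """Smallest |x_1| among P draws of rho(x_1) ~ |x_1|^chi exp(-x_1^2 / 2)."""
    g = rng.gamma(shape=(1.0 + chi) / 2.0, scale=1.0, size=P)  # |x_1| = sqrt(2 G)
    return np.sqrt(2.0 * g.min())

rng = np.random.default_rng(1)
chi = 1.0
Ps = [10**2, 10**3, 10**4]
means = [np.mean([min_abs_x1(P, chi, rng) for _ in range(300)]) for P in Ps]
# Slope of log <min |x_1|> vs log P; prediction: -1 / (1 + chi) = -0.5 here.
slope = np.polyfit(np.log(Ps), np.log(means), 1)[0]
print(slope)
```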
This further confirms that the asymptotic behaviour with respect to $P$ is controlled by the statistics of the points close to the decision boundary. Thus the exponents are non-universal, since they depend directly on the data distribution.\\ An estimate of the parameter $\chi$ for some image datasets is reported in \citet{tomasini2022failure} through the study of kernel ridge regression. For binary CIFAR10, $\chi_{CIFAR}=1.5$ is reported, which according to Eq. \ref{eq:gamma_chi} corresponds to $\gamma = 0.4$, a value compatible with those observed in neural networks (Table \ref{tab:exponents}). \begin{figure*} \centering \includegraphics[width=.31\textwidth]{depleted_sign-w1_variation_TPchi-d128.png} \includegraphics[width=.31\textwidth]{depleted_sign-wperp_variation_TPchi-d128.png} \includegraphics[width=.31\textwidth]{depleted_sign-time-empirical_TPchi-d128.png} \vspace{-.5cm} \caption{ \textbf{Perceptron model, $d=128$, varying $T$ and $P$.} \textbf{(a)} \textit{Inset:} Total variation of the weight $w_1$ at the end of training with respect to SGD noise $T$ and training set size $P$ (colors), for different data distributions $\chi=0$ (empty circles) and $\chi=3$ (full diamonds). \textit{Main:} Plotting $w_1 P^{-\frac{1}{1+\chi}}$ gives a curve proportional to $T$ for each value of $\chi$, revealing the asymptotic behaviour $w_1\sim T P^{\gamma}$ (Eq. \ref{eq:lazy-weights} for neural networks) with a data-dependent exponent $\gamma=\frac{1}{1+\chi}$ in accordance with prediction \ref{eq:gamma_chi}. \textbf{(b)} Total variation of $||w_\perp||$ for the same setting of panel (a). $||w_\perp||$ is proportional to $T$ independently of $P$, as stated in Eq. \ref{eq:wp_T}. \textbf{(c)} \textit{Inset:} Total training time $t^*$ for the same setting as panel (a): $t^*$ increases with both $T$ and $P$.
\textit{Main:} Plotting $t^* P^{-b}$, with $b$ depending on $\chi$, gives approximately one curve proportional to $T$ for each value of $\chi$, corresponding to the asymptotic behaviour $t^*\sim T P^{b}$ as found for neural networks (Eq. \ref{eq:lazy-time}). } \label{fig:perceptron_scheme} \end{figure*} \section{Conclusions} \label{sec:discussion} In this work we have explored the effect of SGD noise in different training regimes of neural networks using the hinge loss. Since this loss goes to zero at the end of training, the minima found by the algorithm are always flat: a static view explaining the benefit of SGD in terms of the flatness of minima cannot be applied. Instead, we propose a dynamical view where SGD noise increases the weights of the model in directions that are detrimental for learning, which in turn induces an increase in the useful directions to fit the training set. Fitting is hardest for data close to the decision boundary, whose statistics depend both on the size of the training set and on the distribution of data near the boundary. This view naturally explains our observations that the total weights variation, and the training time, depend on both the SGD noise and the size of the training set. It also rationalizes the puzzling observation that the characteristic SGD temperature for which weight changes become significant and the test error is affected by the noise depends on the training set size. Exponents characterizing this relationship are non-universal. We expect them to depend on the data distribution near the decision boundary, as we demonstrated for the perceptron.\\ Our work thus clarifies a key effect of SGD, and explains the range of temperatures where SGD noise matters. However, understanding the sign of the effect of this noise on performance (beneficial or detrimental), and how it relates to the data structure and the network architecture, appears to be a particularly vexing question.
For example, for the lazy regime of CNNs, we observe a non-monotonic behaviour of the test error, which initially grows and then decays as the SGD noise is increased. \section*{Acknowledgments} We thank Francesco Cagnetta, Alessandro Favero, Bastien Olivier Marie Göransson, Leonardo Petrini and Umberto Maria Tomasini for helpful discussions. This work was supported by a grant from the Simons Foundation (\# 454953 Matthieu Wyart). \newpage \section{Introduction} Optimizing the generalization performances of overparametrized neural networks is one of the main challenges in machine learning. A crucial role is played by gradient-based training algorithms, which converge to solutions which generalize well also when no explicit regularization of the model is used \citep{zhang2021understanding}. Mini-batch stochastic gradient descent (SGD) is the workhorse algorithm to train modern neural networks. Yet, key aspects of these algorithms are debated. {\it Effect on performance:} A popular idea has been that mini-batch SGD can generalize better than full batch gradient descent (GD) \citep{heskes1993, lecun2012, keskar2016, hochreiter1997flat, jastrzkebski2017, entropysgd2019}, yet this view is debated \citep{hoffer2017, dinh2017, shallue2018, zhang2019}. In fact, comparing SGD and GD at fixed number of training epochs leads to a generalization gap \citep{keskar2016} that can be closed by training longer with a fixed number of training steps \citep{hoffer2017, smith2020}. More generally, the choice of the computational budget can affect which algorithm performs better \citep{shallue2018,smith2020}. {\it Theories for the role of SGD:} Several works have argued that larger SGD stochasticity leads the dynamics toward flatter minima of the loss landscape, and it has been argued that this effect leads to improved performances \citep{hochreiter1997flat, keskar2016, zhang2018energy, smith2018bayesian, wu2018sgd}. 
By contrast, other studies suggest that the SGD noise biases the model in a manner similar to initializing the network with small weights, and helps recover sparse predictors \citep{blanc2020implicit, haochen2021, flammarion2021}. \subsection{This work} In this work, we clarify these two debates by performing systematic empirical studies of how performance is affected by the noise magnitude of SGD or temperature $T$ (the ratio between the learning rate $\eta$ and the batch size $B$ \citep{jastrzkebski2017, zhang2019, smith2020}), by the initialization scale $\alpha$, and by the size of the training set $P$. The initialization scale $\alpha$ has rarely been considered in empirical studies so far, yet it governs the training regimes in which nets operate. For large $\alpha$, tiny changes of weights are sufficient to fit the data: the predictor is approximately linear in its parameters, corresponding to the \textit{kernel} or \textit{lazy} regime \citep{jacot2018, chizat2019lazy}. By contrast, for small initialization, networks can learn the relevant features of the task and the dynamics is non-linear, corresponding to the so-called feature-learning regime \citep{rotskoff2018, mei2018, sirignano2020}. We also deal with the computational budget issue by considering the hinge loss $l(y,\hat{y})=(1-y\hat{y})^+$, allowing us to train networks until the time $t^*$ where the loss is strictly zero, and the dynamics stops. Our central empirical results are: \begin{itemize} \item[(i)] obtaining phase diagrams for performance in the $(\alpha,T)$ plane. They show that SGD noise can be detrimental or instead useful depending on the training regime, even in the absence of budget constraints. This observation clarifies why different conclusions on the benefits of SGD were previously reached.
\item[(ii)] Although we find that increasing $T$ or decreasing $\alpha$ both allow the net to escape the lazy regime, these changes can have opposite effects on performance, in disagreement with simple models \citep{flammarion2021}. \item[(iii)] Most importantly, we reveal that several observables characterizing the dynamics follow scaling laws. Denote by $\Delta {\bf \omega}$ the relative weights variation accumulated after training, $t^*$ the training time defined as the learning rate times the number of training steps required to bring a hinge loss to zero, and $T_{c}$ the characteristic temperature scale for which performance is affected by SGD. We find that for $\alpha\gg 1$, these quantities do not depend on $\alpha$ and follow: \begin{equation*} \Delta {\bf \omega}\sim T^\delta P^\gamma\ \ t^*\sim T P^b \ \ T_{c}\sim P^{-a} \end{equation*} where $\delta,\gamma,b,a$ are exponents. Assuming that $T_{c}$ is the temperature scale where the network exits the lazy regime, i.e. $\Delta {\bf \omega}={\cal O}(1)$, gives $a=\gamma/\delta$ in agreement with our observations. \item[(iv)] We rationalize these findings using a teacher-student perceptron model, for which $ \Delta {\bf \omega}$ and $t^*$ also display power-law dependence on $T$ and $P$. We show that SGD noise increases weights in directions irrelevant to the task, implying that the correct weights must grow much larger to fit data, thus increasing both $t^*$ and $\Delta {\bf \omega}$. We compute the dependence of these effects on the size of the training set, and show that this dependence varies qualitatively with the distribution of data near the decision boundary. \end{itemize} Overall, instead of a static view where SGD noise would bias networks toward broader minima of the population loss, these results support a dynamical viewpoint where SGD noise delays the end of training. This effect allows the weights to grow more, affecting performance the most when one escapes the lazy regime. 
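The consistency relation $a=\gamma/\delta$ from (iii) can be checked directly against the fitted exponents reported in Table \ref{tab:exponents}; a quick sketch (values copied from the table, with a loose tolerance since the exponents are fits):

```python
# (gamma, delta, a) as reported in Table 1 for the lazy regime of each network.
exponents = {
    "FC on CIFAR":        (0.5,  1.0,  0.5),
    "FC on MNIST":        (0.4,  1.0,  0.5),
    "MNAS on CIFAR":      (0.5,  0.8,  0.5),
    "MNAS on MNIST":      (0.3,  0.75, 0.5),
    "simpleCNN on CIFAR": (0.6,  0.9,  0.6),
    "simpleCNN on MNIST": (0.35, 0.9,  0.5),
}
for model, (gamma, delta, a) in exponents.items():
    # Exiting the lazy regime at Delta_omega = O(1) predicts a = gamma / delta.
    print(f"{model}: gamma/delta = {gamma / delta:.2f} vs measured a = {a}")
```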
\begin{figure*} \centering \includegraphics[width=0.49\textwidth]{FC-5L-mnist-test-phase-horizontal-noH.png} \includegraphics[width=0.49\textwidth]{CNN-cifar-test_phase-horizontal-noH.png} \vspace{-.5 cm} \caption{\textbf{Test error of deep networks on image datasets, for varying $T$ and $\alpha$ and fixed $P=1024$.} Batch size $B$ is kept fixed and learning rate $\eta$ is varied ($T=\eta/B$). \textbf{(a)} 5-hidden layers fully-connected network (FC) on parity MNIST with $B=16$. \textbf{(b)} 9-hidden layers CNN (MNAS) on CIFAR (animals vs the rest) with $B=64$. Black dots correspond to training runs that do not converge. The black dashed lines indicate the maximal temperatures $T_{max}$. The lowest test error is achieved in the feature regime ($\alpha\ll 1$), for a temperature $T_{opt}\propto T_{max}$. In the lazy regime ($\alpha\gg 1$), performance is best for the highest $T$ for FC on MNIST. Although it is not apparent here for CNN on CIFAR, it is also the case as the training set increases (see below). In \textit{(a)}, the number of hidden layers is $D=5$ and $T_{max}\sim \alpha^{\frac{D-1}{D+1}} = \alpha^{\frac{2}{3}}$ (black dashed line) when $\alpha\ll1$ as argued in Eq. \ref{eq:T_feature}. Similarly in \textit{(b)}, $D=9$ and $T_{max}\sim \alpha^{\frac{D-1}{D+1}} = \alpha^{\frac{4}{5}}$ (black dashed line). } \label{fig:test_phase} \end{figure*} \subsection{Related works} More related works are indicated in Appendix \ref{app:other_works}. \section{Empirical analysis} \label{sec:empiric} \subsection{General setting and notation} \label{sec:definition} We consider binary classification on the data $\{{\bm{x}}_{\mu}\}_{\mu=1,...,P}\in \mathbb{R}^d$ with labels $\{y_\mu\}_{\mu=1,...,P} \in \{-1, +1\}$. $P$ is the size of the training set. Given a predictor $\hat{y}_{\mu}$, the hinge loss on the sample $\mu$ is defined as $l(y_{\mu},\hat{y}_{\mu}) = (1-y_{\mu}\hat{y}_{\mu})^+$, where $(x)^+ = \max(0,x)$.
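In code, the per-sample hinge loss with unit margin is simply (a minimal sketch):

```python
import numpy as np

def hinge_loss(y, y_hat):
    """Per-sample hinge loss l(y, y_hat) = (1 - y * y_hat)^+ with unit margin."""
    return np.maximum(0.0, 1.0 - y * y_hat)

# Zero loss for a correct prediction with margin >= 1; linear growth below that.
print(hinge_loss(np.array([1.0, 1.0, -1.0]), np.array([2.0, 0.5, 0.25])))
```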
To interpolate between feature and lazy training, we multiply the model output by $\alpha$ \citep{chizat2019lazy}. For the hinge loss, this is equivalent to changing the loss margin to $1/\alpha$; we therefore study the training loss: \begin{equation} L({\bm{w}}) = \frac{1}{P} \sum_{\mu=1}^P (\alpha^{-1}-y_{\mu} F({\bm{w}},{\bm{x}}_{\mu}))^+ \label{eq:hingeLoss} \end{equation} where $F({\bm{w}},{\bm{x}}_{\mu})$ is the model predictor with weights ${\bm{w}}$ on the datum ${\bm{x}}_{\mu}$. The model predictor at time $t$ corresponds to $F({\bm{w}},{\bm{x}}_{\mu}) = f({\bm{w}}^t,{\bm{x}}_\mu) - f({\bm{w}}^0,{\bm{x}}_\mu)$, where $f({\bm{w}}^t,{\bm{x}}_\mu)$ is the output of a neural net with weights ${\bm{w}}^t$ at time $t$ and ${\bm{w}}^0$ are the weights at initialization. For a network of width $h$, the weights are initialized as Gaussian random numbers with standard deviation $1/\sqrt{h}$ for the hidden layers and $1/h$ for the output layer. Such an initialization ensures that the feature learning limit corresponds to $\alpha\ll1$ while the lazy training limit corresponds to $\alpha\gg 1$, and that every layer has a similar change of weights \citep{geiger2020disentangling,yang2021tensor}.\\ The stochastic gradient descent update equation is: \begin{equation}\label{eq:SGD} {\bm{w}}^{t+\eta} ={\bm{w}}^t + \frac{\eta}{B}\sum\limits_{\mu \in {\mathbb{B}}_t} \theta\left(\alpha^{-1}-y_{\mu} F({\bm{w}},{\bm{x}}_{\mu})\right) y_{\mu} \nabla_{{\bm{w}}} f({\bm{w}}^t,{\bm{x}}_\mu) \end{equation} where $\theta(x)$ is the Heaviside step function, ${\mathbb{B}}_t \subset \{1,...,P\}$ is the batch at time $t$ and $B$ is its size. The time $t$ corresponds to the number of training steps times the learning rate $\eta$. The batch ${\mathbb{B}}_t$ is randomly selected at each time step among all the $P$ data. The learning rate $\eta$ is kept constant during training.
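A minimal sketch of the update rule of Eq. \ref{eq:SGD} for a linear model $f({\bm{w}},{\bm{x}})={\bm{w}}\cdot{\bm{x}}/\sqrt{d}$, for which $\nabla_{\bm{w}} f={\bm{x}}/\sqrt{d}$ and, with ${\bm{w}}^0=0$, $F=f$ (the toy task and all names here are ours):

```python
import numpy as np

def sgd_hinge(x, y, alpha=1.0, eta=0.1, B=16, steps=5000, seed=0):
    """SGD on the alpha-margin hinge loss for the linear model f = w . x / sqrt(d).

    Implements w <- w + (eta / B) * sum_{mu in batch}
    theta(1/alpha - y_mu F_mu) * y_mu * x_mu / sqrt(d), with w0 = 0 so F = f.
    """
    rng = np.random.default_rng(seed)
    P, d = x.shape
    w = np.zeros(d)
    for _ in range(steps):
        batch = rng.choice(P, size=B, replace=False)
        xb, yb = x[batch], y[batch]
        F = xb @ w / np.sqrt(d)
        active = (1.0 / alpha - yb * F) > 0.0   # theta(1/alpha - y F)
        w += (eta / B) * (active * yb) @ xb / np.sqrt(d)
    return w

# Toy separable task: y = sign(x_1) with |x_1| bounded away from the boundary.
rng = np.random.default_rng(0)
P, d = 256, 10
x = rng.standard_normal((P, d))
x[:, 0] = rng.choice([-1.0, 1.0], size=P) * rng.uniform(0.5, 1.5, size=P)
y = np.sign(x[:, 0])
w = sgd_hinge(x, y)
print(np.mean(np.sign(x @ w / np.sqrt(d)) == y))  # training accuracy
```

Note that only the samples violating the margin contribute to an update, so the dynamics stops by itself once the training loss reaches zero.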
The end of training is reached when $L({\bm{w}}^{t^*})=0$.\\ The batch size $B$ is taken small enough to be in the ``noise dominated'' regime \citep{smith2020, zhang2019}, where the dynamics depends on the SGD temperature $T=\eta/B$. Empirical verification of this fact is provided in Appendix \ref{app:details}.\\ Below we use a 5-hidden-layers fully-connected (FC) network and a 9-hidden-layers convolutional neural network (CNN) (MNAS architecture \citep{mnasnet}). In Appendix \ref{app:details} we report data also for a 3-hidden-layers CNN (simple-CNN). We consider the binary datasets MNIST (even vs odd numbers) and CIFAR10 (animals vs the rest). All the networks use ReLU as activation functions. The code with all the details of the experiments is provided at \href{https://tinyurl.com/yh6kay4b}{https://tinyurl.com/yh6kay4b}. \subsection{Performance in the $(\alpha, T)$ phase diagram} Fig. \ref{fig:test_phase}-(a) shows the test error for a FC network trained on MNIST and Fig. \ref{fig:test_phase}-(b) shows the same quantity obtained after training a CNN on CIFAR10. The black dots correspond to the training loss exploding to infinity because the learning rate is too large. Therefore, the dashed black lines indicate the maximal temperature $T_{max}$ for which SGD converges. \\ From Fig. \ref{fig:test_phase} we make the following observations: (i) In the feature regime, both $T_{max}$ and the temperature of optimal performance $T_{opt}$ follow $T_{max}\sim T_{opt}\sim \alpha^k$. In Appendix \ref{app:scaling_feature}, we relate the exponent $k$ to the number $D$ of hidden layers of the network as $k=(D-1)/(D+1)$. In the lazy regime, $T_{max}$ and $T_{opt}$ are independent of $\alpha$. (ii) In Fig. \ref{fig:test_phase}-(a), in the lazy regime (largest $\alpha$), increasing $T$ leads to an initial slight degradation of the test error followed by an improvement just before reaching the instability $T_{max}$. (iii) In Fig.
\ref{fig:test_phase}-(b), in the lazy regime, increasing $T$ leads to a degradation of the test error before reaching the instability $T_{max}$ (for larger $P$, a region of good performance appears near $T_{max}$, see below). In this regime increasing $T$ or decreasing $\alpha$ have opposite effects, showing that in general an increase of SGD noise is not equivalent to making the initialization smaller. \subsection{Role of size of the training set $P$} \label{sec:role_P} \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{FC-5L_mnist-grid_plot.png} \includegraphics[width=1.0\textwidth]{MNAS_cifar-grid_plot.png} \vspace{-.5 cm} \caption{\textbf{Lazy regime, $\alpha=32768$, $B=16$, $T=\eta/B$. (a) FC on MNIST, (b) MNAS on CIFAR.} \textbf{(a-I, b-I): test error $\epsilon$.} \textit{Inset:} $\epsilon$ starts improving at a cross-over temperature $T_{c}$ depending on $P$. The y-axis is rescaled by $P^\beta$, with $\beta$ some fitting exponent, to align $\epsilon$ at $T_c$. \textit{Main:} Rescaling the x-axis by $P^{0.5}$ aligns horizontally the points where $\epsilon$ starts improving, suggesting a dependence $T_c \sim P^{-0.5}$. \textbf{(a-II, b-II): total weights variation at the end of training normalized with respect to their initialization ($\Delta w$).} \textit{Inset:} $\Delta w$ increases with both $T$ and $P$. \textit{Main:} Plotting $\Delta w P^{-\gamma}$ yields a curve increasing approximately as $T^\delta$, suggesting $\Delta w\sim T^\delta P^{\gamma}$, with $\gamma$ and $\delta$ some fitting exponents (FC: $\delta\approx 1$, $\gamma\approx 0.4$. CNN: $\delta\approx 0.8$, $\gamma\approx 0.5$). \textbf{(a-III, b-III): test error vs weights variation.} The point where the test error starts improving shows a better alignment when plotted as a function of the weights variation (\textit{main plots}) rather than temperature alone (\textit{insets}). \textbf{(a-IV, b-IV): training time $t^*$.} \textit{Inset:} $t^*$ increases with both $T$ and $P$. 
\textit{Main:} Plotting $t^* P^{-b}$, with $b$ a fitting exponent ($b\approx 1.3$), yields a curve increasing approximately linearly in $T$, suggesting a dependence $t^*\sim T P^{b}$. } \label{fig:lazy} \end{figure*} {\it Generalization error:} Fig. \ref{fig:test_phase} suggests that increasing $T$ leads to a larger test error in the lazy regime. This is evident for the CNN in Fig. \ref{fig:test_phase}-(b). However, a detailed analysis for larger $P$ reveals that the test error for the CNN has a non-monotonic behaviour in $T$. Fig. \ref{fig:lazy}-(b-I) shows that, as the number of training points increases, the performances of the CNN in the lazy regime, after degrading, start improving with increasing $T$. Performances also improve with increasing $T$ for the FC (Fig. \ref{fig:lazy}-(a-I)). In both cases, the improvement in performances corresponds to a cross-over temperature $T_c$ that changes with $P$. In fact, plotting the test error with respect to $T P^a$, with some fitting exponent $a$, aligns the point where the test error starts improving (Fig. \ref{fig:lazy}-(a-I, b-I)). This establishes the existence of a characteristic temperature $T_c$ where SGD affects performances, having an asymptotic dependence on $P$ as \begin{equation} \label{crit} T_{c} \sim P^{-a} \end{equation} with exponent values $ a \simeq 0.5$ as reported in Table \ref{tab:exponents}. {\it Changes of weights:} To rationalize this finding, it is useful to consider how the total weights variation relative to their initialization, $\Delta w = \frac{||{\bm{w}}^{t^*}-{\bm{w}}^{0}||}{||{\bm{w}}^{0}||}$, increases with $T$. In Fig. \ref{fig:lazy}-(II) we observe an empirical scaling \begin{equation} \Delta w \sim T^\delta P^\gamma \label{eq:lazy-weights} \end{equation} with exponent values $\delta\simeq 1$ (slightly lower for CNNs where $\delta\simeq 0.8, 0.9$) and $\gamma\simeq 0.5$. The values are reported in Table \ref{tab:exponents}. The dependence of the weight variations on $T$ apparent in Eq.
\ref{eq:lazy-weights} suggests the following hypothesis: the characteristic temperature $T_{c}$ governing the test error corresponds to the exit from the kernel regime, which occurs when $\Delta w={\cal O}(1)$. We test this hypothesis in two ways. Firstly, if it is true then the test error plotted as a function of $\Delta w$ should be maximum at the same value of this argument, independently of the size of the training set $P$. We confirm this result in Fig. \ref{fig:lazy}-(III). Secondly, imposing that $\Delta w={\cal O}(1)$ and using Eq. \ref{eq:lazy-weights} leads to a characteristic temperature $T_c\sim P^{-{\gamma}/{\delta}}$, yielding Eq. \ref{crit} with $a=\frac{\gamma}{\delta} $. This prediction is approximately verified, as shown in Table \ref{tab:exponents}. {\it Convergence time:} We expect that a larger change of weights requires a longer training time $t^*$. We confirm that indeed the increase of $T$ in the lazy regime is accompanied by an increase of the training time $t^*$ (Fig. \ref{fig:lazy}-IV) and we empirically find the asymptotic behaviour \begin{equation} t^*\sim T P^b \label{eq:lazy-time} \end{equation} with values of $b$ around $1.3$ (see Table \ref{tab:exponents}). In Table \ref{tab:exponents} we report the exponents $a$, $b$, $\gamma$ and $\delta$ of $T_c\sim P^{-a}$, $t^*\sim T P^b$ and $\Delta w\sim T^\delta P^\gamma$ that we use to align the data in Figs. \ref{fig:lazy}, \ref{fig:FC_cifar}, \ref{fig:MNAS_mnist}, \ref{fig:simpleCNN_mnist} and \ref{fig:simpleCNN_cifar} in Appendix \ref{app:details}. We observe that the relationship $a = \gamma/\delta$ is approximately verified. \begin{table}[t] \caption{Exponents $b$, $\gamma$, $\delta$, $a$ of the empirical observations \ref{crit}, \ref{eq:lazy-weights}, \ref{eq:lazy-time} in the lazy regime of neural networks and for the perceptron with data distribution parameter $\chi$.
} \label{tab:exponents} \begin{center} \begin{tabular}{llllll} \multicolumn{1}{c}{\bf MODEL} &\multicolumn{1}{c}{$b$} &\multicolumn{1}{c}{$\gamma$} &\multicolumn{1}{c}{$\delta$} &\multicolumn{1}{c}{$\gamma/\delta$} &\multicolumn{1}{c}{$a$} \\ \hline \\ FC on CIFAR & 1.4 & 0.5 & 1 & 0.5 & 0.5\\ FC on MNIST & 1.3 & 0.4 & 1 & 0.4 & 0.5\\ MNAS on CIFAR & 1.3 & 0.5 & 0.8 & 0.6 & 0.5\\ MNAS on MNIST & 1.2 & 0.3 & 0.75 & 0.4 & 0.5\\ simpleCNN on CIFAR & 1.5 & 0.6 & 0.9 & 0.67 & 0.6\\ simpleCNN on MNIST & 1.4 & 0.35 & 0.9 & 0.45 & 0.5\\ perceptron $\chi=1.5$ & 1.8 & 0.4 & 1 & 0.4 & \\ perceptron $\chi=4$ & 1.4 & 0.2 & 1 & 0.2 & \\ \end{tabular} \end{center} \end{table} \section{Interpretation of the observations} \label{sec:toy} In this section we provide an understanding for Eq. \ref{eq:lazy-weights}, which justifies Eq. \ref{crit}, and \ref{eq:lazy-time} based on the local alignment of the model decision boundary with the true one. We then test it in the perceptron model, where relevant quantities can be easily measured. \subsection{Neural networks} \label{sec:interpretation} \paragraph{Local alignment of decision boundaries.} In binary classification, the true decision boundary in data space is the locus of points between ${\bm{x}}$'s with different labels $y({\bm{x}})=\pm 1$, while the decision boundary learnt by the model $F({\bm{x}})$ corresponds to the ${\bm{x}}$'s such that $F({\bm{x}})=0$. 
Considering a point ${\bm{x}}^*$ where the two boundaries cross and its neighbourhood $B_{\epsilon}$ of diameter $\epsilon$, the local alignment of the model boundary with the true one is given by \begin{equation} \frac{|| \partial_{\bm{x}} F_\parallel ||}{||\partial_{\bm{x}} F_{\perp}||} \end{equation} at linear order in $\epsilon$, where $\partial_{\bm{x}} F_\parallel $ is the component of the gradient $\partial_{\bm{x}} F({\bm{x}}^*)$ in the direction perpendicular to the true decision boundary, while $\partial_{\bm{x}} F_\perp = \partial_{\bm{x}} F({\bm{x}}^*) - \partial_{\bm{x}} F_\parallel $ is orthogonal to it (see Fig. \ref{fig:scheme}). The angle between the two boundaries corresponds to $\theta = \text{arccot}\left(\frac{||\partial_{\bm{x}} F_\parallel ||}{||\partial_{\bm{x}} F_\perp||}\right)$ and perfect learning requires that $\frac{||\partial_{\bm{x}} F_\parallel ||}{||\partial_{\bm{x}} F_\perp||}\rightarrow\infty$.\\ $\partial_{\bm{x}} F_\parallel $ identifies the direction that is informative for the task, while $||\partial_{\bm{x}} F_\perp||$ is the component in the non-informative directions, which act as noise. \begin{figure} \centering \def\svgwidth{.8\columnwidth} \input{scheme_boundary.pdf_tex} \caption{\textbf{Pictorial representation of a neighbourhood $B_\epsilon$ of the true decision boundary (purple dashed line).} Red (blue) dots are training points with labels $+1$ ($-1$) and the point ${\bm{x}}^{+}$ (${\bm{x}}^-$) is the closest to the true decision boundary. The decision boundary of the trained model $F({\bm{x}})$ corresponds to the ${\bm{x}}$'s such that $F({\bm{x}})=0$.
The gradients $\partial_{\bm{x}} F$ on it quantify the local alignment between the model boundary and the true one: $\partial_{{\bm{x}}} F_\parallel $ is the component in the direction of correct alignment, while $\partial_{{\bm{x}}} F_{\perp}$ is orthogonal to it.} \label{fig:scheme} \end{figure} \paragraph{Fitting condition.} When considering the hinge loss in Eq. \ref{eq:hingeLoss} with margin $\alpha^{-1}$ defined in Sec. \ref{sec:definition}, a training point $({\bm{x}}^\mu, y^\mu)$ is fitted (i.e. it has zero training loss) when $y^{\mu}\ F({\bm{x}}^\mu)\geq\alpha^{-1}$. Having $P$ training points and calling ${\bm{x}}^{\pm}$ the two of them in $B_\epsilon$ with $y({\bm{x}}^{\pm})=\pm 1$ that have the shortest distances $\delta^{\pm}$ from the true decision boundary, their fitting conditions $\pm F({\bm{x}}^{\pm})\geq \alpha^{-1}$ imply $F({\bm{x}}^{+}) - F({\bm{x}}^{-}) \geq 2\alpha^{-1}$. Assuming $F({\bm{x}})$ is differentiable in $B_\epsilon$, the last inequality can be approximated at linear order in $\epsilon$ as \begin{equation} \partial_{\bm{x}} F({\bm{x}}^{*}) \cdot \left( {\bm{x}}^+ - {\bm{x}}^{-}\right) \geq 2\alpha^{-1}. \label{eq:fit_ineq2} \end{equation} Defining $\delta_\parallel$ and $c$ as $\delta_\parallel = \delta^+ + \delta^-= \frac{\partial_{\bm{x}} F_\parallel}{||\partial_{\bm{x}} F_\parallel ||} \cdot \left( {\bm{x}}^+ - {\bm{x}}^{-}\right)$ and $c = -\frac{\partial_{\bm{x}} F_\perp}{||\partial_{\bm{x}} F_{\perp}||} \cdot \left( {\bm{x}}^+ - {\bm{x}}^{-}\right)$, inequality \ref{eq:fit_ineq2} becomes \begin{equation} \frac{||\partial_{\bm{x}} F_\parallel ||}{||\partial_{\bm{x}} F_\perp||} \geq \frac{1}{\delta_\parallel } \left( \frac{2 \alpha^{-1}}{||\partial_{\bm{x}} F_\perp||} + c \right). \label{eq:fit_ineq3} \end{equation} \paragraph{Role of the training set size $P$ and of the SGD temperature $T$.} Considering Eq. 
\ref{eq:fit_ineq3}: \begin{itemize} \item[(1)] we argue that increasing $P$ corresponds to shorter distances $\delta_{\parallel}$, which require a better alignment of the model decision boundary with the true one, that is a larger $\frac{||\partial_{\bm{x}} F_\parallel ||}{||\partial_{\bm{x}} F_\perp||}$. \item[(2)] Since increasing $T$ makes the training dynamics more noisy, we propose that a larger $T$ increases the non-informative component $||\partial_{\bm{x}} F_\perp||$. This implies, according to Eq. \ref{eq:fit_ineq3}, a larger informative component $||\partial_{\bm{x}} F_\parallel ||$ to fit the training set. \end{itemize} \begin{figure*} \centering \includegraphics[width=.31\textwidth]{boundary-mlp_bias-eta102.4-h4096-P1024.png} \includegraphics[width=.31\textwidth]{boundary-mlp_bias-eta2048.0-h4096-P1024.png} \includegraphics[width=.31\textwidth]{boundary-mlp_bias-eta102.4-h4096-P4096.png} \includegraphics[width=.31\textwidth]{boundary-linear-eta1_P128.png} \includegraphics[width=.31\textwidth]{boundary-linear-eta2_P128.png} \includegraphics[width=.31\textwidth]{boundary-linear-eta1_P1024.png} \vspace{-.5 cm} \caption{\textbf{Decision boundary for binary classification in 2 dimensions: (a) one-hidden-layer FC neural network and (b) perceptron model.} Red (blue) dots are training points with label $+1$ ($-1$) and the purple dashed line is the true decision boundary. The black line is the decision boundary obtained from training the model $F({\bm{x}})$ with SGD. \textbf{(I)-(II).} Increasing the SGD temperature $T$ gives larger gradients $\partial_{\bm{x}} F$ but not a better alignment between the decision boundaries: it increases the non-informative component (${\bm{w}}_\perp$ for the perceptron). \textbf{(I)-(III).} Increasing the number of training points $P$ gives larger gradients $\partial_{\bm{x}} F$ and a better alignment between the decision boundaries. 
} \label{fig:boundary_gradients} \end{figure*} According to (1) and (2), both $T$ and $P$ increase the gradient magnitude $||\partial_{\bm{x}} F({\bm{x}}^*)||$, but only increasing $P$ gives a better boundary alignment, that is a larger $||\partial_{\bm{x}} F_\parallel||/||\partial_{\bm{x}} F_\perp||$. This effect is illustrated in Fig. \ref{fig:boundary_gradients} for two-dimensional data. Overall, increasing both $P$ and $T$ requires larger gradient magnitudes $||\partial_{\bm{x}} F({\bm{x}}^*)||$ to fit the training set, which corresponds to a larger relative variation of the weights, in accordance with the observation of Eq. \ref{eq:lazy-weights}. This larger growth of the weights requires a longer training time, in accordance with the observation of Eq. \ref{eq:lazy-time}. In this view, a key effect of increasing $P$ is to diminish the distance between data of different labels, which are the last points to be fitted. We thus expect that changing $P$ affects the dynamics only late in training, as we demonstrate in Fig. \ref{fig:dynamics_TP}. Therefore, the hardest data to fit affect both the growth of the weights and the training time. \begin{figure} \centering \includegraphics[width=.8\columnwidth]{FC-5L_mnist-train_err_vs_time-P.png} \vspace{-.5cm} \caption{\textbf{FC on MNIST: training error in time, fixed $T$, changing $P$.} Increasing the training set size $P$ delays the point when the training error goes to zero, while the first part of the dynamics stays unchanged.} \label{fig:dynamics_TP} \end{figure} \subsection{Perceptron model} \label{sec:perceptron_problem} We consider a linearly-separable classification task on $d$-dimensional data ${\bm{x}} \in \mathbb{R}^d$ with labels $y({\bm{x}})=\pm 1$ given by the sign of the first component: \begin{equation} y({\bm{x}}) = \text{sign}(x_{1}). \end{equation} The true decision boundary in this problem is the hyper-plane $x_1=0$.
We study this problem with a linear classifier, called perceptron: \begin{equation} F({\bm{w}},{\bm{x}}) = \frac{1}{\sqrt{d}} {\bm{w}} \cdot {\bm{x}} \end{equation} initialized with ${\bm{w}}^0=0$.\\ Although the perceptron is always in the lazy regime\footnote{Because it is linear with respect to the weights ${\bm{w}}$.} and does not have a characteristic temperature of SGD controlling performance, it is of interest because the interpretation discussed in Sec. \ref{sec:interpretation} can be tested. In fact, the gradient $\partial_{\bm{x}} F({\bm{x}}^*)$ corresponds to the perceptron's weights ${\bm{w}}/\sqrt{d}$, with the informative and non-informative components respectively $||\partial_{\bm{x}} F_\parallel|| = w_1/\sqrt{d}$ and $||\partial_{\bm{x}} F_\perp|| = ||{\bm{w}}_\perp||/\sqrt{d}$. The alignment of the perceptron decision boundary with the true one is given by the ratio \begin{equation} w_1/||{\bm{w}}_\perp||. \end{equation} The fitting condition on the data point $({\bm{x}}^\mu, y^\mu)$ requires that the weights ${\bm{w}} = [w_1; {\bm{w}}_\perp]$ satisfy \begin{equation} w_1 |x^{\mu}_1| + y^{\mu} {\bm{w}}_\perp \cdot {\bm{x}}^{\mu}_\perp \geq \frac{\sqrt{d}}{\alpha} \label{eq:sat} \end{equation} which, by defining the random quantities $c_\mu = -y^{\mu} \frac{{\bm{w}}_\perp}{||{\bm{w}}_\perp||} \cdot {\bm{x}}^{\mu}_\perp$, can be recast as \begin{equation} \frac{w_1}{||{\bm{w}}_\perp||} \geq \frac{1}{|x^{\mu}_1|} \left(\frac{\sqrt{d}}{\alpha||{\bm{w}}_\perp||} + c_{\mu}\right). \label{eq:sat1} \end{equation} This relationship is a special case of Eq. \ref{eq:fit_ineq3}. In fact, increasing $P$ gives smaller values of $|x^\mu_1|$ which require larger $\frac{w_1}{||{\bm{w}}_\perp||}$ to fit the training set, while increasing $T$ corresponds to increasing $||{\bm{w}}_\perp||$. A qualitative confirmation of this effect is reported in Fig. 
\ref{fig:boundary_gradients}-(b).\\ In the following, we consider the regime of large $T$ and large $\alpha$, corresponding to $\frac{\sqrt{d}}{\alpha||{\bm{w}}_\perp||}\ll |c_\mu|$, for which condition \ref{eq:sat1} becomes \begin{equation} \frac{w_1}{||{\bm{w}}_\perp||}\geq \frac{c_{\mu}}{|x^{\mu}_1|} \left( 1+ o(1)\right). \label{eq:sat2} \end{equation} \paragraph{Data distribution and setting.} To control the density of data near the decision boundary $x_1=0$, we consider a distribution on the first component $x_1$ parametrized by $\chi \geq 0$ (Fig. \ref{fig:perceptron_data}): \begin{equation} \rho(x_1) = |x_1|^\chi e^{-x_1^2/2} / Z, \label{eq:rho_x1} \end{equation} with $Z=2^{\frac{1+\chi}{2}}\Gamma(\frac{1+\chi}{2})$ the normalization constant. The other $d-1$ components ${\bm{x}}_\perp = [x_i]_{i=2,...,d}$ are distributed as standard multivariate Gaussian numbers, i.e. ${\bm{x}}_\perp \sim \mathcal{N}({\bm 0}, {\bm{I}}_{d-1})$. $\chi=0$ corresponds to the Gaussian case. This data distribution was first considered in \citet{tomasini2022failure}. \begin{figure} \centering \includegraphics[width=1\columnwidth]{depleted_sign-data_distribution.png} \vspace{-.8cm} \caption{\textbf{Perceptron model, data distribution on the $x_1$ component.} The sign of $x_1$ determines the class $y=\text{sign}(x_1)$. For $\chi=0$ the distribution is Gaussian.} \label{fig:perceptron_data} \end{figure} The learning setting is defined identically to the one of neural networks in Sec. \ref{sec:definition}. We consider the case $1 \ll d \ll P$, where $d$ is the dimension of the data and the perceptron weights and $P$ is the number of training points. We consider this a realistic limit, given the effective dimension $d_{\text{eff}}$ of real datasets ($d_{\text{eff}}\approx 15$ for MNIST and $d_{\text{eff}}\approx 35$ for CIFAR-10 \citep{spigler2020asymptotic}) with respect to the number of training samples $P>10^3$.
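The recast form Eq. \ref{eq:sat1} is an exact algebraic rewriting of the fitting condition Eq. \ref{eq:sat}, since dividing by $|x^{\mu}_1|\,||{\bm{w}}_\perp||>0$ preserves the inequality direction. A minimal numerical sanity check of this equivalence (Python sketch; the random draws of weights and data are arbitrary choices for illustration, not the trained model):

```python
import numpy as np

rng = np.random.default_rng(0)
d, alpha = 64, 2.0

agree = True
for _ in range(1000):
    w = rng.normal(size=d)            # perceptron weights [w1; w_perp]
    x = rng.normal(size=d)            # a data point
    y = 1.0 if x[0] > 0 else -1.0     # label y = sign(x1)

    # Fitting condition: y * F(x) >= 1/alpha, with F(x) = w.x / sqrt(d)
    fitted = y * (w @ x) / np.sqrt(d) >= 1 / alpha

    # Recast form: w1/||w_perp|| >= (sqrt(d)/(alpha*||w_perp||) + c) / |x1|
    w1, wp = w[0], w[1:]
    wp_norm = np.linalg.norm(wp)
    c = -y * (wp / wp_norm) @ x[1:]
    recast = w1 / wp_norm >= (np.sqrt(d) / (alpha * wp_norm) + c) / abs(x[0])

    agree = agree and bool(fitted) == bool(recast)
```

Every draw satisfies or violates both forms simultaneously.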
\paragraph{Empirical observations.} A key result is that the perceptron displays asymptotic behaviours in the change of weights and training time similar to those of neural networks. For the considered perceptron initialized with ${\bm{w}}^0=0$, the weights variation $\Delta w$ corresponds to $||{\bm{w}}||$. Since $w_1/||{\bm{w}}_\perp||\gg 1$ for large $P$, we have $\Delta w = ||{\bm{w}}||\simeq w_1$. Eqs. \ref{eq:lazy-weights} and \ref{eq:lazy-time} are verified with exponents reported in Table \ref{tab:exponents}, as shown in Fig. \ref{fig:perceptron_scheme}-(a,c).\\ In addition, we observe that $||{\bm{w}}_\perp||$ at the end of training is proportional to $T$ and independent of $P$ (Fig. \ref{fig:perceptron_scheme}-(b)): \begin{equation} ||{\bm{w}}_\perp||\sim T. \label{eq:wp_T} \end{equation} This observation is a positive test of the effect of $T$ on $||\partial_{\bm{x}} F_\perp||$ proposed in Sec. \ref{sec:interpretation}. \paragraph{Non-universality of the exponents.} Remarkably, the exponents $\gamma$ and $b$ of $P$ for the perceptron depend on the parameter $\chi$ of the data distribution. This finding can be rationalized by considering condition \ref{eq:sat2} at the end of training. In fact, satisfying \ref{eq:sat2} for every training point requires $\frac{w_1}{||{\bm{w}}_\perp||}\geq \underset{\mu}{\text{max}} \frac{c_{\mu}}{|x^{\mu}_1|}$. In Appendix \ref{app:max}, classical extreme value theory is used to show that, for large $P$, the typical value of $\underset{\mu}{\text{max}} \frac{c_{\mu}}{|x^{\mu}_1|}$ behaves asymptotically as $\langle \underset{\mu}{\text{max}} \frac{c_{\mu}}{|x^{\mu}_1|} \rangle = C P^{\frac{1}{1+\chi}} + o\left( P^{\frac{1}{1+\chi}}\right)$ for some constant $C$. Therefore we obtain a prediction for the exponent $\gamma$: \begin{equation} \gamma = \frac{1}{1+\chi}, \label{eq:gamma_chi} \end{equation} in excellent agreement with data (Fig. \ref{fig:perceptron_scheme}-(a)).
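The scaling of Eq. \ref{eq:gamma_chi} is driven by the training points closest to the boundary: since $\rho(x_1)\propto|x_1|^\chi$ near $x_1=0$, the smallest $|x^{\mu}_1|$ among $P$ samples shrinks as $P^{-1/(1+\chi)}$. This can be checked with a short Monte Carlo sketch (Python; the Gamma-based sampler for Eq. \ref{eq:rho_x1} is our own construction, and the sample sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_x1(P, chi):
    # Sample rho(x1) ∝ |x1|^chi exp(-x1^2/2): x1^2/2 is Gamma((1+chi)/2)
    # distributed, and the sign is uniform.
    g = rng.gamma((1 + chi) / 2, 1.0, size=P)
    return rng.choice([-1.0, 1.0], size=P) * np.sqrt(2.0 * g)

def mean_min_abs(P, chi, trials=200):
    return np.mean([np.abs(sample_x1(P, chi)).min() for _ in range(trials)])

gamma_est = {}
for chi in (0, 3):
    m_small = mean_min_abs(10**2, chi)
    m_large = mean_min_abs(10**4, chi)
    # min_mu |x1^mu| shrinks as P^(-1/(1+chi)), so the log-ratio over two
    # decades in P estimates the exponent gamma = 1/(1+chi).
    gamma_est[chi] = np.log(m_small / m_large) / np.log(100.0)
```

For $\chi=0$ the estimate is close to $1$ and for $\chi=3$ close to $1/4$, consistent with $\gamma=\frac{1}{1+\chi}$.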
This further confirms that the asymptotic behaviour with respect to $P$ is controlled by the statistics of the points close to the decision boundary. Thus the exponents are non-universal, since they depend directly on the data distribution.\\ An estimate of the parameter $\chi$ for some image datasets is reported in \citet{tomasini2022failure} through the study of kernel ridge regression. For binary CIFAR10, $\chi_{CIFAR}=1.5$ is reported, which according to Eq. \ref{eq:gamma_chi} corresponds to $\gamma = 0.4$, a value compatible with those observed in neural networks (Table \ref{tab:exponents}). \begin{figure*} \centering \includegraphics[width=.31\textwidth]{depleted_sign-w1_variation_TPchi-d128.png} \includegraphics[width=.31\textwidth]{depleted_sign-wperp_variation_TPchi-d128.png} \includegraphics[width=.31\textwidth]{depleted_sign-time-empirical_TPchi-d128.png} \vspace{-.5cm} \caption{ \textbf{Perceptron model, $d=128$, varying $T$ and $P$.} \textbf{(a)} \textit{Inset:} Total variation of the weight $w_1$ at the end of training with respect to SGD noise $T$ and training set size $P$ (colors), for different data distributions $\chi=0$ (empty circles) and $\chi=3$ (full diamonds). \textit{Main:} Plotting $w_1 P^{-\frac{1}{1+\chi}}$ gives a curve proportional to $T$ for each value of $\chi$, revealing the asymptotic behaviour $w_1\sim T P^{\gamma}$ (Eq. \ref{eq:lazy-weights} for neural networks) with a data-dependent exponent $\gamma=\frac{1}{1+\chi}$ in accordance with prediction \ref{eq:gamma_chi}. \textbf{(b)} Total variation of $||w_\perp||$ for the same setting of panel (a). $||w_\perp||$ is proportional to $T$ independently of $P$, as stated in Eq. \ref{eq:wp_T}. \textbf{(c)} \textit{Inset:} Total training time $t^*$ for the same setting as panel (a): $t^*$ increases with both $T$ and $P$.
\textit{Main:} Plotting $t^* P^{-b}$, with $b$ depending on $\chi$, gives approximately one curve proportional to $T$ for each value of $\chi$, corresponding to the asymptotic behaviour $t^*\sim T P^{b}$ as found for neural networks (Eq. \ref{eq:lazy-time}). } \label{fig:perceptron_scheme} \end{figure*} \section{Conclusions} \label{sec:discussion} In this work we have explored the effect of SGD noise in different training regimes of neural networks using the hinge loss. Since this loss goes to zero at the end of training, the minima found by the algorithm are always flat: a static view explaining the benefit of SGD in terms of the flatness of minima cannot be applied. Instead, we propose a dynamical view where SGD noise increases the weights of the model in directions that are detrimental for learning, which in turn induces an increase in the useful directions to fit the training set. Fitting is hardest for data close to the decision boundary, whose statistics depend both on the size of the training set and on the data distribution near the boundary. This view naturally explains our observations that the total weights variation, and the training time, depend on both the SGD noise and the size of the training set. It also rationalizes the puzzling observation that the characteristic SGD temperature for which weight changes become significant and the test error is affected by the noise depends on the training set size. Exponents characterizing this relationship are non-universal. We expect them to depend on the data distribution near the decision boundary, as we demonstrated for the perceptron.\\ Our work thus clarifies a key effect of SGD, and explains the range of temperatures where SGD noise matters. However, understanding the sign of the effect of this noise on performance (beneficial or detrimental), and how it relates to the data structure and the network architecture, appears to be a particularly vexing question.
For example, for the lazy regime of CNNs, we observe a non-monotonic behaviour of the test error, which initially grows and then decays as the SGD noise is increased. \section*{Acknowledgments} We thank Francesco Cagnetta, Alessandro Favero, Bastien Olivier Marie Göransson, Leonardo Petrini and Umberto Maria Tomasini for helpful discussions. This work was supported by a grant from the Simons Foundation (\# 454953 Matthieu Wyart). \newpage
\section{Introduction} Kinetic Ising models were originally intended to study relaxational processes near equilibrium states \cite{gla63,kaw72}. Later, combinations of Glauber and Kawasaki dynamics were used successfully in investigating questions about temperature driven nonequilibrium phase transitions \cite{dem85,gon87,wan88}. In a previous paper a class of general nonequilibrium kinetic Ising models (NEKIM) with combined spin flip dynamics at $T=0$ and spin exchange dynamics at $T=\infty$ has been introduced \cite{men94}, in which, for a range of parameters of the model (other than temperature), a directed-percolation-like Ising-to-active phase transition takes place. The line of phase transitions has been found to belong to the same universality class as the phase transitions occurring in the cellular automaton models introduced and investigated by Grassberger et al. \cite{gra84,gra89}. Numerical studies of other models showing a similar type of phase transition have been reported recently \cite{jen94,kim94}. In the present note we consider a generalized form of NEKIM by allowing for exchanges of arbitrary range, $R$. The mean-field (MF) limit of NEKIM phase transitions is reached when $R\rightarrow\infty$ and/or the probability of the exchanges, $p_{ex}$, relative to the time scale of spin-flips approaches infinity. In a systematic generalized MF approach (GMF) \cite{gut87,dic88,sza91}, besides the lowest order approximation (ordinary dynamic MF, $n=1$), the second order cluster equations ($n=2$) could also be solved exactly. Numerical solutions have been obtained up to sixth order. In this way we have found strong theoretical evidence for the conjecture, stemming from simulations \cite{men94}, that the line of Ising-to-active second order phase transition points ends at the Glauber limit ($\delta_{c}=0$, $\delta$ being a parameter of the spin-flip transition rate of crucial importance here) with maximal exchange range and/or rate.
It is shown here that this end point is of first order (tricritical point) and is described by plain MF theory. The relaxation time is obtained as $\tau\propto 1/{\mid\delta\mid}$. GMF results show that, with increasing $n$, the critical point becomes of second order and moves towards negative values of $\delta$ of increasing absolute value. The coherent anomaly method \cite{suz86,kol94} has been used to extract the exponent $\beta$ of the order parameter from the results of GMF calculations, yielding $\beta=1.0$. \section{The model} \label{sec:2} In NEKIM we have started with the general form of the Glauber \cite{gla63} spin-flip transition rate in one dimension for spin $s_i$ sitting at site $i$ ($s_i=\pm1$): \begin{equation} w_i = {{\Gamma/2}}(1+\delta s_{i-1}s_{i+1})\left(1 - {\gamma\over2}s_i(s_{i-1} + s_{i+1})\right) \end{equation} where $\gamma=\tanh{{2J}/{kT}}$ ($J$ denoting the coupling constant in the Ising Hamiltonian), while $\Gamma$ and $\delta$ are further parameters which can, in general, also depend on temperature. When $T=0$ is taken, $\gamma=1$ and (1) leads to two independent rates: \begin{equation} p_{RW}\equiv{2w_{\uparrow\downarrow\downarrow}}={\Gamma}(1-\delta),\,\, p_{an}\equiv{w_{\uparrow\downarrow\uparrow}}={\Gamma}(1+\delta) \end{equation} responsible for random walk and pairwise annihilation of kinks, respectively. $\Gamma$ and $\delta$ are constants to be varied.\\ The other ingredient of NEKIM has been a spin-exchange transition rate of neighbouring spins (the Kawasaki \cite{kaw72} rate at $T=\infty$): \begin{equation} w_{ii+1}={1\over2}p_{ex}[1-s_is_{i+1}] \end{equation} where $p_{ex}$ is the probability of spin exchange. $p_{RW}$, $p_{an}$ and $p_{ex}$ have been chosen to be normalised to unity, leading to the relation: \begin{equation} p_{ex}=1-2\Gamma \end{equation} The spin-exchange process induces pairwise creation of kinks in the immediate neighbourhood of an existing kink: $k \rightarrow 3k$ with probability ${p_{ex}}$.
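At $T=0$ the rate (1) indeed reduces to the two elementary kink rates of eq.(2); a quick numerical check (Python sketch; the parameter values $\Gamma=.35$, $\delta=-.4$ are illustrative choices):

```python
def w_flip(s_left, s, s_right, Gamma, delta, gamma=1.0):
    # Glauber spin-flip rate of eq.(1); gamma = tanh(2J/kT), equal to 1 at T = 0.
    return (Gamma / 2.0) * (1.0 + delta * s_left * s_right) \
                         * (1.0 - (gamma / 2.0) * s * (s_left + s_right))

Gamma, delta = 0.35, -0.4                      # illustrative parameter values
p_RW = 2.0 * w_flip(+1, -1, -1, Gamma, delta)  # kink random walk,  eq.(2)
p_an = w_flip(+1, -1, +1, Gamma, delta)        # kink annihilation, eq.(2)
p_ex = 1.0 - 2.0 * Gamma                       # normalization,     eq.(4)
```

One recovers $p_{RW}=\Gamma(1-\delta)$, $p_{an}=\Gamma(1+\delta)$ and $p_{RW}+p_{an}+p_{ex}=1$.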
From this process the ultimate development of an active phase can arise, and in ref. \cite{men94} we have made the conjecture, and found numerical evidence for it, that $p_{RW}>p_{an}$ (i.e. $\delta<0$) is necessary for this to happen. Now we generalize the original NEKIM model by allowing the range of the spin-exchange to vary. Namely, eq.(3) is replaced by \begin{equation} w_{i,i+k}={1\over2}p_{ex}[1-s_{i}s_{i+k}], \end{equation} where $i$ is a randomly chosen site and $s_i$ is allowed to exchange with $s_{i+k}$ with probability $p_{ex}$. The offset $k$ is again randomly chosen in the interval $1\leq k\leq R$, $R$ being thus the range of exchange. The spin-flip part of the model is left unchanged. We have carried out numerical studies with this generalized model in order to locate the lines of Ising-to-active phase transitions. Spin-flip and spin-exchange have been applied alternatingly at each time step; the spin-flip part has been applied using two-sublattice updating, while making $l$ Monte Carlo attempts at random ($l$ denotes the size of the chain) has been counted as one time-step of exchange updating. It is worth mentioning that, besides $k\rightarrow3k$, the process $k\rightarrow5k$ can also occur for $R\geq3$, and the new kink pairs are not necessarily neighbours. The character of the phase transition line at $R>1$ is similar to that for $R=1$, except that the active phase extends, asymptotically, down to $\delta_c=0$. This is illustrated in Fig. 1, where besides $R=1$ the case $R=3$ is also depicted: the critical value of $-\delta_{c}$ is shown as a function of $p_{ex}$ (Figs. 1.a) and b)). Moreover, $-\delta_{c}$ as a function of $R$ is also shown at constant $\Gamma=.35$. The abscissa has been suitably chosen to squeeze the whole (infinite) range of $R$ between $0$ and $1$ and to get phase lines of comparable size (hence the power 4 of $R/(1+R)$ in case of Fig. 1.c)).
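A minimal simulation sketch of the generalized dynamics is given below (Python; the lattice size, parameter values and exact update protocol are our own illustrative choices, not those used in the paper). It checks two exact properties: on a ring the kink number is always even, and with $p_{ex}=0$ no kink pairs are created, so the kink number can never grow:

```python
import numpy as np

rng = np.random.default_rng(1)

def kinks(s):
    # number of domain walls (kinks) on a ring
    return int(np.sum(s != np.roll(s, 1)))

def nekim_sweep(s, Gamma, delta, p_ex, R):
    L = len(s)
    # spin-flip sweep with the Glauber rate of eq.(1) at T = 0 (gamma = 1)
    for _ in range(L):
        i = int(rng.integers(L))
        sl, sr = s[(i - 1) % L], s[(i + 1) % L]
        w = (Gamma / 2.0) * (1.0 + delta * sl * sr) * (1.0 - 0.5 * s[i] * (sl + sr))
        if rng.random() < w:
            s[i] = -s[i]
    # ranged spin-exchange sweep, eq.(5): exchange s_i with s_{i+k}, 1 <= k <= R
    for _ in range(L):
        i = int(rng.integers(L))
        j = (i + int(rng.integers(1, R + 1))) % L
        if s[i] != s[j] and rng.random() < p_ex:
            s[i], s[j] = s[j], s[i]
    return s

s = rng.choice(np.array([-1, 1]), size=200)
k_init = kinks(s)
for _ in range(100):
    s = nekim_sweep(s, Gamma=0.35, delta=0.5, p_ex=0.0, R=3)
k_final = kinks(s)
# With p_ex = 0 kinks only walk and annihilate in pairs: the count never grows.
```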
Besides the critical exponent $\alpha$, used in identifying the phase transition points, the other critical exponents characterizing the phase transition have also been determined numerically around some typical points (far from the end-points) of the phase transition lines for $R>1$, with the same result as was obtained in \cite{men94} for the case $R=1$: the exponents agree - within error - with those of Grassberger's automata \cite{gra84,gra89}. On the phase diagrams of Fig. 1 two non-typical regimes can be distinguished, namely\\ a). $p_{ex}\approx 0$ (on Figs. 1.a) and b)). Here NEKIM's behaviour is getting close to that of the plain spin-flip model at $T=0$: the steady state is everywhere Ising-like except for the limit-point $\delta=-1$, where $p_{an}=0$ and the initial kink density is sustained. At this specific point the energy becomes conserved \cite{spo89} and a phase transition takes place with a change in the form of the time dependence of correlations from exponential to stretched exponential. This limit will not be further discussed here. b). $p_{ex}\approx 1.0$ and/or $R\rightarrow\infty$ (on Figs. 1.a)-c)). For $p_{ex}\rightarrow1$, $R=1$ we have concluded in \cite{men94} that $\delta_{c}\rightarrow{-0}$, though it has been difficult to get reliable numerical estimates for the critical exponents of the transition due to the long transients. In limit b), after each step of spin-flip ordering, maximal mixing of the neighbourhood of each spin follows, suggesting that a mean-field type situation takes place. It is important that according to eq.(4) $p_{ex}=1$ is approached together with $\Gamma\rightarrow0$ and thus $p_{ex}/{\Gamma}\rightarrow \infty$. (As $\Gamma$ sets the time scale of the ordering process, its vanishing tendency enhances the effect of mixing.) The same limit can be reached at fixed $p_{ex}$ by increasing $R$ to infinity (Fig. 1.c)).
We have also checked the decrease of $-\delta_c$ with $1/R$ numerically at fixed $\Gamma=.35, p_{ex}=.3$ and found, over the decade of $R=4-40$, that \begin{equation} {-\delta_c}\approx{2.0{(1/R)}^2} \end{equation} reminiscent of a crossover type behaviour of equilibrium and non-equilibrium phase transitions \cite{rac94}, here with crossover exponent $1/2$. It should be noted here that, to get closer to the expected $\delta_{cMF}=0$, longer chains (we used $l$ values up to $20000$) and longer runs (here up to $t=5\cdot10^4$) would have been necessary. The former to ensure $l\gg R$ \cite{mon93} and the latter to overcome the long transients present at the first few decades of time steps. In what follows we will always refer to the MF limit in connection with $p_{ex}\rightarrow 1$ (i.e. $p_{ex}/{\Gamma}\rightarrow \infty$), for the sake of concreteness, but keep in mind that $R\rightarrow \infty$ can play the same role. \section{Mean-field theory and corrections to mean-field} \label{3} It is straightforward to find the MF equation for the spin-flip model alone (at $T=0$). By denoting the average spin density by $M$ we get, using (1) \begin{equation} {dM/{dt}}=-{\delta\Gamma}M(M^2-1) \end{equation} The fixed point solutions are $M^{*}=1,-1,0$ of which the first two are stable if $\delta>0$, while the $M^{*}=0$ solution is stable for $\delta<0$, suggesting a (first order) order-disorder-type phase transition at $\delta=0$. That the $M^{*}=0$ fixed point is not of antiferromagnetic type can be shown by carrying out a two-sublattice MF analysis of the model [19]. The first sublattice consists of the odd lattice points with average magnetization $M_1$, the second of the even lattice points with average magnetization $M_2$.
The total average magnetization $M={(M_1+M_2)}/2$ and the difference of the sublattice magnetizations $\Delta={(M_1-M_2)}/2$ obey the following MF equations: \begin{equation} {dM/{dt}}=-{\delta{\Gamma}}M(M^2-1-{\Delta}^2) \end{equation} \begin{equation} {d\Delta/{dt}}=-{2\Gamma\Delta}+{\delta{\Gamma}}\Delta{(M^2-1-{\Delta}^2)} \end{equation} The solutions for fixed point $\Delta^{*}\neq0$ are: $M^{*}=0$, ${\Delta^{*}}^2=-1-{2\over\delta}$. The values of $\delta$ being restricted to $1\geq\delta\geq {-1}$, the only possibility to ensure ${\Delta^{*}}^2>0$ at the same time is $\delta=-1$, with ${\Delta^{*}}^2=1$. Thus we will suppose that the transition at $\delta_c=0$ is of order-disorder type. A small fluctuation $dM$ around one of the stable fixed points decreases as ${dM}{\propto}{e^{-t/\tau}}$ with $\tau=1/{\Gamma{\mid\delta\mid}}$. This relaxation time becomes infinite at the MF transition point. The corresponding critical slowing down in its vicinity explains the longer and longer transients observed during simulations. Fig. 2 serves to illustrate the MF result in comparison with results of simulation of NEKIM at three values of $p_{ex}$. The average density of kinks at $t=\infty$ is depicted versus $\delta$. The MF approximation corresponds to Fig. 2.a), with a jump at $\delta_{cMF}=0$. Fig. 2.b) shows the behaviour of the pure spin-flip model at $T=0$: the steady state is everywhere Ising-like ($\rho(\infty)=0$) except for $\delta=-1$. Figs. 2.c), d) and e) are results of simulation of NEKIM at $p_{ex}=.02$, $p_{ex}=.9$ and $p_{ex}=.98$, respectively. By further decreasing (increasing) $p_{ex}$, the NEKIM curves get closer and closer to b) (a)), respectively. Such behaviour is in accordance with our expectations: it supports the MF interpretation of the high-$p_{ex}$ part of the phase diagram.
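A direct numerical integration of eq.(7) illustrates both the fixed-point structure and the divergence of the relaxation time $\tau=1/{\Gamma{\mid\delta\mid}}$ (Python sketch; simple Euler scheme with illustrative parameter values):

```python
import numpy as np

def integrate_M(M0, delta, Gamma=0.35, t_end=200.0, dt=1e-3):
    # Euler integration of the MF equation (7): dM/dt = -delta*Gamma*M*(M^2 - 1)
    M = M0
    for _ in range(int(t_end / dt)):
        M += dt * (-delta * Gamma * M * (M**2 - 1.0))
    return M

M_plus = integrate_M(0.5, delta=+0.5)   # delta > 0: flows to the stable M* = 1
M_zero = integrate_M(0.5, delta=-0.5)   # delta < 0: flows to the stable M* = 0

# Near M = 0 with delta < 0, dM/dt ≈ delta*Gamma*M, so a small fluctuation
# decays as exp(-t/tau) with tau = 1/(Gamma*|delta|).
tau = 1.0 / (0.35 * 0.5)
M_tau = integrate_M(0.01, delta=-0.5, t_end=tau)   # ≈ 0.01 * exp(-1)
```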
We have applied the generalized mean-field calculation method, or cluster approximation \cite{gut87,dic88}, in the form applied to cellular automata \cite{sza91}, in order to go beyond the lowest order approximation shown above. Steady-state equations have been set up for block probabilities in $n=2,\dots,6$-th order. The system of GMF equations is solvable analytically also for $n = 2$. The $n = 2$ approximation gives the density of kinks $\rho(\infty)$ as: \begin{equation} \rho(\infty) ={{{3\over 4}\,{{{ p_{RW}}}^2} + { p_{an}} - { p_{RW}}\,{ p_{an}} - {\sqrt{{{{{1\over 16} p_{RW}}}^4} + {3\over 2}\,{{{ p_{RW}}}^2}\,{ p_{an}} - {1\over 2}\,{{{ p_{RW}}}^3}\,{ p_{an}} + {{{ p_{an}}}^2} - 2\,{ p_{RW}}\,{{{ p_{an}}}^2}}}}\over {2\,\left({1\over 2}\,{{{ p_{RW}}}^2} - { p_{RW}}\,{ p_{an}} + {{{ p_{an}}}^2} \right) }} \end{equation} for $\delta<0$. For $\delta>0$, $\rho(\infty)=0$, i.e. GMF still predicts a first order transition at $\delta = 0$; the jump in $\rho(\infty)$ at $\delta=0$, however, decreases monotonically with decreasing $\Gamma$, according to eqs. (10) and (2). In order to get the $n > 2$ approximations, the set of GMF equations can be solved only numerically. We determined the solutions of the $n=3,4,5,6$ approximations for the kink density at i) $\Gamma = 0.35$ (Figure 3) and of the $n=3,4,5$ approximations at ii) $\Gamma=.05$ (Figure 4). As can be seen, the transition curves become continuous, with negative values for $\delta_{c}^n$ ($\delta_{c}^n$ denotes the value of $\delta$ in the $n$-th approximation for which the corresponding $\rho(\infty)$ becomes zero). Moreover, $\mid\delta_{c}^n\mid$ increases with growing $n$. As increasing $n$ corresponds to decreasing mixing, i.e. decreasing $p_{ex}$, the tendency shown by the above results is correct. Figs. 5.a) and 5.b) show a quantitative - though only tentative - comparison between the results of GMF and the simulated NEKIM phase diagrams.
The obtained GMF data for $\delta_{c}^n$ corresponding to $n=3,4,...,6$ ($\Gamma=.35$) are depicted in Fig. 5.a) as a function of $1/(n-3)$, together with results of simulations. The correspondence between $n$ and $p_{ex}$ has been chosen as the simplest conceivable one. (Note that $\delta_c\not=0$ is first obtained for $n=4$.) The simulated phase diagram has been obtained without requiring the fulfillment of eq.(4), at constant $\Gamma=.35$. In this case the $\delta_c=0$ limit, of course, is not reached and a purely second order phase transition line can be compared with GMF results (for $n$ values where it also predicts a second order transition). Simulations for $R=3$ have been found to lead to $\mid\delta_c\mid$ values low enough to fit the GMF data. The (polynomial) extrapolation of GMF data to $n\rightarrow\infty$ (corresponding to $p_{ex}=0$, i.e. plain spin-flip), shown also in Fig. 5, could have been expected to approach $\delta=-1$. That this is not the case can be due to the circumstances that, upon increasing $n$: i) GMF starts here from a first-order MF phase transition, ii) which becomes second order, and iii) GMF should end up at a pathological point, discussed shortly in Section II, with quite unusual (and not yet cleared up) properties. Fig. 5.b) shows the $n=3,4,5$ results for $\Gamma=.05$, which are now compared with the $R=1$ simulation data. The critical exponent $\beta$ of the order parameter has been determined by processing the results of the GMF approximation with the Coherent-Anomaly Method (CAM) \cite{suz86,kol94}.
According to CAM the GMF solution for the kink density $\rho$ at a given level of approximation -- in the vicinity of the critical point, $\delta_c$ -- is the product of some mean-field behaviour multiplied by the anomaly factor $a(n)$: \begin{equation} \rho(n) = a(n) \ (\delta / \delta_c^n - 1)^{\beta_{MF}} \ , \end{equation} The true critical exponent, $\beta$, can be obtained by fitting, using the knowledge that the divergence of the anomaly factor scales as \begin{equation} a(n) \sim (\delta_c^n / \delta_c - 1)^{\beta - \beta_{MF}} \ , \label{ano} \end{equation} as the level of approximation $n$ goes to infinity. More precisely, for the available low levels of approximation ($n \le 6$), corrections to scaling should also be taken into account: \begin{equation} a(n) = b \ \Delta_n^{\beta - \beta_{MF}} + c \ \Delta_n^{\beta - \beta_{MF} + 1} + ... \ , \end{equation} where $b$ and $c$ are constants and the invariant variable \begin{equation} \Delta_n = (\delta_c / \delta_c^n)^{1/2} - (\delta_c^n / \delta_c)^{1/2} \end{equation} is used. This new variable was introduced recently \cite{kol94} to avoid the ambiguity in the choice of the independent variable ($\delta \leftrightarrow \delta^{-1}$). Using this new variable, accurate estimates were given for the critical exponents of the 3D Ising model \cite{kol94} and for the exponent $\beta$ of the stochastic Rule 18 cellular automaton \cite{odo95}. From our GMF approximation results, as shown in Fig. 3, we can use the $n=4,5,6$ data for the CAM analysis, while the $n=3$ result can be taken to represent the lowest order MF approximation (with ${\delta_c}^{MF}=0$) for a {\it continuous} transition (no jump in $\rho$ for $n=3$). For $\delta_c$ we use the results of the polynomial extrapolation. Fig. 6 shows that in the $n=3$ approximation the exponent is $\beta=1.0064$, thus $\beta_{MF}\approx1$. Graphs similar to that in Fig. 6 have been obtained for $n=4,5,6$ as well.
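The leading scaling of eq.(12) can be turned into a quantitative estimate by fitting the anomaly factors listed in Table \ref{tablex} in log-log form (Python sketch; the values below are copied from the table, and the correction-to-scaling terms of eq.(13) are neglected):

```python
import numpy as np

# (Delta_c^n, a(n)) for n = 4, 5, 6, taken from Table I
Delta_n = np.array([2.49043, 1.81022, 1.45766])
a_n = np.array([0.01083, 0.01074, 0.01079])

# Eq. (12): a(n) ~ Delta_n^(beta - beta_MF); fit the exponent in log-log form
slope = np.polyfit(np.log(Delta_n), np.log(a_n), 1)[0]
beta = 1.0 + slope  # beta_MF = 1 from the n = 3 approximation
```

The fitted slope is compatible with zero, i.e. $\beta \simeq \beta_{MF} = 1$.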
Consequently, as Table \ref{tablex} shows, the anomaly factor does not depend on $n$. This means, according to eq.(12), that the exponent is estimated to be equal to the 'mean-field' value $\beta \simeq \beta_{MF} = 1$. \nopagebreak \narrowtext \begin{table} \caption{CAM calculation results } \begin{tabular}{lrl} $n$ & $\Delta_c^n$ & $a(n)$ \\ \tableline $4$ & $2.49043$ & $0.01083$ \\ $5$ & $1.81022$ & $0.01074$ \\ $6$ & $1.45766$ & $0.01079$ \\ \end{tabular} \label{tablex} \end{table} \pagebreak \section{Discussion} \label{4} The mean-field limit of the line of non-thermal phase transitions occurring in a family of one-dimensional kinetic Ising models has been analysed here. This line consists of second order Ising-to-active phase transition points which belong to the universality class found first by Grassberger et al. \cite{gra84,gra89}. The first order endpoint of this line has been found to be described by MF theory. Systematic generalized MF theory has been applied to treat bigger and bigger blocks of size $n$ exactly, in order to be able to depart from this tricritical point. Numerically solvable results up to $n=6$ have given support to the results of simulations and especially provided a value for the critical exponent $\beta$ of the order parameter (density of kinks), $\beta=1$, which is in accord with Grassberger's result: $\beta=.94\pm.06$. The value $\beta=1$ coincides with the MF $\beta$-exponent for directed percolation. This is not surprising in the case of our $n=3$ result, which we have used as an effective MF one for a {\it continuous} transition at $\delta=0$. As Grassberger has pointed out \cite{gra89}, in the rate (or MF) approximation there is no difference between models leading to the Ising-to-active transition and directed percolation.
In this argumentation, however, the MF equation is written for the kink density (and not for the magnetization as in eq.(7)) and has the form $ dn/dt= 2\mu n-2\lambda n^2 $, where $\mu$ and $\lambda$ are the reproduction and annihilation rates, respectively. Nevertheless, a heuristic equation of similar type can also be constructed in the present model using $\mu\propto (p_{RW}-p_{an})$ as a rate making kink reproduction effective (a conclusion stemming from simulations). The corresponding MF critical value is then $\delta_{c}^{MF}=0$ and $\beta^{MF}=1$. It is, however, surprising that higher-order approximations of GMF have given practically no deviation from the MF value of $\beta$. For branching annihilating random walk with four offspring, Jensen \cite{jen94} conjectures a value $\beta=13/14$ on the basis of simulation results; no theoretical motivation appears to exist for this value, however. Deciding what the exact value of $\beta$ in this universality class is appears to be a challenging task, and calculating GMF in even higher approximations than here would probably be worthwhile. \acknowledgments The authors would like to thank the Hungarian research fund OTKA (Nos. T-2090, T-4012 and F-7240) for support during this study. One of us (N.M.) would like to acknowledge support by the Deutsche Forschungsgemeinschaft, SFB341, during her stay in K\"oln, where part of this work has been carried out.
\section{Introduction} The main problem of dynamics is probably to understand in depth the role and meaning of the term ``time''. Two kinds of time are used in physics. On one side, the parametric time $t$, just an auxiliary mathematical element which, strictly speaking, is not observable since any other time of the form $t'=f(t)$, $f$ being any well behaved function, serves equally to describe the motion of a dynamical system. On the other, the time measured with particular clocks, say $\sigma$, which are dynamical systems obeying the laws of physics. The latter is a dynamical variable, for instance the angle of a pointer, and deserves therefore to be qualified as dynamical. The consequences of the existence of these two kinds of time, parametric and dynamical, have perhaps not been explored enough. Here we show that the dynamical time $\sigma(t)$ measured by a clock $\sigma$ can be obtained as the solution of the equation of motion that characterizes the clock, of the form ${\rm d} \sigma /{\rm d} t=u(t)$, where $u(t)$ denotes here the ``march'' of the clock $\sigma$ with respect to the parametric time $t$. While $\sigma(t)$ has a real dynamical character, $t$ is just a mathematical parameter, which (i) has a purely auxiliary role in writing the action and obtaining the equations of motion, (ii) lacks any physical or dynamical nature, (iii) is a symbol that describes the evolving character of reality, and (iv) is not observable. The differences between parametric and dynamical times could have significant consequences, since two dynamical clock-times, say $\sigma_1$ and $\sigma_2$, are not necessarily equivalent, so that there could be different times accelerating with respect to one another. The consequences of these arguments could be important; we just mention here two cases in which they could shed some light. First is the meaning of the cosmic time. Second, the fourth Heisenberg relation, which requires the time to be a dynamical variable.
In order to write the equations of motion of a system in terms of really observable and dynamical quantities, what is done is to compare two motions, one of the system and the other of a standard clock. This requires the use of two principles. The first is the parametric invariance under transformations $t\rightarrow t'=f(t)$, an important property of gravitation theories; the other is a principle of coherence, {\it i.e., } that the equations of motion of both the system and the clock be described by the same physical theory. \section{Parametric invariance in classical dynamics} Though the parametric time is a fundamental concept in classical dynamics, as said before, it has a non-dynamical character. As a consequence, there is no canonical momentum conjugate to $t$. Common wisdom assumes that this non-dynamical $t$ is measured with a clock, but this assumption must be submitted to a rigorous analysis. Note that it is always possible to synchronize two clocks at a certain initial time $t_0$, but what cannot be assured is that they will keep ticking at the same rate. This raises the question of whether the equations of motion of dynamics depend on the march of the clocks, which implies the need to establish a parametric invariance principle. There exists a scheme in which all these problems can be solved by means of the introduction of the idea of a dynamical time \cite{Han76,RT08}. The $t$ variable appearing in this scheme is just a non-observable auxiliary parameter. In fact, the theory so constructed is parametric invariant, as happens also in general relativity.
In order to do that we replace the standard action $S=\int [p\,\dot{q}-H(p,q)]{\rm d} t$ by the alternative expression \begin{equation} S=\int\{\Pi (t)\dot{\sigma}_0(t)+p\,(t)\dot{q}(t)-u(t)[ \Pi(t )+H(p\,(t),q(t))]\}{\rm d} t \,,\label{20}\end{equation} (overdot means derivation with respect to the auxiliary parameter $t$), where $\sigma_0(t)$, $\Pi(t)$ are conjugate variables that describe the behavior of the clock, and $\Pi _u$, the momentum conjugate to $u(t)$, weakly vanishes. The corresponding Hamiltonian is $\hat{H}=u[\Pi +H(p,q)]+\lambda \,\Pi _u$, where $\lambda$ is a Lagrange multiplier. The stability of the weak condition $\Pi _u=0$ implies the following first class constraint \begin{equation} \Pi +H(p,q)=0,\label{25}\end{equation} which induces the reparametrization transformations $\delta \sigma _0=\alpha (t)$, $\delta q=\alpha (t) \dot{q}$ and $\delta p=\alpha (t) \dot{p}$, with $\alpha(t)$ an arbitrary function. The transformations induced by $\Pi _u$ then allow one to interpret $u(t)$ as an arbitrary function, so that the Hamiltonian becomes \begin{equation} H^E =u[\Pi +H(p,q)]\,.\label{40}\end{equation} Though this Hamiltonian reduces to a first class constraint, it contains a very realistic dynamical evolution, given by the Hamiltonian equations \begin{equation} \dot{q}=u{\partial H\over \partial p},\quad \dot{p}=-u{\partial H\over \partial q},\quad u=\dot{\sigma}_0,\quad \dot{\Pi}=-\dot{H}=0.\label{50}\end{equation} It follows that \begin{equation} {{\rm d} q\over {\rm d} \sigma_0}={\partial H\over \partial p},\quad {{\rm d} p\over {\rm d} \sigma_0}=-{\partial H\over \partial q},\quad u={{\rm d} \sigma_0 \over {\rm d} t},\quad {{\rm d} H\over {\rm d} \sigma_0}=0\,, \label{60}\end{equation} equations that are full of dynamical significance. The first two are the canonical equations of motion with the dynamical time $\sigma_0$ as the time variable, in such a way that the evolution becomes a correlation between dynamical variables.
The third one can be interpreted as the equation of motion ({\it i.e., } the ``march'') of $\sigma_0$ with respect to the parameter $t$. Notice that the total Hamiltonian $\hat{H}=u[\Pi +H(p,q)]+\lambda \,\Pi _u$ is the sum of two terms, describing, respectively, the physical system and the clock. The equation of motion of the second term, $H_{\rm clock}=u\Pi + \lambda \Pi _u$, is precisely that of a clock, $u={\rm d}\sigma_0 /{\rm d}t$. Since this theory is invariant under reparametrization, we may fix, for instance, the gauge by the condition $\sigma_0 =t$ ({\it i.e., } $u=1$), so that we recover the ordinary canonical formalism with $t$ being the Newtonian time. Notice that the choice of gauge means in fact to choose a clock. The observations are performed with real clocks, which are dynamical systems, each one with a dynamical variable that is a well behaved increasing function of $t$ and can therefore be identified with a dynamical clock-time $\sigma(t)$, which can be used to fix the reparametrization gauge. As long as the observations make use of only the standard dynamical clock $\sigma_0$, the scheme is nothing else than the Hamiltonian equations. This may not occur, however, if a real clock $\sigma(t)$ with a different march is involved. In the latter case, the motion equations are (\ref{60}), but with $\sigma$ and $\sigma _0$ instead of $\sigma_0$ and $t$, respectively, \begin{equation} {{\rm d} q\over {\rm d} \sigma}={\partial H\over \partial p}\,;\qquad {{\rm d} p\over {\rm d} \sigma}=-{\partial H\over \partial q}\,;\qquad u={{\rm d} \sigma \over {\rm d} \sigma _0}\,, \label{70}\end{equation} which describe the physics of a system in operationally realistic terms. This means that they do not refer to any unobservable parametric time but to $\sigma$, which is the time actually observed by a real clock. The novelty here is the presence of the third equation (\ref{70}), which is the dynamic equation of the second clock with respect to the standard one.
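The reparametrization invariance discussed above can be illustrated with a minimal numerical sketch. All concrete choices here are illustrative assumptions, not taken from the text: a harmonic-oscillator Hamiltonian $H=(p^2+q^2)/2$, an arbitrary march $u(t)=1+0.5\sin t$, and a fourth-order Runge--Kutta integrator. Integrating eqs.~(\ref{50}) with two different marches gives the same correlation $q(\sigma_0)$, namely $q(\sigma_0)=\cos\sigma_0$ for the initial data $q=1$, $p=0$:

```python
import math

def evolve(u, t_end=10.0, dt=1e-3):
    """Integrate eq. (50) for H = (p^2 + q^2)/2 with an arbitrary march u(t):
       dq/dt = u(t) p,  dp/dt = -u(t) q,  dsigma0/dt = u(t)   (RK4)."""
    def f(t, q, p, s):
        return u(t) * p, -u(t) * q, u(t)
    q, p, s, t = 1.0, 0.0, 0.0, 0.0
    for _ in range(int(round(t_end / dt))):
        k1 = f(t, q, p, s)
        k2 = f(t + dt/2, q + dt/2*k1[0], p + dt/2*k1[1], s + dt/2*k1[2])
        k3 = f(t + dt/2, q + dt/2*k2[0], p + dt/2*k2[1], s + dt/2*k2[2])
        k4 = f(t + dt,   q + dt*k3[0],   p + dt*k3[1],   s + dt*k3[2])
        q += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        p += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        s += dt/6 * (k1[2] + 2*k2[2] + 2*k3[2] + k4[2])
        t += dt
    return q, p, s   # (q, p, sigma_0) at t_end

# Two clocks: a uniform march and an 'accelerating' one (illustrative choice).
q1, p1, s1 = evolve(lambda t: 1.0)
q2, p2, s2 = evolve(lambda t: 1.0 + 0.5 * math.sin(t))

# In both cases q is the same function of the dynamical time sigma_0.
print(abs(q1 - math.cos(s1)), abs(q2 - math.cos(s2)))
```

Both residuals vanish to integrator accuracy, even though $q(t)$ itself differs between the two runs: the physics depends only on the dynamical time $\sigma_0$, not on the parametrization.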
The important fact for our purposes is that classical dynamics can be formulated as a parametrically invariant theory. \section{The relativistic particle} Before entering this section's subject, let us summarize the arguments of the previous one. Starting from a Hamiltonian theory with $n$ degrees of freedom, we introduced a new one (the dynamical time), in such a way that the motion equations become correlations between dynamical variables only (\ref{60}). Nevertheless the new theory has a first class constraint (reparametrizations) that allows us to fix arbitrarily the value of $\sigma_0$. The only way to do that is to choose a new dynamical system, {\it i.e., } a real clock such as the Earth's motion or any other, with a well behaved dynamical variable $\sigma (t)$ as appears in (\ref{70}). So, in practice, fixing the gauge of the reparametrization symmetry amounts to choosing a clock. In other words, we measure a motion using another one as a standard. It must be underscored that, to completely formulate the equations of a dynamical system, the chosen clock must be specified. The kinematics of the free particle in special relativity follows the same scheme. The parametric invariant action ${\cal S}=mc\int {\rm d} s$ corresponds to the Lagrangian (overdot means derivative with respect to an arbitrary time) \begin{equation} {\cal L}=-mc\sqrt{\dot{x}_0^2-\dot{x}_1^2-\dot{x}_2^2-\dot{x}_3^2}\,. \label{100}\end{equation} It is easy to see that there is a first class primary constraint of the form \begin{equation} p_0^2-p_1^2-p_2^2-p_3^2=m^2c^2,\label{110}\end{equation} that expresses the evident parametric invariance of the action. Due to the existence of this primary constraint not all the time derivatives of the coordinates can be obtained in terms of the momenta.
Choosing now $\dot{x}_0$ as an arbitrary function of $t$, the Hamiltonian becomes \begin{equation} H^E=\dot{x}_0\left(p_0+\sqrt{p_1^2+p_2^2+p_3^2+m^2c^2}\right), \label{111}\end{equation} which reproduces (\ref{40}), with $p_0$ playing the role of $\Pi$ and the square root being $H(p,q)$. It is clear thus that $x_0(t)$ must be interpreted as a dynamical time. Let us take now the motion of a particle in a general metric tensor $g_{\alpha\beta}$, so that ${\rm d} s=\sqrt{g_{\alpha\beta}{\rm d} x^\alpha {\rm d} x^\beta}$. The Lagrangian is \begin{equation} {\cal L}= -mc\sqrt{g_{\alpha\beta}\dot{x}^\alpha \dot {x}^\beta}.\label{120}\end{equation} Note that, since the motion is geodesic, the components of the metric tensor are not dynamical variables but prescribed functions of the coordinates. Following the same procedure as in the previous case, the primary constraint is now $g^{\alpha\beta}p_\alpha p_\beta=m^2c^2$. Thus the Hamiltonian becomes \begin{equation} H=\dot{x}^0[p_0-(N\sqrt{p_ip^i+m^2c^2}+p_iN^i)],\label{140}\end{equation} where $i=1,2,3$, $N$ is the lapse, $N_i$ the shift, and the Latin indices are raised and lowered with the three-dimensional metric. As we see, the situation is the same as in the previous cases, with $x^0$ playing the role of the dynamical time. It must be underlined that all the previous Hamiltonians are, in fact, first class constraints. They generate, however, well defined dynamical evolutions (see (\ref{60})--(\ref{70})). Notice that they contain two terms that describe (i) the dynamical system which is studied and (ii) a particular clock. The case of the particle in a gravitational field $g_{\alpha\beta}$ illustrates the difference between the spatial coordinates $x^i$ and the temporal one $x^0$: the former can be chosen arbitrarily, while the latter needs an additional dynamical system (a real clock) in order to be fixed, so that a (3+1)-spacetime is probably closer to reality than a 4-spacetime.
\section{The Einstein--Hilbert action} General relativity was constructed to be a parametric invariant theory from its very foundation, as happens with any other diff-invariant theory. Its essential difference from the previous examples is that, in the former cases, the dynamical variables are the coordinates, defined in a non-dynamical metric. Conversely, in the latter, the dynamical variables are the components of the metric tensor, while the coordinates are auxiliary objects with no dynamical nature. In accordance with our previous statements, we will take from now on a (3+1)-spacetime. In the ADM scheme \cite{ADM59} the Hamiltonian becomes \begin{equation} H^E= \int {\rm d} ^3x [N{\cal H}(q_{ij},\,\pi^{ij})+N_i {\cal \chi}^i (q_{ij},\,\pi^{ij})],\label{160}\end{equation} where $N$ and $N_i$ are the lapse and the shift, respectively, $q_{ij}$ the 3-metric and $\pi ^{ij}$ its canonically conjugate momentum. The absence of time derivatives of $N$ and $N_i$ determines the presence of primary first class constraints, which implies in turn that $N$ and $N_i$ are arbitrary functions. The secondary first class constraints ${\cal H} =0$ and $\chi _i =0$ fix the subspace in which the motion takes place. If one fixes $N_i=0$, the Hamiltonian becomes $H=\int {\rm d} ^3x\, N{\cal H}$. From this expression one could reproduce the same process followed before in the case of ordinary analytical dynamics. To interpret the dynamics described by a Hamiltonian such as (\ref{160}) it suffices, maintaining $N_i=0$, to consider the meaning of $N$, defined as ${\rm d} \tau/{\rm d} t$ where ${\rm d} \tau$ is the proper time distance between two shells of the foliation. Note that $N$ is an arbitrary dynamical variable which thus plays the same role as $\dot{x}^0$ and $\dot{\sigma}$ in the previous cases: all of them are derivatives with respect to the parametric time. We find, therefore, that the dynamical time coincides with the proper time.
Nevertheless, as befits general relativity, the dynamical time is just a local time. Fixing $N=1$ implies the use of proper time. In the case of the electromagnetism equations in a gravitational field, geometrical contributions to the permittivity and permeability appear which modify the values of $\varepsilon_0$ and $\mu_0$ and thus the speed of light. This change is easily avoided by using the proper time as the dynamical time \cite{Lan75}. In any case, the speed of light is still a fundamental constant if measured with atomic clocks since the periods of atomic oscillations are obviously constant with respect to it. Curiously, atomic clocks measure proper time, notwithstanding the fact that they are quantum devices described by quantum physics, while the proper time is a classical concept. The choice of a physical clock is then a most relevant question. The clock must comply with some obvious conditions. It must be a dynamical system, the solution of its equation of motion $\sigma (t)$ being a well behaved and monotonically increasing function of the parametric time $t$, as for instance the number of cycles of a harmonic oscillator or of the Earth's rotation. Strictly speaking, the fixing of a gauge is a mathematical question, though physically relevant since it is equivalent to the choice of a clock. It must be underscored that the complete description of a dynamical system needs to specify the clock which is used. This is a very important problem, especially for cosmological models. Note also that the previous arguments imply that the parametric invariance is the main characteristic of classical dynamics. {\it I.e.}, this invariance states that the equations of motion are independent of the clock used to observe the trajectory. In other words, it is a way to restrict to the time variable the principle of general covariance of relativistic physics. Let us see what would happen if parametric invariance is not taken into account.
For this purpose and in order to understand general relativity, simplified models have been proposed to obtain valuable information in areas such as quantum gravity or cosmology. The usual strategy is to kill some degrees of freedom. There is a way, however, to achieve the same result but going in the opposite direction, {\it i.e., } adding degrees of freedom. This is the case of the Husain--Kucha$\check{\mbox{r}}$ model \cite{Hus90}, which lacks the Hamiltonian (scalar constraint) in such a way that the number of degrees of freedom per space point grows from 2 to 3. In such a theory, parametric invariance would be absent. The price to be paid then is that the four-dimensional metrics that can be constructed seem to be degenerate. Without discussing this point here, it is important to state that the Husain--Kucha$\check{\mbox{r}}$ model is a particular case of a more general theory (see \cite{Bar98} for details) that includes nondegenerate metrics if a dynamical time variable is present. \section{A principle of coherence} As was pointed out at the end of Section 1, when two clocks are involved the question of their coherence must be considered. There is no problem if the dynamics of both the system and the clock are governed by the same physical theory. This is because any discrepancy between two clocks must be solvable, from the theoretical point of view, within the framework of the theory itself. For instance, the effect of the tides on the Earth's rotation modifies the length of the day, an effect that can be calculated by taking into account the gravitation involved in the Earth--Moon system. This requirement of coherence, which guarantees that the equation of motion of the clock ({\it i.e.}, its march) is given by the same theory as that of the dynamical system, cannot be maintained when the clock and the system obey two different theories. This is the case when atomic clocks are used in classical general relativity.
Lacking a quantum gravity theory, the equation of motion of the atomic clocks $\sigma_2(t)$ cannot be determined {\it a priori} and, consequently, it is not possible to compare it with the equation of motion $\sigma_1(t)$ of a classical clock. The only way to do so relies necessarily on empirical methods. Note that if it is found that the two marches are different, this does not necessarily imply a violation of parametric invariance. The previous considerations certainly clarify the role of the clocks and the meaning of the word ``time''. The two main kinds of clocks used in physics are the astronomical and the atomic ones, which are dynamical systems based on classical and quantum physics, respectively. The solar system taken as a clock gives the ephemeris time while the vibrations of quantum systems measure the atomic one. Current wisdom assumes implicitly that these two types of clocks give the same time but, as explained before, this is not necessarily so. Indeed there is no {\it a priori} reason to postulate that two clocks beat at the same rate if they are based on two different theories, such as gravitation and quantum physics, which are not only different but have so far resisted all efforts at unification. \section{Looking for observational evidence} The arguments of this paper imply that the difference between $t_{\rm astr}$ and $t_{\rm atom}$ is either nil or very small; otherwise, an unexpected new effect would have been detected by now. Let us admit that it is non-nil. Because of the continuous improvement of measurement devices during the last decades, an observational test of the relative acceleration between these two times might already be available, although we could be unaware of this possibility. What's more, the effect could have been observed by now but without being properly interpreted. A provoking case could be a spaceship receding from the Sun.
Since its trajectory is calculated with standard gravity theories that use astronomical time but it is measured with devices based on quantum physics that use atomic time, some anomaly could be observed. In fact the theory gives the ship's trajectory as a certain function parametrized by astronomical time ${\bf r}={\bf r}(t_{\rm astr})$, but the observations see the same three-dimensional trajectory, although parametrized by atomic time and given by a different function ${\bf r}'={\bf r}'(t_{\rm atom})$. The two times are related as ${\bf r}'(t_{\rm atom})={\bf r}(t_{\rm astr})$ (they are examples of the aforementioned clock-times $\sigma_2(t)$ and $\sigma_1(t)$). It is clear that they can be synchronized at a certain initial time so that $t_{\rm astr,\,0}=t_{\rm atom,\,0}=t_0$, but they will start to desynchronize progressively afterwards as \begin{equation} {\rm d} t_{\rm atom}=[1+a(t-t_0)]\,{\rm d} t_{\rm astr},\quad \mbox{ with }\qquad a={{\rm d} ^2t_{\rm atom}\over {\rm d} t_{\rm astr}^2}, \label{180}\end{equation} where the small inverse time $a$ is the relative acceleration of $t_{\rm atom}$ and $t_{\rm astr}$, and $u={\rm d} t_{\rm atom} /{\rm d} t_{\rm astr}=1+a(t-t_0)$ the march of $t_{\rm atom}$ with respect to $t_{\rm astr}$. Note that it is not necessary, at first order, to specify which one of the two times is $t$. Defining the velocities of a mobile with respect to the two times as $v_{\rm atom} ={\rm d} \ell/{\rm d} t_{\rm atom}$ and $v_{\rm astr} ={\rm d} \ell/{\rm d} t_{\rm astr}$, it follows that \begin{equation} v_{\rm atom} = {v_{\rm astr}\over u}, \qquad \qquad {\Delta v\over v}= -a(t-t_0), \label{190}\end{equation} with $\Delta v =v_{\rm atom} -v_{\rm astr}$. As could have been expected, the observational fingerprint of the relative acceleration of the two clock-times would be a discrepancy between the expected and observed speeds of a mobile.
This implies that the speed of light would depend on which clock-time is used: it is a fundamental constant only if measured with atomic clock-time. It must be so since the periods of the atomic oscillations are obviously constant with respect to $t_{\rm atom}$; in fact they are its basic units (see \cite{RT08,Ran04} where the details are explained). Note that (\ref{180})--(\ref{190}) imply that if $a<0$, then $v_{\rm atom} >v_{\rm astr}$, while if $a>0$, then $v_{\rm atom} <v_{\rm astr}$ (assuming $t>t_0$). In the latter case, the ship would seem to lag behind the position predicted by gravitation theories. In fact quite a similar lag has already been observed and even has a name: the Pioneer anomaly. Surprisingly, it remains unexplained more than thirty years after being discovered by Anderson {\it et al.} in 1980 \cite{And98,And02,Tur10b}, in spite of many efforts to account for it. What is important here is that the observational fingerprint of the anomaly has the same form as the second equation (\ref{190}). What Anderson {\it et al.} found is that the frequencies of the two-way signals to and from the Pioneer 10 spaceship included an unexpected Doppler residual which did not correspond to any known motion of the ship. They were able to measure the value $a =(5.84\pm 0.88)\times 10^{-18}\mbox{ s}^{-1}$, although using the inverse time $a_{\rm t}=a/2$, and suggested that $a_{\rm t}$ could be ``like a non-homogeneity of time''\ or a ``clock acceleration'' \cite{And98}. But they did not explain with respect to what this acceleration occurs, nor did they develop any theoretical analysis of the idea, assuming at first that $2a_{\rm t}$ was just the measure of a real Doppler effect. However, it was soon understood that this interpretation is compatible neither with the equivalence principle nor with the cartography of the solar system.
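A back-of-the-envelope check (a sketch; the ten-year span below is an illustrative assumption, not a quantity from the text): multiplying the clock acceleration $a_{\rm t}=a/2$ by $c$ yields the number usually quoted as the Pioneer anomalous acceleration, about $8.7\times 10^{-10}\mbox{ m/s}^2$, and eq.~(\ref{190}) gives the fractional velocity residual accumulated over a mission:

```python
c = 2.998e8     # speed of light, m/s
a = 5.84e-18    # measured clock acceleration, 1/s (Anderson et al.)
a_t = a / 2.0   # convention a_t = a/2 used in the original analysis

# Equivalent anomalous acceleration a_t * c: the form in which the
# Pioneer result is usually quoted (about 8.7e-10 m/s^2).
a_P = a_t * c

# Fractional velocity discrepancy of eq. (190), |Delta v / v| = a (t - t0),
# accumulated over an illustrative ten-year span.
seconds_per_year = 3.156e7
dv_over_v = a * 10.0 * seconds_per_year

print(f"a_t c = {a_P:.2e} m/s^2, |Delta v/v| after 10 yr = {dv_over_v:.1e}")
```

The first number matches the magnitude of the reported anomalous acceleration; the second shows how tiny, yet cumulative, the predicted velocity residual is.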
For several years it was thought that systematics would be the most probable explanation of the anomaly (see the conclusions of \cite{And02}), but no error was found in spite of several different mathematical analyses of the data, including independent ones \cite{Mar02,Lev09,Ols07,Tot09b,Tur06,Tot07}. Currently the so-called thermal model is being investigated but, up to now, it has not provided a solution to the riddle \cite{Tur09,Tur10}. For an account of recent attempts to explain the anomaly, see \cite{Tur06}, Section 2.3, \cite{Tur06a}, Section 2, or \cite{Tur10b}, Section 6. Up to now, and more than thirty years after its discovery, the Pioneer anomaly remains without a generally accepted solution, even though it happens in our backyard, the solar system. Note, however, that this work's arguments, based on the principle of parametric invariance, offer a solution to the riddle. In fact, the authors of this paper proposed a model to explain this intriguing phenomenon \cite{RT08,Ran04}, in which the non-equivalence of $t_{\rm atom}$ and $t_{\rm astr}$ is due to the combination of the fourth Heisenberg relation and the unavoidable coupling between the quantum vacuum and the background gravitational potential $\Psi_{\rm bg}(t)$ that must pervade the universe. The acceleration of the clocks is given in that model as $a=\eta\dot{\Psi}_{\rm bg}(t)$, where $\eta$ is a pure number related to the electromagnetic properties of empty space and the overdot means time derivative. However, as the presence of $\Psi_{\rm bg}(t)$ indicates, that previous model is objectionable since the very idea of a potential is not well defined in general relativity and cosmology, except in some cases as an approximation. On the other hand, the arguments used in this work do not use the concept of potential. They are based instead on the principle of parametric invariance, which is a very fundamental principle of classical dynamics.
Nevertheless, the previous work \cite{RT08,Ran04} has some interesting features. It can be applied to a limited region of space, using the potential of the nearby bodies, not the background one. Moreover, it suggests a mechanism to explain the physical reasons for the time acceleration. In fact, it is clear that gravity surely affects the value of $t_{\rm astr}$, while the quantum vacuum does not, the opposite being true for $t_{\rm atom}$. One example of the application of the previous model is given in reference \cite{RT09}, where it is shown that it is fully compatible with the cartography of the solar system. \section{Conclusions} 1. Building on the principle of parametric invariance, it was shown that the concept of time is much more complex than is usually assumed. It is important, in particular, to distinguish between parametric time and dynamical time and to understand that two stable and accurate but different clocks can be non-equivalent. By this we mean that the times they measure could accelerate with respect to one another if they are based on different physical theories, as happens in the case of atomic and astronomical times, $t_{\rm atom}$ and $t_{\rm astr}$, which are based on quantum electromagnetism and classical gravity, respectively. This could be stated by saying that the principle of parametric invariance has room for non-equivalent clock-times. 2. It is very important to understand that the description of a dynamical system cannot be considered complete without the explicit mention of the chosen physical clock. This is especially true in cosmological problems. 3. Although these arguments might seem rather formal, they are also of practical importance. In particular, this work proposes an explanation of the Pioneer anomaly that is a refinement of a previous one and is fully compatible with the cartography of the solar system \cite{RT08,RT09}.
It is based on the non-equivalence of the atomic time and the astronomical time, which happens to have the same observational fingerprint as the anomaly. The inverse time $a$ that characterizes the observations turns out to be the second derivative of $t_{\rm atom}$ with respect to $t_{\rm astr}$. {\bf Acknowledgements} We are indebted to Profs. A. I. G. de Castro, J. Mart\'in and J. Us\'on for discussions.
\section{Introduction} The stability of spatially extended dynamical systems and the transition towards spatio-temporal chaos is still a wide open problem. Considerable progress has been made recently by models for pattern formation \cite{hohenberg93} and advanced three-wave interaction schemes \cite{chow95,kaup79}. The plasma turbulence phenomenon \cite{tsytovich} is considered to be one possible application of the concept of spatio-temporal chaos. In particular, drift wave turbulence is of special interest for the understanding of anomalous transport in magnetically confined plasmas \cite{horton84}. For the investigation of the transition from a stable plasma to drift wave turbulence, the study of nonlinear model descriptions is indispensable \cite{horton90}. In the present paper, we present a simple dynamical model for spatio-temporal bifurcations towards chaos. It is obtained from the numerical analysis of new experimental observations of the space-time structure of current-driven drift waves in cylindrical geometry. In Section \ref{exp}, drift waves are briefly introduced. The experimental arrangement as well as the diagnostic tools for the measurement of the spatio-temporal structure of regular and turbulent drift waves are described. For the present investigation, three paradigmatic data sets are considered for three different values of the control parameter: one single propagating mode, nonlinear interaction of drift modes, and fully developed turbulence. These different dynamical states result after successive bifurcations caused by an increase of an appropriately chosen control parameter. The powerful tool of biorthogonal decomposition \cite{LimaCom,dudok94} is used to analyse the experimental data. The results of this analysis provide the basis for the model description.
In Section \ref{moduldef}, the complex modulations of two-dimensional spatio-temporal systems are defined and the importance of modulated monochromatic travelling waves (modulated MTW) is underlined. The modulated MTWs are identified in Section \ref{LookModTW} and characterized in Section \ref{secorg}. A speed doubling phenomenon is observed. A low-dimensional model is provided in Section \ref{secmodel}. \section{Experimental investigations}\label{exp} Drift waves are universal instabilities of magnetically confined plasmas. In the present study we consider a cylindrical plasma column with a constant, homogeneous axial magnetic field $\vec{B}=B\vec{z}$ and a radial electron density profile $n_{\rm e}(r)$. A fluid description \cite{nicholson} of the plasma shows that the electrons drift in the azimuthal direction $\vec{\theta}$ with diamagnetic drift velocity \begin{equation} \vec{v}_{\rm dia}=-\frac{k_{\rm B}T_{\rm e}}{eB}L^{-1}\,\vec{\theta} \end{equation} where $L^{-1}=\frac{1}{n(r)}\frac{{\d}n(r)}{{\d}r}$ describes the inverse gradient length of the density profile. From the thermodynamic point of view, the diamagnetic drift provides a source of free energy for the drift instability \cite{krall}. The drift instability propagates as a plasma wave in the azimuthal direction with diamagnetic velocity. The wave number is predominantly in the $\vec{\theta}$-direction, but has a small $\vec{z}$-component to allow the electrons to flow freely along the magnetic field lines. The frequency of the instability is roughly given by $\omega^*=(m/r_0)v_{\rm dia}$, where $r_0$ is the position of the maximum density gradient and $m$ is the azimuthal mode number. The linear analysis of a cylindrical, weakly ionized plasma has revealed that the growth rate of drift instabilities is strongly enhanced by electron-neutral collisions as well as by an additional electron drift along the $\vec{z}$-axis \cite{ellis80}.
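An order-of-magnitude sketch of these formulas with the plasma parameters of this experiment ($T_{\rm e}=1.5\mbox{ eV}$, $B=70\mbox{ mT}$); the gradient length $L\sim 0.1\mbox{ m}$ (taken equal to the FWHM plasma radius), the mode number $m=2$, and the gradient position $r_0=5\mbox{ cm}$ are illustrative assumptions, not values quoted in the text:

```python
import math

e_ch = 1.602e-19     # elementary charge, C
kB_Te = 1.5 * e_ch   # k_B T_e in J for T_e = 1.5 eV
B = 70e-3            # magnetic field, T
L = 0.1              # ASSUMED gradient length ~ plasma radius (FWHM), m

# Diamagnetic drift speed |v_dia| = k_B T_e / (e B L)
v_dia = kB_Te / (e_ch * B * L)

# Drift frequency omega* = (m / r0) v_dia for an ASSUMED m = 2 mode
# localised at r0 = 5 cm (position of the steepest density gradient).
m_mode, r0 = 2, 0.05
f_star = (m_mode / r0) * v_dia / (2 * math.pi)

print(f"v_dia ~ {v_dia:.0f} m/s, f* ~ {f_star:.0f} Hz")
```

With these numbers the drift speed is of order $200\mbox{ m/s}$ and the mode frequency falls in the kHz range, the regime in which drift waves are typically observed.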
Generally, only the onset regime of growing drift instability allows a linear description. In the nonlinear regime, a large variety of new dynamical phenomena become important (for a detailed review see Refs.~\cite{horton84,horton90}). Of particular interest is the transition from a stable plasma to turbulence, because fluctuation-induced transport plays a decisive role in the plasma edge physics of fusion devices \cite{carreras92,wagner93}. The experimental investigations are carried out in the central section of a large magnetized triple plasma device. A schematic drawing is shown in Fig.~\ref{kiwi}. \begin{figure}[htb] \centering \caption{Schematic representation of the large triple plasma device. The plasma is produced in two independent source chambers. Drift waves are observed in the plasma column in the magnetized midsection.} \label{kiwi} \end{figure} A homogeneous, quiet plasma is produced in two independent multidipole-confined discharge chambers \cite{limpaecher73}. For the present studies, argon is used as the filling gas and the degree of ionization is below $0.1\%$. The magnetized central section is separated from the plasma chambers by two electrically isolated mesh grids (transparency $50\%$). The plasma produced in the discharge chambers enters the central section by (i) diffusion and (ii) drift. The diffusion process (i) is based on the axial density gradient between the plasma chamber and the central section. The effect of magnetic mapping by the fringing magnetic field lines at the end of the central section is minimized by a particular magnetic compensation technique \cite{pierre86}. An additional electron drift (ii) is superimposed by biasing the grid positively with respect to the plasma potential in the source chamber. This electron drift is known to destabilize drift waves (see above) and is consequently an appropriate control parameter for the dynamics of the system.
Experimentally, the axial electron drift is varied by the bias of the injection grid $U_{\rm G1}$ (cf.~Fig.~\ref{kiwi}). The second grid is considered as a plasma loss surface and remains at anode potential, i.e., $U_{\rm G2}=0$. In order to be close to the threshold value for the onset of the drift instability, only one plasma chamber is operated and a gradient--driven electron drift is present. The most important discharge parameters and the plasma parameters for the present measurements are summarized in Table \ref{param}. The radial profiles of the electron density, plasma potential, and electron temperature are plotted in Fig.~\ref{profiles}. A more detailed description of the experiment can be found in Ref.~\cite{latten95}. \begin{table}[htb] \begin{center} \caption{List of discharge parameters and plasma parameters for which the experimental investigation is performed.} \label{param} \begin{tabular}{l|l|c} parameter & symbol & value\\ \hline\hline magnetic field & $B$ & $70\,{\rm mT}$ \\ neutral gas pressure (argon) & $p$ & $5.6\cdot 10^{-2}\,{\rm Pa}$ \\ discharge voltage & $U_{\rm d}$ & $60\,{\rm V}$\\ discharge current & $I_{\rm d}$ & $13\,{\rm A}$\\ plasma radius (FWHM) & $R$ & $0.1\,{\rm m}$\\ plasma length & $L$ & $1.8\,{\rm m}$\\ $e$ drift velocity & $v_{\rm d,e}$ & $\leq 0.4\,v_{\rm th,e}$\\ \hline electron density (center) & $n_{\rm e}$ & $5 \cdot 10^{10}\,{\rm cm}^{-3}$\\ electron temperature (center) & $T_{\rm e}$ & $1.5\,{\rm eV}$\\ ion temperature (center) & $T_{\rm i}$ & $0.03\,{\rm eV}$\\ \hline\hline \end{tabular} \end{center} \end{table} \begin{figure}[htb] \centering \caption{Radial profiles of a) the electron density, b) the plasma potential, c) the electron temperature.} \label{profiles} \end{figure} The spatiotemporal structure of regular and turbulent drift waves is investigated by an azimuthally arranged multi-channel Langmuir probe array \cite{latten95}. Each single Langmuir probe provides the temporal fluctuations of the plasma density.
For the present experiment, a circular arrangement of $N=64$ probes at constant radial and axial position is used. The fixed radial position of the probes, imposed by technical constraints, does not allow us to address here the interesting problem of the radial profile of the perturbation. The probe array provides the temporal evolution of the spatial structure of drift waves which propagate mainly in the azimuthal direction. The temporal resolution is given by the maximum sample rate of the acquisition system ($\Delta t=1\,\mu{\rm s}$), and the spatial resolution is given by the azimuthal angle between two adjacent probes ($\Delta x=2\pi/64$). The data is stored as an $N \times M$-matrix $u_{i,j}=n_{e}(i\Delta x,j\Delta t)$ where $N=64$ (space) and $M=2048$ (time). Note that investigations of spatiotemporal phenomena can be done with one probe by a method based on conditional averaging \cite{Iizuka}. In order to study the bifurcation behaviour of drift waves, the data sets of three representative dynamical states are considered: ${\cal U}_1$ a single monochromatic drift mode, ${\cal U}_2$ drift modes with nonlinear interaction, ${\cal U}_3$ turbulent drift waves. The different dynamical states ${\cal U}_k$ are recorded successively by increasing the accessible control parameter $U_{\rm G1}$. In Fig.~\ref{data}, the three data sets ${\cal U}_k$ are shown as greylevel plots. \begin{figure}[htb] \caption{Greylevel plots of the three drift wave data sets. Grey corresponds to the zero level, black is the minimum, and white is the maximum deviation from the equilibrium electron density in the plasma. The vertical axis is labelled with the probe number index, the horizontal axis is the time in multiples of $\Delta t=1\,\mu{\rm s}$.
Control parameter values are (a) $U_{\rm G1}=4.5\,{\rm V}$, (b) $U_{\rm G1}=4.6\,{\rm V}$, (c) $U_{\rm G1}=10\,{\rm V}$.} \label{data} \end{figure} Data set ${\cal U}_1$ [Fig.~\ref{data}(a)] is a single propagating mode, data set ${\cal U}_2$ [Fig.~\ref{data}(b)] shows interacting drift modes, and data set ${\cal U}_3$ [Fig.~\ref{data}(c)] is the state of strong drift wave turbulence. These three data sets are the basis for the following analysis and model description. For this purpose the plasma is considered as a spatiotemporal dynamical system, whose state is described by a function $u_{\epsilon}(x,t)$ where $\epsilon$ represents the complete set of experimental parameters. For the present investigations only one parameter, the grid bias $U_{\rm G1}$, is varied. The grid bias has been chosen as the control parameter because all other parameters then remain, to a good approximation, constant. Each dynamical state given by the data set ${\cal U}_k$ corresponds to a control parameter value $\epsilon_k$. \medskip The analysis tool used here is the biorthogonal decomposition (BOD). This tool provides a way to study the spatial and temporal properties simultaneously. We present here just the most important features of the BOD (for more details see Ref.~\cite{LimaCom}). Suppose that our system is described by a function $u(x,t)$ defined on a spatial range $X$ and a temporal interval $T$. In the experimental situation, $X$ is the domain of the azimuthal angle $x$ in cylindrical coordinates, i.e. $X=[0,2\pi]$. The biorthogonal decomposition provides the smallest linear subspace $\chi(X)$ containing the phase space trajectory $\xi_t$ (described as time $t$ runs) defined by \begin{equation} \forall x \in X,\ \xi_t(x)=u(x,t). \end{equation} The set of all the vectors $\xi_t$ is the trajectory and the evolution of $\xi_t$ is the dynamics of the system.
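In discrete form, with the record stored as the $N \times M$ matrix $u_{i,j}$, the BOD coincides with the singular value decomposition of that matrix: the singular values are the weights, the left singular vectors the spatial modes and the right singular vectors the temporal modes. A minimal numerical sketch on a synthetic single travelling mode (an illustration with assumed parameters, not the code used for the experimental analysis):

```python
import numpy as np

# Synthetic stand-in for a probe-array record: one travelling mode sampled
# on N = 64 azimuthal positions and M = 2048 time steps (Delta t = 1 us).
N, M = 64, 2048
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
t = np.arange(M) * 1e-6
k, omega = 2, 2.0 * np.pi * 1.0e4            # assumed mode number / frequency
u = np.cos(k * x[:, None] - omega * t[None, :])

# BOD = SVD of the space-time matrix: u = sum_n alpha_n * phi_n * psi_n^T,
# with orthonormal columns phi_n (topos) and rows psi_n (chronos).
phi, alpha, psiT = np.linalg.svd(u, full_matrices=False)

# A monochromatic travelling wave carries exactly two (nearly degenerate)
# weights; all remaining singular values vanish to machine precision.
print(np.round(alpha[:4], 3))
```

The near-degeneracy of the two leading weights is exactly the signature used later to identify travelling-wave pairs in the experimental spectra.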
\medskip The biorthogonal decomposition also provides the smallest linear subspace $\chi(T)$ containing the spatial structure $\xi_x$ (described as the spatial position $x$ varies) defined by \begin{equation} \forall t \in T,\ \xi_x(t)=u(x,t). \end{equation} In the present paper, the $L^2$ scalar product is used to define $H(X)$ and $H(T)$, the Hilbert spaces of the functions of $x$ defined on $X$ and of the functions of $t$ defined on $T$, respectively. The BOD is the spectral analysis of the operator $U$, which acts from $H(X)$ into $H(T)$: \begin{equation} (U\phi)(t)=\int_X u(x,t)\phi(x)\,{\d}x, \end{equation} where $U$ defines a one--to--one relation between the vectors of $\chi(X)$ and $\chi(T)$, the orthogonal complements of the kernels of $U$ and its adjoint $U^*$. \medskip If $U$ is compact, as it is here, the BOD decomposes $u(x,t)$ into temporal and spatial orthogonal modes and $u(x,t)$ can be written as follows: \begin{equation} u(x,t)=\sum \alpha_n \psi_n(t)\phi_n(x), \end{equation} with $\alpha_1\geq\alpha_2\geq\dots\geq 0$, and the orthogonality relations $(\phi_n,\phi_m)=\delta_{n,m}$ and $(\psi_n,\psi_m)=\delta_{n,m}$. The $\phi_n$ are called topos, and the $\psi_n$ chronos. \section{Modulated travelling waves}\label{moduldef} In this section we discuss the conditions for a two--dimensional system to be considered a modulated travelling wave. The introduction of modulated travelling waves is of interest as we shall see in the drift wave turbulence, where the decomposition of a high--dimensional system into a set of two--dimensional sub--systems allows us to focus on the dominating properties of the dynamical behaviour. The way in which a monochromatic travelling wave may be deformed in our case of study is analysed in the following theorem, for which we first need to introduce some simple definitions concerning a complex formalism for the corresponding modulations. \medskip Let us now specify what is meant by two--dimensional projections.
Let an $N$--dimensional system be described by its BOD \begin{equation} u(x,t)=\sum_{k=1}^N a_k \psi_k(t)\phi_k(x), \end{equation} defined on the space interval $X=[0,2\pi]$ and on the time interval $T=[t_0,t_1]$. \begin{defn} The {\bf projection} of a system $u(x,t)$ onto two vectors of index $m$ and $n$ is the two--dimensional system $u_{m,n}(x,t)$ described by \begin{equation} u_{m,n}(x,t)=a_m \psi_m(t)\phi_m(x)+a_n \psi_n(t)\phi_n(x) \end{equation} \end{defn} Thus the projection of the dynamics $\xi_t$ (associated with the spatio--temporal system $u(x,t)$) onto the eigenvectors of index $m$ and $n$ is simply the projection of $\xi_t$ onto the two topos $\phi_m$ and $\phi_n$, i.e. \begin{equation} \forall x \in X,\ \xi_t^{m,n}(x)=u_{m,n}(x,t). \end{equation} The projection of the spatial structure (also given by the spatio--temporal system $u(x,t)$) onto the eigenvectors of index $m$ and $n$ is simply the projection of $\xi_x$ onto the two chronos $\psi_m$ and $\psi_n$. \begin{equation} \forall t \in T,\ \xi_x^{m,n}(t)=u_{m,n}(x,t) \end{equation} The schematic drawing in Fig.~\ref{bodsk} illustrates this projection process of the dynamics and the spatial structure. \begin{figure} \centering \caption{The operator $U$ maps the subspace of $\chi(X)$ spanned by $\phi_1$ and $\phi_2$ into the subspace of $\chi(T)$ spanned by $\psi_1$ and $\psi_2$.} \label{bodsk} \end{figure} \begin{defn} A {\bf modulatrix} ${\cal M}$ is a pair of complex valued continuous functions $M(x)$ and $N(t)$. $M$ is called the spatial modulatrix, and $N$ the temporal modulatrix. \end{defn} Note that each complex--valued continuous function is the parametric representation of an arc. Using the vocabulary of complex analysis (see for instance \cite{Henrici}), the arc $\Gamma_X$ is defined as \begin{equation} \Gamma_X\ :\ M=M(x), \ 0\leq x \leq 2\pi.
\end{equation} \begin{defn} A modulatrix ${\cal M}=(M,N)$ is a {\bf continuous phase modulatrix} if $M$ and $N$ can be written as \begin{equation} M(x)=A(x)e^{iF(x)} \label{contx} \end{equation} and \begin{equation} N(t)=B(t)e^{iG(t)}, \label{contt} \end{equation} where $A$, $B$, $F$, and $G$ are continuous functions. It will be called a {\bf regular continuous phase modulatrix} if moreover the increase of the argument of $F$, $D_F=F(2\pi)-F(0)$, and the increase of the argument of $G$, $D_G=G(t_1)-G(t_0)$, are both equal to zero. \end{defn} It is known that a continuous complex--valued function can be written as a product of a continuous modulus function and a continuous argument function if the function is never equal to 0 (see \cite{Henrici}). The modulus function is unique and two different argument functions differ by a constant integral multiple of $2\pi$. The real numbers $D_F=F(2\pi)-F(0)$ and $D_G=G(t_1)-G(t_0)$ are independent of the choice of the argument functions $F$ and $G$ \cite{Henrici}. \begin{defn} The {\bf spatial complexification} of a two--dimensional system $u(x,t)$ whose BOD is \begin{equation} u(x,t)=\alpha_1\psi_1(t)\phi_1(x)+\alpha_2\psi_2(t)\phi_2(x) \label{bod2d} \end{equation} is \begin{equation} Z(x)=\alpha_1\phi_1(x)+i\alpha_2\phi_2(x). \end{equation} The corresponding {\bf temporal complexification} is \begin{equation} Y(t)=\alpha_1\psi_1(t)+i\alpha_2\psi_2(t). \end{equation} \end{defn} Note that the spatial complexification is a representation of the spatial structure in the complex plane, and the temporal complexification is a representation of the dynamics in the complex plane. \medskip If the complexifications are never zero, the argument function can be introduced by the following definitions: \begin{defn} A spatial complexification $Z(x)$ is a {\bf phase continuous complexification} if it can be written as \begin{equation} Z(x)=C(x)e^{iQ(x)}.
\end{equation} \end{defn} \begin{defn} A temporal complexification $Y(t)$ is a {\bf phase continuous complexification} if it can be written as \begin{equation} Y(t)=D(t)e^{iR(t)}. \end{equation} \end{defn} Of particular interest is the case in which $X$ is the circle. \begin{defn} Let $u(x,t)$ be a two--dimensional system defined on a circle $X$. Its spatial complexification $Z(x)$ is assumed to be always non--zero and continuous on $X$. Then $Z(x)$ can be written as $Z(x)=C(x)e^{iQ(x)}$. As $Z(0)=Z(2\pi)$ we have $Q(2\pi)=Q(0)+n2\pi$ where $n$ is an integer. The number $n$ is called the {\bf spatial winding number} of the system. \end{defn} Let us now introduce the notion of modulation of a system: \begin{defn} Let $u(x,t)$ be a two--dimensional system whose BOD is given by (\ref{bod2d}). Let ${\cal M}=(M(x),N(t))$ be a (regular) modulatrix. A spatiotemporal (regular) {\bf modulation} of $u(x,t)$ is the system $u'(x,t)$ defined by \begin{equation} u'(x,t)=\alpha'_1\psi'_1(t)\phi'_1(x)+\alpha'_2\psi'_2(t)\phi'_2(x) \end{equation} with \begin{equation} \alpha'_1\phi'_1(x)+i\alpha'_2\phi'_2(x)=M(x)(\alpha_1\phi_1(x)+ i\alpha_2\phi_2(x)) \end{equation} and \begin{equation} \alpha'_1\psi'_1(t)+i\alpha'_2\psi'_2(t)=N(t) (\alpha_1\psi_1(t)+i\alpha_2\psi_2(t)). \end{equation} \end{defn} Note that in general $\psi'_1$, $\phi'_1$, $\psi'_2$, and $\phi'_2$ are not eigenvectors of $u'$. \medskip Let us investigate the effect of a modulation on the dimension of the system. \begin{thm}\label{thmdim} Let $u(x,t)$ be a system described by two phase continuous complexifications $Z(x)=C(x)e^{iQ(x)}$ and $Y(t)=D(t)e^{iR(t)}$. Let ${\cal M}=(M,N)$ be a continuous phase modulatrix with $M(x)= A(x)e^{iF(x)}$ and $N(t)=B(t)e^{iG(t)}$. The dimension of the modulated system is reduced to one if \begin{equation} \forall x \in X, \ Q(x)+F(x)=z_1 \end{equation} or \begin{equation} \forall t \in T, \ R(t)+G(t)=z_2 \label{condredt} \end{equation} where $z_1$ and $z_2$ are two real constants.
Otherwise the dimension of the modulated system remains equal to two. \end{thm} \begin{pf} Let us first apply the spatial modulation. Let $Z'(x)$ be the spatial complexification of the spatially modulated system $u'(x,t)$: \begin{equation} Z'(x)=M(x)Z(x) \end{equation} The dimension of the spatial structure is reduced only if it is embedded in a segment, i.e., if $Z'$ can be written as $Z'(x)=C'(x)e^{iq_1}$ where $q_1$ is real. In this case, \begin{equation} u'(x,t)=C'(x)\cos q_1 \psi_1(t)+C'(x)\sin q_1 \psi_2(t), \end{equation} and it is clear that the new system is one--dimensional, more precisely \begin{equation} u'(x,t)=\alpha'_1\psi'_1(t)\phi'_1(x) \end{equation} with $\alpha'_1=||C'||$, $\phi'_1=C'/\alpha'_1$, and $\psi'_1= \cos q_1 \psi_1+\sin q_1 \psi_2$. \medskip However, \begin{equation} C'(x)e^{iq_1}=A(x)e^{iF(x)}C(x)e^{iQ(x)} \end{equation} This equation implies $\forall x \in X,\ Q(x)+F(x)=q_1$ (modulo $2\pi$), i.e., $z_1=q_1$. Conversely, if we assume that $M(x)=A(x)e^{i(q_1-Q(x))}$, then the modulated system becomes one--dimensional. We can show in the same way that the reduction of the dimension of the dynamics after a temporal modulation is equivalent to (\ref{condredt}).\qed\end{pf} \begin{thm} A regular spatial modulation does not change the spatial winding number. \end{thm} \begin{pf} Let $Z(x)=C(x)e^{iQ(x)}$ be a complexification with a winding number $n$. We then have $Q(2\pi)=Q(0)+n2\pi$. Let $M(x)=A(x)e^{iF(x)}$ be a regular spatial modulation. By regularity, $F(2\pi)=F(0)$. The spatial complexification of the modulated system is by definition \begin{equation} Z'(x)=C'(x)e^{iQ'(x)} \end{equation} with $C'(x)=C(x)A(x)$ and $Q'(x)=Q(x)+F(x)$. Then using the values of $F(x)$ and $Q(x)$ at $x=0$ and $x=2\pi$ we get $Q'(2\pi)=Q'(0)+n2\pi$.\qed\end{pf} Note that this theorem implies that in order to change the winding number by modulation, a non--regular modulation is required.
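The reduction mechanism of Theorem \ref{thmdim} can be illustrated numerically: starting from a travelling wave with spatial phase $Q(x)=kx$, a spatial modulatrix whose phase cancels $Q(x)$ (here with $q_1=0$) makes the spatial complexification constant, and the rank of the sampled system drops from two to one. A sketch under these assumptions:

```python
import numpy as np

N, M = 64, 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
t = np.linspace(0.0, 2.0 * np.pi, M, endpoint=False)
k = 2

def rank2(u, tol=1e-8):
    """Numerical rank from the singular values of the space-time matrix."""
    s = np.linalg.svd(u, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

# Monochromatic travelling wave: Z(x) = e^{ikx}, chronos cos(t) and sin(t)
Z = np.exp(1j * k * x)
u = np.real(Z)[:, None] * np.cos(t)[None, :] + np.imag(Z)[:, None] * np.sin(t)[None, :]

# Spatial modulatrix M(x) = e^{-iQ(x)} cancels the phase Q(x) = kx
Zmod = np.exp(-1j * k * x) * Z        # constant spatial complexification
umod = np.real(Zmod)[:, None] * np.cos(t)[None, :] + np.imag(Zmod)[:, None] * np.sin(t)[None, :]

print(rank2(u), rank2(umod))
```

The unmodulated wave has rank two; after the phase-cancelling modulation the spatial complexification is embedded in a segment and the rank is one, as the theorem predicts.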
\begin{thm} Let $u(x,t)$ be a two--dimensional system defined on the circle $X$ whose spatial complexification $Z(x)$ and temporal complexification $Y(t)$ never vanish. Then there exists a unique regular modulatrix and a pair consisting of an integer $k$ and a real $\omega$ such that $u(x,t)$ is the modulation of a monochromatic travelling wave $u_0(x,t)=\cos(kx)\cos(\omega t)+\sin(kx)\sin(\omega t)$.\label{theomod} \end{thm} \begin{pf} Let us write the spatial complexification $Z(x)$ as \begin{equation} Z(x)=C(x) e^{iQ(x)} \end{equation} and the temporal complexification as \begin{equation} Y(t)=D(t)e^{iR(t)} \end{equation} We set \begin{equation} k=\frac{1}{2\pi}D_Q \end{equation} and \begin{equation} \omega=\frac{1}{t_1-t_0}D_R, \end{equation} where $D_Q=Q(2\pi)-Q(0)$ and $D_R=R(t_1)-R(t_0)$ are the increases of the arguments of $Z$ and $Y$ respectively. The numbers $k$ and $\omega$ are unique, because $D_Q$ and $D_R$ are unique. Because $X$ is a circle, the winding number $k$ is an integer. \medskip Let us define \begin{equation} M(x)=C(x)e^{i[Q(x)-kx]} \end{equation} and \begin{equation} N(t)=D(t)e^{i[R(t)-\omega t]} \end{equation} The modulatrix ${\cal M}=(M,N)$ is regular and $u(x,t)$ is the modulation of a system $u_0(x,t)$ by ${\cal M}$. The spatial complexification and the temporal complexification of $u_0(x,t)$ are respectively $Z_0(x)=e^{ikx}$ and $Y_0(t)=e^{i\omega t}$. The system $u_0(x,t)$ is thus of the form \begin{equation} u_0(x,t)=\cos(kx)\cos(\omega t)+\sin(kx)\sin(\omega t). \end{equation} The argument functions $Q(x)$ and $R(t)$ are defined modulo $2\pi$, and the modulus functions $C(x)$ and $D(t)$ are unique. Thus the functions $M$ and $N$ are unique. \qed\end{pf} This theorem provides a way to consider a two--dimensional experimental system as a spatio--temporal modulation of a monochromatic travelling wave.
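Theorem \ref{theomod} suggests a direct numerical recipe: unwrap the argument of the sampled complexifications and read off $k=D_Q/2\pi$ and $\omega=D_R/(t_1-t_0)$. A sketch on synthetic complexifications (the smooth amplitude and phase modulations below are arbitrary examples, not experimental ones):

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 256)    # closed spatial loop, endpoint included
t = np.linspace(0.0, 1.0, 512)
k_true, omega_true = 3, 2.0 * np.pi * 5.0

# Assumed smooth modulations of e^{ikx} and e^{i omega t}
Z = (1.0 + 0.3 * np.cos(x)) * np.exp(1j * (k_true * x + 0.2 * np.sin(x)))
Y = (1.0 + 0.2 * np.sin(2.0 * np.pi * t)) * np.exp(1j * (omega_true * t + 0.1 * np.cos(2.0 * np.pi * t)))

def arg_increase(z):
    """Total increase D of the argument along the sampled arc (unwrapped phase)."""
    phase = np.unwrap(np.angle(z))
    return phase[-1] - phase[0]

k_est = round(arg_increase(Z) / (2.0 * np.pi))
omega_est = arg_increase(Y) / (t[-1] - t[0])
print(k_est, round(omega_est / (2.0 * np.pi), 6))
```

The modulations drop out of the argument increases (they are regular), so the winding number and the frequency of the underlying monochromatic wave are recovered exactly.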
In particular, a continuous deformation of a wave is described by a modulatrix ${\cal M_\epsilon}=(M_\epsilon,N_\epsilon)$ that depends on the control parameter $\epsilon$. In the case where $X$ is the circle, we can define the winding number, and the jump in the winding number that is observed for a certain value of the control parameter will correspond to a bifurcation. Note the analogy of the complexifications with the loops in the order parameter space in the study of topological defects \cite{Mermin}. \medskip Our complex modulation is a generalisation of the real modulation introduced in \cite{LimaMod}. The modulations introduced in \cite{LimaMod} are not appropriate for the description of the deformations of a function $\phi_1(x)$ (resp. $\psi_1(t)$) that has a zero which shifts as $\epsilon$ varies. Instead, we can describe the deformation of such a function provided that we find a complementary function $\phi_2(x)$ (resp. $\psi_2(t)$) that never vanishes for the value of $x$ (resp. $t$) where $\phi_1(x)$ (resp. $\psi_1(t)$) has a zero. Note that in the absence of resonances, as in the case of a real modulatrix ${\cal M}=(M,N)$, the new eigenvectors are the modulation of the non-perturbed system, as in Ref.~\cite{LimaMod}. \section{Identifying the modulated monochromatic travelling waves} \label{LookModTW} The aim of this section is to simplify the description of the dynamical behaviour of the present system by the identification of the two--dimensional structures existing in the system for different values of the control parameter $\epsilon$. In order to identify two--dimensional subsystems whose spatial and temporal complexifications never vanish, the idea that such a system is a deformed monochromatic travelling wave (MTW) is used. More precisely, a MTW has two basic properties: (i) the degeneracy of the weights and (ii) the Fourier transforms of the chronos and topos are delta functions.
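Criterion (ii) can be tested directly on sampled eigenvectors: for an unmodulated MTW the power spectrum of a chronos (or topos) is concentrated in a single bin. A small sketch with an assumed chronos (a pure cosine with an integer number of cycles):

```python
import numpy as np

M = 1024
j = np.arange(M)
psi = np.cos(2.0 * np.pi * 20.0 * j / M)   # assumed chronos of an unmodulated MTW

spec = np.abs(np.fft.rfft(psi)) ** 2
peak = int(np.argmax(spec))
# For a delta-like spectrum the dominant bin carries essentially all the energy.
concentration = spec[peak] / spec.sum()
print(peak, round(concentration, 3))
```

A modulated chronos would instead spread energy into side bands, which is why broad spectra below signal strong modulation rather than a new independent wave.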
\medskip We show in Fig.~\ref{poitot} the plot of the weights on a logarithmic scale. In the analysis of the plot of weights, we have to look for pairs of eigenvalues that are degenerate. The degeneracy of pairs is obvious only for the first two eigenvalues of each data set. However, if the ratio $a_{n+1}/a_n$ is plotted versus $n$, the degeneracy of all weights can be quantified (see Fig.~\ref{poitot}). \def\textplotpoic{Plot of the weights for the data in y-log scale (left) and plot of the ratios $a_{n+1}/a_n$ with respect to $n$ (right). (a) data ${\cal U}_1$ (b) data ${\cal U}_2$ (c) data ${\cal U}_3$. They exhibit the degeneracy of the weights which may be associated with a spatio-temporal symmetry.} \begin{figure} \caption{\textplotpoic} \label{poitot} \end{figure} We are thus looking for values of $n$ such that $a_{n+1}/a_n$ is close to unity. This plot provides a way to couple eigenvalues: two eigenvalues $a_n$ and $a_{n+1}$ are considered to be coupled if the ratio $a_{n+1}/a_n$ forms a local maximum in the plot of the ratios. Following the previous rule, we obtain the pairs listed in Table~\ref{tabpairpoi} that we call structures. \begin{table}[htb] \begin{center} \caption{Pairs obtained after the analysis of the weights} \label{tabpairpoi} \begin{tabular}{l|c|c|c|c|c|c|c|c} data set \\ \hline ${\cal U}_1$ &1--2&3--4& -- &6--7&9--10&12--13&14--15&16--17\\ ${\cal U}_2$ &1--2&3--4& -- &7--8&9--10&12--13&14--15&16--17\\ ${\cal U}_3$ &1--2& -- &6--7&8--9&10--11&12--13&14--15&16--17\\ \hline \end{tabular} \end{center} \end{table} Notice that considering only the weight distribution is sometimes not sufficient to decide the pairing. Such is, for instance, the case in data ${\cal U}_3$ for the $a_n$, $n=3,4,5$. A careful look at the corresponding chronos and topos, especially at their Fourier transforms, gives in this case an unambiguous pairing.
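The pairing rule just described can be phrased algorithmically: a pair $(n,n+1)$ is declared whenever the ratio $a_{n+1}/a_n$ is a local maximum (close to unity) in the ratio plot. A sketch on an invented weight sequence (the values below are illustrative, not the measured weights):

```python
import numpy as np

# Assumed weight sequence: exponentially decreasing with near-degenerate pairs.
a = np.array([10.0, 9.8, 4.0, 3.9, 2.0, 1.1, 1.05, 0.5, 0.49, 0.2])

ratios = a[1:] / a[:-1]          # a_{n+1}/a_n, close to 1 for a degenerate pair

def paired_indices(ratios):
    """Indices n (1-based) where a_{n+1}/a_n is a local maximum: pair (n, n+1)."""
    pairs = []
    for i in range(len(ratios)):
        left = ratios[i - 1] if i > 0 else -np.inf
        right = ratios[i + 1] if i < len(ratios) - 1 else -np.inf
        if ratios[i] > left and ratios[i] > right:
            pairs.append((i + 1, i + 2))
    return pairs

print(paired_indices(ratios))   # [(1, 2), (3, 4), (6, 7), (8, 9)]
```

As in the experimental spectra, isolated weights (here $a_5$ and $a_{10}$) remain unpaired and require the Fourier criterion for a final decision.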
The next step in the identification of modulated MTW is to consider pairs of functions that have the same spatial (temporal) frequencies. An appropriate way to detect such modulated sine and cosine functions is to apply a Fourier transform to the eigenfunctions. The plots of the square modulus of the Fourier transforms of the chronos and topos are represented in Fig.~\ref{bodfftc} and Fig.~\ref{bodfftt}. \begin{figure} \centering \caption{Modulus squared Fourier transforms of the chronos. The three columns correspond to the three data sets. The $x$--axis is the frequency. The frequency unit is ${\rm d}\omega=(2048\cdot 10^{-6}\,{\rm s})^{-1}$. The $y$--axis is labelled by the eigenvector index. } \label{bodfftc} \end{figure} \begin{figure} \centering \caption{Modulus squared Fourier transforms of the topos. The three columns correspond to the three data sets. The $x$--axis is the spatial frequency. The frequency unit is the spatial loop. The $y$--axis is labelled by the eigenvector index.} \label{bodfftt} \end{figure} A single--peaked spectrum is found for the first two chronos and topos of each data set. Pairs of eigenvectors are found by inspection: two Fourier spectra with almost or exactly the same structure indicate the presence of a pair. Using the conditions on the Fourier transform, Table~\ref{tabpairpoi} can be improved. Table~\ref{tabpairtf} gives the new list of pairs. The pairs are labelled $w_k$, where $k$ is the index of the wave associated with the pair of eigenvectors. \begin{table}[htb] \begin{center} \caption{Pairs obtained after the analysis of the Fourier spectrum of the eigenvectors.
The modulated MTW associated with those pairs are labelled $w_k$, where $k$ is the index of the wave.} \label{tabpairtf} \begin{tabular}{l|c|c|c|c|c|c|c|c} data set &$w_1$&$w_2$&$w_3$&$w_4$&$w_5$&$w_6$&$w_7$&$w_8$\\ \hline ${\cal U}_1$ &1--2&3--4&5--6&7--8&9--10&12--13&14--15&16--17\\ ${\cal U}_2$ &1--2&3--4&5--6&7--8&9--10&12--13&14--15&16--17\\ ${\cal U}_3$ &1--2&3--4&5--6&7--8&9--10&12--13&14--15&16--17\\ \hline \end{tabular} \end{center} \end{table} Note that eigenvector 11 for the sets ${\cal U}_1$ and ${\cal U}_2$ and eigenvector 7 for the set ${\cal U}_3$ do not belong to a pair. Note also that the degeneracy of the weights $a_6$ and $a_7$ does not correspond to a modulated travelling wave. This is because eigenfunction 7, which is a global oscillation of the plasma without any propagation, has an energy close to that of the third travelling wave. \section{\label{secorg}Organization of the different modulated MTWs} In the previous section, two--dimensional subsystems which are modulated travelling waves were identified. In this section, these waves are studied with respect to both the eigenvector indices and the control parameter. \medskip First the spatial and temporal frequencies $k$ and $\omega$ are determined. Actually, Theorem \ref{theomod} provides an explicit way to compute $k$ and $\omega$. However, this direct approach is cumbersome in practice. Therefore the winding number $k$ is obtained in a different manner: the first eight spatial structures associated with the first eight modulated travelling waves are shown in Fig.~\ref{strucx}. The winding number becomes easier to inspect in the plot of the spatial structures (as in the case of the Fourier transform) as the control parameter becomes larger. This shows that the spatial modulation becomes more regular as $\epsilon$ increases. \begin{figure} \centering \caption{Spatial structures associated with the first eight modulated travelling waves.
a) Data set ${\cal U}_1$, b) Data set ${\cal U}_2$, c) Data set ${\cal U}_3$. Each square subfigure is labelled in the corner by the index of the eigenvectors spanning the structure.} \label{strucx} \end{figure} Fig.~\ref{strucx} shows that the winding number for the data set ${\cal U}_1$ is well defined only for the first wave $w_1$ (the labelling is defined in Table~\ref{tabpairtf}). For the data set ${\cal U}_2$, the spatial winding number is defined for the first three waves, and for the data set ${\cal U}_3$ it is well defined for all eight waves studied. The winding numbers found are shown in Table~\ref{tabkomeg}. The temporal frequencies $\omega$ associated with these waves are determined from the Fourier spectra (Fig.~\ref{bodfftc}). A broad Fourier spectrum corresponds to the presence of zeros in the temporal complexifications. The frequency $\omega$ is well defined from the Fourier spectra only for the first wave of the data sets ${\cal U}_1$ and ${\cal U}_2$, and for the first three waves of the data set ${\cal U}_3$. \begin{table}[htb] \begin{center} \caption{Spatial winding numbers $k$ and temporal frequencies $\omega$ associated with the first eight waves. Wave number $i$ is labelled $w_i$. } \label{tabkomeg} \begin{tabular}{l|c|c|c|c|c|c|c|c|} &$w_1$&$w_2$&$w_3$&$w_4$&$w_5$&$w_6$&$w_7$&$w_8$\\ \hline data set ${\cal U}_1$ &$k=2$ & -- & --& -- & --& -- & -- & --\\ &$\omega=20$& -- & --& -- & --& -- & -- & --\\ \hline data set ${\cal U}_2$ &$k=2$ &$k=3$ & $k=3$ & -- & --& -- & -- & --\\ &$\omega=22$& -- & --& -- & --& -- & -- & --\\ \hline data set ${\cal U}_3$ &$k=2$ &$k=1$ & $k=3$&$k=4$&$k=5$ & $k=6$&$k=7$ & $k=8$ \\ &$\omega=44$&$\omega=25$&$\omega=63$& -- & --& -- & -- & --\\ \hline \end{tabular} \end{center} \end{table} The phase speed of a modulated wave is defined as the ratio of the temporal frequency $\omega$ to the spatial frequency $k$ of the associated unmodulated monochromatic wave.
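With the values of Table \ref{tabkomeg}, the phase speeds $\omega/k$ can be read off directly (a trivial check in code; frequencies are in the unit of the Fourier analysis above):

```python
# (k, omega) pairs for the waves whose frequency is well defined in the table
u1 = {"w1": (2, 20)}
u2 = {"w1": (2, 22)}
u3 = {"w1": (2, 44), "w2": (1, 25), "w3": (3, 63)}

def speed(kw):
    k, omega = kw
    return omega / k

print(speed(u2["w1"]))                                 # speed before the bifurcation
print([round(speed(v), 1) for v in u3.values()])       # speeds after the bifurcation
```

The three well-defined speeds in ${\cal U}_3$ (22, 25 and 21) cluster around twice the value found in ${\cal U}_2$ (11), which is the speed doubling discussed next.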
Considering the first waves given in Table~\ref{tabkomeg}, it is noted that a speed doubling occurs when the control parameter $\epsilon$ increases from the value that belongs to the data set ${\cal U}_2$ to that of the data set ${\cal U}_3$. Indeed, the phase speed associated with the waves $w_1$, $w_2$, and $w_3$ in the data set ${\cal U}_3$ is very close to twice the speed of the first travelling wave of the data ${\cal U}_2$. In the next section, a simple model for this spatio--temporal bifurcation is discussed. Another way to study the effect of the spatial modulation is to plot a graylevel plot of the two--dimensional restriction $u_{m,n}(x,t)$ associated with the modulated wave. The graylevel plots for the first eight waves of each data set are presented in Fig.~\ref{2Dgraytot}. \begin{figure} \centering \caption{Graylevel plots of the two--dimensional structures. Each column is associated with a data set. For each column, the letters a) \dots h) indicate the wave $w_1$,\dots,$w_8$ that is plotted. For each figure, the $x$--axis is time and the $y$--axis space.} \label{2Dgraytot} \end{figure} In these plots, a spatial phase modulation implies a distortion of the wave front that depends only on $x$, while a spatial amplitude modulation corresponds to global crashes in the amplitude of the waves, with the same intensity at fixed positions. In the graylevel plot, the phase modulation is easier to observe than in the probe structure, where a phase modulation implies only a non-uniform spacing of the vectors $\xi_x$. \medskip The graylevel plots show that in the data sets ${\cal U}_1$ and ${\cal U}_2$, the first wave has a phase defect that locally makes the wave front more vertical. The other waves are strongly phase modulated with strong tearing of the wave fronts. However, the wave fronts of the waves in the data set ${\cal U}_2$ are more regular than in the data set ${\cal U}_1$.
In the data set ${\cal U}_3$, all the strong phase defects have disappeared or been reduced. \medskip In the gray-level plots the strong amplitude modulations can be observed as well. The global crashes of the amplitude of the wave correspond to an amplitude modulation close to zero. This amplitude modulation is itself chaotic. The data set ${\cal U}_3$ is thus a state where chaos is essentially present in time. The spatial structures are, on the contrary, fairly regular. The temporal chaos is then due to a chaotic modulation acting on a non--chaotic spatio--temporal structure, i.e. a travelling wave. Note that, furthermore, in this case we are faced with homogeneous turbulence \cite{Lumley} as is seen in Fig.~\ref{bodfftc}. Also note that the chaotic modulation of monochromatic waves has a close relationship with the three--wave interaction model of the drift wave instability \cite{horton90}. However, one may ask whether the restriction to three waves is pertinent here. Indeed, our study reveals a set of at least eight waves relevant to the data set ${\cal U}_3$. \medskip It is easier to study the spatial amplitude modulation in the probe structure plots [Fig.~\ref{strucx}]. The modulation of the first wave increases with $\epsilon$. In contrast, the next waves become more regular as $\epsilon$ increases. The spatial structure of the data set ${\cal U}_3$ extends over a large range of eigenvectors ($m=3,\dots,19$). \medskip To study these large scale phenomena, the plot of the weights is considered [Fig.~\ref{poitot}]. Neglecting the pairwise degeneracy, the weights $a_n$ decrease exponentially with $n$ in well defined regions. In particular, in the distribution of the weights for the data set ${\cal U}_3$, the boundaries of such a region are given by the indices $n_1=7$ and $n_2=20$. These boundaries correspond to a pronounced broadening of the Fourier spectrum in the low--frequency regime [cf. Fig.~\ref{bodfftc}].
In this region, the spatial structures have a well defined winding number as shown in Fig.~\ref{strucx}. Note that the spectrum of the chronos is bounded by a frequency near 60 in the data sets ${\cal U}_1$ and ${\cal U}_2$. In the data set ${\cal U}_3$ this bound has been shifted to 250. Even in the eigenvectors with a broad spectrum, i.e.\ those with an index greater than 23, frequencies higher than 250 are nonexistent. However, the spatial frequencies are not bounded and increase with the index of the eigenvector. \begin{rem} Even if the radial dependency of the electric field is not directly accessible to the present diagnostic, one may wonder whether such an effect is present, giving rise to a collective rotation of the whole plasma column and therefore to a shift in the (Fourier) dispersion relation for the waves. This effect seems present in the frequencies reported in Table 4 for the data set ${\cal U}_3$. However, since we restrict ourselves to each two--dimensional structure, this fact would have no effect on our analysis. It simply changes the position of $\chi(T)$ in the space of the functions of time $H(X)$ by a global rotation. We thank one of the referees for having pointed out this question to us. \end{rem} \section{\label{secmodel}Model for the speed doubling} In the previous section, it was noted that a speed doubling occurs when the control parameter increases. The data set ${\cal U}_2$ corresponds to a state before this bifurcation and the data set ${\cal U}_3$ to a state after the bifurcation. In this section we give a simple model for this speed doubling. Our model is built with two two--dimensional structures. \medskip We first describe one of the two structures. Fig.~\ref{modelsketch} shows how a spatial modulation can change the winding number and the speed of the eigenstructure.
\begin{figure} \centering \caption{A spatial modulation can imply a topological change in the spatial structure, which corresponds to a change of the speed of the modulated travelling wave. The spatial structures, the trajectories, and a sketch of the crest of the wave are plotted for four values of the control parameter.} \label{modelsketch} \end{figure} The phase speed (cf.\ Section~\ref{secorg}) is defined as a ratio. In order to define the states to model uniquely, the speed alone is not sufficient. The model structure describes an evolution from a state described by the function $u_{-1}(x,t)=\cos(2x)\cos(\omega t)+\sin(2x)\sin(\omega t)$ to a state described by the function $u_{1}(x,t)=\cos(x)\cos(\omega t)+\sin(x)\sin(\omega t)$. \medskip Let us consider a temporal complexification, independent of $\epsilon$, defined by \begin{equation} Y_0(t)=e^{i\omega t}, \end{equation} and the corresponding spatial complexification \begin{equation} Z_{\epsilon}(x)=(1-\epsilon)e^{i2x}+(1+\epsilon)e^{ix}. \end{equation} These spatial and temporal complexifications correspond for $\epsilon=-1$ to the complexification of $u_{-1}(x,t)$ and for $\epsilon=1$ to the complexification of $u_{1}(x,t)$. Fig.~\ref{model} shows, for four values of the control parameter $\epsilon$, the spatial complexification $Z_{\epsilon}(x)$ in the right column and the graylevel representation of the two--dimensional system $u_{\epsilon}(x,t)$. \begin{figure} \centering \caption{Model for the speed doubling. The graylevel plot of the modulated MTW is shown for four values of $\epsilon$. The corresponding spatial structures are shown in the right column.} \label{model} \end{figure} The spatial complexification never vanishes except for the value $\epsilon=0$, for which it vanishes at $x=\pi$. For negative $\epsilon$ the winding number is 2, and for positive $\epsilon$ it is 1. However, for $\epsilon=0$, the winding number is not defined.
A bifurcation in the winding number thus occurs. \medskip A regular spatial modulation is then defined (i) before the bifurcation by dividing the spatial complexification $Z_{\epsilon}(x)$ by $Z_{-1}(x)$, (ii) after the bifurcation by dividing the spatial complexification $Z_{\epsilon}(x)$ by $Z_{1}(x)$. The spatial modulation before the bifurcation, $M^b_{\epsilon}(x)$, is defined by \begin{equation} M^b_{\epsilon}(x)=(1-\epsilon)+(1+\epsilon)e^{-ix}. \end{equation} $M^b_{\epsilon}(x)$ is a regular modulation because $(1-\epsilon)>(1+\epsilon)$ when $\epsilon<0$. The spatial modulation after the bifurcation, $M^a_{\epsilon}(x)$, is defined by \begin{equation} M^a_{\epsilon}(x)=(1+\epsilon)+(1-\epsilon)e^{ix}. \end{equation} $M^a_{\epsilon}(x)$ is a regular modulation after the bifurcation because $(1+\epsilon)>(1-\epsilon)$ when $\epsilon>0$. The spatial defect which occurs at $x=\pi$ for $\epsilon=0$ permits the wave front to change its shape as shown in Fig.~\ref{model}. The two (unnormalized) topos are the real part and the imaginary part of the spatial complexification \begin{equation} \phi_1(x)=(1-\epsilon)\cos(2x)+(1+\epsilon)\cos(x), \end{equation} \begin{equation} \phi_2(x)=(1-\epsilon)\sin(2x)+(1+\epsilon)\sin(x). \end{equation} The chronos are left unchanged \begin{equation} \psi_1(t)=\cos(\omega t), \end{equation} \begin{equation} \psi_2(t)=\sin(\omega t). \end{equation} Both the spatial behaviour (see Fig.~\ref{strucx}, Fig.~\ref{2Dgraytot})---i.e.\ the bifurcation of the winding number from the value 2 to the value 1---and the temporal behaviour (see Fig.~\ref{bodfftc}, Fig.~\ref{2Dgraytot})---i.e.\ a frequency $\omega$ roughly independent of the control parameter $\epsilon$---show that this structure corresponds to the modulated MTW $w_1$ for the data sets ${\cal U}_1$ and ${\cal U}_2$ and $w_2$ for the data set ${\cal U}_3$. (The notations $w_k$ were introduced in Table~\ref{tabpairtf}.)
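The winding-number jump of the model can be checked numerically: the closed curve traced by $Z_\epsilon(x)$ for $x\in[0,2\pi)$ winds twice around the origin for $\epsilon<0$ and once for $\epsilon>0$. A minimal numerical sketch (the sample values $\epsilon=\mp 0.5$ and the discretisation size are arbitrary choices, not taken from the experiment):

```python
import numpy as np

def winding_number(eps, n=20000):
    # discretise Z_eps(x) = (1 - eps) e^{2ix} + (1 + eps) e^{ix} on [0, 2*pi)
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = (1 - eps) * np.exp(2j * x) + (1 + eps) * np.exp(1j * x)
    # accumulate the principal-value phase increments around the closed curve
    dphase = np.angle(np.roll(z, -1) / z)
    return dphase.sum() / (2.0 * np.pi)

print(round(winding_number(-0.5)))  # 2, before the bifurcation
print(round(winding_number(+0.5)))  # 1, after the bifurcation
```

Because $Z_\epsilon$ never vanishes for $\epsilon\neq 0$, the accumulated phase is a well defined integer multiple of $2\pi$, and the jump from 2 to 1 happens exactly when the zero crosses the curve at $\epsilon=0$.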
\medskip In the same way we model the structure corresponding to the second pair in ${\cal U}_1$ and ${\cal U}_2$ and the first in ${\cal U}_3$. Notice that in this case the bifurcation is due to the simultaneous deformation of the temporal modulation and of the corresponding spatial modulation. \medskip Finally, the model is built by gluing the two structures weighted by the two corresponding eigenvalues (the relative energies of the two structures). According to the general theory of bifurcations described by the BOD \cite{LimaSym}, these energies cross at the bifurcation, and this is the reason why the energies of the two structures are interchanged when passing from ${\cal U}_2$ to ${\cal U}_3$.\section{Summary and Conclusions} The BOD of the experimental data of the drift--waves experiment showed the importance of the two--dimensional sub--systems. It has been shown that the two--dimensional structures are complex--modulated monochromatic travelling waves (modulated MTW). A spatial regularity counterpart of a temporal chaos has been discovered in the most turbulent data set. The most important feature of the evolution of the system with the control parameter $\epsilon$, i.e.\ the speed doubling, has been modeled as well. The model consists of a pair of simple two--dimensional structures that undergo an exchange of energy as the value of the control parameter varies. \medskip A study of new experimental data will be carried out in the future in order to improve the model and better characterize the bifurcation. The regular behaviour also needs to be better understood. Furthermore, the connection between the modulated MTW and the wave--interaction models for the drift--waves will be investigated in the future. {\bf Acknowledgments} One of us (A.M.) kindly acknowledges Prof.~A. Piel of the Institut f\"ur Experimentalphysik at Kiel for his warm hospitality during part of this work. A.M. also warmly thanks Prof.~R. Lima. This work would not have been possible without him.
We also thank Dr. T. Dudok de Wit for fruitful discussions. Thanks also to Dr. P. N. Watts who helped to improve the manuscript.
\section{Introduction} One of the most important quantum physical quantities is probably the zero-point energy. It is needed, for instance, when we want to study back-reaction, i.e., the influence the matter fields moving in a curved back-ground exert on the back-ground geometry itself. This would be done by solving the Einstein equations with the expectation value of the energy-momentum tensor as source: \begin{displaymath} G_{\mu\nu} = R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R = \langle T_{\mu\nu}\rangle, \end{displaymath} where we have chosen units such that $\kappa=\hbar=c =1$. It is also important for the study of renormalisation properties of a quantum field theory in a curved space-time.\\ In order to find this quantity, we evaluate the heat-kernel corresponding to the equations of motion for a scalar field. This calculation is carried out at finite temperature. The integral of this heat-kernel with respect to some fictitious fifth coordinate, $\sigma$, will give the density of Helmholtz' free energy (its integral over the entire space-time will give the zeta-function, which is essentially just Helmholtz' free energy). The resulting expression is regularised and renormalised in the subsequent sections.\\ We do this by using the spherical symmetry of the spacetime in order to collect all the unknown bits of the heat kernel into a function $g_{nl}(r,r'; \sigma)$ depending only upon the radial coordinates and $\sigma$. A recursive relation for an asymptotic expansion of this unknown function can be found and solved, thereby allowing us to find the heat kernel.\\ From the free energy one can derive expressions for the entropy and the pressure by using standard thermodynamic relations. We find these and comment on their meaning. We also show that, to lowest order in the mass generating the Schwarzschild geometry, the free energy is that of an infinite family of particles moving in one dimension with an $r$-dependent mass.
\section{Set-Up} We consider a minimally coupled scalar field $\phi$ moving in a Schwarzschild back-ground, hence the action is \begin{equation} S=\frac{1}{2}\int(\partial_\mu \phi\partial_\nu\phi g^{\mu\nu} -\mu^2 \phi^2)\sqrt{|g|}d^4x, \end{equation} where the metric is given by the standard expression \begin{displaymath} ds^2=\left(1-\frac{2M}{r}\right)dt^2-\left(1-\frac{2M}{r}\right)^{-1} dr^2 - r^2d\theta^2-r^2\sin^2\theta d\phi^2, \end{displaymath} where $M$ is the mass of the (classical) massive object generating the Schwarzschild geometry, a black hole or a star, say. We will put the mass, $\mu$, of the scalar field equal to zero. We can later reinsert it if we find it desirable.\\ The d'Alembertian becomes \begin{displaymath} \Box = \frac{1}{h}\frac{\partial^2}{\partial t^2}-\frac{1}{r^2}\frac{\partial}{\partial r}r^2h\frac{\partial}{\partial r} -\frac{L^2}{r^2}, \end{displaymath} where $h(r)=(1-2M/r)$ and $L^2$ is the square of the angular momentum operator. A finite temperature can be included by complexifying the time coordinate $t$, $t\mapsto \tau$. As $\phi$ describes a Bose field, it becomes periodic in $\tau$, $\phi(\tau+\beta)=\phi(\tau)$. This implies that the time direction is not only complexified but also compactified -- by appropriate scaling we can take $\tau$ to lie on the unit circle $S^1$ (see e.g. Ramond \cite{Ram} or Itzykson and Zuber, \cite{IZ}, for further details on this). The action, furthermore, becomes ``euclideanised''.\\ Now, the partition function \begin{displaymath} Z=\int e^{-S} {\cal D}\phi = \int e^{-\int \frac{1}{2}\phi \Box \phi \sqrt{|g|}d^3xd\tau}{\cal D}\phi, \end{displaymath} is simply a Gaussian and hence the functional integral can be carried out, the result being \begin{equation} Z = (\det\Box)^{-1/2}. \end{equation} See for instance \cite{Ram}. The quantity we are particularly interested in, is Helmholtz' free energy, $F$, which is defined as \begin{equation} F=-\frac{1}{\beta}\ln Z =\frac{1}{2\beta}\ln\det\Box. 
\end{equation} The internal energy and pressure, which appear in the energy momentum tensor, are related to $F$ by the usual thermodynamic relations, as is the entropy. \section{Functional Determinants} The major problem is apparently the calculation of $\det\Box$. For consistency, and in order to establish notation, we will give a short introduction to the topic here. Descriptions can be found in e.g. Hawking \cite{Haw} or Ramond \cite{Ram}.\\ A priori, the determinant of an operator $A$ must be given by the product of its eigenvalues $\lambda$ \begin{equation} \det A = \prod\lambda. \end{equation} Now, obviously this is not an easy thing to calculate. This is where the zeta function, $\zeta_A(s)$, and the heat-kernel, $G_A(x,x';\sigma)$, come into play. Define \begin{equation} \zeta_A(s) = \sum \lambda^{-s}, \end{equation} then \begin{equation} \det A = e^{-\zeta_A'(0)}. \end{equation} The heat-kernel is defined through the differential equation \begin{equation} A_xG_A(x,x';\sigma) = -\frac{\partial}{\partial\sigma}G_A(x,x';\sigma), \end{equation} subject to the boundary condition \begin{equation} \lim_{\sigma\rightarrow 0}G_A(x,x';\sigma) = \delta(x-x'). \end{equation} The reason for the terminology is transparent: when $A$ is $\frac{d^2}{d\theta^2}$, the Laplacian on $S^1$, then $\zeta_A(s)\propto \zeta(s)$, where $\zeta(s)=\sum_{n=1}^\infty n^{-s}$ is the Riemann zeta-function, and when $A=\nabla^2$ and $\sigma$ is the temperature, then the heat kernel satisfies the usual heat equation.\\ Denoting the eigenfunctions of $A$ by $\psi_\lambda$ we have \begin{equation} G_A(x,x';\sigma) = \sum_\lambda \psi_\lambda(x)\psi_\lambda^*(x') e^{-\lambda\sigma}, \end{equation} and one easily proves the important relationship \begin{equation} \zeta_A(s) = \frac{1}{\Gamma(s)}\int_0^\infty d\sigma \sigma^{s-1} \int d^4x G_A(x,x;\sigma). \end{equation} Notice that the integral is only along the diagonal $x=x'$.
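This Mellin-transform relationship between the zeta-function and the trace of the heat kernel can be verified numerically on a toy operator with a small, explicitly known spectrum (a minimal sketch; the eigenvalues below are arbitrary illustrative choices, not those of the d'Alembertian):

```python
import math
from scipy.integrate import quad

eigenvalues = [1.0, 2.0, 5.0]   # toy spectrum, purely illustrative
s = 2.0

# zeta_A(s) = sum over the spectrum of lambda^{-s}
zeta_direct = sum(lam ** (-s) for lam in eigenvalues)

# trace of the heat kernel: Tr exp(-sigma A) = sum_lambda exp(-lambda sigma)
def heat_trace(sigma):
    return sum(math.exp(-lam * sigma) for lam in eigenvalues)

# zeta_A(s) = (1/Gamma(s)) Integral_0^inf sigma^{s-1} Tr exp(-sigma A) dsigma
integral, _ = quad(lambda sigma: sigma ** (s - 1) * heat_trace(sigma), 0, math.inf)
zeta_mellin = integral / math.gamma(s)

print(zeta_direct, zeta_mellin)  # the two evaluations agree
```

The identity holds term by term, since $\int_0^\infty \sigma^{s-1}e^{-\lambda\sigma}\,d\sigma=\Gamma(s)\lambda^{-s}$; the same mechanism produces the $\Gamma$-functions in the $\sigma$-integration performed later in the text.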
We don't have to know the eigenfunctions in order to solve the heat equation; in a $d$-dimensional Euclidean space, for instance, one can show \cite{Ram} \begin{displaymath} G_A(x,x';\sigma) = (4\pi\sigma)^{-d/2}e^{-\frac{(x-x')^2}{4\sigma}}, \end{displaymath} in Cartesian coordinates. We will later use a mixture of techniques to arrive at a workable expression for the heat kernel. The angular and thermal parts will be handled by a mode sum, whereas we will have to make do with an asymptotic expansion for the radial part in order to find the Casimir energy density.\\ We can arrive at an important interpretation of the value of the heat kernel along the diagonal by noting \begin{equation} f(x) \equiv-\frac{1}{2\beta}\left.\frac{\partial}{\partial s}\right|_{s=0} \frac{1}{\Gamma(s)}\int_0^\infty d\sigma \sigma^{s-1}G_A(x,x;\sigma), \end{equation} is the density of Helmholtz' free energy \begin{equation} F = \int f(x)\sqrt{|g|}d^4x, \end{equation} and is thus exactly the quantity we are interested in calculating. \section{Finding the Heat-Kernel} We now return to our particular problem, the minimally coupled scalar field in a Schwarzschild back-ground, i.e.\ the calculation of the determinant of the d'Alembertian. For this particular operator we know that the eigenfunctions can be written \begin{equation} \psi_\lambda(\tau,r,\Omega) = e^{-i\omega_n\tau}Y_{lm}(\Omega) g_\lambda(r), \label{eq:ef} \end{equation} where $\Omega$ denotes the angles, and where $\omega_n$ is the {\em Matsubara frequency} \begin{equation} \omega_n = \frac{2\pi n}{\beta}\qquad n=0,\pm 1, \pm 2,... . \end{equation} We have no intention of finding the eigenvalues $\lambda$, nor of finding $g_\lambda(r)$; instead we will rewrite the d'Alembertian as \begin{equation} \Box = -\frac{\omega_n^2}{h}-\frac{1}{r^2}\frac{\partial}{\partial r} r^2h \frac{\partial}{\partial r} - \frac{l(l+1)}{r^2}.
\end{equation} And the heat-kernel will be written as \begin{equation} G(x,x';\sigma) = \sum_{nlm}g_{nl}(r,r';\sigma)Y_{lm}(\Omega) Y_{lm}^* (\Omega')e^{-\lambda_{nl}(r)\sigma-i\omega_n(\tau-\tau')}, \label{eq:G} \end{equation} where we have defined \begin{equation} \lambda_{nl}(r) \equiv \frac{l(l+1)}{r^2}+\frac{\omega_n^2}{h}. \end{equation} The reasoning behind this formula is as follows. The eigenfunctions can be written according to (\ref{eq:ef}) as a product of a spherical harmonic, a plane wave involving the Matsubara frequency and the complexified time, and finally some unknown radial function. Clearly, we would expect the heat kernel, being a sum of products of eigenfunctions, to have a similar form, but since we cannot find the radial eigenvalues $\lambda$ we cannot simply use (\ref{eq:ef}) to find $G$. On the other hand, it must be possible to expand it on the functions $Y_{lm}(\Omega)Y_{lm}^*(\Omega')$ and $e^{-i\omega_n(\tau- \tau')}$, hence the expression (\ref{eq:G}).\\ A mass, $\mu$, of the scalar field can be reinserted simply by adding $\mu^2$ to $\lambda_{nl}(r)$. The heat equation determines the unknown functions $g_{nl}$. We will Taylor expand these in $\sigma$, writing (asymptotic expansion)\footnote{For simplicity we will follow the common abuse of terminology and refer to this expression as ``the heat kernel'' although it strictly speaking is only an asymptotic formula valid for $\sigma$ not too large.} \begin{equation} g_{nl}(r,r';\sigma) = \frac{1}{\sqrt{4\pi\sigma}}e^{-\frac{(r-r')^2}{4 \sigma}}\sum_{k=0}^\infty a_k(r,r')\sigma^k, \end{equation} where the first term is the form $g_{nl}$ would have in a flat space-time. The boundary condition implies $a_0\equiv 1$. This asymptotic expansion is similar to the standard Schwinger-DeWitt expansion \cite{BD,GMM}. The difference here lies in the fact that we only make an asymptotic expansion of the radial part of the heat kernel, whereas we use a mode sum for the remaining part.
Our expansion is thus a hybrid, combining the asymptotic Schwinger-DeWitt expansion with the exact mode sum.\\ Inserting this expansion into the heat equation yields a recursion relation for the $a_k$'s. As we only need to know the value of the heat-kernel along the diagonal $x=x'$, we will put $r=r'$ to arrive at the following recursion relation by straightforward computation \begin{equation} \left(1+k+\frac{M}{r}\right)a_{k+1} =- L_1 a_{k} -L_2 a_{k-1}, \end{equation} where we have defined the operators \begin{eqnarray} L_1 &=& \left(1-\frac{2M}{r}\right)\frac{d^2}{dr^2} + \left(\frac{2}{r} - \frac{8M}{r^2}\right)\frac{d}{dr}\nonumber\\ &=& h\frac{d^2}{dr^2}+\frac{2}{r}(2h-1)\frac{d}{dr},\\ L_2 &=& h\lambda_{nl}'\frac{d}{dr}+2h\lambda'_{nl}r^{-1} + h' \lambda_{nl}' + h\lambda_{nl}''. \end{eqnarray} Putting \begin{equation} H = L_2 a_0 = 2h\lambda_{nl}'r^{-1}+h'\lambda_{nl}'+h\lambda_{nl}'', \end{equation} the first few functions $a_k$ become \begin{eqnarray} a_0 &=& 1,\\ a_1 &=& 0,\\ a_2 &=& \frac{-2H}{5-h},\\ a_3 &=& \frac{(-1)^22^2}{7-h}L_1\frac{H}{5-h},\\ a_4 &=& \frac{(-1)^32^3}{9-h}L_1\frac{1}{7-h}L_1\frac{H}{5-h} +\frac{(-1)^22^2}{9-h}L_2\frac{H}{5-h},\\ a_5 &=& \frac{(-1)^42^4}{11-h}L_1\frac{1}{9-h}L_1\frac{1}{7-h}L_1\frac{H} {5-h} + \frac{(-1)^32^3}{11-h}L_1\frac{1}{9-h}L_2\frac{H}{5-h}+\nonumber\\ &&\frac{(-1)^32^3}{11-h}L_2\frac{1}{7-h}L_1\frac{H}{5-h}. \end{eqnarray} There are singularities in these at $r=2M$, as one would expect. Noting that both $H$ and $L_2$ are linear in $\omega_n^2$, we see that $a_k$ can contain all even powers of the Matsubara frequencies up to and including $\omega_n^{2[k/2]}$.\\ The method of summation of modes when calculating zeta functions has been applied in a number of other spacetimes by various authors, \cite{others}, and a number of explicit results exist for some rather simple cases. \section{The Free Energy Density} We have now found an asymptotic expansion for the heat-kernel and are ready to integrate it.
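Before doing so, note that the $k=1$ step of the radial recursion above, $(2+M/r)\,a_2=-L_1a_1-L_2a_0=-H$, can be cross-checked symbolically against the quoted $a_2=-2H/(5-h)$, using $5-h=2(2+M/r)$. A sketch with sympy, where the symbol $L$ stands for $l(l+1)$ and $w$ for $\omega_n$:

```python
import sympy as sp

r, M, L, w = sp.symbols('r M L w', positive=True)  # L = l(l+1), w = omega_n
h = 1 - 2 * M / r
lam = L / r**2 + w**2 / h          # lambda_{nl}(r)

# H = L_2 a_0: with a_0 = 1 only the derivative-free parts of L_2 survive
H = (2 * h * sp.diff(lam, r) / r
     + sp.diff(h, r) * sp.diff(lam, r)
     + h * sp.diff(lam, r, 2))

a2 = -2 * H / (5 - h)              # the quoted coefficient a_2

# k = 1 step of the recursion: (1 + k + M/r) a_2 = -L_1 a_1 - L_2 a_0 = -H
check = sp.simplify((2 + M / r) * a2 + H)
print(check)  # 0
```

The same machinery can in principle be iterated to generate the higher $a_k$, applying $L_1$ and $L_2$ as differential operators at each step.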
The $\sigma$ integration is trivial (it usually is), and will just give a sum of $\Gamma$-functions. Explicitly \begin{eqnarray} \tilde{f}(x;s)&\equiv &\frac{1}{\Gamma(s)}\int_0^\infty d\sigma \sum_{nlm}|Y_{lm}(\Omega)|^2\frac{1}{\sqrt{4\pi}}\sum_{k=0}^\infty a_k e^{-\lambda_{nl}(r)\sigma}\sigma^{s+k-3/2}\nonumber\\ &=& \sum_{nlm}|Y_{lm}|^2\frac{1}{\sqrt{4\pi}}\sum_{k=0}^\infty a_k(r) \frac{\Gamma(s+k-\frac{1}{2})}{\Gamma(s)}\lambda_{nl}^{\frac{1}{2}-s-k}. \end{eqnarray} One should note that even though the asymptotic series is only valid for $\sigma$ small, this integral still makes sense since the factor $\lambda_{nl}$ acts as a mass term regularisation. If we hadn't used our hybrid method we would have had to assume a mass for the scalar field in order to obtain a meaningful integral. We can still do this, if we wish, by simply adding $\mu^2$ to $\lambda_{nl}$. The addition of such a finite mass, however, will not modify the convergence properties of $\tilde{f}$, hence the successive regularisations to be performed below are still needed. This is so because only the sum over $n$ needs to be regularised, and for fixed $l$, $\lambda_{nl}$ acts like a finite mass irrespective of whether $\mu^2\neq 0$ or not.\\ For $k\neq 0$, differentiating this with respect to $s$ at $s=0$ simply amounts to removing the $\Gamma$-function in the denominator and putting $s=0$ in the remaining terms. The $k=0$ term is singular and has to be treated separately. We have \begin{displaymath} \tilde{f}_{k=0}(x;s) = \sum_{nlm} |Y_{lm}|^2\frac{1}{\sqrt{4\pi}} \frac{\Gamma(s-\frac{1}{2})}{\Gamma(s)}\lambda_{nl}^{\frac{1}{2}-s}, \end{displaymath} so we have a divergent sum (for $s=0$) in the numerator, namely $\sum_n \sqrt{z^2+4\pi^2 n^2}$, where $z^2=\frac{l(l+1)h(r)\beta^2}{r^2}$. We can carry out the sum for an arbitrary $s$, see e.g.
Ramond \cite{Ram}: \begin{eqnarray*} \frac{1}{\Gamma(s)}\sum_{n=-\infty}^\infty \left(z^2+4\pi^2 n^2\right)^ {\frac{1}{2}-s} &=& \frac{1}{\Gamma(s)}\left[ z^{1-2s}+\frac{2\zeta(2s+1)}{(4\pi^2)^ {s-\frac{1}{2}}} +\right.\\ &&\hspace{-20mm}\left. \frac{2}{\Gamma(s+\frac{1}{2})}\sum_{\nu=1}^\infty \frac{z^{2\nu}}{\nu!}\frac{(-1)^\nu} {(4\pi^2)^{s+\nu-\frac{1}{2}}}\Gamma(s+\nu-\frac{1}{2}) \zeta(2s+2\nu-1)\right]. \end{eqnarray*} Differentiating and putting $s=0$ we obtain \begin{displaymath} z+\frac{1}{2\pi^{3/2}}(\gamma-2\ln 2\pi)(4\pi-\frac{z^2}{\pi}) +\frac{2}{\sqrt{\pi}}\sum_{\nu=2}^\infty\frac{z^{2\nu}}{\nu!}\frac{(-1)^\nu} {(4\pi^2)^{\nu-\frac{1}{2}}}\Gamma(\nu-\frac{1}{2})\zeta(2\nu-1), \end{displaymath} which is finite. This expression is to be multiplied by $\beta^{-1}h^{-1/2}$.\\ The free energy density $f(x)$ is then \begin{eqnarray} f(x) &=& -\frac{1}{2\beta}\left.\frac{\partial\tilde{f}} {\partial s}\right|_{s=0}\nonumber\\ &=& -\frac{1}{2\beta}\sum_{lm}|Y_{lm}|^2(4\pi)^{-1/2}\left[ -2\sqrt{\pi}\sqrt{\frac{l(l+1)}{r^2}}-\right.\nonumber\\ &&2\sqrt{\pi}\frac{1}{2\pi^{3/2}}(\gamma-2\ln 2\pi)\left(\frac{4\pi} {\sqrt{h}\beta} -\frac{l(l+1)\sqrt{h}\beta}{\pi r^2}\right)+\nonumber\\ &&\left. 4\sqrt{\pi}\beta^{-1}h^{-1/2}\sum_{\nu=2}^\infty \frac{(-1)^\nu} {\nu!}\left( \frac{l(l+1)h\beta^2}{4\pi^2 r^2}\right)^\nu \Gamma(\nu-\frac{1}{2})\zeta(2\nu-1)\right]-\nonumber\\ &&-\frac{1}{2\beta}\sum_{nlm}|Y_{lm}|^2(4\pi)^{-1/2}\sum_{k=2}^\infty a_k(r)\lambda_{nl}(r)^{\frac{1}{2}-k}\Gamma(k-\frac{1}{2}).
\end{eqnarray} On the face of it, this function depends on {\em all} the coordinates, but looking closer we see that there is no explicit time-dependence, and using the sum rule \begin{displaymath} \sum_m Y_{lm}(\Omega)Y^*_{lm}(\Omega') = \frac{2l+1}{4\pi} P_l({\bf u} \cdot {\bf u}'), \end{displaymath} where ${\bf u, u}'$ are unit vectors given by the solid angles $\Omega, \Omega'$, we see that \cite{GR} \begin{eqnarray*} \sum_m |Y_{lm}(\Omega)|^2 &=& \frac{2l+1}{4\pi}P_l(1)\\ &=& \frac{2l+1}{2^{l+2}\pi}\sum_{\nu=0}^{\left[\frac{1}{2}l\right]} \frac{(-1)^\nu (2l-2\nu)!}{\nu!(l-\nu)!(l-2\nu)!}. \end{eqnarray*} So the angular dependency also disappears. In accordance with what we would expect, the energy density depends only on the radial coordinate. We will write the density as a mode sum \begin{equation} f(r) = \sum_l f_l(r), \end{equation} where \begin{eqnarray} f_l(r) &\equiv& \left\{\pi^{-1}\beta^{-2} \frac{1}{2\beta}(4\pi)^{-3/2}(2l+1)2^{-l}\left( \sqrt{\frac{l(l+1)\beta^2}{r^2}}+\right.\right.\nonumber\\ &&\frac{1}{2\pi^{3/2}}(\gamma-2\ln 2\pi)\left(\frac{4\pi}{\sqrt{h}\beta^2} -\frac{l(l+1)\sqrt{h}\beta^2}{\pi r^2}\right)+\nonumber\\ &&\left. \frac{2}{\beta\sqrt{h\pi}}\sum_{\nu=2}^\infty \frac{(-1)^\nu} {\nu!}\left(\frac{l(l+1)h\beta^2}{4\pi^2 r^2}\right)^\nu \Gamma(\nu-\frac{1}{2})\zeta(2\nu-1)\right)-\nonumber\\ &&\left.\frac{1}{2\beta}(4\pi)^{-3/2}(2l+1)2^{-l}\sum_{n=-\infty}^\infty \sum_{k=2}^\infty \Gamma(k-\frac{1}{2})a_k(r)\lambda_{nl}(r)^{1/2-k} \right\}\times\nonumber\\ &&\sum_{\nu=0}^{[l/2]}\frac{(-1)^\nu(2l-2\nu)!}{\nu!(l-\nu)!(l-2\nu)!}. \end{eqnarray} The major problem here is that the coefficients $a_k(r)$ depend on $n$ through the Matsubara frequencies. \\ This expression needs regularisation. For instance, when $l=0$ we encounter a singularity. 
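The angular sum rule used above can be checked directly: the quoted combinatorial sum equals $2^l$, i.e.\ $P_l(1)=1$, so that $\sum_m|Y_{lm}|^2=(2l+1)/4\pi$. A small sketch:

```python
from math import factorial

def legendre_sum(l):
    # sum_{nu=0}^{[l/2]} (-1)^nu (2l-2nu)! / (nu! (l-nu)! (l-2nu)!)
    # each term is the integer C(l,nu) * C(2l-2nu,l), so exact // division is safe
    return sum((-1)**nu * factorial(2*l - 2*nu)
               // (factorial(nu) * factorial(l - nu) * factorial(l - 2*nu))
               for nu in range(l // 2 + 1))

for l in range(10):
    assert legendre_sum(l) == 2**l   # equivalent to P_l(1) = 1
```

This confirms that the mode functions $f_l(r)$ carry the full angular content, so only the radial dependence of the free energy density remains.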
Remembering that $a_k$ can include all even powers of $n$ up to and including $n^{2[k/2]}$ we write \begin{displaymath} a_k(r) = \sum_{t=0}^{\left[\frac{k}{2}\right]} b_k^{(t)}(r)n^{2t}, \end{displaymath} where $b_k^{(t)}$ is independent of $n$. Defining $\lambda=4\pi^2/h\beta^2$ we then have \begin{eqnarray*} \sum_{n=-\infty}^\infty\sum_{k=2}^\infty \left.a_k\right|_{l=0} \Gamma(k-\frac{1}{2}) \lambda_{n0}^{\frac{1}{2}-k} &=& 2\sum_{k=2}^\infty \Gamma(k-\frac{1}{2}) \lambda^{\frac{1}{2}-k}\sum_{t=0}^{[k/2]}b_k^{(t)}(r)\sum_{n=1}^\infty n^{2t+1-2k}\\ &=& 2\sum_{k=2}^\infty \Gamma(k-\frac{1}{2})\lambda^{\frac{1}{2}-k} \sum_{t=0}^{[k/2]}b_k^{(t)}(r)\zeta(2k-2t-1), \end{eqnarray*} which is divergent whenever $2k-2t-1=1$; as $t\leq \frac{1}{2}k$, this can only happen when $k=2$, $t=1$. \section{The Minkowski Space Contribution and the Casimir Energy Density} To get a physical quantity we must subtract the Minkowski space contribution \cite{BD,GMM}, because the physical energy must be such that it vanishes in flat spacetime. Doing this, we thereby get the ``Casimir energy density''. The relationship between this Casimir energy density, $\zeta$-functions and cut-off regularisations has been studied in \cite{casimir}. Subtracting off the flat spacetime contribution is sometimes enough to cancel divergences, because any manifold looks locally like Minkowski spacetime and hence will have the same leading divergence as in flat spacetime. It will turn out, however, that the Casimir energy thus obtained in this case is not finite and still needs some regularisation. The subtraction of the flat contribution should therefore be seen not so much as a regularisation/renormalisation as simply a normalisation, \cite{BD,GMM}.\\ The heat kernel for the d'Alembertian is already known in flat spacetime using Cartesian coordinates, but this will not be useful to us here, since we need to act on the asymptotic expansion (\ref{eq:G}).
Hence what we need is to find a similar expression for the flat spacetime heat kernel using a mixture of mode sum (angular coordinates and Matsubara frequencies) and asymptotic expansion (radial coordinate). We obtain this contribution by letting $M\rightarrow 0$ in the metric (and hence the d'Alembertian etc.).\footnote{This is not the same as saying our result for the heat kernel is smooth in $M$. We merely use $M\rightarrow 0$ as a shorthand for saying that in that limit the metric tensor reduces to the Minkowski spacetime one -- with a compact time due to the finite temperature. The coefficients $a_k$ will not a priori be smooth in this limit, however. Hence, new recursion relations have to be found.} Denoting the resulting coefficients by $\tilde{a}_k(r)$ we get the following much simpler recursion relation \begin{eqnarray} \tilde{a}_{k+1} &=& -\frac{1}{k+1}\tilde{L}_1\tilde{a}_k - \frac{1}{k+1} \tilde{L}_2\tilde{a}_{k-1}\nonumber\\ &=& -\frac{1}{k+1}\left(\frac{d^2}{dr^2}+\frac{2}{r}\frac{d}{dr}\right) \tilde{a}_k -\frac{2l(l+1)}{(k+1)r^3}\left(\frac{d}{dr}+\frac{2}{r} \right) \tilde{a}_{k-1}. \end{eqnarray} The solutions go like \begin{equation} \tilde{a}_k(r) = \alpha_k(l)r^{-2k}, \end{equation} with {\em no} dependency upon the frequency $\omega_n$. The coefficients $\alpha_k$ satisfy (for $k\neq 0$) \begin{equation} \alpha_{k+1} = -\frac{2k(2k-1)}{k+1}\alpha_k + \frac{4kl(l+1)}{k+1} \alpha_{k-1}, \end{equation} and $\alpha_0=1, \alpha_1=0$. Written down explicitly, the first few of the $\tilde{a}_k$ are\footnote{If the mass, $\mu$, of the quanta of the scalar field is non-zero, then these would be completely different.
They would still be independent of the Matsubara frequencies, though.} \begin{eqnarray*} \tilde{a}_0 &=& 1 = \alpha_0(l),\\ \tilde{a}_1 &=& 0 = \alpha_1(l)r^{-2},\\ \tilde{a}_2 &=& 2l(l+1)r^{-4} = \alpha_2(l)r^{-4},\\ \tilde{a}_3 &=& -8l(l+1)r^{-6} = \alpha_3(l)r^{-6},\\ \tilde{a}_4 &=& 12l(l+1)\left(5 -l(l+1) \right)r^{-8} = \alpha_4(l)r^{-8},\\ \tilde{a}_5 &=& -672l(l+1)\left(1-\frac{17}{105} l(l+1) \right)r^{-10} = \alpha_5(l)r^{-10}. \end{eqnarray*} Denoting the corresponding free energy density by $f^{(0)}$, the Casimir density is defined to be \begin{equation} f^{\rm Cas}(r) = f(r)-f^{(0)}(r) = \sum_l f^{\rm Cas}_l(r). \end{equation} The very simple form the functions $\tilde{a}_k$ take on actually allow us to carry out the summation over the Matsubara frequencies, using essentially the same formulas as in the $M\neq 0$ case. Defining \begin{equation} c_l \equiv -\frac{1}{2\beta}(4\pi)^{-3/2} (2l+1)2^{-l} \sum_{\nu=0}^{[l/2]}\frac{(-1)^\nu(2l-2\nu)!}{\nu!(l-\nu)!(l-2\nu)!}, \end{equation} the resulting free energy becomes \begin{eqnarray} f^{\rm Cas}_l &=& 2\sqrt{\pi}c_l\left[- \frac{\gamma-2\ln 2\pi}{2\pi^{5/2}}(z^2h^{-1/2}-\tilde{z}^2- \frac{4\pi^2}{\sqrt{h}\beta^2}+\frac{4\pi^2}{\beta^2}) +\right.\nonumber\\ && \frac{2}{\beta\sqrt{\pi}}\sum_{\nu=2}^\infty (z^{2\nu}h^{-1/2} - \tilde{z}^{2\nu}) \frac{(-1)^\nu}{\nu ! 
(4\pi^2)^{\nu-\frac{1}{2}}} \Gamma(\nu-\frac{1}{2}) \zeta(2\nu-1)+\sum_{k=2}^\infty\Gamma(k-\frac{1}{2})\times\nonumber\\ &&\left(\sum_{n=-\infty}^\infty a_k(r)\lambda_{nl}^{\frac{1}{2}-k}- \alpha_k(l)r^{-2k}\beta^{2k-1}(2\pi)^{1-2k}\left\{\left( \frac{\tilde{z}}{2\pi}\right)^{1-2k} + 2\zeta(2k-1)+\right.\right.\nonumber\\ &&\left.\left.\left.\frac{2}{\Gamma(k-\frac{1}{2})}\sum_{\nu=1}^\infty \frac{\tilde{z}^{2\nu}}{\nu!}\frac{(-1)^\nu}{(2\pi)^{2\nu}} \Gamma(k+\nu-\frac{1}{2})\zeta(2k+2\nu-1)\right\}\right)\right], \end{eqnarray} where \begin{equation} z^2 \equiv \frac{l(l+1)h(r)\beta^2}{r^2} \qquad\qquad \tilde{z}^2 \equiv \frac{l(l+1)\beta^2}{r^2}, \end{equation} and where we have used $zh^{-1/2}-\tilde{z}=0$. We should note that $f_0^{(0)}$ contains a countable infinity of divergences, this time coming from the terms $\tilde{z}^{1-2k}\alpha_k(l)\sim l(l+1)^{1-2k+ [k/2]}$. We should furthermore notice that all the terms $z^{2\nu}h^{-1/2}- \tilde{z}^{2\nu}$ are finite as $r\rightarrow 2M$, since $z^{2\nu}h^{-1/2}=\tilde{z}^{2\nu}h^{\nu-\frac{1}{2}}\rightarrow 0$ for $\nu\geq 1$ in this limit. The only divergent parts as $r\rightarrow 2M$ are the $a_k(r)\lambda^{\frac{1}{2}-k}$-terms. We can find an improved expression for these by writing once more \begin{displaymath} a_k(r)=\sum_{t=0}^{\left[\frac{k}{2}\right]}b_k^{(t)}(r)n^{2t}, \end{displaymath} we then have to perform sums of the form \begin{equation} \xi_2(s,t;a;z) \equiv \sum_{n=-\infty}^\infty n^{2t}(z^2+an^2)^{-s}, \end{equation} with $a=4\pi^2$. For $t=0$ we know the result; it is just the formula we have been using a number of times by now. Denote this sum by $\xi_1(s;a;z)$. Explicitly it is \begin{eqnarray*} \xi_1(s;a;z) &\equiv& \sum_{n=-\infty}^\infty (z^2+an^2)^{-s}\\ &=& z^{-2s}+\frac{2}{\Gamma(s)} \sum_{\nu=0}^\infty\frac{(-1)^\nu z^{2\nu}}{\nu! a^{s+\nu}} \Gamma(s+\nu)\zeta(2s+2\nu).
\end{eqnarray*} For $t$ a positive integer (which is the only case we need to worry about) we notice the following relationship, which follows from $\partial_a^t(z^2+an^2)^{t-s}=(-1)^t\frac{\Gamma(s)}{\Gamma(s-t)}n^{2t}(z^2+an^2)^{-s}$: \begin{equation} \xi_2(s,t;a;z) = \frac{(-1)^t\Gamma(s-t)}{\Gamma(s)}\frac{\partial^t}{\partial a^t}\xi_1(s-t;a;z). \end{equation} Inserting the expression for $\xi_1$ (whose $a$-independent term $z^{-2(s-t)}$ is annihilated by the derivatives) we finally arrive, for $t\geq 1$, at \begin{equation} \xi_2(s,t;a;z) = \frac{2}{\Gamma(s)}\sum_{\nu=0}^\infty (-1)^\nu\frac{z^{2\nu}}{\nu!}\Gamma(s+\nu) \zeta(2s-2t+2\nu)a^{-s-\nu}. \end{equation} We have to evaluate this at $s=k-\frac{1}{2}$, $a=4\pi^2$ and multiply it by $(\beta^2 h)^{k-\frac{1}{2}}2\sqrt{\pi}c_l$ in order to get the contribution to the free energy, which then reads \begin{eqnarray} f_l^{\rm Cas}(r) &=&2\sqrt{\pi}c_l\left[-\frac{\gamma- 2\ln 2\pi}{2\pi^{5/2}} (z^2h^{-1/2}-\tilde{z}^2-\frac{4\pi^2}{\sqrt{h}\beta^2}+\frac{4\pi^2} {\beta^2})\right.+\nonumber\\ &&\frac{2}{\beta\sqrt{\pi}}\sum_{\nu=2}^\infty \frac{(-1)^\nu} {\nu!(2\pi)^{2\nu-1}} (z^{2\nu}h^{-1/2}-\tilde{z}^{2\nu})\Gamma(\nu-\frac{1}{2}) \zeta(2\nu-1)+\nonumber\\ &&\sum_{k=2}^\infty\Gamma(k-\frac{1}{2})(2\pi)^{1-2k}\left(\frac{2(\beta^2h)^{k-\frac{1}{2}}}{\Gamma(k-\frac{1}{2})}\sum_{t=0}^{[k/2]} b_k^{(t)}(r)\sum_{\nu=0}^\infty(-1)^\nu\frac{z^{2\nu}}{\nu!} \times\right.\nonumber\\ &&\Gamma(k-\frac{1}{2}+\nu)(2\pi)^{-2\nu}\zeta(2\nu+2k-2t-1)-\nonumber\\ &&\alpha_k(l)r^{-2k}\beta^{2k-1}\left\{\left( \frac{\tilde{z}}{2\pi}\right)^{1-2k}+2\zeta(2k-1) +\right.\nonumber\\ &&\left.\left.\left.\frac{2}{\Gamma(k-\frac{1}{2})} \sum_{\nu=1}^\infty \frac{(-1)^\nu\tilde{z}^{2\nu}}{\nu!(2\pi)^{2\nu}} \Gamma(k+\nu-\frac{1}{2}) \zeta(2k+2\nu-1)\right\}\right)\right]. \end{eqnarray} We note the presence of a singularity when $k=2$, $t=1$. It is the same kind of singularity we encountered in the $l=0$ case.
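The expansion of $\xi_2$ in powers of $z^2/a$ can be checked numerically against the defining bilateral sum (a sketch with mpmath; the values $s=3$, $t=1$, $a=2$, $z=1$ are arbitrary illustrative choices lying inside the region of convergence $z^2<a$):

```python
from mpmath import mp, nsum, zeta, gamma, factorial, inf

mp.dps = 30
s, t, a, z = 3, 1, 2, 1

# direct bilateral sum: xi_2(s,t;a;z) = sum_n n^{2t} (z^2 + a n^2)^{-s}
direct = nsum(lambda n: n**(2*t) * (z**2 + a*n**2)**(-s), [-inf, inf])

# expansion in powers of z^2/a; the n = 0 term drops out for t >= 1
series = (2 / gamma(s)) * nsum(
    lambda nu: (-1)**int(nu) * z**(2*int(nu)) / factorial(int(nu))
               * gamma(s + nu) * zeta(2*s - 2*t + 2*nu) * a**(-s - nu),
    [0, inf])

print(direct, series)  # the two evaluations agree
```

For the values actually needed in the text, $s=k-\frac{1}{2}$ is not an integer and the $\zeta$-argument $2k-2t-1+2\nu$ hits the pole of the Riemann zeta-function precisely at $\nu=0$, $k=2$, $t=1$, reproducing the singularity noted above.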
The regularisation of such singularities is the subject of the next section.\\ It is an easy matter to derive a recursion relation for the new coefficients $b_k^{(t)}$; inserting $a_k=\sum_tb_k^{(t)}n^{2t}$ into the recursion relations for the $a_k$ yields \begin{equation} \left(1+k+\frac{M}{r}\right)b_{k+1}^{(t)} = -L_1b_k^{(t)}-L_2^{(0)} b_{k-1}^{(t)}-L_2^{(1)}b_{k-1}^{(t-1)}, \end{equation} where we have written $L_2=L_2^{(0)}+n^2L_2^{(1)}$. Thus \begin{eqnarray*} L_2^{(0)} &=& -2\frac{l(l+1)}{r^3}\left(h\frac{d}{dr}+h'-h\frac{1}{r} \right),\\ L_2^{(1)} &=& -\frac{4\pi^2}{h\beta^2}\left(h'\frac{d}{dr}+2h'\frac{1}{r} -\frac{h'{^2}}{h}+h''\right). \end{eqnarray*} The first few of these have been written out explicitly in the appendix.\\ Remembering that $b_k^{(t)}$ vanishes whenever $t<0$ or $t>\left[\frac{k}{2} \right]$, we can calculate these coefficients systematically. We furthermore notice that $b_k^{(t)}$ is $\beta^{-2t}$ times some function which only depends upon the metric and its derivatives (i.e. on $h,h',h'',...$); we can thus write \begin{equation} b_k^{(t)}(r,\beta) = \beta^{-2t}c_k^{(t)}(r), \end{equation} in this way separating the coefficients into a purely thermal and a purely geometric part.\\ Now, from Helmholtz' free energy density $f^{\rm Cas}$ we can calculate various interesting physical quantities, namely the renormalised expressions for the (modes of) the pressure, $p_l$, internal energy density, $u_l$, and finally the entropy density $s_l$ by \begin{eqnarray} p_l &=& -\left(\frac{\partial f^{\rm Cas}_l}{\partial V}\right)_\beta =-\frac{1}{4\pi r^2}\left(\frac{\partial f_l^{\rm Cas}(r)}{\partial r} \right)_\beta,\\ u_l &=& \beta^2 f_l^{\rm Cas}+\beta s_l,\\ s_l &=& \beta^2\left(\frac{\partial f_l^{\rm Cas}}{\partial\beta} \right)_r. \end{eqnarray} However, if one attempts to calculate, say, the entropy density, one will find that it is divergent; thus there are still some regularisations to be done.
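Before turning to the regularisation, one can at least confirm that the definition $s_l=\beta^2(\partial f_l/\partial\beta)_r$ reproduces the standard entropy in a case where everything is finite. A minimal stand-alone Python sketch for a single bosonic mode of frequency $\omega$ (a toy free energy, not the mode densities of the text; $\omega$ and $\beta$ are arbitrary test values):

```python
import math

omega, beta = 1.3, 0.7   # arbitrary test frequency and inverse temperature

def F(b):
    # free energy of one bosonic mode: zero-point part plus thermal part
    return omega / 2 + math.log(1 - math.exp(-b * omega)) / b

# entropy from s = beta^2 (dF/dbeta), evaluated by a central difference
eps = 1e-6
s_numeric = beta**2 * (F(beta + eps) - F(beta - eps)) / (2 * eps)

# closed form: s = beta*omega/(e^{beta*omega}-1) - ln(1-e^{-beta*omega})
s_exact = beta * omega / math.expm1(beta * omega) \
          - math.log(1 - math.exp(-beta * omega))
```

The numerical derivative agrees with the closed form, which is the standard positive entropy of a thermal mode once $T=1/\beta$ is identified, so the $\beta$-derivative convention above matches $S=-\partial F/\partial T$.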
\section{The Final Regularisations} As we saw, $f^{\rm Cas}$ had a singularity from $l=0$, stemming from two kinds of singularities in $f$ and $f^{(0)}$ which did not cancel each other. We furthermore noticed a similar singularity in the $k=2,t=1$ term. We will now return to this problem once more.\\ The singularities in the curved and flat space-time free energies came from two different sources: one was $\zeta(1)$, i.e. the pole of the Riemann $\zeta$-function, whereas the flat space-time contribution had a countable infinity of poles $(l(l+1))^{-\nu}$. We thus need two different ways of dealing with the problems.\\ It is well-known that singularities in the $\zeta$-function regularization are to be removed by taking the appropriate principal parts (see e.g. \cite{princip}), which for a meromorphic function simply amounts to extracting the finite part near a pole. Hence \begin{eqnarray} f_0(r) &=&-2\sqrt{\pi}c_0\left[\frac{1}{2}\sqrt{\pi} \left(\frac{4\pi^2}{h\beta^2}\right)^{-3/2}\left( \left.b_2^{(0)}\right|_{l=0}(r)\zeta(3)+\gamma \left.b_2^{(1)}\right|_{l=0}(r)\right)+\right.\nonumber\\ &&\left.\sum_{k=3}^\infty\Gamma(k-\frac{1}{2}) \left(\frac{4\pi^2}{h\beta^2}\right)^{\frac{1}{2}-k} \sum_{t=0}^{\left[\frac{k}{2}\right]}\left.b_k^{(t)}\right|_{l=0}(r) \zeta(2k-2t-1)\right], \end{eqnarray} where we have written \begin{equation} a_k(r) = \sum_{t=0}^{[k/2]}b_k^{(t)}(r)n^{2t}, \end{equation} as before. In obtaining this result we have used \cite{GR,AS} \begin{displaymath} \zeta(s)=\frac{1}{s-1}+\gamma+\sum_{n=1}^\infty (-1)^n \frac{\gamma_n}{n!}(s-1)^n. \end{displaymath} Now \begin{eqnarray*} a_2 &=&\frac{-2H}{5-h}\\ &=& \frac{8\pi^2 n^2}{(5-h)h\beta^2}\left( 2r^{-1} -h^{-1}(h')^2+h''\right)\mbox{ (for $l=0$)}, \end{eqnarray*} so $\left.b_2^{(0)}\right|_{l=0}=0$\footnote{The $n=0$ contribution will always be proportional to $l(l+1)$, as this is the only $\omega_n$-independent term on the right hand side of the recursion relations.
For $k=2$ we find explicitly $b_2^{(0)} = 4r^{-4}l(l+1)(5-h)^{-1}(h'-h/r)$.}, leading to \begin{eqnarray} f_0(r) &=& \frac{1}{16\pi^{3/2}}\sqrt{h}\gamma (5-h)^{-1} (2r^{-1}-h^{-1}(h')^2+h'')+\nonumber\\ && \hspace{-10mm}\frac{1}{8\beta\sqrt{\pi}} \sum_{k=3}^\infty\Gamma(k-\frac{1}{2})\left( \frac{\sqrt{h}\beta}{2\pi}\right)^{2k-1}\sum_{t=0}^{[k/2]} \left.b_k^{(t)}\right|_{l=0}(r)\zeta(2k-2t-1), \end{eqnarray} where we have inserted $c_0=-\frac{1}{2\beta}(4\pi)^{-3/2}$. We notice the appearance of a temperature independent term; this will thus give no contribution to the entropy.\\ The remaining singularity in $f_l$ for $l\geq 1$ also comes from the pole in Riemann's zeta-function and again appears only in the $k=2$ contribution. Here $b_2^{(0)}$ gets multiplied by \begin{displaymath} 4\sqrt{\pi}c_l\beta^3h^{3/2} \sum_{\nu=0}^\infty(-1)^\nu\frac{z^{2\nu}}{\nu!} \Gamma(\nu+\frac{3}{2})\zeta(3+2\nu)(4\pi^2)^{-\nu-3/2}, \end{displaymath} which is non-singular, but $b_2^{(1)}$ gets multiplied by \begin{displaymath} 4\sqrt{\pi} c_l\frac{1}{4} \beta^3h^{3/2}(\zeta(1)\Gamma(\frac{3}{2})(4\pi^2)^{-3/2}+...), \end{displaymath} with the non-singular terms left out. In the spirit of $\zeta$-function regularization then, we must interpret this $\zeta(1)$ as $\gamma$, and the result is then well behaved.\\ For the Minkowski space contribution we have to return to the initial problem, the solution of the heat-equation, and we have to redo the calculation with $l=0$. Hence \begin{equation} f^{(0)}_0(r) = -\frac{1}{2\beta}\left.\frac{d}{ds}\right|_{s=0} \left(\frac{1}{\Gamma(s)}\sum_{n=-\infty}^\infty \int_0^\infty \sigma^{s-1}g_{n0}^{(0)}(r,\tau,r,\tau;\sigma) Y_{00}^2e^{-\omega_n^2\sigma} d\sigma\right), \end{equation} where $g_{n0}^{(0)}$ solves $\Box g_{n0}^{(0)} = \omega_n^2 g_{n0}^{(0)} -\frac{\partial}{\partial\sigma} g_{n0}^{(0)}$.
So $g_{n0}^{(0)}$ is the Minkowski space analogue of the function $g_{nl}$ we introduced in order to solve the heat-equation in a Schwarzschild geometry. The solution is easily seen to be \begin{equation} g_{n0}^{(0)} = \frac{1}{\sqrt{4\pi\sigma}} e^{-\frac{(r-r')^2}{4\sigma}} +\mbox{terms vanishing at $r=r'$}. \end{equation} This leads to (standard calculation) \begin{equation} f_0^{(0)}(r) = -\frac{1}{91\pi\beta^2}. \end{equation} We notice that this contribution is independent of $r$ and that it vanishes as $\beta\rightarrow \infty$; it is thus a pure effect of the finite temperature. This also implies that there is no Minkowski spacetime contribution, for $l=0$, to the pressure, but only to the entropy and internal energy. It is interesting to point out that in fact the entire Minkowski space contribution is $\beta$-dependent, also for $l\neq 0$.\\ The $l=0$ contribution to the Casimir energy density can now be written down: \begin{eqnarray} f_0^{\rm Cas} &=& \frac{\sqrt{h}\gamma}{16\pi^{3/2}(5-h)}(2r^{-1}-h^{-1} h^{'2}+h'')+\frac{1}{91\pi\beta^2}+\nonumber\\ &&\frac{1}{8\beta\sqrt{\pi}}\sum_{k=3}^\infty\Gamma(k-\frac{1}{2}) \left(\frac{\sqrt{h}\beta}{2\pi}\right)^{2k-1}\sum_{t=0}^{[k/2]} \left.b_k^{(t)}\right|_{l=0}\zeta(2k-2t-1). \end{eqnarray} The temperature independent part of this is negative for $r$ not too far away from the Schwarzschild radius, and in fact looks pretty much like the usual effective potential (i.e., Coulomb plus angular momentum part) for the hydrogen atom, thereby suggesting the existence of bound states. We will later see more convincing arguments for this.\\ From $f_0^{\rm Cas}$ we can evaluate the $l=0$ contribution to the entropy density, which turns out to be \begin{equation} s_0 =\frac{1}{4}\pi^{-1/2}\sum_{k=3}^\infty\Gamma(k-\frac{1}{2}) \left(\frac{\sqrt{h}\beta}{2\pi}\right)^{2k-1}\sum_{t=0}^{[k/2]} \left.b_k^{(t)}\right|_{l=0}\zeta(2k-2t-1)(k-1-t)-\frac{2}{91\pi\beta}.
\end{equation} For $l\neq 0$ we still had a singularity, using the above prescription for its regularization we arrive at (including the $l=0$ contribution) \begin{eqnarray} f_l^{\rm Cas}(r)&=&2\sqrt{\pi}c_l\left[-\frac{\gamma -2\ln 2\pi}{2\pi^{5/2}}\left(z^2h^{-1/2}-\tilde{z}^2-\frac{4\pi^2}{\sqrt{h}\beta^2} +\frac{4\pi^2}{\beta^2}\right)\right.+\nonumber\\ &&\frac{2}{\beta\sqrt{\pi}}\sum_{\nu=2}^\infty\frac{(-1)^\nu}{\nu! (2\pi)^{2\nu-1}} \left(z^{2\nu}h^{-1/2}-\tilde{z}^{2\nu}\right)\Gamma(\nu-\frac{1}{2}) \zeta(2\nu-1)+\nonumber\\ &&\beta^3\frac{1}{2}\sqrt{\pi}\left(4b_2^{(0)}h^{3/2}\pi^{-1/2}\sum_{\nu=0}^\infty \frac{(-1)^\nu z^{2\nu}}{\nu!(2\pi)^{2\nu}}\Gamma(\frac{3}{2}+\nu) \zeta(2\nu+3)+\right.\nonumber\\ &&b_2^{(1)}h^{3/2}(2\pi)^{-5}\left(\frac{1}{2}\gamma+\sum_{\nu=1}^\infty (-1)^\nu\frac{z^{2\nu}}{\nu!(2\pi)^{2\nu}}\Gamma(\nu+\frac{3}{2})\frac{1} {\sqrt{\pi}} \zeta(2\nu+1)\right)-\nonumber\\ &&\left.\alpha_2(l)r^{-4}(2\pi)^{-3}\left\{\left(\frac{2\pi}{\tilde{z}}\right)^3 +\frac{4}{\sqrt{\pi}}\sum_{\nu=0}^\infty\frac{(-1)^\nu\tilde{z}^{2\nu}}{\nu! (2\pi)^{2\nu}} \Gamma(\nu+\frac{3}{2})\zeta(2\nu+3)\right\}\right)+\nonumber\\ &&\sum_{k=3}^\infty\Gamma(k-\frac{1}{2})\beta^{2k-1}\left(2\sum_{t=0}^{[k/2]} \frac{\Gamma(k-\frac{1}{2})}{\Gamma(k-\frac{1}{2}-t)}b_k^{(t)}h^{k-\frac{1}{2}} \sum_{\nu=0}^\infty\frac{(-1)^\nu z^{2\nu}}{\nu!(2\pi)^{2k+2\nu+2t-1}}\times\right.\nonumber\\ &&\frac{\Gamma(k-\frac{1}{2}+\nu)}{\Gamma(k-\frac{1}{2}-t)}\zeta(2k-1+2\nu-2t)- \alpha_k(l)r^{-2k}(2\pi)^{1-2k}\left\{\left(\frac{\tilde{z}}{2\pi}\right)^{1-2k} +\right.\nonumber\\ &&\left.\left.\left.+\frac{2}{\Gamma(k-\frac{1}{2})}\sum_{\nu=0}^\infty \frac{(-1)^\nu\tilde{z}^{2\nu}} {\nu!(2\pi)^{2\nu}}\Gamma(k+\nu-\frac{1}{2})\zeta(2k+2\nu-1)\right\} \right)\right]+\nonumber\\ && \frac{\sqrt{h}\gamma}{16\pi^{3/2}(5-h)}(2 r^{-1}-h^{-1}(h')^2 +h'') +\frac{1}{91\pi\beta^2}. 
\end{eqnarray} This leads to the following expression for the modes of the entropy density \begin{eqnarray} s_l^{\rm Cas} &=& -\beta f_l^{\rm Cas} +2\sqrt{\pi}c_l\left[ -\frac{\gamma-2\ln 2\pi}{\pi^{5/2}}(\beta(z^2h^{-1/2}-\tilde{z}^2)+ \frac{4\pi^2} {\sqrt{h}\beta}-\frac{4\pi^2}{\beta})\right.+\nonumber\\ &&\frac{2}{\sqrt{\pi}}\sum_{\nu=2}^\infty\frac{(-1)^\nu(2\nu-1)}{\nu!(2\pi)^{2\nu-1}} (z^{2\nu}h^{-1/2}-\tilde{z}^{2\nu})\Gamma(\nu-\frac{1}{2})\zeta(2\nu-1)+\nonumber\\ &&\frac{1}{2}\sqrt{\pi}\beta^4\left(4b_2^{(0)}h^{3/2}\pi^{-1/2}\sum_{\nu=0}^\infty \frac{(-1)^\nu z^{2\nu}(2\nu+3)}{\nu!(2\pi)^{2\nu}}\Gamma(\nu+\frac{3}{2})\zeta(2\nu+3)+ \right.\nonumber\\ &&b_2^{(1)}h^{3/2}\pi^{-1/2}\left(\frac{5}{2}\gamma+\pi^{-1}\sum_{\nu=1}^\infty \frac{(-1)^\nu z^{2\nu}(2\nu+5)}{\nu!(2\pi)^{2\nu}}\Gamma(\nu+\frac{3}{2})\zeta(2\nu+1)\right) -\nonumber\\ &&\left.\frac{1}{2}\alpha_2(l)r^{-4}\pi^{-7/2}\sum_{\nu=0}^\infty \frac{(-1)^\nu \tilde{z}^{2\nu}(2\nu+3)}{\nu!(2\pi)^{2\nu}} \Gamma(\nu+\frac{3}{2})\zeta(2\nu+3)\right)+\nonumber\\ &&\sum_{k=3}^\infty\beta^{2k}\left(2h^{k-1/2}\sum_{t=0}^{[k/2]} \frac{\Gamma(k-\frac{1}{2})}{\Gamma(k-\frac{1}{2}-t)}b_k^{(t)} \sum_{\nu=0}^\infty \frac{(-1)^\nu z^{2\nu}(2\nu+2t+2k-1)} {\nu!(2\pi)^{2\nu+2k+2t-1}}\times\right.\nonumber\\ &&\frac{\Gamma(k-\frac{1}{2}+\nu)}{\Gamma(k-\frac{1}{2}-t)} \zeta(2\nu+2k-2t-1)-\nonumber\\ &&2\alpha_k(l)r^{-2k}(2\pi)^{1-2k}\frac{1}{\Gamma(k-\frac{1}{2})}\sum_{\nu=0}^\infty \frac{(-1)^\nu\tilde{z}^{2\nu}}{\nu!(2\pi)^{2\nu}}\Gamma(k+\nu-\frac{1}{2}) \times\nonumber\\ &&\left.\left. (2k+2\nu-1) \zeta(2k+2\nu-1)\right)\right]-\frac{2}{91\pi\beta}. \end{eqnarray} One should note that $s_l$ is negative in some regions; this need not indicate that the entropy as such is negative, but only that one cannot really localise the entropy: the correct physical quantity is $S=\sum_l \int s_l r^2dr$ and not $s_l(r)$, and this could very well be positive still.
Furthermore, the entropy density is not uniquely defined as one could add a total derivative and still get the same overall entropy. This arbitrariness in the choice of $s_l$ will of course be fixed by defining a zero for the entropy proper. Furthermore, the calculation is only valid outside the horizon, $r\geq 2M$, so in principle one could have a negative entropy in this part of the system (here the universe), provided that a suitable positive entropy is present inside the horizon $r\leq 2M$. It does seem rather strange, however, to insist on this interpretation, as $r\geq 2M$ is the only observable part of the universe, and we would certainly expect physical quantities to ``behave properly'' in this region.\\ The entropy density we have found here is partly geometric in nature (induced by the curvature) and partly thermal (coming from $\beta\neq \infty$). The ``geometric entropy'' of black holes has been studied by Moretti in \cite{entropy}, where he also uses $\zeta$-function techniques. He shows that the Bekenstein-Hawking entropy is purely geometrical. We refer to his paper for further details.\\ The $\beta$-independent part of $f_l^{\rm Cas}$ is seen to be simply (remember, $c_l\sim \beta^{-1}, b_k^{(t)} = \beta^{-2t}c_k^{(t)}$) \begin{eqnarray} \left.f^{\rm Cas}_l\right|_{\rm no~~temp.}&=& 2\sqrt{\pi}\beta c_l \frac{1}{2}\sqrt{\pi} c_2^{(1)} h^{3/2} (2\pi)^{-5} \frac{1}{2} \gamma+\frac{\sqrt{h}\gamma}{16\pi^{3/2}(5-h)}(2r^{-1}-h^{-1}(h')^2 +h'')\nonumber\\ &=& \frac{(2l+1)\gamma}{2^l 64 \pi^{7/2}} M^2r^{-2} (r-2M)^{-1/2} (2r+M)^{-1} \sum_{\nu=0}^{[l/2]}\frac{(-1)^\nu (2l-2\nu)!}{\nu !(l-\nu)!(l-2\nu)!} +\nonumber\\ &&\frac{1}{16\pi^{3/2}}\sqrt{h}\gamma (5-h)^{-1} (2r^{-1}-h^{-1}(h')^2+ h''), \end{eqnarray} (the last term coming from $l=0$) which is divergent on the horizon. By using \begin{displaymath} 1=\sum_{lm} |Y_{lm}|^2 = \frac{1}{4\pi}\sum_{l=0}^\infty\frac{2l+1} {2^l}\sum_{\nu=0}^{[l/2]} \frac{(-1)^\nu (2l-2\nu)!}{\nu!(l-\nu)!
(l-2\nu)!}, \end{displaymath} which follows from the formula for $P_l(1)$ used before, we can actually carry out the sum over $l$ to obtain \begin{equation} \left.f^{\rm Cas}\right|_{\rm no~temp} = \frac{M^2\gamma}{16\pi^{5/2}} r^{-2}(r-2M)^{-1/2} (2r+M)^{-1} +\frac{\sqrt{h}\gamma}{16\pi^{3/2} (5-h)}(2r^{-1}-h^{-1}(h')^2+h''). \label{eq:fnotemp} \end{equation} It is also interesting to note that outside the Schwarzschild radius this is positive, whereas inside it is negative imaginary. The integral, moreover, over $r$ is finite (for $l\neq 0$) in any case, even though the function is divergent for $r=2M$, and we get the following result \begin{equation} \left.F\right|_{\rm no~~ temp} = \frac{\gamma}{16\pi^{5/2}}\left[ \frac{\pi}{\sqrt{10 M}} -i\sqrt{\frac{2}{5M}} {\rm Artanh}\left(\frac{2}{\sqrt{5}}\right)\right] \qquad l\geq 1, \end{equation} the first term is the $r> 2M$ contribution, whereas the second comes from $0<r<2M$. This result represents, then, the energy coming from the removal of the point $r=0$ from Minkowski space and placing a mass $M$ in that singularity. It is, in other words, a proper Casimir energy similar to the one obtained in the original case of two plates in flat spacetime.\\ From (\ref{eq:fnotemp}) we can also find a temperature independent pressure by simply taking the derivative with respect to $r$. Doing this we find \begin{eqnarray} \left. p\right|_{\rm no~temp} &=&- \frac{M^2\gamma}{64\pi^{7/2}} \frac{8 M^2+19 Mr-14 r^2}{2 r^4 h^{3/2}(2r+M)^2}-\nonumber\\ &&\frac{\gamma}{64\pi^{5/2}}\frac{10M^4+16 M^3 r-32 M^2 r^2-2 M^3 r^2 +12Mr^3-11M^2r^3+10Mr^4-2r^5}{r^5 h^{3/2} (2r+M)^2}. \end{eqnarray} This local pressure is negative outside the horizon, i.e., for $r>2M$, and negative imaginary for $r<2M$.
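Two steps above lend themselves to a quick numerical confirmation: the combinatorial identity equivalent to $P_l(1)=1$, and the $r>2M$ radial integral behind the $\pi/\sqrt{10M}$ term. The following stand-alone Python sketch checks both (the substitutions $t^2=r-2M$ and $w=t/(1+t)$, used to remove the endpoint singularity, and the test value $M=1$ are our choices, not from the text):

```python
import math

# (1) the combinatorial identity equivalent to P_l(1) = 1:
#     sum_nu (-1)^nu (2l-2nu)! / (nu! (l-nu)! (l-2nu)!) = 2^l,
#     used above to carry out the sum over l
legendre_ok = all(
    sum((-1)**nu * math.comb(l, nu) * math.comb(2*l - 2*nu, l)
        for nu in range(l // 2 + 1)) == 2**l
    for l in range(11)
)

# (2) the r > 2M piece of the temperature-independent free energy:
#     int_{2M}^infty (r-2M)^(-1/2) (2r+M)^(-1) dr = pi / sqrt(10 M).
# After t^2 = r - 2M and w = t/(1+t) the integrand is smooth on [0,1]:
M = 1.0
f = lambda w: 2.0 / (2*w**2 + 5*M*(1 - w)**2)

n, h = 2000, 1.0 / 2000          # composite Simpson's rule
integral = (f(0) + f(1)
            + sum((4 if i % 2 else 2) * f(i*h) for i in range(1, n))) * h / 3
```

Both checks come out exact to the quoted precision, confirming the $\pi/\sqrt{10M}$ coefficient in $\left.F\right|_{\rm no~temp}$ up to the overall $\gamma/(16\pi^{5/2})$ prefactor.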
\\ For small but non-vanishing $\beta$ we get (writing $c_l=\beta^{-1} \bar{c}_l, z^2=\beta^2Z^2, \tilde{z}^2=\beta^2\tilde{Z}^2$) \begin{eqnarray} f_l^{\rm Cas} &=& 2\sqrt{\pi}\bar{c}_l \left[-\frac{\gamma-2\ln 2\pi} {2\pi^{5/2}}\beta(Z^2 h^{-1/2} -\tilde{Z}^2-\beta^{-4}4\pi^2 (h^{-1/2}-1))+\right.\nonumber\\ &&\frac{1}{8}\pi^{-7/2}\beta^2 (Z^2 h^{-1/2}-\tilde{Z}^2)\Gamma(3/2) \zeta(3)+\frac{1}{2\pi}c_2^{(0)}h^{3/2}\beta^2\Gamma(3/2)\zeta(3)+ \nonumber\\ && (2\pi)^{-5} h^{3/2} c_2^{(1)}\left(\frac{1}{2}\gamma-(2\pi)^{-2} Z^2\beta^2\Gamma(5/2)\zeta(3)\right)-\nonumber\\ &&\alpha_2(l) r^{-4}(2\pi)^{-3}\left((2\pi)^3\tilde{Z}^{-3}\beta^{-4} +\frac{4}{\sqrt{\pi}}\beta^2\Gamma(3/2)\zeta(3)\right)-\nonumber\\ &&\left.\sum_{k=3}^\infty\Gamma(k-\frac{1}{2})\alpha_k(l) r^{-2k} \tilde{Z}^{1-2k}\beta^{-1}\right] +\nonumber\\ &&\frac{\sqrt{h}\gamma}{16\pi^{3/2}(5-h)}(2r^{-1}-h^{-1}(h')^2+h'') +\frac{1}{91\pi\beta^2} +O(\beta^3). \end{eqnarray} Note that the summation over $k$ does not introduce additional powers of $\beta$: $\beta^{-1}$ is simply multiplied by a function of $r$ and $l$, $A_l(r) := \sum_{k=3}^\infty \Gamma(k-1/2)\alpha_k(l) r^{-2k}\tilde{Z}^{1-2k}$. The free energy resulting from this expression turns out to be everywhere negative for $r>2M$ and imaginary for $r<2M$.\\ The entropy density we derive from this is \begin{eqnarray} s_l^{\rm Cas} &=& 2\sqrt{\pi}\bar{c}_l\left[-\beta^2 \frac{\gamma-2\ln 2\pi} {2\pi^{5/2}}(Z^2h^{-1/2}-\tilde{Z}^2)-6\beta^{-2}\frac{\gamma- 2\ln 2\pi}{\sqrt{\pi}}(h^{-1/2}-1)\right.+\nonumber\\ &&\frac{1}{8}\pi^{-3}\zeta(3)\beta^3 (Z^2h^{-1/2}-\tilde{Z}^2) +\frac{1}{2}\pi^{-1/2}c_2^{(0)}h^{3/2}\zeta(3)\beta^3-\nonumber\\ &&\left.\frac{3}{2}\sqrt{\pi}(2\pi)^{-7}Z^2h^{3/2}c_2^{(1)}\zeta(3) \beta^3 +4\alpha_2(l)r^{-4}(\tilde{Z}^{-3}\beta^{-3}- \zeta(3)(2\pi)^{-3}\beta^3)+A_l(r)\right]-\nonumber\\ &&\frac{2}{91\pi\beta}+O(\beta^3). \end{eqnarray} This is positive outside the Schwarzschild radius.
We see that the temperature independent part of the entropy density is simply $A_l(r)$ for $l\geq 1$, which is independent of the mass of the black hole and comes from the renormalisation, and the $l=0$ contribution then has all the information about $M$. The free energy as well as the entropy and internal energy densities are divergent on the horizon $r=2M$, although in a rather mild way.\\ The internal energy density is found to be \begin{eqnarray} u_l^{\rm Cas} &=& 2\sqrt{\pi}\bar{c}_l\left[-(\gamma-2\ln 2\pi) \pi^{-5/2}\beta^3(Z^2h^{-1/2}-\tilde{Z}^2)-4\pi^{-1/2}(\gamma-2 \ln 2\pi)\beta^{-1}(h^{-1/2}-1)+\right.\nonumber\\ && \frac{3}{16\pi^3}\beta^4 \zeta(3)(Z^2h^{-1/2}-\tilde{Z}^2) +\frac{3}{4\sqrt{\pi}}c_2^{(0)}\zeta(3)h^{3/2}\beta^4+\nonumber\\ &&(2\pi)^{-5}h^{3/2}c_2^{(1)}(\frac{1}{2}\gamma\beta^2+\frac{9}{16 \pi^{3/2}}Z^2\zeta(3)\beta^4)-\nonumber\\ &&\left.\alpha_2(l)r^{-4}(-3\tilde{Z}^{-3}\beta^{-2}+ \frac{4}{3\pi^3}\zeta(3)\beta^4)\right]+\nonumber\\ &&\frac{\sqrt{h}\gamma}{16\pi^{3/2} (5-h)}(2r^{-1}-h^{-1}(h')^2 +h'')\beta^2-\frac{1}{91\pi}+O(\beta^5), \end{eqnarray} which is seen to be singular at $r=2M$ as mentioned above. It is interesting to note that, contrary to the free energy and the entropy densities, the internal energy is integrable inside the horizon. In fact we get \begin{eqnarray} \left.U_l^{\rm Cas}\right|_{\rm int} &=& \int_0^{2M}dr r^2u_l^{\rm Cas} \\ &=&-12 M^2 (l(l+1))^{-1/2}\beta^{-2}+\frac{\gamma M\ln 5}{128\sqrt{5} \pi^3}(24 \pi^{3/2}-32-5M\pi^{3/2})\beta^2+\nonumber\\ &&\frac{8}{273\pi}M^3\beta^{-1}(364\gamma\sqrt{\pi}-\beta-728\sqrt{\pi} \ln 2\pi)-\nonumber\\ &&i\frac{M}{128\sqrt{\pi}}\beta^{-1} (1280 M^2\pi(\gamma-2\ln 2\pi)+\gamma\beta^3 (M-8))-\nonumber\\ &&(i\pi+\ln 5)\frac{\gamma M}{128\sqrt{5}\pi^3}\beta^2(32-24\pi^{3/2} +5M\pi^{3/2}), \end{eqnarray} which is clearly complex but finite. The imaginary part of this internal energy is interpreted as giving a continuous creation of particles.
Notice, by the way, that only one of the terms in $U_l^{\rm Cas}$ is $l$-dependent. The temperature independent part, moreover, is seen to be $-\frac{8M^3} {273\pi}$, a negative constant. In any case, for all values of $\beta$ there is a region in which $U_l^{\rm Cas}<0$; the size of this $r$-interval, however, becomes rapidly smaller as $\beta$ increases. The actual depth of the resulting potential ``well'', moreover, also rapidly decreases as $\beta$ grows. But for $\beta\approx .1$ a very large range of $r$ values exists for which the internal energy is negative, once more pointing towards the possible existence of bound states. This region only exists for $l<10$, though. For higher values of angular momentum, no such region seems to exist, suggesting that only orbits with low angular momentum can be bound, which is what one would expect on physical grounds anyway.\\ Inside the Schwarzschild radius both the real and imaginary parts of the internal energy density seem to be negative for all values of $\beta$, with the energy density becoming almost entirely a large negative imaginary number near the horizon and a large negative real number near the origin. This apparently holds for all values of $l$. \section{Interpretation of $f^{\rm Cas}$} We can find a closed expression for the summations over powers of $z$ by noting with Ramond \cite{Ram} that \begin{displaymath} \sigma^{-d/2} = \frac{2^{\frac{d+1}{2}}}{\sqrt{\pi}(d-2)!!}\int_0^\infty e^{-\sigma w^2}w^{d-1}dw. \end{displaymath} Inserting this at an earlier stage (i.e. before the $\sigma$-integration) we arrive at the following closed formula \begin{equation} \sum_{\nu=0}^\infty\frac{(-1)^\nu y^{2\nu}}{\nu!}\Gamma(\nu-\frac{1}{2}) \zeta(2\nu-1) \propto -\beta^{-1}\pi^{-3/2}\int_0^\infty \left[\sqrt{w^2+y^2} +2\ln\left(1-e^{-\sqrt{w^2+y^2}}\right)\right]dw, \end{equation} where \begin{equation} y^2=\frac{z^2}{2\pi} = \frac{l(l+1)h(r)\beta^2}{2\pi r^2}.
\end{equation} Now, this is essentially the expression for the free energy of a scalar quantum field in $d=1$ dimension (see e.g. \cite{Ram}) and with the energy of the individual quanta being given by $\omega^2=y^2/\beta^2=l(l+1)h(r)/(2\pi r^2)$. Our Casimir energy density is hence the regularised expression for the energy of an infinite family (labelled by their angular momentum quantum number $l$) of such quanta.\\ This was to be expected, since the spherical symmetry of the system implies that the problem is essentially unidimensional (the radial coordinate). Moreover, the angular quantum number $l$ appears as defining an $r$-dependent mass.\\ Expanding to first order in $M$ we then arrive at expressions of the form (valid only for $l\neq 0$) \begin{eqnarray} f^{\rm Cas}_l&=&\beta^{-1}\pi^{-3/2}\int_0^\infty\left[ \sqrt{w^2+\frac{l(l+1)\beta^2}{2\pi r^2}} +2\ln\left(1-e^{-\sqrt{w^2+\frac{l(l+1)\beta^2}{2\pi r^2}}}\right) \right]dw\nonumber\\ &&-M\frac{\beta l(l+1)}{2\pi^{5/2}r^4}\int_0^\infty (w^2r^2+\beta^2 l(l+1))^{-1/2}\times\nonumber\\ &&\left(1+2 \frac{e^{-\sqrt{w^2+l(l+1)\beta^2/r^2}}}{1- e^{-\sqrt{w^2+l(l+1)\beta^2/r^2}}} \left(w^2+\frac{l(l+1)\beta^2}{2\pi r^2}\right)^{-1/2}\right)dw+O(M^2) .\nonumber\\ \end{eqnarray} The first integral is the contribution coming from a massless particle in a flat one dimensional space, while the term proportional to $M$ shows how the presence of a gravitational field modifies the energy. \section{Discussion and Conclusion} We have obtained asymptotic expressions for the heat kernel and thereby for the free energy density in a Schwarzschild geometry by use of the $\zeta$-function technique. We also took the $M\rightarrow 0$ limit, i.e. the flat space limit, and we subtracted the two energies, to obtain what we called the Casimir energy density; this is the part of the zero point energy density due to the curvature (i.e. to the deviation from Minkowski spacetime) and is thus an intrinsically interesting quantity.
As we would expect, these densities all turned out to depend only on the radial coordinate.\\ The major unanswered question, however, concerns the boundary conditions. The asymptotic expansion of the heat kernel is independent of the chosen boundary conditions, i.e. of the particular vacuum state. It is known, however, that the full renormalised energy-momentum tensor is sensitive to the particular vacuum state. Page, \cite{Page}, has computed the energy momentum tensor in a Schwarzschild background for conformal coupling, and he got the $00$ contribution to be \begin{eqnarray*} T^0_0 &\approx& (-9216 M^{10}-21504 M^9 r- 18688 M^8 r^2+7168 M^7 r^3 -\\ &&2560 M^6 r^4 + 4 M^2 r^8 - 4M r^9 + r^{10})/(122880 M^2 r^{10} \pi^2). \end{eqnarray*} Anderson et al., \cite{AHWY}, have extended the original calculation by Page to arbitrary coupling to curvature in the Hartle-Hawking vacuum state, and they then find a finite energy density at the horizon $\rho(r=2M)=(15\xi-4)/(15360 M^4\pi^2)$. They also find that $\rho$ is positive for $r>2M$ only for $\frac{4}{15}<\xi <1.2575$, where $\xi$ is the non-minimal coupling. In our case we have $\xi=0$, and we found the Helmholtz free energy to be positive for $r>2M$ in the limit of vanishing temperature. We also found, however, the internal energy to be divergent at the Schwarzschild radius, and, moreover, to possess regions in which it was negative, suggesting the existence of bound states.\\ Our computation seems to have more in common with the Boulware vacuum. Candelas, \cite{Candelas}, has computed the renormalised value of $\langle\phi^2\rangle$ in the Boulware, Unruh and Hartle-Hawking vacuum states, and found that for $r\rightarrow 2 M$ the Boulware vacuum expression diverges, whereas the others are finite. Furthermore, as $r\rightarrow \infty$, $\langle\phi^2\rangle$ vanishes in the Boulware vacuum and not in the other two.
Both of these features are reproduced by the present calculation, which therefore seems to be more closely related to the Boulware vacuum than to either of the other two known vacuum states. It is impossible to tell precisely, however, since the Schwinger-DeWitt expansion which we used partially in obtaining our result is insensitive to the details of the vacuum state. This does not imply, though, that the present calculation is meaningless from a physical point of view, since Anderson and coworkers, \cite{A2}, have shown that for $(r-2M)/M$ not too large, the Schwinger-DeWitt expansion expression for $\langle T_{00}\rangle$ is valid, and only for $(r-2M)/M\gg 1$ (i.e., $r\gg 3M$) does the state-dependence become noticeable. Hence, for $(r-2M)/M \lesssim 1$ at least, the present calculation is valid. Moreover, since we use a hybrid technique mixing the exact mode sum with the asymptotic Schwinger-DeWitt expansion, one can expect the region of validity of this calculation to be larger than the pure Schwinger-DeWitt approach. It remains for future research to specify the precise range of validity, to see just how far away from $r \approx 3M$ we can push the present computation.\\ All the calculations were carried out at finite temperature. The quantities we have found are therefore also the effective actions for a free (minimally coupled) scalar field in these backgrounds. The Casimir density then shows how the presence of curvature modifies the effective action for a flat spacetime.\\ We noticed, for instance, the presence of a constant (with respect to $r$), positive contribution to the entropy, which we can interpret as generation of radiation, somewhat related to the Hawking radiation but not quite the same, as this time the radiation clearly came from the distortion of the surrounding vacuum and could not be attributed to the internal (quantum) structure of the massive object generating the Schwarzschild geometry.
This contribution, moreover, depended upon the temperature (was, in fact, proportional to it) and can in this way be seen as a correction to the Hawking radiation, by letting the temperature be equal to the Hawking temperature $T_H$.\\ Especially interesting would be the coupling of the Casimir energy density to the curvature, i.e. plugging the Casimir energy density into the right hand side of Einstein's field equations and assuming $M=M(r,t)$ is a sufficiently slowly varying function of $r,t$ (if not, one will have to redo the entire calculation of $f^{\rm Cas}$ with a time and radial dependent mass). Will the singularity at $r=0$, stemming from the point mass ($M(r,t)=M\delta(r)$ independent of $t$), become ``dressed'' in this way? If so, then quantum effects could be responsible for the removal of other singularities as well, most notably the singularity at the Big Bang (or the Big Crunch). Will this constitute a general ``quantum removal of singularities''-mechanism? This back reaction would also give us a more precise picture of the internal structure of a black hole, as well as of its stability; we already know that black holes can evaporate due to Hawking radiation, but the calculations put forward here suggest the existence of even more such effects, this time coming directly from the disturbance of the surrounding vacuum, from the ``dressing'' of the black hole so to speak. This means that at least part of the Bekenstein-Hawking entropy must be of a geometrical nature, consequently supporting Moretti's findings, \cite{entropy}. The fact that we get negative values for the candidate entropy density shows that this entropy cannot be localised in one particular region (such as near the Schwarzschild radius, say) but has to be considered a global object intimately related to the global topological properties of the spacetime manifold.
\section{Introduction} A key component of our capacity to interact with others lies in our ability to recognize the poses of humans \cite{schmidtke2021unsupervised, liu2021aggregated,iccv_motiontrajectory}. Likewise, detecting human poses is crucial for an intelligent machine to adjust its action and properly allocate its attention when interacting with people. Nowadays, pose estimation finds abundant applications in a wide spectrum of scenarios including action recognition, augmented reality, surveillance, and tracking \cite{luo2018lstm, yang2021learning}. \begin{figure}[t] \begin{center} \includegraphics[width=0.98\linewidth]{Figures/Fig1-Align.jpg} \end{center} \caption{State-of-the-art methods like PoseWarper and DCPose directly aggregate unaligned contexts from neighboring frames, which may fail for scenes with fast motion or pose occlusion. We perform temporal feature alignment between each supporting frame and the key frame, delivering robust pose estimations.} \label{fig:align} \end{figure} An extensive body of literature focuses on pose estimation in \emph{static images}, ranging from earlier approaches \cite{wang2008multiple, wang2013beyond, zhang2009efficient, sapp2010cascaded} utilising tree models or random forest models to recent attempts employing deep convolutional neural networks \cite{Cao_2017_CVPR, Toshev_2014_CVPR, Wei_2016_CVPR, newell2016stacked}. For pose estimation in videos, such methods are severely challenged in handling deteriorated video frames arising from scenes with fast motion and pose occlusion. Incorporating and leveraging additional contexts from neighboring frames is desirable to fill in the absent motion dynamics within a single frame and facilitate pose estimation. One line of work \cite{wang2020combining, luo2018lstm, artacho2020unipose} proposes to aggregate \emph{vanilla} sequential features of neighboring frames (supporting frames). 
\cite{luo2018lstm} trains a convolutional LSTM to model both spatial and temporal features, and directly predicts pose sequences for videos. \cite{wang2020combining} presents a 3D-HRNet to assemble features over a tracklet. Another line of work \cite{song2017thin, pfister2015flowing, liu2021deep} employs optical flow or implicit motion estimation to polish the pose estimation of the current frame (key frame). \cite{song2017thin, pfister2015flowing} propose to compute dense optical flow between frames, and leverage the flow based motion field for refining pose heatmaps temporally across multiple frames. \cite{liu2021deep} aggregates the pose heatmaps of consecutive frames and models motion residuals to improve pose estimation of the key frame. Upon scrutinizing and experimenting on the released implementations of existing methods \cite{liu2021deep, bertasius2019learning, Dosovitskiy_2015_ICCV}, we observe that they suffer from performance deterioration in challenging cases such as rapid motion and pose occlusion. As illustrated in Fig. \ref{fig:align}, in the pose occlusion scenario, existing methods like DCPose fail to recognize the right ankle of the occluded person, leading to unexpected results. In the fast motion scenario, existing methods encounter difficulties in identifying the left wrist due to motion blur. We conjecture that the reasons are \textbf{twofold}. \textbf{(1)} It is common that the same person in the current frame and a neighboring frame is not well aligned, especially for situations involving rapid motion of human subjects or cameras. However, existing methods tend to directly aggregate unaligned contexts from neighboring frames; these spatially misaligned features potentially diminish the performance of models.
\textbf{(2)} State-of-the-art approaches simply employ the conventional MSE (Mean Square Error of joints) loss to supervise the learning of pose heatmaps, while lacking an effective constraint on guaranteeing information gain from neighboring frames as well as supervision at the intermediate feature level. In this paper, we present a novel framework, along with theoretical analysis, to tackle the above challenges. The proposed method, termed FAMI-Pose (\textbf{\underline{F}}eature \textbf{\underline{A}}lignment and \textbf{\underline{M}}utual \textbf{\underline{I}}nformation maximization for \textbf{\underline{P}}ose estimation), consists of two key components. \textbf{(i)} FAMI-Pose conducts coarse-to-fine deformations that systematically update a neighboring frame to align with the current frame at the feature level. Specifically, FAMI-Pose first performs a \emph{global transformation}, which holistically rearranges the neighboring frame feature to preliminarily rectify spatial shifts or jitter. Subsequently, a \emph{local calibration} is exploited to adaptively move and modulate each pixel of the neighboring frame feature for enhanced feature alignment. \textbf{(ii)} FAMI-Pose further engages an information-theoretic objective as an additional intermediate supervision at the feature level. Maximizing this mutual information objective allows our model to fully mine task-relevant cues within the neighboring frames, extracting purposeful complementary knowledge to enhance pose estimation on the key frame. To the best of our knowledge, we are the first to methodically investigate the problem of feature alignment in human pose estimation and provide insights from an information-theoretic perspective. We extensively evaluate the proposed method on three widely used benchmark datasets, PoseTrack2017, PoseTrack2018, and Sub-JHMDB. Empirical evaluations show that our approach significantly outperforms current state-of-the-art methods.
Our method achieves \textbf{84.8} mAP, \textbf{82.2} mAP, and \textbf{96.0} mAP on PoseTrack2017, PoseTrack2018, and Sub-JHMDB, respectively. Our results are submitted to the official evaluation server of PoseTrack2017, and rank \emph{No.1} for this large benchmark dataset. We also present extensive ablation analyses on the contribution of each component, and validate the efficacy of feature alignment and the proposed mutual information loss. The contributions of this work are summarized as follows: \begin{itemize} \item We propose to examine the multi-frame human pose estimation task from the perspective of effectively leveraging temporal contexts through feature alignment. \item To explicitly supervise the knowledge extraction from neighboring frames, we propose an information-theoretic loss function, which allows maximizing the task-relevant cues mined from supporting frames. \item Our approach sets new state-of-the-art results on three benchmark datasets, PoseTrack2017, PoseTrack2018, and Sub-JHMDB. Our source code has been released. \end{itemize} \section{Related Work} In this section, we briefly review the following three topics that are closely related to our work, namely image-based human pose estimation, video-based human pose estimation, and feature alignment. \subsection{Image-Based Human Pose Estimation} Conventional solutions to image-based human pose estimation utilize pictorial structures \cite{zhang2009efficient, sapp2010cascaded} to model the spatial relationships among body joints. These methods tend to rely on handcrafted features and have limited representational abilities.
Fueled by the explosion of deep learning \cite{wang2020combining,hao2020person} and the availability of large-scale pose estimation datasets such as PoseTrack \cite{Iqbal_2017_CVPR, Andriluka_2018_CVPR} and COCO \cite{lin2014microsoft}, various deep learning methods \cite{artacho2020unipose, cheng2020higherhrnet, huang2020devil, zhang2020distribution, varamesh2020mixture, su2019multi, vqa1, vqa2, yang2017person, yang2021deconfounded} have been proposed. These methods can be broadly categorized into two paradigms: bottom-up and top-down. \emph{Bottom-up approaches} \cite{Cao_2017_CVPR, kocabas2018multiposenet, kreiss2019pifpaf,li2019crowdpose} first detect individual body parts, and then assemble these detected constituent parts into the entire person. \cite{Cao_2017_CVPR} proposes a dual convolution structure to simultaneously predict part confidence maps and part affinity fields (that represent the relationships between body parts). On the other hand, \emph{top-down approaches} \cite{xiao2018simple, Wei_2016_CVPR, sun2019deep, newell2016stacked, moon2019posefix} first detect human bounding boxes and then estimate human poses within each bounding box. \cite{xiao2018simple} leverages deconvolution layers to replace the commonly used bi-linear interpolation for spatial-upsampling of feature maps. A recent work in \cite{sun2019deep} presents a high resolution network (HRNet) that retains high resolution feature maps throughout the entire inference, achieving state-of-the-art results on multiple image-based benchmarks. \subsection{Video-Based Human Pose Estimation} Pose estimation models trained for image-based data could not generalize well to video sequences due to their inability to incorporate abundant cues from neighboring frames. To model and leverage temporal contexts across frames, one direct approach would be employing convolutional LSTMs as proposed in \cite{luo2018lstm, artacho2020unipose}. 
A key shortcoming of such models might be their tendency to misalign features across different frames, which unfavourably reduces the potency of the supporting frames. \cite{song2017thin, pfister2015flowing} explicitly estimate motion fields by computing optical flow between consecutive frames, and these motion cues are subsequently used for aligning pose heatmaps. \cite{liu2021deep} estimates motion offsets between the key frame and supporting frames, and these offsets provide the basis to perform resampling of pose heatmaps on consecutive frames. In both cases, the pose estimation accuracy would be heavily dependent on the performance of the optical flow or motion offset estimation. Furthermore, the lack of an effective supervision at the intermediate features level for these approaches could lead to inaccurate pose estimations. \begin{figure*} \begin{center} \includegraphics[width=1\linewidth]{Figures/Fig2-Pipeline.jpg} \end{center} \caption{Overall pipeline of our FAMI-Pose framework. The goal is to detect the pose of person $i$ in the key frame $I_t^i$, with the assistance of its supporting frames. For clarity of illustration, we only show a single supporting frame $I_{t+\delta}^i$ in this figure. We first extract their respective features $z_t^i$ and $z_{t+\delta}^i$. These features are then handed to our global transformation module and the local calibration module for temporal alignment. The key frame feature $z^i_t$ and aligned features $\bar{\bar{z}}^i_{t+\delta}$ for all supporting frames are aggregated to $\tilde{z}^i_t$, which is passed to a detection head that outputs pose estimates $\widehat{H}^i_t$. 
Besides the heatmap estimation loss $\mathcal{L}_{H}$ that measures the discrepancy between $\widehat{H}^i_t$ and the ground truth $H^i_t$, we introduce an additional feature-level supervision through our Mutual Information objective $\mathcal{L}_{MI}$ to extract maximal task-relevant complementary information from supporting frames.} \label{fig:pipeline} \end{figure*} \subsection{Feature Alignment} Feature alignment is an important topic for many computer vision tasks (\emph{e.g.}, semantic segmentation \cite{mazzini2018guided, li2020semantic}, object detection \cite{he2017mask, chen2019revisiting}), and numerous efforts have recently been made to address this problem. \cite{lu2019indices} presents an index-guided framework that employs indices to guide the pooling and upsampling. \cite{huang2021fapn} proposes to learn the transformation offsets of pixels to align upsampled feature maps. \cite{huang2021alignseg} presents an aligned feature aggregation module to align the features of multiple different resolutions for better aggregation. Whereas previous methods mostly tackle spatial misalignment between network inputs and outputs, we focus on temporal (\emph{i.e.}, across frames) feature alignment in this work. \section{Our Approach} \textbf{Preliminaries}\quad To detect human poses in video frames, we operate on each individual person. Technically, for a video frame $I_t$, we employ an object detector to extract the bounding box of each person. This bounding box is then enlarged by 25\% and used to crop the same individual on a predefined window $\mathcal{N}$ of neighboring frames. Overall, for person $i$, we obtain the cropped image $I_t^i$ for the key frame and $\{I_{t+\delta}^i \mid \delta\in\mathcal{N}\}$ for the supporting (neighboring) frames.
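As a concrete illustration, the 25\% box enlargement above can be sketched as follows. This is a plain-Python sketch; the symmetric, center-preserving enlargement is our assumption, as the text only states the ratio:

```python
def enlarge_box(box, ratio=0.25):
    """Enlarge a bounding box (x1, y1, x2, y2) by `ratio` of its size,
    keeping the center fixed, before cropping each person."""
    x1, y1, x2, y2 = box
    dw = (x2 - x1) * ratio / 2.0
    dh = (y2 - y1) * ratio / 2.0
    return (x1 - dw, y1 - dh, x2 + dw, y2 + dh)

# A 100x40 box grows to 125x50 around the same center.
x1, y1, x2, y2 = enlarge_box((10, 20, 110, 60))
assert (x2 - x1, y2 - y1) == (125.0, 50.0)
assert ((x1 + x2) / 2, (y1 + y2) / 2) == (60.0, 40.0)
```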
\textbf{Problem Formulation}\quad Presented with a key frame $I_t^i$ along with its supporting frames $\{I_{t+\delta}^i\mid\delta\in \mathcal{N}\}$, our goal is to estimate the pose in $I_t^i$. We seek to better leverage the supporting frames through principled feature alignment and the mining of task-relevant information, thereby addressing the common drawback of existing approaches in failing to adequately tap into the temporal information. \textbf{Method Overview}\quad An overview of our pipeline is illustrated in Fig. \ref{fig:pipeline}. For each supporting frame $I_{t+\delta}^i$, FAMI-Pose performs a two-stage hierarchical transformation to align $I_{t+\delta}^i$ with the key frame $I_t^i$ at the feature level. Specifically, FAMI-Pose consists of two main modules, a global transformation module and a local calibration module. We first perform feature extraction on ${I}^{i}_{t}$ and ${I}^{i}_{t+\delta}$ to obtain ${z}^{i}_{t}$ and ${z}^{i}_{t+\delta}$, respectively. These features are then fed into our global transformation module, which learns the parameters of an affine transformation to obtain a coarsely aligned supporting frame feature $\bar{z}^{i}_{t+\delta}$. ${z}^{i}_{t}$ and $\bar{z}^{i}_{t+\delta}$ are then handed to the local calibration module, which performs pixel-wise deformation to produce finely aligned features $\bar{\bar{z}}^{i}_{t+\delta}$. Finally, we aggregate all aligned supporting frame features $\{\bar{\bar{z}}^{i}_{t+\delta}\mid\delta\in\mathcal{N}\}$ and the key frame feature $z_t^i$ to obtain our enhanced feature $\tilde{z}^{i}_{t}$. $\tilde{z}^{i}_{t}$ is passed to a detection head that outputs pose estimations $\widehat{H}^i_t$. The task objective is to minimize the heatmap estimation loss $\mathcal{L}_H$ which measures the discrepancy between $\widehat{H}^i_t$ and the ground truth ${H}^i_t$.
On top of this, we also design a mutual information objective $\mathcal{L}_{MI}$ which effectuates feature-level supervision for maximizing the amount of complementary task-relevant information encoded in $\tilde{z}^{i}_{t}$. In what follows, we introduce the complete FAMI-Pose architecture and the mutual information objective in detail. \subsection{Feature Alignment} Feature alignment starts with feature extraction, which is done with the HRNet-W48 network \cite{sun2019deep} (the state-of-the-art method for image-based human pose estimation) as the backbone. The extracted features ${z}^{i}_{t}$ and ${z}^{i}_{t+\delta}$ are then passed through a global transformation module and a local calibration module, to progressively align ${z}^{i}_{t+\delta}$ with ${z}^{i}_{t}$. We would like to highlight that we do not pursue an image-level alignment; instead, we drive the network to learn a feature-level alignment between a supporting frame and the key frame. \textbf{Global Transformation}\quad We observe that most failure cases for pose estimation in videos occur due to rapid movements of persons or cameras, which inevitably lead to large spatial shifts or jitters between neighboring frames. In order to align a supporting frame to the key frame, we design a global transformation module (GTM). The GTM computes spatial rearrangement parameters of a global affine transformation to obtain a coarse preliminary alignment of supporting frame feature ${z}^{i}_{t+\delta}$ with the key frame feature ${z}^{i}_{t}$. More specifically, the GTM includes two submodules: \begin{enumerate} \item A spatial rearrangement parameter estimation network $\phi$ that estimates affine transformation parameters $\Theta$ from the input feature pair as $\phi:({z}^{i}_{t}, {z}^{i}_{t+\delta}) \rightarrow \Theta \in \mathbb{R}^{2\times3}$. The elements of $\Theta$ correspond to translation, rotation, shear, and scaling operations.
\item Subsequently, a global affine transformation $\mathcal{T}$ is performed to obtain the preliminarily aligned supporting frame feature $\mathcal{T}: ({z}^{i}_{t+\delta}, \Theta) \rightarrow \bar{{z}}^{i}_{t+\delta}$. \end{enumerate} The operations of the GTM can be expressed as follows: \begin{equation} \begin{aligned} \Theta &=\phi \left( {z}_{t}^i \oplus {z}_{t+\delta}^i \right),\\ \left(\begin{array}{l} {x}_{p} \\ {y}_{p} \\ \end{array}\right)&=\underbrace{\left[ \begin{array}{lll} \theta_{11} & \theta_{12} & \theta_{13} \\ \theta_{21} & \theta_{22} & \theta_{23} \end{array} \right]}_{\Theta} \left( \begin{array}{c} \bar{x}_{p} \\ \bar{y}_{p} \\ 1 \end{array}\right), \end{aligned} \end{equation} where $(x_{p}, y_{p})$ and $(\bar{x}_{p}, \bar{y}_{p})$ denote the coordinates of pixel $p$ for ${z}_{t+\delta}^i$ and $\bar{z}_{t+\delta}^i$, respectively. \textbf{Local Calibration}\quad The global transformation module produces a coarse alignment. We then design our local calibration module (LCM) to perform meticulous fine-tuning at a pixel-level, yielding finely aligned features $\bar{\bar{z}}^{i}_{t+\delta}$. Specifically, given $\bar{z}^{i}_{t+\delta}$ and ${z}^{i}_{t}$, we independently estimate convolution kernel sampling offsets $O$ and modulated scalars $M$ for the feature $\bar{z}^{i}_{t+\delta}$: \begin{equation} \begin{aligned} \bar{z}^{i}_{t+\delta} \oplus {z}^{i}_{t} &\xrightarrow[\text{blocks}]{\text{residual}} \xrightarrow[\text{convolution}]{\text{regular}} O,\\ \bar{z}^{i}_{t+\delta} \oplus {z}^{i}_{t} &\xrightarrow[\text{blocks}]{\text{residual}} \xrightarrow[\text{convolution}]{\text{regular}} M. \end{aligned} \end{equation} The adaptively learned kernel offsets $O$ and modulated scalars $M$ respectively correspond to \emph{location shifts} and \emph{intensity fluctuations} of each pixel in $\bar{z}^{i}_{t+\delta}$ with respect to the key frame feature ${z}^{i}_{t}$. 
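Stepping back to the global transformation, the affine warp above can be sketched in isolation. In FAMI-Pose, $\Theta$ is predicted by $\phi$ and the warp is applied to the feature grid with interpolation; here we only map coordinates, as a minimal plain-Python sketch:

```python
def affine_warp_coords(theta, coords):
    """Map pixel coordinates through a 2x3 affine matrix, mirroring the
    global transformation: (x_p, y_p) = Theta @ (x_bar, y_bar, 1)."""
    out = []
    for xb, yb in coords:
        x = theta[0][0] * xb + theta[0][1] * yb + theta[0][2]
        y = theta[1][0] * xb + theta[1][1] * yb + theta[1][2]
        out.append((x, y))
    return out

# A pure translation of (+2, -1): every sampling location shifts accordingly.
theta = [[1.0, 0.0, 2.0],
         [0.0, 1.0, -1.0]]
assert affine_warp_coords(theta, [(0, 0), (3, 5)]) == [(2.0, -1.0), (5.0, 4.0)]
```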
Subsequently, we implement the local calibration operation through the modulated deformable convolution \cite{zhu2019deformable}. Given the preliminarily aligned features $\bar{z}^{i}_{t+\delta}$, the kernel sampling offsets $O$, and the modulated scalars $M$ as inputs, the modulated deformable convolution outputs the fine-tuned feature $\bar{\bar{z}}^{i}_{t+\delta}$: \begin{equation} \begin{aligned} \left(\bar{z}^{i}_{t+\delta}, O, M \right) \xrightarrow[\text{convolution}]{\text{modulated deformable}} \bar{\bar{z}}^{i}_{t+\delta}. \end{aligned} \end{equation} To anticipate the discussion of the mutual information loss, we would like to point out that the key frame feature $z^{i}_{t}$ is only used for computing the global transformation parameters in GTM and convolutional parameters in LCM. Its information will not be propagated into the final aligned supporting frame feature $\bar{\bar{z}}^{i}_{t+\delta}$. \textbf{Heatmap Generation}\quad Ultimately, we aggregate over all final aligned supporting frame features $\{\bar{\bar{z}}^{i}_{t+\delta}\mid\delta\in\mathcal{N}\}$ and the key frame feature ${z}^{i}_{t}$ via element-wise addition to obtain the enhanced feature $\tilde{z}^{i}_{t}$. $\tilde{z}^{i}_{t}$ is fed to a detection head to produce pose heatmap estimations $\widehat{H}_t^{i}$. We implemented the detection head using a stack of $3\times 3$ convolutions. By effectively leveraging temporal information from supporting frames through our coarse-to-fine alignment modules, our FAMI-Pose is more adept at tackling visual degeneration issues and therefore gives more accurate pose heatmaps. \subsection {Mutual Information Objective} \label{MI} We can certainly train the FAMI-Pose in a direct end-to-end manner with a pose heatmap loss, as is done in most previous methods \cite{sun2019deep, xiao2018simple, liu2021deep, wang2020combining, bertasius2019learning}. 
Given our systematic examination of extracting temporal features for pose estimation, it would be fruitful to investigate whether introducing \emph{supervision at the feature level} would facilitate the task. Naively, we could formulate the feature-level objective as the $L1$ or $L2$ difference between the supporting frame feature $z_{t+\delta}^i$ and the key frame feature $z_{t}^i$. However, such rigid alignment is likely to lead to erosion of complementary task-specific information from supporting frames. Consequently, the temporal features thus optimized would be inadequate for providing relevant supporting information to facilitate pose estimation. It is therefore crucial that we highlight the purposeful complementary information from the supporting frames. Towards this end, inspired by \cite{zhao2021learning, hjelm2018learning}, we propose a mutual information objective, which seeks to maximize the amount of complementary task-relevant information in the enhanced feature $\tilde{z}_t^i$. \textbf{Mutual Information}\quad Mutual information (MI) is a measure of the amount of information shared between random variables. Formally, MI quantifies the statistical dependency of two random variables $\boldsymbol{v}_{1}$ and $\boldsymbol{v}_{2}$: \begin{equation} \mathcal{I}(\boldsymbol{v}_{1} ; \boldsymbol{v}_{2}) = \mathbb{E}_{p(\boldsymbol{v}_{1}, \boldsymbol{v}_{2})}\left[\log \frac{p(\boldsymbol{v}_{1}, \boldsymbol{v}_{2})}{p(\boldsymbol{v}_{1}) p(\boldsymbol{v}_{2})}\right], \end{equation} where $p(\boldsymbol{v}_{1}, \boldsymbol{v}_{2})$ is the joint probability distribution between $\boldsymbol{v}_{1}$ and $\boldsymbol{v}_{2}$, while $p(\boldsymbol{v}_{1})$ and $p(\boldsymbol{v}_{2})$ are their marginals.
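For intuition, MI can be computed exactly for small discrete distributions (for continuous deep features it must instead be estimated, as discussed below). A minimal sketch of the definition above:

```python
import math

def mutual_information(joint):
    """I(v1; v2) in nats for a discrete joint distribution given as a 2D
    list: joint[i][j] = p(v1 = i, v2 = j)."""
    p1 = [sum(row) for row in joint]                 # marginal of v1
    p2 = [sum(col) for col in zip(*joint)]           # marginal of v2
    mi = 0.0
    for i, row in enumerate(joint):
        for j, p in enumerate(row):
            if p > 0:
                mi += p * math.log(p / (p1[i] * p2[j]))
    return mi

# Independent variables carry zero MI; perfectly correlated bits share log 2.
assert abs(mutual_information([[0.25, 0.25], [0.25, 0.25]])) < 1e-12
assert abs(mutual_information([[0.5, 0.0], [0.0, 0.5]]) - math.log(2)) < 1e-12
```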
\textbf{Mutual Information Loss} \quad Within this framework, our primary objective for learning effective temporal feature alignment can be formulated as: \begin{equation} \begin{aligned} \label{eq.max} \text{max } \mathcal{I}\left({y}^{i}_{t} ; \tilde{z}^{i}_{t} \mid {z}^{i}_{t} \right), \end{aligned} \end{equation} where ${y}^{i}_{t}$ represents the label, and $\mathcal{I}\left({y}^{i}_{t} ; \tilde{z}^{i}_{t} \mid {z}^{i}_{t} \right) $ denotes the amount of task-relevant information in the enhanced feature $\tilde{z}^{i}_{t}$, complementary to (\emph{i.e.}, excluding) the information from the key frame feature ${z}^{i}_{t}$. Intuitively, optimizing this objective will maximize the additional relevant and complementary information we seek to extract from neighboring frames to support the pose estimation task. Due to the notorious difficulty of the conditional MI computations especially in neural networks \cite{hjelm2018learning, tian2021farewell}, we perform a simplification. We first factorize Eq. \ref{eq.max} as follows: \begin{equation} \begin{aligned} \label{eq.com.} \mathcal{I} \left({y}^{i}_{t} ; \tilde{z}^{i}_{t} \mid {z}^{i}_{t} \right) = \mathcal{I} \left( {y}^{i}_{t} ; \tilde{z}^{i}_{t} \right) &- \mathcal{I} \left( \tilde{z}^{i}_{t} ; {z}^{i}_{t} \right)\\ &+ \mathcal{I} \left( \tilde{z}^{i}_{t} ; {z}^{i}_{t} \mid {y}^{i}_{t} \right), \end{aligned} \end{equation} where $\mathcal{I} \left( {y}^{i}_{t};\tilde{z}^{i}_{t} \right)$ measures the relevance of the label ${y}^{i}_{t}$ and feature $\tilde{z}^{i}_{t}$, $\mathcal{I} \left( \tilde{z}^{i}_{t} ; {z}^{i}_{t} \right)$ indicates the dependence between the two features $\tilde{z}^{i}_{t}$ and ${z}^{i}_{t}$, and $\mathcal{I} \left( \tilde{z}^{i}_{t} ; {z}^{i}_{t} \mid {y}^{i}_{t} \right)$ represents the \emph{task-irrelevant information} in both $\tilde{z}^{i}_{t}$ and ${z}^{i}_{t}$. 
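The factorization above is a standard information-theoretic identity and can be verified numerically on a toy discrete joint distribution (the variables below are hypothetical stand-ins for $y$, $\tilde{z}$, and $z$):

```python
import math
import random
from collections import defaultdict

def entropy(joint, axes):
    """Shannon entropy (nats) of the marginal over `axes` of a joint
    distribution given as {outcome_tuple: probability}."""
    marg = defaultdict(float)
    for outcome, p in joint.items():
        marg[tuple(outcome[i] for i in axes)] += p
    return -sum(p * math.log(p) for p in marg.values() if p > 0)

def cond_mi(joint, a, b, c=()):
    """I(A; B | C) = H(A,C) + H(B,C) - H(A,B,C) - H(C)."""
    return (entropy(joint, a + c) + entropy(joint, b + c)
            - entropy(joint, a + b + c) - entropy(joint, c))

# Random joint over toy variables (y, z_tilde, z) on axes 0, 1, 2.
random.seed(0)
outcomes = [(y, zt, z) for y in range(2) for zt in range(3) for z in range(2)]
weights = [random.random() for _ in outcomes]
total = sum(weights)
joint = {o: w / total for o, w in zip(outcomes, weights)}

lhs = cond_mi(joint, (0,), (1,), (2,))            # I(y ; z~ | z)
rhs = (cond_mi(joint, (0,), (1,))                 # I(y ; z~)
       - cond_mi(joint, (1,), (2,))               # I(z~ ; z)
       + cond_mi(joint, (1,), (2,), (0,)))        # I(z~ ; z | y)
assert abs(lhs - rhs) < 1e-9                      # the identity holds
```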
Heuristically, when optimizing over the task objective, the task-specific information will have an overwhelming presence over the task-irrelevant information. Therefore, we may assume that the task-irrelevant information will be negligible upon sufficient training \cite{zhao2021learning, federici2019learning}. This simplifies Eq. \ref{eq.com.} to: \begin{equation} \begin{aligned} \label{eq:Simplification of MI} \mathcal{I}\left( {y}^{i}_{t} ; \tilde{z}^{i}_{t} \mid {z}^{i}_{t} \right) \rightarrow \mathcal{I} \left( {y}^{i}_{t} ; \tilde{z}^{i}_{t} \right) - \mathcal{I} \left( {z}^{i}_{t} ; \tilde{z}^{i}_{t} \right). \end{aligned} \end{equation} Moreover, we introduce two regularization terms to alleviate information dropping: \begin{equation} \begin{aligned} \label{eq.re.} \min \left[ \mathcal{I}\left({y}^{i}_{t} ; {z}^{i}_{t+\delta} \mid \tilde{z}^{i}_{t} \right) + \mathcal{I}\left({y}^{i}_{t} ; {z}^{i}_{t} \mid \tilde{z}^{i}_{t} \right) \right]. \end{aligned} \end{equation} The terms $\mathcal{I}\left({y}^{i}_{t} ; {z}^{i}_{t+\delta} \mid \tilde{z}^{i}_{t} \right)$ and $\mathcal{I}\left({y}^{i}_{t} ; {z}^{i}_{t} \mid \tilde{z}^{i}_{t} \right)$ respectively measure the vanishing task-relevant information in ${z}^{i}_{t+\delta}$ and $z^i_t$ during feature alignment. They serve to facilitate the nondestructive propagation of information. Simultaneously minimizing these two terms would prevent excessive information loss in ${z}^{i}_{t+\delta}$ and $z^i_t$ while maximizing the primary complementary task-relevant mutual information objective. Similar to Eq. \ref{eq:Simplification of MI}, we simplify the two regularization terms in Eq. 
\ref{eq.re.} as follows: \begin{equation} \begin{aligned} \mathcal{I} \left({y}^{i}_{t} ; {z}^{i}_{t+\delta} \mid \tilde{z}^{i}_{t} \right) &\rightarrow \mathcal{I} \left({y}^{i}_{t} ; {z}^{i}_{t+\delta} \right) - \mathcal{I} \left({z}^{i}_{t+\delta} ; \tilde{z}^{i}_{t} \right), \\ \mathcal{I} \left({y}^{i}_{t} ; {z}^{i}_{t} \mid \tilde{z}^{i}_{t} \right) &\rightarrow \mathcal{I} \left( {y}^{i}_{t} ; {z}^{i}_{t} \right) - \mathcal{I} \left( {z}^{i}_{t} ; \tilde{z}^{i}_{t} \right). \end{aligned} \end{equation} Finally, we simultaneously optimize the complementary information term in Eq. \ref{eq.max} and the two regularization terms in Eq. \ref{eq.re.} to provide feature level supervision: \begin{equation} \begin{aligned} \mathcal{L}_\text{MI} = \overbrace{\mathcal{I}\left({y}^{i}_{t} ; {z}^{i}_{t} \mid \tilde{z}^{i}_{t} \right)}^{\text{Vanishing w.r.t. }{z}^{i}_{t}} &+ \overbrace{\mathcal{I}\left({y}^{i}_{t} ; {z}^{i}_{t+\delta} \mid \tilde{z}^{i}_{t} \right)}^{\text{Vanishing w.r.t. }{z}^{i}_{t+\delta}} \\ &- \alpha \cdot \underbrace{\mathcal{I}\left({y}^{i}_{t} ; \tilde{z}^{i}_{t} \mid {z}^{i}_{t} \right)}_\text{Complementary}, \end{aligned} \label{alpha} \end{equation} where $\alpha$ serves as a hyper-parameter in our network to balance the ratios of different terms. These MI terms can be estimated by existing MI estimators \cite{belghazi2018mutual, tian2021farewell, van2018representation, cheng2020club}. In our experiments, we employ Variational Self-Distillation (VSD) \cite{tian2021farewell} to estimate the MI for each term. \subsection{Training Objective} Our training objective consists of two parts.
(1) We adopt the heatmap estimation loss function $\mathcal{L}_\text{H}$ to supervise the learning of final pose estimates: \begin{equation} \begin{aligned} \mathcal{L}_\text{H}=\left\| \widehat{H}_t^{i} - {H}_t^{i}\right\|_{2}^{2}, \end{aligned} \end{equation} where $\widehat{H}_t^{i}$ and ${H}_t^{i}$ denote the predicted heatmap and the ground-truth heatmap, respectively. (2) We also leverage the proposed MI loss to supervise the temporal features as described in Sec. \ref{MI}. The overall loss function is given by: \begin{equation} \begin{aligned} \mathcal{L}_{total} = \mathcal{L}_\text{H} + \beta \cdot \mathcal{L}_\text{MI}. \end{aligned} \label{beta} \end{equation} \section{Experiments} In this section, we present our experimental results on three widely used benchmark datasets, namely PoseTrack2017 \cite{Iqbal_2017_CVPR}, PoseTrack2018 \cite{Andriluka_2018_CVPR}, and Sub-JHMDB \cite{Jhuang:ICCV:2013}. \renewcommand\arraystretch{1.2} \begin{table} \resizebox{0.48\textwidth}{!}{ \begin{tabular}{l|c|c|c|c|c|c|c|c} \hline Method &Head &Shoulder &Elbow &Wrist &Hip &Knee &Ankle &{\bf Mean}\cr \hline PoseTracker \cite{girdhar2018detect} &$67.5$ &$70.2$ &$62.0$ &$51.7$ &$60.7$ &$58.7$ &$49.8$ &{$60.6$}\cr PoseFlow \cite{xiu2018pose} &$66.7$ & $73.3$ &$68.3$ &$61.1$ &$67.5$ &$67.0$ &$61.3$ &{$ 66.5$}\cr JointFlow \cite{doering2018joint} & - & - &- &- &- &- &- &{ $ 69.3$}\cr FastPose \cite{zhang2019fastpose} &$80.0$ &$80.3$ &$69.5$ &$59.1$ &$71.4$ &$67.5$ &$59.4$ &{$ 70.3$}\cr TML++ \cite{hwang2019pose} &- &- &- &- &- &- &- &{$ 71.5$}\cr Simple (ResNet-50) \cite{xiao2018simple} &$79.1$ &$80.5$ &$75.5$ &$66.0$ &$70.8$ &$70.0$ &$61.7$ &{$72.4$}\cr Simple (ResNet-152) \cite{xiao2018simple} &$81.7$ &$83.4$ &$80.0$ &$72.4$ &$75.3$ &$74.8$ &$67.1$ &{$ 76.7$}\cr STEmbedding \cite{jin2019multi} &$83.8$ &$81.6$ &$77.1$ &$70.0$ &$77.4$ &$74.5$ &$70.8$ &{$ 77.0$}\cr HRNet \cite{sun2019deep} &$82.1$ &$83.6$ &$80.4$ &$73.3$ &$75.5$ &$75.3$ &$68.5$ &{$ 77.3$}\cr MDPN
\cite{guo2018multi} &$85.2$ &$88.5$ &$83.9$ &$77.5$ & $79.0$&$77.0$ &$71.4$ &{$ 80.7$}\cr Dynamic-GNN \cite{yang2021learning} &$88.4$ &$88.4$ &$82.0$ &$ 74.5$ &$79.1$ &$78.3$ &$73.1$ &{ $81.1$}\cr PoseWarper \cite{bertasius2019learning} &$81.4$ &$88.3$ &$83.9$ &$ 78.0$ &$82.4$ &$80.5$ &$73.6$ &{ $ 81.2$}\cr DCPose \cite{liu2021deep} &$ 88.0$ &$ 88.7$ &$ 84.1$ &$78.4$&$ 83.0$ &$ 81.4$&$ 74.2$ &$ 82.8$\cr \hline \rowcolor{gray!20} \bf FAMI-Pose (Ours) &$\bf 89.6$ &$\bf 90.1$ &$\bf 86.3$ &$\bf 80.0$ &$\bf 84.6$ &$\bf 83.4$ &$\bf 77.0$ &$\bf 84.8$\cr \hline \end{tabular}} \caption{{Quantitative results on the PoseTrack2017 \textbf{validation} set}.} \label{17val} \end{table} \renewcommand\arraystretch{1.2} \begin{table} \resizebox{0.48\textwidth}{!}{ \begin{tabular}{l|c|c|c|c|c|c|c|c} \hline Method &Head&Shoulder &Elbow &Wrist &Hip &Knee &Ankle &{\bf Total}\cr \hline PoseTracker \cite{girdhar2018detect} &- &- &- &$51.5$ &- &- &$50.2$ &{$ 59.6$}\cr PoseFlow \cite{xiu2018pose} &$64.9$ &$67.5$&$65.0$ &$59.0$ &$62.5$ &$62.8$ &$57.9$ &{$ 63.0$}\cr JointFlow \cite{doering2018joint} &- &- &- &$53.1$ &- &- &$50.4$ &{$ 63.4$}\cr TML++ \cite{hwang2019pose} &- &- &- &$60.9$ &- &- &$56.0$ &{$ 67.8$}\cr KeyTrack \cite{snower202015} &- &- &- &$71.9$ &- &- &$65.0$ &{$ 74.0$}\cr DetTrack \cite{wang2020combining} &- &- &- &$69.8$ &- &- &$65.9$ &{$ 74.1$}\cr Simple (ResNet-152) \cite{xiao2018simple} &$80.1$ &$80.2$ &$76.9$ &$71.5$ &$72.5$ &$72.4$ &$65.7$ &{$ 74.6$}\cr HRNet \cite{sun2019deep} &$80.1$ &$80.2$ &$76.9$ &$72.0$ &$73.4$ &$72.5$ &$67.0$ &{$ 74.9$}\cr PoseWarper \cite{bertasius2019learning} &$79.5$ &$84.3$ &$80.1$ &$75.8$ &$77.6$ &$76.8$ &$70.8$ &{$ 77.9$}\cr DCPose \cite{liu2021deep} &$84.3$ &$ 84.9$ &$ 80.5$ &$ 76.1$ &$ 77.9$ &$ 77.1$ &$ 71.2$ &$ 79.2$\cr \hline \rowcolor{gray!20}\bf FAMI-Pose (Ours)&$\bf 86.1$&$\bf 86.1$&$\bf 81.8$&$\bf 77.4$&$\bf 79.5$&$\bf 79.1$&$\bf 73.6$&$\bf 80.9$\cr \hline \end{tabular}} \caption{Performance comparisons on the PoseTrack2017 
\textbf{test} set. These results are published in the \emph{PoseTrack2017 leaderboard}.} \label{17test} \end{table} \renewcommand\arraystretch{1.2} \begin{table} \resizebox{0.48\textwidth}{!}{ \begin{tabular}{l|c|c|c|c|c|c|c|c} \hline Method &Head &Shoulder &Elbow &Wrist &Hip &Knee &Ankle &{\bf Mean}\cr \hline STAF \cite{raaj2019efficient} &- &- &- &$64.7$ &- &- &$62.0$ &{$70.4$}\cr AlphaPose \cite{fang2017rmpe} &$63.9$ &$78.7$&$77.4$ &$71.0$ &$73.7$ &$73.0$ &69.7 &{$71.9$}\cr TML++ \cite{hwang2019pose} &- &- &- &- &- &- &- &{$ 74.6$}\cr MDPN \cite{guo2018multi} &$75.4$ &$81.2$ &$79.0$ &$74.1$ &$72.4$ &$73.0$ &$69.9$ &{$75.0$}\cr PGPT \cite{bao2020pose} &- &- &- &$72.3$ &- &- &$72.2$ &{$76.8$}\cr Dynamic-GNN \cite{yang2021learning} &$80.6$ &$84.5$ &$80.6$ &$ 74.4$ &$75.0$ &$76.7$ &$71.8$ &{ $77.9$}\cr PoseWarper \cite{bertasius2019learning} &$79.9$&$86.3$&$82.4$&$77.5$&$79.8$&$78.8$&$73.2$ &{ $79.7$}\cr DCPose \cite{liu2021deep}&$ 84.0$&$ 86.6$&$ 82.7$&$ 78.0$&$ 80.4$&$ 79.3$&$ 73.8$&$ 80.9$\cr \hline \rowcolor{gray!20}\bf FAMI-Pose (Ours)&$\bf 85.5$&$\bf 87.7$&$\bf 84.2$&$\bf 79.2$&$\bf 81.4$&$\bf 81.1$&$\bf 74.9$&$\bf 82.2$\cr \hline \end{tabular}} \caption{\footnotesize{Quantitative results on the PoseTrack2018 \textbf{validation} set}.} \label{18val} \end{table} \renewcommand\arraystretch{1.2} \begin{table} \resizebox{0.48\textwidth}{!}{ \begin{tabular}{l|c|c|c|c|c|c|c|c} \hline Method &Head&Shoulder &Elbow &Wrist &Hip &Knee &Ankle &{\bf Total}\cr \hline TML++ \cite{hwang2019pose} &- &- &- &$60.2$ &- &- &$56.9$ &{$ 67.8$}\cr AlphaPose++ \cite{fang2017rmpe, guo2018multi} &- &- &- &$66.2$ &- &- &$65.0$ &{ $ 67.6$}\cr DetTrack \cite{wang2020combining} &- &- &- &$69.8$ &- &- &$67.1$ &$73.5$\cr MDPN \cite{guo2018multi} &- &- &- &$74.5$ &- &- &$69.0$ &{$ 76.4$}\cr PoseWarper \cite{bertasius2019learning} &$78.9$&$ 84.4$&$ 80.9$&$76.8$&$75.6$&$77.5$&$71.8$& $ 78.0$\cr DCPose \cite{liu2021deep} &$ 82.8$&$ 84.0$&$ 80.8$&$ 77.2$&$ 76.1$&$ 77.6$&$ 72.3$&$ 79.0$\cr \hline 
\rowcolor{gray!20}\bf FAMI-Pose (Ours)&$\bf 83.6$&$\bf 84.5$&$\bf 81.4$&$\bf 77.9$&$\bf 76.8$&$\bf 78.3$&$\bf 72.9$&$\bf 79.6$\cr \hline \end{tabular}} \caption{Performance comparisons on the PoseTrack2018 \textbf{test} set.} \label{18test} \end{table} \renewcommand\arraystretch{1.2} \begin{table} \resizebox{0.48\textwidth}{!}{ \begin{tabular}{l|c|c|c|c|c|c|c|c} \hline Method &Head&Shoulder &Elbow &Wrist &Hip &Knee &Ankle &{\bf Avg}\cr \hline Part Models\cite{park2011n} &$79.0$ &$60.3$&$28.7$ &$16.0$ &$74.8$ &$59.2$ &$49.3$ &$52.5$\cr Joint Action\cite{xiaohan2015joint} &$83.3 $ &$63.5$ &$33.8$ &$21.6$ &$76.3$ &$62.7$ &$53.1$ &{$55.7$}\cr Pose-Action\cite{iqbal2017pose} &$90.3$ &$76.9$ &$59.3$ &$55.0$ &$85.9$ &$76.4$ &$73.0$ &{$73.8$}\cr CPM\cite{wei2016convolutional} &$98.4$ &$94.7$ &$85.5$ &$81.7$ &$97.9$ &$94.9$ &$90.3$ &{$91.9$}\cr Thin-slicing Net\cite{song2017thin} &$97.1$ &$95.7$ &$87.5$ &$81.6$ &$98.0$ &$92.7$ &$89.8$ &{$92.1$}\cr LSTM PM\cite{luo2018lstm} &$98.2$ &$96.5$ &$89.6$ &$86.0$ &$98.7$ &$95.6$ &$90.0$ &{$93.6$}\cr DKD(ResNet-50)\cite{nie2019dynamic} &$98.3$ &$96.6$ &$90.4$ &$87.1$ &$ 99.1$ &$ 96.0$ &$92.9$ &{$94.0$}\cr K-FPN(ResNet-18)\cite{zhang2020key} &$94.7$ &$96.3$ &$ 95.2$ &$90.2$ &$96.4$ &$95.5$ &$93.2$ &{$94.5$}\cr K-FPN(ResNet-50)\cite{zhang2020key} &$95.1$ &$96.4$ &$\bf 95.3$ &$91.3$ &$96.3$ &$95.6$ &$92.6$ &{$94.7$}\cr MotionAdaptive \cite{fan2021motion} &$98.2$ &$97.4$ &$ 91.7$ &$85.2$ &$\bf 99.2$ &$\bf 96.7$ &$92.2$ &{$94.7$}\cr \hline \rowcolor{gray!20}\bf FAMI-Pose (Ours)&$\bf 99.3$&$\bf 98.6$&$ 94.5$&$\bf 91.7$&$\bf 99.2$&$91.8$&$\bf 95.4$&$\bf 96.0$\cr \hline \end{tabular}} \caption{Performance comparisons on the \textbf{Sub-JHMDB} dataset. } \label{jhmdb} \end{table} \subsection{Experimental Settings} \textbf{Datasets}\quad PoseTrack is a large-scale benchmark for human pose estimation and articulated tracking in videos, containing challenging sequences of people in crowded scenarios and performing rapid movement. 
The \textbf{PoseTrack2017} dataset includes 514 video sequences with a total of $16,219$ pose annotations. These are split (following the official protocol) into 250, 50, and 214 video sequences for training, validation, and testing. The \textbf{PoseTrack2018} dataset contains $1,138$ video sequences (and $153,615$ pose annotations), with 593 for training, 170 for validation, and 375 for testing. Both datasets are annotated with 15 joints, with an additional label for joint visibility. Training videos provide dense pose annotations in the center 30 frames, and validation videos further provide pose annotations every four frames. The \textbf{Sub-JHMDB} dataset contains 316 videos for a total of $11,200$ frames. Annotations are done for 15 joints but only visible joints are annotated. Three different data splits are performed for this dataset, each with a training-to-testing ratio of $3:1$. Following previous works \cite{luo2018lstm, nie2019dynamic, zhang2020key}, we report the mean accuracy over the three splits. \begin{figure*} \begin{center} \includegraphics[width=0.98\linewidth]{Figures/Fig3-Results2.jpg} \end{center} \caption{Visual results of our FAMI-Pose on benchmark datasets. Challenging scenes such as high-speed motion or pose occlusion are involved.} \label{fig:results} \end{figure*} \textbf{Implementation Details}\quad Our FAMI-Pose is implemented with PyTorch. The input image size is fixed to $384 \times 288$. We perform data augmentation including random rotation $[-45^{\circ}, 45^{\circ}]$, random scaling $[0.65, 1.35]$, random truncation, and horizontal flipping. The predefined window $\mathcal{N}$ of neighboring frames is set to $\{ -2, -1, 1, 2 \}$, \emph{i.e.}, 2 previous and 2 future frames. We employ the HRNet-W48 model pre-trained on the COCO dataset for feature extraction. Subsequent weight parameters are initialized from a standard Gaussian distribution, while biases are initialized to 0.
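A small sketch of assembling the supporting-frame window $\mathcal{N}=\{-2,-1,1,2\}$ described above; clamping indices at sequence boundaries is our assumption, as boundary handling is not specified:

```python
def supporting_indices(t, num_frames, window=(-2, -1, 1, 2)):
    """Frame indices of the supporting frames for key frame t, clamped to
    the valid range [0, num_frames - 1] near sequence boundaries."""
    return [min(max(t + d, 0), num_frames - 1) for d in window]

assert supporting_indices(10, 100) == [8, 9, 11, 12]
assert supporting_indices(0, 100) == [0, 0, 1, 2]    # clamped at the start
assert supporting_indices(99, 100) == [97, 98, 99, 99]  # clamped at the end
```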
We employ the Adam optimizer with a base learning rate of $1e-4$ (decays to $1e-5$, $1e-6$, and $1e-7$ at the $8^{th}$, $12^{th}$, and $16^{th}$ epochs, respectively). Training is done with 4 Nvidia Geforce RTX 2080 Ti GPUs and a batch size of 48. The entire training process completes within 20 epochs. To weigh the different losses in Eq. \ref{alpha} and Eq. \ref{beta}, we set $\alpha = 1.0$ and $\beta = 0.1$ without extensive tuning. \textbf{Evaluation Metric}\quad We benchmark our model using the standard human pose estimation protocol \cite{sun2019deep, xiao2018simple}, namely the average precision (\textbf{AP}) metric. We compute the AP for each body joint, and then average over all joints to get the final results (\textbf{mAP}). Note that only visible joints are considered in the performance evaluation. \subsection{Comparison with State-of-the-art Approaches} \textbf{Results on the PoseTrack2017 Dataset}\quad We first evaluate our model on the PoseTrack2017 validation set and test set. A total of $14$ methods are compared, including PoseTracker \cite{girdhar2018detect}, PoseFlow \cite{xiu2018pose}, JointFlow \cite{doering2018joint}, FastPose \cite{zhang2019fastpose}, TML++ \cite{hwang2019pose}, SimpleBaseline (ResNet-50 and ResNet-152), STEmbedding \cite{jin2019multi}, HRNet \cite{sun2019deep}, MDPN \cite{guo2018multi}, Dynamic-GNN \cite{yang2021learning}, PoseWarper \cite{bertasius2019learning}, DCPose \cite{liu2021deep}, and our FAMI-Pose. Their performance on the PoseTrack2017 validation set is reported in Table \ref{17val}. The proposed FAMI-Pose consistently outperforms existing methods, achieving an mAP of $84.8$. Significantly, our FAMI-Pose is able to improve the mAP by $7.5$ points over the widely adopted backbone network HRNet-W48 \cite{sun2019deep}. Our model also achieves a $2.0$ mAP gain over the previous state-of-the-art approach DCPose \cite{liu2021deep}.
In particular, we obtain encouraging improvements for the more challenging joints (\emph{i.e.}, wrist, ankle): with an mAP of $80.0$ $(\uparrow 1.6)$ for wrists and an mAP of $77.0$ $(\uparrow 2.8)$ for ankles. Another interesting observation is that pose estimation approaches that incorporate neighboring frames (such as PoseWarper and DCPose) outperform methods that use only the single key frame. This suggests the importance of embracing complementary cues from neighboring frames. The quantitative comparisons on the PoseTrack2017 test set are reported in Table \ref{17test}. Since the pose annotations are not publicly available, we upload our model predictions to the PoseTrack official evaluation server: \url{https://posetrack.net/leaderboard.php} to obtain results. FAMI-Pose again surpasses the previous state-of-the-art, attaining an mAP of $80.9$ ($\uparrow 1.7$), with an mAP of $81.8$, $77.4$, $79.1$, and $73.6$ for the elbow, wrist, knee, and ankle, respectively. As illustrated in Fig. \ref{fig:results}, the visualized results for scenes with rapid motion or pose occlusions attest to the robustness of our method. More visualized results can be found on our project page\footnote{\url{https://github.com/Pose-Group/FAMI-Pose}}. \begin{table} \resizebox{0.48\textwidth}{!}{ \begin{tabular}{c|ccc|c|c|c} \hline Method &Global Transformation &Local Calibration &MI Loss &Wrist &Ankle &Mean\cr \hline HRNet \cite{sun2019deep} & & & & 73.3 & 68.5 & 77.3\cr (a) &\checkmark & & & $78.1$ & $74.3$ & $82.9$\cr (b) &\checkmark &\checkmark & & $79.7$ & $76.0$ & $84.0$\cr (c) &\checkmark & \checkmark &\checkmark & $\bf80.0$ & $\bf77.0$ & $\bf84.8$\cr \hline \end{tabular}} \caption{Ablation of different components in FAMI-Pose.} \label{abl-com} \end{table} \renewcommand\arraystretch{1.3} \begin{table} \resizebox{0.48\textwidth}{!}{ \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline Supp.
Frame Window $\mathcal{N}$ &Head&Shoulder &Elbow &Wrist &Hip &Knee &Ankle &{ Mean}\cr \hline $\mathcal{N}=\{-1\}$ &$88.1$ &$89.2$ &$83.9$ &$78.0$ &$83.5$ &$80.7$ &$73.4$ &{$ 82.8$}\cr $\mathcal{N}=\{-1,1\}$ &$89.1$ &$89.5$ &$84.8$ &$79.0$ &$84.2$ &$82.3$ &$74.9$ &{$ 83.9$}\cr $\mathcal{N}=\{-2,-1,1\}$ &$89.3$ &$89.8$ &$85.3$ &$79.8$ &$84.2$ &$82.6$ &$76.2$ &{$ 84.5$}\cr $\mathcal{N}=\{-2,-1,1,2\}$ &$\bf 89.6$ &$\bf 90.1$ &$\bf 86.3$ &$\bf 80.0$ &$\bf 84.6$ &$\bf 83.4$ &$\bf 77.0$ &$\bf 84.8$\cr \hline \end{tabular}} \caption{Impact of modifying the supporting frame window.} \vspace{-1em} \label{abl-supp} \end{table} \textbf{Results on the PoseTrack2018 Dataset}\quad We further benchmark our model on the PoseTrack2018 dataset. The detailed results on the validation and test sets are tabulated in Table \ref{18val} and Table \ref{18test}, respectively. From these tables, we observe that our FAMI-Pose consistently attains new state-of-the-art results for all joints. We obtain an $82.2$ mAP on the validation set and a $79.6$ mAP for the test set. \textbf{Results on the Sub-JHMDB Dataset}\quad Results for the Sub-JHMDB dataset are reported in Table \ref{jhmdb}. We observe that existing methods have already achieved an impressive accuracy. Specifically, the current state-of-the-art method MotionAdaptive obtains a $94.7$ mAP on this dataset. In contrast, our method is able to achieve a $96.0$ mAP. We also obtain a $99.3$ mAP for the head joint and a $99.2$ mAP for the hip joint. The $1.3$ mAP improvement over the already impressive state-of-the-art methods provides further evidence of the effectiveness of the proposed method. \begin{figure} \begin{center} \includegraphics[width=0.98\linewidth]{Figures/Fig4-Contrast.jpg} \end{center} \vspace{-1em} \caption{Visual comparisons of the predictions of our FAMI-Pose (a), HRNet-W48 (b), PoseWarper (c), and DCPose (d) on the challenging cases from PoseTrack2017 and PoseTrack2018 datasets.
Inaccurate pose estimations are highlighted by the red dotted circles.}\vspace{-1.2em} \label{fig:contrast} \end{figure} \subsection{Ablation Study} We perform ablation experiments to examine the contribution of feature alignment as well as the influence of each component in our method (\emph{i.e.}, Global Transformation Module, Local Calibration Module, and MI Loss). We also investigate the impact of modifying the predefined window $\mathcal{N}$ of supporting frames. These experiments are conducted on the PoseTrack2017 validation dataset. \textbf{Feature Alignment}\quad We empirically evaluate the efficacy of the proposed components for facilitating and guiding feature alignment in our FAMI-Pose framework. We report the AP for the wrist and ankle joints as well as the mAP for all joints in Table \ref{abl-com}. \textbf{(a)} For the first setting, we remove the local calibration module and MI loss in FAMI-Pose, employing only the global transformation module (GTM) for feature alignment. Remarkably, the coarse feature alignment with the GTM already improves upon the baseline (HRNet-W48 backbone) by a significant margin of $5.6$ mAP, and the resulting $82.9$ mAP is in fact on par with the previous state-of-the-art $82.8$ mAP of DCPose \cite{liu2021deep}. This corroborates the effectiveness of our approach in introducing feature alignment to facilitate video-based pose estimation. Feature alignment is noticeably more effective in leveraging temporal information from supporting frames than previous methods that adopt optical flow or motion offset estimations. \textbf{(b)} For the next setting, we incorporate the local calibration module (LCM) on top of the global alignment to obtain fine-tuned feature alignment. This fine-tuning improves the mAP by $1.1$ to $84.0$. \textbf{(c)} The final setting includes the MI objective and corresponds to our complete FAMI-Pose framework.
The improvement of $0.8$ mAP provides empirical evidence that our proposed MI loss is effective as an additional supervision to facilitate the learning of complementary task-specific information in temporal features. \textbf{Supporting Frames}\quad In addition, we investigate the effects of adopting different supporting frame windows $\mathcal{N}$ for pose estimation. The results in Table \ref{abl-supp} suggest a performance improvement with a larger number of supporting frames, whereby the mAP increases from $82.8$ for $\mathcal{N}=\{-1\}$ to $83.9$, $84.5$, and $84.8$ at $\mathcal{N}=\{-1,1\}$, $\mathcal{N}=\{-2,-1,1\}$, and $\mathcal{N}=\{-2,-1,1,2\}$, respectively. This is in line with our intuitions, \emph{i.e.}, incorporating more supporting frames enables accessing a larger temporal context with more complementary and useful information that is beneficial for improving the pose estimation on the key frame. \subsection{Comparison of Visual Results} In addition to the quantitative analysis, we further examine the ability of our model to handle challenging scenarios such as rapid motion or pose occlusions. We illustrate in Fig. \ref{fig:contrast} the side-by-side comparisons of a) our FAMI-Pose against state-of-the-art methods, namely b) HRNet-W48 \cite{sun2019deep}, c) PoseWarper \cite{bertasius2019learning}, and d) DCPose \cite{liu2021deep}. It is observed that our approach yields more robust and accurate pose estimates for such challenging scenes. HRNet-W48 is designed for image-based pose estimation and does not incorporate information from supporting frames, resulting in poor performance on degraded video frames. On the other hand, PoseWarper and DCPose implicitly estimate motion cues between frames to improve pose estimation but lack feature alignment and effective supervision on information gain.
Through a principled design of the GTM and LCM for progressive feature alignment as well as the MI objective to enhance complementary information mining, FAMI-Pose shows a better ability to handle visual degradation. \section{Conclusion} In this paper, we examine the multi-frame human pose estimation task from the perspective of effectively leveraging temporal contexts through feature alignment and complementary information mining. We present a hierarchical coarse-to-fine network to progressively align supporting frame features with the key frame features. Theoretically, we further introduce a mutual information objective for effective supervision on intermediate features. Extensive experiments show that our method delivers state-of-the-art results on three benchmark datasets, PoseTrack2017, PoseTrack2018, and Sub-JHMDB. \section{Acknowledgements} This paper is supported by the National Natural Science Foundation of China (No. 61902348) and the Key R\&D Program of Zhejiang Province (No. 2021C01104). {\small \bibliographystyle{ieee_fullname}
\section{Introduction} Charge-density wave (CDW) and superconductivity are two important and closely linked broken symmetry states in solids. There has been tremendous interest in the interplay between these two states in condensed matter physics. The topic has motivated extensive exploration of new materials showing coexistence or competition between the two different instabilities. The discovery of the coexistence of superconductivity and a structural phase transition in Bi$_2$Rh$_3$Se$_2$ represents new progress \cite{Sakamoto2007} on this subject and offers a new opportunity to study the CDW instability and its interplay with superconductivity. Bi$_2$Rh$_3$Se$_2$ belongs to the quasi-two-dimensional parkerite-type ternary chalcogenides A$_2$M$_3$X$_2$ (A = Sn, Pb, In, Tl, and Bi; M = Co, Ni, Rh, and Pd; X = S and Se) composed of sheets containing one-dimensional M-M chains \cite{Sakamoto2007,NATARAJAN1988215,Weihrich2007}. Sakamoto et al. reported that Bi$_2$Rh$_3$Se$_2$ is a new superconducting compound with a transition temperature T$_c \sim$ 0.7 K. Intriguingly, the compound exhibits a phase transition at about 240 K. Based on resistivity, magnetic susceptibility, specific heat, thermoelectric power, thermal expansion, and low-temperature x-ray measurements, they identified the phase transition at 240 K as a CDW order \cite{Sakamoto2007}. Following this work, Kaluarachchi et al. \cite{Kaluarachchi2015} found that the isostructural compound Bi$_2$Rh$_{3.5}$S$_2$ has a higher superconducting transition temperature T$_c \sim$ 1.7 K, though the stoichiometric Bi$_2$Rh$_3$S$_2$ is not superconducting. A first order phase transition at 165 K was also found for Bi$_2$Rh$_3$S$_2$. The simultaneous observation of superconductivity and CDW order has brought a new perspective to the research in this field.
However, the subsequent pressure and selected-area electron diffraction studies on Bi$_2$Rh$_3$Se$_2$ by Chen et al. \cite{Chen2014} indicated that the resistivity anomaly at 240 K shifted to higher temperature with increasing pressure, which is unusual for a conventional CDW transition. They argued that the phase transition at 240 K is not a CDW transition, but a purely structural phase transition with a symmetry reduction from a high-symmetry C-centered monoclinic lattice to a low-symmetry primitive one below the transition temperature \cite{Chen2014}. It is crucial to understand the nature of the phase transition in these compounds because it is an essential step towards understanding their properties and the possible connection to superconductivity. Up to now, there has been no spectroscopic experiment on these compounds. It is well known that most CDW states are driven by the nesting topology of Fermi surfaces, i.e., the matching of sections of the Fermi surface (FS) to others by a wave vector \textbf{q} = 2\textbf{k}$_F$, where the electronic susceptibility has a divergence. A single-particle energy gap opens in the nested regions of the Fermi surfaces at the transition, which leads to the lowering of the electronic energies of the system. Simultaneously, the acoustic phonon branch softens to zero frequency at \textbf{q} = 2\textbf{k}$_F$ as a result of the electron-phonon interaction, leading to a structural distortion \cite{densitywave}. The formation of an energy gap below the transition has been generally taken as a characteristic feature of CDW order. On the contrary, a purely structural phase transition, if irrelevant to a CDW order, would lead to an entirely different band structure, resulting in a spectral change over a broad energy scale rather than only at low energies.
Such a broad-energy spectral change across the phase transition was demonstrated in a number of materials such as BiNi$_2$As$_2$ \cite{PhysRevB.80.094506}, IrTe$_2$ \cite{Fang2013}, and RuP \cite{PhysRevB.91.125101}. Additionally, a CDW also has collective excitations, referred to as an amplitude mode and a phase mode. The amplitude mode involves the ionic displacement and has a finite energy even in the \textbf{q}=0 limit, which can be identified by ultrafast pump-probe experiments \cite{PhysRevLett.83.800,PhysRevB.66.041101,PhysRevLett.101.246402,PhysRevB.78.201101,1347-4065-46-2R-870}. Furthermore, the formation of a CDW energy gap would also slow the relaxation of photoexcited quasiparticles, which can likewise be probed by pump-probe measurements. In this work, we performed optical spectroscopy and ultrafast pump-probe measurements on single crystal samples of Bi$_2$Rh$_3$Se$_2$. Our measurements reveal clearly the formation of an energy gap with the associated spectral change only at low energy, yielding strong evidence for a CDW phase transition at 240 K. The opening of the energy gap removes most of the free carrier spectral weight and causes a dramatic reduction in the carrier scattering rate. The ultrafast pump-probe measurement reveals a significant change of the photoinduced reflectivity near the phase transition temperature. A strong enhancement of the amplitude and relaxation time of photoinduced carriers is extracted at the phase transition temperature, also yielding evidence for the opening of the CDW energy gap. Moreover, the time-resolved measurement demonstrates the presence of a strong oscillation at low temperature, which becomes gradually damped at elevated temperatures. The temperature dependence of the oscillation suggests that it comes from the amplitude mode of the CDW collective excitations. \section{Results and discussion} The single crystal samples of Bi$_2$Rh$_3$Se$_2$ were synthesized by the self-flux method.
High-purity Bi, Rh, and Se elements with a molar ratio of Bi:Rh:Se = 2:3:2 were mixed, placed in an alumina crucible, and sealed in a silica ampule filled with argon gas. The sealed ampule was heated to 1050 $^\circ$C and held for 5 h, then slowly cooled at a rate of 2 $^\circ$C/h to 750 $^\circ$C. At the final temperature, the mixture was decanted using a centrifuge. \begin{figure}[htbp] \centering \includegraphics[width=0.35\textwidth]{res-speheat1.eps} \caption{(a) Temperature dependence of the resistivity and (b) temperature dependence of the specific heat of Bi$_2$Rh$_3$Se$_2$. A phase transition is evident near 240 K. }\label{Fig:Res-SH} \end{figure} The temperature dependent resistivity was measured by a standard four-probe method. The specific heat was measured by using the relaxation method. Both were performed in a Quantum Design physical property measurement system (PPMS). Figure \ref{Fig:Res-SH} (a) and (b) show the temperature dependence of the resistivity and specific heat between 1.8 and 300 K, respectively. Similar to previous reports \cite{Sakamoto2007,Chen2014}, a phase transition is clearly observed starting from 240 K. \begin{figure}[htbp] \centering \includegraphics[width=0.35\textwidth]{Ref-Cond4.eps} \caption{The temperature dependent (a) reflectivity $R(\omega)$ and (b) optical conductivity $\sigma_1(\omega)$ for a Bi$_2$Rh$_3$Se$_2$ single crystal below 4300 \cm, respectively. The insets of (a) and (b) present $R(\omega)$ and $\sigma_1(\omega)$ at 300 K over a broad frequency range, respectively.}\label{Fig:ref-cond} \end{figure} The in-plane reflectivity $R(\omega)$ was measured with a Bruker 80V Fourier transform infrared spectrometer in the frequency range from 40 to 25 000 \cm. The room temperature $R(\omega)$ over a broad frequency range is displayed in the inset of Fig.\ref{Fig:ref-cond} (a). The spectrum shows a typical metallic frequency response: $R(\omega)$ has high values at low frequencies and approaches unity in the zero-frequency limit.
A roughly linear frequency-dependent reflectivity is seen below 6000 \cm. The behavior is similar to that of high-T$_c$ cuprate superconductors, reflecting an overdamped behavior of carrier scattering. The main panel of Fig.\ref{Fig:ref-cond} (a) shows the reflectivity below 4300 \cm at several selected temperatures. Above 240 K, $R(\omega)$ increases monotonically as the frequency decreases, and the low energy part increases slightly upon cooling, both of which are simple metallic behaviors. With temperature further decreasing, a pronounced dip structure appears roughly near 600 \cm, which yields strong evidence for the formation of a charge gap in the vicinity of the Fermi level due to the development of the CDW order. This dip feature becomes more pronounced as the temperature decreases, indicating the continuous enhancement of the CDW gap. In the meantime, the low energy reflectivity becomes even higher than the values in the high temperature phase. The real part of the optical conductivity $\sigma_1(\omega)$ was derived from $R(\omega)$ through a Kramers-Kronig transformation, as shown in Fig.\ref{Fig:ref-cond} (b). The Hagen-Rubens relation was used for the low energy extrapolation of $R(\omega)$. For the high frequency extrapolation we employed the x-ray atomic scattering functions \cite{PhysRevB.91.035123}. The main panel of Fig.\ref{Fig:ref-cond} (b) shows $\sigma_1(\omega)$ below 4300 \cm at different temperatures, and the inset shows $\sigma_1(\omega)$ over a broad energy scale at room temperature. Above the transition temperature, the optical conductivity exhibits clearly a Drude peak at low frequency. Its broad width indicates a large scattering rate $\gamma$ of the free carriers. Upon entering the low temperature phase, the spectral weight of the Drude peak is substantially removed and transferred to higher energies to form a broad peak centered around 1000 \cm. The feature becomes more prominent as the temperature decreases.
In fact, these are the expected spectroscopic features of a density wave condensate, because either a charge or a spin density wave condensate has a so-called "case-I" coherence factor which causes a sharp rise in the optical conductivity spectrum just above the energy gap \cite{densitywave}. Here, we can take the central position of the peak as the upper limit of the energy scale of the CDW gap. Then, we get the value $2\Delta/k_B T_{CDW}\sim$6, larger than the weak-coupling BCS prediction but not uncommon for a density wave phase transition. We emphasize that these features are dramatically different from those of a purely structural phase transition. For a structural phase transition irrelevant to CDW order, the band structure of the low temperature phase could be entirely different from that of the high temperature phase. That would result in a sudden spectral change over a broad energy scale, which has been observed in a number of materials such as BiNi$_2$As$_2$ \cite{PhysRevB.80.094506}, IrTe$_2$ \cite{Fang2013}, and RuP \cite{PhysRevB.91.125101} across their structural phase transitions. \begin{figure}[htbp] \centering \includegraphics[width=0.35\textwidth]{Ref-Cond-fit1.eps} \caption{The frequency dependent optical conductivity $\sigma_1(\omega)$ at 300 K (a) and 10 K (b), together with a Drude-Lorentz fit. The fitting parameters of the high frequency Lorentz components are kept unchanged; therefore, the spectral weight removed from the Drude component at high temperature is transferred to the peak centered near 1000 \cm.} \label{Fig:cond-fit} \end{figure} To estimate the gapped spectral weight, we employ a Drude-Lorentz model to fit the optical conductivity, \begin{equation} \sigma_1(\omega)= {\frac{\omega_p^2}{4\pi}}{\frac{\Gamma_D}{\omega^2+\Gamma_D^2}}+ \sum_{j}{\frac{S_j^2}{4\pi}}{\frac{\Gamma_j\omega^2}{(\omega_j^2-\omega^2)^2+\omega^2\Gamma_j^2}}.
\label{Eq:DrudeLorentz} \end{equation} The Drude term describes the response of itinerant carriers, while the Lorentz terms stand for interband transitions and excitations across energy gaps. Here, for simplicity, we use only one Drude component to estimate the spectral weight of the itinerant carriers. Figure \ref{Fig:cond-fit} shows the fitting results at two representative temperatures, 300 K and 10 K. We find that the low frequency spectrum at 300 K can be approximately reproduced by one Drude component. At 10 K, the removed spectral weight forms a peak at about 1000 \cm ($\sim$ 120 meV), an indication of energy gap formation. Since the fitting parameters of the high frequency Lorentz components are kept unchanged, the spectral weight transfer occurs only on the low energy scale. It is worth noting that a sharp and narrow Drude peak remains in the low temperature state, which agrees well with the high reflectivity near zero frequency, indicating that the Fermi surfaces are only partially gapped by the CDW phase transition. The number of lost free carriers can be estimated from the variation of the plasma frequency $\omega_p\sim\sqrt{n/m^*}$, where $n$ and $m^*$ represent the number and effective mass of the free carriers, respectively. From the above decomposition, we find that the plasma frequency $\omega_p$ varies roughly from 2.8$\times10^4$ \cm at room temperature to 8$\times10^3$ \cm at 10 K, which indicates that over 90 \% of the free carriers are removed by the opening of the CDW energy gap. Meanwhile, the scattering rate $\Gamma_D$ decreases drastically from 2540 \cm to 93 \cm, as evidenced by the narrowing of the Drude peak, which gives the compound an even lower \emph{dc} resistivity at low temperature despite the substantial carrier density loss. Actually, the plasma frequency associated with the Drude component spectral weight can also be estimated by $\omega_p^2=8\int_0^{\omega_c}\sigma_1(\omega)d\omega$, where $\omega_c$ is the cut-off frequency of the Drude component.
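As a numerical consistency check of the two estimates, the Drude term of equation (1) can be integrated directly. The following minimal sketch (illustrative only, not the actual fitting code) uses the Drude parameters quoted above, omits the Lorentz terms, and recovers $\omega_p$ from the sum rule $\omega_p^2=8\int\sigma_1(\omega)d\omega$:

```python
import numpy as np

def drude(w, wp, gamma_d):
    """Drude term of Eq. (1): sigma_1(w) = (wp^2/4pi) * Gamma_D / (w^2 + Gamma_D^2)."""
    return (wp**2 / (4 * np.pi)) * gamma_d / (w**2 + gamma_d**2)

# Plasma frequency and scattering rate quoted in the text (all in cm^-1)
params = {300: (2.8e4, 2540.0), 10: (8e3, 93.0)}

results = {}
for T, (wp, gd) in params.items():
    w = np.linspace(0.0, 2e6, 2_000_000)   # wide grid so the Drude tail is captured
    s1 = drude(w, wp, gd)
    # Sum rule: wp^2 = 8 * integral of sigma_1(w) dw (trapezoidal rule)
    integral = np.sum(0.5 * (s1[1:] + s1[:-1]) * np.diff(w))
    results[T] = np.sqrt(8 * integral)
    print(T, "K: recovered omega_p =", round(results[T]), "cm^-1")
```

Because the analytic integral of a Lorentzian of half-width $\Gamma_D$ over all frequencies is $\pi/2$, the recovered $\omega_p$ matches the input to well within a percent once the grid extends far beyond $\Gamma_D$.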
Taking the location of the conductivity minimum as the cutoff frequency, about 4800 \cm for 300 K and 290 \cm for 10 K, where the balance between the tails of the Drude and Lorentz components is roughly taken into account, we obtained the same values of the plasma frequencies as those from the above decomposition of the spectral weight. Further support for the formation of CDW order can be obtained from our ultrafast pump-probe measurement, which has been proven to be particularly useful in detecting both single-particle excitations across small energy gaps \cite{PhysRevLett.82.4918,PhysRevLett.104.027003,PhysRevB.84.174412,Chen2014a,PhysRevB.75.115120} and collective modes relevant to long-range ordering \cite{PhysRevLett.101.246402,Albrecht1992,PhysRevLett.111.057402}. We used a Ti:sapphire oscillator as the light source for both the pump and probe beams, which produces 800 nm laser pulses at an 80 MHz repetition rate. The 100 femtosecond duration of the laser pulses enables ultrafast time-resolved measurements. The fluence of the pump beam is about 6.4 $\mu J/cm^2$, and the fluence of the probe beam is ten times lower. In order to reduce the noise caused by stray light, the pump and probe pulses were set to be cross polarized and an extra polarizer was mounted just before the detector. \begin{figure*}[hbtp] \centering \includegraphics[width=18cm]{pump-probe-data5.eps}\\ \caption{(a) $\Delta R/R$ in the temperature range of 200 - 300 K. The absolute value of $\Delta R/R$ increases with decreasing temperature, reaching the maximum at 200 K. (b) $\Delta R/R$ in the temperature range of 10 - 200 K. The absolute value of $\Delta R/R$ decreases with further decreasing temperature. The grey dash-dot curve on top of the data at 200 K is the fitting curve from equation (2). (c) The intensity plot of $\Delta R/R$ below 15 ps at different temperatures. An intensity oscillation is clearly seen at low temperature, suggesting the presence of a collective mode.
The period of the oscillation increases with increasing temperature, suggesting a decrease of the mode frequency at elevated temperatures. } \label{Fig:DeltaR} \end{figure*} The photoinduced reflectivity change $\Delta R/R$ as a function of time delay at different temperatures is displayed in Fig.\ref{Fig:DeltaR} (a) and (b), respectively. Overall, the signal levels change quickly below 240 K. With decreasing temperature, the absolute value of $\Delta R/R$ increases and reaches its maximum near 200 K (Fig.\ref{Fig:DeltaR} (a)); the amplitude then drops with further decreasing temperature (Fig.\ref{Fig:DeltaR} (b)). The intensity plot for the time delay below 15 ps at different temperatures is shown in Fig.\ref{Fig:DeltaR} (c). We observe three distinct decay processes: a fast relaxation with a negative amplitude, a longer relaxation with a positive amplitude, and a very long relaxation with a negative amplitude. Indeed, the reflectivity change can be well reproduced by three exponential decays, \begin{equation} \Delta R/R= A_1 exp(-t/\tau_1) +A_2 exp(-t/\tau_2)+A_3 exp(-t/\tau_3), \label{Eq:ThreeExp} \end{equation} where $A_i$ (i=1,2,3) represents the amplitude of the photoinduced reflectivity change and $\tau_i$ stands for the relaxation time of each decay channel. As an example, we show the fitting curve for the data at 200 K (the bottom curve in Fig.\ref{Fig:DeltaR} (a)) obtained with this formula. The amplitudes $A_1$, $A_2$ and $A_3$ have the same order of magnitude, whereas the relaxation times of the three decay processes are dramatically different: $\tau_1$ is sub-picosecond, $\tau_2$ is a few picoseconds, and $\tau_3$ is several hundred picoseconds. At 200 K, for example, $A_1$=-1.70 and $\tau_1$=0.6 ps, $A_2$=3.82 and $\tau_2$=3.8 ps, and $A_3$=-6.47 and $\tau_3$=350 ps, respectively. The presence of rapid and slow decay dynamics after excitation has been observed in many systems \cite{Kumar2013,Luo2012,Tomeljak2009}.
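The three-channel fit of equation (2) can be sketched with a standard nonlinear least-squares routine. This is a sketch only: a synthetic trace built from the 200 K parameters quoted above (not the measured data) is refit to recover them:

```python
import numpy as np
from scipy.optimize import curve_fit

def three_exp(t, a1, t1, a2, t2, a3, t3):
    """Equation (2): sum of three exponential decay channels."""
    return a1 * np.exp(-t / t1) + a2 * np.exp(-t / t2) + a3 * np.exp(-t / t3)

rng = np.random.default_rng(0)
t = np.geomspace(0.05, 600, 2000)             # pump-probe delay in ps, dense at early times
true = (-1.70, 0.6, 3.82, 3.8, -6.47, 350.0)  # 200 K values quoted in the text
data = three_exp(t, *true) + 0.02 * rng.normal(size=t.size)

p0 = (-1.0, 1.0, 3.0, 5.0, -5.0, 300.0)       # rough initial guess
popt, _ = curve_fit(three_exp, t, data, p0=p0)
print("A_i, tau_i:", np.round(popt, 2))
```

Because the three timescales (sub-picosecond, a few picoseconds, hundreds of picoseconds) are well separated, the fit is well conditioned and the recovered parameters are stable against the added noise.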
In general, the number of photoexcited hot electrons (or quasiparticles) at zero time delay is related to the amplitude of $\Delta R/R$. The excited high-energy hot electrons release and transfer their energy to the lattice through the emission of longitudinal optical phonons, and those optical phonons further decay into longitudinal acoustic phonons via anharmonic interactions. The sub-picosecond decay is mainly attributed to quasiparticle relaxation via electron-phonon thermalization, while the several to several hundred picosecond decay processes are related to the lattice energy loss via phonon population decay (inelastic scattering) or dephasing (elastic scattering) \cite{Luo2012,Hase2005}. The energy loss from the excited hot spot to the ambient environment takes an even longer time. \begin{figure}[htbp] \centering \includegraphics[width=0.35\textwidth]{pump-probe-fit4.eps} \caption{The amplitude A$_1$ and relaxation time $\tau_1$ of the fast decay process extracted from the fitting of equation (2) to the experimental data of $\Delta R/R$ at different temperatures. The red dash curves are fitting curves from equations (3) and (4) below the phase transition temperature.}\label{Fig:pump-probe-fit} \end{figure} The substantial change of $\Delta R/R$ below 230$\sim$240 K reflects the phase transition. From the fitting of equation (2) to $\Delta R/R$, we find that the $A_i$ and $\tau_i$ (i=1,2,3) parameters for all three decay processes change with temperature, particularly near the phase transition temperature. Such a situation has been observed in other materials before, for example, in the spin density wave compound CaFe$_2$As$_2$ \cite{Kumar2013}. As our focus here is on the issue of whether the structural phase transition near 240 K is related to the CDW order, we shall limit our attention to the quasiparticle relaxation in the sub-picosecond decay channel.
Figure \ref{Fig:pump-probe-fit} shows A$_1$ and $\tau_1$ extracted from the fitting of equation (2) to $\Delta R/R$ as a function of temperature. An abrupt increase of A$_1$ is observed at the transition temperature with a broad peak-like structure at lower temperature, and simultaneously, a divergence in $\tau_1$ is seen near the transition temperature. These features represent ultrafast spectroscopy evidence for an energy gap opening, as we shall explain below. It is well known that the energy gap formation has a significant effect on the decay dynamics of the photoexcited quasiparticles, which is described by the phenomenological Rothwarf-Taylor (R-T) model \cite{RT1967}. This model was initially established to explain the ultrafast relaxation dynamics of superconductors, but has proved applicable to a wide range of metallic systems with a gap opening in the density of states. It proposes that the high energy phonons emitted by the recombination of photoinduced quasiparticles across an energy gap introduce a bottleneck effect in the relaxation. That is, the depletion of states near the Fermi level significantly impedes the relaxation of the photoinduced quasiparticles. In the small photoexcitation limit, the R-T model relates the density of thermally activated quasiparticles, $n_T$, to the measured transient reflectivity amplitudes $A(T)$ as $n_T \propto \mathcal{A}^{-1}-1$, where $\mathcal{A}=A(T)/A(T\rightarrow 0)$. Assuming the standard form of the thermally activated quasiparticle density $n_T \propto \sqrt{T\Delta(T)}exp[-\Delta(T)/2T]$ and a BCS-like gap of the form $\Delta(T)=\Delta(0)\sqrt{1-T/T_c}$, the amplitude of the photoinduced reflectivity signal is given by \cite{PhysRevB.59.1497} \begin{equation}\label{Eq:RTamp} A(T) \propto {\frac{\Phi/(\Delta(T)+k_BT/2)}{1+\gamma\sqrt{k_BT/\Delta(T)}exp[-\Delta(T)/k_BT]}}, \end{equation} where $\Phi$ is the pump fluence and $\gamma$ is a fitting parameter.
The equation describes an increase in the photoexcited quasiparticle density due to the decreasing gap value and the correspondingly enhanced phonon emission during the relaxation. As the gap closes at the transition temperature, more and more low-energy phonons become available for reabsorption, and a quasi-divergence in the relaxation time results \cite{PhysRevB.59.1497}, \begin{equation}\label{Eq:RTtau} \tau \propto {\frac{ln(g+exp(-\Delta(T)/k_BT))}{\Delta(T)^2}}, \end{equation} where g is a fitting parameter. Indeed, the sudden change of $A_1$ and $\tau_1$ near the phase transition temperature can be captured by equations (3) and (4) of the R-T model, as shown in Fig. \ref{Fig:pump-probe-fit}. There exists some deviation at lower temperatures. It might be linked to the simultaneous change in $A_2$, $\tau_2$, $A_3$, and $\tau_3$, which was ignored in the above analysis. Additionally, we expect that the R-T model, which is appropriate near the phase transition, becomes less applicable at temperatures far below the phase transition. Nevertheless, the sudden increase in $A_1$ and the quasi-divergence in $\tau_1$ illustrate unambiguously the appearance of an energy gap, yielding further evidence for the CDW phase transition. \begin{figure}[htbp] \centering \includegraphics[width=0.35\textwidth]{mode3.eps} \caption{(a) $\Delta R/R$ as a function of pump-probe time delay within 15 ps at 10 K. The decay background is subtracted. The inset shows the mode in the frequency domain after the Fourier transformation. (b) The extracted mode frequency from the oscillations in the pump-probe measurement as a function of temperature.}\label{Fig:CDWmode} \end{figure} On the other hand, a CDW also has collective excitations, referred to as an amplitude mode and a phase mode.
The amplitude mode involves the ionic displacement and has a finite energy even in the \textbf{q}=0 limit, usually at terahertz frequencies, and can therefore be identified by ultrafast pump-probe experiments \cite{PhysRevLett.83.800,PhysRevB.66.041101,PhysRevLett.101.246402,PhysRevB.78.201101,1347-4065-46-2R-870}. Indeed, the pump-probe signals exhibit pronounced oscillations at low temperature, as seen clearly in Fig. \ref{Fig:DeltaR} (b) and the intensity plot of Fig. \ref{Fig:DeltaR} (c). The period of the oscillation clearly increases with increasing temperature, indicating a decrease of the mode frequency. In order to analyze the oscillatory component quantitatively, we subtract the exponential fitting part and then perform a fast Fourier transformation of the residual part to obtain the mode frequency. As an example, Fig. \ref{Fig:CDWmode} (a) shows the $\Delta R/R$ data at 10 K as a function of time delay within 15 ps with the decay background subtracted. The inset shows the mode in the frequency domain after the Fourier transformation. The results for different temperatures are plotted in Fig. \ref{Fig:CDWmode} (b). The mode frequency of the oscillation drops as temperature increases. In CDW compounds, the collective amplitude mode of the CDW condensate usually has higher signal strength than phonon modes and behaves as an order parameter as a function of temperature. In some CDW materials, one indeed observes the disappearance of the CDW amplitude mode precisely at the CDW transition temperature \cite{PhysRevLett.118.107402}. But quite often, the oscillations are heavily damped at elevated temperature and cannot be resolved before reaching the transition temperature. This is also the case for the present compound. Thus, judging from the signal level and the temperature dependent trend, we attribute the observed mode to the CDW amplitude mode.
The mode frequency is about $\Omega_A$=1.25 THz at T=0 K, which is among the commonly observed energy scales for CDW order. In fact, the CDW amplitude mode frequency at the Brillouin zone center is known to be related to the phonon frequency of the acoustic branch $\omega_{2k_F}$ at wave vector \textbf{q} = 2\textbf{k}$_F$ above $T_{CDW}$ and the electron-phonon coupling constant $\lambda$ by $\Omega_A\sim \lambda^{1/2}\omega_{2k_F}$ \cite{densitywave}. Unusual CDW mode frequencies were observed only for certain compounds with peculiar CDW wave vectors \textbf{q} = 2\textbf{k}$_F$, for example, in LaAgSb$_2$ with an extremely small CDW wave vector 2\textbf{k}$_F$, i.e., an unusually long lattice modulation period in real space \cite{PhysRevLett.118.107402}. \section{Summary} To summarize, we have utilized infrared spectroscopy and ultrafast pump-probe measurements to investigate the charge and coherent dynamics of Bi$_2$Rh$_3$Se$_2$ single crystals, in an effort to address whether the phase transition at 240 K is a CDW order or a purely structural transition. Our optical spectroscopy measurement reveals clearly the formation of an energy gap with associated spectral change only at low energies, yielding strong evidence for a CDW phase transition at 240 K. The formation of the energy gap removes most of the free-carrier spectral weight. Time-resolved pump-probe measurement provides further support for the CDW phase transition. The amplitude and relaxation time of quasiparticles extracted from the photoinduced reflectivity show strong enhancement near the transition temperature, yielding further evidence for the CDW energy gap formation. Additionally, a collective mode is identified from the oscillations in the pump-probe time delay at low temperature. This mode, whose frequency decreases gradually at elevated temperature, is suggested to be the amplitude mode of the CDW condensate state.
\begin{center} \small{\textbf{ACKNOWLEDGMENTS}} \end{center} This work was supported by National Natural Science Foundation of China (No. 11888101), the National Key Research and Development Program of China (No. 2017YFA0302904, 2016YFA0300902). \bibliographystyle{apsrev4-1}
\section{Introduction} Let $\mathbb{F}_q$ be the finite field with $q$ elements, where $q$ is a power of a prime $p$ and let $\overline{\mathbb{F}}_q$ be the algebraic closure of $\mathbb{F}_q$. For any $\alpha\in \overline{\mathbb{F}}_q$, we let $m_{\alpha}(x)$ denote the minimal polynomial of $\alpha$ over $\mathbb{F}_q$. A polynomial $F(x) \in \mathbb{F}_q[x]$ naturally induces a function on the set $\overline{\mathbb{F}}_q$: namely, we have the evaluation map $F:\overline{\mathbb{F}}_q\to \overline{\mathbb{F}}_q$ with $c\mapsto F(c)$. For each positive integer $k$, let $\mathcal I_k$ be the set of monic irreducible polynomials over $\mathbb{F}_q$ of degree $k$ and $\mathcal I=\cup_{k\ge 1}\mathcal I_k$ be the set of monic irreducible polynomials over $\mathbb{F}_q$. It turns out that any polynomial $F(x) \in\mathbb{F}_q[x]$ also induces a map on the set $\mathcal I$; namely, for any $f\in \mathcal I$, let $\alpha\in \overline{\mathbb{F}}_q$ be any root of $f$ and define $F\diamond f$ as the minimal polynomial of $\beta=F(\alpha)$, i.e., $$F\diamond f:= m_{F(\alpha)}(x).$$ Of course, $F\diamond f$ is again in $\mathcal I$. Since $F(x)\in \mathbb{F}_q[x]$, this composition operation $\diamond$ is well defined: if $\alpha_0$ is another root of $f$, then $\alpha_0=\alpha^{q^i}$ is a conjugate of $\alpha$ over $\mathbb{F}_q$ and so $F(\alpha_0)=F(\alpha^{q^i})=F(\alpha)^{q^i}$ is a conjugate of $F(\alpha)$, i.e., $m_{F(\alpha)}=m_{F(\alpha_0)}$. In particular, the polynomial $F$ yields a dynamics on the set $\mathcal I$: for $f\in \mathcal I$, one may consider the orbit $\{f, F\diamond f, F\diamond (F\diamond f), \ldots\}$ of $f$ by $F$. The study of this kind of dynamics was initiated by Vivaldi~\cite{V92} and Batra and Morton~\cite{BM94-1, BM94-2}.
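Before proceeding, here is a concrete instance of the $\diamond$ operation (a minimal computational sketch; the bit-mask representation of $\mathbb{F}_8$ and the choice of $F$ are ours, for illustration only). Take $q=2$, $f(x)=x^3+x+1$ and $F(x)=x+1$: writing $\mathbb{F}_8=\mathbb{F}_2[t]/(t^3+t+1)$ with elements encoded as $3$-bit integers, $F\diamond f$ is the minimal polynomial of $F(t)=t+1$, which turns out to be $x^3+x^2+1$.

```python
# Computing F <> f over F_2 for f = x^3 + x + 1 and F(x) = x + 1.
# F_8 = F_2[t]/(t^3 + t + 1); elements are 3-bit ints, t = 0b010.
def mul8(a, b):
    """Multiply in F_8: carry-less product reduced by t^3 + t + 1 (0b1011)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:
            a ^= 0b1011
    return r

def min_poly(beta):
    """Coefficients (degree 3 down to 0) of prod_i (x - beta^{2^i}) over F_8."""
    conj, c = [], beta
    while c not in conj:
        conj.append(c)
        c = mul8(c, c)          # Frobenius: c -> c^2
    mp = [1]
    for c in conj:              # multiply mp by (x + c); char 2, so - equals +
        out = mp + [0]
        for i, a in enumerate(mp):
            out[i + 1] ^= mul8(c, a)
        mp = out
    return mp

t = 0b010                       # a root of f = x^3 + x + 1
beta = t ^ 0b001                # beta = F(t) = t + 1
assert min_poly(beta) == [1, 1, 0, 1]   # F <> f = x^3 + x^2 + 1
```

The same routine recovers $f$ itself from $t$: `min_poly(0b010)` returns `[1, 0, 1, 1]`, i.e., $x^3+x+1$, confirming that conjugate roots yield the same minimal polynomial.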
In \cite{BM94-1, BM94-2}, the authors explored this dynamics over $\mathcal I$ for some special classes of linearized polynomials and, in particular, they show the existence of infinitely many fixed points, i.e., orbits of length one. This study was extended in \cite{M97}, where Morton showed that, for $F(x) =x^q-ax, a\in \mathbb{F}_q^*$, the dynamics induced by $F$ on $\mathcal I$ yields infinitely many periodic points with period $\pi_F(f)=n$, for any positive integer $n$. In \cite{CH00}, Cohen and Hachenberger extended this last result to any linearized polynomial $F(x) =\sum_{i=0}^{s}a_ix^{q^i}$ not of the form $ax^{q^s}$ for any $a\in \mathbb{F}_q^*$. In general, if $f\in \mathcal I_k$ and $\alpha\in \mathbb{F}_{q^k}$ is any root of $f$, then $F(\alpha)$ is in $\mathbb{F}_{q^k}$ and its minimal polynomial is of degree $d$, where $d$ is a divisor of $k$. In particular, we have $\deg(F\diamond f)\le \deg(f)$ for any $F\in \mathbb{F}_q[x]$ and $f\in \mathcal I$. It is then natural to study the extremal case, i.e., $\deg(F\diamond f)=\deg(f)$ for any $f\in \mathcal I$. In other words, we are interested in any polynomial $F$ for which $f\mapsto F\diamond f$ is a \emph{degree preserving} map. We shall prove in Theorem~\ref{thm:degree-preserve} that the class of polynomials $F$ inducing a degree preserving map on $\mathcal I$ is extremely small, namely, the class of polynomials $F(x)=ax^{p^h}+b$ with $a, b\in \mathbb{F}_q$, $a \neq 0$, and $h\ge 0$. Nevertheless, we may obtain many examples of polynomials $F$ for which $f\mapsto F\diamond f$ is \emph{locally} degree preserving. We recall that a polynomial $f\in \mathbb{F}_{q^k}[x]$ is called a permutation polynomial of $\mathbb{F}_{q^k}$ if the evaluation mapping is a permutation of $\mathbb{F}_{q^k}$. We prove that, if $F(x) \in \mathbb{F}_q[x]$ is a permutation polynomial of $\mathbb{F}_{q^k}$, then $F\diamond f \in \mathcal I_k$ for any $f \in \mathcal I_k$. 
This means that $\deg(F\diamond f) = \deg(f)$ for all $f \in \mathcal I_k$. Moreover, the map $f\mapsto F\diamond f$ from $\mathcal I_k$ to the set $\mathcal I_k$ is a permutation; for more details, see Theorem~\ref{thm:local}. Our paper mostly concentrates on permutations of the set $\mathcal I_k$ of monic irreducible polynomials of degree $k$. We prove in Theorem~\ref{thm:onto} that every permutation $\sigma$ of the set $\mathcal I_k$ can be represented as the map induced by a permutation polynomial $P(x)$ of $\mathbb{F}_{q^k}$ with all coefficients in $\mathbb{F}_q$, i.e., $\sigma(f) = P\diamond f$ for any $f\in \mathcal I_k$. To study the dynamics of $P(x)$ on $\mathcal I_k$, we find that it is more convenient to study an associated operation $P\ast f = Q\diamond f$, where $Q$ is the compositional inverse of $P$. In particular, we show that \begin{equation*}P\ast f=\gcd(f(P(x)), x^{q^k}-x).\end{equation*} In Section 5, we provide some general results on the fixed points of the compositions $P\ast f$. We prove that $P\ast f = f$ if and only if $f$ is a divisor of $\prod_{i=0}^{k-1} (x^{q^i} -P(x))$. Through this characterization, we derive the number of fixed points under this composition operation. In Section 6, we provide connections between the dynamics of the evaluation map $c\mapsto P(c)$ over $\mathbb{F}_{q^k}$ (restricted to the set $\mathcal C_k$ of elements of degree $k$ over $\mathbb{F}_q$) and the map $f \mapsto P\ast f$ over $\mathcal I_k$. Finally, we propose to use this operation to iteratively generate irreducible polynomials of the same degree from any given irreducible polynomial. In some cases, we show that we can generate at least a proportion $1/k$ of all the irreducible polynomials of degree $k$, starting from any one irreducible polynomial of degree $k$. \\ \section{Preliminaries} In this section, we provide the background material that is used along the paper.
Most of the following results are standard in the theory of finite fields and, for this reason, we skip some details. \subsection{Multiplicative order in finite fields}\label{subsec:mult} For $\alpha\in \overline{\mathbb{F}}_q^{*}$, the multiplicative order of $\alpha$ is defined by $\mathrm{ord}(\alpha):=\min\{d>0\,|\, \alpha^d=1\}$. The degree $\deg(\alpha)$ of $\alpha\in\overline{\mathbb{F}}_q$ over $\mathbb{F}_q$ is defined as the degree of the minimal polynomial of $\alpha$ over $\mathbb{F}_q$. If $a, n$ are positive integers such that $\gcd(a, n)=1$, then $\mathrm{ord}_{n}a:=\min\{r>0\,|\, a^r\equiv 1\pmod n\}$. In the following lemma, we present some well-known results on the multiplicative order of elements in finite fields. \begin{lemma}\label{thm:mult} Let $\alpha\in \overline{\mathbb{F}}_q^*$ be an element of multiplicative order $\mathrm{ord}(\alpha)=e$. The following hold: \begin{enumerate}[(i)] \item $\deg(\alpha)=\mathrm{ord}_eq$; \item if $\beta=\alpha^s$, then $\mathrm{ord}(\beta)=\frac{e}{\gcd(e, s)}$. \end{enumerate} For any positive integer $E$ that is relatively prime with $q$, there exist $\varphi(E)$ elements $\alpha\in \overline{\mathbb{F}}_q^*$ such that $\mathrm{ord}(\alpha)=E$, where $\varphi$ is the \emph{Euler Function}. In addition, such elements are all in $\mathbb{F}_{q^k}$, where $k=\mathrm{ord}_Eq$. \end{lemma} If $f\in \mathcal I_k$ with $f(x)\ne x$, then the order $\mathrm{ord}(f)$ of $f$ is the order of any (hence all) of its roots. An element $\alpha\in \mathbb{F}_{q^k}^{*}$ is \emph{primitive} if $\mathrm{ord}(\alpha)=q^k-1$. Additionally, $f\in \mathcal I_k$ is a \emph{primitive polynomial} if any (hence all) of its roots is a primitive element, i.e., $\mathrm{ord}(f)=q^k-1$. In particular, Lemma~\ref{thm:mult} shows that there exist $\varphi(q^k-1)>0$ primitive elements in $\mathbb{F}_{q^k}$.
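Item (i) of Lemma~\ref{thm:mult} is easy to check numerically; the following sketch (the helper name \texttt{mult\_order} is ours) verifies, e.g., that $\mathrm{ord}_7\,2=3$, so the $\varphi(7)=6$ elements of order $7$ in $\overline{\mathbb{F}}_2$ all have degree $3$, i.e., they lie in $\mathbb{F}_8$ and split into two irreducible cubics.

```python
def mult_order(q, e):
    """Least r with q^r = 1 (mod e); assumes gcd(q, e) = 1.
    By Lemma (i), this is deg(alpha) for any alpha of multiplicative order e."""
    r, x = 1, q % e
    while x != 1:
        x = (x * q) % e
        r += 1
    return r

# Elements of order 7 over F_2 have degree ord_7(2) = 3: they lie in F_8
# and the phi(7) = 6 of them split into 6/3 = 2 irreducible cubics.
assert mult_order(2, 7) == 3
# Primitive elements of F_16 (order 15) have degree ord_15(2) = 4.
assert mult_order(2, 15) == 4
```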
\subsection{Linearized polynomials and the $\mathbb{F}_q$-order}\label{subsec:add} A polynomial of the form $L(x)=\sum_{i=0}^{m}a_ix^{q^i}\in \overline{\mathbb{F}}_q[x]$ is called \emph{linearized}. Given $f\in \overline{\mathbb{F}}_q[x]$ with $f(x)=\sum_{i=0}^{n}a_ix^i$, we can associate to $f$ the polynomial $L_f(x)=\sum_{i=0}^{n}a_ix^{q^i}$. In this case, $L_f$ is the \emph{$q$-associate} of $f$. Of course, any linearized polynomial is the $q$-associate of some polynomial in $\overline{\mathbb{F}}_q[x]$. In the following lemma, we show that linearized polynomials with coefficients in $\mathbb{F}_q$ behave well through basic operations. \begin{lemma}\label{lem:linearized-properties} Let $f, g\in \mathbb{F}_{q}[x]$. The following hold: \begin{enumerate}[(i)] \item $L_f(x)+L_g(x)=L_{f+g}(x)$, \item $L_g(L_f(x))=L_{fg}(x)=L_{gf}(x)$, \item $\gcd(L_f, L_g)=L_{\gcd(f, g)}$. \end{enumerate} \end{lemma} \begin{proof} Items (i) and (ii) follow by direct calculations and item (iii) is proved in Section 3.4 of \cite{LN}. \end{proof} For a polynomial $f\in \mathbb{F}_q[x]$ with $f(x)=\sum_{i=0}^{m}a_ix^i$ and an element $\alpha\in \overline{\mathbb{F}}_q$, we have $L_f(\alpha)=\sum_{i=0}^ma_i\alpha^{q^i}$. We observe that, if $\alpha\in \mathbb{F}_{q^k}$, $L_{x^{k}-1}(\alpha)=\alpha^{q^k}-\alpha=0$. We define $$\mathcal A_{\alpha}=\{g \in \mathbb{F}_q[x]\, |\, L_g(\alpha)=0\}.$$ From the previous lemma, $\mathcal A_{\alpha}$ is an ideal of $\mathbb{F}_q[x]$ and we observe that $x^k-1$ is in this ideal. In particular, $\mathcal A_{\alpha}$ is generated by a non-zero polynomial $m_{\alpha, q}(x)$, which we can suppose to be monic. \begin{define}The polynomial $m_{\alpha, q}(x)$ is defined as the \emph{$\mathbb{F}_q$-order} of $\alpha$. Namely, $m_{\alpha, q}(x)$ is a polynomial in $\mathbb{F}_q[x]$ with the lowest degree such that $\alpha$ is a root of its $q$-associate.
\end{define} For instance, the element $0$ has $\mathbb{F}_q$-order $m_{0, q}(x)=1$ and, for any $c\in \mathbb{F}_{q}^*$, $m_{c, q}(x)=x-1$. For any $\alpha\in \overline{\mathbb{F}}_q$ and any $f\in \mathbb{F}_q[x]$, by Lemma~\ref{lem:linearized-properties}, we have that $L_f(\alpha)=0$ if and only if $f$ is divisible by $m_{\alpha, q}(x)$. The polynomial $m_{\alpha, q}(x)$ works as an ``additive'' order, in duality to the multiplicative order over finite fields. It is clear that $\alpha$ is in $\mathbb{F}_{q^k}$ if and only if $0=\alpha^{q^k}-\alpha=L_{x^k-1}(\alpha)$, i.e., $m_{\alpha, q}(x)$ divides $x^k-1$. Also, $\alpha$ and any of its conjugates $\alpha^{q^i}$ have the same $\mathbb{F}_q$-order. This motivates us to introduce the following definition. \begin{define} Let $f\in \mathbb{F}_q[x]$ be an irreducible polynomial and let $\alpha$ be one of its roots; the $\mathbb{F}_q$-order of $f$ is defined as the $\mathbb{F}_q$-order of $\alpha$. We write $\mathrm{Ord}(f)=m_{\alpha, q}(x)$. \end{define} The elements $\alpha\in \mathbb{F}_{q^k}$ for which $m_{\alpha, q}(x)=x^k-1$ are called {\it normal} over $\mathbb{F}_q$. Normal elements work as ``additive generators'' (in the linearized sense) of the additive group $\mathbb{F}_{q^k}$, in duality to the primitive elements for the multiplicative group $\mathbb{F}_{q^k}^*$. Following the definition of primitive polynomials, an element $f\in \mathcal I_k$ is a \emph{normal polynomial} if $\mathrm{Ord}(f)=x^k-1$. In the additive-multiplicative order analogies, Lemma~\ref{thm:mult} can be translated to the $\mathbb{F}_q$-order with a suitable change of functions. \begin{define} Let $f, g\in \mathbb{F}_q[x]$.
\begin{enumerate}[(i)] \item The \emph{Euler Phi function} for polynomials over $\mathbb{F}_q$ is $$\Phi_q(f)=\left |\left(\frac{\mathbb{F}_q[x]}{\langle f\rangle}\right)^{*}\right |,$$ where $\langle f\rangle$ is the ideal generated by $f$ in $\mathbb{F}_q[x]$; \item If $\gcd(f, g)=1$, then $\mathcal O(f, g):=\min\{k>0\,|\, f^k\equiv 1\pmod g\}$ is the order of $f$ modulo $g$. \end{enumerate} \end{define} The function $\Phi_q$ is multiplicative. Also, $\Phi_q(g^s)=q^{(s-1)d}(q^d-1)$ if $g$ is an irreducible polynomial of degree $d$ and $s$ is a positive integer: one may compare with $\varphi(r^s)=r^{s-1}(r-1)$ if $r$ is a prime number. It is straightforward to check that $\mathcal O(f, g)$ divides $\Phi_q(g)$. \begin{theorem}\label{thm:add} Let $\alpha\in \overline{\mathbb{F}}_q$ and the $\mathbb{F}_q$-order of $\alpha$ be $m_{\alpha, q}(x)=h(x)$. The following hold: \begin{enumerate}[(i)] \item $\deg(\alpha)=\mathcal O(x, h(x))$; \item if $\beta=L_g(\alpha)$, then $\beta$ has $\mathbb{F}_q$-order $m_{\beta, q}(x)=\frac{h(x)}{\gcd(h(x), g(x))}$. \end{enumerate} In addition, for any polynomial $H$ relatively prime with $x$, there exist $\Phi_q(H)$ elements $\alpha\in \overline{\mathbb{F}}_q$ such that $m_{\alpha, q}(x)=H$. \end{theorem} \begin{proof} \begin{enumerate}[(i)] \item Observe that $\deg(\alpha)$ is the least positive integer $k$ such that $\alpha\in \mathbb{F}_{q^k}$. Also, for any positive integer $d$, we have that $\alpha\in \mathbb{F}_{q^d}$ if and only if $$L_{x^d-1}(\alpha)=\alpha^{q^d}-\alpha=0,$$ that is, $m_{\alpha, q}(x)=h(x)$ divides $x^d-1$. Now, the result follows from definition of $\mathcal O(x, h)$. \item This item follows by direct calculations. \end{enumerate} For the proof of the last statement, see Theorem 11 of~\cite{O34}. \end{proof} \subsection{Permutation polynomials}\label{subsec:permut} Here we present some well-known classes of permutations of finite fields. 
\begin{enumerate}[$\bullet$] \item \emph{Monomials}: The polynomial $x^n$ is a permutation of $\mathbb{F}_{q^k}$ if and only if $$\gcd(n, q^k-1)=1.$$ \item \emph{Linearized}: If $g(x) \in \mathbb{F}_{q}[x]$ with $g(x) =\sum_{i=0}^{k-1}a_ix^i$ and $L_g(x)=\sum_{i=0}^{k-1}a_ix^{q^i}$ is the \emph{$q$-associate} of $g$, then $L_g$ is a permutation of $\mathbb{F}_{q^k}$ if and only if $$\gcd(g(x), x^k-1)=1.$$ \item \emph{M\"{o}bius}: Let $\mathrm{GL}_2(\mathbb{F}_q)$ be the group of all invertible $2\times 2$ matrices over $\mathbb{F}_q$. Given $A\in \mathrm{GL}_2(\mathbb{F}_q)$ with $A=\left(\begin{matrix} a&b\\ c&d \end{matrix}\right) $, we set $$\tau_A:\mathbb{F}_{q^k}\to \mathbb{F}_{q^k},$$ where $\tau_A(z)=\frac{az+b}{cz+d}$ if $c=0$, or if $c\ne 0$ and $z\ne -d/c$, and $\tau_A(-d/c)=a/c$ if $c\ne 0$. Then $\tau_A$ is a permutation of $\mathbb{F}_{q^k}$ for any $k\ge 1$. The map $\tau_A(z)$ admits a polynomial representation: for any $z\in \mathbb{F}_{q^k}$, $\tau_A(z)=(az+b)d^{-1}$ if $c=0$ and $\tau_A(z)=(az+b)\left[(cz+d)^{q^k-2}+\varepsilon\cdot \frac{z^{q^k}-z}{z+d/c}\right]$ if $c\ne 0$, where $\varepsilon=\frac{a}{\det A}$. \end{enumerate} \subsection{Notations} We want to emphasize that the following notations and consequences are frequently employed in this paper. \begin{enumerate}[$\bullet$] \item If $a, n$ are positive integers such that $\gcd(a, n)=1$, then $\mathrm{ord}_{n}a:=\min\{r>0\,|\, a^r\equiv 1\pmod n\}$. \item $\overline{\mathbb{F}}_q$ denotes the algebraic closure of $\mathbb{F}_q$. \item For $\alpha\in \overline{\mathbb{F}}_q^{*}$, $\mathrm{ord}(\alpha):=\min\{d>0\,|\, \alpha^d=1\}$ is the multiplicative order of $\alpha$. \item $\mathcal I_k$ denotes the set of irreducible monic polynomials of degree $k$ over $\mathbb{F}_q$. \item For $\alpha\in \overline{\mathbb{F}}_q$, $m_{\alpha}(x)$ is the minimal polynomial of $\alpha$ over $\mathbb{F}_q$. \item $\deg(\alpha):= \deg(m_{\alpha})=\min\{s>0\,|\, \alpha\in \mathbb{F}_{q^s}\}$.
\item For $\alpha\in \overline{\mathbb{F}}_q$, the $\mathbb{F}_q$-order $m_{\alpha, q}(x)$ is the monic polynomial of lowest degree such that $\alpha$ is a root of its $q$-associate. \item For $\alpha\in \overline{\mathbb{F}}_q$, $\alpha$ is in $\mathbb{F}_{q^k}$ if and only if $m_{\alpha, q}(x)$ divides $x^k-1$. \item $\mathcal C_k$ denotes the set of elements $\alpha \in \overline{\mathbb{F}}_q$ such that $\deg(\alpha)=k$, i.e., $m_{\alpha}(x)\in \mathcal I_k$. \item $\mathbb G_k:=\{P\in \mathbb{F}_q[x]\,|\, P\; \text{is a permutation polynomial of}\,\, \mathbb{F}_{q^k}\, \text{and}\, \deg(P)<q^k\}.$ \end{enumerate} \section{On degree preserving maps} In this section, we provide the proofs of Theorems~\ref{thm:degree-preserve} and~\ref{thm:local}. We start with recalling some notations: for $\alpha\in \overline{\mathbb{F}}_q$, $\deg(\alpha):= \deg(m_{\alpha})=\min\{s>0\,|\, \alpha\in \mathbb{F}_{q^s}\}$. Moreover, for each positive integer $k$, $\mathcal C_k$ is the set of elements $\alpha \in \overline{\mathbb{F}}_q$ such that $\deg(\alpha)=k$, i.e., $m_{\alpha}\in \mathcal I_k$. In order to prove Theorem~\ref{thm:degree-preserve}, the following proposition is crucial. \begin{proposition}\label{prop:degree-preserving} Suppose that $F\in \mathbb{F}_q[x]$ is a polynomial of degree $d\ge 1$ such that its induced map $f\mapsto F\diamond f$ on $\mathcal I$ preserves the degree of the elements in $\mathcal I$. Then, for each $\alpha\in \overline{\mathbb{F}}_q$, the equation $F(x)=\alpha$ has exactly one solution $\gamma\in \overline{\mathbb{F}}_q$ with multiplicity $d$ and, in fact, $\deg(\gamma)=\deg(\alpha)$. In addition, for any positive integer $k$, the evaluation map induced by $F$ on $\mathbb{F}_{q^k}$ is a permutation and, in particular, the evaluation map induced by $F$ over the field $\overline{\mathbb{F}}_q$ is a permutation.
\end{proposition} \begin{proof} Let $j$ be any positive integer and, for each $\alpha\in \mathcal C_j$, let $C(\alpha, F)\subseteq \overline{\mathbb{F}}_q$ be the set of (distinct) solutions of $F(x)=\alpha$; note that $|C(\alpha, F)|\ge 1$. For each $\gamma\in C(\alpha, F)$, we have $F(\gamma)=\alpha$ hence $F\diamond m_{\gamma}(x)=m_{\alpha}(x)$. Since $F$ preserves degree, it follows that $\deg(\gamma)=\deg(\alpha)=j$, hence $\gamma\in \mathcal C_j$. In particular, for each $\alpha\in \mathcal C_j$, $C(\alpha, F)\subseteq \mathcal C_j$. It is straightforward to check that the sets $C(\alpha, F)$ are pairwise disjoint for distinct $\alpha$'s. Since $\bigcup_{\alpha\in \mathcal C_j}C(\alpha, F)\subseteq \mathcal C_j$ is a disjoint union of nonempty sets, it follows that $$|\mathcal C_j|\le \sum_{\alpha\in \mathcal C_j}|C(\alpha, F)|\le |\mathcal C_j|,$$ hence $|C(\alpha, F)|=1$ for any $\alpha$. In other words, $F(x)-\alpha$ has exactly one root $\gamma\in \mathcal C_j$ with multiplicity $d$. In particular, since $\mathbb{F}_{q^k}=\bigcup_{j|k}\mathcal C_j$ one can see that, for each $\gamma\in \mathbb{F}_{q^k}$, the equation $F(x)=\gamma$ has exactly one solution over $\overline{\mathbb{F}}_q$ and this solution lies in $\mathbb{F}_{q^k}$. Therefore, the evaluation map $a\mapsto F(a)$ on $\mathbb{F}_{q^k}$ (a finite set) is onto and so is a permutation. \end{proof} As follows, we provide a complete characterization of the polynomials $F$ satisfying the properties given in Proposition~\ref{prop:degree-preserving}. \begin{lemma}\label{lem:trivials} Suppose that $F\in \mathbb{F}_q[x]$ is a polynomial of degree $d\ge 1$ such that, for each $\alpha\in \overline{\mathbb{F}}_q$, the polynomial $F(x)-\alpha$ has a unique root in $\overline{\mathbb{F}}_q$ with multiplicity $d$. Then $F(x)=ax^{p^h}+b$ for some $a, b\in \mathbb{F}_q$ and some $h\ge 0$. \end{lemma} \begin{proof} Write $F(x)=\sum_{i=0}^{d}a_ix^i$ and let $p^h$ be the greatest power of $p$ that divides each index $i$ for which $a_i\ne 0$.
Hence $F=G^{p^h}$, where $G=\sum_{i=0}^sa_i^\prime x^i\in \mathbb{F}_q[x]$ is such that there exists at least one positive integer $1\le i\le s$ for which $a_i^\prime \ne 0$ and $i$ is not divisible by $p$. In particular $G'(x)$, the formal derivative of $G$, is not the zero polynomial. One can easily see that $G$ also satisfies the required properties of our statement. We shall prove that $G$ has degree $s=1$. For this, suppose that $s>1$ and, for each $\alpha\in \overline{\mathbb{F}}_q$, let $\gamma(\alpha)$ be the only root of $G(x)=\alpha$, hence $\gamma(\alpha)$ has multiplicity $s\ge 2$. In particular, $\gamma(\alpha)$ is a root of $\gcd(G(x)-\alpha, G'(x))$. This shows that $G'(x)$ vanishes at each element $\gamma(\alpha)$ with $\alpha\in \overline{\mathbb{F}}_q$. Of course, the set of elements $\gamma(\alpha)$ is infinite and so $G'$ is the zero polynomial, a contradiction. Therefore, $G$ has degree $s=1$ and so $G=Ax+B$, hence $F(x) =ax^{p^h}+b$ with $a=A^{p^h}$ and $b=B^{p^h}$. \end{proof} We observe that, if $F(x)=ax^{p^h}+b$ is a polynomial of degree $p^h\ge 1$ (hence $a\ne 0$), then $\deg(F\diamond f)=\deg(f)$ for any monic irreducible $f$. For this, suppose that $\deg(f)=k$ and let $\alpha\in \mathbb{F}_{q^k}$ be any root of $f$. It is straightforward to check that $F$ permutes the whole field $\overline{\mathbb{F}}_q$. In particular, since the compositions $F^{(n)}(\alpha)$ are in $\mathbb{F}_{q^k}$ (which is a finite set), there exists a positive integer $m$ such that $F^{(m)}(\alpha)=\alpha$. Therefore, if we set $f_0=f$ and, for $1\le i\le m$, $f_i=F\diamond f_{i-1}$, one has $f_m=F^{(m)} \diamond f=m_{F^{(m)}(\alpha)}=m_{\alpha}=f$. However, since $\deg(f)\le \deg(F\diamond f)$, it follows that $$k=\deg(f)\le \deg(f_1)\le \ldots \le \deg(f_m)=k,$$ and so $\deg(f_1)=\deg(F\diamond f)=k$. Combining this last argument with Proposition~\ref{prop:degree-preserving} and Lemma~\ref{lem:trivials}, we obtain the following theorem. 
\begin{theorem}\label{thm:degree-preserve} Let $\mathcal I$ be the set of all monic irreducible polynomials over $\mathbb{F}_q$ and let $F(x)\in \mathbb{F}_q[x]$ be a polynomial of degree $\ge 1$. Then the induced map $f\mapsto F\diamond f$ on $\mathcal I$ preserves the degree of any irreducible polynomial in $\mathcal I$ if and only if $F(x)=ax^{p^h}+b$ for some $a\in \mathbb{F}_q^*$, $b\in \mathbb{F}_q$, and $h\ge 0$. \end{theorem} Proposition~\ref{prop:degree-preserving} implies that if the map $f\mapsto F\diamond f$ on $\mathcal I$ is degree preserving then $F$ permutes the field $\overline{\mathbb{F}}_q$. In the following, we study maps induced by permutations of $\mathbb{F}_{q^k}$, which are not necessarily permutations of $\overline{\mathbb{F}}_q$. \begin{theorem}\label{thm:local} Let $k$ be a positive integer and $\mathcal I_k$ be the set of monic irreducible polynomials over $\mathbb{F}_q$ of degree $k$. Let $F(x)\in \mathbb{F}_q[x]$ be such that the evaluation map $c\mapsto F(c)$ of $F$ on $\mathbb{F}_{q^k}$ is a permutation. Then for any $f\in \mathcal I_k$, the polynomial $F\diamond f$ is also in $\mathcal I_k$, i.e., the restriction of the map $f\mapsto F\diamond f$ to the set $\mathcal I_k$ is a degree preserving map. Moreover, this restriction is also a permutation of the set $\mathcal{I}_k$. \end{theorem} We observe that, since $\mathcal I_k$ comprises the minimal polynomials of the elements in $\mathcal C_k$, it is sufficient to prove that $F$ permutes the set $\mathcal C_k$: in fact, if this occurs, $f\mapsto F\diamond f$ maps the set $\mathcal I_k$ to itself. In addition, if $f, g\in \mathcal I_k$ and $\alpha, \beta\in \mathcal C_k$ are roots of $f$ and $g$, respectively, then $F\diamond f=F\diamond g$ implies that $F(\alpha)$ and $F(\beta)$ have the same minimal polynomial.
In particular, $F(\alpha)=F(\beta)^{q^i}=F(\beta^{q^i})$ for some $i\ge 0$ and, since $F$ is a permutation of $\mathcal C_k$, it follows that $\alpha=\beta^{q^i}$ and so $\alpha$ and $\beta$ are conjugates. Therefore, $\alpha$ and $\beta$ have the same minimal polynomial over $\mathbb{F}_q$, i.e., $f=g$. In other words, $f\mapsto F\diamond f$ maps $\mathcal I_k$ (a finite set) into itself and is a one-to-one map, hence is a permutation. Next we show that $F$ permutes the set $\mathcal C_k$ and so we finish the proof of Theorem~\ref{thm:local}. \begin{proposition}\label{local-permut} Let $F\in \mathbb{F}_q[x]$ and let $k$ be a positive integer such that the evaluation map $c\mapsto F(c)$ of $F$ on $\mathbb{F}_{q^k}$ is a permutation. Then the restriction of this evaluation map to the set $\mathcal C_k$ is a permutation of $\mathcal C_k$. \end{proposition} \begin{proof} Let $p_1, \ldots, p_s$ be the distinct prime divisors of $k$, hence $$\bigcup_{i=1}^s \mathbb{F}_{q^{k/p_i}}=\mathbb{F}_{q^k}\setminus \mathcal C_k.$$ We observe that, for any positive integer $d$, $F(\mathbb{F}_{q^d})\subseteq \mathbb{F}_{q^d}$, hence $$\bigcup_{i=1}^s \mathbb{F}_{q^{k/p_i}}\supseteq F\left(\bigcup_{i=1}^s \mathbb{F}_{q^{k/p_i}}\right),$$ and, since $F$ is a permutation of $\mathbb{F}_{q^k}$, the previous inclusion is an equality of sets and then $F(\mathcal C_k)=\mathcal C_k$, i.e., $F$ permutes the set $\mathcal C_k$. \end{proof} \section{Permutations of irreducible polynomials} Theorem~\ref{thm:local} says that, for a permutation polynomial $F\in \mathbb{F}_q[x]$ of $\mathbb{F}_{q^k}$, the map $f\mapsto F\diamond f$ is a permutation of the set $\mathcal I_k$. In this context, it is interesting to study the permutations of $\mathbb{F}_{q^k}$ that are induced by polynomials in $\mathbb{F}_q[x]$. We first observe that two polynomials $F$ and $F_0$ induce the same evaluation map in $\mathbb{F}_{q^k}$ if and only if $F\equiv F_0\pmod{x^{q^k}-x}$.
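This congruence is easy to illustrate in the smallest case (a minimal sketch; the bit-mask encoding of $\mathbb{F}_4$ is ours, for illustration only): over $\mathbb{F}_4$ we have $a^4=a$ for every element $a$, so $x^4$ and $x$ induce the same evaluation map, in agreement with $x^4\equiv x\pmod{x^{q^k}-x}$ for $q=2$, $k=2$.

```python
# F_4 = F_2[t]/(t^2 + t + 1); elements encoded as 2-bit ints, t = 0b10.
def mul4(a, b):
    """Multiply in F_4: carry-less product reduced by t^2 + t + 1 (0b111)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b100:
            a ^= 0b111
    return r

# x^4 and x induce the same evaluation map on F_4: a^4 = a for every a.
for a in range(4):
    sq = mul4(a, a)
    assert mul4(sq, sq) == a
```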
We consider the following set $$\mathbb G_k:=\{P\in \mathbb{F}_q[x]\,|\, P\; \text{is a permutation polynomial of}\,\, \mathbb{F}_{q^k}\, \text{and}\, \deg(P)<q^k\}.$$ We shall prove that the set $\mathbb G_k$ has a group structure. First, we introduce a simple (but powerful) result. \begin{proposition}[Frobenius-Stable Polynomials]\label{prop:frobenius} Let $f\in \overline{\mathbb{F}}_q[x]$ be a polynomial of degree $n$. The following are equivalent. \begin{enumerate}[(i)] \item There exists a set $C\subseteq \overline{\mathbb{F}}_q$ with cardinality at least $n+1$ such that $$f(\alpha^q)=f(\alpha)^q,\; \text{for all}\; \alpha\in C.$$ \item The coefficients of $f$ lie in $\mathbb{F}_q$, i.e., $f\in \mathbb{F}_q[x]$. \end{enumerate} \end{proposition} \begin{proof} We observe that, if $f(x) \in \mathbb{F}_q[x]$, then $f(\alpha^q)=f(\alpha)^q$ for any $\alpha\in \overline{\mathbb{F}}_q$, so it suffices to prove that (i) implies (ii). For this, let $C$ be as above and write $f(x)=\sum_{i=0}^{n}a_ix^i$. Consider the polynomial $f^*(x)=\sum_{i=0}^{n}(a_i^q-a_i)x^i$. Since $f(\alpha^q)=f(\alpha)^q$ for any $\alpha\in C$, we see that $f^*$ vanishes at every element of $C$. However, since the degree of $f^*$ is at most $n$ and $C$ has at least $n+1$ elements, it follows that $f^* = 0$, i.e., $a_i=a_i^q$. In other words, $f$ is a polynomial with coefficients in $\mathbb{F}_q$. \end{proof} As a consequence of the previous proposition, we obtain the following result. \begin{corollary}\label{cor:group} The set $\mathbb G_k$ is a group with respect to the composition modulo $x^{q^k}-x$. \end{corollary} \begin{proof} Let $P, Q\in \mathbb G_k$. In particular, $P, Q\in \mathbb{F}_q[x]$ and so the reduction of $P\circ Q$ modulo $x^{q^k}-x$ is also a polynomial with coefficients in $\mathbb{F}_q$ and induces the same permutation as $P\circ Q$ on $\mathbb{F}_{q^k}$, i.e., $P\circ Q\in \mathbb G_k$. The identity element is the trivial permutation $P(x)=x\in \mathbb{F}_{q}[x]$.
Observe that, from definition, any element of $\mathbb G_k$ has an inverse (not necessarily in $\mathbb G_k$). Let $P\in \mathbb G_k$ and let $P_0$ be its inverse; without loss of generality, $n=\deg P_0<q^k$. We just need to show that $P_0$ is a polynomial with coefficients in $\mathbb{F}_q$. Indeed, for any $\alpha\in \mathbb{F}_{q^k}$, we have $P(P_0(\alpha)^q) = P(P_0(\alpha))^q = \alpha^q = P(P_0(\alpha^q))$ and so $P_0(\alpha)^q = P_0(\alpha^q)$ since $P \in \mathbb G_k$. Since $|\mathbb{F}_{q^k}|=q^k\ge n+1$, from Proposition~\ref{prop:frobenius}, it follows that $P_0\in \mathbb{F}_q[x]$. \end{proof} If $\mathrm{Sym}(\mathcal I_k)$ denotes the symmetric group of the set $\mathcal I_k$, from Theorem~\ref{thm:local}, we have the group homomorphism $\tau_k:\mathbb G_k\to \mathrm{Sym}(\mathcal I_k)$ given as follows: for $P\in \mathbb G_k$, $\tau_k(P)=\varphi_P$, where $\varphi_P:\mathcal I_k\to \mathcal I_k$ is such that $\varphi_P(f)=P\diamond f$. The following theorem shows that this homomorphism is \emph{onto} and, in particular, this implies that any permutation of the set $\mathcal I_k$ can be viewed as a map $f\mapsto P\diamond f$ for some $P\in \mathbb G_k$. \begin{theorem}\label{thm:onto} For any permutation $\sigma$ of the set $\mathcal I_k$, there exists an element $P\in \mathbb G_k$ such that $\sigma(f)=P\diamond f$ for any $f\in \mathcal I_k$. \end{theorem} \begin{proof} Let $f_1, \ldots, f_{n_k}$ be a list of all elements in the set $\mathcal I_k$, where $n_k=|\mathcal I_k|$. For each $1\le i\le n_k$, let $\alpha_i$ be any root of $f_i$. We observe that $\mathcal C_k$ comprises the elements $\alpha_i^{q^j}$ with $1\le i\le n_k$ and $0\le j\le k-1$. Fix $\sigma\in \mathrm{Sym}(\mathcal I_k)$. Then $\sigma$ induces a permutation $\lambda$ of the set $\{1, \ldots, n_k\}$ such that $\sigma(f_i)=f_{\lambda(i)}$.
Let $P\in \mathbb{F}_{q^k}[x]$ be the polynomial of least degree such that $$P\left(\alpha_i^{q^j}\right)=\alpha_{\lambda(i)}^{q^j}, \,\text{for any}\; 1\le i\le n_k\; \text{and} \;0\le j\le k-1,$$ and $P(\beta)=\beta$ if $\beta\in \mathbb{F}_{q^k}\setminus \mathcal C_k$. Using \emph{Lagrange interpolation}, such a $P$ exists and has degree at most $q^k-1$. From definition, $P$ is a permutation of $\mathbb{F}_{q^k}$ and we can verify that $P(\alpha^q)=P(\alpha)^q$ for any $\alpha\in \mathbb{F}_{q^k}$. From Proposition~\ref{prop:frobenius}, it follows that $P\in \mathbb{F}_q[x]$. In conclusion, $P$ is an element of $\mathbb G_k$. We observe that, for each $1\le i\le n_k$, $P(\alpha_i)=\alpha_{\lambda(i)}$ and so $$P\diamond f_i=m_{\alpha_{\lambda(i)}}(x)=f_{\lambda(i)}=\sigma(f_i).$$ \end{proof} We observe that, for polynomials $F\in \mathbb{F}_q[x]$ and $f\in \mathcal I_k$, $F\diamond f$ is the minimal polynomial of $F(\alpha)$, where $\alpha$ is any root of $f$. In particular, the computation of $F\diamond f$ requires the construction of the field $\mathbb{F}_{q^k}$. When $F=P$ is a permutation of $\mathbb{F}_{q^k}$, the following lemma gives an alternative way of obtaining $P\diamond f$. \begin{lemma}\label{lem:gcd} Let $P\in \mathbb G_k$ and let $Q\in \mathbb G_k$ be the inverse of $P$. For any $f\in \mathcal I_k$, the polynomial $f(Q(x))$ is such that any of its irreducible factors over $\mathbb{F}_q$ has degree divisible by $k$. Additionally, $P\diamond f$ is the only irreducible factor of degree $k$ of $f(Q(x))$ (possibly with multiplicity greater than one). In particular, \begin{equation}\label{eq:gcd-action-0}P\diamond f(x)=\gcd(f(Q(x)), x^{q^k}-x).\end{equation} \end{lemma} \begin{proof} Let $g$ be any irreducible factor of $f(Q(x))$ and let $\beta\in \overline{\mathbb{F}}_q$ be any element such that $g(\beta)=0$. We observe that $f(Q(\beta))=0$ and so, there exists a root $\alpha\in \mathbb{F}_{q^k}$ of $f$ such that $Q(\beta)=\alpha$. 
The last equality says that $\beta$ is a root of $Q(x)-\alpha$. Since $\alpha$ is an element of degree $k$, we conclude that $\beta$ lies in an extension of $\mathbb{F}_{q^{k}}$, hence $\deg(\beta)=kd$ for some $d\ge 1$. Clearly $g$ is the minimal polynomial of $\beta$ over $\mathbb{F}_q$ and so $\deg(g)=kd$. In particular, if $\deg(g)=k$, then $\beta\in \mathbb{F}_{q^k}$ and, since $Q\in \mathbb G_k$ is a permutation of $\mathbb{F}_{q^k}$ with $Q(\beta)=\alpha$, it follows that $\beta=P(\alpha)$ and, from definition, $g=P\diamond f$. This shows that $P\diamond f$ is the only factor of degree $k$ of $f(Q(x))$. Since any other factor of $f(Q(x))$ has degree $kd$ for some $d>1$ and $x^{q^k}-x$ has no repeated irreducible factors, it follows that $P\diamond f=\gcd(f(Q(x)), x^{q^k}-x)$. \end{proof} Though the computation of GCDs in $\mathbb{F}_{q}[x]$ does not require the construction of the field $\mathbb{F}_{q^k}$, the explicit computation of the inverse of a permutation is, in general, a hard problem. For this reason, we introduce the following alternative operation. \begin{define} For an element $P\in \mathbb G_k$ and $f\in \mathcal I_k$, we set $$P\ast f=Q\diamond f,$$ where $Q\in \mathbb G_k$ is the compositional inverse of $P$. \end{define} One can easily see that, if $\alpha$ is any root of $f\in \mathcal I_k$, then $P\ast f=m_{\beta}$, where $\beta=Q(\alpha)$ is the only element in $\mathbb{F}_{q^k}$ such that $P(\beta)=\alpha$. For an element $P\in \mathbb G_k$, the compositions $P\ast f$ and $P\diamond f$ are \emph{dual}, in the sense that $f=P\diamond (P\ast f)=P\ast (P\diamond f)$. The advantage is that the composition $P\ast f$ can be easily computed; from Lemma~\ref{lem:gcd}, it follows that \begin{equation}\label{eq:gcd-action}P\ast f(x)=\gcd(f(P(x)), x^{q^k}-x),\end{equation} for any $P\in \mathbb G_k$ and $f\in \mathcal I_k$.
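Since Eq.~\eqref{eq:gcd-action} involves only polynomial arithmetic over $\mathbb{F}_q$, it is straightforward to check on small cases by computer. The following sketch (in pure Python, with polynomials over $\mathbb{F}_2$ encoded as integer bitmasks; the encoding and helper names are ours, not part of the text) computes $P\ast f$ for $q=2$, $k=4$ and the permutation $P(x)=x^7$.

```python
# Sketch: evaluating P * f = gcd(f(P(x)), x^(q^k) - x) over F_2 for q = 2, k = 4.
# A polynomial over F_2 is encoded as an int whose bit i is the coefficient of
# x^i (this encoding and the helper names below are implementation choices).

def pdeg(a):
    """Degree of a polynomial (pdeg(0) = -1)."""
    return a.bit_length() - 1

def pmul(a, b):
    """Product in F_2[x] (carry-less multiplication)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pmod(a, b):
    """Remainder of a modulo b in F_2[x]."""
    while a and pdeg(a) >= pdeg(b):
        a ^= b << (pdeg(a) - pdeg(b))
    return a

def pgcd(a, b):
    """Euclidean algorithm; over F_2 the gcd is automatically monic."""
    while b:
        a, b = b, pmod(a, b)
    return a

def psubst(f, g):
    """Composition f(g(x)) in F_2[x], by Horner's rule."""
    r = 0
    for i in range(pdeg(f), -1, -1):
        r = pmul(r, g) ^ ((f >> i) & 1)
    return r

def star(P, f, q=2, k=4):
    """P * f = gcd(f(P(x)), x^(q^k) - x); over F_2, x^16 - x = x^16 + x."""
    return pgcd(psubst(f, P), (1 << q**k) | 2)

# The three irreducible quartics over F_2 and the permutation P(x) = x^7:
f1, f2, f3 = 0b10011, 0b11001, 0b11111  # x^4+x+1, x^4+x^3+1, x^4+x^3+x^2+x+1
P7 = 1 << 7
```

Running \texttt{star} on the three irreducible quartics over $\mathbb{F}_2$ reproduces the cycle structure computed in Example~\ref{ex:1} below.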
\begin{remark} We emphasize that there is no loss of generality in working with the compositions $P\ast f$: in fact, the dynamics of $P$ on $\mathcal I_k$ via the compositions $P\ast f$ and $P\diamond f$ are essentially the same. \end{remark} \noindent For the rest of this paper, we consider the maps $f\mapsto P\ast f$ induced by elements $P\in \mathbb G_k$ on the set $\mathcal I_k$. \begin{example}\label{ex:1} Let $q=2$ and $k=4$. It is direct to verify that $P=x^7$ is a permutation of $\mathbb{F}_{2^4}=\mathbb{F}_{16}$. We have $\mathcal I_4=\{f_1, f_2, f_3\}$, where $f_1(x)=x^4+x+1, f_2(x)=x^4+x^3+1$ and $f_3(x)=x^4+x^3+x^2+x+1$. Using the formula given in Eq.~\eqref{eq:gcd-action}, we obtain $P\ast f_1=f_2$, $P\ast f_2=f_1$ and $P\ast f_3=f_3$. This corresponds to the permutation of three symbols with cycle decomposition $(1\, 2)\, (3)$. \end{example} \begin{example}\label{ex:2} Let $q=3$ and $k=3$. It is direct to verify that, for $h=x+1$, its $q$-associate $P=L_h=x^3+x$ is a permutation of $\mathbb{F}_{3^3}=\mathbb{F}_{27}$. We have $\mathcal I_3=\{f_1, \ldots, f_8\}$, where $f_1(x)=x^3-x+1, f_2(x)=x^3-x-1, f_3(x)=x^3+x^2-1, f_4(x)=x^3+x^2+x-1, f_5(x)=x^3+x^2-x+1, f_6(x)=x^3-x^2+1, f_7(x)=x^3-x^2+x+1$ and $f_8(x)=x^3-x^2-x-1$. Using the formula given in Eq.~\eqref{eq:gcd-action}, we obtain $P\ast f_1=f_2$, $P\ast f_2=f_1$, $P\ast f_3=f_8$, $P\ast f_8=f_4$, $P\ast f_4=f_6$, $P\ast f_6=f_5$, $P\ast f_5=f_7$ and $P\ast f_7=f_3$. This corresponds to the permutation of eight symbols with cycle decomposition $(1\, 2)\, (3\, 8\, 4\, 6\, 5\, 7)$. \end{example} \subsection{M\"{o}bius maps on irreducible polynomials}\label{subsec:Mobius-action} If $\gamma(x)=\frac{ax+b}{cx+d}$ is a M\"{o}bius map with $a, b, c, d\in \mathbb{F}_q$ and $ad-bc\ne 0$, then $\gamma$ is a permutation of $\mathbb{F}_{q^k}$ for any $k$ (with a suitable evaluation at the possible pole of $\gamma(x)$): its inverse is given by $\gamma^{-1}(x)=\frac{dx-b}{a-cx}$.
If $f\in \mathcal I_k$ with $k\ge 2$ and $\alpha\in \mathbb{F}_{q^k}\setminus \mathbb{F}_q$ is any root of $f$, the minimal polynomial of $\gamma^{-1}(\alpha)=\frac{d\alpha-b}{a-c\alpha}$ over $\mathbb{F}_q$ equals $c_{f}(cx+d)^{k}f\left(\frac{ax+b}{cx+d}\right)$, where $c_f$ is the only nonzero element of $\mathbb{F}_{q}$ such that $c_{f}(cx+d)^{k}f\left(\frac{ax+b}{cx+d}\right)$ is a monic polynomial. In particular, \begin{equation}\label{eq:mobius-map}\gamma \ast f=c_{f}(cx+d)^{k}f\left(\frac{ax+b}{cx+d}\right).\end{equation} M\"{o}bius transformations on irreducible polynomials of degree $k\ge 2$ like the one given in Eq.~\eqref{eq:mobius-map} are considered in \cite{ST12}. \section{Invariant theory for the maps $f\mapsto P\ast f$} In this section, we provide a general invariant theory on the dynamics of $P\in \mathbb G_k$ on $\mathcal I_k$ via the map $f\mapsto P\ast f$. The study of \emph{fixed points} plays a main role in the theory of dynamics. When considering dynamics on finite sets, the number of fixed points is frequently considered. Throughout this section, $k$ is a fixed positive integer. \begin{define}Given $P\in \mathbb G_k$, $\mathcal C_P=\{f\in \mathcal I_k\,|\, P\ast f=f\},$ is the set of fixed points of $\mathcal I_k$ by $P$ and $n_P=|\mathcal C_P|$ is the number of fixed points.\end{define} In the following theorem, we give a simple characterization of polynomials $f\in \mathcal I_k$ that are fixed by an element $P\in \mathbb G_k$. \begin{theorem}\label{thm:1} For $f\in \mathcal I_k$ and $P\in \mathbb G_k$, the following are equivalent: \begin{enumerate}[(i)] \item $P\ast f=f$, i.e., $f\in \mathcal C_P$; \item $f(x)$ divides $x^{q^i}-P(x)$ for some $0\le i\le k-1$. \end{enumerate} In particular, if we set $R_d[P](x)=\prod_{i=0}^{d-1}(x^{q^i}-P(x))$, then $P\ast f=f$ if and only if $f$ divides $R_k[P]$. 
\end{theorem} \begin{proof} Let $\alpha\in \mathbb{F}_{q^k}$ be any root of $f$ and let $\beta\in \mathbb{F}_{q^k}$ be the element such that $P(\beta)=\alpha$. We observe that $P\ast f=f$ if and only if the minimal polynomial of $\beta$ is $f$, i.e., $f(\beta)=0$. In other words, $P\ast f=f$ if and only if $\beta=\alpha^{q^j}$ for some $0\le j\le k-1$. The latter holds if and only if $P(\alpha^{q^j})=\alpha$, i.e., $f$ divides $x^{q^{i}}-P(x)$ for the index $0\le i\le k-1$ with $i\equiv k-j\pmod k$. \end{proof} \begin{define} For each positive integer $d$, we set $$\Psi_d(x)=\prod_{f\in \mathcal I_d}f(x)=\prod_{\alpha\in \mathcal C_d}(x-\alpha).$$ \end{define} It is clear that $x^{q^k}-x=\prod_{d|k}\Psi_d(x)$. From the previous theorem, a general implicit formula for the number of fixed points can be derived. \begin{theorem} For any polynomial $P$ and integers $i\ge 0$, $d\ge 1$, set $$R_d[P](x)=\prod_{i=0}^{d-1}(x^{q^i}-P(x)), \; g_P^{(i, d)}(x)=\gcd(x^{q^i}-P(x), x^{q^d}-x),$$ and $h_P^{(d)}(x)=\gcd(R_d[P](x), \Psi_d(x))$. If $P\in \mathbb G_k$, the number $n_P$ of fixed points of $\mathcal I_k$ by $P$ satisfies the following identity \begin{equation}\label{fix-points}n_P=\frac{1}{k}\deg(h_P^{(k)})=\sum_{d|k}\frac{\mu(k/d)}{d}\sum_{i=0}^{d-1}\deg(g_P^{(i, d)}).\end{equation} \end{theorem} \begin{proof} Since $\Psi_k$ is squarefree, the equality $n_P=\frac{1}{k}\deg(h_P^{(k)})$ follows directly from Theorem~\ref{thm:1}. Notice that any irreducible factor of $g_P^{(i, k)}$ is of degree a divisor of $k$. Let $F$ be any monic irreducible polynomial of degree $d$, where $d$ is a divisor of $k$. We observe that $F$ divides $g_P^{(i, k)}$ if and only if $F$ divides $g_P^{(i, d)}$. In this case, the minimal $i_0$ with such property satisfies $i_0\le d-1$, i.e., $F$ divides $R_d[P]$. Additionally, if such $i_0$ exists, $F$ divides $g_P^{(j, k)}$ with $0\le j\le k-1$ if and only if $j=i_0+ds$ with $0\le s< \frac{k}{d}$.
Indeed, $F$ divides $x^{q^{j}-q^{i_0}}-1$ if and only if $F$ divides $x^{q^{j-i_0}-1}-1$, or $x^{q^{j-i_0}}-x$. Because $F$ is irreducible and has degree $d$, we must have $j\equiv i_0\pmod d$. In particular, if $F$ is an irreducible polynomial of degree $d$ and divides $R_k[P]$, then $F^{k/d}$ is the greatest power of $F$ that divides $R_k[P]$; this implies that $$\prod_{i=0}^{k-1}\gcd(x^{q^i}-P(x), \Psi_d(x))=\gcd\left(\prod_{i=0}^{d-1}(x^{q^i}-P(x)), \Psi_d(x)\right)^{k/d}=(h_P^{(d)}(x))^{k/d}.$$ Therefore, since $x^{q^k}-x=\prod_{d|k}\Psi_d(x)$ is squarefree, the following holds: \begin{equation}\label{mobius}\prod_{i=0}^{k-1}g_P^{(i, k)}(x)=\prod_{i=0}^{k-1}\prod_{d|k}\gcd(x^{q^i}-P(x), \Psi_d(x))=\prod_{d|k}(h_P^{(d)}(x))^{k/d}.\end{equation} We observe that Eq.~\eqref{mobius} holds for any positive integer $k$ and any polynomial $P$. For a positive integer $s$, we set $\mathcal L(s)=\frac{1}{s}\sum_{i=0}^{s-1}\deg(g_P^{(i, s)})$ and $\mathcal M(s)=\frac{1}{s}\deg (h_P^{(s)})$. Taking degrees on Eq.~\eqref{mobius}, we see that $\mathcal L(k)=\sum_{d|k}\mathcal M(d)$ for any positive integer $k$. From the \emph{M\"{o}bius inversion formula}, it follows that $$n_P=\frac{1}{k}\deg(h_P^{(k)})=\mathcal M(k)=\sum_{d|k}\mathcal L(d)\cdot \mu(k/d).$$ \end{proof} When the permutation $P$ is a monomial or a linearized polynomial, Eq.~\eqref{fix-points} can be fairly simplified. \begin{corollary}\label{cor:fix-points-lin-monomial} The following hold: \begin{enumerate}[(i)] \item If $P(x)=x^n$ is a permutation polynomial of $\mathbb{F}_{q^k}$, i.e., $\gcd(n, q^k-1)=1$, then \begin{equation}\label{eq:monomial-fix}n_P=\varepsilon(k)+\sum_{d|k}\frac{\mu(k/d)}{d}\sum_{i=0}^{d-1}\gcd(q^i-n, q^d-1),\end{equation} where $\varepsilon(k)=0$ if $k\ne 1$ and $\varepsilon(1)=1$. \item Let $h(x)\in \mathbb{F}_q[x]$ with $h(x)=\sum_{i=0}^{k-1}a_ix^i$ be a polynomial such that $L_h(x)=\sum_{i=0}^{k-1}a_ix^{q^i}$ is a permutation of $\mathbb{F}_{q^k}$, i.e., $\gcd(h(x), x^k-1)=1$. 
For $P=L_h$, we have \begin{equation}\label{eq:linearized-fix}n_P=\sum_{d|k}\frac{\mu(k/d)}{d}\sum_{i=0}^{d-1}q^{r_{i, d}},\end{equation} where $r_{i, d}=\deg(\gcd(x^i-h, x^d-1))$. \end{enumerate} \end{corollary} \begin{proof} Applying Eq.~\eqref{fix-points}, we just need to compute $\deg(g_P^{(i, d)})$ explicitly. \begin{enumerate}[(i)] \item We observe that, for $P=x^n$, the following holds: $$g_P^{(i, d)}(x)=\gcd(x^{q^i}-x^n, x^{q^d}-x)=x\cdot \gcd(x^{|q^i-n|}-1, x^{q^d-1}-1)=x\cdot (x^{\gcd(q^i-n, q^d-1)}-1).$$ Therefore, $\deg(g_P^{(i, d)})=1+\gcd(q^i-n, q^d-1)$. To finish the proof, we observe that $\sum_{d|k}\mu(\frac{k}{d})=\varepsilon(k)$ for any positive integer $k$. \item From Lemma~\ref{lem:linearized-properties}, if $f$ and $g$ are polynomials with coefficients in $\mathbb{F}_q$, then $\gcd(L_f, L_g)=L_{\gcd(f, g)}$. In particular, for $P=L_h$, we have $$g_P^{(i, d)}(x)=\gcd(x^{q^i}-L_h, x^{q^d}-x)=\gcd(L_{x^i-h}, L_{x^d-1})=L_{\gcd(x^i-h, x^d-1)},$$ and so $\deg(g_P^{(i, d)})=q^{r_{i, d}}$, where $r_{i, d}=\deg(\gcd(x^i-h, x^d-1))$. \end{enumerate} \end{proof} \begin{example} We consider the monomial permutation polynomial $P(x)=x^7$ over $\mathbb{F}_{2^4}=\mathbb{F}_{16}$. From the previous corollary, the number of fixed points of $P$ is $$n_P=\sum_{d|4}\frac{\mu(4/d)}{d}\sum_{i=0}^{d-1}(\gcd(2^i-7, 2^d-1)+1)=1,$$ as expected by Example~\ref{ex:1}. \end{example} In the following proposition we show that, in some special extensions of finite fields, the number of fixed points can be given explicitly when considering monomial and linearized permutations.
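The right-hand side of Eq.~\eqref{eq:monomial-fix} is elementary to evaluate. The sketch below (pure Python; the function names and the use of exact rational arithmetic are our choices, not part of the text) recomputes the fixed-point count of the example above and, as a sanity check, recovers $|\mathcal I_4|=3$ for the identity permutation $P=x$, which fixes every polynomial.

```python
# Sketch: evaluating Eq. (eq:monomial-fix), the number of degree-k irreducible
# polynomials over F_q fixed by the monomial permutation x^n.
from fractions import Fraction
from math import gcd

def mobius(m):
    """Moebius function mu(m), computed by trial division."""
    val, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:   # square factor: mu vanishes
                return 0
            val = -val
        p += 1
    if m > 1:
        val = -val
    return val

def n_fixed_monomial(q, n, k):
    """n_P = eps(k) + sum_{d|k} mu(k/d)/d * sum_{0<=i<d} gcd(q^i - n, q^d - 1)."""
    total = Fraction(1 if k == 1 else 0)   # the eps(k) term
    for d in range(1, k + 1):
        if k % d:
            continue
        inner = sum(gcd(abs(q**i - n), q**d - 1) for i in range(d))
        total += Fraction(mobius(k // d), d) * inner
    return int(total)   # the sum is always an integer
```

For instance, \texttt{n\_fixed\_monomial(2, 7, 4)} returns $1$, matching the example above.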
\begin{proposition} The following hold: \begin{enumerate}[(i)] \item If $k$ and $r:= \frac{q^k-1}{q-1}$ are prime numbers and $P(x)=x^n\in \mathbb G_k$ is a permutation polynomial of $\mathbb{F}_{q^k}$, then the number of fixed points of $\mathcal I_k$ by $P$ is \begin{eqnarray*} n_P = \left\{ \begin{array}{ll} \frac{(r-1)}{k} \gcd(n-1, q-1), & \text{if~} n\equiv q^i ~(\bmod~{r}) \text{~for some~} 0 \leq i \leq k-1; \\ 0, & \text{otherwise}. \end{array} \right. \end{eqnarray*} \item If $k$ is a prime number, $T(x):=\frac{x^k-1}{x-1}=x^{k-1}+\cdots+x+1$ is an irreducible polynomial and $P(x)=L_f(x) \in \mathbb G_k$ is a permutation polynomial of $\mathbb{F}_{q^k}$ with $f(x)\in \mathbb{F}_q[x]$, then we have {\small \begin{eqnarray*} n_P = \left\{ \begin{array}{ll} \frac{q^2-q}{2}=|\mathcal I_2|, & \text{if~} q \text{~is even}, k = 2 \text{ and } f(x) = 1 \text{ or } x; \\ \frac{q^k-q}{k}=|\mathcal I_k|, & \text{if~} q \text{~is odd or~} k \neq 2, \text{ and } f(x) = x^i \text{~for~} 0 \leq i \leq k-1; \\ \frac{q^{k-1}-1}{k}, & \text{if~} q \text{~is odd or~} k \neq 2, \text{ and } f(x) = a T(x) + x^i \text{~for~} 0 \leq i \leq k-1, a \in \mathbb{F}_q^*; \\ 0, & \text{otherwise}. \end{array} \right. \end{eqnarray*} } \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate}[(i)] \item Since $k\ge 2$ and $r$ is prime, we have $r\ge q+1>q-1$ and so $1=\gcd(r, q-1)=\gcd(k, q-1)$. Since $P=x^n\in \mathbb G_k$ is a monomial permutation, $\gcd(n, q^k-1)= 1$ and $n<q^k$. In particular, $n=ar+b$ with $a\le q-1$ and $b<r$. Set $s=\gcd(n-1, q-1)$ and, for each integer $0\le i\le k-1$, set $s_i=\gcd(q^i-n, r)$. Therefore, $\gcd(q^i-n, q^k-1)=\gcd(q^i-n, r)\cdot \gcd(q^i-n, q-1)=s_i\cdot \gcd(n-1, q-1)=s_is$.
Since $k$ is prime, from Eq.~\eqref{eq:monomial-fix}, we obtain the following equality: $$n_P=-s+\frac{1}{k}\sum_{i=0}^{k-1}s_is=\frac{s}{k}\sum_{i=0}^{k-1}(s_i-1).$$ We first observe that if $N$ is a positive integer that divides $s_i$ and $s_j$ with $i\ne j$, then $N$ divides $q^i-q^j$, where $|q^i-q^j|\le q^{k-1}-1<r$. In particular, for at most one index $0\le i\le k-1$, we have $s_i\ne 1$. If $s_i=1$ for every $0\le i\le k-1$, the previous equality yields $n_P=0$. If there exists $0\le i\le k-1$ such that $s_i\ne 1$, then $q^i-n$ is divisible by $r$. However, since $n=ar+b$, it follows that $q^i-b$ is divisible by $r$. Because $0\le q^i, b<r$, the latter implies $q^i-b=0$. Hence $n=ar+q^i=a\cdot \frac{q^k-1}{q-1}+q^i$ and, in this case, $s_j=1$ if $j\ne i$ and $s_i=r$. In addition, $s=\gcd(n-1, q-1)=\gcd(ar+q^i-1, q-1)=\gcd(ar, q-1)=\gcd(a, q-1)$, since $\gcd(r, q-1)=1$. Therefore, $n_P=\frac{s}{k}(r-1)=\gcd(a, q-1)\cdot \frac{q^k-q}{k(q-1)}$. \item We split the proof into cases. \begin{itemize}\item If $q$ is even and $k=2$, then $x-1=x+1$ and $x^2-1=(x+1)^2$. Since $P=L_f\in \mathbb G_k=\mathbb G_2$, $\deg(L_f)<q^2$ and so $f$ is a polynomial of degree at most one. If $G=\gcd(f(x)-1, x+1), G_1=\gcd(f(x)-1, (x+1)^2)$ and $G_2=\gcd(f(x)-x, (x+1)^2)$ are polynomials of degrees $m, m_1$ and $m_2$, respectively, from Eq.~\eqref{eq:linearized-fix}, we obtain the following equality: $$n_P=-q^m+\frac{q^{m_1}+q^{m_2}}{2}.$$ Since $f(x)-x\equiv f(x)-1\pmod {x+1}$, one can see that the polynomials $G, G_1$ and $G_2$ are all equal to $1$ or all distinct from $1$. In particular, the numbers $m, m_1$ and $m_2$ are all zero or all nonzero. If they are all zero, $n_P=0$. Suppose now that $m, m_1$ and $m_2$ are not zero. Since $f$ has degree at most one, $m, m_1$ and $m_2$ are at most one unless $f=1$ or $f=x$ and, in this case, $m=1$ and $\{m_1, m_2\}=\{1, 2\}$ (in some order). For $m=m_1=m_2=1$, we obtain $n_P=0$. Otherwise, $n_P=-q+\frac{q^2+q}{2}=\frac{q^2-q}{2}=|\mathcal I_2|$.
\item If $q$ is odd or $k\ne 2$, we observe that $T(x)=\frac{x^k-1}{x-1}$ is divisible by $x-1$ if and only if $T(1)=k$ vanishes in $\mathbb{F}_q$, i.e., $k$ is divisible by $p$, the characteristic of $\mathbb{F}_{q}$. Since $k$ is prime, this would give $k=p$, hence $T(x)=(x-1)^{p-1}$: for $p>2$ this polynomial is not irreducible, a contradiction, while $p=2$ forces $q$ even and $k=2$, which is excluded. Therefore $T(x)$ and $x-1$ are relatively prime. Since $P=L_f\in \mathbb G_k$, $\deg(L_f)<q^k$ and thus $f$ is a polynomial of degree at most $k-1$. Write $f=bT+R$, where $b\in \mathbb{F}_q$ and $R$ is a polynomial of degree at most $k-2$. Set $G=\gcd(f-1, x-1)$ and, for each integer $0\le i\le k-1$, set $G_i=\gcd(x^i-f, T)$. Therefore, $\gcd(x^i-f, x^k-1)=\gcd(x^i-f, T)\cdot \gcd(x^i-f, x-1)=G_i\cdot \gcd(f-1, x-1)=G_iG$. If we set $m=\deg(G)$ and $m_i=\deg(G_i)$, from Eq.~\eqref{eq:linearized-fix}, we obtain the following equality: $$n_P=-q^{m}+\frac{1}{k}\sum_{i=0}^{k-1}q^{m_i+m}=\frac{q^m}{k}\sum_{i=0}^{k-1}(q^{m_i}-1).$$ We observe that if $A(x)$ is a polynomial that divides $G_i$ and $G_j$ with $i\ne j$, then $A(x)$ divides $x^i-x^j$, which is a polynomial of degree at most $k-1$ (and distinct from $T(x)$ since $q$ is odd or $k\ne 2$). In particular, for at most one index $0\le i\le k-1$, we have $G_i\ne 1$, i.e., $m_i\ne 0$. If $G_i=1$ for every $0\le i\le k-1$, then $m_i=0$ and the previous equality yields $n_P=0$. If there exists $0\le i\le k-1$ such that $G_i\ne 1$, then $x^i-f$ is divisible by $T$: since $f=bT+R$, it follows that $x^i-R$ is divisible by $T$, where $R$ has degree at most $k-2$. If $i\le k-2$, then $x^i-R$ is a polynomial of degree at most $k-2$ and the latter implies $x^i-R=0$, i.e., $f(x)=bT(x)+x^i$. If $i=k-1$, then it follows that $x^{k-1}-R=T$, hence $f=bT+R=(b-1)T+x^{k-1}$. In particular, if $n_P\ne 0$, then $f$ is of the form $aT+x^i$ for some $0\le i\le k-1$ and, in this case, $m_j=0$ if $j\ne i$ and $m_i=k-1$. Therefore, $$n_P=q^{m}\cdot\frac{1}{k}\sum_{i=0}^{k-1}(q^{m_i}-1)=\frac{q^{m}(q^{k-1} -1) }{k}.$$ Now we just need to compute $m$.
If $f(x)=aT(x) +x^i$, then $f(1)=aT(1)+1=ak+1$. Hence $G=\gcd(f-1, x-1)=1$ unless $ak+1=1$, i.e., $ak=0$. We have seen that $k$ is not divisible by $p$, hence $a=0$. In conclusion, $m=\deg(G)$ is zero unless $a=0$ and, for $a=0$, we must have $m=1$. This concludes the proof. \end{itemize} \end{enumerate} \end{proof} \subsection{Invariants via special permutations} So far we have considered polynomials $f$ that are fixed by an element $P\in \mathbb G_k$, i.e., $P\ast f=f$. One may ask if there is any general characteristic of $P\ast f$ that is \emph{inherited} from $f$. In other words, we are interested in the algebraic polynomial structures that are preserved by the compositions $P\ast f$. As follows, we show that monomial and linearized permutations preserve certain algebraic structures of polynomials. \subsubsection{Multiplicative properties invariant by monomial permutations} Here we employ many definitions and notations introduced in Subsection~\ref{subsec:mult}. \begin{proposition} Suppose that $f\in \mathcal I_k$ with $f\ne x$ and $P=x^n$ is a monomial permutation over $\mathbb{F}_{q^k}$. Then $\mathrm{ord}(f)=\mathrm{ord}(P\ast f)$, i.e., the permutation of $\mathcal I_k$ induced by $P=x^n$ preserves the multiplicative order of polynomials. In particular, if $f$ is a primitive polynomial, then so is $P\ast f$. \end{proposition} \begin{proof} Let $\alpha$ be any root of $f$ and let $\beta$ be the unique element of $\mathbb{F}_{q^k}$ such that $\beta^n=\alpha$. We know that $\gcd(n, q^k-1)=1$ and then, from item (ii) of Lemma~\ref{thm:mult}, it follows that $\mathrm{ord}(\beta)=\mathrm{ord}(\alpha)$. To finish the proof, just recall that $P\ast f=m_{\beta}(x)$, $\mathrm{ord}(f)=\mathrm{ord}(\alpha)$ and $\mathrm{ord}(m_{\beta})=\mathrm{ord}(\beta)$. \end{proof} \begin{define} For an element $f\in \mathcal I_k$ such that $f(x)=x^k+\sum_{i=0}^{k-1}a_ix^i$, the element $a_0$ is the \emph{norm} of $f$. 
\end{define} We can easily verify that, if $\alpha$ is any root of $f$, then $(-1)^ka_0=\prod_{0\le i\le k-1}\alpha^{q^i}=\alpha^{\frac{q^k-1}{q-1}}$. In the following proposition, we show how the norms of $f$ and $P\ast f$ are related, in the case when $P$ is a monomial. \begin{proposition} Suppose that $f\in \mathcal I_k$, $P=x^n$ is a monomial permutation of $\mathbb{F}_{q^k}$ (i.e., $\gcd(n, q^k-1)=1$) and let $n_0<q^k-1$ be the positive integer such that $nn_0\equiv 1\pmod {q^k-1}$. If $f$ has norm $a\in \mathbb{F}_q$, then $x^n\ast f$ has norm $a^{n_0}$. In particular, the following hold. \begin{enumerate}[(i)] \item if $f$ has norm $1$, then so does $P\ast f$; \item if $n\equiv 1\pmod {q-1}$, the polynomials $f$ and $P \ast f$ have the same norm for every $f\in \mathcal I_k$. \end{enumerate} \end{proposition} \begin{proof} Let $\alpha$ be any root of $f$ and let $\beta$ be the unique element of $\mathbb{F}_{q^k}$ such that $\beta^n=\alpha$, hence $\beta=\alpha^{n_0}$. Hence $f=m_{\alpha}$ and $x^n\ast f=m_{\beta}$. With this notation, the norm $a$ of $f$ satisfies $(-1)^ka=\alpha^{\frac{q^k-1}{q-1}}$ and the norm $a'$ of $x^n\ast f$ satisfies $(-1)^ka'=\beta^{\frac{q^k-1}{q-1}}=\left((-1)^ka\right)^{n_0}$. If $q$ is even, signs can be ignored and $a'=a^{n_0}$; if $q$ is odd, then $q^k-1$ is even, hence $n_0$ is odd and $(-1)^{kn_0}=(-1)^k$, which again yields $a'=a^{n_0}$. From this fact, items (i) and (ii) are straightforward to check. \end{proof} In particular, the previous proposition entails that, if $P=x^n$ is a monomial permutation such that $n\equiv 1 \pmod {q-1}$, then $f$ and $P\ast f$ have the same constant coefficient, for any $f\in \mathcal I_k$. \subsubsection{Additive properties invariant by linearized permutations} Here we employ many definitions and notations introduced in Subsection~\ref{subsec:add}. \begin{proposition} Suppose that $f\in \mathcal I_k$ and $L_g$ is a permutation over $\mathbb{F}_{q^k}$, where $L_g$ is the $q$-associate of $g\in \mathbb{F}_q[x]$. Then $\mathrm{Ord}(f)=\mathrm{Ord}(L_g\ast f)$, i.e., the permutation of $\mathcal I_k$ induced by $L_g$ preserves the $\mathbb{F}_q$-order of polynomials.
In particular, if $f$ is a normal polynomial, then so is $L_g\ast f$. \end{proposition} \begin{proof} Let $\alpha$ be any root of $f(x)$ and let $\beta$ be the unique element of $\mathbb{F}_{q^k}$ such that $L_g(\beta)=\alpha$. We know that $\gcd(g, x^k-1)=1$ and then, from item (ii) of Theorem~\ref{thm:add}, it follows that $m_{\alpha, q}(x)=m_{\beta, q}(x)$. To finish the proof, we recall that $L_g\ast f=m_{\beta}(x)$, $\mathrm{Ord}(f)=m_{\alpha, q}$ and $\mathrm{Ord}(m_{\beta})=m_{\beta, q}$. \end{proof} \begin{define} For an element $f\in \mathcal I_k$ such that $f(x)=x^k+\sum_{i=0}^{k-1}a_ix^i$, $a_{k-1}\in \mathbb{F}_q$ is the \emph{trace} of $f(x)$. \end{define} We can easily verify that, if $\alpha$ is any root of $f$, then $-a_{k-1}=\sum_{0\le i\le k-1}\alpha^{q^i}=L_T(\alpha)$, where $L_T$ is the $q$-associate of $T=\frac{x^k-1}{x-1}$. In the following proposition, we show how the traces of $f$ and $P\ast f$ are related, in the case when $P$ is linearized. \begin{proposition} Suppose that $f\in \mathcal I_k$ and $L_g$ is a permutation of $\mathbb{F}_{q^k}$, where $L_g$ is the $q$-associate of $g=\sum_{i=0}^{k-1}b_ix^i\in \mathbb{F}_q[x]$. Set $g(1)=c\in \mathbb{F}_q$. If $f$ has trace $a\in \mathbb{F}_q$, then $L_g\ast f$ has trace $c^{-1}a$. In particular, the following hold: \begin{enumerate}[(i)] \item if $f$ has trace $0$, then so does $L_g\ast f$. \item if $g(1)=1$, then the polynomials $f$ and $L_g\ast f$ have the same trace for every $f\in \mathcal I_k$. \end{enumerate} \end{proposition} \begin{proof} Let $\alpha$ be any root of $f(x)$ and let $\beta$ be the unique element of $\mathbb{F}_{q^k}$ such that $L_g(\beta)=\alpha$. Hence $f=m_{\alpha}$ and $L_g\ast f=m_{\beta}$. With this notation, $f$ and $L_g\ast f$ have traces $-L_T(L_g(\beta))$ and $-L_T(\beta)$, respectively, where $T(x) =\frac{x^k-1}{x-1}$. If we set $L_T(\beta)=b\in \mathbb{F}_q$, we have that $b^{q^i}=b$ for any $i\ge 0$ and then $-a=L_T(L_g(\beta))=L_g(L_T(\beta))=L_g(b)=\sum_{i=0}^{k-1}b_ib^{q^i}=b\sum_{i=0}^{k-1}b_i=bg(1)=bc$. Since $L_g$ is a permutation of $\mathbb{F}_{q^k}$, $\gcd(g, x^k-1)=1$ and, in particular, $g$ is not divisible by $x-1$, so $g(1)=c\ne 0$. Therefore, the trace of $L_g\ast f$ equals $-b=c^{-1}a$. From this fact, items (i) and (ii) are straightforward to check. \end{proof} In particular, the previous proposition says that, if $P=L_g$ is a linearized permutation such that $g(1)=1$, then $f$ and $P\ast f$ have the same coefficient of $x^{k-1}$, for any $f\in \mathcal I_k$. \section{Dynamics of $\mathbb G_k$ on the sets $\mathcal C_k$ and $\mathcal I_k$} If $F:S\to S$ is any map from a set $S$ to itself, we can associate to it a directed graph with nodes $\{a;\, a\in S\}$ and directed edges $\{(a, F(a));\, a\in S\}$: such a graph is called the {\it functional graph} of $F$ over $S$. The functional graph encodes much information about the dynamics of the function on the set: for instance, the \emph{orbit} $\{a, F(a), F(F(a)), \ldots\}$ of a point $a\in S$ is described by a \emph{path} in the functional graph. We know that any element $P\in \mathbb G_k$ induces permutations on the sets $\mathcal C_k$ and $\mathcal I_k$: namely, $P$ induces the evaluation map $c\mapsto P(c)$ on $\mathcal C_k$ (see Proposition~\ref{local-permut}) and the map $f\mapsto P\ast f$ on $\mathcal I_k$. \begin{define} For $P\in \mathbb G_k$, $G(P, \mathcal C_k)$ and $G(P, \mathcal I_k)$ denote the functional graphs of the evaluation map of $P$ on $\mathcal C_k$ and the map $f\mapsto P\ast f$ on $\mathcal I_k$, respectively. \end{define} We observe that $\mathcal I_k$ can be described as the set of the minimal polynomials of the elements of $\mathcal C_k$. From this point of view, the following lemma shows how the graphs $G(P, \mathcal C_k)$ and $G(P, \mathcal I_k)$ interact. Its proof is straightforward, so we omit it.
\begin{lemma} For any $\alpha, \beta \in \mathcal C_k$ and any $P\in \mathbb G_k$, the following hold: \begin{enumerate}[(i)] \item if $(\beta, \alpha)\in G(P, \mathcal C_k)$, then $(m_{\alpha}, m_{\beta})\in G(P, \mathcal I_k)$, \item $(m_{\alpha}, m_{\beta})\in G(P, \mathcal I_k)$ if and only if $(\beta^{q^i}, \alpha)\in G(P, \mathcal C_k)$ for some $i\ge 0$ (or, equivalently, for some $0\le i\le k-1$). \end{enumerate} \end{lemma} Since the map induced by a permutation $P\in \mathbb G_k$ on the set $\mathcal C_k$ (resp. $\mathcal I_k$) is one-to-one, any $a\in G(P, \mathcal C_k)$ (resp. any $f\in G(P, \mathcal I_k)$) is periodic. \begin{define}\label{def:periods} Let $P\in \mathbb G_k$, $\alpha\in \mathcal C_k$ and $f\in \mathcal I_k$. \begin{enumerate}[(i)] \item $c_P(\alpha)$ is the least period of $\alpha$ by $P$, i.e., $c_P(\alpha):=\min\{n>0\,|\, P^{(n)}(\alpha)=\alpha\}$. \item $c_P^*(f)$ is the least period of $f$ by $P$, i.e., $c_P^{*}(f):=\min \{n>0\,|\, P^{(n)}\ast f=f\}$. \item $S_P$ (resp. $S_P^*$) is the spectrum of the \emph{distinct} period lengths $c_P(\alpha), \alpha\in \mathcal C_k$ (resp. $c_P^*(f), f\in \mathcal I_k$). \item $\mu_k(P)=\min\{j\,|\, j\in S_P\}$ and $\mu_k^*(P)=\min\{j\,|\, j\in S_P^*\}$ are the minimal period lengths, respectively. \end{enumerate} \end{define} In the following theorem, we present several relations between the numbers $c_P(\alpha)$ and $c_P^*(m_{\alpha})$. \begin{theorem}\label{thm:dynamics} For any $\alpha \in \mathcal C_k$ and $P\in \mathbb G_k$, the number $c_P^{*}(m_{\alpha})$ is divisible by $\frac{c_P(\alpha)}{\gcd(c_P(\alpha), k)}$ and divides $c_P(\alpha)$. In particular, for any $P\in \mathbb G_k$ and $\alpha\in \mathcal C_k$, the following hold: \begin{enumerate}[(i)] \item If $m_{\alpha}$ is fixed by $P$, then $c_P(\alpha)$ divides $k$, i.e., $P^{(k)}(\alpha)=\alpha$. 
\item The numbers $c_P(\alpha)$ and $c_P^*(m_{\alpha})$ satisfy the following inequality $$\frac{c_P(\alpha)}{k}\le c_P^{*}(m_{\alpha})\le c_P(\alpha).$$ \item If $\gcd(c_{P}(\alpha), k)=1$, then $ c_P^{*}(m_{\alpha})=c_P(\alpha)$. In particular, if $\gcd(j, k)=1$ for any $j\in S_P$, then $S_P=S_P^*$. \item The numbers $\mu_k^*(P)$ and $\mu_k(P)$ satisfy the following inequality $$\frac{\mu_k(P)}{k}\le \mu_k^*(P)\le \mu_k(P).$$ \end{enumerate} \end{theorem} \begin{proof} For $\alpha\in \mathcal C_k$, set $i=c_P(\alpha)$ and $j=c_P^*(m_{\alpha})$. Since $P^{(i)}(\alpha)=\alpha$, from definition, $P^{(i)}\ast m_{\alpha}=m_{\alpha}$, hence $j$ divides $i$. Also, if $P^{(j)}\ast m_{\alpha}=m_{\alpha}$, then $P^{(j)}(\alpha)=\alpha^{q^s}$ for some $0\le s\le k-1$. Since $P$ has coefficients in $\mathbb{F}_q$ and $\alpha\in \mathbb{F}_{q^k}$, it follows that $P^{(jk)}(\alpha)=\alpha^{q^{sk}}=\alpha$, hence $jk$ is divisible by $i$. If we write $i_0=\gcd(i, k)$, we see that $ji_0$ is also divisible by $i$, and so $j$ is divisible by $\frac{i}{i_0}$. This shows that $c_P^{*}(m_{\alpha})$ is divisible by $\frac{c_P(\alpha)}{\gcd(c_P(\alpha), k)}$ and divides $c_P(\alpha)$. The proofs of (i), (ii), (iii) and (iv) are straightforward. \end{proof} In particular, the previous theorem entails that the orbit length of a point $\alpha\in \mathcal C_k$ ``contracts'' by a factor of at most $k$ when passing to the orbit length of $m_{\alpha}\in \mathcal I_k$. As we will see, the bounds in item (ii) of Theorem~\ref{thm:dynamics} can be attained. \begin{example} For $q=2$ and $k=6$, we observe that the polynomial $P=x^{13}+1$ permutes $\mathbb{F}_{64}=\mathbb{F}_{2^6}$. The functional graphs $G(P, \mathcal C_6)$ and $G(P, \mathcal I_6)$ are shown in Fig.~\ref{fig:dynamics} and~\ref{fig:dynamics-2}, respectively. In particular, we see that each cycle of $G(P, \mathcal C_6)$ contracts by the factor $6=k$ to a cycle of $G(P, \mathcal I_6)$.
\end{example} \begin{figure}[H] \centering {\includegraphics[width=1.0\linewidth]{graph-1-dynamics.png}} {\caption{The functional graph $G(x^{13}+1, \mathcal C_6)$ for $q=2$.} \label{fig:dynamics}} \centering \end{figure} \begin{figure}[H] \centering {\includegraphics[width=0.4\linewidth]{graph-0-dynamics.png}} {\caption{The functional graph $G(x^{13}+1, \mathcal I_6)$ for $q=2$.} \label{fig:dynamics-2}} \centering \end{figure} Furthermore, we emphasize that the lower bound in item (ii) of Theorem~\ref{thm:dynamics} can always be attained. Fix $k\ge 1$ and consider $P=x^q$. Then $P\in \mathbb G_k$ and we can see that, for any $\alpha\in \mathcal C_k$, $c_P(\alpha)=k$. In addition, it follows from definition that $P\ast f=f$ for any $f\in \mathcal I_k$ and, in particular, $c_P^{*}(f)=1$ for any $f\in \mathcal I_k$. In other words, for any $\alpha\in \mathcal C_k$, the following holds $$1=c_P^*(m_{\alpha})=\frac{c_P(\alpha)}{k}.$$ \subsection{The functional graph of certain permutations} So far we have provided connections between the functional graphs of the maps induced by a permutation $P\in \mathbb G_k$ on the sets $\mathcal C_k$ and $\mathcal I_k$. The study of functional graphs of polynomial maps is usually made over the entire field: in \cite{A69}, monomial permutations are explored via the multiplicative order of finite fields and, in \cite{MV88}, the case of linearized permutations is considered, where the $\mathbb{F}_q$-order is employed. In general, there is no deterministic method to determine the functional graph of an arbitrary permutation (other than constructing the whole graph). The cases of monomial and linearized permutations are very special, since they are naturally connected to the algebraic structure of finite fields. Following the techniques employed in \cite{A69} and \cite{MV88}, we shall obtain the complete description of the dynamics of monomial and linearized permutations in the set $\mathcal C_k$. 
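Before turning to general formulas, the example of $P=x^{13}+1$ above can be revisited computationally. The sketch below (pure Python) models $\mathbb{F}_{64}$ as $\mathbb{F}_2[x]/(x^6+x+1)$ — the choice of irreducible modulus is ours and does not affect the abstract cycle structure — and checks that $c_P(\alpha)=6\cdot c_P^*(m_{\alpha})$ for every $\alpha\in \mathcal C_6$, i.e., that each cycle contracts by the factor $k=6$, in accordance with Figs.~\ref{fig:dynamics} and~\ref{fig:dynamics-2}.

```python
# Sketch: checking the cycle contraction of P(x) = x^13 + 1 on C_6 (q = 2).
# F_64 is modelled as F_2[x]/(x^6 + x + 1); elements are 6-bit integers.
MOD = (1 << 6) | 0b11   # x^6 + x + 1, irreducible over F_2 (our chosen modulus)

def mul(a, b):
    """Multiplication in F_64."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> 6:       # reduce modulo x^6 + x + 1
            a ^= MOD
        b >>= 1
    return r

def pw(a, e):
    """a^e in F_64, by square-and-multiply."""
    r = 1
    while e:
        if e & 1:
            r = mul(r, a)
        a = mul(a, a)
        e >>= 1
    return r

P = lambda a: pw(a, 13) ^ 1   # evaluation of P(x) = x^13 + 1

def degree(a):
    """Degree of a over F_2 (the length of its Frobenius orbit)."""
    d, b = 1, mul(a, a)
    while b != a:
        d, b = d + 1, mul(b, b)
    return d

C6 = [a for a in range(64) if degree(a) == 6]   # the 54 elements of degree 6

def c_P(a):
    """c_P(alpha): least n >= 1 with P^(n)(alpha) = alpha."""
    n, b = 1, P(a)
    while b != a:
        n, b = n + 1, P(b)
    return n

def c_P_star(a):
    """c_P^*(m_alpha): least n >= 1 with P^(n)(alpha) conjugate to alpha."""
    conj = {pw(a, 1 << j) for j in range(6)}
    n, b = 1, P(a)
    while b not in conj:
        n, b = n + 1, P(b)
    return n
```

Iterating over \texttt{C6} confirms the contraction by the factor $6$ claimed in the example.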
First, we fix some graph notation: for each positive integer $n$, $\mathrm{Cyc}(n)$ denotes the cyclic graph of length $n$. \begin{theorem}\label{thm:monomial-cycle} Let $P=x^n$ with $\gcd(n, q^k-1)=1$ be a permutation of $\mathcal C_k$. The following holds: \begin{equation*} G(x^n, \mathcal C_k)=\bigoplus_{\mathrm{ord}_eq=k}\frac{\varphi(e)}{\mathrm{ord}_en}\times \mathrm{Cyc}(\mathrm{ord}_en). \end{equation*} In particular, $\mu_k^*(x^n)\ge \min\limits_{\mathrm{ord}_eq=k}\frac{\mathrm{ord}_e n}{k}$. \end{theorem} \begin{proof} From Lemma~\ref{thm:mult}, $\mathcal C_k$ comprises the elements with multiplicative order $e$, where $e$ varies through the positive integers such that $k=\mathrm{ord}_eq$. In addition, for each positive integer $e$ with this property, $\mathcal C_k$ has $\varphi(e)$ elements with multiplicative order $e$. Therefore, for each positive integer $e$ such that $k=\mathrm{ord}_eq$, we just need to determine how the elements with multiplicative order $e$ are distributed in the graph $G(x^n, \mathcal C_k)$. Let $\alpha\in \mathcal C_k$ be an element with multiplicative order $e$. From definition, $\alpha$ belongs to a cycle of length $j$ if and only if $j$ is the least positive integer such that $P^{(j)}(\alpha)=\alpha$, i.e., $\alpha^{n^j}=\alpha$. The latter is equivalent to $n^j\equiv 1\pmod e$. From definition, $j=\mathrm{ord}_en$. In particular, we have shown that each element of multiplicative order $e$ belongs to a cycle of length $j=\mathrm{ord}_en$. Since $\gcd(n, q^k-1)=1$, $\alpha$ and $\alpha^n$ have the same multiplicative order (see item (ii) of Lemma~\ref{thm:mult}) and, in particular, this shows that elements in the same cycle of $G(x^n, \mathcal C_k)$ have the same multiplicative order. Therefore, the divisor $e$ of $q^k-1$ contributes $\frac{\varphi(e)}{\mathrm{ord}_en}$ copies of the cyclic graph of length $\mathrm{ord}_en$.
The inequality $\mu_k^*(x^n)\ge \min_{\mathrm{ord}_eq=k}\frac{\mathrm{ord}_e n}{k}$ follows directly from item (iv) of Theorem~\ref{thm:dynamics}. \end{proof} Employing similar ideas (with the multiplicative notions replaced by their additive analogues), we obtain the linearized case. \begin{theorem}\label{thm:linear-cycle} Let $P=L_f$ be a linearized permutation of $\mathcal C_k$, where $L_f$ is the $q$-associate of $f\in \mathbb{F}_q[x]$ and $\gcd(f, x^k-1)=1$. The following holds: \begin{equation}\label{eq:cycle-linear} G(L_f, \mathcal C_k)=\bigoplus_{\mathcal O(x, g)=k}\frac{\Phi_q(g)}{\mathcal O(f, g)}\times \mathrm{Cyc}(\mathcal O(f, g)), \end{equation} where $g\in \mathbb{F}_q[x]$ is monic. In particular, $\mu_k^*(L_f)\ge \min\limits_{\mathcal O(x, g)=k}\frac{\mathcal O(f, g)}{k}$. \end{theorem} \begin{proof} From Theorem~\ref{thm:add}, $\mathcal C_k$ comprises the elements with $\mathbb{F}_q$-order $g$, where $g$ varies through the monic polynomials in $\mathbb{F}_q[x]$ such that $k=\mathcal O(x, g)$. In addition, for each monic polynomial $g$ with this property, $\mathcal C_k$ has $\Phi_q(g)$ elements with $\mathbb{F}_q$-order $g$. Therefore, for each monic polynomial $g$ such that $k=\mathcal O(x, g)$, we only need to show how the elements with $\mathbb{F}_q$-order $g$ are distributed in the graph $G(L_f, \mathcal C_k)$. Let $\alpha\in \mathcal C_k$ be an element with $\mathbb{F}_q$-order $g$. We observe that $\alpha$ belongs to a cycle of length $j$ if and only if $j$ is the least positive integer such that $P^{(j)}(\alpha)=\alpha$, i.e., $L_{f^j}(\alpha)=\alpha$. The latter is equivalent to $L_{f^j-1}(\alpha)=0$, i.e., $f^j\equiv 1\pmod g$. From definition, $j=\mathcal O(f, g)$. In particular, we have shown that each element of $\mathbb{F}_q$-order $g$ belongs to a cycle of length $j=\mathcal O(f, g)$.
Since $\gcd(f, x^k-1)=1$, $\alpha$ and $L_f(\alpha)$ have the same $\mathbb{F}_q$-order (see item (ii) of Theorem~\ref{thm:add}) and, in particular, this shows that elements in the same cycle of $G(L_f, \mathcal C_k)$ have the same $\mathbb{F}_q$-order. Therefore, the monic divisor $g$ of $x^k-1$ contributes $\frac{\Phi_q(g)}{\mathcal O(f, g)}$ copies of the cyclic graph of length $\mathcal O(f, g)$. Inequality $\mu_k^*(L_f)\ge \min\limits_{\mathcal O(x, g)=k}\frac{\mathcal O(f, g)}{k}$ follows directly from item (iv) of Theorem~\ref{thm:dynamics}. \end{proof} \subsubsection{M\"{o}bius dynamics} When considering M\"{o}bius maps, the dynamics are quite simple. For $A\in \mathrm{GL}_2(\mathbb{F}_q)$ with $A=\left(\begin{matrix} a&b\\ c&d \end{matrix}\right)$, let $\gamma_{A}:\mathbb{F}_{q^k}\to \mathbb{F}_{q^k}$ be the map given by $\gamma_{A}(\alpha)=\frac{a\alpha+b}{c\alpha+d}$ (with a suitable evaluation at the possible pole of $\gamma_A$). In Subsection~\ref{subsec:permut} we have seen that $\gamma_A$ induces a permutation on $\mathbb{F}_{q^k}$ and, since $A\in \mathrm{GL}_2(\mathbb{F}_q)$, such a permutation admits a polynomial representation $P_{A}\in \mathbb{F}_q[x]$. Therefore, from Proposition~\ref{local-permut}, $P_{A}$ permutes the set $\mathcal C_k$ (or, equivalently, $\gamma_A$ permutes $\mathcal C_k$). We observe that the possible pole of $\gamma_A$ is in $\mathbb{F}_q$. For simplicity, we assume $k\ge 2$ (the case $k=1$ can be easily studied in detail). For $k\ge 2$, $\mathcal C_k$ contains no pole of any M\"{o}bius map with coefficients in $\mathbb{F}_q$. In particular, we can iterate the function $\gamma_A$ on $\mathcal C_k$ without any concern about the possible poles. From direct calculations, we have the equality of maps $\gamma_A^{(n)}=\gamma_{A^n}$ for every $n\in \mathbb N$. In particular, if $D=\mathrm{ord}([A])$ is the order of the class of $A$ in $\mathrm{PGL}_2(\mathbb{F}_q)$, $\gamma_A^{(D)}$ is the identity map.
Therefore, $\gamma_A^{(D)}(\alpha)=\alpha$ for any $\alpha\in \mathcal C_k$ and so every point has period dividing $D$. For $k\ge 3$, we observe that no point has strictly smaller period: if $\alpha\in \mathcal C_k$ with $k\ge 3$ and $\gamma_A^{(n)}(\alpha)=\alpha$, then $\gamma_{A^n}(\alpha)=\alpha$. If $[A^n]=[A]^n$ is not the identity $[I]$ of $\mathrm{PGL}_2(\mathbb{F}_q)$, the equality $\gamma_{A^n}(\alpha)=\alpha$ yields a nontrivial linear combination of $1, \alpha$ and $\alpha^2$ with coefficients in $\mathbb{F}_q$. This implies that $\alpha$ has minimal polynomial of degree at most $2$, contradicting $k\ge 3$. All in all, the previous observations easily imply the following result. \begin{theorem} Let $k\ge 3$ be a positive integer. For $A\in \mathrm{GL}_2(\mathbb{F}_q)$ with $A=\left(\begin{matrix} a&b\\ c&d \end{matrix}\right)$, the map $\gamma_{A}:\mathcal C_k\to \mathcal C_k$ given by $\gamma_{A}(\alpha)=\frac{a\alpha+b}{c\alpha+d}$ is well defined and is a permutation of $\mathcal C_k$. Additionally, if $n_k:= |\mathcal C_k|$ and $D$ denotes the order of $[A]$ in $\mathrm{PGL}_2(\mathbb{F}_q)$, the following holds \begin{equation} G(\gamma_A, \mathcal C_k)=\frac{n_k}{D}\times \mathrm{Cyc}(D). \end{equation} In other words, any element $\alpha\in \mathcal C_k$ is periodic with period $D$. \end{theorem} In particular, we obtain the following corollaries. \begin{corollary}\label{cor:mobius-divisible} For $A\in \mathrm{GL}_2(\mathbb{F}_q)$ and $f\in \mathcal I_k$ with $k\ge 2$, let $\gamma_A\ast f$ be defined as in Subsection~\ref{subsec:Mobius-action}. If $D$ is the order of $[A]$ in $\mathrm{PGL}_2(\mathbb{F}_q)$ and $\gamma_A\ast f=f$, then $k=2$ or $k$ is divisible by $D$. \end{corollary} \begin{proof} Let $\alpha\in \mathcal C_k$ be any root of $f$, hence $f=m_{\alpha}$ is the minimal polynomial of $\alpha$.
From Theorem~\ref{thm:dynamics}, if $\gamma_A\ast f=f$ and $c_{P}(\alpha)$ denotes the least period of $\alpha$ by $\gamma_A$, then $c_P(\alpha)$ divides $k$. From the previous theorem, $c_P(\alpha)=D$ if $k\ge 3$. In particular, $k=2$ or $k$ is divisible by $D$. \end{proof} In particular, if $[A]$ has order $D$ in $\mathrm{PGL}_2(\mathbb{F}_q)$, the map $f\mapsto \gamma_A\ast f$ over $\mathcal I_k$ is free of fixed points whenever $k>2$ is not divisible by $D$. \begin{corollary} For $A\in \mathrm{GL}_2(\mathbb{F}_q)$ and $\alpha\in \mathcal C_k$ with $k\ge 2$, let $\gamma_A\ast m_{\alpha}$ be defined as in Subsection~\ref{subsec:Mobius-action}. If $D$ is the order of $[A]$ and $\gcd(D, k)=1$ then, for the permutation $P=\gamma_A$, we have that $$c_{P}(\alpha)=c_P^*(m_{\alpha}).$$ \end{corollary} In particular, we may attain the upper bound in item (ii) of Theorem~\ref{thm:dynamics}. \section{Iterated construction of irreducible polynomials} We have seen that if $f\in \mathbb{F}_q[x]$ is an irreducible polynomial of degree $k$ and $P\in \mathbb G_k$, then $$P\ast f=\gcd (f(P(x)), x^{q^k}-x),$$ is another irreducible polynomial of degree $k$. This identity suggests a recursive method for constructing irreducible polynomials of degree $k$ from a given $f\in \mathcal I_k$: we set $f_0=f$ and, for $i\ge 1$, $$f_i:= P\ast f_{i-1}=\gcd (f_{i-1}(P(x)), x^{q^k}-x).$$ From Definition~\ref{def:periods}, $j=c_{P}^*(f)$ is the least positive integer such that $f_j=f_0$: in particular, the sequence $\{f_i\}_{i\ge 0}$ has $c_P^*(f)$ distinct elements. We want to find a good permutation that generates a large number of irreducible polynomials from a single irreducible polynomial $f$. From the trivial bound $c_P^*(f)\ge \mu_k^*(P)$, it is sufficient to find permutations $P$ for which $\mu_k^*(P)$ is large. Since $\mu_k^*(P)\ge 1/k\cdot \mu_k(P)$, if $\mu_k(P)$ is large, then so is $\mu_k^*(P)$. We have the trivial bound $\mu_k^*(P)\le |\mathcal I_k|\approx \frac{q^k}{k}$. 
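A minimal Python sketch of this recursion (the toy parameters $q=2$, $k=4$, $P=x^7$ and the seed $f_0=x^4+x+1$ are our own choices) encodes polynomials over $\mathbb{F}_2$ as bit masks and iterates $f_i=\gcd(f_{i-1}(P(x)),\, x^{16}-x)$ until the orbit closes:

```python
# F_2[x] arithmetic on Python ints: bit i holds the coefficient of x^i.
def pmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pmod(a, m):
    while a and a.bit_length() >= m.bit_length():
        a ^= m << (a.bit_length() - m.bit_length())
    return a

def pgcd(a, b):
    while b:
        a, b = b, pmod(a, b)
    return a                      # gcd over F_2 is automatically monic

def pcompose(f, g):
    """f(g(x)) via Horner's rule."""
    r = 0
    for i in range(f.bit_length() - 1, -1, -1):
        r = pmul(r, g) ^ ((f >> i) & 1)
    return r

SPAN = (1 << 16) | 2              # x^(q^k) - x = x^16 + x for q = 2, k = 4
P = 1 << 7                        # P = x^7; gcd(7, 2^4 - 1) = 1, so P permutes C_4

f0 = 0b10011                      # f_0 = x^4 + x + 1, irreducible over F_2
orbit = [f0]
f = pgcd(pcompose(f0, P), SPAN)
while f != f0:                    # f_i = gcd(f_{i-1}(P(x)), x^16 - x)
    orbit.append(f)
    f = pgcd(pcompose(f, P), SPAN)
```

For this seed the orbit is $\{x^4+x+1,\ x^4+x^3+1\}$, i.e., $c_P^*(f_0)=2$.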
Theorem~\ref{thm:onto} shows that any permutation of $\mathcal I_k$ can be viewed as a map $f\mapsto P\ast f$ with $P\in \mathbb G_k$ and, in particular, there exist permutations $P\in \mathbb G_k$ for which $G(P, \mathcal I_k)$ comprises a \emph{full cycle} (i.e., all the elements of $\mathcal I_k$ lie in the same orbit): in this case, we have $\mu_k^*(P)=|\mathcal I_k|$. However, the construction of such permutations seems to be out of reach: in fact, even the construction of permutations of finite fields with a full cycle is not completely known \cite{CMT08}. Having this in mind, it would be interesting to obtain permutations $P$ for which $\mu_k^*(P)$ is reasonably large. As pointed out earlier, the description of the functional graph of general permutations of finite fields is still an open problem. However, for $P$ a monomial or a linearized polynomial, things are well understood and Theorems~\ref{thm:monomial-cycle} and \ref{thm:linear-cycle} provide lower bounds for the quantity $\mu_k^*(P)$: the lower bound for monomials depends on the positive integers $e$ for which $\mathrm{ord}_eq=k$ and, in the linearized case, the lower bound depends on the monic polynomials $g\in \mathbb{F}_q[x]$ for which $\mathcal O(x, g)=k$. The numbers $e$ and the polynomials $g$ depend on the prime factorization of $q^k-1$ and the factorization of $x^k-1$ over $\mathbb{F}_q$, respectively. In this section, we explore specific cases where $\mu_k^*(P)$ can be explicitly given and is, in some cases, asymptotically the best possible. We suppose that either $\frac{q^k-1}{q-1}$ is a prime number or $\frac{x^k-1}{x-1}$ is an irreducible polynomial. Though these conditions are very particular, they are reasonably natural: for instance, if $q=2$, then $\frac{q^k-1}{q-1}$ equals $2^k-1$ and the primes of this form are the \emph{Mersenne primes}.
In addition, if $k$ is a prime number, then $\frac{x^k-1}{x-1}=E_k(x)$ is just the $k$-th \emph{cyclotomic polynomial}: if $q$ is a primitive root modulo $k$, according to Theorem 2.47 of~\cite{LN}, $E_k(x)$ is an irreducible polynomial. We start with the monomial case. \begin{proposition} Suppose that $r=\frac{q^k-1}{q-1}$ is a prime number and let $P=x^n$ be a monomial permutation of $\mathbb{F}_{q^k}$ (i.e., $\gcd(n, q^k-1)=1$). For any polynomial $f\in \mathcal I_k$, let $\{f_i\}_{i\ge 0}$ be the sequence of polynomials given by $f_0=f$ and, for $i\ge 1$, $f_i:= P\ast f_{i-1}=\gcd (f_{i-1}(x^n), x^{q^k}-x)$. Then $\{f_i\}_{i\ge 0}$ yields at least $\frac{\mathrm{ord}_rn}{k}$ irreducible polynomials of degree $k$. In particular, if $n$ is a primitive root modulo $r$, then $\frac{\mathrm{ord}_rn}{k}=\frac{q^k-q}{(q-1)k}$. \end{proposition} \begin{remark} We observe that if $r=\frac{q^k-1}{q-1}$ is prime, then $k$ is a prime number. In this case, $|\mathcal I_k|=\frac{q^k-q}{k}$ and so, if $n$ is a primitive root modulo $r$ and $f$ is any element of $\mathcal I_k$, the sequence $\{f_i\}_{i\ge 0}$ produces at least $\frac{1}{q-1}$ of the elements in $\mathcal I_k$. \end{remark} We now proceed to the linearized case. \begin{proposition}\label{prop:cycle-cyclotomic} Suppose that $E_k(x)=\frac{x^k-1}{x-1}$ is an irreducible polynomial and let $P=L_g\in \mathbb{F}_q[x]$ be a linearized permutation of $\mathbb{F}_{q^k}$ (i.e., $\gcd(g, x^k-1)=1$). For any polynomial $f\in \mathcal I_k$, let $\{f_i\}_{i\ge 0}$ be the sequence of polynomials given by $f_0=f$ and, for $i\ge 1$, $f_i:= P\ast f_{i-1}=\gcd (f_{i-1}(L_g(x)), x^{q^k}-x)$. Then $\{f_i\}_{i\ge 0}$ yields at least $\frac{\mathcal O(g, E_k)}{k}$ irreducible polynomials of degree $k$. In particular, if $\mathcal O(g, E_k)=\Phi_q(E_k)=q^{k-1}-1$, then $\frac{\mathcal O(g, E_k)}{k}=\frac{q^{k-1}-1}{k}$. 
\end{proposition} \begin{remark} We observe that if $E_k(x)=\frac{x^k-1}{x-1}$ is an irreducible polynomial, then $k$ is a prime number. In this case, $|\mathcal I_k|=\frac{q^k-q}{k}$ and so, if $\mathcal O(g, E_k)=\Phi_q(E_k)=q^{k-1}-1$ and $f$ is any element of $\mathcal I_k$, then the sequence $\{f_i\}_{i\ge 0}$ produces at least $\frac{1}{q}$ of the elements in $\mathcal I_k$. \end{remark} If $E_k(x)$ is irreducible, then the quotient $K=\frac{\mathbb{F}_q[x]}{\langle E_k(x)\rangle}$ is a field which is isomorphic to $\mathbb{F}_{q^{k-1}}$. In this case, if $\theta$ denotes the class of $x$ in the quotient $\frac{\mathbb{F}_q[x]}{\langle E_k(x)\rangle}$, it is straightforward to verify that $\mathcal O(g, E_k)=\mathrm{ord}(g(\theta))$. Therefore, $\mathcal O(g, E_k)=\Phi_q(E_k)=q^{k-1}-1=|\mathbb{F}_{q^{k-1}}^*|$ if and only if $g(\theta)$ is a \emph{primitive} element of $\mathbb{F}_{q^{k-1}}$. Primitive elements play important roles in cryptography and coding theory and have been extensively studied in the past few decades. However, the efficient construction of primitive elements in finite fields $\mathbb{F}_{q^n}$ is still an open problem. Mainly, this is due to the fact that a general method for constructing such elements requires the prime factorization of $q^n-1$. Nevertheless, many authors have treated the problem of finding elements with reasonably \emph{high multiplicative order} in finite fields as a relaxation of primitive elements. High order elements have been used in many practical ways, including cryptography, pseudorandom number generation and the construction of Gauss periods \cite{ASV}. Most notably, high order elements were employed in the celebrated AKS primality test~\cite{AKS}. Due to these applications, the construction of high order elements has been considered by many authors in recent years.
In \cite{G99}, this problem is treated in general extensions of finite fields and some particular extensions are further explored, including Artin-Schreier extensions~\cite{BR, V04} and cyclotomic extensions~\cite{P14}. The latter covers exactly the kind of extensions that we are considering here: from Theorem 2 of \cite{P14}, we obtain the following result. \begin{proposition}\label{prop:popovych} Let $q$ be a power of a prime $p$. If $a$ is any nonzero element of $\mathbb{F}_q$ and $q$ is a primitive root modulo the prime $k$, then the class of $x+a$ in $K=\frac{\mathbb{F}_q[x]}{\langle E_k(x)\rangle}$ has multiplicative order at least $\tau(p, k)$, where $\tau(2, k)=2^{\sqrt{2(k-2)}-2}$, $\tau(3, k)=3^{\sqrt{3(k-2)}-2}$ and $\tau(p, k)=5^{\sqrt{(k-2)/2}-2}$ for $p\ge 5$. \end{proposition} We observe that any nonzero element of $\mathbb{F}_{q^{k-1}}$ is written as $h(\theta)$, for some polynomial $h$ of degree at most $k-1$ such that $h$ is not divisible by $E_k$. In particular, if we know that $h(\theta)\ne 0$ has multiplicative order $e=\mathrm{ord}(h(\theta))$ in $\mathbb{F}_{q^{k-1}}$, we have a method to produce at least $\frac{e}{k}$ irreducible polynomials of degree $k$ from a single $f\in \mathcal I_k$: since $h(\theta)\ne 0$, $h$ is a polynomial of degree at most $k-1$ such that $\gcd(h, E_k)=1$. In particular, there exists a polynomial $H$ of degree at most $k$ such that $\gcd(H(x), x-1)=1$ and $H\equiv h\pmod {E_k}$: in this case, $\gcd(H(x), x^k-1)=1$ and so $L_H$, the $q$-associate of $H$, is a linearized permutation of $\mathbb{F}_{q^k}$. If $f$ is any irreducible polynomial of degree $k$, it follows from Proposition~\ref{prop:cycle-cyclotomic} that the sequence $\{f_i\}_{i\ge 0}$ given by $f_0=f$ and $f_i=\gcd(f_{i-1}(L_H(x)), x^{q^k}-x)$ yields at least $\frac{\mathcal O(H, E_k)}{k}=\frac{\mathrm{ord}(h(\theta))}{k}=\frac{e}{k}$ distinct irreducible polynomials of degree $k$.
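For a small concrete instance (our own choice of parameters), take $q=2$ and $k=5$: here $2$ is a primitive root modulo $5$, so $E_5(x)=x^4+x^3+x^2+x+1$ is irreducible, and $H(x)=x^4+x^3+x^2$ satisfies $H\equiv x+1\pmod{E_5}$ and $\gcd(H(x), x-1)=1$, with $q$-associate $L_H(x)=x^{16}+x^8+x^4$. One can check that $h(\theta)=\theta+1$ is a primitive element of $\mathbb{F}_{16}$, so the orbit of the (arbitrarily chosen) seed $f_0=x^5+x^2+1$ contains exactly $15/5=3$ quintics. A Python sketch with bit-mask encoding of $\mathbb{F}_2[x]$:

```python
# F_2[x] arithmetic on Python ints: bit i holds the coefficient of x^i.
def pmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pmod(a, m):
    while a and a.bit_length() >= m.bit_length():
        a ^= m << (a.bit_length() - m.bit_length())
    return a

def pgcd(a, b):
    while b:
        a, b = b, pmod(a, b)
    return a

def pcompose(f, g):
    """f(g(x)) via Horner's rule."""
    r = 0
    for i in range(f.bit_length() - 1, -1, -1):
        r = pmul(r, g) ^ ((f >> i) & 1)
    return r

SPAN = (1 << 32) | 2                       # x^(2^5) - x = x^32 + x
LH = (1 << 16) | (1 << 8) | (1 << 4)       # L_H(x) = x^16 + x^8 + x^4

f0 = 0b100101                              # f_0 = x^5 + x^2 + 1, irreducible
orbit = [f0]
f = pgcd(pcompose(f0, LH), SPAN)
while f != f0:                             # f_i = gcd(f_{i-1}(L_H(x)), x^32 - x)
    orbit.append(f)
    f = pgcd(pcompose(f, LH), SPAN)
```

The loop closes after three steps, matching $\mathcal O(H, E_5)/k=15/5=3$ distinct irreducible quintics, i.e., half of $|\mathcal I_5|=6$.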
Using this approach, Proposition~\ref{prop:popovych} suggests considering $h(x)=x+a$, where $a$ is a nonzero element of $\mathbb{F}_q$: in this case, the same proposition provides the bound $e\ge \tau(p, k)$. For $h(x)=x+a$, $H$ is a polynomial of degree at most $k$ such that $H\equiv x+a\pmod {E_k}$ and $\gcd(H(x), x-1)=1$. If $q\ne 2$, there exists $a\in \mathbb{F}_q\setminus \{0, 1\}$ and so $H(x)=x-a$ satisfies the required properties. For $q=2$, $a=1$ is the only nonzero element of $\mathbb{F}_2$ and so we consider $H(x)=E_k(x)+x+a=E_k(x)+x+1$: we observe that, since $q=2$ is a primitive root modulo $k$, $k$ is an odd prime, and so $H(1)=E_k(1)+1+1=k+2$, which equals $k$ in $\mathbb{F}_2$ and is nonzero because $k$ is odd. Therefore, for $q=2$, we use $H(x)=E_k(x)+x+1=x^{k-1}+\cdots+x^2$. In summary, we obtain the following result. \begin{theorem} Let $q$ be a power of a prime $p$ and let $k$ be a prime number such that $q$ is a primitive root modulo $k$. In addition, let $\tau(p, k)$ be the function given in Proposition~\ref{prop:popovych}. Then, for any $f\in \mathcal I_k$, the sequence $\{f_i\}_{i\ge 0}$ given by $f_0=f$ and $f_i(x)=\gcd (f_{i-1}(L_H(x)), x^{q^k}-x)$ if $i\ge 1$ yields at least $\frac{\tau(p, k)}{k}$ distinct irreducible polynomials of degree $k$ in each of the following cases: \begin{enumerate}[(i)] \item $q=2$ and $L_H(x)=x^{q^{k-1}}+\cdots+x^{q^2}=x^{2^{k-1}}+\cdots+x^{4}$, \item $q\ne 2$ and $L_H(x)=x^{q}-ax$ with $a\ne 0, 1$ and $a\in \mathbb{F}_q$. \end{enumerate} \end{theorem} The following example provides a numerical instance of the previous theorem. \begin{example} We observe that $k=53$ is prime and $3$ is a primitive root modulo $53$.
In particular, for any irreducible polynomial $f\in \mathbb{F}_3[x]$ of degree $53$, the sequence $\{f_i\}_{i\ge 0}$ given by $f_0=f$ and $f_i(x)=\gcd(f_{i-1}(x^3+x), x^{3^{53}}-x)$ if $i\ge 1$ yields at least $$\left\lceil\frac{\tau(3, 53)}{53}\right\rceil=1,672$$ distinct irreducible polynomials of degree $53$ over $\mathbb{F}_3$. For instance, one may consider the initial input $f_0(x)=x^{53} - x^4 -x^3 -x^2 + 1\in \mathbb{F}_3[x]$. We emphasize that the actual order of the class of $x+1$ in the quotient $K=\mathbb{F}_3[x]/\langle E_{53}(x)\rangle=\mathbb{F}_{3^{52}}$ is $134718888901384\approx 1.3\cdot 10^{14}$ and so $$\left\lceil\frac{134718888901384}{53}\right\rceil \approx 2.5\times 10^{12}.$$ \end{example}
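The figures quoted in this example are easy to reproduce; a short Python check (floating-point evaluation of $\tau(3,53)$, so only the rounded lower bound is asserted):

```python
import math

# tau(3, 53) = 3^(sqrt(3 * (53 - 2)) - 2), from Proposition popovych with p = 3
tau = 3.0 ** (math.sqrt(3 * (53 - 2)) - 2)
lower = math.ceil(tau / 53)

# 3 is a primitive root modulo 53: its order divides 52 = 2^2 * 13, so it
# suffices that 3^26 and 3^4 are both different from 1 modulo 53.
assert pow(3, 26, 53) != 1 and pow(3, 4, 53) != 1
assert lower == 1672
assert 2.5e12 < 134718888901384 / 53 < 2.6e12
```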
\section{Introduction} In 2020, BEPCII implemented an energy upgrade project and increased the maximum center-of-mass energy from 4.61~$\,\mathrm{GeV}$ to 4.95~$\,\mathrm{GeV}$. During the data-taking years of 2020 and 2021, the BESIII experiment collected $e^+e^{-}$ annihilation data at 12 center-of-mass energy ($E_{\rm cms}$) points between 4.61~$\,\mathrm{GeV}$ and 4.95~$\,\mathrm{GeV}$. In this energy region, a few charmonium(-like) states can be produced, such as the Y(4630) and Y(4660)~\cite{Brambilla:2019esw,Guo:2017jvc,Chen:2016qju,Olsen:2017bmm,Olsen:2014qna}, which are potential candidates for multi-quark states other than the charmonium states predicted in the quark model~\cite{Brambilla:2010cs}. More strikingly, at 4.68~$\,\mathrm{GeV}$, the BESIII experiment observed the first candidate for a charged hidden-charm tetraquark with strangeness $Z_{cs}(3985)^{+}$~\cite{BESIII:2020qkh}. In addition, $\Lambda_{c}^{+}\bar{\Lambda}_{c}^{-}$ pair production is kinematically open in this energy region. This provides many opportunities for precise measurements of the properties of the lightest charmed baryon $\Lambda_{c}$, thanks to the threshold production and quantum coherence of the accumulated $\Lambda_{c}^{+}\bar{\Lambda}_{c}^{-}$ pairs. In 2014, the BESIII experiment collected 567~$\mathrm{pb}^{-1}$ of $e^+e^{-}$ annihilation data at 4.599~$\,\mathrm{GeV}$, which led to many pioneering measurements~\cite{Li:2021iwf}. About ten times more $\Lambda_{c}^{+}\bar{\Lambda}_{c}^{-}$ pair events are expected to be contained in all data taken above 4.6~$\,\mathrm{GeV}$, which would provide great potential to improve our knowledge of the strong and weak interactions in the charm sector~\cite{Cheng:2021qpd}. The $E_{\rm cms}$ values and integrated luminosities of these data samples are important inputs for the analyses based on them.
In this paper, we present measurements of $E_{\rm cms}$ and integrated luminosities for data samples at various energy points, as listed in Table~\ref{tab:sum}. The Beam Energy Measurement System (BEMS)~\cite{Abakumova:2011rp}, which was installed in 2008, is designed to precisely measure the beam energy based on the energies of Compton back-scattered photons. However, the working range of BEMS is below 4~$\,\mathrm{GeV}$, which implies that the measurements of $E_{\rm cms}$ for the data samples involved in this paper have to be performed offline. A novel method of using $e^+e^{-} \rightarrow\Lambda_{c}^{+}\bar{\Lambda}_{c}^{-}$ events is adopted, which was discussed in the energy measurement for the $\psi$(3770) data at BESIII~\cite{method}. In the luminosity measurement, the Bhabha scattering process $e^+e^{-} \rightarrow (\gamma)~e^+e^-$ is used, benefiting from its clear signature and large production cross section, which allow for a negligible statistical uncertainty and relatively small systematic uncertainty. A cross-check of the luminosity results is performed by analyzing the di-photon process $e^+e^- \rightarrow (\gamma)~\gamma\gamma$. \section{The BESIII detector and MC simulations} The BESIII detector~\cite{BESIII:2009fln} records symmetric $e^+e^-$ collisions provided by the BEPCII storage ring~\cite{Yu:2016cof}, which operates in the center-of-mass energy range from 2.0~$\,\mathrm{GeV}$ to 4.95~$\,\mathrm{GeV}$. BESIII has collected large data samples in this energy region~\cite{BESIII:2020nme}. The cylindrical core of the BESIII detector covers 93\% of the full solid angle and consists of a helium-based multilayer drift chamber~(MDC), a plastic scintillator time-of-flight system~(TOF), and a CsI(Tl) electromagnetic calorimeter~(EMC), which are all enclosed in a superconducting solenoidal magnet providing a 1.0~T magnetic field.
The solenoid is supported by an octagonal flux-return yoke with resistive plate counter muon identification modules interleaved with steel. The charged-particle momentum resolution at $1~\,\mathrm{GeV}/c$ is $0.5\%$, and the $dE/dx$ resolution is $6\%$ for electrons from Bhabha scattering. The EMC measures photon energies with a resolution of $2.5\%$ ($5\%$) at $1~\,\mathrm{GeV}$ in the barrel (end cap) region. The time resolution in the TOF barrel region is 68~ps, while that in the end cap region is 60~ps~\cite{etof}. Simulated samples produced with a {\sc geant4}-based~\cite{geant4} Monte Carlo (MC) package, which includes the geometric description of the BESIII detector and the detector response, are used to determine detection efficiencies and to estimate backgrounds. The simulation models the beam energy spread and initial state radiation (ISR) in the $e^+e^-$ annihilations with the generator {\sc kkmc}~\cite{ref:kkmc}. The inclusive MC sample includes the production of the process $\Lambda_{c}^{+}\bar{\Lambda}_{c}^{-}$ using the Born cross section line shape measured by BESIII, open charm processes, the ISR production of vector charmonium(-like) states, and the continuum processes incorporated in {\sc kkmc}~\cite{ref:kkmc}. The known decay modes are modelled with {\sc evtgen}~\cite{ref:evtgen} using branching fractions taken from the Particle Data Group (PDG)~\cite{ParticleDataGroup:2020ssz}, and the remaining unknown charmonium decays are modelled with {\sc lundcharm}~\cite{ref:lundcharm}. Final state radiation~(FSR) from charged final state particles is incorporated using {\sc photos}~\cite{photos}. \section{Measurement of center-of-mass energies} \label{sec:cms} In the process $e^+e^-\rightarrow \Lambda_c^{+}\bar{\Lambda}_c^{-}$, each $\Lambda_c^{+}$ ($\bar{\Lambda}_c^{-}$) baryon carries half the energy of the $E_{\rm cms}$. 
Hence, the $E_{\rm cms}$ is obtained from the calibrated beam energy $E_{\rm output}$ using the reconstructed mass of one $\Lambda_c$ with the following equations: \begin{eqnarray} \begin{aligned} &E_{\rm cms} = 2E_{\rm output}, \\ &E_{\rm output}^{2}=E_{\rm input}^{2} + m_{\Lambda_c}^{2}c^4 - m_{\rm BC}^{2}c^4. \label{eq:E2} \end{aligned} \end{eqnarray} Here, $E_{\rm input}$ is the uncalibrated beam energy, with input values of 2306, 2313, 2320, 2330, 2340, 2350, 2370, 2375, 2390, 2420, 2457 and 2473~$\,\mathrm{MeV}$, respectively, for the beam energies of the 12 different energy points, and $m_{\Lambda_c}$ is the known $\Lambda_{c}$ mass of $2286.46\pm0.14$~$\,\mathrm{MeV}/c^2$~\cite{ParticleDataGroup:2020ssz,BaBar:2005wur}. The $m_{\rm BC}$ is the fitted peak position of the beam-constrained mass of the $\Lambda_c$ baryon calculated by $M_{\rm BC}c^2=\sqrt{E_{\rm input}^{2}-{p}^{2}_{\Lambda_c}{c^2}}$, where ${p}_{\Lambda_c}$ is the momentum of the $\Lambda_c$ measured in the center-of-mass system of the $e^+e^-$ collision. Essentially, Eq.~(\ref{eq:E2}) is equivalent to $E_{\rm output}^{2}=p_{\Lambda_c}^{2}c^2 + m_{\Lambda_c}^{2}c^4$. The $M_{\rm BC}$ distributions are fitted instead of the $p_{\Lambda_c}$ ones, taking advantage of their better resolution and more easily controlled fit quality. Charge conjugation is always implied. To perform this measurement, one $\Lambda_c^{+}$ is fully reconstructed, while the other is left unreconstructed (missing). To reconstruct the $\Lambda_c^{+}$ baryon, the $\Lambda_{c}^{+}\rightarrow p K^{-} \pi^+$ channel is used because of its relatively large decay rate and low background contamination. Each charged track must satisfy the following criteria. The distance of the closest approach of every charged track to the $e^+e^-$ interaction point (IP) is required to be within 10~cm along the beam direction and within 1~cm in the plane perpendicular to the beam direction.
The polar angle $\theta$ between the direction of a charged track and that of the positron beam must satisfy $|\!\cos\theta|<0.93$ for an effective measurement in the active volume of the MDC. The $dE/dx$ information recorded by the MDC and the time-of-flight information measured by the TOF are combined to calculate particle identification (PID) probabilities for various particle hypotheses. Tracks are identified as protons if their PID probabilities ($\mathcal{P}$) satisfy $\mathcal{P}(p)>\mathcal{P}(K)$ and $\mathcal{P}(p)>\mathcal{P}(\pi)$, while charged kaons and pions are identified using $\mathcal{P}(K)>\mathcal{P}(\pi)$ and $\mathcal{P}(\pi)>\mathcal{P}(K)$, respectively. All $p K^{-} \pi^+$ combinations in one event are kept for further study. In the fit to the $M_{\rm BC}$ distributions, the signals are described by the Bukin function~\cite{bukin} and the backgrounds are described by a linear function. The fit result of the 4680$\,\mathrm{MeV}$ data sample is shown in Fig.~\ref{fig:fit_result}. In order to validate the analysis method, an input and output (I/O) check based on the inclusive MC simulation is performed. Based on the I/O results, we find that the obtained beam energy $E_{\rm output}$ is stable when the different input values of $E_{\rm input}$ are used. However, systematic shifts (0.09~$\sim$~0.25~$\,\mathrm{MeV}$) are noticed between the measured beam energies and the true simulated input values mainly due to the ISR effect. The shifts at different energy points are taken into account as individual correction factors. The final values of the determined $E_{\rm cms}$ are listed in Table~\ref{tab:sum}. \begin{center} \includegraphics[width=0.35\paperwidth]{Figure/4680_fit.pdf} \figcaption{Fit to the $M_{\rm BC}$ distribution for $\Lambda_{c}^{+}\rightarrow p K^{-} \pi^+$ candidates from the 4680~$\,\mathrm{MeV}$ data sample. 
Black dots with error bars are data, the red line is the sum of fit functions, the dotted green line is the fitted signal and the dotted blue line is the fitted background.} \label{fig:fit_result} \end{center} The systematic uncertainty for the $E_{\rm cms}$ measurement mainly comes from the uncertainty of the $\Lambda_{c}$ mass quoted from the PDG, and amounts to 0.28~$\,\mathrm{MeV}$, i.e., twice the 0.14~$\,\mathrm{MeV}$ uncertainty of the $\Lambda_{c}$ mass, since $E_{\rm cms}=2E_{\rm output}$. Other small uncertainties are due to the $M_{\rm BC}$ fit range and the ISR correction. For the fit range, we vary the fit boundary and repeat the $M_{\rm BC}$ fit. The maximum changes of the $E_{\rm cms}$ are taken as the systematic uncertainties. For the ISR correction, we consider the cross section line shape and the influence of the background. An alternative cross section line shape is first obtained by varying the nominal line shape within uncertainties. The alternative is then used to generate an MC sample, and then the I/O procedure is repeated to get new ISR correction factors. The differences in the ISR correction factors are regarded as the systematic uncertainties. In order to take into account the potential effect of background simulation, we check the ISR correction using the signal MC sample of the process $e^+e^- \to \Lambda_{c}^{+}\bar{\Lambda}_{c}^{-}$, where one $\Lambda_c$ decays to $pK\pi$, and the other $\Lambda_c$ decays inclusively. The difference in the ISR correction between the signal MC sample and the inclusive MC sample is regarded as a systematic uncertainty. For the signal and background shapes, the uncertainties are negligible based on MC simulation studies. A summary of systematic uncertainties is given in Table~\ref{tab:sys}. For each energy point, the total systematic uncertainty is taken as the quadrature sum of the individual items.
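The totals in Table~\ref{tab:sys} follow from this prescription; a short Python check (values in $\,\mathrm{MeV}$, taken from the table):

```python
import math

def quad_sum(uncertainties):
    """Quadrature sum: square root of the sum of squares."""
    return math.sqrt(sum(u * u for u in uncertainties))

# 4610 sample: PDG mass 0.28, fit range 0.04, ISR effect 0.16 -> total 0.32
total_4610 = round(quad_sum([0.28, 0.04, 0.16]), 2)
# 4700 sample: PDG mass 0.28, fit range 0.14, ISR effect 0.23 -> total 0.39
total_4700 = round(quad_sum([0.28, 0.14, 0.23]), 2)
assert total_4610 == 0.32 and total_4700 == 0.39
```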
\begin{table*}[!htbp] \caption{Numerical results for the center-of-mass energy $E_{\rm cms}$, the integrated luminosity measured with the Bhabha process $\mathscr{L}_{\rm Bhabha}$, the integrated luminosity measured with the di-photon process $\mathscr{L}_{\rm di-photon}$ and their ratio for all data samples. For the $E_{\rm cms}$ and $\mathscr{L}_{\rm Bhabha}$ measurements, the first uncertainty is statistical and the second is systematic. For the $\mathscr{L}_{\rm di-photon}$ measurement, only statistical uncertainties are presented. For the ratio of $\mathscr{L}_{\rm di-photon}$ and $\mathscr{L}_{\rm Bhabha}$ all presented uncertainties are considered.} \begin{center} \begin{tabular}{ccccc} \hline \hline Sample &$E_{\rm cms}$ ($\,\mathrm{MeV}$) &$\mathscr{L}_{\rm Bhabha}$ (pb$^{-1}$) & $\mathscr{L}_{\rm di-photon}$ (pb$^{-1}$)&Ratio ($\%$)\\ \hline 4610& 4611.86$\pm$0.12$\pm$0.32&~~103.83$\pm$0.05$\pm$0.55&~103.37$\pm$0.13&~99.56$\pm$0.59\\ 4620& 4628.00$\pm$0.06$\pm$0.32&~~521.52$\pm$0.11$\pm$2.76&~520.17$\pm$0.28&~99.74$\pm$0.55\\ 4640& 4640.91$\pm$0.06$\pm$0.38&~~552.41$\pm$0.12$\pm$2.93&~550.67$\pm$0.29&~99.69$\pm$0.55\\ 4660& 4661.24$\pm$0.06$\pm$0.29&~~529.63$\pm$0.12$\pm$2.81&~527.53$\pm$0.29&~99.60$\pm$0.55\\ 4680& 4681.92$\pm$0.08$\pm$0.29& ~1669.31$\pm$0.21$\pm$8.85&1665.88$\pm$0.51&~99.79$\pm$0.54\\ 4700& 4698.82$\pm$0.10$\pm$0.39&~~536.45$\pm$0.12$\pm$2.84&~533.66$\pm$0.29&99.48$\pm$0.55\\ 4740& 4739.70$\pm$0.20$\pm$0.30&~~164.27$\pm$0.07$\pm$0.87&~165.08$\pm$0.16&100.36$\pm$0.58\\ 4750& 4750.05$\pm$0.12$\pm$0.29&~~367.21$\pm$0.10$\pm$1.95&~367.57$\pm$0.24&~99.89$\pm$0.56\\ 4780& 4780.54$\pm$0.12$\pm$0.33&~~512.78$\pm$0.12$\pm$2.72&~512.03$\pm$0.29&~99.61$\pm$0.55\\ 4840& 4843.07$\pm$0.20$\pm$0.31&~~527.29$\pm$0.12$\pm$2.79&~526.01$\pm$0.30&~99.68$\pm$0.55\\ 4920& 4918.02$\pm$0.34$\pm$0.35&~~208.11$\pm$0.08$\pm$1.10&~208.09$\pm$0.19&~99.51$\pm$0.57\\ 4950& 4950.93$\pm$0.36$\pm$0.44&~~160.37$\pm$0.07$\pm$0.85&~159.85$\pm$0.17&~99.67$\pm$0.58\\ \hline 
\hline \end{tabular} \end{center} \label{tab:sum} \end{table*} \begin{table*}[!htbp] \caption{Systematic uncertainties for the $E_{\rm cms}$ measurement (in units of $\,\mathrm{MeV}$). For each energy point, the total systematic uncertainty corresponds to the quadrature sum of each item.} \begin{center} \begin{tabular}{c|cccccccccccc} \hline \hline \multirow{2}{*}{Source}& \multicolumn{12}{c}{Sample} \\ & 4610 & 4620 & 4640 &4660 &4680 & 4700& 4740 &4750 &4780 &4840 &4920 &4950\\ \hline PDG mass &0.28&0.28&0.28&0.28&0.28&0.28 &0.28&0.28&0.28&0.28&0.28&0.28\\ Fit range &0.04&0.14&0.22&0.04&0.04&0.14&0.04&0.04&0.04&0.02&0.17&0.24\\ ISR effect &0.16&0.07&0.14&0.06&0.04&0.23&0.11&0.04&0.16&0.13&0.13&0.25 \\ \hline Total &0.32&0.32&0.38&0.29&0.29&0.39&0.30&0.29&0.33&0.31&0.35&0.44 \\ \hline \hline \end{tabular} \end{center} \label{tab:sys} \end{table*} We validate the energy measurements using the $e^+e^- \rightarrow D^+D^{*-}$ process, where the $D^+ $ is reconstructed via $D^+ \rightarrow K^-\pi^+\pi^+$. The recoil mass of the $D^+$, $RM_{D^+}$, is defined as \begin{eqnarray} RM_{D^+}=\sqrt{(E_{\rm cms}-E_{D^+})^2/c^4-(\overrightarrow{p}_{\rm cms}-\overrightarrow{p}_{D^+})^2/c^2}, \label{eq:def_mDsRec} \end{eqnarray} where $E_{D^+}$ ($\overrightarrow{p}_{D^+}$) is the energy (momentum) of the reconstructed $D^+$. The total energy (momentum) of the initial $e^+e^-$ system, $E_{\rm cms}$ ($\overrightarrow{p}_{\rm cms}$), is input according to our measurement. The peak values of the $RM_{D^+}$ distributions correspond to the known mass of the $D^{*-}$~\cite{ParticleDataGroup:2020ssz}. To improve the mass resolution~\cite{BESIII:2013mhi}, the variable $RM_{D^+} + M_{D^+}-m_{D^+}$ is adopted to represent the $D^+$ recoil mass spectrum, where $M_{D^+}$ is the $D^+$ invariant mass, and $m_{D^+}$ is the known $D^+$ mass~\cite{ParticleDataGroup:2020ssz}. 
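In natural units ($c=1$), Eq.~\eqref{eq:def_mDsRec} reduces to $RM_{D^+}=\sqrt{(E_{\rm cms}-E_{D^+})^2-|\overrightarrow{p}_{\rm cms}-\overrightarrow{p}_{D^+}|^2}$. The short sketch below evaluates this expression; the kinematic values are purely illustrative (not taken from data), and the vanishing total momentum $\overrightarrow{p}_{\rm cms}$ is an assumption of the sketch:

```python
import math

def recoil_mass(e_cms, p_cms, e_d, p_d):
    """Recoil mass against a reconstructed D+ in natural units (c = 1).

    e_cms, e_d : energies of the initial e+e- system and the D+ (MeV)
    p_cms, p_d : 3-momenta as (px, py, pz) tuples (MeV)
    """
    de = e_cms - e_d
    dp2 = sum((a - b) ** 2 for a, b in zip(p_cms, p_d))
    return math.sqrt(de ** 2 - dp2)

# Illustrative (hypothetical) kinematics: total momentum taken as zero,
# a D+ with 1628 MeV energy and 1000 MeV momentum along z, so the recoil
# system carries 3000 MeV energy and 1000 MeV momentum.
rm = recoil_mass(4628.0, (0.0, 0.0, 0.0), 1628.0, (0.0, 0.0, 1000.0))
```

The peak of such a recoil-mass spectrum, built event by event, is what is compared to the known $D^{*-}$ mass in the validation.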
The Bukin function is used to fit the $RM_{D^+} + M_{D^+}-m_{D^+}$ distribution to get the peak mass position, in which the tail shapes in the signal function are fixed according to MC simulation of the process $e^+e^- \rightarrow D^+D^{*-}$. Following the same procedure used for the ISR correction in the nominal analysis, the measured mass of the $D^{*-}$ in the validation sample shows consistency with the known $D^{*-}$ mass~\cite{ParticleDataGroup:2020ssz}. Figure~\ref{Energy_div} shows the mass difference at each energy point, which is consistent with zero and hence validates the measured center-of-mass energies. \section{Measurement of integrated luminosities} The integrated luminosity of the data sample is determined by \begin{eqnarray}\label{Lum_formular} \mathscr{L} = \frac{N^{\rm obs}_{e^{+}e^{-}\rightarrow X}} { \sigma^{\rm obs}_{e^{+}e^{-}\rightarrow X} \times \epsilon_{e^{+}e^{-}\rightarrow X}}, \end{eqnarray} where $X$ denotes any specific final state produced in $e^{+}e^{-}$ annihilations, $N^{\rm obs}_{e^{+}e^{-}\rightarrow X}$ is the observed yield for the $e^{+}e^{-}\rightarrow X$ process, $\mathscr{L}$ is the integrated luminosity for data and $\sigma^{\rm obs}_{e^{+}e^{-}\rightarrow X}$ is the visible cross section. Here, the Bhabha process ($e^{+}e^{-}\rightarrow (\gamma)e^{+}e^{-}$) is analyzed in the nominal method, and the di-photon $e^+e^-\rightarrow (\gamma)~\gamma\gamma$ process serves as the cross check channel. The observed cross sections for the two processes are provided by the BabaYaga@NLO generator~\cite{Balossini:2008xr} with 0.1\% precision. The configuration parameters for the BabaYaga@NLO generator in generating Bhabha events are listed in Table~\ref{tab:BabaYaga}. 
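Eq.~\eqref{Lum_formular} is a simple ratio of the observed yield to the effective cross section. A minimal sketch with invented inputs (the yield, cross section, and efficiency below are hypothetical, chosen only to make the arithmetic and unit conversion visible):

```python
def integrated_luminosity(n_obs, sigma_obs_nb, efficiency):
    """Integrated luminosity in nb^-1 from Eq. (Lum_formular).

    n_obs        : observed signal yield after background subtraction
    sigma_obs_nb : visible cross section (nb), e.g. from a generator
    efficiency   : MC-determined detection efficiency
    """
    return n_obs / (sigma_obs_nb * efficiency)

# Hypothetical example: 2.5e7 events, a 100 nb visible cross section,
# and 50% efficiency give 5.0e5 nb^-1, i.e. 500 pb^-1.
lumi_nb = integrated_luminosity(2.5e7, 100.0, 0.5)
lumi_pb = lumi_nb * 1.0e-3  # 1 nb^-1 = 1e-3 pb^-1
```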
\begin{center} \includegraphics[width=0.35\paperwidth]{Figure/Energy_deviation.pdf} \figcaption{At each energy point, the difference between the measured $D^{*-}$ mass using the validation sample of $e^+e^- \rightarrow D^+D^{*-}$ and the known $D^{*-}$ mass~\cite{ParticleDataGroup:2020ssz}. Points with error bars are from data and the green band is due to the uncertainty of the $E_{\rm cms}$ value.} \label{Energy_div} \end{center} The criteria used for selecting Bhabha candidates are as follows. We require exactly two oppositely charged tracks (nCharged) detected in the MDC that satisfy $|\!\cos\theta|<$ 0.8, and the distance of closest approach of each charged track to the IP is required to satisfy the same criteria as in Section 3. Figure~\ref{fig:momentum} shows the distributions of the momentum, polar angle $\cos\theta$ and azimuthal angle $\phi$ of the electron and positron tracks measured in the MDC in data and signal MC samples. Good consistency between data and MC simulation is observed. The momentum of each track is required to be larger than 2~$\,\mathrm{GeV}/c$ to reject backgrounds from hadronic processes. In addition, to suppress the backgrounds from the di-photon process, $|\Delta\phi^{\rm EMC}|$ is required to be in the range [5$^{\circ}$, 40$^{\circ}$], where $\Delta\phi^{\rm EMC}$ = $|\phi^{\rm EMC}_1 - \phi^{\rm EMC}_2| -$180$^{\circ}$ and $\phi^{\rm EMC}_{1,2}$ are the azimuthal angles of the two clusters produced by the electron and positron in the EMC in the center-of-mass frame.

\begin{figure*}[tp] \begin{center} \includegraphics[width=0.45\textwidth]{Figure/stack_4620_P1_all_range.pdf} \includegraphics[width=0.45\textwidth]{Figure/stack_4620_P2_all_range.pdf} \includegraphics[width=0.45\textwidth]{Figure/stack_4620_costheta1.pdf} \includegraphics[width=0.45\textwidth]{Figure/stack_4620_costheta2.pdf} \includegraphics[width=0.45\textwidth]{Figure/stack_4620_phi1.pdf} \includegraphics[width=0.45\textwidth]{Figure/stack_4620_phi2.pdf} \figcaption{Comparisons between the data and MC samples at the 4620 $\,\mathrm{MeV}$ energy point for the momentum (top), $\cos\theta$ (middle) and $\phi$ (bottom) distributions for the $e^+$ (left) and $e^-$ (right) from candidate Bhabha events. $N_{data}/N_{MC}$ is the ratio of the data and MC samples. Red points with error bars are data and the blue points are MC samples. The sizes of the MC samples are normalized to those in data. Except for the variable to be shown, all other requirements used in the event selection have been applied.} \label{fig:momentum} \end{center} \end{figure*} \begin{figure*}[tph] \centering \subfigure[]{\includegraphics[width=0.48\textwidth]{Figure/cross_2D.pdf}} \subfigure[]{\includegraphics[width=0.48\textwidth]{Figure/cross_2D_signal.pdf}} \subfigure[]{\includegraphics[width=0.48\textwidth]{Figure/cross_2D_BKG.pdf}} \caption{The two-dimensional distributions of $E^{\rm EMC}(e^-)$ versus $E^{\rm EMC}(e^+)$ in data (a), Bhabha MC sample (b) and background MC samples (c) for the 4620 $\,\mathrm{MeV}$ energy point. Three kinematic regions are presented: red square region [$E^{\rm EMC}(e^+)>1~\,\mathrm{GeV}$ and $E^{\rm EMC}(e^-)>1~\,\mathrm{GeV}$] for the \textsc{Normal Sample}, and the remaining regions for the \textsc{Saturation Sample}. 
The dimuon backgrounds concentrate in the blue square [$E^{\rm EMC}(e^+)<1~\,\mathrm{GeV}$ and $E^{\rm EMC}(e^-)<1~\,\mathrm{GeV}$], as shown in panel (c).} \label{fig:2D} \end{figure*} Figure~\ref{fig:2D} shows the two-dimensional $E^{\rm EMC}$ distributions of the $e^+$ and $e^-$ from Bhabha candidate events, where $E^{\rm EMC}$ is the energy deposited by the corresponding cluster in the EMC. Due to electronic saturation~\cite{BESIII:2022xii} in the EMC readouts, $E^{\rm EMC}$ of the electron and positron becomes underestimated and is distributed around 0.4~$\,\mathrm{GeV}$, much less than the expected energies. As shown in Fig.~\ref{fig:2D}(a), a fraction ($2\sim9\%$) of the electron or positron tracks, depending on the track momentum, is influenced by the EMC saturation effect. To evaluate the relative size of this effect, the sample is divided into two categories: the \textsc{Normal Sample} ($E^{\rm EMC}(e^+)>1~\,\mathrm{GeV}$ and $E^{\rm EMC}(e^-)>1~\,\mathrm{GeV}$) without saturated $e^+e^-$ EMC clusters and the \textsc{Saturation Sample} ($E^{\rm EMC}(e^+)<1~\,\mathrm{GeV}$ or $E^{\rm EMC}(e^-)<1~\,\mathrm{GeV}$) with at least one saturated $e^+$ or $e^-$ EMC cluster. However, as shown in Fig.~\ref{fig:2D}(b), the MC simulation does not reflect the saturation effect. To obtain total signal yields that correctly match the MC-determined efficiency, the signal yields in both the \textsc{Normal Sample} and the \textsc{Saturation Sample} are counted. For the \textsc{Normal Sample}, backgrounds are negligible compared to the signal yields, as validated with the background MC sample, which contains the inclusive MC sample and all QED events except the Bhabha signal, as shown in Fig.~\ref{fig:2D}(c). Hence, the surviving events in the \textsc{Normal Sample} are taken as signal. For the \textsc{Saturation Sample}, a portion of the background is from the dimuon process $e^+e^-\to\mu^+\mu^-$, as indicated in Fig.~\ref{fig:2D}(c).
To extract the signal yields, the normalized pulse height, $PH_{\rm norm}$, from the specific ionization energy loss of the charged track in the MDC is adopted to distinguish the Bhabha events from the dimuon backgrounds. Figure~\ref{fig:saturationBKG_fit} shows the fit to the $PH_{\rm norm}$ distribution, where the signals peak around 1.0 for the electron and the dimuon backgrounds peak around 0.86. In the fit, the shape of the electron signals is modelled using the electron sample in the \textsc{Normal Sample}, and the muon shape is taken from a control sample of the dimuon process, selected from data in the background region ($E^{\rm EMC}(e^+)<1~\,\mathrm{GeV}$ and $E^{\rm EMC}(e^-)<1~\,\mathrm{GeV}$) of Fig.~\ref{fig:2D}(a) by requiring the depth of the dimuon tracks penetrating into the muon counter to be larger than 10~cm. The fitted yields of the Bhabha process are taken as signals in the \textsc{Saturation Sample}. The sum of the signal yields in the \textsc{Normal Sample} and \textsc{Saturation Sample} is taken as the total yield of the Bhabha events. The detection efficiency is estimated using the Bhabha MC samples, and the observed Bhabha cross section is calculated based on the BabaYaga@NLO generator. The integrated luminosity of the data sample is then calculated using Eq.~\eqref{Lum_formular}, and the corresponding results for the 12 energy points are given in Table~\ref{tab:sum}. The statistical precision of the measured luminosity is better than 0.05$\%$ at each energy point. Sources of systematic uncertainties in the luminosity measurement are summarized in Table~\ref{tab:Sys}. Common systematic uncertainties are assigned for the 12 energy points. Details are discussed as follows.
\begin{center} \tabcaption{Configuration of the BabaYaga@NLO generator used for simulating Bhabha events.} \begin{tabular*}{80mm}{lc} \hline \hline Parameter & Value \\ \hline $E_{\rm cms}$ & refer to Table~\ref{tab:sum}\\ Beam energy spread&1.58~$\,\mathrm{MeV}$\\ MinThetaAngle&36.87$^{\circ}$\\ MaxThetaAngle&143.13$^{\circ}$\\ Maximum Acollinearity& 180$^{\circ}$\\ NSearch & 4000000\\ RunningAlpha & 1\\ Number of photon & $-1$ \\ \hline \hline \label{tab:BabaYaga} \end{tabular*} \end{center} High momentum electron samples are selected to study the systematic uncertainties due to tracking, the nCharged requirement, the momentum requirement and the $\cos\theta$ requirement. We select one $e^{+}$, with momentum larger than 2~$\,\mathrm{GeV}/c$ and $E^{\rm EMC}(e^+)$ greater than 1~$\,\mathrm{GeV}$ as the positron candidate from the Bhabha process, and take the recoil $e^{-}$ as the control sample to study the efficiency of the selection criteria. The relative efficiency difference between the control sample and the Bhabha MC sample is regarded as the systematic uncertainty. \begin{center} \tabcaption{Systematic uncertainties on the luminosity measurement. The total systematic uncertainty corresponds to the quadrature sum of each item.} \begin{tabular*}{80mm}{lc} \hline \hline Source & Uncertainty ($\%$)\\ \hline Tracking efficiency&0.30\\ nCharged requirement & 0.10\\ Momentum requirement &0.18\\ $\cos\theta$ requirement &0.30\\ Saturation events &0.20\\ BabaYaga@NLO generator&0.10\\ MC statistics&0.05\\ Cross section & 0.09\\ \hline Total & 0.53 \\ \hline \hline \end{tabular*} \label{tab:Sys} \end{center} For estimation of the signal yields, the main systematic issue is in the extraction of the signal yields in the \textsc{Saturation Sample}. 
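The quadrature combination quoted in Table~\ref{tab:Sys} can be reproduced with a few lines (the per-source values are copied from the table; treating the sources as uncorrelated is the assumption behind the quadrature sum):

```python
import math

# Per-source relative uncertainties (%) from Table tab:Sys.
sources = {
    "tracking efficiency": 0.30,
    "nCharged requirement": 0.10,
    "momentum requirement": 0.18,
    "cos(theta) requirement": 0.30,
    "saturation events": 0.20,
    "BabaYaga@NLO generator": 0.10,
    "MC statistics": 0.05,
    "cross section": 0.09,
}

# Total = quadrature sum over uncorrelated sources.
total = math.sqrt(sum(u ** 2 for u in sources.values()))
print(f"{total:.2f}%")  # prints 0.53%
```

The same combination rule reproduces the per-sample totals of the $E_{\rm cms}$ systematics in Table~\ref{tab:sys}.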
As a check, a different method of counting the number of surviving events has been adopted after removing the dimuon backgrounds by discarding events in the region $E^{\rm EMC}(e^+)<1~\,\mathrm{GeV}$ and $E^{\rm EMC}(e^-)<1~\,\mathrm{GeV}$. In the remaining events of the \textsc{Saturation Sample}, there is a small fraction of backgrounds (about 0.50\%), which is neglected. The resultant luminosity differs from the nominal result by 0.20\%, which is taken as the systematic uncertainty. \begin{center} \includegraphics[width=0.4\textwidth]{Figure/normPH_4620.pdf} \figcaption{Fit to the $PH_{\rm norm}$ distribution in the \textsc{Saturation Sample} from the 4620 $\,\mathrm{MeV}$ data sample. Black dots with error bars are data, the red line is the total fit, the long dashed blue line is the background dominated by the dimuon process, and the dashed green line represents saturation events.} \label{fig:saturationBKG_fit} \end{center} For the BabaYaga@NLO generator, the theoretical uncertainty of the cross section calculation is assigned to be 0.10$\%$~\cite{Balossini:2008xr}. The systematic uncertainty caused by MC statistics is estimated to be 0.05$\%$ based on the 5 million Bhabha MC events generated for each energy point. To study the effect of the $E_{\rm cms}$ uncertainty on the cross section calculation in the BabaYaga@NLO generator, the input values of $E_{\rm cms}$ have been varied within 2~$\,\mathrm{MeV}$ and the corresponding maximum change of the obtained cross section is taken as the systematic uncertainty. As a cross check, the di-photon process is used to obtain the luminosity. To select the signal candidates, we require at least two shower clusters in the EMC and no charged tracks detected in the MDC. The clusters must satisfy $|\!\cos\theta^{\rm EMC}|<0.8$.
To select back-to-back photon showers and reduce the backgrounds from Bhabha events, the opening angle of the two showers with respect to the IP is required to be larger than 178$^{\circ}$, and $\Delta\phi^{\rm EMC}$ of the two showers must be within [$-3^{\circ}$, $3^{\circ}$]. To account for the EMC saturation effect and reduce the dimuon backgrounds, the hit number of one EMC shower is required to be larger than 20. The requirements are optimized based on the inclusive MC sample. The cross section and detection efficiency are determined with the BabaYaga@NLO generator. Using Eq.~\eqref{Lum_formular}, the resulting luminosities at the different energy points are obtained, and the ratios between the measured luminosities based on the di-photon and Bhabha processes are consistent with unity, as given in Table~\ref{tab:sum}. \section{Summary} The center-of-mass energies and the integrated luminosities of the $e^+e^-$ annihilation data between 4.61~$\,\mathrm{GeV}$ and 4.95~$\,\mathrm{GeV}$ collected from the years 2020 to 2021 with the BESIII detector at the BEPCII collider have been measured with high precision. By adopting a novel method for analyzing $\Lambda_{c}^{+}\bar{\Lambda}_{c}^{-}$ pair production in electron-positron annihilations, the center-of-mass energies are measured with a precision of $\sim$0.6~$\,\mathrm{MeV}$, which is dominated by the precision of the known $\Lambda_c$ mass. The integrated luminosities of the collected data samples are measured with a precision better than 1\% by analyzing large-angle Bhabha scattering events, after taking into account the EMC saturation effect. These results offer fundamental inputs for physics analyses based on these data samples. \acknowledgments{ The BESIII collaboration thanks the staff of BEPCII and the IHEP computing center for their strong support. } \end{multicols} \vspace{-1mm} \centerline{\rule{80mm}{0.1pt}} \vspace{2mm} \begin{multicols}{2}
\section{Introduction}\label{intro} With the astounding growth of automobile ownership, a series of transport-related problems has appeared worldwide. These problems, such as greenhouse gas emissions and urban traffic congestion, have severely impacted the economy and the environment \citep{schrank20122012}. One possible approach to address these concerns is to provide ride-sharing services \citep{jin2018ridesourcing}, which require customers to specify their origins and destinations. The underlying optimization problem is usually modeled as a Dial-A-Ride Problem (DARP), which consists in designing minimum-cost routes for a fleet of vehicles to serve a set of customer requests \citep{cordeau2007dial}. Each customer request contains an origin, a destination, and a time window on either the origin or the destination. The DARP was first introduced in \cite{wilson1971scheduling} and has received considerable attention in the literature \citep{parragh2008survey,molenbruch2017typology,ho2018survey}. The standard version of the DARP aims to minimize the total routing cost while respecting operational constraints such as time windows, capacity, and duration constraints. However, as customers can share rides with others, user inconvenience must be considered while minimizing the total routing cost. In the typical DARP model, a maximum user ride time constraint is introduced for each customer request. Due to the integration of maximum user ride time and time window constraints, scheduling vehicles to begin their services as early as possible does not necessarily result in a feasible schedule for a given sequence of pickup and drop-off locations. It is possible to reduce the user ride time by allowing delays in the service start time.
Heuristic solution methods for the DARP usually apply the ``eight-step'' method of \cite{cordeau2003tabu}, which constructs the feasible schedule by sequentially minimizing the possible violations of time windows, maximum route duration, and maximum user ride time. As well as providing ride-sharing services, other recently trending approaches that help to reduce emissions and congestion include using Electric Vehicles (EVs) and developing autonomous driving technology. The employment of EVs offers the benefits of potentially fewer greenhouse gas emissions, lower energy cost per mile, and lower noise \citep{feng2013economic}. The introduction of autonomous driving leads to more flexibility in managing vehicle fleets, considerably lower operational costs, and better service quality \citep{fagnant2015operations,chen2016operations,burns2013transforming}. This article studies the Electric Autonomous DARP (E-ADARP), which was first introduced by \cite{bongiovanni2019electric}. Although the E-ADARP shares some of the constraints of the typical DARP (e.g., maximum user ride time, time window constraints), the E-ADARP differs from the typical DARP in two aspects: (i) the employment of Electric Autonomous Vehicles (EAVs) and a partial recharging policy, and (ii) a weighted-sum objective that minimizes both total travel time and total excess user ride time. The first aspect (i) requires checking battery feasibility for a given route, while the second aspect (ii) requires determining minimal-excess-time schedules for a feasible solution. The first aspect also implies other important features of the E-ADARP: (a) partial recharging is allowed en route, and (b) the maximum route duration constraints no longer exist due to the autonomy of vehicles. Allowing partial recharging introduces a trade-off between the time window and battery constraints: although longer recharging extends the driving range, it may also lead to time-window infeasibility for later nodes.
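This trade-off can be sketched concretely: with the visiting order fixed, the recharging duration at a station is bounded by the smallest remaining time-window slack among the downstream nodes. The following is a simplified illustration (it ignores waiting and service times, and all names and numbers are ours, not part of the E-ADARP formulation):

```python
def max_recharge_time(arrival_at_station, travel_after, latest_starts):
    """Upper bound on the recharging duration at a station, route order fixed.

    arrival_at_station : arrival time at the recharging station
    travel_after       : travel_after[j] = cumulative travel time from the
                         station to the j-th later node
    latest_starts      : latest_starts[j] = latest service start l_j there
    Returns the largest recharge duration that keeps every later time
    window reachable (waiting and service times are ignored here).
    """
    slacks = [
        l_j - (arrival_at_station + t_j)
        for t_j, l_j in zip(travel_after, latest_starts)
    ]
    return max(0.0, min(slacks))

# Station reached at t = 100; two later nodes lie 20 and 35 time units
# away with latest service starts 150 and 160: at most 25 time units of
# recharging keep both windows reachable.
slack = max_recharge_time(100.0, [20.0, 35.0], [150.0, 160.0])
```

Tightening the second window to 130 drives the bound to zero, which is exactly the time-window infeasibility mentioned above.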
Employing autonomous vehicles eliminates the need to predefine destination depots, as autonomous vehicles need to continuously relocate during their non-stop service. Other problem-specific constraints also increase the complexity of solving the E-ADARP. These constraints include a minimum battery level that must be maintained at the end of the route as well as limited visits to each recharging station. With these features and constraints, the possibility that the metaheuristic is trapped in local minima of poor quality increases, and feasible solutions are difficult to consistently find. This paper offers a fourfold contribution. Firstly, we propose a new approach that efficiently computes minimum excess user ride time by introducing a fragment-based representation of paths. Then, we apply an exact route evaluation scheme that executes feasibility checking in linear time. Combining these two methods, we propose an exact and efficient optimization of excess user ride time for an E-ADARP route. Secondly, we adapt a Deterministic Annealing (DA) algorithm to tackle the E-ADARP by integrating the proposed excess user ride time optimization method. To the best of our knowledge, this is the first time that a metaheuristic has been proposed to provide excess-ride-time optimal solutions for the E-ADARP. Thirdly, we demonstrate the performance of the proposed DA algorithm through extensive numerical experiments. On the previously solved instances, the DA algorithm improves the solution quality by 0.16\% on average. We provide the best solutions for 70 out of 84 instances, among which 25 are new best solutions. To further test our algorithm in solving large-scale instances, we construct new benchmark instances with up to 8 vehicles and 96 requests, and we provide 19 new solutions on newly-introduced instances. Finally, we extend the E-ADARP model to investigate the effects of allowing unlimited visits to recharging stations. 
Considering this more realistic situation lessens the major difficulties that highly constrained instances introduce for local search, which opens perspectives in modeling constraints for recharging stations. The remainder of this paper is organized as follows. Section \ref{sec::LiteratureRview} presents a comprehensive literature review on the DARP with Electric Vehicles (EVs) and Electric Vehicle Routing Problems (E-VRPs). Section \ref{sec::Problem} provides the problem definition and the notations of sets, parameters, and variables. It also discusses the objective function and constraints of the E-ADARP. Section \ref{sec::schedule} introduces the fragment-based representation of paths and the method to minimize total excess user ride time. A novel route evaluation scheme of linear time complexity is then described. Building on Section \ref{sec::schedule}, Section \ref{sec::DAalgo} presents the framework of the proposed DA algorithm and its main ingredients. In Section \ref{sec::result}, we conduct extensive computational experiments to demonstrate the performance of the proposed DA algorithm. This paper ends in Section \ref{sec::conclusion} with a summary of the results and contributions of the paper, closing with discussions of future extensions. \section{Literature Review}\label{sec::LiteratureRview} The E-ADARP combines the typical DARP and the E-VRP. However, it is distinct from both contexts as it applies a weighted-sum objective function that minimizes total travel time and total excess user ride time. This section briefly reviews the literature related to DARPs with EVs and E-VRPs. We emphasize works that apply heuristic and metaheuristic methods. We then review DARP-related articles that specifically focus on user ride time minimization. \subsection{Related literature of DARPs with EVs} \cite{masmoudi2018dial} is the first work that introduces the DARP with EVs.
In their work, EVs are recharged through battery swapping and assumed to have a constant recharging time. The authors use a realistic energy consumption model to formulate the problem and introduce three enhanced Evolutionary VNS (EVO-VNS) algorithm variants, which can solve instances with up to three vehicles and 18 requests. \cite{bongiovanni2019electric} considers EAVs in the DARP and introduces the E-ADARP. Partial recharging is allowed when vehicles visit recharging stations, and the authors impose a minimum battery level constraint on the vehicle's State of Charge (SoC) at the destination depot. The minimum battery level is formulated as $\gamma Q$, where $\gamma$ is the ratio of the minimum battery level to the total battery capacity, and $Q$ is the total battery capacity. Three different $\gamma$ values are analyzed, i.e., $\gamma \in \{0.1,0.4,0.7\}$, meaning that 10\%, 40\%, and 70\% of the total battery capacity must be maintained at the destination depot. Solving the problem becomes more difficult as $\gamma$ increases. The authors formulate the problem as a three-index and a two-index model and introduce new valid inequalities in a Branch-and-Cut (B\&C) algorithm. In the cases of $\gamma = 0.1, 0.4$, the proposed B\&C algorithm obtains optimal solutions for 42 out of 56 instances. However, in the case of $\gamma = 0.7$, the B\&C algorithm optimally solves 10 out of 28 instances, and 9 instances cannot be solved feasibly, even within two hours. The largest instance that can be solved optimally by the B\&C algorithm contains 5 vehicles and 40 requests. No heuristic or metaheuristic algorithm currently exists for the E-ADARP. \subsection{Related literature of E-VRPs} Extensive works have been conducted in the field of E-VRPs, e.g., \cite{erdougan2012green, schneider2014electric, goeke2015routing, hiermann2016electric, hiermann2019routing}. Among them, \cite{erdougan2012green} is the first to propose a Green VRP (G-VRP) using alternative fuel vehicles.
These vehicles are allowed to visit a set of recharging stations during their trips. The authors adapt two constructive heuristics to obtain feasible solutions and further enhance these heuristics by applying local search. However, the proposed model does not consider capacity restrictions and time window constraints. \cite{schneider2014electric} propose a more comprehensive model named the Electric Vehicle Routing Problem with Time Windows (E-VRPTW). They extend the work of \cite{erdougan2012green} by using electric vehicles and considering limited vehicle capacity and specified customer time windows. They apply a Variable Neighborhood Search (VNS) algorithm hybridized with Tabu Search in the local search to address the E-VRPTW. The recharging stations are inserted or removed by a specific operator, and the recharged energy is assumed to grow linearly with the recharging time. They apply a full recharging policy on each visit to a recharging station. All vehicles are assumed to be identical in terms of vehicle and battery capacity. \cite{goeke2015routing} extend the homogeneous E-VRPTW by considering a mixed fleet of electric and conventional vehicles. A realistic energy consumption model that integrates speed, load, and road gradient is employed. To address the problem, they propose an Adaptive Large Neighborhood Search (ALNS) algorithm using a surrogate function to evaluate violations efficiently. \cite{hiermann2016electric} extend the work of \cite{goeke2015routing} by taking into account the heterogeneous aspect (i.e., fleet composition). They solve the problem by ALNS and determine the positions of recharging stations via a labeling algorithm. The recharging policy considered is also full recharging with a constant recharging rate. \cite{hiermann2019routing} extend their previous study by considering partial recharging for a mixed fleet of conventional, plug-in hybrid, and electric vehicles.
The engine mode selection for plug-in hybrid vehicles is considered as a decision variable in their study. A layered optimization algorithm is presented. This algorithm combines labeling techniques and a greedy route evaluation policy to calculate the amount of energy to be charged and to determine the engine mode and energy types. This algorithm is finally hybridized with a set partitioning problem to generate better solutions from the obtained routes. More recently, \cite{lam2022branch} investigate a more practical case of the E-VRPTW in which the availability of chargers at the recharging stations is considered. They propose a B\&C\&P algorithm which is capable of solving instances with up to 100 customers. \subsection{Minimizing total or excess user ride time in DARPs } \label{literature about minimizing excess user ride time} There are several examples where a service-quality-oriented objective is considered in the context of the DARP (e.g., \cite{parragh2009heuristic,parragh2011introducing,paquette2013combining,molenbruch2017multi,bongiovanni2022ride}). Among them, only three articles consider total user ride time/total excess user ride time as a second objective. In the work of \cite{parragh2009heuristic}, a two-phase heuristic method is developed. A set of non-dominated solutions is constructed, minimizing a weighted sum of total distance traveled and mean user ride time under different weight combinations. In the route evaluation, the authors point out that the ``eight-step'' method of \cite{cordeau2003tabu} does not aim to minimize the total user ride time. An increase in user ride time may happen when delaying the service start time at destination nodes. Therefore, they improve the original scheme of the ``eight-step'' method by adapting the computation of the forward time slack to avoid any increase in excess user ride time for requests served on a route.
The resulting scheme is more restrictive in terms of feasibility and may lead to incorrect infeasibility declarations. This drawback is tackled in the scheduling heuristic proposed by \cite{molenbruch2017multi}. The heuristic starts by constructing a schedule (which may be infeasible) by setting the excess ride time of each request to its lower bound. Then, it gradually removes the infeasibility by shifting the service start time at some nodes while minimizing excess user ride time. However, the scheduling procedures developed in \cite{parragh2009heuristic} and \cite{molenbruch2017multi} are not proven to minimize user ride time optimally for a given route. \cite{bongiovanni2022ride} first proposes an exact scheduling procedure that minimizes the excess user ride time for a path without charging stations. The time complexity of this procedure is $\mathcal{O}(M^2)$ for a sequence of length $M$. Then, the authors extend the proposed scheduling procedure to the E-ADARP by integrating a battery management heuristic. However, the obtained schedules for an E-ADARP route are no longer exact, as the excess-time-optimal schedules may not be battery-feasible. To the best of our knowledge, no work in the literature handles excess user ride time minimization exactly in the E-ADARP. \subsection{Conclusion and proposed solution methodology} From our review, we conclude that the effect of electric vehicles on the DARP has rarely been investigated in the previous literature. \cite{bongiovanni2019electric} is the only work that conducts a comprehensive study to optimize the DARP with EVs. However, the proposed B\&C algorithm requires important run-times and has difficulties providing high-quality solutions when solving medium- to large-sized instances, which limits its application in practice.
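For a fixed schedule, the quantity these scheduling procedures target is cheap to evaluate; the sketch below uses our own notation ($B_i$ for the service start time at node $i$, $s_i$ for the service duration) and invented numbers:

```python
def excess_ride_time(b_pickup, s_pickup, b_dropoff, t_direct):
    """Excess user ride time of a single request.

    The actual ride time is the service start at the drop-off minus the
    departure from the pickup (service start + service duration); the
    excess is measured against the direct travel time between the two
    nodes.
    """
    actual = b_dropoff - (b_pickup + s_pickup)
    return actual - t_direct

# A request picked up at t = 10 (3 time units of service) and dropped
# off at t = 40, with a direct travel time of 20:
# 40 - (10 + 3) - 20 = 7 units of excess ride time.
excess = excess_ride_time(10.0, 3.0, 40.0, 20.0)
```

The hard part, which the papers above address, is choosing the service start times so that the sum of these quantities is minimal while all constraints stay satisfied.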
This limitation of \cite{bongiovanni2019electric} motivates us to propose an efficient metaheuristic algorithm that can provide high-quality solutions for E-ADARP instances within reasonable computational time. The efficiency of a metaheuristic largely depends on its neighborhood search mechanisms, which perform a large number of evaluations. In the case of the DARP, these are route evaluations and cost computations. These two tasks are more complicated in the E-ADARP than in the DARP, as we allow partial recharging and minimize total excess user ride time for a given route. Existing scheduling procedures only obtain an approximation of the minimum excess user ride time, which may deteriorate the solution quality and mislead the search direction. Moreover, these procedures are time-consuming when applied in a metaheuristic, as they are usually of quadratic time complexity and may introduce numerous repeated computations. Lastly, the battery constraints and the partial recharging policy increase the complexity of route evaluation in the E-ADARP. To overcome these issues, we propose an exact method of linear time complexity to compute the cost and evaluate the feasibility of an E-ADARP route based on battery-restricted fragments in Section \ref{sec::schedule}. Repeated computations are avoided via fragment enumeration in the preprocessing phase (Section \ref{sec::preproc}). These methods pave the way for an efficient DA algorithm (see Section \ref{sec::DAalgo}) and yield high-quality solutions for all instances (see Section \ref{sec::result}). \section{The E-ADARP Description} \label{sec::Problem} In this section, we present the mathematical notation of the E-ADARP that is used throughout the paper. Then, the objective function and the constraints of the E-ADARP are introduced. Finally, we discuss the practical interest of extending the original problem to allow unlimited visits to recharging stations.
\subsection{Notation and problem statement} The problem is defined on a complete directed graph $G=(V,A)$, where $V$ represents the set of vertices and $A$ is the set of arcs, i.e., $A = \{(i,j):i,j \in V, i \neq j\}$. $V$ can be further partitioned into several subsets, i.e., $V= N \cup S \cup O \cup F$, where $N$ represents the set of all customers, $S$ is the set of recharging stations, and $O$ and $F$ denote the sets of origin depots and destination depots, respectively. The set of all pickup vertices is denoted as $P =\{1,\cdots,i,\cdots,n\}$ and the set of all drop-off vertices is denoted as $D =\{n+1,\cdots,n+i,\cdots,2n\}$. The union of $P$ and $D$ is $N$, i.e., $N = P \cup D$. Each customer request is a pair $(i,n+i)$ for $i \in P$, and the maximum ride time for users associated with request $i$ is denoted $m_i$. A time window $[e_i,l_i]$ is defined on each node $i\in V$, where $e_i$ and $l_i$ represent the earliest and latest time at which the vehicle can start its service, respectively. A load $q_i$ and a service duration $s_i$ are also associated with each node $i \in V$. For a pickup node $i \in P$, $q_i$ is positive. For the corresponding drop-off node $n+i$, we have $q_{n+i} = -q_i$. For the other nodes $j \in O \cup F \cup S$, $q_j$ and $s_j$ are equal to zero. In this article, all customer requests are known at the beginning of the planning horizon $T_p$, i.e., we tackle the static E-ADARP. Each vehicle $k \in K$ must start from an origin depot $o \in O$ and end at a destination depot $f \in F$. In this study, the number of origin depots is equal to the number of vehicles, i.e., $|O| = |K|$. However, the set of destination depots can be larger than the set of origin depots, namely $|F| \geqslant |O|$, which means a vehicle can select its depot from $F$ at the end of the route. 
An E-ADARP \emph{route} is defined as a path in graph $G$ passing through an origin and a destination depot that satisfies pairing and precedence, load, battery, time window, and maximum user ride time constraints. The E-ADARP consists in designing $K$ routes, one for each vehicle, so that all customer nodes are visited exactly once, each recharging station and destination depot is visited at most once, and the weighted-sum objective function (presented in Section \ref{obj E-ADARP}) is minimized. Vehicles are assumed to be heterogeneous in terms of their maximum vehicle capacities (denoted as $C_k$) and homogeneous in terms of battery capacities (denoted as $Q$). The travel time on each arc $(i,j) \in A$ is denoted as $t_{i,j}$ and the battery consumption as $b_{i,j}$. We assume that $b_{i,j}$ is proportional to $t_{i,j}$, i.e., $b_{i,j} = \beta t_{i,j}$, with $\beta$ being the energy discharging rate. When a vehicle recharges at a recharging station, the energy recharged is proportional to the time spent at the facility. The recharging rate is denoted as $\alpha$. To avoid numerical problems when converting between time and energy, we define $h_{i,j} = b_{i,j}/\alpha$ as the time needed to recharge the amount of energy $b_{i,j}$ consumed on arc $(i,j)$. Similarly, we can convert the current energy level to the time needed to recharge to this level. Let $H$ denote the time required to recharge from zero to the full battery capacity $Q$. Partial recharging is allowed when a vehicle visits a recharging station, and a minimum battery level $\gamma Q$ must be respected at destination depots, where $\gamma \in \{0.1,0.4,0.7\}$. The triangle inequality is assumed to hold for travel times and battery consumption. 
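As a minimal illustration of these conversions (the helper names are ours, not from an actual implementation), the following sketch computes $b_{i,j} = \beta t_{i,j}$, $h_{i,j} = b_{i,j}/\alpha$, and $H = Q/\alpha$:

```python
# Minimal sketch of the time/energy conversions described above.
# `beta` (discharge rate), `alpha` (recharge rate), and `Q` (battery
# capacity) are instance parameters; the helper names are illustrative.

def battery_consumption(t_ij, beta):
    """Energy consumed on arc (i, j): b_ij = beta * t_ij."""
    return beta * t_ij

def recharge_time(b_ij, alpha):
    """Time h_ij needed to recharge the amount of energy b_ij."""
    return b_ij / alpha

def full_recharge_time(Q, alpha):
    """H: time needed to recharge from zero to full capacity Q."""
    return Q / alpha
```

Working in time units throughout (rather than mixing time and energy) is what avoids the numerical conversion issues mentioned above.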
\subsection{Objective function of the E-ADARP } \label{obj E-ADARP} A weighted-sum objective is considered in this paper, which includes the total travel time over all vehicles $k \in K$ and the total excess user ride time over all customer requests $i \in P$. Equation (\ref{objective}) presents the objective function. The merit of including the total excess user ride time in the objective is that it may improve service quality: under a strict lexicographic minimization, the total excess user ride time is reduced without any increase in the first objective. The objective function is: \begin{equation} \label{objective} \min w_1\sum\limits_{k \in K}\sum\limits_{i,j \in V}t_{i,j}x_{i,j}^k + w_2\sum\limits_{i \in P}R_i \end{equation} where $x_{i,j}^k$ is a binary decision variable which denotes whether vehicle $k$ travels from node $i$ to $j$. $R_i$ denotes the excess user ride time of request $i \in P$, formulated as the difference between the actual ride time and the direct travel time from $i$ to $n+i$. $w_1$ and $w_2$ are the weight factors of the two objectives, and we follow the settings in \cite{bongiovanni2019electric}: $w_1 = 0.75, w_2 = 0.25$. We report in Table \ref{tab::notation} the notations and definitions for sets and parameters. 
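For a candidate solution, the objective value of Equation (\ref{objective}) can be evaluated as sketched below (an illustrative helper, assuming the travel times of all used arcs and the per-request excess ride times have already been collected):

```python
def weighted_objective(arc_times, excess_times, w1=0.75, w2=0.25):
    """Weighted-sum objective: w1 * total travel time over all arcs
    used by the K routes + w2 * total excess user ride time over all
    requests. Helper name and list-based inputs are illustrative."""
    return w1 * sum(arc_times) + w2 * sum(excess_times)
```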
\begin{table}[ht] \renewcommand\arraystretch{0.8} \caption{The E-ADARP problem sets, parameters notations and descriptions} \label{tab::notation} \begin{center} \begin{tabular}{c c} \toprule Sets& Definitions\\ \hline $N = \{1,\cdots,n,n+1,\cdots,2n\}$& Set of pickup and drop-off nodes \\ $P=\{1,\cdots,i,\cdots,n\}$& Set of pickup nodes \\ $D=\{n+1,\cdots,n+i,\cdots,2n\}$& Set of drop-off nodes \\ $K=\{1,\cdots,k\}$& Set of available vehicles \\ $O=\{o_1,o_2,\cdots,o_k\}$& Set of origin depots \\ $F=\{f_1,f_2,\cdots,f_h\}$& Set of all available destination depots (supposing the total number is $h$)\\ $S=\{s_1,s_2,\cdots,s_g\}$& Set of recharging stations (supposing the total number is $g$) \\ $V= N \cup S \cup O \cup F$& Set of all nodes \\ \midrule Parameters& Definitions\\ \hline $t_{i,j}$& Travel time from location $i\in V$ to location $j\in V$ \\ $b_{i,j}$& Battery consumption from location $i\in V$ to location $j\in V$ \\ $h_{i,j}$& The time needed for recharging $b_{i,j}, i,j\in V$ \\ $e_i$& Earliest time at which service can begin at $i\in V$ \\ $l_i$& Latest time at which service can begin at $i\in V$ \\ $s_i$& Service duration at $i\in V$ \\ $q_i$& Change in load at $i\in N$ \\ $m_i$& Maximum user ride time for request $i\in P$ \\ $C_k$& The vehicle capacity of vehicle $k$ \\ $Q$& The battery capacity \\ $\alpha$& The recharged energy per time unit\\ $\beta$& The discharged energy per time unit\\ $T_p$& Planning horizon\\ $\gamma \in \{0.1,0.4,0.7\}$& The ratio of minimum battery level at destination depot to $Q$ \\%minimum battery level ratio \\ $w = \{0.75,0.25\}$&The weight factor for total travel time and total excess user ride time \\ \bottomrule \end{tabular} \end{center} \end{table} \subsection{Constraints of the E-ADARP } \label{constraints E-ADARP} The E-ADARP consists of the following features that are different from the typical DARPs: \begin{itemize} \item [1)] Battery limitation and minimum battery level restriction, which introduce the detour 
to recharging stations; \item [2)] Partial recharging is allowed at recharging stations, and the recharging time must be determined; \item [3)] Vehicles are located at different origin depots and select their destination depot from a set of destination depots; \item [4)] Maximum route duration constraints are removed due to the autonomy of vehicles. \end{itemize} A solution of the E-ADARP is a set of $|K|$ routes, and it is called ``feasible'' when all the following constraints are satisfied: \begin{enumerate} \item Every route starts from an origin depot and ends at a destination depot; \item For each request, the corresponding pickup and drop-off nodes belong to the same route, and the pickup node is visited before its drop-off node; \item User nodes and origin depots are visited exactly once, while each destination depot is visited at most once; \item The maximum vehicle capacity must be respected at each node; \item Each node $i \in V$ is visited within its time window $[e_i,l_i]$. A vehicle can arrive earlier than $e_i$, in which case waiting time occurs at $i$, but it cannot arrive later than $l_i$; \item The maximum user ride time is not exceeded for any of the users; \item The battery level at the destination depot must be at least equal to the minimum battery level; \item The battery level at any node of a route cannot exceed the battery capacity and cannot be negative; \item A recharging station can only be visited when there is no passenger on board; \item Each recharging station is visited at most once over all vehicles. \end{enumerate} \begin{figure}[!htp] \centering \includegraphics[width=.75\linewidth]{example_E_ADARP.png} \caption{\centering A solution of an E-ADARP instance} \label{example figure} \end{figure} Figure \ref{example figure} presents a solution of an E-ADARP instance that includes 4 vehicles and 16 requests. Each request consists of a pickup node (denoted as $i+$) and the corresponding drop-off node $i-$. 
If minimum battery level constraints are not satisfied, vehicles must make detours to recharging stations before returning to destination depots. Each vehicle starts from a different origin depot and returns to a different destination depot. Each recharging station is visited at most once, and no passenger is on board while recharging. \subsection{Multiple visits at recharging stations?}\label{sec::multiple?} Each E-ADARP instance of \cite{bongiovanni2019electric} only contains a few recharging stations. In \cite{bongiovanni2019electric}, the authors first restrict each recharging station to at most one visit. Then, they investigate the effect of allowing multiple visits to recharging stations by replicating the set $S$. The number of visits to a recharging station must therefore be predefined in their case, which seems unrealistic in practice. In our work, we remove this restriction and allow unlimited visits to the recharging stations in Section \ref{multiple section}. Given the time windows and the minimum energy restriction at destination depots, visiting recharging stations more frequently increases the solution cost and the risk of violating time window constraints. We also conduct a sensitivity analysis on the maximum number of charging visits per station (denoted as $n_{as}$), and we run our DA algorithm under different settings of $n_{as}$ ($n_{as} = \{1,2,3,\infty\}$). \section{Excess User Ride Time Optimization} \label{sec::schedule} The idea of our excess user ride time optimization method is as follows. We first introduce a fragment-based representation of paths, which extends the one proposed in \cite{rist2021new} by additionally considering battery constraints to ensure overall route feasibility in terms of energy consumption. Based on this representation of paths, each E-ADARP route can be represented by a series of battery-restricted fragments (see Definition \ref{battery restricted fragment}). 
Then, we prove in Theorem \ref{theorem1} that the minimum total excess user ride time of a feasible route can be determined by summing the minimum excess user ride time of each \BRFrag. Following this idea, we enumerate all the feasible \BRFrags and calculate their minimum excess user ride times in the preprocessing phase (shown in Section \ref{sec::preproc}). With all the feasible fragments and their minimum excess user ride times obtained, we only need to check the feasibility of the route, which is realized via an exact route evaluation scheme of linear time complexity. \subsection{Representation of paths} \label{reprentation} The most important characteristic of the E-ADARP is the incorporation of the total excess user ride time in the objective function as well as the maximum user ride time in the constraints. Usually, the maximum user ride time constraints can be tackled by calculating the forward time slack and delaying the service start time at some nodes (e.g., \cite{kirchler2013granular,parragh2009heuristic}). To minimize the total excess user ride time, we emphasize one important point: the total excess user ride time can only be minimized once the vehicle has finished its deliveries (i.e., there is no open request on the path). We then introduce the battery-restricted fragment: \begin{definition}[Battery-restricted fragment] \label{battery restricted fragment} Let $\mathcal{F} = (i_1,i_2, \cdots,i_k)$ be a sequence of pickup and drop-off nodes such that the vehicle arrives empty at $i_1$, leaves empty at $i_k$, and has passenger(s) on board at all other nodes. Then, we call $\mathcal{F}$ a \BRFrag if there exists a feasible route of the form: \begin{equation} \label{fragment definition} (o,s_{i_1},\cdots,s_{i_v},\overbrace{i_1,i_2, \cdots,i_k}^{\mathcal{F}},s_{i_{v+1}},\cdots,s_{i_m},f) \end{equation} where $s_{i_1},\cdots,s_{i_v},s_{i_{v+1}},\cdots,s_{i_m} (v,m \geqslant 0)$ are recharging stations, $o \in O$, and $f \in F$. 
\end{definition} It should be noted that, if no recharging station is required in the route of Definition \ref{battery restricted fragment}, i.e., $v=m=0$ in Equation (\ref{fragment definition}), the battery-restricted fragment is equivalent to the one defined in \cite{rist2021new}. Figure \ref{battery-restricted fragment example} presents an example of a feasible route which consists of two battery-restricted fragments, i.e., $\mathcal{F}_1 = \{1+,2+,1-,2-\}$ and $\mathcal{F}_2 = \{3+,3-\}$. Note that $\mathcal{F}_1 \cup \mathcal{F}_2$ is not a battery-restricted fragment, as the vehicle becomes empty at the intermediate nodes 2- and 3+. Based on this definition, each E-ADARP route can be regarded as the concatenation of several battery-restricted fragments, recharging stations (if required), an origin depot, and a destination depot. \begin{figure}[!htp] \centering \includegraphics[width=16cm]{battery_restricted_fragment.png} \caption{\centering Example of \BRFrags} \label{battery-restricted fragment example} \end{figure} Clearly, on each \BRFrag (hereinafter referred to as ``fragment''), the minimum excess user ride time can be calculated exactly. We prove in the next section (Theorem \ref{theorem1}) that the minimum excess user ride time of a route $\mathcal{R}$ can be calculated by summing the minimum excess user ride time of each fragment $\mathcal{F}_i \subseteq \mathcal{R}$. Then, we only need to focus on optimizing the excess user ride time of each fragment. \subsection{Excess user ride time optimization for a fragment} \label{ERT optimization} Let $EU_{min}(\mathcal{R})$ and $EU_{min}(\mathcal{F})$ be the minimum excess user ride time over route $\mathcal{R}$ and fragment $\mathcal{F}$, respectively. We have the following theorem. 
\begin{theorem} \label{theorem1} If $\mathcal{R}$ is a feasible route and $\mathcal{F}_1, \mathcal{F}_2, \cdots, \mathcal{F}_n$ are all the fragments on $\mathcal{R}$, then $EU_{min}(\mathcal{R}) = EU_{min}(\mathcal{F}_1) + EU_{min}(\mathcal{F}_2) + \cdots + EU_{min}(\mathcal{F}_n)$. \end{theorem} \begin{proof} In this proof, a schedule is called ``optimal'' if it has minimal excess user ride time. Assume that $\mathcal{T} = [\cdots,T_v,\cdots]_{v\in \mathcal{R}}$ is an optimal schedule of route $\mathcal{R}$, where $T_v$ is the service start time at node $v$, and the arrival time at node $v$ is $arr_v = T_{v-1} + t_{v-1,v} + s_{v-1}$. To prove the theorem, it is enough to show that for each fragment $\mathcal{F}_i \subseteq \mathcal{R}$, the restricted schedule $\mathcal{T}|_{\mathcal{F}_i} = [\cdots,T_v,\cdots]_{v\in \mathcal{F}_i}$ over $\mathcal{F}_i$ is also an optimal schedule for $\mathcal{F}_i$. To simplify the notation, we denote $\mathcal{T}|_{\mathcal{F}_i}$ as $\mathcal{T}_i$. Our proof distinguishes two cases: \begin{enumerate} \item $arr_v= T_v$ for all $v\in \mathcal{F}_i$. In this case, the vehicle starts service upon arrival at each node of $\mathcal{F}_i$. Clearly, $\mathcal{T}_i$ is also an optimal schedule over $\mathcal{F}_i$ as the waiting time on $\mathcal{F}_i$ is zero, and the proof is finished; \item $arr_v< T_v$ for some $v\in \mathcal{F}_i$. In this case, waiting time is generated at some nodes. Let $v_1 \in \mathcal{F}_i$ be the first node such that $arr_{v_1}< T_{v_1}$ and $v_2 \in \mathcal{F}_i$ be the last node such that $arr_{v_2}< T_{v_2}$. Then we derive the following properties of $\mathcal{T}_i$: \begin{itemize} \item[(i)] $T_{v_0}=l_{v_0}$ for some $v_0<\footnote{we say $v_0< v_1$ if $v_0$ is a node before $v_1$ in the route and $v_0\neq v_1$.} v_1, v_0\in \mathcal{F}_i$. If not, we have $\Delta_1 = \min\big\{T_{v_1}-arr_{v_1}, \{l_{v}-T_v\}_{v< v_1, v\in \mathcal{F}_i}\big\}>0$. 
We can obtain a new feasible schedule $\mathcal{T}_1$ by delaying the service start time of each node $v< v_1, v\in \mathcal{F}_i$ to $T_v'= T_v + \Delta_1$. The excess user ride time of $\mathcal{T}_1$ is at least $\Delta_1$ smaller than that of $\mathcal{T}$, which contradicts our assumption that $\mathcal{T}$ is an optimal schedule; \item[(ii)] $T_{v_3}=e_{v_3}$ for some $v_3\geqslant v_2, v_3\in \mathcal{F}_i$. If not, we have $\Delta_2 = \min\big\{T_{v_2}-arr_{v_2}, \{T_v-e_v\}_{v\geqslant v_2, v\in \mathcal{F}_i}\big\}>0$. We can obtain a new feasible schedule $\mathcal{T}_2$ by advancing the service start time of each node $v\geqslant v_2, v\in \mathcal{F}_i$ to $T_v''= T_v - \Delta_2$. The excess user ride time of $\mathcal{T}_2$ is at least $\Delta_2$ smaller than that of $\mathcal{T}$, which contradicts our assumption that $\mathcal{T}$ is an optimal schedule. \end{itemize} Based on (i) and (ii), denoting by $v_s$ and $v_e$ the first and the last node of $\mathcal{F}_i$, we derive that every feasible schedule for $\mathcal{F}_i$ must satisfy the following two points: \begin{itemize} \item [(iii)] Since we have $arr_v= T_v$ for all $v<v_0 < v_1$ and $T_{v_0}=l_{v_0}$, no feasible schedule over $\mathcal{F}_i$ can begin service at $v_s$ later than $T_{v_s}$ ($T_{v_s}$ is thus the latest possible service start time at $v_s$); otherwise, the latest time window $l_{v_0}$ at node $v_0$ would be violated; \item [(iv)] Since we have $arr_v= T_v$ for all $v_2 \leqslant v_3 < v$ and $T_{v_3}=e_{v_3}$, no feasible schedule over $\mathcal{F}_i$ can arrive at $v_e$ earlier than $arr_{v_e}$. \end{itemize} Assume that $\mathcal{T}_i^* = [\cdots,T_v^*,\cdots]_{v\in \mathcal{F}_i} $ is an optimal schedule of $\mathcal{F}_i$, and that the arrival time at $v$ is $arr_v^* = T_{v-1}^* + t_{v-1,v}+s_{v-1}$. Now, we prove that the excess user ride time of $\mathcal{T}_i$ is the same as that of $\mathcal{T}_i^*$ using the above properties. 
Note that we are still under the condition that $arr_v< T_v$ for some $v\in \mathcal{F}_i$. According to (iii) and (iv), we have $T_{v_s}^*\leqslant T_{v_s}, arr_{v_e}^*\geqslant arr_{v_e}$ for an optimal schedule $\mathcal{T}_i^*$ over $\mathcal{F}_i$. Clearly, $\mathcal{T}_i^*$ satisfies $ EU_{min}(\mathcal{T}_i^*)\leqslant EU_{min}(\mathcal{T}_i)$. Next, we prove that $ EU_{min}(\mathcal{T}_i^*)= EU_{min}(\mathcal{T}_i)$, which implies that $\mathcal{T}_i$ is an optimal schedule over $\mathcal{F}_i$. Our proof distinguishes two cases: \begin{enumerate} \item If $arr_v^* = T_v^*$ for all $v\in \mathcal{F}_i$: As we have $T_{v_s}^*\leqslant T_{v_s}$, then $arr_{v_e}^*\leqslant arr_{v_e}$. Therefore, we derive that $arr_{v_e}^*= arr_{v_e}$, $T_{v_s}^*= T_{v_s}$. As we assume in this case that $arr_v^* = T_v^*$ for all $v\in \mathcal{F}_i$, we must have $T_v = T_v^*$ for all $v\in \mathcal{F}_i$. This contradicts our assumption that $arr_v< T_v$ for some $v\in \mathcal{F}_i$. Therefore, this case cannot happen; \item If $arr_v^* < T_v^*$ for some $v\in \mathcal{F}_i$: Then we can prove the same results as in (i), (ii), and (iii) for $T_v^*$ in the same manner. Then $T_{v_s}\leqslant T_{v_s}^*, arr_{v_e}\geqslant arr_{v_e}^*$, and thus we derive $T_{v_s}= T_{v_s}^*, arr_{v_e}= arr_{v_e}^*$. Then we have $EU_{min}(\mathcal{T}_i^*)=EU_{min}(\mathcal{T}_i)$. Otherwise, if $EU_{min}(\mathcal{T}_i^*)< EU_{min}(\mathcal{T}_i)$, we could obtain a new feasible schedule $\mathcal{T}'$ over $\mathcal{R}$ from $\mathcal{T}$ by replacing $\mathcal{T}_i$ with $\mathcal{T}_i^*$, and $\mathcal{T}'$ would have a smaller excess user ride time than $\mathcal{T}$, which is a contradiction. \end{enumerate} \end{enumerate} \end{proof} Based on Theorem \ref{theorem1}, we convert the optimization of the total excess user ride time of route $\mathcal{R}$ into the optimization of the excess user ride time on each of its fragments $\mathcal{F} \subseteq \mathcal{R}$. 
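As an illustration of this decomposition, the following sketch (with hypothetical helper names) splits a route into its battery-restricted fragments by tracking the onboard load and closing a fragment whenever the vehicle becomes empty again:

```python
def split_into_fragments(route, q):
    """Split a route into battery-restricted fragments: maximal runs of
    user nodes on which the vehicle arrives empty, leaves empty, and
    carries passengers in between. `q[v]` is the load change at node v
    (zero at depots and recharging stations). Illustrative sketch."""
    fragments, current, onboard = [], [], 0
    for v in route:
        if q[v] == 0:            # depot or recharging station: skip
            continue
        current.append(v)
        onboard += q[v]
        if onboard == 0:         # vehicle empty again: close fragment
            fragments.append(current)
            current = []
    return fragments
```

On the route of Figure 3, this yields the two fragments $\mathcal{F}_1$ and $\mathcal{F}_2$ discussed above.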
Clearly, we can calculate the minimum excess user ride time directly if no waiting time is generated on a fragment. When waiting time is generated, the minimum excess user ride time can still be computed directly if fragment $\mathcal{F}$ only contains a direct trip from one pickup node to the corresponding drop-off node. When $\mathcal{F}$ contains two or more requests and waiting time is generated for some $i \in \mathcal{F}$, minimizing the excess user ride time of $\mathcal{F}$ amounts to assigning the right amount of waiting time to each node of $\mathcal{F}$. To obtain the minimum excess user ride time, we solve the following Linear Program (LP), where $P_\mathcal{F}$ denotes the set of requests served on fragment $\mathcal{F}$: \begin{equation} \label{objective R} \min \sum\limits_{i \in P_\mathcal{F}}R_i \end{equation} s.t. \begin{equation} \label{TW3} T_i + s_i + t_{i,j} \leqslant T_j, \quad \forall i \in \mathcal{F}, \quad idx_j = idx_i + 1, idx_i \neq |\mathcal{F}| \end{equation} \begin{equation} \label{user ride 1} T_{n+i} - (T_i + s_i) \leqslant m_i, \quad \forall i \in P_\mathcal{F} \end{equation} \begin{equation} \label{user ride 2} T_{n+i} - T_i - s_i - t_{i,n+i} \leqslant R_i, \quad \forall i \in P_\mathcal{F} \end{equation} \begin{equation} \label{TW2} e_i \leqslant T_i \leqslant l_i, \quad \forall i \in \mathcal{F} \end{equation} \begin{equation} R_i \geqslant 0, \quad \forall i \in P_\mathcal{F} \end{equation} where $T_i$ denotes the service start time at node $i$ and $idx_i$ is the index of node $i$ on the fragment. The objective function minimizes the total excess user ride time of $\mathcal{F}$. Constraints (\ref{TW3}) ensure time consistency between consecutive nodes. Constraints (\ref{user ride 1}) and (\ref{user ride 2}) are the user ride time constraints, and constraints (\ref{TW2}) enforce the time windows. Note that the maximum user ride time and vehicle capacity constraints are ensured when we generate fragments (as explained in Section \ref{sec::preproc}). 
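This LP can be solved with any standard solver; as a lightweight illustration (pure Python, hypothetical helper names), the following sketch evaluates a given candidate schedule against the same constraints and returns its total excess user ride time:

```python
def schedule_excess_time(nodes, requests, T, t, s, e, l, m):
    """Evaluate a candidate schedule T (service start times) for one
    fragment against the LP constraints above; return the total excess
    user ride time, or None if T is infeasible. `requests` holds
    (pickup, drop-off) pairs; t, s, e, l, m are the instance data."""
    for i, j in zip(nodes, nodes[1:]):          # time consistency
        if T[i] + s[i] + t[i, j] > T[j]:
            return None
    for v in nodes:                             # time windows
        if not (e[v] <= T[v] <= l[v]):
            return None
    total = 0.0
    for p, d in requests:
        ride = T[d] - T[p] - s[p]
        if ride > m[p]:                         # max user ride time
            return None
        total += max(0.0, ride - t[p, d])       # excess ride time R_i
    return total
```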
If a route $\mathcal{R}$ contains an infeasible fragment, it is discarded directly without further evaluation. \subsection{Exact route evaluation scheme of linear time complexity} \label{exact route evaluation} One challenge of the E-ADARP is tackling the trade-off between recharging time and time window constraints. A longer recharging time will extend the driving range and is beneficial to meet the energy restriction at the destination depot. However, the vehicle risks violating the time window constraints for the succeeding nodes. These two aspects interact, and it is hard to check the feasibility of a generated route (denoted as $\mathcal{R}$). We construct an exact route evaluation scheme of linear time complexity based on the forward labeling algorithm of \cite{desaulniers2016exact}. To the best of our knowledge, it is the first time an exact route evaluation scheme is developed to handle the DARP with EVs. Given a route $\mathcal{R}$, we associate each node $i \in \mathcal{R}$ with a label $L_i :=\{(T_i^{rch_s})_{s \in S},T_i^{tMin},T_i^{tMax},T_i^{rtMax}\}$ including four resource attributes. We denote $\mathcal{P}_i$ as the partial path from the first node of $\mathcal{R}$ until node $i$. 
The definition of each resource attribute is as follows: \begin{enumerate} \item $T_i^{rch_s}$: The number of times recharging station $s \in S$ is visited along $\mathcal{P}_i$; \item $T_i^{tMin}$: The earliest service start time at vertex $i$ assuming that, if a recharging station is visited prior to $i$ along $\mathcal{P}_i$, a minimum recharge (ensuring the battery feasibility up to $i$) is performed; \item $T_i^{tMax}$: The earliest service start time at vertex $i$ assuming that, if a recharging station is visited prior to $i$ along $\mathcal{P}_i$, a maximum recharge (ensuring the time-window feasibility up to $i$) is performed; \item $T_i^{rtMax}$: In order to propagate the information along the path, we make the artificial assumption that vehicles can be recharged at all vertices (in reality, a vehicle never visits a recharging station when passengers are on board). Under this assumption, $T_i^{rtMax}$ denotes the maximum recharging time required to fully recharge the vehicle at vertex $i$, assuming that, if a recharging station is visited prior to $i$ along $\mathcal{P}_i$, a minimum recharge (ensuring the battery feasibility up to $i$) is performed. \end{enumerate} The initial label is defined as $\{ (\overbrace{0,\cdots,0}^{|S| \text{ times}}),0,0,0\}$. 
We compute the succeeding label $L_j$ from the previous label $L_i$ by Resource Extension Functions (REFs): \begin{equation} T_j^{rch_s}= T_i^{rch_s} + \begin{cases} 1,\quad & \text{if $j = s$} \\ 0, \quad & \text{otherwise} \end{cases} \end{equation} \begin{equation} T_j^{tMin}= \begin{cases} \max \{e_j,T_i^{tMin}+t_{i,j}+s_i\} , \quad &\text{if $T_i^{rch}= \emptyset$} \\ \max \{e_j,T_i^{tMin}+t_{i,j}+s_i\}+Z_{i,j}, \quad &\text{otherwise} \end{cases} \end{equation} \begin{equation} T_j^{tMax}= \begin{cases} \min \{l_j,\max\{e_j,T_i^{tMin}+T_i^{rtMax}+t_{i,j}+s_i\}\}, \quad & \text{if $i \in S$} \\ \min \{l_j,\max\{e_j,T_i^{tMax}+t_{i,j}+s_i\}\}, \quad & \text{otherwise} \end{cases} \end{equation} \begin{equation} T_j^{rtMax}= \begin{cases} T_i^{rtMax}+h_{i,j}, \quad & \text{if $T_i^{rch} = \emptyset$} \\ \min\{H,\max\{0,T_i^{rtMax}-S_{i,j}\}+h_{i,j}\}, \quad & \text{otherwise} \end{cases} \end{equation} where: \begin{equation} S_{i,j}(T_i^{tMin},T_i^{tMax},T_i^{rtMax})= \begin{cases} \max\{0, \min\{e_j-T_i^{tMin}-t_{i,j}-s_i,T_i^{rtMax}\}\},\quad & \text{if $i\in S$ }\\ \max\{0, \min\{e_j-T_i^{tMin}-t_{i,j}-s_i,T_i^{tMax}-T_i^{tMin}\}\},\quad & \text{otherwise} \end{cases} \end{equation} \begin{equation} Z_{i,j}(T_i^{tMin},T_i^{tMax},T_i^{rtMax}) = \max\{0, \max\{0, T_i^{rtMax}-S_{i,j}(T_i^{tMin},T_i^{tMax},T_i^{rtMax})\}+h_{i,j}-H\} \end{equation} $S_{i,j}$ is the slack time between the earliest time window $e_j$ at $j$ and the earliest possible arrival time at $j$; it is capped by the maximum recharging time $T_i^{rtMax}$ if $i$ is a recharging station, and by the flexibility $T_i^{tMax}-T_i^{tMin}$ otherwise. $Z_{i,j}$ is the minimum recharging time required to keep battery feasibility, accounting for the slack available at the previous recharging station. 
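The slack and overflow terms $S_{i,j}$ and $Z_{i,j}$ translate directly into code; the following sketch (hypothetical helper names, scalar arguments standing for the label attributes) mirrors the two definitions above:

```python
def slack_S(i_is_station, e_j, tMin_i, tMax_i, rtMax_i, t_ij, s_i):
    """S_ij: slack usable at i before the earliest start at j, capped
    by the maximum recharging time rtMax_i if i is a station, and by
    the flexibility tMax_i - tMin_i otherwise."""
    cap = rtMax_i if i_is_station else tMax_i - tMin_i
    return max(0.0, min(e_j - tMin_i - t_ij - s_i, cap))

def overflow_Z(rtMax_i, S_ij, h_ij, H):
    """Z_ij: minimum extra time forced when the remaining recharge
    need, after consuming the slack S_ij, exceeds the full-battery
    recharge time H."""
    return max(0.0, max(0.0, rtMax_i - S_ij) + h_ij - H)
```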
According to \cite{desaulniers2016exact}, we have the following proposition: \begin{proposition} The route $\mathcal{R}$ is feasible if and only if, $ \forall j \in \mathcal{R} $, the label $L_j$ satisfies: \begin{equation} \label{tw at j} T_j^{tMin} \leqslant l_j,\quad T_j^{tMin} \leqslant T_j^{tMax},\quad T_j^{rch_s} \leqslant 1,\quad T_j^{rtMax} \leqslant \begin{cases} (1-\gamma)H, \quad & j \in F\\ H, \quad & \text{otherwise}\nonumber \end{cases} \end{equation} \end{proposition} Clearly, the feasibility checking algorithm is of linear time complexity with respect to the length of the input route. After checking feasibility, the total cost of route $\mathcal{R}$ is obtained by summing the travel times of its arcs and the excess user ride times of its fragments, recalling Theorem \ref{theorem1}. \section{Deterministic Annealing Algorithm for the E-ADARP} \label{sec::DAalgo} Based on Sections \ref{ERT optimization} and \ref{exact route evaluation}, we establish a DA algorithm that ensures minimal excess user ride time for a generated solution and integrates an exact route evaluation. Different types of local search operators are embedded in the proposed DA algorithm to solve the E-ADARP. DA was first introduced by \cite{dueck1990threshold} as a variant of simulated annealing. Recent research shows that DA can obtain near-optimal or optimal solutions for a series of vehicle routing problems \citep{braysy2008effective,braekers2014exact}. To the best of our knowledge, the only paper that implements DA to solve the DARP is that of \cite{braekers2014exact}. Applying the DA algorithm provides several advantages, the most important being its easy parameter tuning process, as the DA algorithm mainly relies on a single parameter. In addition, the DA algorithm has proven very efficient in solving the typical DARP. However, \cite{braekers2014exact} considers a single-objective variant of the DARP. 
To solve the E-ADARP, we adapt the DA algorithm to accommodate problem-specific features of the E-ADARP by integrating the proposed excess user ride time optimization approach. The framework of the proposed DA algorithm is depicted in Algorithm \ref{alg:meta-heuritic}. The algorithm input is an initial solution $x_{init}$ constructed by a parallel insertion heuristic (presented in Section \ref{4.2}) and the initial settings of the DA-related parameters. These parameters include: (i) a maximal number of iterations $N_{iter}$; (ii) the initial and maximal temperature $\Theta_{max}$; (iii) the restart parameter $n_{imp}$. It should be mentioned that the initial solution $x_{init}$ is feasible with respect to the E-ADARP constraints, except that only a subset of requests may be served. The cost of the initial solution is denoted as $c(x)$, and the number of requests served in the initial solution is recorded as $N_{req}$, so that a lexicographic optimization considers the comparison of $c(x)$ values only if the number of requests served is not worsened. A list of indexed operators $opt_1, \dots, opt_z $ is applied sequentially in each DA iteration. Our algorithm introduces seven local search operators (presented in Section \ref{4.4}), namely $z=7$. \begin{algorithm} \caption{DA Algorithm for the E-ADARP} \label{alg:meta-heuritic} \begin{algorithmic}[1] \small \Require Initial solution $x_{init}$, initial values of $N_{iter}$, $\Theta_{max}$, and $n_{imp}$. 
$T$ is set to $\Theta_{max}$ \Ensure Best solution $x_b$ found by our algorithm; \While{$iter \leqslant N_{iter}$} \State $i_{imp} \leftarrow i_{imp} + 1$; \For{$j=1 \rightarrow z-1$} \State Apply local search operator $opt_j$ on $x$ to obtain neighborhood solution $x'$; \If{$c(x') < c(x) + T $} \State $x \leftarrow x'$; \EndIf \EndFor \If{$N_{req} < n$} \State Apply operator $opt_z$ to add requests and generate neighborhood solution $x'$; \State Update the number of requests served in $x'$ as $N_{req}'$; \EndIf \If{$(c(x')<c(x_b)$ \textbf{and} $N_{req}' = N_{req})$ \textbf{or} $N_{req}' > N_{req}$} \State $x_b \leftarrow x'$ \State $i_{imp} \leftarrow 0$ \Else{} \State $T \leftarrow T- \Theta_{max}/\Theta_{red}$ \If{$T<0$} \State $r \leftarrow$ random number between 0 and 1 \State $T \leftarrow r \times \Theta_{max}$ \If{$i_{imp} > n_{imp}$} \State $x \leftarrow x_b$ \State $i_{imp} \leftarrow 0$ \EndIf \EndIf \EndIf \State $iter \leftarrow iter + 1$ \EndWhile \State \textbf{return $x_b$} \end{algorithmic} \end{algorithm} There are two steps in the algorithm: local search and threshold update. At the beginning of the algorithm, the threshold value $T$ is set to $\Theta_{max}$, and the best solution $x_b$ and the current solution $x$ are initialized to the initial solution $x_{init}$. During the local search process, local search operators are applied to alter the current solution. In the next step, the threshold value is updated and restarted whenever it becomes negative. In the local search process, we first remove the existing recharging stations on the current route and then generate a random neighborhood solution $x'$ from the current solution $x$ by applying different operators. If a neighborhood solution $x'$ satisfies $c(x') < c(x) +T$ but violates battery constraints, we call an insertion algorithm (presented in Section \ref{4.3}) to repair $x'$ by inserting recharging stations at proper places. 
Solution $x'$ is accepted as the new current solution when the number of assigned requests increases or when its total cost is less than that of the current solution plus the threshold value $T$. In the threshold update process, when no new global best solution is found, $T$ is reduced by $\Theta_{max}/\Theta_{red}$, where $\Theta_{red}$ is a predefined parameter. To ensure that $T$ is always non-negative, we reset $T$ to $r \times \Theta_{max}$ whenever $T$ becomes negative, where $r$ is a random number generated between zero and one. The search is restarted from $x_b$ when no improvement is found in $n_{imp}$ iterations and $T$ becomes negative. \subsection{Parallel insertion heuristic} \label{4.2} While in most of the literature the initial solution is generated randomly, we construct our initial solution with a parallel insertion algorithm that considers time window and spatial closeness. First, we sort all the requests $(i,n+i), i \in P$ in increasing order of $e_i$. Then, we randomly initialize $k$ routes $\{\mathcal{R}_1, \cdots, \mathcal{R}_k\}$ ($0 < k \leqslant K$, with $K$ being the total number of vehicles). The $k$ first requests in the sorted request list are randomly assigned to different routes and deleted from the request list. Then, we sort the route list $\{\mathcal{R}_1, \cdots, \mathcal{R}_k\}$ in increasing order with regard to the distance between the last node of the analyzed route and the pickup node of the first request remaining in the request list. The first request is assigned to the first route in the route list. To insert the selected request, we enumerate all the possible insertion positions and insert the corresponding pickup and drop-off nodes in a feasible way on this route. If this request cannot be inserted feasibly, we move to the next route. This process is repeated until the request is inserted or all the routes have been analyzed.
If this request cannot be inserted in any of the existing routes, we move to the second request in the list and repeat the above process. After this process, if some requests are still not assigned, a new route is activated, and the above process is repeated. The algorithm terminates when the request list is empty or when the remaining requests in the list cannot be inserted into any of the routes in a feasible way. \subsection{Recharging station insertion for a given route} \label{4.3} If a route $\mathcal{R} \in x'$ violates only the battery constraints and the neighborhood solution $x'$ satisfies $c(x') < c(x)+T$, we insert recharging stations to repair $\mathcal{R}$. For each possible insertion position, we select a random recharging station from the set of available stations to insert. If a feasible route is generated after insertion, we add it to the list of feasible routes. Otherwise, we store this route in a candidate route list. If the route is still infeasible after trying all the possible insertion positions, we move to the next iteration and insert another recharging station at all the possible positions of all the candidate routes. The algorithm returns the repaired minimum-cost feasible route if $\mathcal{R}$ can be repaired, or an empty set otherwise. For acceleration, we only consider repairing routes containing fewer than $N_{rch}$ recharging stations, and we take $N_{rch} = \lceil |S|/2 \rceil$. \subsection{Local search} \label{4.4} We design seven operators (i.e., $opt_1, \cdots, opt_7$ in Algorithm \ref{alg:meta-heuritic}) to improve the initial solution generated by the constructive heuristic. Among them, three are intra-route operators (i.e., \textit{ex-pickup}, \textit{ex-dropoff}, and \textit{ex-2-neighbor}) and three are inter-route operators (i.e., \textit{2-opt}, \textit{relocate}, and \textit{exchange}).
The last operator, named \textit{add-request}, is applied in each iteration on the neighborhood solution $x'$, which is generated after applying $opt_1, \cdots, opt_6$, if there exist unserved requests. \subsubsection{Intra-route operators} The \textit{ex-pickup} operator swaps the positions of two consecutive nodes $(i+,j+)$, where node $i+$ is a pick-up node and node $j+$ is not the corresponding drop-off node. An example is shown in Figure \ref{exchange pickup}. In each iteration, one pick-up node is selected randomly. If the successor of this pick-up node is not its corresponding drop-off node, the two positions are exchanged. The \textit{ex-dropoff} operator creates a neighborhood solution by swapping the positions of two consecutive nodes $(j+,i-)$, where node $i-$ is a drop-off node and node $j+$ is not the corresponding pick-up node. Figure \ref{exchange dropoff} shows an example of how \textit{ex-dropoff} works. In each iteration, one drop-off node is selected randomly; if the preceding node is not its corresponding pick-up node, the two positions are exchanged. There is another situation, shown in Figure \ref{exchange two neighbor}, where the successor of pick-up node $i+$ is its drop-off $i-$ and the predecessor of drop-off node $j-$ is its corresponding pick-up $j+$, but we can still exchange $i-$ and $j+$ to create a new neighborhood solution. This operation is realized by the \textit{ex-2-neighbor} operator.
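To make these moves concrete, the following Python sketch implements \textit{ex-pickup} and \textit{ex-dropoff} on a route stored as a list, using a toy encoding of our own (request $i$'s pick-up is $+i$ and its drop-off is $-i$; this is not the paper's actual data structure, and feasibility re-checks are omitted):

```python
def ex_pickup(route, k):
    """Swap the pickup node at position k with its successor, unless the
    successor is that request's own drop-off (+i = pickup i, -i = drop-off i)."""
    r = list(route)
    if r[k] > 0 and r[k + 1] != -r[k]:      # successor is not its own drop-off
        r[k], r[k + 1] = r[k + 1], r[k]
    return r

def ex_dropoff(route, k):
    """Swap the drop-off node at position k with its predecessor, unless the
    predecessor is that request's own pickup."""
    r = list(route)
    if r[k] < 0 and r[k - 1] != -r[k]:      # predecessor is not its own pickup
        r[k], r[k - 1] = r[k - 1], r[k]
    return r

# Route serving requests 1 and 2: pickup 1, pickup 2, drop-off 1, drop-off 2.
route = [1, 2, -1, -2]
print(ex_pickup(route, 0))   # swaps the two pickups  -> [2, 1, -1, -2]
print(ex_dropoff(route, 3))  # swaps the two drop-offs -> [1, 2, -2, -1]
```

Note that on a route like `[1, -1, 2, -2]`, \textit{ex-pickup} at position 0 leaves the route unchanged, since the successor of pickup 1 is its own drop-off; that configuration is exactly the one handled by \textit{ex-2-neighbor}.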
\begin{figure}[!htp] \centering \subfigure[Ex-pickup operator example.]{ \includegraphics[width=7cm]{expickup.png} \label{exchange pickup} } \quad \subfigure[Ex-dropoff operator example.]{ \includegraphics[width=6.5cm]{exdropoff.png} \label{exchange dropoff} } \quad \subfigure[Ex-2-neighbor operator example.]{ \includegraphics[width=7cm]{ex2neighbor.png} \label{exchange two neighbor} } \caption{\centering Intra-route operators example} \end{figure} \subsubsection{Inter-route operators} The \textit{2-opt} operator selects two random routes and splits each route into two parts by a randomly selected zero-split node $i$ such that $i \in D \cup S$. Then, the first part of the first route is connected with the second part of the second route, and the first part of the second route is connected with the second part of the first route. Note that \textit{2-opt} is able to realize the exchange of several requests in a single iteration. Figure \ref{path exchange} is an example of how the \textit{2-opt} operator works. \begin{figure}[!htp] \centering \includegraphics[width=15cm]{2opt2.png} \caption{\centering 2-opt operator example} \label{path exchange} \end{figure} The \textit{relocate} operator randomly removes one request from a random route and re-inserts the request at the best position of another route. The best position means the position that brings the least increase in solution cost after inserting the selected request. A simple example is shown in Figure \ref{relocate segment}, where a request $(2+,2-)$ is removed from the first route and reinserted into the second route at the best position. \begin{figure}[!htp] \centering \includegraphics[width=14cm]{relocate2.png} \caption{\centering Relocate operator example} \label{relocate segment} \end{figure} The \textit{exchange} operator (shown in Figure \ref{exchange}) swaps two random requests of two different routes. The selected requests are re-inserted into the best position of the other route.
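The split-and-reconnect logic of \textit{2-opt} can be sketched as follows (Python, on a toy list encoding of our own where request $i$'s pick-up is $+i$ and its drop-off is $-i$; time-window and battery feasibility checks are omitted):

```python
def two_opt(route_a, route_b, i, j):
    """Split route_a after index i and route_b after index j (both assumed to
    be zero-split nodes, i.e., positions where the vehicle is empty), then
    cross-connect the tails. Returns the two new routes."""
    new_a = route_a[:i + 1] + route_b[j + 1:]
    new_b = route_b[:j + 1] + route_a[i + 1:]
    return new_a, new_b

# Two routes, each serving two requests; index 1 is a zero-split point in both
# (every passenger picked up so far has been dropped off).
r1 = [1, -1, 2, -2]
r2 = [3, -3, 4, -4]
print(two_opt(r1, r2, 1, 1))  # -> ([1, -1, 4, -4], [3, -3, 2, -2])
```

Because the split points carry no onboard passengers, each request stays whole on one route, which is why a single \textit{2-opt} move can exchange several requests at once.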
\begin{figure} \centering \includegraphics[width=14cm]{exchange2.png} \caption{\centering Exchange operator example} \label{exchange} \end{figure} \subsubsection{Insertion operator} The \textit{add-request} operator is applied in each iteration when there exist uninserted requests for the current solution $x$. This operator tries to insert one uninserted request into a random route of $x$. When all the requests are served in $x$, this operator is no longer applied. Figure \ref{addNewRequest} describes how \textit{add-request} adds an uninserted request $(h+,h-)$ to a route. \begin{figure}[!htp] \centering \includegraphics[width=12cm]{addrequest.png} \caption{\centering \textit{Add-request} operator example} \label{addNewRequest} \end{figure} \subsection{Implementation details} \label{sec::preproc} This section presents the preprocessing steps and the implementation details for allowing multiple/unlimited visits to recharging stations. The preprocessing steps include: time window tightening, arc elimination, and fragment enumeration. \subsubsection{Preprocessing steps} We first introduce two classical techniques from \cite{cordeau2006branch}: time window tightening and arc elimination. Then, we introduce the fragment enumeration method. Time window tightening is executed as follows: \begin{itemize} \item For $i \in P$, $e_i$ is set to $\max\{ e_i,e_{n+i}-m_i-s_i \}$ and $l_i=\min \{ l_{n+i}-t_{i,n+i}-s_i,l_i \}$; \item For $i \in D$, $e_{n+i}= \max\{e_{n+i},e_i+t_{i,n+i}+s_i \}$, and $l_{n+i}=\min\{l_i+m_i+s_i,l_{n+i}\}$; \item For $s \in S$, the time window can be tightened by considering the travel time from the origin depot to the recharging station and from the recharging station to the destination depot.
The earliest time to start service at charging station $s$ is set to $\min\{e_j+t_{j,s} \}$, $\forall j \in O$; the latest time to start service at charging station $s$ is $\max\{T_p-t_{s,j} \}, \forall j \in F$; \item For $i \in O \cup F$, the earliest time window $e_i$ is set to $\max\{0,\min\{e_j-t_{i,j} \}\}, \forall j \in P$, and $l_i=\min\{l_i,\max\{l_j+s_i+t_{j,i} \}\}, \forall j \in D$. \end{itemize} The arc elimination process follows the method of \cite{cordeau2006branch}. We reduce the number of arcs in the graph by removing arcs that cannot lead to a feasible solution. We further accelerate computations by enumerating all feasible fragments before the search, as in \cite{alyasiry2019exact, rist2021new}. This method simplifies route evaluation and avoids recalculations, as we only need to query information from each fragment. We enumerate all the feasible fragments with a depth-first search and calculate their minimum excess user ride time. Then, the total excess user ride time of a route $\mathcal{R}$ can be calculated by summing $EU_{min}(\mathcal{F}), \mathcal{F} \subseteq \mathcal{R}$, recalling Theorem \ref{theorem1}. To generate all feasible fragments, we start from each pickup node and extend it node by node in a feasible way, assuming that the vehicle starts from each pickup node with a full battery. The maximum user ride time and vehicle capacity constraints are checked during the extension process, and the vehicle must have a positive battery level at each node of the fragment. Note that if a fragment contains fewer than two requests, we calculate the excess user ride time directly. If a fragment contains two or more requests and waiting time is generated at some nodes, we resort to an LP solver (Gurobi) to solve the LP model (Section \ref{ERT optimization}). For each feasible fragment, the obtained minimum excess user ride time value is recorded.
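With the table precomputed, evaluating the excess user ride time of a route reduces to a lookup-and-sum over its fragments, per Theorem \ref{theorem1}. The following Python sketch illustrates this (the fragment keys and $EU_{min}$ values are invented for the example, not taken from any benchmark instance):

```python
def route_excess_ride_time(route_fragments, eu_min):
    """Total minimum excess user ride time of a route: the sum of the
    precomputed per-fragment minima (eu_min maps fragment -> EU_min value)."""
    return sum(eu_min[frag] for frag in route_fragments)

# Hypothetical precomputed table produced by the fragment enumeration.
eu_min = {
    ("1+", "1-"): 0.0,              # single-request fragment: no excess
    ("2+", "3+", "2-", "3-"): 4.5,  # two interleaved requests: LP-computed minimum
}
total = route_excess_ride_time([("1+", "1-"), ("2+", "3+", "2-", "3-")], eu_min)
print(total)  # -> 4.5
```

The benefit is that the (potentially expensive) LP solve per multi-request fragment is paid once during preprocessing, while every route evaluation inside the DA loop is a constant-time dictionary query per fragment.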
In \ref{preliminary for seg enumeration}, we conduct a preliminary test and provide details of the fragment enumeration on each instance. For all the instances, the fragment enumeration can be completed in a matter of seconds. In the computational experiments of Section \ref{sec::result}, the reported CPU time includes the computational time for performing all the preprocessing steps. \subsubsection{Adapting the DA algorithm to allow multiple visits to each recharging station} \label{adapt DA to allow multiple visits} We extend the model of \cite{bongiovanni2019electric} to allow multiple visits to each recharging station in Section \ref{multiple section}. In the cases $n_{as} = 2,3$, we replicate the recharging station set $S$ to allow at-most-two and at-most-three visits per station. All the ingredients remain the same in these two cases. In the case of $n_{as} = \infty$, we remove the feasibility checking rule $T_j^{rch_s} \leqslant 1$ so that a route may visit a station multiple times. When selecting a recharging station to insert in a route, we relax the set of available recharging stations to $S$. This operation allows inserting a recharging station that has already been used in other routes. \section{Computational Experiments and Results} \label{sec::result} In this section, we conduct extensive numerical experiments and analyze the results. All algorithms are coded in Julia 1.7.2 and run on a standard PC with an Intel Xeon Gold 6230 20C at 2.1GHz using a single thread. This section is organized as follows. The benchmark instances for the computational experiments and the abbreviations used in the tables are introduced first. Then, a sensitivity analysis is conducted in Section \ref{DA parameters} to find good parameter settings for the proposed DA algorithm.
After ensuring the robustness of parameters and operators, we validate the performance of the proposed algorithm on the standard E-ADARP instances against the state-of-the-art results in Section \ref{DA performance}. Section \ref{multiple section} investigates the effect of allowing multiple visits to recharging stations. \subsection{Benchmark instances and abbreviations} This section presents the benchmark instances used to test the algorithm performance, their characteristics, and the notations for the computational experiments. \subsubsection{Benchmark Instances} Instances are named following the pattern xK-n-$\gamma$, where $K$ is the number of vehicles, $n$ is the number of requests, and $\gamma \in \{0.1,0.4,0.7\}$. Three sets of instances are considered in the experiments, which are differentiated by $x \in \{a,u,r\}$: \begin{itemize} \item “a” denotes the standard DARP benchmark instance set from \cite{cordeau2006branch} extended with electric vehicle and recharging station features by \cite{bongiovanni2019electric}. To simplify, we call them type-a instances. For type-a instances, the number of vehicles is in the range $2\leqslant K \leqslant 5$, and the number of requests is in the range $16\leqslant n \leqslant 50$. \item “u” denotes instances based on the ride-sharing data from Uber Technologies (instance name starts with “u”) that were adopted from \cite{bongiovanni2019electric}. To simplify, we call them type-u instances. For type-u instances, the number of vehicles is in the range $2\leqslant K \leqslant 5$, and the number of requests is in the range $16\leqslant n \leqslant 50$, as in type-a instances. \item “r” denotes larger DARP benchmark instances built from \cite{ropke2007models} using the same extension rules to derive E-ADARP instances from DARP instances. To simplify, we call them type-r instances.
For type-r instances, the number of vehicles is in the range $5\leqslant K \leqslant 8$ and the number of requests is in the range $60\leqslant n \leqslant 96$. \end{itemize} Type-a instances are supplemented with recharging station IDs, vehicle capacity, battery capacity, the final state-of-charge requirement, recharging rates, and discharging rates. The same operation is applied to type-r instances to generate a large-scale set of instances. The vehicle capacity is set to three passengers, and the maximum user ride time is 30 minutes. Recharging and discharging rates are all set to 0.055 kWh per minute according to the design parameters of EAVs provided in: \url{https://www.hevs.ch/media/document/1/fiche-technique-navettes-autonomes.pdf}. The effective battery capacity is set to 14.85 kWh, and the vehicle can approximately visit 20 nodes without recharging. The ride-sharing dataset of Uber is obtained from the link: \url{https://github.com/dima42/uber-gps-analysis/tree/master/gpsdata}. Type-u instances are created by extracting origin/destination locations from GPS logs in the city of San Francisco (CA, USA) and applying Dijkstra’s shortest path algorithm to calculate the travel time matrix with a constant speed setting (i.e., 35 km/h). Recharging station positions can be obtained through the Alternative Fueling Station Locator of the Alternative Fuels Data Center (AFDC). For a more detailed description of the instance development, the interested reader can refer to \cite{bongiovanni2019electric}. The preprocessed data, which extract request information from the raw data provided by Uber Technologies, are published online (\url{https://luts.epfl.ch/wpcontent/uploads/2019/03/e_ADARP_archive.zip}). \subsubsection{Abbreviations in the tables} Contrary to Simulated Annealing, the DA algorithm has deterministic rules for accepting a solution and for the sequence of neighborhoods; a randomized part remains in the selection of neighboring solutions.
Unless indicated otherwise, we perform 50 runs on each instance with different seeds to analyze the statistical distribution of the solution quality. For each instance, we present the following values: \begin{itemize} \item $BC'$ is the cost of the best solution from the B\&C algorithm reported in \cite{bongiovanni2019electric}; \item $BC$ is the cost of the best solution found by the proposed DA algorithm over 50 runs; \item $AC$ is the average solution cost found by the proposed DA algorithm over the 50 runs; \item $Q1$ is the middle number between the best-obtained solution and the median of all the solutions over 50 runs; \item $Q3$ is the middle number between the median of all the solutions over 50 runs and the worst solution yielded. \end{itemize} To analyze the distribution of the solutions found over the 50 runs, we calculate solution gaps to $BC'$. For a solution with value $v$ ($v$ could be $BC$, $Q1$, $Q3$), we compute its gap to $BC'$ by: \begin{equation} gap = \dfrac { v - BC'} {BC'} \times 100\% \nonumber \end{equation} Note that type-r instances for the E-ADARP are studied here for the first time; we therefore replace $BC'$ with $BC$ in the above formula to analyze the gaps of $Q1$/$AC$/$Q3$ to $BC$.
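The gap computation is straightforward; as a Python sketch (using, for illustration, the a4-48 values $BC = 555.93$ and $BC' = 554.54$ from Table \ref{cordeau instances results}):

```python
def gap_percent(v, ref):
    """Gap of a solution value v to the reference cost ref, in percent:
    gap = (v - ref) / ref * 100."""
    return (v - ref) / ref * 100.0

# a4-48 example: BC = 555.93 against the B&C reference BC' = 554.54.
print(round(gap_percent(555.93, 554.54), 2))  # -> 0.25
```

A negative value indicates a solution strictly better than the reference, which is how the improved solutions over the reported optima are flagged later in the tables.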
We present the following average values to analyze the consistency of the proposed DA algorithm: \begin{itemize} \item $Q1\%$ is the average gap to $BC'$ of the first quartile value over the different runs; \item $Q3\%$ is the average gap to $BC'$ of the third quartile value over the different runs; \item $BC\%$ is the average gap of $BC$ to $BC'$ over the different runs; \item $AC\%$ is the average gap of $AC$ to $BC'$ over the different runs; \item FeasRatio is the ratio of feasible solutions found among all the solutions generated by the DA algorithm; \item CPU is the average computational time of the DA algorithm (preprocessing time included) in seconds; \item CPU$'$ is the computational time of the B\&C algorithm reported in \cite{bongiovanni2019electric} in seconds; \item NC (Not Calculable) means that there are unsolved instances under the analyzed parameter setting, so that gaps cannot be calculated; \item NA (Not Available) indicates that the corresponding value (e.g., $BC$, $BC'$) is not available because the analyzed algorithm cannot provide a feasible solution; \item A dash ``--" indicates that the DA algorithm finds a new best solution on a previously unsolved instance, so that the gap cannot be calculated. \end{itemize} In Section \ref{multiple section}, we present the DA algorithm results when allowing multiple visits to each recharging station. To distinguish them, subscripts ``2", ``3", and ``$\infty$" are added to $BC$, $AC$, and CPU to denote $n_{as} = 2,3, \infty$, respectively. As \cite{bongiovanni2019electric} provides results on type-u instances with $n_{as} = 2,3$, we add their reported results in the columns named $BC_2'$ and $BC_3'$ of Table \ref{mul uber} and compare our DA algorithm results to theirs. \subsection{Parameter tuning for the DA algorithm} \label{DA parameters} The performance of the proposed algorithm depends on several parameters that must be set in advance. To ensure the algorithm performance, we first identify robust parameter settings.
We analyze the different parameter settings on the type-a instance set, as it contains instances of different sizes and is sufficient to select good parameters. For a comprehensive overview, we take into account the different scenarios, i.e., $\gamma = 0.1, 0.4, 0.7$, for each parameter setting. The DA-related parameters are: \begin{itemize} \item Number of iterations $N_{iter}$; \item Maximum threshold value $\Theta_{max}$; \item Threshold reduction value $\Theta_{red}$; \item Restart parameter $n_{imp}$. \end{itemize} To avoid re-tuning $\Theta_{max}$ for different instances, we use a relative value for $\Theta_{max}$. The maximum threshold value is expressed as the product of the average distance between two nodes in the studied graph (denoted $\Bar{c}$) and a predefined parameter $\theta_{max}$, that is, $\Theta_{max}= \Bar{c} \times \theta_{max}$, where $\theta_{max}$ is initially set to 1.5. For the other parameters $\Theta_{red}$ and $n_{imp}$, we take the same settings as in \cite{braekers2014exact}: $\Theta_{red}=300$ and $n_{imp}=50$. \subsubsection{Sensitivity analysis and parameter tuning for $\theta_{max}$} The sensitivity analysis results for $\theta_{max}$ under $\gamma = 0.1, 0.4, 0.7$ are shown in Table \ref{t_max}, where we test seven values for $\theta_{max}$. For each value of $\theta_{max}$, we perform ten runs on each instance and iterate the proposed algorithm 10000 times per run. Under each energy restriction, we report $BC\%$, $AC\%$, $Q1\%$, $Q3\%$ over the ten runs for the analyzed $\theta_{max}$ value. For the scenario $\gamma = 0.7$, we report FeasRatio and the average CPU time. We present detailed results on each instance under the different settings of $\theta_{max}$ in \ref{detailed parameter tuning}.
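As an illustration of the relative threshold $\Theta_{max}= \Bar{c} \times \theta_{max}$, the following Python sketch computes $\Bar{c}$ as the average Euclidean distance over all node pairs (the three coordinates are made up for the example, and benchmark instances may define distances differently):

```python
from itertools import combinations
import math

def max_threshold(coords, theta_max):
    """Theta_max = c_bar * theta_max, where c_bar is the average distance
    between two nodes of the instance graph (Euclidean here)."""
    dists = [math.dist(a, b) for a, b in combinations(coords, 2)]
    c_bar = sum(dists) / len(dists)
    return c_bar * theta_max

# Three illustrative nodes with pairwise distances 5, 10, and 5.
nodes = [(0.0, 0.0), (3.0, 4.0), (6.0, 8.0)]
print(max_threshold(nodes, theta_max=0.9))  # c_bar = 20/3, so Theta_max ~ 6.0
```

Expressing the threshold relative to $\Bar{c}$ keeps the acceptance criterion on the same scale as the instance's arc costs, so a single $\theta_{max}$ value transfers across instances of different geographic extent.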
\begin{table}[!htp] \caption{Sensitivity analysis for $\theta_{max}$ under different $\gamma$ cases on type-a instances} \label{t_max} \begin{center} \footnotesize \begin{tabular}{c c c c c c c c} \toprule $\theta_{max}$ & 0.6 &0.9 &1.2 &1.5 &1.8 &2.1 &2.4 \\ \hline \textbf{$\gamma = 0.1$} \\ \cline{1-1} $BC\%$ &0.11\% & 0.10\% & 0.19\% & 0.32\% & 0.29\% & 0.28\% & 0.51\% \\ $AC\%$ &0.49\% & 0.53\% & 0.74\% & 0.83\% & 0.82\% & 0.94\% & 1.07\% \\ $Q1\%$&0.23\% & 0.30\% & 0.43\% & 0.54\% & 0.56\% & 0.66\% & 0.81\%\\ $Q3\%$&0.66\% & 0.73\% & 0.90\% & 0.93\% & 1.04\% & 1.22\% & 1.28\% \\ FeasRatio &140/140 &140/140 &140/140 &140/140 &140/140 &140/140 &140/140\\ CPU (s) &83.93 & 77.43 & 78.52 & 80.09 & 81.16 & 82.12 & 83.42\\ \hline \textbf{$\gamma = 0.4$} \\ \cline{1-1} $BC\%$&0.19\% & 0.27\% & 0.27\% & 0.40\% & 0.49\% & 0.70\% & 0.63\% \\ $AC\%$&NC & 0.68\% & 0.79\% & 0.95\% & 1.18\% & 1.36\% & 1.54\%\\ $Q1\%$&0.31\% & 0.49\% & 0.57\% & 0.65\% & 0.84\% & 0.96\% & 1.11\% \\ $Q3\%$&0.72\% & 0.84\% & 0.97\% & 1.21\% & 1.5\% & 1.68\% & 1.83\%\\ FeasRatio &139/140 &140/140 &140/140 &140/140 &140/140 &140/140 &140/140\\ CPU (s) &121.34 & 116.97 & 119.03 & 121.72 & 122.97 & 125.65 & 127.80\\ \hline \textbf{$\gamma = 0.7$} \\ \cline{1-1} FeasRatio &85/140 &106/140 &106/140 &108/140 &112/140 &105/140 &106/140 \\ CPU (s) &227.05 & 201.68 & 206.06 & 212.5 & 215.86 & 221.04 & 222.31\\ \bottomrule \end{tabular} \end{center} \end{table} From Table \ref{t_max}, in the case of $\gamma = 0.1$, the algorithm performs well under all the settings of $\theta_{max}$. Among them, 0.6 seems to be the best with regard to gap AC\% and computational efficiency. Other values, such as 0.9 and 1.2, can also be selected as a slight difference is found in the solution quality compared to that of 0.6. When $\gamma$ increases to 0.4, the problem becomes more constrained, and the algorithm with $\theta_{max} = 0.6$ cannot solve all the instances within ten runs. 
In this case, the algorithm with setting $\theta_{max} = 0.9$ still outperforms the algorithm with the other $\theta_{max}$ settings in terms of solution quality. The problem is highly constrained when $\gamma = 0.7$, and for some instances no feasible solution may be found within ten runs. From the results, the setting $\theta_{max} = 1.8$ yields the highest proportion of feasible solutions compared to the other $\theta_{max}$ values. The DA algorithm with $\theta_{max} = 0.9$ finds slightly fewer feasible solutions than with $\theta_{max} = 1.8$. From the overall performance, we conclude that $\theta_{max}= 0.9$ provides good solution quality and acceptable computational time in all the cases. We set $\theta_{max}=0.9$ in all further experiments. For $\Theta_{red}$ and $n_{imp}$, we keep the initial settings, i.e., $\Theta_{red}=300$ and $n_{imp}=50$. \subsubsection{Contribution of local search operators} As the algorithm relies largely on local search operators, we verify their usefulness. In this part, we analyze the contribution of the local search operators to improving the solution quality. The effectiveness of each local search operator is assessed by comparing six different algorithm configurations, shown in Table \ref{operators}. In each of these configurations, one operator is excluded from the algorithm, and we run each configuration ten times, with each run iterating the respective algorithm 10000 times. We calculate the average solution gaps $BC\%$, $AC\%$, $Q1\%$, and $Q3\%$. Results for the different algorithm configurations, with the previously selected parameter value ($\theta_{max}=0.9$), are summarized in Table \ref{operators}.
For the scenario $\gamma = 0.7$, we report CPU times and FeasRatio. \begin{table}[!htp] \caption{Experimental results when removing a single operator: \textit{Ex-pickup}, \textit{Ex-dropoff}, \textit{Ex-2-neighbor}, \textit{Relocate}, \textit{Exchange}, and \textit{2-opt}} \label{operators} \begin{center} \footnotesize \begin{tabular}{c c c c c c c c} \toprule Removing &None & \textit{Ex-pickup} &\textit{Ex-dropoff} &\textit{Ex-2-neighbor} &\textit{Relocate} &\textit{Exchange} &\textit{2-opt} \\ \midrule \textbf{$\gamma = 0.1$} \\ \cline{1-1} $BC\%$ &0.10\% &0.14\% & 0.23\% & 0.19\% & 0.25\% & 0.38\% & 2.64\% \\ $AC\%$ &0.52\% &0.52\% & 0.55\% & 0.56\% & 1.16\% & 0.68\% & 5.60\%\\ $Q1\%$ &0.30\% &0.40\% & 0.40\% & 0.44\% & 0.79\% & 0.51\% & 3.76\%\\ $Q3\%$ &0.73\% &0.74\% & 0.90\% & 0.79\% & 1.64\% & 1.00\% & 6.19\%\\ FeasRatio &140/140 &140/140 &140/140 &140/140 &139/140 &140/140 &140/140\\ CPU (s) &77.43 &74.88 & 71.41 & 88.97 & 57.53 & 79.51 & 68.92\\ \hline \textbf{$\gamma = 0.4$} \\ \cline{1-1} $BC\%$ &0.27\% &0.27\% & 0.27\% & 0.38\% & 0.38\% & 0.27\% & 2.56\%\\ $AC\%$ &0.68\% &0.73\% & 0.74\% & 0.78\% & 1.15\% & 0.84\% & 4.92\% \\ $Q1\%$ &0.49\% &0.51\% & 0.49\% & 0.64\% & 0.86\% & 0.63\% & 3.66\% \\ $Q3\%$ &0.84\% &0.93\% & 1.22\% & 1.06\% & 1.63\% & 1.10\% & 6.03\%\\ FeasRatio &140/140 &140/140 & 140/140 & 139/140 & 136/140 & 140/140 & 140/140\\ CPU (s) &116.97 &109.24 & 106.25 & 134.29 & 81.92 & 115.52 & 105.08\\ \hline \textbf{$\gamma = 0.7$} \\ \cline{1-1} FeasRatio &106/140 &96/140 & 106/140 & 90/140 & 86/140 & 97/140 & 74/140\\ CPU (s) &201.68 &191.54 & 185.69 & 237.5 & 137.17 & 210.65 & 182.31\\ \bottomrule \end{tabular} \end{center} \end{table} Each operator clearly contributes to improving the solution quality, especially the \textit{2-opt} operator. Additionally, the \textit{relocate} and \textit{2-opt} operators contribute to providing more feasible solutions in the cases $\gamma = 0.4$ and $0.7$.
Therefore, it is necessary to include these operators in local search. As for \textit{add-request}, it is essential for inserting requests that are not served in the current solution. From the above analysis, the usefulness of each local search operator is proved. \subsubsection{Sensitivity analysis on number of iterations} Then, we conduct the sensitivity analysis for the number of iterations $N_{iter}$. To identify a good $N_{iter}$, we conduct experiments with all the energy-level restrictions on type-a instances. We test ten values of $N_{iter}$, and report $BC\%$, $AC\%$, $Q1\%$, $Q3\%$ over ten runs. For the scenario of $\gamma = 0.7$, as different settings of $N_{iter}$ result in a different number of feasible solutions, we compare FeasRatio. The results are shown in Table \ref{dispersion alg}. \begin{table}[!htp] \caption{Statistical comparison of DA performance under different iteration times for all $\gamma$ values on type-a instances} \label{dispersion alg} \begin{center} \footnotesize \begin{tabular}{c c c c c c c c c c c} \toprule $N_{iter}$ & 1000& 2000 & 3000& 4000& 5000& 6000 &7000 &8000 &9000 &10000 \\ \hline \multicolumn{11}{c}{\textbf{Low energy restriction $\gamma = 0.1$}}\\ \hline $BC\%$ &0.60\% & 0.44\% & 0.35\% & 0.31\% & 0.20\% & 0.17\% & 0.14\% & 0.13\% & 0.11\% & 0.10\% \\ $AC\%$ & NC & NC & 0.95\% & 0.82\% & 0.73\% & 0.68\% & 0.63\% & 0.59\% & 0.56\% & 0.53\%\\ $Q1\%$ & 1.42\% & 0.82\% & 0.61\% & 0.52\% & 0.45\% & 0.42\% & 0.36\% & 0.34\% & 0.31\% & 0.30\% \\ $Q3\%$ & 2.35\% & 1.69\% & 1.12\% & 1.02\% & 0.96\% & 0.88\% & 0.83\% & 0.79\% & 0.76\% & 0.73\%\\ FeasRatio &138/140 & 139/140 & 140/140 & 140/140 & 140/140 & 140/140 & 140/140 & 140/140 & 140/140 & 140/140\\ CPU (s) & 10.15 & 17.5 & 24.86 & 32.43 & 39.92 & 47.45 & 54.92 & 62.38 & 69.79 & 77.43\\ \hline \multicolumn{11}{c}{\textbf{Medium energy restriction $\gamma = 0.4$}}\\ \hline $BC\%$ &1.07\% & 0.72\% & 0.57\% & 0.48\% & 0.40\% & 0.37\% & 0.34\% & 0.31\% & 0.30\% & 0.27\% \\ 
$AC\%$ & NC & NC & NC & 1.17\% & 1.03\% & 0.90\% & 0.84\% & 0.78\% & 0.73\% & 0.68\%\\ $Q1\%$ & 1.69\% & 1.18\% & 0.96\% & 0.82\% & 0.72\% & 0.66\% & 0.63\% & 0.57\% & 0.54\% & 0.49\%\\ $Q3\%$ & 2.98\% & 2.09\% & 1.61\% & 1.39\% & 1.22\% & 1.14\% & 1.08\% & 0.99\% & 0.87\% & 0.84\%\\ FeasRatio &138/140 & 139/140 & 139/140 & 140/140 & 140/140 & 140/140 & 140/140 & 140/140 & 140/140 & 140/140\\ CPU (s) &14.45 & 25.82 & 37.21 & 48.62 & 59.94 & 71.29 & 82.74 & 94.12 & 105.7 & 116.97 \\ \hline \multicolumn{11}{c}{\textbf{High energy restriction $\gamma = 0.7$}}\\ \hline FeasRatio &79/140 & 88/140 & 94/140 & 95/140 & 96/140 & 97/140 & 100/140 & 102/140 & 103/140 & 106/140 \\ CPU (s) & 21.94 & 41.83 & 61.7 & 81.88 & 101.73 & 121.63 & 141.56 & 161.64 & 181.63 & 201.68 \\ \bottomrule \end{tabular} \end{center} \end{table} From Table \ref{dispersion alg}, we observe that the values of $BC\%$, $AC\%$, $Q1\%$, and $Q3\%$ improve with more iterations. Among the ten values of $N_{iter}$, 10000 iterations provide the best solution quality. We therefore set $N_{iter}$ to 10000 in the subsequent experiments. The robustness of the DA algorithm is also demonstrated by the small dispersion of results under all values of $N_{iter}$. Moreover, we notice that the computational time grows approximately linearly with the number of iterations, which is a computational advantage compared with the B\&C algorithm. Note that choosing $N_{iter}=8000$ or $N_{iter}=9000$ slightly degrades the performance while decreasing the computational time; choosing $N_{iter}=10000$ is more robust, especially keeping in mind the evaluation of the larger type-r instances. \subsection{DA algorithm performance on the E-ADARP instances} \label{DA performance} In this section, we present the performance of our DA algorithm with the parameters tuned in the previous section.
Table \ref{cordeau instances results}, Table \ref{uber instances results}, and Table \ref{ropke instances results} present our DA algorithm results on type-a, -u, and -r instances under $\gamma = 0.1,0.4,0.7$, respectively. In each table, we report the values of $BC$, $AC$, $Q1$, $Q3$, and their corresponding gaps to $BC'$ (presented in the column named ``$BC'$"). If we obtain better solutions than the best-reported results of \cite{bongiovanni2019electric}, we mark them in bold with an asterisk. We mark our solutions in bold if they are equal to those reported in \cite{bongiovanni2019electric}. It should be noted that we find strictly better integer solutions than the reported optimal results of \cite{bongiovanni2019electric} in the cases $\gamma = 0.4, 0.7$. The reason is that in the model of \cite{bongiovanni2019electric}, the employed ``big M" values were not correctly computed. We refer the reader to the supplementary material for a more in-depth analysis and for how the ``big M" values should be set correctly. To distinguish these incorrect results, we mark them in italics in the column ``$BC'$" and mark our obtained solutions in bold with double stars. The corresponding $BC\%$ values are therefore negative. \subsubsection{Type-a instances results under different energy restrictions} We first conduct experiments on type-a instances considering the different scenarios $\gamma = 0.1, 0.4, 0.7$. A higher $\gamma$ value means a higher minimum battery level that vehicles must keep when returning to the destination depot. Recall that each recharging station can be visited at most once; the E-ADARP model is therefore more constrained as $\gamma$ increases. In Table \ref{cordeau instances results}, we compare our algorithm results to the best reported results in \cite{bongiovanni2019electric}. We obtain equal or improved solutions for 36 out of 42 instances. Among them, 13 are new best solutions.
For some instances, we obtain better solutions than the reported optimal solutions in \cite{bongiovanni2019electric}. These instances are: a2-24-0.4, a3-30-0.4, a3-36-0.4, a2-24-0.7, a3-24-0.7, and a4-24-0.7. In all scenarios, the proposed DA algorithm has quite small gaps to the best-reported results in \cite{bongiovanni2019electric}. The average $BC\%$ is 0.05\% and 0.13\% for $\gamma = 0.1$ and $0.4$, respectively, and the values of $AC\%$, $Q1\%$, and $Q3\%$ are quite acceptable. In the case of $\gamma = 0.7$, we consistently provide new solutions for a2-20, a4-32, and a5-40, while B\&C cannot solve these instances optimally or even feasibly within two hours. In particular, the new solutions generated on instances a4-32 and a5-40 have a much lower solution cost than the formerly reported best solutions in \cite{bongiovanni2019electric}, with average gaps of -7.49\% and -5.22\%, respectively. In terms of computational efficiency, the CPU time of the proposed DA algorithm grows approximately linearly with instance size. The average CPU time over all instances is 96.71s, and the proposed DA algorithm can efficiently solve large-scale instances within a matter of minutes. We therefore conclude that our DA algorithm consistently provides high-quality solutions in a short computational time.
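For clarity, the gap values in our tables can be reproduced from the raw costs. The short sketch below is an illustration rather than part of our implementation (the function name is ours); it assumes a gap is the relative deviation from the best-reported cost $BC'$ in percent, which is consistent with the table entries for a4-32 and a5-40 under $\gamma=0.7$:

```python
# Sketch (assumption): gaps in the result tables are relative deviations
# from the best-reported cost BC' of Bongiovanni et al., in percent.
def gap_percent(cost, best_known):
    """Relative gap of `cost` to `best_known` in percent (negative = improvement)."""
    return round((cost - best_known) / best_known * 100, 2)

# Instance a4-32, gamma = 0.7: BC = 397.87 vs BC' = 430.07
print(gap_percent(397.87, 430.07))  # -7.49
# Instance a5-40, gamma = 0.7: BC = 424.26 vs BC' = 447.63
print(gap_percent(424.26, 447.63))  # -5.22
```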
\begin{table}[!htp] \renewcommand\arraystretch{0.95} \caption{Results of the proposed DA algorithm on type-a instances under $\gamma = 0.1, 0.4, 0.7$} \label{cordeau instances results} \begin{center} \footnotesize \begin{tabular}{c c c c c c c c c c c c} \hline \textbf{$\gamma = 0.1$}&\multicolumn{9}{c}{Proposed DA algorithm, 10000 iterations, 50 runs} & \multicolumn{2}{c}{Bongiovanni et al.,}\\ \hline Instance & $BC$ & $BC\%$ & $Q1$ & $Q1\%$ & $AC$ &$AC\%$ &$Q3$ & $Q3\%$ &CPU(s) & $BC'$ &CPU$'$(s)\\ \hline a2-16 &\textbf{237.38} & 0 & 237.38 & 0 & 237.38 & 0 & 237.38 & 0 & 39.27 & 237.38$^*$ & 1.2 \\ a2-20 &\textbf{279.08} & 0 & 279.08 & 0 & 279.08 & 0 & 279.08 & 0 & 73.83 &279.08$^*$ &4.2 \\ a2-24 &\textbf{346.21} & 0 & 346.21 & 0 & 346.21 & 0 & 346.21 & 0 & 160.57 & 346.21$^*$ &9.0 \\ a3-18 &\textbf{236.82} & 0 & 236.82 & 0 & 236.82 & 0 & 236.82 & 0 & 25.16 &236.82$^*$ &4.8 \\ a3-24 &\textbf{274.80} & 0 & 274.80 & 0 & 274.80 & 0 & 274.80 & 0 & 58.28 & 274.80$^*$ &13.80\\ a3-30 &\textbf{413.27} & 0 & 413.27 & 0 & 413.27 & 0 & 413.27 & 0 & 54.26 & 413.27$^*$ &102\\ a3-36 &\textbf{481.17} & 0 & 481.17 & 0 & 481.17 & 0 & 481.17 & 0 & 152.53 & 481.17$^*$ &106.80 \\ a4-16 &\textbf{222.49} & 0 & 222.49 & 0 & 222.49 & 0 & 222.49 & 0 & 19.47 &222.49$^*$ & 3.6 \\ a4-24 &\textbf{310.84} & 0 & 310.84 & 0 & 310.84 & 0 & 312.44 & 0.51\% & 29.57 &310.84$^*$ &31.2\\ a4-32 &\textbf{393.96} & 0 & 393.95 & 0 & 395.12 & 0.29\% & 397.58 & 0.92\% & 51.96 &393.96$^*$ &612 \\ a4-40 &\textbf{453.84} & 0 & 458.22 & 0.97\% & 459.42 & 1.23\% & 460.56 & 1.48\% & 92.04 &453.84$^*$ &517.2 \\ a4-48 &555.93 & 0.25\% & 560.19 & 1.02\% & 561.26 & 1.21\% & 562.87 & 1.50\% & 141.78 &554.54 & 7200 \\ a5-40 &414.80 & 0.07\% & 418.48 & 0.96\% & 420.35 & 1.41\% & 422.56 & 1.94\% & 64.92 &414.51$^*$ &1141.8 \\ a5-50 &561.41 & 0.40\% & 567.82 & 1.55\% & 570.58 & 2.04\% & 573.51 & 2.56\% & 137.31 &559.17 &7200\\ \hline Summary & & 0.05\% & & 0.32\% & & 0.44\% & & 0.64\% & 78.64 & &1210.54 \\ \hline 
\textbf{$\gamma = 0.4$} & $BC$ & $BC\%$ & $Q1$ & $Q1\%$ & $AC$ &$AC\%$ &$Q3$ & $Q3\%$ &CPU(s) & $BC'$ &CPU$'$(s)\\ \hline a2-16 &\textbf{237.38} & 0 & 237.38 & 0 & 237.38 & 0 & 237.38 & 0 & 52.85 &237.38$^*$ &1.8\\ a2-20 &\textbf{280.70} & 0 & 280.70 & 0 & 280.70 & 0 & 280.70 & 0 & 140.70 &280.70$^*$ &49.8 \\ a2-24 &\textbf{347.04$^{**}$} & -0.29\% & 347.04 & -0.29\% & 347.04 & -0.29\% & 347.04 & -0.29\% & 230.99 &\textit{348.04$^*$} &25.2 \\ a3-18 &\textbf{236.82} & 0 & 236.82 & 0 & 236.82 & 0 & 236.82 & 0 & 26.30 &236.82$^*$ &4.2 \\ a3-24 &\textbf{274.80} & 0 & 274.80 & 0 & 274.80 & 0 & 276.11 & 0.48\% & 67.85 &274.80$^*$ &16.8\\ a3-30 &\textbf{413.34$^{**}$} & -0.01\% & 413.34 & -0.01\% & 413.34 & -0.01\% & 413.34 & -0.01\% & 88.67 &\textit{413.37$^*$} &99 \\ a3-36 &\textbf{483.06$^{**}$} & -0.22\% & 483.83 & -0.06\% & 483.86 & -0.06\% & 485.43 & 0.27\% & 157.79 &\textit{484.14$^*$} &306.6 \\ a4-16 &\textbf{222.49} & 0 & 222.49 & 0 & 222.49 & 0 & 222.49 & 0 & 19.39 &222.49$^*$ &5.4 \\ a4-24 &\textbf{311.03} & 0 & 311.28 & 0.08\% & 311.65 & 0.20\% & 313.21 & 0.70\% & 31.97 &311.03$^*$ &39.6 \\ a4-32 &\textbf{394.26} & 0 & 395.05 & 0.20\% & 397.21 & 0.75\% & 400.32 & 1.54\% & 62.95 &394.26$^*$ &681.6 \\ a4-40 &\textbf{453.84} &0 & 457.20 & 0.74\% & 459.46 & 1.24\% & 461.06 & 1.59\% & 116.65 &453.84$^*$ &417.6 \\ a4-48 &558.11 & 0.63\% & 561.40 & 1.23\% & 563.47 & 1.60\% & 565.35 & 1.94\% & 177.51 &554.60 &7200 \\ a5-40 &416.25 & 0.42\% & 418.97 & 1.08\% & 420.32 & 1.40\% & 422.75 & 1.99\% & 72.64 &414.51$^*$ &1221 \\ a5-50 &567.54 & 1.26\% & 572.23 & 2.09\% & 574.56 & 2.51\% & 576.11 & 2.79\% & 162.82 &560.50 &7200 \\ \hline Summary & & 0.13\% & & 0.36\% & & 0.52\% & & 0.79\% & 100.65 & &1233.47 \\ \hline \textbf{$\gamma = 0.7$} & $BC$ & $BC\%$ & $Q1$ & $Q1\%$ & $AC$ &$AC\%$ &$Q3$ & $Q3\%$ &CPU(s) & $BC'$ &CPU$'$(s)\\ \hline a2-16 &\textbf{240.66} & 0 & 240.66 & 0 & 240.66 & 0 & 240.66 & 0 & 95.75 &240.66$^*$ &5.4 \\ a2-20 &\textbf{293.27$^*$} & -- & 293.27 & -- & 
294.11 & -- & NA & NA & 172.77 &NA &7200 \\ a2-24 &\textbf{353.18$^{**}$} & -1.40\% & 366.49 & 2.31\% & NA & NA & NA & NA & 206.58 &\textit{358.21$^*$} &961.2 \\ a3-18 &\textbf{240.58} & 0 & 240.58 & 0 & 240.58 & 0 & 240.58 & 0 & 58.30 &240.58$^*$ &48 \\ a3-24 &\textbf{275.97$^{**}$} & -0.63\% & 275.97 & -0.63\% & 277.43 & -0.10\% & 279.13 & 0.51\% & 123.71 &\textit{277.72$^*$} &152.4 \\ a3-30 &\textbf{424.93$^*$} & -- & 432.29 & -- & 436.20 & -- & NA & NA & 77.73 &NA &7200 \\ a3-36 &\textbf{494.04} & 0 & 497.11 & 0.62\% & 502.27 & 1.67\% & 505.95 & 2.41\% & 125.42 &494.04 &7200 \\ a4-16 &\textbf{223.13} & 0 & 223.13 & 0 & 223.13 & 0 & 223.13 & 0 & 31.32 &223.13$^*$ &67.2 \\ a4-24 &\textbf{316.65$^{**}$} & -0.49\% & 318.21 & 0 & 318.31 & 0.03\% & 320.87 & 0.84\% & 53.73 &\textit{318.21$^*$} &1834.8 \\ a4-32 &\textbf{397.87$^*$} & -7.49\% & 401.58 & -6.63\% & 405.85 & -5.63\% & 408.69 & -4.97\% & 71.44 &430.07 &7200 \\ a4-40 &\textbf{479.02$^*$} & -- & NA & NA & NA & NA & NA & NA & 114.74 &NA &7200 \\ a4-48 &\textbf{582.22$^*$} & -- &610.75 & -- & NA & NA & NA & NA & 164.39 &NA &7200 \\ a5-40 &\textbf{424.26$^*$} & -5.22\% & 433.12 & -3.24\% & 436.94 & -2.39\% & 441.15 & -1.45\% & 97.51 &447.63 &7200\\ a5-50 &\textbf{603.24$^*$} & -- & NA & NA & NA & NA & NA & NA & 158.39 &NA &7200 \\ \hline Summary & &-- & &-- & &-- & &-- &110.84 & &4333.40 \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[!htp] \renewcommand\arraystretch{0.95} \caption{Results of the proposed DA algorithm on type-u instances under $\gamma = 0.1, 0.4, 0.7$} \label{uber instances results} \begin{center} \footnotesize \begin{tabular}{c c c c c c c c c c c c} \hline \textbf{$\gamma = 0.1$}&\multicolumn{9}{c}{Proposed DA algorithm, 10000 iterations, 50 runs} & \multicolumn{2}{c}{Bongiovanni et al.,}\\ \hline Instance & $BC$ & $BC\%$ & $Q1$ & $Q1\%$ & $AC$ &$AC\%$ &$Q3$ & $Q3\%$ &CPU(s) & $BC'$ &CPU$'$(s)\\ \hline u2-16 &\textbf{57.61} & 0 & 57.61 & 0 & 57.61 & 0 & 57.61 & 0 & 120.06 & 
57.61$^*$ & 21 \\ u2-20 &\textbf{55.59} & 0 & 55.59 & 0 & 56.34 & 1.34\% & 56.34 & 1.34\% & 401.82 &55.59$^*$ &9.6 \\ u2-24 &\textbf{90.73$^{**}$} & -0.60\% & 90.84 & -0.47\% & 90.84 & -0.47\% & 90.98 & -0.32\% & 599.73 & \textit{91.27$^*$} &432 \\ u3-18 &\textbf{50.74} & 0 & 50.74 & 0 & 50.74 & 0 & 50.93 & 0.37\% & 108.32 &50.74$^*$ &10.8 \\ u3-24 &\textbf{67.56} & 0 & 67.87 & 0.46\% & 68.16 & 0.89\% & 68.16 & 0.89\% & 111.49 & 67.56$^*$ &130.2\\ u3-30 &\textbf{76.75} & 0 & 77.21 & 0.60\% & 77.80 & 1.37\% & 78.65 & 2.47\% & 174.11 & 76.75$^*$ &438\\ u3-36 &104.27 & 0.22\% & 104.87 & 0.79\% & 105.42 & 1.33\% & 106.36 & 2.23\% & 420.72 & 104.04$^*$ &1084.8 \\ u4-16 &\textbf{53.58} & 0 & 53.58 &0 & 53.58 & 0 & 53.58 & 0 & 51.37 &53.58$^*$ & 48 \\ u4-24 &90.13 & 0.34\% & 90.72 & 1.00\% & 90.85 & 1.14\% & 90.95 & 1.25\% & 55.26 &89.83$^*$ &13.2\\ u4-32 &\textbf{99.29} & 0 & 99.29 & 0 & 99.42 & 0.13\% & 99.67 & 0.38\% & 119.12 &99.29$^*$ &1158.6 \\ u4-40 &\textbf{133.11} & 0 & 134.46 & 1.02\% & 135.18 & 1.55\% & 136.08 & 2.23\% & 154.00 &133.11$^*$ &185.4 \\ u4-48 &\textbf{147.75$^*$} & -0.37\% & 148.87 & 0.39\% & 149.69 & 0.93\% & 150.42 & 1.43\% & 840.96 &148.30 & 7200 \\ u5-40 &\textbf{121.86} & 0 & 123.11 & 1.03\% & 123.38 & 1.25\% & 124.47 & 2.14\% & 113.81 &121.86 &1141.8 \\ u5-50 &144.22 & 0.78\% & 145.04 & 1.36\% & 145.63 & 1.77\% & 146.30 & 2.24\% & 245.52 &143.10 &7200\\ \hline Summary & & 0.03\% & & 0.44\% & & 0.80\% & & 1.19\% & 251.16 & &1795.11 \\ \hline \textbf{$\gamma = 0.4$} & $BC$ & $BC\%$ & $Q1$ & $Q1\%$ & $AC$ &$AC\%$ &$Q3$ & $Q3\%$ &CPU(s) & $BC'$ &CPU$'$(s)\\ \hline u2-16 &\textbf{57.65} & 0 & 57.65 & 0 & 57.65 & 0 & 57.65 & 0 & 156.61 &57.65$^*$ &25.8\\ u2-20 &\textbf{56.34} & 0 & 56.34 & 0 & 56.34 & 0 & 56.34 & 0 & 606.64 &56.34$^*$ &12 \\ u2-24 &\textbf{91.24$^{**}$} & -0.43\% & 91.27 & -0.39\% & 91.72 & 0.10\% & 92.06 & 0.47\% & 817.79 &\textit{91.63$^*$} &757.2 \\ u3-18 &\textbf{50.74} & 0 & 50.74 & 0 & 50.74 & 0 & 50.99 & 0.50\% & 124.95 
&50.74$^*$ &13.8 \\ u3-24 &\textbf{67.56} & 0 & 67.87 & 0.46\% & 68.16 & 0.89\% & 68.16 & 0.89\% & 141.01 &67.56$^*$ &220.8\\ u3-30 &\textbf{76.75} & 0 & 77.12 & 0.48\% & 77.93 & 1.54\% & 78.65 & 2.48\% & 285.81 &76.75$^*$ &336.6 \\ u3-36 &104.49 & 0.41\% & 105.65 & 1.53\% & 106.37 & 2.22\% & 107.19 & 3.01\% & 898.90 &104.06$^*$ &2010 \\ u4-16 &\textbf{53.58} & 0 & 53.58 & 0 & 53.58 & 0 & 53.58 & 0 & 60.52 &53.58$^*$ &44.4 \\ u4-24 &90.72 & 1.00\% & 90.72 & 1.00\% & 91.00 & 1.30\% & 91.12 & 1.44\% & 65.57 &89.83$^*$ &28.2 \\ u4-32 &\textbf{99.29} & 0 & 99.29 & 0 & 99.42 & 0.13\% & 99.90 & 0.61\% & 156.27 &99.29$^*$ &2667.6 \\ u4-40 &\textbf{133.78$^{**}$} & -0.10\% & 135.43 & 1.14\% & 135.83 & 1.44\% & 136.56 & 1.98\% & 303.06 &\textit{133.91$^*$} &2653.2 \\ u4-48 &\textbf{148.48$^*$} & -- & 149.86 & -- & 150.81 & -- & 151.77 & -- & 1390.74 &NA &7200 \\ u5-40 &\textbf{121.96$^*$} & -0.22\% & 123.08 & 0.69\% & 123.63 & 1.15\% & 124.42 & 1.79\% & 160.80 &122.23 &7200 \\ u5-50 &143.68 & 0.38\% & 145.66 & 1.76\% & 146.60 & 2.42\% & 147.15 & 2.80\% & 391.46 &143.14 &7200 \\ \hline Summary & & -- & & -- & & -- & & -- & 397.15 & &2169.26 \\ \hline \textbf{$\gamma = 0.7$} & $BC$ & $BC\%$ & $Q1$ & $Q1\%$ & $AC$ &$AC\%$ &$Q3$ & $Q3\%$ &CPU(s) & $BC'$ &CPU$'$(s)\\ \hline u2-16 &\textbf{59.19} & 0 & 59.26 & 0.11 & 60.01 & 1.38 & 60.19 & 1.69 & 419.57 &59.19$^*$ &338.4 \\ u2-20 &\textbf{56.86} & 0 & 58.39 & 2.69 & 58.39 & 2.69 & 58.88 & 3.55 & 1527.60 &56.86$^*$ &72 \\ u2-24 &\textbf{92.84$^*$} & -- & 94.33 & -- & 99.38 & -- & NA & NA & 502.50 &NA &7200 \\ u3-18 &\textbf{50.99} & 0 & 50.99 & 0 & 50.99 & 0 & 50.99 & 0 & 206.92 &50.99$^*$ &24 \\ u3-24 &\textbf{68.39} & 0 & 68.39 & 0 & 68.44 & 0.08\% & 68.73 & 0.49\% & 375.75 &68.39$^*$ &400.2 \\ u3-30 &\textbf{77.94$^{**}$} & -0.26\% & 78.72 & 0.74\% & 79.37 & 1.57\% & 79.56 & 1.81\% & 1094.81 &\textit{78.14$^*$} &3401.4 \\ u3-36 &106.00 & 0.20\% & 106.41 & 0.59\% & 107.57 & 1.68\% & 107.92 & 2.01\% & 1606.43 &105.79 &7200 \\ 
u4-16 &\textbf{53.87} & 0 & 53.87 & 0 & 53.87 & 0 & 53.87 & 0 & 96.90 &53.87$^*$ &88.8 \\ u4-24 &90.07 & 0.12\% & 90.97 & 1.12\% & 90.97 & 1.12\% & 90.97 & 1.12\% & 254.45 &89.96$^*$ &22.8 \\ u4-32 &\textbf{99.50} & 0 & 100.01 & 0.51\% & 101.09 & 1.60\% & 101.75 & 2.26\% & 325.31 &99.50$^*$ &2827.2 \\ u4-40 &\textbf{136.08$^*$} & -- & 137.65 & -- & 138.98 & -- & NA & NA & 708.04 &NA &7200 \\ u4-48 &\textbf{152.58$^*$} & -- & 157.85 & -- & 162.62 & -- & NA & NA & 1958.80 &NA &7200 \\ u5-40 &\textbf{123.52$^*$} & -- & 125.30 & -- & 126.10 & -- & 127.08 & -- & 359.59 &NA &7200\\ u5-50 &\textbf{143.51$^*$} & -0.59\% & 148.16 & 2.64\% & 149.52 & 3.58\% & 152.36 & 5.54\% & 922.19 &144.36 &7200 \\ \hline Summary & &-- & &-- & &-- & &-- &780.10 & &3598.2 \\ \hline \end{tabular} \end{center} \end{table} \subsubsection{Type-u instances results under different energy restrictions} On type-u instances, we conduct experiments under different energy-restriction levels $\gamma = 0.1, 0.4, 0.7$. The results are shown in Table \ref{uber instances results}. The proposed DA algorithm finds equal solutions for 22 out of 42 instances and new best solutions for 12 previously solved and unsolved instances. In particular, on instances u2-24-0.1, u2-24-0.4, u4-40-0.4, and u3-30-0.7, we find strictly better solutions than the reported optimal solutions in \cite{bongiovanni2019electric}. In each scenario, our best solutions have quite small gaps to the $BC'$ values reported in \cite{bongiovanni2019electric}. The other statistical values ($Q1\%$, $AC\%$, $Q3\%$) further demonstrate our algorithm's consistency, as it continues to find high-quality solutions as instance size grows. In terms of computational efficiency, our algorithm has an average CPU time of 476.14s and outperforms the B\&C algorithm on most instances, obtaining equal or better solutions within a much shorter computational time.
\subsubsection{Type-r instances results under different energy restrictions} \label{large-scale type-r} We present our algorithm results on type-r instances in Table \ref{ropke instances results}. These results are the first solutions found for these new instances and can serve as benchmark results for future studies. In scenarios $\gamma = 0.1$ and $\gamma = 0.4$, we find feasible solutions for 19 out of 20 instances, with an average CPU time of 269.71s and 373.89s, respectively. When increasing $\gamma$ from 0.1 to 0.4, the statistical dispersion also increases, but it remains quite acceptable. For instance r7-84, most of the runs with $\gamma=0.4$ do not find a feasible solution, and for instance r8-96, our DA algorithm cannot find a feasible solution in any of the 50 runs with $\gamma=0.4$. These instances remain challenging for future work. When $\gamma = 0.7$, we find no feasible solution for any of the type-r instances, despite 50 runs of 10000 iterations each. One likely reason is that many of these instances are simply infeasible under $\gamma=0.7$ given the restriction on visits to recharging stations; proving this, e.g., with exact methods providing lower bounds, is a perspective for future work.
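As a minimal illustration of how the per-instance statistics in our tables are aggregated, the sketch below computes $BC$, $AC$, $Q1$, and $Q3$ from the costs of independent runs. The quartile convention (linear interpolation, as in Python's `statistics.quantiles`) is an assumption on our part, and the run costs shown are hypothetical:

```python
import statistics

def run_statistics(costs):
    """Aggregate the costs of independent DA runs into the reported statistics:
    best cost (BC), average cost (AC), and first/third quartiles (Q1, Q3).
    The quartile method (linear interpolation) is an assumption."""
    q1, _median, q3 = statistics.quantiles(costs, n=4)
    return {"BC": min(costs), "AC": statistics.fmean(costs), "Q1": q1, "Q3": q3}

# Hypothetical costs of a few runs on one instance (not taken from the paper).
stats = run_statistics([691.8, 699.9, 702.3, 706.2, 710.4, 715.0])
print(stats["BC"])  # 691.8
```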
\begin{table}[!htp] \caption{Results of the proposed DA algorithm with 10000 iterations 50 runs on type-r instances under $\gamma = 0.1, 0.4$} \label{ropke instances results} \begin{center} \footnotesize \begin{tabular}{c c c c c c c c c} \toprule \textbf{$\gamma = 0.1$} & $BC$ &$Q1$ & $Q1\%$ &$AC$ &$AC\%$ &$Q3$ & $Q3\%$ &CPU(s)\\ \midrule r5-60 &691.83 & 699.93 & 1.17\% & 706.20 & 2.08\% & 710.43 & 2.69\% & 178.44 \\ r6-48 &506.72 & 509.67 & 0.58\% & 512.69 & 1.18\% & 515.39 & 1.71\% & 229.31 \\ r6-60 &692.00 & 696.67 & 0.67\% & 700.15 & 1.18\% & 703.95 & 1.73\% & 127.03 \\ r6-72 &777.44 & 788.12 & 1.37\% & 794.69 & 2.22\% & 801.87 & 3.14\% & 208.39 \\ r7-56 &613.10 & 620.69 & 1.24\% & 624.51 & 1.86\% & 630.72 & 2.87\% & 88.20 \\ r7-70 &760.90 & 772.45 & 1.52\% & 778.84 & 2.36\% & 786.02 & 3.30\% & 209.76 \\ r7-84 &889.38 & 900.34 & 1.23\% & 904.88 & 1.74\% & 913.88 & 2.75\% & 322.66 \\ r8-64 &641.99 & 647.87 & 0.92\% & 652.59 & 1.65\% & 657.49 & 2.41\% & 612.06\\ r8-80 &803.52 & 820.96 & 2.17\% & 828.67 & 3.13\% & 834.19 & 3.82\% & 357.75 \\ r8-96 &1053.11 & 1069.98 & 1.60\% & 1080.80 & 2.63\% & 1089.96 & 3.50\% & 363.46 \\ \hline Summary && & 1.25\% & & 2.00\% & &2.79\% &269.71\\ \hline \textbf{$\gamma = 0.4$} & $BC$ &$Q1$ & $Q1\%$ &$AC$ &$AC\%$ &$Q3$ & $Q3\%$ &CPU(s)\\ \hline r5-60 &697.97 & 710.30 & 1.77\% & 718.44 & 2.93\% & 727.27 & 4.20\% & 293.25 \\ r6-48 &506.91 & 509.48 & 0.51\% & 514.46 & 1.49\% & 517.53 & 2.10\% & 257.59 \\ r6-60 &694.78 & 702.67 & 1.14\% & 706.07 & 1.62\% & 710.80 & 2.31\% & 173.43 \\ r6-72 &799.60 & 811.85 & 1.53\% & 821.17 & 2.70\% & 832.07 & 4.06\% & 349.98 \\ r7-56 &613.66 & 620.58 & 1.13\% & 624.40 & 1.75\% & 627.51 & 2.26\% & 99.91 \\ r7-70 &766.05 & 778.70 & 1.65\% & 784.54 & 2.41\% & 791.07 & 3.27\% & 273.52 \\ r7-84 &932.12 & 964.04 & 3.43\% & NA & NA & NA & NA & 584.26 \\ r8-64 &638.36 & 649.84 & 1.80\% & 652.30 & 2.18\% & 657.02 & 2.92\% & 641.63 \\ r8-80 &811.19 & 823.70 & 1.54\% & 833.05 & 2.69\% & 841.76 & 3.77\% & 
448.14\\ r8-96 &NA & NA & NA & NA & NA & NA & NA & 617.17 \\ \hline Summary & & &NA & &NA & &NA &373.89\\ \bottomrule \end{tabular} \end{center} \end{table} \subsubsection{Conclusion of algorithm performance} On both type-a and -u instances, we observe the limits of the solving capability of B\&C. Even with a time limit of two hours, it is difficult for B\&C to solve medium-to-large-sized E-ADARP instances, especially under a high energy restriction. Our DA algorithm can provide high-quality solutions for highly constrained instances within a reasonable computational time. We also show that our DA algorithm can tackle larger-sized instances with up to 8 vehicles and 96 requests. Nineteen type-r instances for $\gamma=0.1$ and $\gamma=0.4$ are solved feasibly, and these results are the first solutions found for these new instances, which can serve as a benchmark for future studies. To conclude, the proposed DA algorithm remains highly effective and provides optimal or near-optimal solutions even on highly constrained instances. It significantly outperforms the B\&C algorithm for medium-to-large-sized instances, and its consistency seems quite acceptable for such difficult instances. \subsection{Sensitivity analysis of the maximum number of charging visits per station} \label{multiple section} As discussed in Section \ref{sec::multiple?}, the hypothesis of visiting each recharging station at most once is not realistic. We adjust our DA algorithm as mentioned in Section \ref{adapt DA to allow multiple visits} to allow multiple visits to each recharging station. The adjusted DA algorithm allows us to investigate the effect of increasing the value of $n_{as}$ on solution cost and feasibility. Recall that we analyze four different cases: $n_{as} =1,2,3,\infty$. For type-a instances, in the scenario $\gamma = 0.1$ we obtain optimal solutions for most of the instances, and the remaining instances are solved without visiting recharging stations.
Therefore, we focus on the scenarios $\gamma = 0.4, 0.7$ and analyze the effect of allowing multiple visits in these cases. For type-u and -r instances, we conduct experiments with the adjusted DA algorithm with $n_{as} = 2,3,\infty$ under $\gamma \in \{0.1,0.4,0.7\}$. The detailed results are presented in \ref{unlimited visits}. In Tables \ref{mul cordeau} and \ref{mul ropke}, we compare the DA algorithm results on each instance under the settings $n_{as} = 1,2,3,\infty$ and mark the best one(s) in bold. In Table \ref{mul uber}, we compare our algorithm results under each setting of $n_{as}$ with the results reported in \cite{bongiovanni2019electric}. Improved solutions are marked in bold with an asterisk, while equal solutions are marked in bold. In the $BC_{\infty}$ column, if the obtained solution is better than the solutions obtained under $n_{as} = 1,2,3$, we mark it in bold with double stars. On each instance, the adjusted DA algorithm performs 50 runs with 10000 iterations per run. From these results, we observe that the previous difficulties of the DA algorithm in solving the E-ADARP instances are reduced when multiple visits per station are allowed. The major findings are: (1) allowing multiple visits to each recharging station improves solution quality, as we find lower-cost solutions; in particular, we obtain feasible solutions for all type-r instances under $\gamma = 0.7$ with $n_{as} = 3, \infty$, while no feasible solution is found with $n_{as} = 1$; (2) among $n_{as} = 2,3,\infty$, the DA algorithm performs best with $n_{as} = \infty$ in terms of solution quality; (3) the computational time increases with $n_{as}$; (4) on average, allowing at most two or three visits per station only slightly increases the computational time, which makes these settings more applicable in practice. Allowing at most three visits per station strikes a good balance between solution quality and computational time.
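The per-station visit limit $n_{as}$ itself is simple to enforce when evaluating a candidate solution. The sketch below is a hypothetical illustration (the helper and station labels are ours, not taken from our implementation), with $n_{as}=\infty$ encoded as `math.inf`:

```python
import math
from collections import Counter

def respects_visit_limit(station_visits, n_as):
    """True if no recharging station is visited more than n_as times.

    `station_visits` lists the stations visited across all routes; `n_as` may
    be math.inf for the unlimited case. Hypothetical helper, for illustration.
    """
    return all(count <= n_as for count in Counter(station_visits).values())

print(respects_visit_limit(["s1", "s2", "s1"], 1))  # False: s1 is visited twice
print(respects_visit_limit(["s1", "s2", "s1"], 2))  # True
print(respects_visit_limit(["s1"] * 5, math.inf))   # True: unlimited visits
```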
A potential perspective arising from these results is to investigate more realistic constraints, e.g., on the capacity of recharging stations, rather than limiting the number of visits to recharging stations in the E-ADARP. \section{Conclusions and Perspectives} \label{sec::conclusion} This paper proposes an efficient DA algorithm to solve the E-ADARP, which minimizes a weighted-sum objective combining the total travel time and the total excess user ride time. To minimize the total excess user ride time, we propose a fragment-based representation of paths and develop a new method upon this representation to calculate the minimum excess user ride time of a given route. Another challenge in solving the E-ADARP is incorporating partial recharging at recharging stations, which complicates the feasibility checking of a given route; to resolve this issue, we propose an exact route evaluation scheme of linear time complexity that accurately handles the effect of allowing partial recharging and validates the feasibility of solutions. Together, these two methods yield an exact and efficient optimization of the excess user ride time for an E-ADARP route. To the best of our knowledge, this is the first time that the total excess user ride time is optimized exactly for the E-ADARP. In the computational experiments, we first demonstrate the effectiveness and accuracy of our DA algorithm against the best-reported results of \cite{bongiovanni2019electric}. On the 84 existing E-ADARP instances, our DA algorithm obtains equal solutions for 45 instances and better solutions for 25 instances. On the previously solved instances, the DA algorithm improves solution quality by 0.16\% on average. On the newly introduced large-scale E-ADARP instances, we provide new solutions for 19 instances. These results may serve as benchmark results for future studies. We then extend the E-ADARP model to allow unlimited visits to each recharging station.
The previous difficulties for the DA local search are lessened in this more realistic situation, and the results are less dispersed than under the at-most-one-visit restriction. Our extension of the E-ADARP model thus opens a perspective toward more realistic recharging-station constraints in the E-ADARP, e.g., capacity and scheduling constraints at recharging stations. Our results offer other new perspectives for the E-ADARP in terms of algorithmic and modeling aspects. First, some instances remain unsolved even after 50 independent runs of the DA algorithm. One reason may be that no feasible solution exists for these instances; proving this remains challenging for future studies using heuristic and exact methods. Another way to deal with highly constrained instances is to use mathematical programming to explore large neighborhoods containing many infeasibilities, as in \cite{dupin2021matheuristics}. The E-ADARP could also be extended to consider users' inconvenience as a second objective, which would help understand the conflicting interests between service providers and users and provide a high-quality approximation of the Pareto front for decision makers. The proposed excess user ride time optimization approach can also be adapted to solve the classical DARP in a multi-objective context, in which the total excess user ride time is minimized as a separate objective. The E-ADARP model may further be improved by taking into account more real-life characteristics. For example, time-dependent travel times arise from traffic congestion during peak hours. Relatedly, the static E-ADARP can be extended to a dynamic E-ADARP that takes into account updates of requests during the day (e.g., new requests, cancellations, modifications). Quick and efficient heuristic algorithms are crucial for the dynamic E-ADARP, a context in which metaheuristics also seem promising.
\ACKNOWLEDGMENT{The authors would like to thank Claudia Bongiovanni, Mor Kaspi, and Nikolas Geroliminis for kindly providing access to their implementation. This work is supported by the China Scholarship Council (CSC, grant number 201807000145) and by public funding within the scope of the French Program ``Investissements d'Avenir''.} \bibliographystyle{informs2014trsc}
\section{Introduction} The classical Lambert $W$ function is a special function gaining more and more interest from mathematicians and physicists. It can be defined on the whole complex plane by the transcendental equation \begin{equation} W(z)e^{W(z)}=z.\label{W_def} \end{equation} As this equation has infinitely many solutions (except when $z=0$), the $W$ function has infinitely many branches. See the basic source \cite{W} for the main properties of $W$, and \cite{W,Houari,Mezo,Valluri} for references on the many applications $W$ has. In the present paper we define the $p$-adic analogue of the Lambert $W$ function and study its properties. The proofs of the results are slightly less trivial than in the classical and well-known case of the $p$-adic exponential function $\exp_p(x)$ and its inverse $\log_p(x)$. Thus the study of this new function has demonstrative power. Let $\Omega_p$ be the algebraically and topologically closed (and spherically complete) $p$-adic field\footnote{We will not need this generality, however. Our analysis works even on $\mathbb{Q}_p$.} for a prime $p$. Based on \eqref{W_def}, we define $W_p(x)$ to be a function on (a part of) $\Omega_p$ such that \[W_p(x)\exp_p(W_p(x))=x.\] The theory of $p$-adic functions dictates that the inversion of $xe^x$ and $x\exp_p(x)$ results in the same Taylor series; only the radius of convergence changes when we switch from the standard topology generated by $|\cdot|$ to the $p$-adic topology based on $|\cdot|_p$. Thus, $W_p(x)$ is represented by the very same Taylor series as the classical Lambert $W$ function: \begin{equation} W_p(x)=\sum_{n=1}^\infty\frac{(-n)^{n-1}}{n!}x^n.\label{Wp} \end{equation} First we study the basic mapping properties of $W_p$, then we prove that it cannot be represented as a uniform limit of rational $p$-adic functions. \section{The basic mapping properties of $W_p$} We prove the following theorem.
\begin{theorem}The series \eqref{Wp} defining $W_p$ is convergent whenever $x\in\Omega_p$ such that $|x|_p<p^{-\frac{1}{p-1}}=:r_p$, and it is divergent elsewhere. Moreover, it is true that \[|W_p(x)|_p=|x|_p\quad(|x|_p<r_p).\] Thus $W_p$ has no zeros on its domain of definition except $x=0$. For the growth modulus we have that \[M_r(W_p)\stackrel{\mathrm{def}}{=}\max_{n\ge1}\left|\frac{(-n)^{n-1}}{n!}\right|_pr^n=r\quad(0<r<r_p),\] and $W_p$ has no critical radius\footnote{A critical radius of a function $f(x)=\sum_{n\ge0}a_nx^n$ is a positive real number $r$ such that $|a_n|_p\cdot r^n=|a_m|_p\cdot r^m$ for some $n,m$ such that $n\neq m$. Critical radii are important in the study of zeros of a $p$-adic function, but, as we see, $W_p$ has no non-trivial zeros. More on the critical radii can be read in \cite[6.1-6.2]{Robert}.}. \end{theorem} \begin{proof}That the radius of convergence for \eqref{Wp} is $r_p$ can be seen easily: for $n$ which is not a multiple of $p$, $\left|\frac{(-n)^{n-1}}{n!}\right|_p=\left|\frac{1}{n!}\right|_p$, which shows that for such $n$ \eqref{Wp} contains a partial series of the $p$-adic exponential \[\exp_p(x)=\sum_{n\ge0}\frac{1}{n!}x^n,\] and it is well known that this series' radius of convergence is $r_p$ (\cite[p. 251]{Robert}, \cite[p. 79]{Koblitz}), and on the radius $|x|_p=r_p$ the series in question is divergent. That $|W_p(x)|_p=|x|_p$ can be seen as follows. We take the Taylor series, and we show that the first term, $x$, dominates the rest in absolute value.
To this end, let us fix $n>1$, fix an $x$ such that $|x|_p<r_p$, and carry out the estimation \[\left|\frac{(-n)^{n-1}}{n!}x^n\right|_p=|x|_p\left|\frac{(-n)^{n-1}x^{n-1}}{n!}\right|_p\le|x|_p\left|\frac{x^{n-1}}{n!}\right|_p\le|x|_p\frac{|x^{n-1}|_p}{r_p^{n-1}}=|x|_p\left(\frac{|x|_p}{r_p}\right)^{n-1}<|x|_p.\] Here we used the simple fact that $|n!|_p\ge r_p^{n-1}$: \[|n!|_p=p^{-\frac{n-S_n}{p-1}}\ge p^{-\frac{n-1}{p-1}}=r_p^{n-1}.\] ($S_n$ is the sum of the digits of $n$ in base $p$.) The statement on the growth modulus is proven by considering the following steps. \[M_r(W_p)=\max_{n\ge1}\left|\frac{(-n)^{n-1}}{n!}\right|_pr^n=\max_{n\ge1}p^{-(n-1)\mathrm{ord}_p(-n)+\mathrm{ord}_p(n!)}\cdot r^n=\] \[\max_{n\ge1}p^{-(n-1)\mathrm{ord}_p(n)+\frac{n-S_n}{p-1}}\cdot r^n=\max_{n\ge1}\left(p^{\frac{1}{p-1}}r\right)^np^{-\frac{S_n}{p-1}}p^{-(n-1)\mathrm{ord}_p(n)}.\] Now it can be seen that the maximum is attained when $n=1$, so \[M_r(W_p)=p^{\frac{1}{p-1}}\cdot r\cdot p^{-\frac{1}{p-1}}=r.\] As critical radii characterize the absolute value of the zeros of an analytic function, and $W_p$ has no zeros of positive magnitude, it follows that $W_p$ has no critical radii. All the statements of the theorem are proved. \end{proof} \section{$W_p(\pi x)$ is not an analytic element} An analytic element is a function that can be represented as a uniform limit of rational functions \cite[p. 342]{Robert}. The Christol-Robba theorem provides a condition which often helps to decide whether a given $p$-adic analytic function is an analytic element or not. Let $f=\sum_{n\ge0}a_nx^n$ be a formal power series with bounded coefficients. Define $p_\nu=p^\nu(p^\nu-1)$ for $\nu\ge1$. Then $f$ defines an analytic element on \[\mathbf{M}_p=\{x\in\Omega_p\mid|x|_p<1\}\] if and only if the following condition holds. For each $\varepsilon>0$ there exist positive integers $\nu$ and $N$ such that \[|a_{n+p_\nu}-a_n|_p<\varepsilon\] for all $n\ge N$.
This condition is called the Christol-Robba condition. Since \[\left|\frac{(-n)^{n-1}}{n!}p^{\frac{n}{p-1}}\right|_p\le\left|\frac{1}{n!}p^{\frac{n}{p-1}}\right|_p=p^{-\frac{n}{p-1}+\frac{n-S_n}{p-1}}=p^{\frac{-S_n}{p-1}}\le p^{-\frac{1}{p-1}},\] the Taylor coefficients of $W_p(\pi x)$ are bounded, and the Christol-Robba theorem can be applied to $W_p(\pi x):\mathbf{M}_p\to\mathbf{M}_p$. Here $\pi$ is a $p$-adic number used to rescale the arguments such that $W_p$ can be defined on the open unit disk $\mathbf{M}_p$ of $\Omega_p$. It can be chosen to be any of the roots of the equation $x^{p-1}-p=0$. We are now ready to prove the following statement. \begin{theorem}The function $W_p(\pi x):\mathbf{M}_p\to\mathbf{M}_p$ is not an analytic element. \end{theorem} \begin{proof}Let us fix a $\nu$ as it is given in the CR-theorem, and let $a_n=\frac{(-n)^{n-1}}{n!}p^{\frac{n}{p-1}}$. If we prove that $|a_{n+p_\nu}-a_n|_p$ is bounded from below by a positive constant independent of $\nu$ for infinitely many $n$, then the Christol-Robba condition fails for every sufficiently small $\varepsilon>0$, and we will be done. To this end, we fix an integer $k\ge1$ and consider $n=p^\alpha k+1$ with $\alpha>2\nu$.
Then note that \begin{align} \mathrm{ord}_p(n)&=\mathrm{ord}_p(n+p_\nu)=0,\label{nobs1}\\ \intertext{and} S_{n+p_\nu}&=S_n+S_{p_\nu}.\label{nobs2} \end{align} Now consider \begin{equation} |a_{n+p_\nu}-a_n|_p=\left|p^{\frac{n}{p-1}}\frac{(-n)^{n-1}}{n!}\right|_p\left|(-1)^{p_\nu}\left(\frac{n+p_\nu}{n}\right)^{n-1}(n+p_\nu)^{p_\nu}p^{\frac{p_\nu}{p-1}}\frac{n!}{(n+p_\nu)!}-1\right|_p.\label{CRcond} \end{equation} Since the chosen $n$s are not divisible by $p$, we have that \[\left|p^{\frac{n}{p-1}}\frac{(-n)^{n-1}}{n!}\right|_p=\left|\frac{p^{\frac{n}{p-1}}}{n!}\right|_p=p^{-\frac{S_n}{p-1}}=p^{-\frac{S_k+1}{p-1}},\] where we used that $S_n=S_k+1$ for $n=p^\alpha k+1$ (multiplying by $p^\alpha$ shifts the digits, and adding $1$ causes no carry); note that this value does not depend on $\alpha$ or $\nu$. Moreover, the $p$-adic order of the first expression in the second absolute value of \eqref{CRcond} is equal to \[(n-1)\mathrm{ord}_p\left(\frac{n+p_\nu}{n}\right)+p_\nu\mathrm{ord}_p(n+p_\nu)+\frac{p_\nu}{p-1}+\frac{1}{p-1}\left(n-S_n-(n+p_\nu)+S_{n+p_\nu}\right).\] By the observations \eqref{nobs1}-\eqref{nobs2} this simplifies to \[\frac{1}{p-1}S_{p_\nu}=\nu.\] This last equality is trivial: \[S_{p_\nu}=S_{p^\nu(p^\nu-1)}=S_{p^\nu-1}=S_{(p-1)p^0+(p-1)p^1+\cdots+(p-1)p^{\nu-1}}=\nu(p-1).\] We therefore have that the first term in the second absolute value of \eqref{CRcond} is divisible by $p^\nu$, so the $p$-adic number in the absolute value is a unit. Collecting all the information we get that \[|a_{n+p_\nu}-a_n|_p=p^{-\frac{S_k+1}{p-1}}>0\] for every such $n$; since there are infinitely many of them and this bound does not depend on $\nu$, the difference is indeed bounded from below by a positive constant, as we claimed. \end{proof} \section*{Acknowledgement} The author is grateful to Huang XuePing for raising the question of the study of the $p$-adic Lambert function.
\section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. \section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. 
This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\texttt{acmsmall}}: The default journal template style. \item {\texttt{acmlarge}}: Used by JOCCH and TAP. \item {\texttt{acmtog}}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\texttt{acmconf}}: The default proceedings template style. \item{\texttt{sigchi}}: Used for SIGCHI conference articles. \item{\texttt{sigchi-a}}: Used for SIGCHI ``Extended Abstract'' articles. \item{\texttt{sigplan}}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\texttt{anonymous,review}}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \texttt{\acmSubmissionID} command to print the submission's unique ID on each page of the work. \item{\texttt{authorversion}}: Produces a version of the work suitable for posting by the author. \item{\texttt{screen}}: Produces colored hyperlinks. 
\end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigconf]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. As an exception, multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. 
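A minimal sketch of a single author definition following these rules (the name, institution, and address below are placeholders, not prescribed values):
\begin{verbatim}
\author{Alice Placeholder}
\affiliation{%
  \institution{Example University}
  \city{Exampleville}
  \country{USA}}
\email{alice@example.edu}
\end{verbatim}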
Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). 
\end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. 
To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. \begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} Always use midrule to separate table header rows from data rows, and use it only for this purpose. 
This enables assistive technologies to recognise table headers and support their users in navigating tables more easily. \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three is discussed in the next sections. \subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. 
\section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{A woman and a girl in white dresses sit in an open car.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions are placed {\itshape below} the figure. Every figure should also have a figure description unless it is purely decorative. These descriptions convey what's in the image to someone who cannot see it. They are also used by search engine crawlers for indexing images, and when images cannot be loaded. A figure description must be unformatted plain text less than 2000 characters long (including spaces). {\bfseries Figure descriptions should not repeat the figure caption --- their purpose is to capture important information that is not already provided in the caption or the main text of the paper.} For figures that convey important and complex new information, a short text description may not be adequate. More complex alternative descriptions can be placed in an appendix and referenced in a short figure description. For example, provide a data table capturing the information in a bar chart, or a structured list representing a graph. For additional information regarding how best to write figure descriptions and why doing this is so important, please see \url{https://www.acm.org/publications/taps/describing-figures/}. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. 
If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. Using the BibLaTeX system, the bibliography is included in your source document with the following command, placed just before the \verb|\end{document}| command: \begin{verbatim} \printbibliography \end{verbatim} The command \verb|\addbibresource{bibfile}| declares the \BibTeX\ file to use in the {\bfseries preamble} (before the command ``\verb|\begin{document}|'') of your \LaTeX\ source, where ``\verb|bibfile|'' is the file name, \emph{with} the ``\verb|.bib|'' suffix. Notice that \verb|\addbibresource| takes only one argument: to declare multiple files, use multiple instances of the command. Citations and references are numbered by default. A small number of ACM publications have citations and references formatted in the ``author year'' style; for these exceptions, please pass the option \verb|style=acmauthoryear| to the \verb|biblatex| package loaded in the {\bfseries preamble} (before the command ``\verb|\begin{document}|'') of your \LaTeX\ source. Here are some examples. A paginated journal article \cite{Abril07}, an enumerated journal article \cite{Cohen07}, a reference to an entire issue \cite{JCohen96}, a monograph (whole book) \cite{Kosiur01}, a monograph/whole book in a series (see 2a in spec. 
document) \cite{Harel79}, a divisible-book such as an anthology or compilation \cite{Editor00} followed by the same example, however we only output the series if the volume number is given \cite{Editor00a} (so Editor00a's series should NOT be present since it has no vol. no.), a chapter in a divisible book \cite{Spector90}, a chapter in a divisible book in a series \cite{Douglass98}, a multi-volume work as book \cite{Knuth97}, a couple of articles in a proceedings (of a conference, symposium, workshop for example) (paginated proceedings article) \cite{Andler79, Hagerup1993}, a proceedings article with all possible elements \cite{Smith10}, an example of an enumerated proceedings article \cite{VanGundy07}, an informally published work \cite{Harel78}, a couple of preprints \cite{Bornmann2019, AnzarootPBM14}, a doctoral dissertation \cite{Clarkson85}, a master's thesis: \cite{anisi03}, an online document / world wide web resource \cite{Thornburg01, Ablamowicz07, Poker06}, a video game (Case 1) \cite{Obama08} and (Case 2) \cite{Novak03} and \cite{Lee05} and (Case 3) a patent \cite{JoeScientist001}, work accepted for publication \cite{rous08}, 'YYYYb'-test for prolific author \cite{SaeediMEJ10} and \cite{SaeediJETC10}. Other cites might contain 'duplicate' DOI and URLs (some SIAM articles) \cite{Kirschmer:2010:AEI:1958016.1958018}. Boris / Barbara Beeton: multi-volume works as books \cite{MR781536} and \cite{MR781537}. A couple of citations with DOIs: \cite{2004:ITE:1009386.1010128,Kirschmer:2010:AEI:1958016.1958018}. Online citations: \cite{TUGInstmem, Thornburg01, CTANacmart}. Data Artifacts: \cite{UMassCitations}. Software project: ~\cite{cgal,delebecque:hal-02090402}. Software Version: ~\cite{gf-tag-sound-repo}. Software Module: ~\cite{cgal:lp-gi-20a}. Code fragment: ~\cite{simplemapper}. 
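A minimal preamble sketch for the ``author year'' exception described above (the bibliography file name is a placeholder):
\begin{verbatim}
\usepackage[style=acmauthoryear]{biblatex}
\addbibresource{refs.bib}
\end{verbatim}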
\section{Acknowledgments} Identification of funding sources and other support, and thanks to individuals and groups that assisted in the research and the preparation of the work should be included in an acknowledgment section, which is placed just before the reference section in your document. This section has a special environment: \begin{verbatim} \begin{acks} ... \end{acks} \end{verbatim} so that the information contained therein can be more easily collected during the article metadata extraction phase, and to ensure consistency in the spelling of the section heading. Authors should not prepare this section as a numbered or unnumbered {\verb|\section|}; please use the ``{\verb|acks|}'' environment. \section{Appendices} If your work needs an appendix, add it before the ``\verb|\end{document}|'' command at the conclusion of your source document. Start the appendix with the ``\verb|appendix|'' command: \begin{verbatim} \section{Introduction} ACM's consolidated article template, introduced in 2017, provides a consistent \LaTeX\ style for use across ACM publications, and incorporates accessibility and metadata-extraction functionality necessary for future Digital Library endeavors. Numerous ACM and SIG-specific \LaTeX\ templates have been examined, and their unique features incorporated into this single new template. If you are new to publishing with ACM, this document is a valuable guide to the process of preparing your work for publication. If you have published with ACM before, this document provides insight and instruction into more recent changes to the article template. The ``\verb|acmart|'' document class can be used to prepare articles for any ACM publication --- conference or journal, and for any stage of publication, from review to final ``camera-ready'' copy, to the author's own version, with {\itshape very} few changes to the source. 
\section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. 
A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. \end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigconf]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. 
The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. 
This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). \end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. 
Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. 
\begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} Always use midrule to separate table header rows from data rows, and use it only for this purpose. This enables assistive technologies to recognise table headers and support their users in navigating tables more easily. \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. \subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. 
First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{equation} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering.

\section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below.
\begin{figure}[h]
  \centering
  \includegraphics[width=\linewidth]{sample-franklin}
  \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).}
  \Description{A woman and a girl in white dresses sit in an open car.}
\end{figure}
Your figures should contain a caption which describes the figure to the reader. Figure captions are placed {\itshape below} the figure. Every figure should also have a figure description unless it is purely decorative. These descriptions convey what's in the image to someone who cannot see it. They are also used by search engine crawlers for indexing images, and when images cannot be loaded. A figure description must be unformatted plain text less than 2000 characters long (including spaces). {\bfseries Figure descriptions should not repeat the figure caption -- their purpose is to capture important information that is not already provided in the caption or the main text of the paper.} For figures that convey important and complex new information, a short text description may not be adequate. More complex alternative descriptions can be placed in an appendix and referenced in a short figure description.
For example, provide a data table capturing the information in a bar chart, or a structured list representing a graph. For additional information regarding how best to write figure descriptions and why doing this is so important, please see \url{https://www.acm.org/publications/taps/describing-figures/}.

\subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that is placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
  \includegraphics[width=\textwidth]{sampleteaser}
  \caption{figure caption}
  \Description{figure description}
\end{teaserfigure}
\end{verbatim}

\section{Citations and Bibliographies} The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\bibliography{sample-base}
\end{verbatim}
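As a brief illustration of these guidelines, an entry in the accompanying \BibTeX\ file might look like the sketch below (the entry key is invented for this example; the bibliographic details are those of Knuth's well-known article):
\begin{verbatim}
@article{Knuth:1984,
  author  = {Donald E. Knuth},
  title   = {Literate Programming},
  journal = {The Computer Journal},
  volume  = {27},
  number  = {2},
  pages   = {97--111},
  year    = {1984},
  doi     = {10.1093/comjnl/27.2.97}
}
\end{verbatim}
The work is then cited in the text with \verb|\cite{Knuth:1984}|, and \BibTeX\ formats the reference in the ACM Reference Format automatically.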
\section{Template Overview} As noted in the introduction, the ``\verb|acmart|'' document class can be used to prepare many different kinds of documentation --- a double-blind initial submission of a full-length technical paper, a two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready'' journal article, a SIGCHI Extended Abstract, and more --- all by selecting the appropriate {\itshape template style} and {\itshape template parameters}. This document will explain the major features of the document class. For further information, the {\itshape \LaTeX\ User's Guide} is available from \url{https://www.acm.org/publications/proceedings-template}. \subsection{Template Styles} The primary parameter given to the ``\verb|acmart|'' document class is the {\itshape template style} which corresponds to the kind of publication or SIG publishing the work. This parameter is enclosed in square brackets and is a part of the {\verb|documentclass|} command: \begin{verbatim} \documentclass[STYLE]{acmart} \end{verbatim} Journals use one of three template styles. All but three ACM journals use the {\verb|acmsmall|} template style: \begin{itemize} \item {\verb|acmsmall|}: The default journal template style. \item {\verb|acmlarge|}: Used by JOCCH and TAP. \item {\verb|acmtog|}: Used by TOG. \end{itemize} The majority of conference proceedings documentation will use the {\verb|acmconf|} template style. \begin{itemize} \item {\verb|acmconf|}: The default proceedings template style. \item{\verb|sigchi|}: Used for SIGCHI conference articles. \item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles. \item{\verb|sigplan|}: Used for SIGPLAN conference articles. \end{itemize} \subsection{Template Parameters} In addition to specifying the {\itshape template style} to be used in formatting your work, there are a number of {\itshape template parameters} which modify some part of the applied template style. 
A complete list of these parameters can be found in the {\itshape \LaTeX\ User's Guide.} Frequently-used parameters, or combinations of parameters, include: \begin{itemize} \item {\verb|anonymous,review|}: Suitable for a ``double-blind'' conference submission. Anonymizes the work and includes line numbers. Use with the \verb|\acmSubmissionID| command to print the submission's unique ID on each page of the work. \item{\verb|authorversion|}: Produces a version of the work suitable for posting by the author. \item{\verb|screen|}: Produces colored hyperlinks. \end{itemize} This document uses the following string as the first command in the source file: \begin{verbatim} \documentclass[sigconf]{acmart} \end{verbatim} \section{Modifications} Modifying the template --- including but not limited to: adjusting margins, typeface sizes, line spacing, paragraph and list definitions, and the use of the \verb|\vspace| command to manually adjust the vertical spacing between elements of your work --- is not allowed. {\bfseries Your document will be returned to you for revision if modifications are discovered.} \section{Typefaces} The ``\verb|acmart|'' document class requires the use of the ``Libertine'' typeface family. Your \TeX\ installation should include this set of packages. Please do not substitute other typefaces. The ``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used, as they will override the built-in typeface families. \section{Title Information} The title of your work should use capital letters appropriately - \url{https://capitalizemytitle.com/} has useful rules for capitalization. Use the {\verb|title|} command to define the title of your work. If your work has a subtitle, define it with the {\verb|subtitle|} command. Do not insert line breaks in your title. If your title is lengthy, you must define a short version to be used in the page headers, to prevent overlapping text. 
The \verb|title| command has a ``short title'' parameter: \begin{verbatim} \title[short title]{full title} \end{verbatim} \section{Authors and Affiliations} Each author must be defined separately for accurate metadata identification. Multiple authors may share one affiliation. Authors' names should not be abbreviated; use full first names wherever possible. Include authors' e-mail addresses whenever possible. Grouping authors' names or e-mail addresses, or providing an ``e-mail alias,'' as shown below, is not acceptable: \begin{verbatim} \author{Brooke Aster, David Mehldau} \email{dave,judy,steve@university.edu} \email{firstname.lastname@phillips.org} \end{verbatim} The \verb|authornote| and \verb|authornotemark| commands allow a note to apply to multiple authors --- for example, if the first two authors of an article contributed equally to the work. If your author list is lengthy, you must define a shortened version of the list of authors to be used in the page headers, to prevent overlapping text. The following command should be placed just after the last \verb|\author{}| definition: \begin{verbatim} \renewcommand{\shortauthors}{McCartney, et al.} \end{verbatim} Omitting this command will force the use of a concatenated list of all of the authors' names, which may result in overlapping text in the page headers. The article template's documentation, available at \url{https://www.acm.org/publications/proceedings-template}, has a complete explanation of these commands and tips for their effective use. Note that authors' addresses are mandatory for journal articles. \section{Rights Information} Authors of any work published by ACM will need to complete a rights form. Depending on the kind of work, and the rights management choice made by the author, this may be copyright transfer, permission, license, or an OA (open access) agreement. Regardless of the rights management choice, the author will receive a copy of the completed rights form once it has been submitted. 
This form contains \LaTeX\ commands that must be copied into the source document. When the document source is compiled, these commands and their parameters add formatted text to several areas of the final document: \begin{itemize} \item the ``ACM Reference Format'' text on the first page. \item the ``rights management'' text on the first page. \item the conference information in the page header(s). \end{itemize} Rights information is unique to the work; if you are preparing several works for an event, make sure to use the correct set of commands with each of the works. The ACM Reference Format text is required for all articles over one page in length, and is optional for one-page articles (abstracts). \section{CCS Concepts and User-Defined Keywords} Two elements of the ``acmart'' document class provide powerful taxonomic tools for you to help readers find your work in an online search. The ACM Computing Classification System --- \url{https://www.acm.org/publications/class-2012} --- is a set of classifiers and concepts that describe the computing discipline. Authors can select entries from this classification system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the commands to be included in the \LaTeX\ source. User-defined keywords are a comma-separated list of words and phrases of the authors' choosing, providing a more flexible way of describing the research being presented. CCS concepts and user-defined keywords are required for for all articles over two pages in length, and are optional for one- and two-page articles (or abstracts). \section{Sectioning Commands} Your work should use standard \LaTeX\ sectioning commands: \verb|section|, \verb|subsection|, \verb|subsubsection|, and \verb|paragraph|. They should be numbered; do not remove the numbering from the commands. 
Simulating a sectioning command by setting the first word or words of a paragraph in boldface or italicized text is {\bfseries not allowed.} \section{Tables} The ``\verb|acmart|'' document class includes the ``\verb|booktabs|'' package --- \url{https://ctan.org/pkg/booktabs} --- for preparing high-quality tables. Table captions are placed {\itshape above} the table. Because tables cannot be split across pages, the best placement for them is typically the top of the page nearest their initial cite. To ensure this proper ``floating'' placement of tables, use the environment \textbf{table} to enclose the table's contents and the table caption. The contents of the table itself must go in the \textbf{tabular} environment, to be aligned properly in rows and columns, with the desired horizontal and vertical rules. Again, detailed instructions on \textbf{tabular} material are found in the \textit{\LaTeX\ User's Guide}. Immediately following this sentence is the point at which Table~\ref{tab:freq} is included in the input file; compare the placement of the table here with the table in the printed output of this document. \begin{table} \caption{Frequency of Special Characters} \label{tab:freq} \begin{tabular}{ccl} \toprule Non-English or Math&Frequency&Comments\\ \midrule \O & 1 in 1,000& For Swedish names\\ $\pi$ & 1 in 5& Common in math\\ \$ & 4 in 5 & Used in business\\ $\Psi^2_1$ & 1 in 40,000& Unexplained usage\\ \bottomrule \end{tabular} \end{table} To set a wider table, which takes up the whole width of the page's live area, use the environment \textbf{table*} to enclose the table's contents and the table caption. As with a single-column table, this wide table will ``float'' to a location deemed more desirable. Immediately following this sentence is the point at which Table~\ref{tab:commands} is included in the input file; again, it is instructive to compare the placement of the table here with the table in the printed output of this document. 
\begin{table*} \caption{Some Typical Commands} \label{tab:commands} \begin{tabular}{ccl} \toprule Command &A Number & Comments\\ \midrule \texttt{{\char'134}author} & 100& Author \\ \texttt{{\char'134}table}& 300 & For tables\\ \texttt{{\char'134}table*}& 400& For wider tables\\ \bottomrule \end{tabular} \end{table*} Always use midrule to separate table header rows from data rows, and use it only for this purpose. This enables assistive technologies to recognise table headers and support their users in navigating tables more easily. \section{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. \subsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. Again, in either environment, you can use any of the symbols and structures available in \LaTeX\@; this section will just give a couple of examples of display equations in context. 
First, consider the equation, shown as an inline equation above: \begin{equation} \lim_{n\rightarrow \infty}x=0 \end{equation} Notice how it is formatted somewhat differently in the \textbf{displaymath} environment. Now, we'll enter an unnumbered equation: \begin{displaymath} \sum_{i=0}^{\infty} x + 1 \end{displaymath} and follow it with another numbered equation: \begin{equation} \sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f \end{equation} just to demonstrate \LaTeX's able handling of numbering. \section{Figures} The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{sample-franklin} \caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).} \Description{A woman and a girl in white dresses sit in an open car.} \end{figure} Your figures should contain a caption which describes the figure to the reader. Figure captions are placed {\itshape below} the figure. Every figure should also have a figure description unless it is purely decorative. These descriptions convey what’s in the image to someone who cannot see it. They are also used by search engine crawlers for indexing images, and when images cannot be loaded. A figure description must be unformatted plain text less than 2000 characters long (including spaces). {\bfseries Figure descriptions should not repeat the figure caption – their purpose is to capture important information that is not already provided in the caption or the main text of the paper.} For figures that convey important and complex new information, a short text description may not be adequate. More complex alternative descriptions can be placed in an appendix and referenced in a short figure description. 
For example, provide a data table capturing the information in a bar chart, or a structured list representing a graph. For additional information regarding how best to write figure descriptions and why doing this is so important, please see \url{https://www.acm.org/publications/taps/describing-figures/}. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \subsection{Library Terminology} This section briefly explains the main terminology used in our library. \begin{itemize} \item A \textbf{sensitive attribute} is an attribute that partitions the population into groups with unequal benefits received. \item A \textbf{protected group} (or simply group) is created by partitioning the population by one or many sensitive attributes. \item A \textbf{privileged value} of a sensitive attribute is a value that gives more benefit to a protected group, which includes it, than to protected groups, which do not include it. 
For example, provide a data table capturing the information in a bar chart, or a structured list representing a graph. For additional information regarding how best to write figure descriptions and why doing this is so important, please see \url{https://www.acm.org/publications/taps/describing-figures/}. \subsection{The ``Teaser Figure''} A ``teaser figure'' is an image, or set of images in one figure, that is placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command: \begin{verbatim} \begin{teaserfigure} \includegraphics[width=\textwidth]{sampleteaser} \caption{figure caption} \Description{figure description} \end{teaserfigure} \end{verbatim} \section{Citations and Bibliographies} The use of \BibTeX\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc. The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command: \begin{verbatim} \bibliographystyle{ACM-Reference-Format} \bibliography{sample-base} \end{verbatim} \subsection{Library Terminology} This section briefly explains the main terminology used in our library. \begin{itemize} \item A \textbf{sensitive attribute} is an attribute that partitions the population into groups that receive unequal benefits. \item A \textbf{protected group} (or simply group) is created by partitioning the population by one or many sensitive attributes. \item A \textbf{privileged value} of a sensitive attribute is a value that gives more benefit to the protected group that includes it than to protected groups that do not.
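As a sketch of the pytest-based unit tests mentioned above, a test module might look like the following; the metric function and the expected values are purely hypothetical and are not taken from \textsf{Virny}\xspace's actual API:

```python
# test_metrics.py -- run with `pytest test_metrics.py`
# Hypothetical unit tests for a library metric function (illustration only).

def accuracy(y_true, y_pred):
    """Toy stand-in for a metric under test: fraction of matching labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def test_accuracy_perfect_predictions():
    # All predictions match the ground truth.
    assert accuracy([1, 0, 1], [1, 0, 1]) == 1.0

def test_accuracy_half_correct():
    # Exactly two of four predictions match.
    assert accuracy([1, 0, 1, 0], [1, 0, 0, 1]) == 0.5
```

Each such test is picked up automatically by pytest's discovery and executed in the CI pipeline on every pull request.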
\item A \textbf{subgroup} is created by splitting a protected group by privileged and disprivileged values. \item A \textbf{group metric} is a metric that shows the relation between privileged and disprivileged subgroups created based on one or many sensitive attributes. \end{itemize} \subsection{Code Quality Management} Establishing high code quality practices is essential for an open-source library to be adopted by the community and extended by contributors. We do not propose any novelty in code quality management, but a solid foundation of these practices is another feature that distinguishes us from other fairness projects. Automation of development workflows in the \textsf{Virny}\xspace GitHub repository is provided by \textbf{GitHub Actions}. Each pull request to the ‘main’ branch triggers a CI pipeline that automatically runs unit tests with pytest and adds a description of new features and modifications in the documentation. The status of the test execution is displayed to the reviewer of the pull request to check the impact of added modifications on other existing library functionality. Creating test cases, which cover the main library functionality, is a crucial component of reliable development of new features. Our library has two types of tests: \textbf{unit tests} and \textbf{integration tests}. Unit tests are created based on the pytest Python library, ensuring the correct functionality of the main library functions. Integration tests are developed in Jupyter notebooks to check component interaction for all library use cases. TODO: [Table of test coverage] We also understand that user adoption requires comprehensive \textbf{library documentation} and detailed \textbf{use case examples}. Therefore, we created a website with all API descriptions and examples hosted on GitHub Pages using the \href{https://squidfunk.github.io/mkdocs-material/}{mkdocs-material} Python library.
Additionally, we adapted an open-source mkdocs parser (\href{https://github.com/MaxHalford/yamp}{YAMP}) for our library to automatically generate function descriptions based on code documentation. \section{Conclusions and Future Work} \label{sec:discussion} In this work we attempted to clarify the desiderata of fairness and stability by asking the question: ``Is estimator variance a friend or a foe?''. In answering this question we uncovered the fairness-variance-accuracy trade-off, an enrichment of the classically understood fairness-accuracy and accuracy-robustness trade-offs. We empirically demonstrated contexts in which large estimator variance, as well as large disparity in estimator variance, can have a corrective effect on both model accuracy and fairness, but we also identified scenarios in which variance fails to help. We hope that our work will usher in a new paradigm of fairness-enhancing interventions that go beyond the classic fairness-accuracy dichotomy~\cite{chouldechova_frontiers}. For instance, there is interesting future work to be done to exploit large noise variance on protected groups with improved fairness and accuracy through this fairness-variance-accuracy trichotomy. Furthermore, our insights on the effect of estimator variance could help guide model selection in cases when several models are equally ``fair'' or equally accurate. Our work also comes with important limitations: \citet{debiasing_bias} highlight statistical errors in the measurement of different performance metrics, and the statistical procedures used to compute estimator variance in this study also suffer from the same shortcomings. There are also interesting statistical questions around the variance of these variance estimates --- especially in social contexts where it is widely believed that noise variance tracks protected attributes~\cite{kappelhof2017,schelter2019fairprep} --- which we leave for future work.
Fairness is not a purely technical or statistical concept, but rather a normative and philosophical one. The major contributions of this work (methods, results and analysis) are purely technical, and are based on a popular technical definition of ``fairness'' as the parity in statistical bias. This is of course a limiting view, and one that should be regarded within a broader socio-legal-political view of fairness. One of the contributions of our work is the \textsf{Virny}\xspace software library. We envision several enhancements to our software library. Firstly, we would like to support other sampling-during-inference techniques for variance estimation beyond the simple Bootstrap, such as the Jackknife~\cite{Jackknife_review}, as well as combinations of Bootstrap and Jackknife~\cite{barber2021jackknife+,kim2020predictive_jackknife_bootstrap,Efron1992JackknifeAfterBootstrapSE}. We would also like to evaluate Conformal Prediction methods~\cite{vovk2017conformal,shafer2008conformal,angelopoulos2021conformal} for quantifying and correcting model instability. Specifically, it would be interesting to compare the insights from the variance metrics analyzed in this study with insights from the coverage and interval widths of conformal methods on different protected groups. \section{Experiments} \label{sec:experiments} We used the \textsf{Virny}\xspace library, presented in Section~\ref{sec:library}, to conduct an extensive empirical comparison of the behavior of the metrics described in Section~\ref{sec:fairness}, and evaluate the trade-offs between fairness, variance and accuracy. \subsection{Benchmarks} We used two fair-ml benchmarks for our evaluation, namely \href{https://github.com/zykls/folktables#1}{folktables} and \href{https://github.com/propublica/compas-analysis}{COMPAS}. Folktables~\citep{DBLP:conf/nips/DingHMS21} is constructed from census data from 50 US states for the years 2014--2018.
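The binary group construction and the selection-rate parity metrics used throughout our analysis can be sketched in a few lines of Python; the toy records, field names, and sign/ratio conventions below are our own illustration rather than \textsf{Virny}\xspace's implementation:

```python
# Toy records standing in for a fair-ml dataset (hypothetical fields):
# one sensitive attribute per column, plus binary model predictions.
rows = [
    {"sex": "male",   "race": "white", "y_hat": 1},
    {"sex": "female", "race": "black", "y_hat": 0},
    {"sex": "female", "race": "white", "y_hat": 1},
    {"sex": "male",   "race": "black", "y_hat": 1},
    {"sex": "female", "race": "white", "y_hat": 0},
    {"sex": "male",   "race": "black", "y_hat": 0},
]

def rate(preds):
    """Positive-prediction (selection) rate of a group."""
    return sum(preds) / len(preds)

def split(rows, attr, priv_value):
    """Partition predictions into privileged / disadvantaged subgroups
    by a single sensitive attribute."""
    priv = [r["y_hat"] for r in rows if r[attr] == priv_value]
    dis = [r["y_hat"] for r in rows if r[attr] != priv_value]
    return priv, dis

priv, dis = split(rows, "sex", "male")  # males privileged, as in folktables
prop_priv = len(priv) / len(rows)       # group proportion, as in the demographic table
spd = rate(dis) - rate(priv)            # statistical parity difference (one convention)
di = rate(dis) / rate(priv)             # disparate impact (ratio of selection rates)
```

Intersectional groups are obtained the same way by filtering on two attributes at once.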
We also look at intersectional groups constructed from sex and race: (male, white) and (female, black) are the intersectionally privileged and disadvantaged groups in folktables respectively, while (female, white) and (male, black) are the intersectionally privileged and disadvantaged groups in COMPAS respectively. The proportion of demographic groups in folktables and COMPAS is reported in Table~\ref{tab:protected-info}. \subsection{Model Training} In our experiments, we evaluate the performance of 6 different models, namely, Decision Tree (DT), Logistic Regression (LR), Random Forest (RF), XG-Boosted Trees (XGB), K-Neighbors classifier (kNN), and a Neural Network (historically called the Multi-layer Perceptron, or MLP). In each run, we randomly split the dataset into train-test-validation sets (80:10:10). We use the validation set to tune hyper-parameters once for each model type, for each dataset. We fit a single model on the complete train set and compute standard performance metrics (such as accuracy, TPR, FPR, TNR and FNR) both on the overall test set, and broken down by demographic groups listed in Table~\ref{tab:protected-info}. Next, we use the bootstrap to construct 200 different versions of the training set (each with a size of $80\%$ of the full training set) and use this to train an ensemble of 200 predictors. We compute the variance metrics described in Section~\ref{sec:variance-metrics} on the outputs of this ensemble. We repeat this procedure for 10 different seeds on COMPAS and for 6 different seeds on folktables. \subsection{Experimental Results} For our analysis we will focus on four dimensions of model performance: \begin{enumerate} \item Overall statistical bias: an \emph{accurate} model has low statistical bias on the full test set. \item Overall variance: a \emph{stable} model has low variance on the full test set.
\item Disparity in statistical bias: a \emph{fair} model shows parity in statistical bias on \textsf{dis}\xspace and \textsf{priv}\xspace groups. \item Disparity in variance: a \emph{uniformly stable} model shows parity in variance on \textsf{dis}\xspace and \textsf{priv}\xspace groups. \end{enumerate} The overall statistical bias and variance of different models are presented in Figure \ref{fig:folk_metrics} for folktables and Figure \ref{fig:compas_metrics} for COMPAS. Standard deviation (Std), inter-quantile range (IQR), jitter, and label stability are measures of estimator variance, while accuracy, TPR, FPR, TNR, and FNR are measures of statistical bias. We report all parity-based measures in Table~\ref{tab:folk-metrics} for folktables and Table~\ref{tab:COMPAS-metrics} for COMPAS. Cells are colored according to the following scheme: cells with values close to parity (0 for difference measures, 1 for ratio measures) are in green. Cells that report discrimination (i.e.,\xspace disparity in favor of the \textsf{priv}\xspace group) are in pink, while those that report reverse discrimination (i.e.,\xspace disparity in favor of the \textsf{dis}\xspace group) are in yellow. The positive class in folktables (positive employment status) is desirable, whereas the positive class in COMPAS (positive risk of recidivism) is undesirable, and so we flip the coloring scheme across datasets. For variance metrics, cells that show larger instability on the \textsf{priv}\xspace group than on the \textsf{dis}\xspace group are in yellow, and those with a larger instability on the \textsf{dis}\xspace group than on the \textsf{priv}\xspace group are in pink. A summary of desirable behavior on our metrics, and the corresponding color scheme, is presented in Table~\ref{tab:color-table}. \begin{table} \centering \caption{Summary of desirable behavior and coloring scheme on different metrics.
Pink represents discrimination, yellow represents reverse-discrimination.} \begin{tabular}{|c|c|c|c|} \hline Metric name & Value & folktables & COMPAS \\ \hline Accuracy Parity & $>0$ & \colorbox{yellow}{ } & \colorbox{yellow}{ } \\ Equalized Odds FPR & $>0$ & \colorbox{pink}{ } & \colorbox{pink}{ } \\ Statistical Parity Difference & $>0$ & \colorbox{yellow}{ } & \colorbox{pink}{ } \\ Disparate Impact & $>1$ & \colorbox{yellow}{ } & \colorbox{pink}{ } \\ IQR Parity & $>0$ & \colorbox{pink}{ } & \colorbox{pink}{ } \\ Jitter Parity & $>0$ & \colorbox{pink}{ } & \colorbox{pink}{ } \\ Std Parity & $>0$ & \colorbox{pink}{ } & \colorbox{pink}{ } \\ Label Stability Ratio & $>1$ & \colorbox{yellow}{ } & \colorbox{yellow}{ } \\ \hline \end{tabular} \label{tab:color-table} \end{table} \begin{figure*}[h!] \centering \includegraphics[width=\linewidth]{folk-all-metrics.png} \caption{Statistical bias and variance metrics on folktables.} \label{fig:folk_metrics} \end{figure*} \input{folk-table.tex} \subsection{The Fairness-Variance-Accuracy Trade-off} Overall, as expected, ensemble models (Random Forest and XGBoost) are the most stable on all metrics and all datasets. Generally, the kNN and Decision Tree classifiers score highly on variance metrics (i.e.,\xspace are the least stable). The neural network (MLP) is stable on COMPAS, but is the least stable model on folktables! This is interesting, and counter-intuitive to the general understanding of how estimator variance relates to dataset size: folktables has 20k samples, while COMPAS has only 5k samples. From a statistical bias perspective, all models perform poorly on COMPAS (no model has accuracy higher than $68\%$). \subsubsection{Folktables} The MLP classifier and Random Forest are the best performing models on folktables, with an accuracy of $82.16\%$ and $82.3\%$ respectively.
Random Forest is also one of the most stable models (low Std, low IQR, low Jitter, and high Label Stability), while MLP is one of the least stable models (both MLP and kNN have high Std, IQR and Jitter, and low Label Stability). From a fairness perspective, the MLP classifier and Logistic Regression perform the best. The Logistic Regression is not the best model on overall metrics, but has good parity on both statistical bias-based and variance-based metrics on folktables, as reported in Table \ref{tab:folk-metrics}. This is the first indication of a fairness-variance-accuracy trade-off: parity in variance and parity in statistical bias (``fairness'') come at the cost of overall model accuracy. Strikingly, the MLP is also a reasonably fair model --- it shows low Statistical Parity Difference and Disparate Impact (close to 0 and 1, respectively), despite having low overall stability and large disparity in variance-based metrics across groups. We argue that this is a feature and not a bug, and is, once again, the fairness-variance-accuracy trade-off at play: the classifier shows a larger variation in outputs on \textsf{dis}\xspace than on \textsf{priv}\xspace, and this has a corrective effect on both the overall fairness and accuracy. Here, we are trading off stability/variance to gain fairness and accuracy. The behavior of the Random Forest classifier also illustrates this trade-off: as mentioned previously, the Random Forest has the highest accuracy of all the models. From Table~\ref{tab:folk-metrics} we see that this classifier also shows good parity on almost all variance metrics. This, however, comes with model unfairness (large disparity in statistical bias)! On metrics that relate to model error (such as accuracy parity and equalized odds) the model is ``unfair'', in the sense that it discriminates against the \textsf{dis}\xspace group.
However, on metrics that track selection rates (such as statistical parity and disparate impact) the Random Forest classifier shows reverse discrimination, in the sense that it over-selects the \textsf{dis}\xspace group. Here, the model trades off fairness on the one hand for high accuracy and parity in variance on the other hand. There is no observable consistent trend in terms of fairness or stability for the kNN and XGBoost classifiers, and perhaps their lower overall accuracy compared to other models can also be explained by a sub-optimal trade-off on the fairness-variance-accuracy spectrum. \begin{figure*}[h!] \centering \includegraphics[width=\linewidth]{compas-all-metrics.png} \caption{Statistical bias and variance metrics on COMPAS} \label{fig:compas_metrics} \end{figure*} \input{compas-table.tex} \subsubsection{COMPAS} As described previously, none of the models in our experiments are particularly accurate on COMPAS. As expected, we also do not find these models to be particularly fair along any of the sensitive attributes, and for any fairness metrics. XGBoost is the most accurate model, and it does show parity for a handful of the bias-based metrics (Statistical Parity Difference and Disparate Impact, both along the lines of sex) and variance-based metrics (for IQR parity, Jitter parity, and Label stability ratio). For a classifier that has low overall accuracy, stability (low variance) and uniform stability (parity in variance) negates any potentially corrective effect estimator variance could have had, and results in model unfairness. Interestingly, the Decision Tree is the most ``fair'' model on COMPAS --- it is close to having parity in accuracy across all groups and has the best parity in bias-based metrics for intersectional groups. Further, unlike the XGBoost classifier on COMPAS, the Decision Tree is far from having parity in variance, and it, in fact has higher variance on the \textsf{priv}\xspace group. 
Here, we see the corrective effect of estimator variance on the ``fairness'' of an inaccurate model: the Decision Tree has low accuracy --- even as compared to the other poorly performing models --- but its disparity in variance seems to improve the parity in statistical bias-based measures. \subsection{Comparing variance metrics} \label{sec:metrics-comparison} \begin{figure*}[h!] \centering \includegraphics[width=\linewidth]{folk-scatter-models-20k.png} \caption{folktables (20k samples): Relationship between different variance metrics. Y=X line is plotted in blue.} \label{fig:metrics-scatter-folk-20k} \end{figure*} \begin{figure*}[h!] \centering \includegraphics[width=\linewidth]{folk-scatter-models-5k.png} \vspace{-0.75cm} \caption{folktables (5k samples): Relationship between different variance metrics. Y=X line is plotted in blue.} \label{fig:metrics-scatter-folk-5k} \end{figure*} \begin{figure*}[h!] \centering \includegraphics[width=\linewidth]{compas-scatter-models.png} \vspace{-0.75cm} \caption{COMPAS (5k samples): Relationship between different variance metrics. Y=X line is plotted in blue.} \label{fig:metric-scatter-compas} \end{figure*} In our last set of experiments, we examined two families of complementary variance metrics: (1) Standard deviation (Std) and Inter-Quantile Range (IQR) track the spread of the predicted probabilities, while (2) Jitter and Label Stability track how often the predicted label flips. We compare how ``good'' (i.e.,\xspace informative) these different measures of estimator variance are in Figures~\ref{fig:metrics-scatter-folk-20k} and~\ref{fig:metrics-scatter-folk-5k} for folktables, and in Figure~\ref{fig:metric-scatter-compas} for COMPAS. The behavior of variance metrics seems to be both model-specific and dataset-specific.
The MLP classifier (shown in red dots in Figure \ref{fig:metrics-scatter-folk-20k} and in green dots in Figure \ref{fig:metrics-scatter-folk-5k}) falls approximately on the $Y=X$ line (plotted in blue in both figures). This means that IQR, Jitter and Std are highly correlated, and so can be used interchangeably for this model. Extending our earlier discussion of the instability of the MLP classifier on folktables: we see high instability both in the model that was trained on 5k samples (Figure~\ref{fig:metrics-scatter-folk-5k}) and in the model that was trained on 20k samples (Figure~\ref{fig:metrics-scatter-folk-20k}). As expected, estimator variance decreases as the sample size increases: the MLP classifiers trained with 20k and 5k samples have a maximum (worst-case) IQR of approximately 0.15 and 0.20, respectively. We see interesting trends in estimator variance on the COMPAS dataset: in Figure~\ref{fig:metric-scatter-compas}, the variance metrics reported for different model types form clusters that are almost constant along one dimension. For the same value of IQR, models have a range of values of Jitter (left-most subplot), and for the same value of Std, models have a range of values for Jitter (second plot from the left). We do not observe this behavior when considering Label Stability: the metrics of different models do form clusters, but they do not stay constant along one dimension. This suggests that, while we may be tempted to treat them interchangeably, we do need to look at them as a set, since reporting them on different benchmark datasets (such as COMPAS here) could lead to some metrics appearing to be redundant, despite being informative in a different context (such as on folktables in Figures \ref{fig:metrics-scatter-folk-20k} and \ref{fig:metrics-scatter-folk-5k}). \section{Metrics for Model Performance} \label{sec:fairness} Literature on fair-ML abounds with measures of model ``fairness''.
Here, we first summarize some influential fairness measures that are stated as the ratio or the difference of measures of statistical bias computed on different demographic groups in Section~\ref{sec:bias-metrics}, and go on to define a new family of variance-based measures in Section~\ref{sec:variance-metrics}. \paragraph{Notation.} Let $Y$ be the target or true outcome, $X$ be the covariates or features, and $A$ be the set of protected/sensitive attributes. To start, we limit our treatment to binary group membership, letting $A=1$ denote the privileged group and $A=0$ denote the disadvantaged group. We are interested in constructing an estimator $\hat{Y} = f(X,A)$ that predicts $Y$, with the help of a suitable loss function. In fair-ML, we apply additional constraints on the interaction between $\hat{Y}$, $Y$ and $A$ in order to ensure that the estimator $\hat{Y}$ does not discriminate on the basis of sensitive attributes $A$. Different notions of fairness are formalized as different constraints, and a violation of the fairness constraint is usually defined as the corresponding measure of model unfairness, as we will discuss next. \subsection{Measures of Model (Un)Fairness} \label{sec:bias-metrics} \subsubsection{Equalized Odds} The fairness criterion of Equalized Odds from \citet{hardt_EOP2016} is defined as: $$ P(\hat{Y}=1|A=0,Y=y)=P(\hat{Y}=1|A=1,Y=y), y \in \{0,1\} $$ For $Y = 1$ (the positive outcome), this fairness constraint requires parity in true positive rates (TPR) across the groups $A = 0$ and $A = 1$, and for $Y = 0$ (the negative outcome), the constraint requires parity in false positive rates (FPR). A violation of this constraint (i.e.,\xspace the disparity in TPR and FPR across groups) is reported as a measure of model unfairness.
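As a concrete illustration, the Equalized Odds violations defined above can be computed directly from labeled predictions. The sketch below uses NumPy with hypothetical helper names (it is not code from our library):

```python
import numpy as np

def equalized_odds_violations(y_true, y_pred, group):
    """Delta-TPR and Delta-FPR between the dis (group==0) and priv (group==1) groups."""
    def tpr(y, yh):
        return np.mean(yh[y == 1] == 1)  # P(Yhat=1 | Y=1)

    def fpr(y, yh):
        return np.mean(yh[y == 0] == 1)  # P(Yhat=1 | Y=0)

    dis, priv = group == 0, group == 1
    delta_tpr = tpr(y_true[dis], y_pred[dis]) - tpr(y_true[priv], y_pred[priv])
    delta_fpr = fpr(y_true[dis], y_pred[dis]) - fpr(y_true[priv], y_pred[priv])
    return delta_tpr, delta_fpr
```

A negative $\Delta$TPR, for example, indicates that the disadvantaged group receives the desirable positive prediction less often among truly positive samples.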
In our paper, we refer to TPR and FPR as the \emph{base measures}, and we say that the fairness criterion of Equalized Odds is \emph{composed} as the difference between these base measures computed for the disadvantaged group ($A=0$, which we call \textsf{dis}\xspace) and for the privileged group ($A=1$, which we call \textsf{priv}\xspace), respectively. $$\text{Equalized Odds Violation (True Positive)} = \Delta\text{TPR} = P(\hat{Y}=1|A=0,Y=1) - P(\hat{Y}=1|A=1,Y=1) $$ $$\text{Equalized Odds Violation (False Positive)} = \Delta\text{FPR} = P(\hat{Y}=1|A=0,Y=0) - P(\hat{Y}=1|A=1,Y=0) $$ We will now rewrite other influential fairness measures~\cite{chouldechova_impossibility, Kleinberg_impossibility} as the difference or ratio between the base measures computed on the \textsf{dis}\xspace and \textsf{priv}\xspace groups. \footnote{$\Delta f = f_{dis} - f_{priv} $, $\mathcal{Q} f = f_{dis} / f_{priv} $} \subsubsection{Disparate Impact} Inspired by the 4/5ths rule in legal doctrines, Disparate Impact has been formulated as a fairness measure: $$ \text{Disparate Impact} = \mathcal{Q}(\text{Positive Rate}) = \frac{P(\hat{Y}=1|A=0)}{P(\hat{Y}=1|A=1)} $$ $P(\hat{Y}=1)$ is simply the Positive Rate of the estimator, and so the measure of Disparate Impact is composed as the ratio of the Positive Rate on the \textsf{dis}\xspace and \textsf{priv}\xspace groups, respectively. \subsubsection{Statistical Parity Difference} Similarly, Statistical Parity is the fairness criterion that asks that comparable proportions of samples from each protected group receive the positive outcome: $$ P(\hat{Y}=1|A=0) = P(\hat{Y}=1|A=1) $$ Statistical parity difference (SPD) is a popular fairness metric composed simply as the difference between the estimator's Positive Rate on \textsf{dis}\xspace and \textsf{priv}\xspace groups, respectively.
$$ \text{Statistical Parity Difference} = \Delta(\text{Positive Rate}) = P(\hat{Y}=1|A=0) - P(\hat{Y}=1|A=1) $$ \subsubsection{Accuracy Parity} Accuracy parity is also commonly reported, and is computed as the difference in accuracy on \textsf{dis}\xspace and \textsf{priv}\xspace samples. $$ \text{Accuracy Parity} = \Delta(\text{Accuracy}) = \frac{P(\hat{Y}=1|A=0,Y=1)+P(\hat{Y}=0|A=0,Y=0)}{P(\hat{Y}=1|A=0,Y=1)+P(\hat{Y}=1|A=0,Y=0)+P(\hat{Y}=0|A=0,Y=1)+P(\hat{Y}=0|A=0,Y=0)} $$ $$ - \frac{P(\hat{Y}=1|A=1,Y=1)+P(\hat{Y}=0|A=1,Y=0)}{P(\hat{Y}=1|A=1,Y=1)+P(\hat{Y}=1|A=1,Y=0)+P(\hat{Y}=0|A=1,Y=1)+P(\hat{Y}=0|A=1,Y=0)} $$ \subsection{Measures of Model (In)Stability} \label{sec:variance-metrics} We now introduce several variance-based metrics. We will introduce several base measures of model variance first, as this requires some reconciliation with uncertainty quantification and robustness literature, and then define the corresponding variance-based measures of model instability on different groups. To measure the variation in model output, we will use the popular bootstrap technique~\cite{efron1994bootstrap}. This involves constructing multiple training sets by sampling with replacement from the given training set, and then fitting estimators (with the same architecture and hyper-parameters) on these bootstrapped training sets. This allows us to construct a predictive distribution from the outputs of each trained model, instead of a single point estimate of the predicted probability. We can thereby compute different measures of variation between the predictions of the ensemble of estimators for the same data point, and use it to approximate the variance of a single model trained on the full dataset. We provide more details of our implementation of these techniques in Sections~\ref{sec:library} and~\ref{sec:experiments}. 
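The bootstrap procedure described above can be sketched as follows. This is an illustrative scikit-learn example under assumed data and hyper-parameters, not our library's implementation:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
B = 20  # number of bootstrap estimators in the ensemble
preds = np.empty((B, len(X_te)))
for b in range(B):
    # sample a training set with replacement and fit an estimator on it
    idx = rng.integers(0, len(X_tr), size=len(X_tr))
    est = LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx])
    preds[b] = est.predict_proba(X_te)[:, 1]

# predictive distribution per test point: its spread approximates estimator variance
std_per_point = preds.std(axis=0)
iqr_per_point = np.subtract(*np.percentile(preds, [75, 25], axis=0))
```

Each column of `preds` is the predictive distribution for one test sample; group-wise averages of `std_per_point` and `iqr_per_point` yield the Std and IQR measures discussed below.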
\subsubsection{Label Stability} Label Stability \cite{Darling2018TowardUQ} is defined as the normalized absolute difference between the number of times a sample is classified as positive or negative: $$ \text{Label Stability} = \frac{|\sum_{i=1}^{b} \mathbbm{1}[p_{\theta_{i}}(x)=1] - \sum_{i=1}^{b} \mathbbm{1}[p_{\theta_{i}}(x)=0]|}{b} $$ where $x$ is an unseen test sample, and $p_{\theta_{i}}(x)$ is the prediction of the $i^{\text{th}}$ model in the ensemble that has $b$ estimators. Recall that we are using the bootstrap to construct an ensemble of predictors to approximate the variance of a single estimator fit on the entire dataset. Label stability is a measure of disagreement between estimators in the ensemble: if the absolute difference is large, the label is more stable. If the difference is exactly zero, then the estimator is said to be ``highly unstable'' because a test sample is equally likely to be classified as positive or negative by the ensemble. We define the \textbf{Label Stability Ratio} as a new parity measure. It is computed as the ratio of the average Label Stability on samples from the disadvantaged (\textsf{dis}\xspace) group and the privileged (\textsf{priv}\xspace) group, respectively. \subsubsection{Jitter} Jitter \cite{liu2022model} measures the disagreement between the predictions of different estimators in the ensemble. It reuses the notion of \emph{Churn} \cite{milani2016launch} to define a ``pairwise jitter'': $$ J_{i, j}\left(p_\theta\right)=\operatorname{Churn}_{i, j}\left(p_\theta\right)=\frac{\left|p_{\theta i}(x) \neq p_{\theta j}(x)\right|_{x \in X}}{|X|} $$ where $x$ is an unseen test sample, and $p_{\theta i}(x)$, $p_{\theta j}(x)$ are the predictions of the $i^{\text{th}}$ and $j^{\text{th}}$ estimator in the ensemble for $x$, respectively. To compute the variability over all models in the ensemble, we need to average \textit{pairwise jitters} over all pairs of models.
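A minimal sketch of these two measures, assuming the ensemble's binary predictions are collected in a `(b, n)` array (illustrative code, not our library's implementation); the pairwise jitters are averaged over all estimator pairs as just described:

```python
import numpy as np
from itertools import combinations

def label_stability(labels):
    """Per-sample label stability for a (b, n) array of 0/1 predictions."""
    b = labels.shape[0]
    n_pos = labels.sum(axis=0)           # votes for the positive label
    return np.abs(n_pos - (b - n_pos)) / b

def jitter(labels):
    """Average pairwise disagreement (Churn) over all pairs of estimators."""
    b = labels.shape[0]
    pairwise = [np.mean(labels[i] != labels[j])
                for i, j in combinations(range(b), 2)]
    return float(np.mean(pairwise))
```

A sample on which the ensemble splits evenly gets label stability 0, matching the ``highly unstable'' case above, while unanimous agreement yields stability 1.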
This more general definition is called \emph{Jitter}: $$J\left(p_\theta\right)=\frac{2}{N \cdot (N-1)} \sum_{i<j} J_{i, j}\left(p_\theta\right)$$ where $N$ is the number of estimators in the ensemble. We define \textbf{Jitter Parity} as the difference of the average Jitter on samples from the \textsf{dis}\xspace and \textsf{priv}\xspace groups, respectively. \subsubsection{Standard Deviation and Inter-Quantile Range} The bootstrap can also be used to compute the standard deviation (Std) and the inter-quantile range (IQR) of the predicted probabilities of the ensemble, as an approximation of the spread in predictions of a single model trained on the full dataset. We compute the standard deviation (Std) and IQR on different groups (\textsf{dis}\xspace and \textsf{priv}\xspace), and compose the group-wise difference as \textbf{Std Parity} and \textbf{IQR Parity}, respectively. We will empirically demonstrate the usefulness of these variance-based metrics, especially where statistical-bias based measures fail to provide a complete picture of model performance, in Section \ref{sec:experiments}. We will use a software library we developed to support this empirical analysis, and describe it next. \section{Introduction} \label{sec:intro} The error of an estimator can be decomposed into a (statistical) bias term, a variance term, and an irreducible noise term. When we do bias analysis, formally we are asking the question: ``how \emph{good} are the predictions?'' The role of bias in the error decomposition is clear: if we trust the labels/targets, then we would want the estimator to have as low bias as possible, in order to minimize error. Fair machine learning (fair-ML) is concerned with the question: ``Are the predictions \emph{equally good} for different demographic or socioeconomic groups?'' One way to define \emph{equally good} is to require that the statistical bias of the estimator on samples from different demographic groups should be comparable.
In other words, unbiasedness is a fairness desideratum if we trust the data labels. This has naturally led to a variety of proposed fairness metrics, usually defined as the difference or the ratio of a measure of statistical bias (such as the True Positive Rate or the True Negative Rate) computed on different test subsets --- corresponding to socially privileged and socially disadvantaged groups, respectively. A complementary statistical question concerns the variance of the estimator. When we do variance analysis, formally we are asking the question: ``How \emph{stable} are the predictions?'' The role of variance in the error decomposition is subtle: it is unclear whether low variance is always a desirable property. For example, in a biased estimator --- whose predictions deviate from the true value --- high variance can have a corrective effect on some samples. From a philosophical perspective, randomness is morally neutral, and so the effects of large variance can be morally more acceptable (and fairness-enhancing) than the effects of a systematic skew. Randomization is, of course, already used in algorithmic fairness research~\cite{dwork_awareness}, and it is an essential building block of the differential privacy framework~\cite{DBLP:journals/cacm/Dwork11}. In this paper, our goal is to understand the role of estimator variance from a fairness perspective --- what behavior of estimator variance is morally desirable? The dominant belief in fair-ML is that both stability and fairness are simultaneously desirable, that is, that we want to construct estimators that have parity in statistical bias across groups (are ``fair'') and have low variance (are ``stable'')~\cite{huang2019stable, friedler2019comparative}. Our insights provide a more nuanced picture of the stability desideratum in fair-ML and uncover a novel fairness-variance-accuracy trade-off. 
\paragraph{\textbf{Contributions:}} \begin{enumerate} \item We propose a new family of performance measures based on group-wise parity in variance in Section~\ref{sec:fairness}, and demonstrate their usefulness on folktables~\cite{DBLP:conf/nips/DingHMS21} and COMPAS~\cite{compas_propublica} benchmarks in Section \ref{sec:experiments}. \item We clarify the relationship between fairness and stability: If a model is fair (in the sense of exhibiting low disparity in statistical bias), then we also desire it to be stable (in the sense of exhibiting overall low variance). However, instability (high variance) does not imply unfairness (high disparity in statistical bias)! Indeed, as we show empirically in Section~\ref{sec:experiments}, there is a fairness-variance-accuracy trade-off, where: \begin{itemize} \item[(i)] Parity in variance and parity in statistical bias (``fairness'') can come at the cost of overall model accuracy, as we demonstrate empirically using the logistic regression model for the ACSEmployment task on the folktables benchmark~\cite{DBLP:conf/nips/DingHMS21}. \item[(ii)] Variance can have a corrective effect on both fairness and the overall accuracy for models that have reasonably high overall accuracy. For example, we observe the MLP classifier on the ACSEmployment task on folktables and Decision Tree on COMPAS~\cite{compas_propublica} trading-off model stability to gain parity in statistical bias. \item[(iii)] Conversely, for a classifier that has low overall accuracy, attempting to improve overall stability (low variance) and parity in stability across groups (parity in variance) negates any potentially corrective effect of estimator variance, and thereby leads to model unfairness. We observe this empirically for the XGBoost classifier on COMPAS. 
\end{itemize} \item We develop and publicly release a software library called \textsf{Virny}\xspace\footnote{https://github.com/DataResponsibly/Virny} that reconciles uncertainty quantification techniques with fairness analysis/auditing frameworks. Using this library, it is easy to measure stability (estimator variance) and fairness for several protected groups, and their intersections. We use this library in our own empirical analysis. \end{enumerate} \paragraph{\textbf{Roadmap.}} In Section~\ref{sec:fairness}, we present metrics for model performance. We first review influential fairness metrics, expressing them as ratios or differences of measures of statistical bias (Section~\ref{sec:bias-metrics}). Next, we introduce several measures of estimator variance from robustness and uncertainty quantification literature, and propose a new family of performance metrics, expressed as the difference or ratio of these variance metrics (Section \ref{sec:variance-metrics}). This reconciles statistical bias-based and variance-based analysis of parity in estimator performance on different subgroups, and provides a richer picture of algorithmic discrimination, as we empirically demonstrate in Section \ref{sec:experiments}. In Section \ref{sec:library} we introduce a new software library --- \textsf{Virny}\xspace --- to compute statistical bias and variance metrics on subgroups of interest, and to compose parity-based performance metrics from them. In Section \ref{sec:experiments}, we report our empirical findings on folktables and COMPAS benchmarks, introduce the fairness-variance-accuracy trichotomy, and give a critical comparison of our proposed variance metrics with existing metrics. We conclude and discuss avenues for exciting future work in Section~\ref{sec:discussion}. \section{Motivation} \label{sec:motivation} Heteroskedastic noise --- noise whose variance tracks protected group membership --- induces bias-variance trade-offs that differ across groups.
\todo{TODO: Forward reference C-IID work.} A model does not perform equally well on all parts of the input space, which motivates the need to measure fairness: we define fairness metrics as ratios or differences between performance metrics computed on different test subsets --- corresponding to demographic groups. It is intuitive to define fairness in terms of such disparities. This has led to some good insights \todo{(cite important work here)}; it turns out that different performance metrics trade off against each other. But, reader, perhaps your statistical senses are tingling... Plotting how well the estimator performs for different groups is only one part of the story --- the bias part. In this paper we tell the variance story: we introduce a new family of ``fairness'' metrics, computed as ratios and differences of several measures of variation in model outputs. Unfortunately, there are several sources of uncertainty, and it is not immediately obvious how to quantify their individual effects. So, we instead identify stages in the model life-cycle that can introduce uncertainty, intervene on them one stage at a time, keeping all others constant, and measure the uncertainty propagated to the model outputs. We also compute standard fairness metrics under each intervention; this reconciles bias-based and variance-based measures of fairness, and provides a richer picture of algorithmic discrimination. \textbf{Contributions:} \begin{enumerate} \item We introduce a new family of variance-based fairness measures, and demonstrate how they complement bias-based fairness measures in the literature through experiments on benchmark datasets. \item We also make a significant contribution to the literature that studies the fairness of ML models through a lifecycle view. To the best of our knowledge, ours is the first systematic empirical study to quantify uncertainty at different stages of the model lifecycle.
\end{enumerate} \section{The \textsf{Virny}\xspace software library} \label{sec:library} In order to reconcile the reporting of statistical bias-based and variance-based performance measures discussed in Section \ref{sec:fairness}, we developed \textsf{Virny}\xspace\footnote{\emph{\textsf{Virny}\xspace} is a Ukrainian word meaning faithful, true, or reliable. $\#$ScienceForUkraine} --- a Python library for auditing model stability and fairness. The \textsf{Virny}\xspace library was developed based on three fundamental principles: 1) easy extensibility of model analysis capabilities; 2) compatibility with user-defined/custom datasets and model types; 3) simple composition of parity metrics based on the context of use. \textsf{Virny}\xspace decouples model auditing into several stages, including: subgroup metrics computation, group metrics composition, and metrics visualization and reporting. This gives data scientists and practitioners more control and flexibility to use the library for both model development and monitoring post-deployment. \subsection{Comparison with existing fairness libraries} Many toolkits dedicated to measuring bias and fairness have been released in the past couple of years. The majority of these toolkits are easily extensible, can measure a list of fairness metrics, and create detailed reports and visualizations. For example, \textsf{AI Fairness 360}~\cite{bellamy2018ai} is an extensible Python toolkit for fairness researchers and industry practitioners that can detect, explain, and mitigate unwanted algorithmic bias. \textsf{Aequitas}~\cite{saleiro2018aequitas} is another fairness auditing toolbox for both data scientists and policymakers. It concentrates on detailed explanations of how it should be used in a public policy context, including a ``Fairness Tree'' that guides the user to select suitable fairness metrics for their decision-making context.
\textsf{FairLearn}~\cite{bird2020fairlearn} provides an interactive visualization dashboard and implements unfairness mitigation algorithms. These components help with navigating trade-offs between fairness and model performance. Similarly, \textsf{LiFT}~\cite{vasudevan2020lift} can measure a set of fairness metrics, but additionally, it focuses on scalable metric computation for large ML systems. Authors have shown how bias measurement and mitigation tools can be integrated with production ML systems and, at the same time, how to enable monitoring and mitigation at each stage of the ML lifecycle. Finally, \textsf{fairlib}~\cite{han2022fairlib} implements a broad range of bias mitigation approaches and supports the analysis of neural networks for complex computer vision and natural language processing tasks. In addition, the analysis module of \textsf{fairlib} provides an interactive model comparison to explore the effects of different mitigation approaches. \textsf{Virny}\xspace distinguishes itself from the existing libraries in three key aspects. First, our software library instantiates our conceptual contribution, allowing the data scientist to understand the role of estimator variance in assessing model fairness. \textsf{Virny}\xspace supports the measurement of both statistical bias and variance metrics for a set of initialized models, both overall on the full test set and broken down by user-defined subgroups of interest. Second, \textsf{Virny}\xspace provides several APIs for metrics computation, including an interface for the analysis of a set of initialized models based on multiple executions and random seeds. This interface enables a detailed model audit, and supports reliable and reproducible analysis of model performance. Third, our library allows data scientists to specify multiple sensitive attributes, as well as their intersections, for analysis. 
For example, \textsf{Virny}\xspace can audit statistical bias and variance with respect to all of the following simultaneously: \textsf{sex}, \textsf{race}, \textsf{age}, \textsf{sex\&race}, and \textsf{race\&age}. We also support the definition of non-binary sensitive attributes (although we limit our experimental evaluation in Section~\ref{sec:experiments} to binary groups). We hope that this flexibility in selecting datasets, models, metrics and subgroups of interest will help usher in an era of research where measuring and reporting a variety of metrics of model performance on different subgroups is the norm, and not a specialized research interest of the few. \subsection{Architecture} \begin{figure*}[h!] \centering \includegraphics[width=\linewidth]{library-architecture.png} \vspace{-0.75cm} \caption{\textsf{Virny}\xspace Architecture} \label{fig:library_diagram} \end{figure*} Figure \ref{fig:library_diagram} shows how \textsf{Virny}\xspace constructs a pipeline for model analysis. Pipeline stages are shown in blue, and the output of each stage is shown in purple. Each analysis pipeline has three processing stages: subgroup metrics computation, group metrics composition, and metrics visualization and reporting. We will now describe each of them. \subsubsection{Inputs} To use \textsf{Virny}\xspace, the user needs to provide three inputs, namely: \begin{itemize} \item A \textsf{dataset class} is a wrapper for the user's dataset that includes its descriptive attributes, such as the target column, numerical columns, categorical columns, etc\xspace. This class must inherit from the $BaseDataset$ class, which was created for user convenience. The idea behind having a common base class is to standardize raw dataset pre-processing and feature creation and to simplify the logic for downstream metric computation. \item A \textsf{config Yaml} is a file that specifies the configuration parameters for \textsf{Virny}\xspace's user interfaces for metrics computation.
We adopt this user-specified configuration approach to give users more flexibility. For instance, users can easily shift from one experiment to another, having just one config yaml per experiment, without having to make any further modifications before using \textsf{Virny}\xspace's user interfaces. The config file contains information such as the number of bootstrap samples to create (this is the number of estimators in our ensemble for variance analysis), the fraction of samples in each bootstrap sample, a list of random seeds, etc\xspace. Importantly, we ask the user to specify subgroups of interest in the dataset by simply passing a dictionary, where keys are the names of the sensitive attribute columns and values identify the groups of interest. Users can also specify intersectional groups here. \item Finally, a \textsf{models config} is a Python dictionary, where keys are model names and values are initialized models for analysis. This dictionary helps conduct audits of multiple models for one or multiple runs, and to analyze different types of models. \end{itemize} \subsubsection{Subgroup metric computation} After the variables are input to a user interface, \textsf{Virny}\xspace creates a \textsf{generic pipeline} based on the input dataset class to hide pre-processing complexity (such as one-hot encoding categorical columns, scaling numerical columns, etc\xspace) and provide methods for subsequent model analysis. Later, this generic pipeline is used in subgroup analyzers to compute different sets of metrics. Our library implements a \textsf{Subgroup Variance Analyzer} and a \textsf{Subgroup Statistical Bias Analyzer}, and it is easily extensible to include other analyzers. We provide abstract analyzer classes for users to inherit from and to create custom analyzers. Once these analyzers finish computing metrics, their outputs are combined and returned as a \textsf{pandas} dataframe.
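For intuition, the core of subgroup metric computation amounts to slicing the test set by group and applying the same base measure to each slice. The pandas sketch below is purely illustrative (it is not \textsf{Virny}\xspace's actual code; the column names are hypothetical):

```python
import pandas as pd

# toy audit frame: predictions joined with one sensitive attribute
df = pd.DataFrame({
    "sex":    [0, 0, 0, 1, 1, 1],
    "y_true": [1, 0, 1, 1, 1, 0],
    "y_pred": [1, 0, 1, 0, 1, 1],
})

def accuracy(g):
    return (g["y_true"] == g["y_pred"]).mean()

overall = accuracy(df)  # accuracy on the full test set
per_group = df.groupby("sex")[["y_true", "y_pred"]].apply(accuracy)
accuracy_parity = per_group[0] - per_group[1]  # dis minus priv
```

The same slicing pattern extends to any base measure (TPR, FPR, or the variance metrics), with the parity metrics composed from the per-group values afterwards.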
The \textsf{Subgroup Variance Analyzer} is responsible for computing our variance metrics (from Section \ref{sec:variance-metrics}) on the overall test set, as well as on subgroups of interest specified by the user. We use a simple bootstrapping approach~\cite{efron1994bootstrap} to quantify estimator variance, as is common in uncertainty quantification literature~\cite{Darling2018TowardUQ,liu2022jitter,debiasing_bias}. However, instead of simply computing the standard deviation of the predictive distribution, we also compute additional metrics such as label stability, jitter and IQR (defined in Section \ref{sec:variance-metrics}). Similarly, the \textsf{Subgroup Statistical Bias Analyzer} computes statistical bias metrics (such as accuracy, TPR, FPR, TNR, and FNR) on the overall test set as well as for subgroups of interest. \subsubsection{Group metric composition} The \textsf{Metrics Composer} is responsible for the second stage of the model audit. Currently, it computes the statistical bias-based and variance-based parity metrics described in Section \ref{sec:fairness}, but a user can compose additional metrics if desired. For example, the fairness measure of Disparate Impact is composed as the ratio of the Positive Rate computed on the \textsf{priv}\xspace and \textsf{dis}\xspace subgroups. \subsubsection{Metric visualization and reporting} The \textsf{Metrics Visualizer} unifies different processing steps on the composed metrics and creates various data formats to ease visualization. Users can use methods of this class to create custom plots for analysis. Additionally, these plots can be collected in an HTML report with comments for reporting. \subsection{User Interfaces} For the first library release, we have developed the following three user interfaces: \subsubsection{Single run, single model} This interface gives the ability to audit one model for one execution. 
Users can set a model seed or generate and record a random seed, and control the number of estimators for bootstrap, the fraction of samples used in each bootstrap sample, and the test set fraction. This interface returns a \textsf{pandas} dataframe of statistical bias and variance metrics for an input base model and stores results separately in a file. \subsubsection{Single run, multiple models} This interface extends the functionality of the previous interface to audit multiple models. This can be more convenient than repeated single-model audits, and speeds up the computation of metrics across all models. \subsubsection{Multiple runs, multiple models} This interface can be used for a more extensive model audit. Users specify a set of models to use and the seeds for each run. This interface then computes metrics for all specified models and seeds, and saves the results after each run. In addition to metrics, this interface stores the seeds used for each run, which can help maintain consistent and reproducible results, such as those reported in Section~\ref{sec:experiments}. \section{Quantifying Uncertainty in the Model Lifecycle} \label{sec:lifecycle} \todo{New experiments: model selection and data generating process, on one folktables task on one state} We quantify the uncertainty/variance in model outputs with respect to the following dimensions. Our list of interventions is mapped onto different stages of the data science lifecycle, as shown in Figure \ref{fig:lifecycle}. \subsection{Hypothesis Space} \label{sec:hypothesis} Different model architectures encode different inductive biases, and so one source of uncertainty, which could result in instability in model outputs downstream, is the hypothesis class over which we are running our optimization.
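To make this concrete, here is a toy, self-contained illustration (ours, not one of the paper's experiments) of how the hypothesis class alone changes output stability: we refit a high-bias constant predictor and a high-variance 1-nearest-neighbour predictor on bootstrap resamples of the same data, and compare the spread of their predictions.

```python
import random
import statistics

random.seed(0)

# Toy regression data: y = x + noise (the noise is fixed per point).
xs = [i / 10 for i in range(50)]
data = [(x, x + random.gauss(0, 1.0)) for x in xs]

def fit_constant(train):
    """High-bias hypothesis class: predict the training mean everywhere."""
    mean_y = statistics.mean(y for _, y in train)
    return lambda x: mean_y

def fit_1nn(train):
    """High-variance hypothesis class: copy the nearest neighbour's label."""
    return lambda x: min(train, key=lambda p: abs(p[0] - x))[1]

def avg_bootstrap_variance(fit, queries, n_boot=200):
    """Average spread of a model's predictions across bootstrap refits."""
    variances = []
    for q in queries:
        preds = []
        for _ in range(n_boot):
            sample = [random.choice(data) for _ in data]  # with replacement
            preds.append(fit(sample)(q))
        variances.append(statistics.variance(preds))
    return statistics.mean(variances)

queries = [0.5, 1.5, 2.5, 3.5, 4.5]
var_const = avg_bootstrap_variance(fit_constant, queries)
var_1nn = avg_bootstrap_variance(fit_1nn, queries)
print(var_const, var_1nn)  # the 1-NN predictions vary more across refits
```

Nothing about the data changes between the two runs; the difference in predictive variance comes entirely from the choice of hypothesis class.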
\subsection{Parameter Space} \label{sec:parameter} The settings of several hyperparameters in a predictive model determine how stable the model outputs will be --- based on the choices we make in the parameter space, we might end up at a minimum in a flat valley in the loss surface, and parameterize a stable model, whose performance generalizes reasonably well to unseen samples, or the local minimum we pick might be extremely sharp, and the corresponding model could produce large deviations in outputs for small perturbations in the input space. \subsection{Data Processing} \label{sec:preprocessing} Raw data is necessarily pre-processed and cleaned before being fed into data-driven models. The choice of data engineering technique is another source of uncertainty, because it involves a strategic manipulation of the dataset, which modifies its statistical properties. For example, standardization or min-max scaling are common techniques to reduce the variance in a dataset. \subsection{Data Collection} \label{sec:collection} The quality of an estimator depends acutely on the quality and size of the dataset that it was estimated from. We supplement our analysis of the impact of data errors (see above) with an analysis of dataset size --- a model simply could not have seen enough data to make stable predictions, i.e., to find good, stable minima, and so dataset size is an important facet along which to study model instability. \subsection{Data Generating Process} \label{sec:generation} In practice, we will not be able to confidently assert that the data we built our model from, and the data that we will apply our model to get predictions for, come from the same distribution. If any part of the data generating process changes (more weakly, if we suddenly start accessing/sampling from a different, previously unseen part of the data space) then the guarantees that we have for model performance no longer hold.
We will study bias-variance trade-offs in the simple setting of test data coming from a distribution that is different from the one our training set was sampled from. \subsection{Learning Paradigm} \label{sec:paradigm} We take a second, more detailed look at the effect of data uncertainty on model stability by intervening on the learning paradigm -- whether the model sees all the data at once (batch), or is trained iteratively (incremental/online). \begin{figure} \includegraphics[width=\textwidth]{stability_lifecycle.png} \caption{Stages in the model lifecycle at which we study stability. The numbers in circles map to different interventions, described in Section \ref{sec:lifecycle}.} \centering \Description{Stages in the model lifecycle at which we will measure stability} \label{fig:lifecycle} \end{figure} \section{Uncertainty Quantification Techniques} \label{sec:methods} \subsection{Sampling During Inference \todo{TODO: Diagrams? + How to construct confidence intervals}} \label{sec:sampling-during-inference} The basic idea of these UQ techniques is to generate samples at inference time (hence the name), and use the sample variance of simulated values to approximate the true variance of the estimator (by the law of large numbers) \cite{wasserman2004all}. We will now discuss dominant approaches to constructing these samples during inference, contrasting their statistical properties/guarantees and computational complexities. \subsubsection{\textbf{The Bootstrap}} \label{sec:bootstrap} The bootstrap~\citep{efron1994bootstrap} is a widely used and understood statistical procedure, so we will not spend time explaining it here. Instead, we will describe how the Bootstrap can be used at inference time to get uncertainty estimates, along with the standard predicted probabilities: By construction, any ensemble model implicitly fits several copies of the same model (with the same base architecture).
Ensembles constructed from bootstraps of the same training set come with implicit uncertainty estimates --- we make a prediction based on the ensemble mean, but we could also look at the ensemble variance. In this way, the predictive variance of the bootstrap outputs approximates the variance in predictions as if a single model was being used to make a prediction. Indeed, the reduction in variance that we gain from ensembling can be interpreted as an estimate of the worst-case variance of a single model. \subsubsection{\textbf{The Jackknife}} \label{sec:jackknife} The jackknife~\citep{Jackknife_review} is the intellectual predecessor to the sampling technique of leave-one-out cross-validation. Similarly to how we used the Bootstrap to construct estimates of predictive variance (and related measures), we can construct samples during inference from the base training set by leaving one ($k$ in general) sample out each time. That is, the Jackknife constructs subsampled datasets by iteratively leaving out one sample of the training set (hence the name leave-one-out). If there are $n$ samples, the Jackknife constructs $n$ training sets, each of which differs from the full training set in exactly one omitted sample. Hence, a common realistic scenario under which the Jackknife is preferable to the Bootstrap is when the number of samples is critically low: recall that the Bootstrap constructs variations of the base training set by sampling with replacement, and if we do not have enough samples to construct sufficiently distinct subsets this way, we might instead prefer the Jackknife, which is computationally cheap when the training set is small. Conversely, the Bootstrap is the more computationally feasible alternative when the sample size is prohibitively high, since the Jackknife requires refitting the model $n$ times.
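A minimal sketch, using a simple mean estimator as a stand-in for a fitted model, contrasting how the two procedures construct their inference-time samples:

```python
import random
import statistics

random.seed(1)
train = [random.gauss(10, 2) for _ in range(30)]

def estimator(sample):
    """Stand-in for any fitted model's prediction: here, the sample mean."""
    return statistics.mean(sample)

# Bootstrap: resample with replacement, same size as the original training set.
boot_preds = [estimator([random.choice(train) for _ in train])
              for _ in range(500)]

# Jackknife: n leave-one-out subsets, each omitting exactly one sample.
jack_preds = [estimator(train[:i] + train[i + 1:]) for i in range(len(train))]

# The sample variance of the simulated values approximates estimator variance.
# (The standard jackknife variance estimator additionally rescales the raw
# spread of the leave-one-out values upward, by a factor of order n, since
# the leave-one-out estimates are very close to one another.)
print(statistics.variance(boot_preds), statistics.variance(jack_preds))
```

Note the cost profile discussed above: the Bootstrap refits as many times as we choose (500 here), while the Jackknife refits exactly $n$ times, once per training point.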
\subsubsection{\textbf{Jackknife+}} \label{sec:jaccknife-plus} The authors of~\cite{barber2021jackknife+} showed that the Jackknife fails to provide an assumption-free guarantee. The key observation they make is that the test residual computed on a held-out test set is not comparable with the leave-one-out residuals computed on the Jackknife. This is because the former sees one more observation in training than the latter sees (the single observation that is left out while constructing the subsets using the Jackknife). Instead, the Jackknife+ makes a slight modification: it uses the ensemble constructed from models trained on the leave-one-out training sets directly for inference. The residuals on the held-out test set can therefore be used to construct confidence intervals for the ensemble with a guarantee of distribution-free predictive coverage at level $1-2\alpha$. We point the reader to the original Jackknife+ paper~\cite{barber2021jackknife+} for the technical proofs. \subsubsection{\textbf{Jackknife+-after-Bootstrap}} \label{sec:jackknife-bootstrap} There are several variants that combine ideas from the Jackknife and the Bootstrap. Here we highlight perhaps the most influential variant: the Jackknife+-after-bootstrap~\citep{kim2020predictive_jackknife_bootstrap}. The authors make a very neat observation that allows them to use ``out-of-bag'' estimates to improve on the computational complexity of sampling-during-inference techniques, namely that it is possible to obtain the $i$-th leave-one-out residual without having to recompute residuals from the base model, by reusing the residuals computed from each training set. We can simply aggregate the residuals from models that did not see the $i$-th data point and directly compute the residual!
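The out-of-bag bookkeeping behind this observation can be sketched as follows (a toy mean predictor stands in for the base model; this is our illustration, not the authors' code):

```python
import random
import statistics

random.seed(2)
n = 40
train_y = [random.gauss(0.0, 1.0) for _ in range(n)]

n_boot = 200
# Fit each bootstrap model once, remembering which indices it saw ("in bag").
models = []
for _ in range(n_boot):
    idx = [random.randrange(n) for _ in range(n)]
    prediction = statistics.mean(train_y[i] for i in idx)  # toy "model"
    models.append((set(idx), prediction))

# The i-th leave-one-out residual, with no refitting: aggregate only the
# models whose bootstrap sample did NOT contain point i (out-of-bag models).
loo_residuals = []
for i in range(n):
    oob = [pred for in_bag, pred in models if i not in in_bag]
    # each point is out of bag for roughly (1 - 1/n)^n ~ 37% of the models
    loo_residuals.append(abs(train_y[i] - statistics.mean(oob)))

print(len(loo_residuals), statistics.mean(loo_residuals))
```

All $n_{boot}$ models are fit once up front; the leave-one-out residuals then come for free by filtering on bag membership, which is the computational saving the variant exploits.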
\subsection{Conformal Prediction} \label{sec: conformal} To motivate conformal prediction let us think about a scenario in which we are working with a large-scale dataset, or one where model evaluation is expensive: As the sample size increases, both the bootstrap and the jackknife become increasingly computationally expensive --- prohibitively so. In such a situation it would be more feasible to instead create a second hold-out set --- called the ``calibration set'' --- from which to construct our confidence intervals. This is a very different procedure from sampling-during-inference techniques. Here, we train a single model --- we do not create multiple copies of the base model, train them on slightly different training sets and then construct predictive distributions. Instead, we train one model, apply it on the calibration set, compute residuals (or any suitable non-conformity score) and use them to construct confidence intervals for unseen samples. Alternatively, we could construct prediction sets for each unseen sample (with uncertainty built into the selection), instead of constructing confidence intervals around point estimates. In this manner conformal prediction can be thought of as a transformation from an uncertainty heuristic (computed on the labelled samples) into an uncertainty guarantee (for held-out samples). The basic algorithm for conformal prediction (with coverage $1-\alpha$) is as follows: \begin{enumerate} \item Fit your estimator. This is the model whose uncertainty you want to quantify and calibrate to satisfy the desired coverage. \item Define a score function. Intuitively this is a measure of non-conformity, and larger scores indicate larger disagreement between inputs. \item Compute the non-conformity between the true label and the predicted values from the estimator for all the samples in the calibration set. From this compute the $(1-\alpha)$ quantile of these calibration scores (let us call it $\hat{q}$).
\item Use this quantile to form the prediction sets for new examples as: $$ C(X_{test}) = \{y : s(X_{test},y) \leq \hat{q}\}. $$ \end{enumerate} Conformal prediction is an increasingly popular area of research in machine learning. For a more complete treatment of this technique we point the reader to the excellent primer on Conformal Prediction from \cite{angelopoulos2021conformal} and to seminal works from~\cite{shafer2008conformal, vovk2017conformal, vovk_cross-conformal}. \todo{TODO: Comparison of methods table} \section{Model Stability/Robustness Metrics} \label{sec:metrics} The Jackknife and Bootstrap (and their many variants) are all ways to construct a predictive distribution on a held-out set, instead of single point estimates. There are several things we can do with this distribution; we group them broadly into supervised and unsupervised methods. Supervised methods use the true labels to compute residuals and construct confidence intervals for unseen test samples, as discussed in Section \ref{sec:methods}. Now, we discuss unsupervised approaches which instead focus on the statistical properties of the predictive distribution. In this way, sampling-during-inference UQ techniques have a natural connection to metrics of model stability from the stability/adversarial-robustness literature. Popular statistical measures of variation include, but are not limited to: \todo{TODO: set up single notation for all equations + diagram showing connection between methods?} \begin{enumerate} \item \textbf{Label Stability:} Normalized absolute difference between the number of times a sample is classified as positive or negative. A value of 1 means perfect stability. A value of 0 means extremely bad model stability, because it indicates that ensemble members disagree greatly on the predicted labels. \item \textbf{Jitter:} TODO \item \textbf{Sample Variance:} The spread of the predictive distribution is a natural measure of model (in)stability.
\item \textbf{Sample Interquartile Range:} This is another natural statistical measure of the spread of predictions by the ensemble. \item \textbf{Predictive Entropy:} (?) \end{enumerate} \section{Notation} \label{sec:notation} Before proceeding let us fix some notation. \todo{TODO} \section{Quantifying Uncertainty in the Model Lifecycle} \label{sec:setup} \subsection{Interventions} We quantify the uncertainty/variance in model outputs with respect to the following dimensions. Our list of interventions is mapped onto different stages of the data science lifecycle, as shown in Figure \ref{fig:lifecycle}. \begin{enumerate} \item \textbf{Hypothesis Space:} Different model architectures encode different inductive biases, and so one source of uncertainty, which could result in instability in model outputs downstream, is the hypothesis class over which we are running our optimization. \todo{Bias-variance plots (broken down by groups) for different models: linear models, tree-based models, ensembles, knn, linear NNs, SVM(?).
Also plot training dynamics for each model: train loss, test loss, label stability on the test set, as a function of train epochs} \item \textbf{Parameter Space:} The settings of several hyperparameters in a predictive model determine how stable the model outputs will be --- based on the choices we make in the parameter space, we might end up at a minimum in a flat valley in the loss surface, and parameterize a stable model, whose performance generalizes reasonably well to unseen samples, or the local minimum we pick might be extremely sharp, and the corresponding model could produce large deviations in outputs for small perturbations in the input space. \todo{Pick one (best) model from each hypothesis class, train it in three regimes: underfitting, overfitting and tuned. Plot bias vs fairness for different subgroups.} \item \textbf{Data Processing:} Raw data is necessarily pre-processed and cleaned before being fed into data-driven models. The choice of data engineering technique is another source of uncertainty, because it involves a strategic manipulation of the dataset, which modifies its statistical properties. For example, standardization or min-max scaling are common techniques to reduce the variance in a dataset. \todo{We will limit our analysis to the impact of synthetically generated data errors, so that we have access to the ground truth. For this we will: simulate nulls (percentage), add noise to produce outliers (frac of noise/weight of noise) and randomly flip labels in the train set (frac of labels flipped -- can be strategic wrt groups). Create bias-variance plots comparing different procedures for each error type. Also plot bias and variance metrics for different values of the noise hyperparameters (written in brackets above) } \item \textbf{Data Collection:} The quality of an estimator depends acutely on the quality and size of the dataset that it was estimated from.
We supplement our analysis of the impact of data errors (see point above) with an analysis of dataset size --- a model simply could not have seen enough data to make stable predictions, i.e., to find good, stable minima, and so dataset size is an important facet along which to study model instability. \todo{Subsample the dataset and report trends for bias and variance metrics} \item \textbf{Data Generating Process:} In practice, we will not be able to confidently assert that the data we built our model from, and the data that we will apply our model to get predictions for, come from the same distribution. If any part of the data generating process changes (more weakly, if we suddenly start accessing/sampling from a different, previously unseen part of the data space) then the guarantees that we have for model performance no longer hold. We will study bias-variance trade-offs in the simple setting of test data coming from a distribution that is different from the one our training set was sampled from. \todo{Only do for folktables: train on (S1, Y1), sample test data points from (S2, Y1), (S1, Y2) and (S2,Y2) and report bias-variance plots for each. Can we capture a notion of shift? KL between datasets?} \item{ \textbf{Learning Paradigm:} We take a second, more detailed look at the effect of data uncertainty on model stability by intervening on the learning paradigm -- whether the model sees all the data at once (batch), or is trained iteratively (incremental/online).} \todo{Compare incremental vs batch versions of the same algorithm, ie. hold the hypothesis space fixed, and report bias-variance plots. } \end{enumerate}
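The split conformal prediction steps listed earlier (fit, score, calibrate, form prediction sets) can be sketched end-to-end as follows; the mean predictor and the Gaussian data are toy stand-ins:

```python
import math
import random
import statistics

random.seed(3)
alpha = 0.1  # target coverage 1 - alpha = 90%

# Toy data; the "model" simply predicts the training mean.
data = [random.gauss(5, 1) for _ in range(400)]
train, calib, test = data[:200], data[200:300], data[300:]
y_hat = statistics.mean(train)  # step 1: fit the estimator

# Steps 2-3: non-conformity score s(x, y) = |y - y_hat|, computed on the
# calibration set, then the finite-sample-corrected (1 - alpha) quantile.
scores = sorted(abs(y - y_hat) for y in calib)
n = len(calib)
k = math.ceil((n + 1) * (1 - alpha))
q_hat = scores[min(k, n) - 1]

# Step 4: the prediction set for a new point x is {y : s(x, y) <= q_hat},
# which here is the interval [y_hat - q_hat, y_hat + q_hat].
coverage = sum(abs(y - y_hat) <= q_hat for y in test) / len(test)
print(round(coverage, 2))  # empirical coverage, close to 1 - alpha
```

Only one model is ever fit; the guarantee comes entirely from the quantile of the calibration scores, which is what makes the procedure cheap at scale.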
\section{Principle of quantum signatures} Suppose Alice sends a message to Bob, who in turn may pass it on to Charlie. How can Bob tell that the message is from Alice, and has not been tampered with, and how can Charlie tell that Bob did not modify or by himself generate the message? If one is using a conventional handwritten signature, then Alice has previously distributed copies of her signature, and recipients compare the signature on a document with the previously distributed signature sample. Quantum signature schemes also have two stages, a distribution stage where sequences of quantum states are distributed among the participants, and an entirely ``classical'' messaging stage, which can occur much later, where Alice sends signed messages to Bob or Charlie. In the type of scheme we will employ, Alice distributes sequences of non-orthogonal quantum states to the possible recipients Bob and Charlie, as signatures for the possible future messages. The classical information which fully describes these sequences can be viewed as Alice's ``private keys''. Only Alice has exact knowledge of these sequences, i.e. of the private keys. Bob and Charlie, or any other malicious party, are only able to obtain partial information about the quantum states in these sequences, no matter what type of quantum measurements they are using. The simplest case is if Alice wants to later sign a one-bit message, ``0'' or ``1''. Alice then initially distributes quantum state sequences corresponding to ``0'' and ``1'', that is, she distributes the quantum signatures for ``0'' and ``1'', respectively. In the messaging stage, if Alice wants to communicate ``0'', she sends the message together with the corresponding private key, that is, the classical description of what states the corresponding sequence contained.
Let us say that the recipient is Bob; due to the non-orthogonality of the signature states, Bob inevitably can obtain only partial information about the private key, whether he is honest and performs the measurements in the protocol or not. Therefore he only has ``noisy'' information about the private key corresponding to either ``0'' or ``1'', no matter what measurement procedure he uses. Note that here ``noisy'' refers both to errors in identifying the states, and to any noise in the transmission line. The latter, however, can in principle be avoided, whereas the former cannot be avoided even in principle. Bob then checks Alice's classical private key (classical information about the sent quantum signature) against the measurement results he obtained in the distribution stage, for the corresponding sequence of quantum states. In practice, due to errors, Bob's measurement results will not perfectly match Alice's private key, but Bob accepts the message as coming from Alice if the ``distance'' between his stored ``noisy'' information and the private key is small enough. If Bob later wants to forward the message, he forwards the message (``0'', say) and the corresponding private key (that he received from Alice) to Charlie, who tests the signature in the same way as Bob. Importantly, Bob should be able to know whether Charlie is likely to accept the message already when Bob performs his initial check of the signature, without at that point contacting Charlie. Bob may try to cheat, that is, try to make Charlie accept a forged message as genuinely coming from Alice. This he can do either if he has received no message from Alice, or if he has received some other message from Alice. For example, if Bob receives the message ``0'' and corresponding private key from Alice, he may decide to cheat and try to convince Charlie that Alice communicated ``1''.
To do so, he would have to send the private key corresponding to ``1'', but this is not at his disposal. Instead, in the signature protocol we are implementing, all he can do is send a classical sequence based on the measurement that he performed during the distribution stage, which only gives a ``noisy'' copy of the private key for ``1''. He should choose the sequence that, as far as Bob knows, is most likely to be accepted by Charlie as coming from Alice. However, Bob's noisy copy of the private key and Charlie's noisy private key for ``1'' are different, because they are obtained from different measurements. Therefore Charlie will detect a greater ``noise distance'' between his own noisy private key and the one he receives from Bob, than he would expect if the message came from Alice. Therefore Charlie knows something is wrong and rejects the message. Alice can also try to cheat by sending a message that Bob will accept but Charlie will reject, i.e., she can try to send a non-transferable message. Bob and Charlie guard against this by symmetrizing their noisy measurement results for the private key, in the distribution stage. This can be done, for example, by randomly swapping half of their ``noisy'' measurement results with each other. After this swapping, from Alice's point of view, Bob's and Charlie's measurement results follow the same statistics, and therefore it is impossible for her to create a non-transferable message, as long as Charlie uses a less strict threshold for accepting the signed message than Bob. \section{Security analysis} To be considered a useful scheme, quantum digital signatures (QDS) must be secure against both repudiation and forgery. The scheme is secure if the probability that the signature can be repudiated or forged decays exponentially with the length of the key.
In addition, the scheme should be robust, which means that if all parties behave as they should, the protocol runs as intended with high probability. The analysis below follows the same methods as in \cite{Donaldson_15}. {\it Security against repudiation}: For successful repudiation, Charlie must reject a message that Bob has already accepted. Due to the random swapping of measurement results between Bob and Charlie, the measurement statistics they share are symmetrical, which provides security against repudiation. No matter what cheating strategy Alice adopts, including strategies involving entangled states, this will result in Bob and Charlie having the same probability $p$ to observe a mismatch in the messaging stage. Alice can adjust $p$, but this is all she can do. To achieve successful repudiation, Alice can manipulate the states sent to Bob and Charlie to try to cause a disagreement between them. We give Alice full control over the probability of a mismatch between the private key and Bob's (Charlie's) eliminated signature. We call the probability of a mismatch $p_B$ for states first sent to Bob, and $p_C$ for states first sent to Charlie. For successful repudiation, Bob must accept the message for both parts of his signature, each of length $L/2$, and Charlie has to reject the message for at least one part of his signature. Since $P(A\cap B)\le \min\{P(A),P(B)\}$ and $P(A\cup B)\le P(A)+P(B)$, we can write \begin{equation}\label{prep1} \begin{split} p_{rep}&=P((A\cap B)\cap(C\cup D))\\ &\le \min\{\min\{P(A),P(B)\},P(C)+P(D)\}, \end{split} \end{equation} where $P(A)$ ($P(B)$) is the probability that Bob will accept the message using the $L/2$ states received from Alice (Charlie), and $P(C)$ ($P(D)$) is the probability that Charlie will reject the message due to the $L/2$ states received from Bob (Alice).
Using Hoeffding's inequalities~\cite{hoeff}, which bound the probability that the empirical mean of $L$ independent random variables deviates from their expected mean, the probabilities $P(A)$ and $P(B)$ that Bob will accept the message, for the length $L/2$ parts of his eliminated signature received from Alice and Charlie respectively, are \begin{equation}\begin{split}P(A)&\le\exp[-(p_B-s_a)^2L],\\ P(B)&\le\exp[-(p_C-s_a)^2L],\end{split}\end{equation} where $s_a$ is the authentication threshold. Similarly, the probabilities $P(C)$ and $P(D)$ that Charlie will reject the message for the length $L/2$ parts of his eliminated signature received from Bob and Alice respectively are \begin{equation}\begin{split}P(C)&\le\exp[-(s_v-p_B)^2L],\\ P(D)&\le\exp[-(s_v-p_C)^2L],\end{split}\end{equation} where $s_v$ is the verification threshold and $s_v>s_a$. Now we can take $p=\max\{p_B,p_C\}$. In that case $\exp[-(p-s_a)^2L]=\min\{P(A),P(B)\}$. In addition, $2\exp[-(s_v-p)^2L]\ge P(C)+P(D)$. Combining these two equations with Eq. \eqref{prep1}, we get \begin{equation} p_{rep}\le\min\{2\exp[-(p-s_a)^2L],2\exp[-(s_v-p)^2L]\}, \end{equation} where the first term in the minimum has been doubled for simplicity, noting that this slightly loosens the bound on the repudiation probability. Alice's optimal choice of $p$ is the one that maximizes the smaller of these two terms, that is, $p=\frac{s_a+s_v}{2}$. With this choice, her repudiation probability is bounded as \begin{equation} p_{rep}\le2\exp\left[-\frac{(s_v-s_a)^2}{4}L\right]. \label{prep} \end{equation} This decays exponentially with the length of the signature, and thus the scheme is secure against repudiation. {\it Security against forging}: It is easier to forge a message that is claimed to be forwarded than one that is claimed to come directly from Alice. Bounding the probability for the former also bounds the probability for the latter.
Therefore, we will consider the case where Bob attempts to forge a message which he is forwarding to Charlie, claiming he received it from Alice. Since the protocol is symmetric with respect to the two recipients Bob and Charlie, this also bounds Charlie's probability to forge messages. To successfully forge, Bob must, in the messaging stage, avoid declaring too many of the states that Charlie has eliminated: he must cause fewer than $s_vL/2$ mismatches in each length $L/2$ part of Charlie's eliminated signature. Since Bob can control what he forwards to Charlie in the distribution stage, Bob can completely control the number of mismatches for these positions. If he so wishes, he can cause no mismatches in those positions. Therefore it is the measurement results which Charlie did not forward to Bob that Bob has to try to guess. The measurement results Charlie received through Bob are used to protect against repudiation, whereas the measurement results Charlie obtained for states directly received from Alice are used to test for forgery by Bob, and vice versa. Assuming that Bob cannot interfere with the quantum states which Alice sends to Charlie, Bob's best forging strategy will involve measurements on the copies of these states which Bob legitimately received from Alice. Based on this, Bob will make a best guess, when later declaring to Charlie what these states supposedly were. The optimal measurement Bob should make to forge is limited only by what is possible in quantum mechanics, not by any considerations of what measurements are practical to realize, and is not the same measurement as he would make if honestly following the protocol. In general, one should assume that Bob knows which measurement results Charlie will forward, and which ones he will keep to himself, so that Bob can change his measurement strategy accordingly for states in different positions.
The fact that the possible states Alice can send are non-orthogonal provides the basis of the security of the scheme. As in \cite{Collins_14}, the optimal individual measurement Bob can perform is a minimum-cost measurement, minimising Bob's ``cost'' associated with mismatches. Since the states sent by Alice are uncorrelated with each other, collective forging strategies, where measurements on successive signature states can depend on the results obtained in previous measurements, provide no advantage over individual forging strategies, where Bob simply repeats the same optimal measurement for each signature state~\cite{Collins_14}. The most general type of forging attack is a coherent forging attack, where Bob can measure any number of signature states in an entangled basis. While intuitively the protocol should remain secure also against coherent forging, this analysis is not in general straightforward. We therefore leave discussion of coherent forging attacks for future work, noting that it has been shown that for BB84 signature states, coherent attacks provide no advantage \cite{Wallden_14}. To prove security against individual and collective forging, we need to bound Bob's minimum cost for a measurement on an individual signature state, which in this case is identical to Bob's probability to cause a mismatch for a single signature element. This is done following the method in the supplemental material of \cite{Donaldson_15}, resulting in a lower bound on the minimum cost $C_{min}$, depending on the cost matrix, which is determined from the experimental data, and $p_{min}$, which is the minimum probability for Bob to incorrectly identify a state received from Alice.
$p_{min}$ depends on the amplitude of the initial coherent states, and can be shown to be \cite{Collins_14} \begin{equation} p_{min}=1-\frac{1}{16}\left|\sum_{i=1}^4\sqrt{\lambda_i}\right|^2,\label{pmin} \end{equation} where $\lambda_{1,2}=2\exp(-\alpha^2)[\cosh(\alpha^2)\pm\cos(\alpha^2)]$ and $\lambda_{3,4}=2\exp(-\alpha^2)[\sinh(\alpha^2)\pm\sin(\alpha^2)].$ Here, we are assuming that the forger Bob has access to the states Alice sends before any losses or imperfections have acted on them. This is not true for an honest Charlie, whose measurements on the states are subject to loss and imperfections. An example of a calculation of a bound for the minimum cost for an experimental cost matrix is given in Section 2, and a calculation for a theoretical cost matrix is given in Section 3. The probability of a successful forgery is the probability that Charlie measures fewer than $s_v L/2$ errors in the results for the $L/2$ states received directly from Alice during forgery by Bob. Using Hoeffding's inequality, the probability of a successful forgery is therefore \begin{equation} p_{forg}\le\exp\left[-(C_{min}-s_v)^2L\right].\label{pforg} \end{equation} This probability decays exponentially with respect to signature length as long as $C_{min}>s_v$. {\it Robustness}: A QDS scheme is only useful if it fails only with small probability. If all parties are honest, then Bob should accept the message as being genuine, except with small probability. The message is rejected if Bob detects more than $s_aL/2$ errors in either of the length $L/2$ parts of his eliminated signature, which using Hoeffding's inequalities occurs with probability \begin{equation} p_{fail}\le2\exp\left[-(s_a-p_{err})^2L\right],\label{pfail} \end{equation} where $p_{err}$ is the probability that an honest recipient, following the protocol, will eliminate the state actually sent by Alice. 
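As a numerical sanity check, Eq. \eqref{pmin} and the forging bound \eqref{pforg} can be evaluated directly. The following sketch is ours (the function names are not part of the protocol) and assumes the $\lambda_i$ exactly as given above:

```python
import math

def pmin(alpha):
    """Minimum probability to misidentify one of the four phase-encoded
    coherent states |alpha e^{i phi}>, phi in {0, pi/2, pi, 3pi/2},
    using the eigenvalues lambda_{1..4} quoted in the text."""
    a2 = alpha ** 2
    pre = 2.0 * math.exp(-a2)
    lams = (pre * (math.cosh(a2) + math.cos(a2)),
            pre * (math.cosh(a2) - math.cos(a2)),
            pre * (math.sinh(a2) + math.sin(a2)),
            pre * (math.sinh(a2) - math.sin(a2)))
    return 1.0 - sum(math.sqrt(l) for l in lams) ** 2 / 16.0

def forging_bound(c_min, s_v, L):
    """Upper bound on the forging probability, Eq. (pforg)."""
    return math.exp(-(c_min - s_v) ** 2 * L)
```

At $\alpha=0$ the four states coincide and $p_{min}=3/4$, the best one can do when guessing one of four equiprobable identical states; $p_{min}$ then decreases monotonically as the states become more distinguishable.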
If, as is normally the case, $p_{err}$ for the states sent to Charlie is different to that for those sent to Bob, then $p_{err}$ should be taken as the maximum of those probabilities. Since Charlie's rejection threshold is less strict than Bob's, Charlie's rejection probability is smaller than Bob's. For the protocol to be robust, we thus have to choose $s_v>s_a>p_{err}$. Taking everything together, the protocol can be made secure and robust as long as an honest Charlie is able to distinguish a ``fake'' declaration by Bob from a declaration made by Alice, in terms of the average number of mismatches Charlie sees. This occurs when Bob's optimum probability to cause a mismatch, $C_{min}$, is greater than the probability $p_{err}$ that Alice's true declaration will cause a mismatch. As long as $C_{min} > p_{err}$ holds, the thresholds $s_v$, $s_a$ and the signature length $L$ can be chosen so that the scheme is as secure as desired against forging for all displacement amplitudes. If we assume that all parties are equally likely to be dishonest, then we can define the level of security by setting the terms in the exponentials of Eqs. \eqref{prep}, \eqref{pforg} and \eqref{pfail} to be equal to each other. This is achieved when $s_a=p_{err}+(C_{min}-p_{err})/4$, and $s_v=p_{err}+3(C_{min}-p_{err})/4$. This gives an upper bound on the total probability that the scheme fails in any one of these ways: \begin{equation} P(\mbox{failure})\le2\exp\left(-\frac{g^2}{16}L\right), \label{failureprob} \end{equation} where $g=C_{min}-p_{err}$ can be determined from experimental results. The figure of merit we use to characterize the quality of a QDS scheme is the length $2L$ required to sign a one-bit message for a particular security level. In this work, to facilitate comparison with earlier realizations~\cite{Collins_14, Donaldson_15}, the security level we choose is that the probability of failure is $\le$ 0.01$\%$. 
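These choices can be turned into a short calculation. The sketch below (our own helper names) picks the equal-security thresholds and the smallest $L$ satisfying Eq. \eqref{failureprob} for a target failure probability; with $g=0.04106$, the value obtained later for the experimental cost matrix at $\alpha=0.48$, it reproduces the quoted $L\approx94000$ at the 0.01\% level:

```python
import math

def thresholds(p_err, c_min):
    """Equal-security thresholds: s_a and s_v split the gap
    g = C_min - p_err at one quarter and three quarters."""
    g = c_min - p_err
    return p_err + g / 4.0, p_err + 3.0 * g / 4.0

def signature_length(g, p_fail=1e-4):
    """Smallest integer L with 2*exp(-g**2 * L / 16) <= p_fail."""
    return math.ceil(16.0 * math.log(2.0 / p_fail) / g ** 2)
```

Note that $2L$ states are needed to sign one bit, since separate signatures are distributed for the two possible messages.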
\section{Experimental Details} {For the signature state sequences} we use four coherent states $\lvert \alpha\rangle$, $\lvert i\alpha\rangle$, $\lvert -\alpha\rangle$, $\lvert -i\alpha\rangle$, which are symmetrically distributed in quadrature phase space. In \cite{Heim_14}, we used these same states to distribute and quantify effective entanglement between Alice and Bob. At the receiver, we measure both the $\hat{x}$ and the $\hat{p}$ quadrature using a heterodyne measurement. The signal is split at a balanced beam splitter, and homodyne detectors are used at both outputs. In particular, in each homodyne measurement, we mix a strong local oscillator with the signal on a balanced beam splitter, and measure the resulting difference signal of two PIN photodiodes, built into a homemade detector. To achieve a high detection efficiency at the receiver we send the local oscillator (LO) together with the signal states, polarization multiplexed, through the 1.6\,km free space channel. This can be described using Stokes operators \cite{Korolkova_02}. At Alice's end, we use a grating-stabilized diode laser at 809~nm wavelength (Toptica DL 100). The output of this laser is spatially mode-cleaned by a single-mode fiber, and a small part of the output power is used in a balanced self-homodyning setup to monitor the shot noise-limited operation of the laser. The remaining part is used to prepare the actual signals and also the local oscillator. The polarization is cleaned by a polarizing beam splitter (PBS) and then adjusted to be circularly polarized ($\langle\hat{S}_1\rangle=\langle\hat{S}_2\rangle=0$) using a quarter-wave plate (QWP). Then we use two sequences of half-wave plates (HWP) and electro-optical modulators (EOM, Thorlabs, EO-AM-NR-C1, 600-900 nm, bandwidth 100 MHz) to produce the four signal states in the $S_1$-$S_2$-plane. 
In terms of Stokes operators this leads to a bright $+S_3$-polarized local oscillator (which is essentially not affected by the signal modulation) and the signal states $\lvert \alpha\rangle$, $\lvert i\alpha\rangle$, $\lvert -\alpha\rangle$, $\lvert -i\alpha\rangle$, which are orthogonally (i.e.\ $-S_3$-) polarized. Therefore the EOMs are driven by two individual arbitrary waveform generators (Agilent 33250A) that are synchronized with each other. They are used to produce Gaussian-shaped modulation voltages with peak voltage in the mV range, leading to signal amplitudes in the range of a few shot-noise units. After each Gaussian-shaped pulse, the output voltage is set to zero for the same time period. This is used as the vacuum reference for the signal states. The repetition rate of the produced signal states is 3.05~MHz. After 263 signal pulses, we increase the peak voltage of one pulse to produce a trigger signal for synchronization between Alice and Bob (Charlie). To avoid any influence of the bright trigger pulse onto the quantum signals, we disregard the 92 following signal pulses. This leads to an effective sending rate of 2.22~MHz. At the sender, the signal preparation is either verified, or the beam is expanded to a beam width of approximately 4~cm and sent through an optical window followed by the free-space channel to Bob (Charlie). The signal measurement at Alice, used to adjust and confirm the signal preparation, uses a balanced beam splitter to split the signals in two equal parts. They are mixed with the polarization-multiplexed LO on a PBS. The phase of the LO can in this case be adjusted using a HWP, while a QWP is used to compensate for static polarization offsets. The outputs of the PBS are detected with two PIN photodiodes and the difference signal of these are amplified in a homemade detector. By this we are able to simultaneously measure the $S_1$ ($\hat{x}$ quadrature) observable and the $S_2$ ($\hat{p}$ quadrature) observable. 
The overall detection efficiency, including optical losses and the diodes' quantum efficiencies, is $0.84\pm0.02$. The electronic signal is high-pass filtered (Minicircuits BLK-89-S+, 100 kHz) and analogue-to-digital-converted with an oscilloscope at a sampling rate of 250 Msamples/s. Thus each signal pulse is 41 samples long, followed by 41 samples of vacuum. The linearity of the detection system was confirmed by an attenuation measurement without signal modulation. At Bob (Charlie) we use a telescope with a receiving aperture of 150~mm to catch as much as possible of the incoming beam and reduce its beam width for further processing. First we split 5\% from the beam with an unbalanced beam splitter, to record the channel transmission, which was between 50\% and 85\% during our measurements. The remaining received signal is measured in exactly the same manner as at Alice's site. Here the overall detection efficiency including optical losses and the diodes' quantum efficiencies is $0.83\pm0.02$. The experiment was implemented with three different signal peak voltages leading to the average signal sizes $\alpha=0.48$, $\alpha=0.93$, and $\alpha=1.63$. The $S_1$ signals are slightly reduced compared to the $S_2$ signals, as we use the same modulation voltages but produce the $S_1$ signals first. Thus they are attenuated by the second EOM which has a transmittance of 95\%. We attribute the first (second) half of the overall measurement time to Bob (Charlie). As already mentioned in the main article, Bob's (Charlie's) measurement data is then sorted into 32 sub-channels, according to the measured transmission. Depending on the sign of the quadrature measurement values, for each signal state two of the possibly sent states were eliminated. For example, in the case of a positive $S_1$ ($S_2$) measurement value, $\lvert-\alpha\rangle$ ($\lvert-i\alpha\rangle$) is eliminated. 
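The sign-based elimination rule lends itself to a quick Monte Carlo check. The sketch below is ours and assumes a simplified noise convention (each heterodyne quadrature outcome is Gaussian with mean $\sqrt{T}\alpha$ and unit variance, chosen so that the error rate reproduces the theoretical $p_{err}=\frac{1}{2}\mathrm{erfc}(\sqrt{T/2}\,\alpha)$ used in Section 3); it does not model the calibration of the actual detectors:

```python
import math
import random

def p_err_theory(alpha, T):
    """Ideal probability that the sign rule eliminates the sent state."""
    return 0.5 * math.erfc(math.sqrt(T / 2.0) * alpha)

def simulate_elimination_error(alpha, T, n=200_000, seed=1):
    """Monte Carlo estimate: the sent state (say |alpha>) is eliminated
    exactly when its own quadrature outcome has the wrong sign."""
    rng = random.Random(seed)
    mu = math.sqrt(T) * alpha
    wrong_sign = sum(1 for _ in range(n) if rng.gauss(mu, 1.0) < 0.0)
    return wrong_sign / n
```

For $\alpha=0.48$ and $T=0.600$ (the sub-channel used for the example cost matrix below), the estimate agrees with $p_{err}\approx0.355$ to Monte Carlo precision.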
For each $\alpha$ and transmission bin, the knowledge of the sent state was combined with the eliminated states to produce a cost matrix that gives the probability that each state was eliminated for each sent state. The rows of the matrix correspond to the states sent by Alice, in the order $\lvert \alpha\rangle$, $\lvert i\alpha\rangle$, $\lvert -\alpha\rangle$, $\lvert -i\alpha\rangle$. The columns correspond to the states eliminated by Bob in the same order. The diagonal elements therefore give the probability that the sent state is eliminated. An example of the measured cost matrix is shown below, with errors. The errors are calculated by dividing the available dataset into 10 equally sized parts, calculating the respective cost matrices, and taking their standard deviation. Thus the errors give an upper bound for the statistical error and possibly drifting systematic errors. This matrix is Bob's data for $\alpha=0.48$ at a transmission level of $T=0.600$ ($T$+$R$=1) and is given by \[C=\left(\begin{array}{cccc} 0.3767&0.5028&0.6233&0.4972\\ 0.4929&0.3682&0.5071&0.6318\\ 0.5979&0.496&0.4021&0.504\\ 0.4957&0.6204&0.5043&0.3796\\ \end{array}\right)\] \begin{equation}\pm\left(\begin{array}{cccc} 0.015&0.019&0.015&0.019\\ 0.008&0.013&0.008&0.013\\ 0.013&0.019&0.013&0.019\\ 0.014&0.020&0.014&0.020\\ \end{array}\right).\label{ECM}\end{equation} The relevant cost matrix can be used to bound the minimum cost of a minimum-cost measurement performed by a forger, by following the method in the supplemental material of \cite{Collins_14}. {To find an analytical bound on the minimum cost, we manipulate the cost matrix in Eq. \eqref{ECM} to the form of an error-type cost matrix. We do this because the minimum cost of an error-type cost matrix is proportional to $p_{min}$, the minimum probability to incorrectly identify the state, with the proportionality given by the off-diagonal elements of the cost matrix. 
An error-type cost matrix has zeros on the diagonals of the cost matrix, and all the off-diagonal terms are equal. It is called error-type because a correct declaration has zero cost, and an incorrect declaration always has the same cost.} To get to this form, we use two properties of cost matrices. First, subtracting a constant-row matrix from a cost matrix reduces the cost by a constant, while leaving the minimum-cost measurement unchanged. Second, the cost of a cost matrix $C_{i,j}$ is bounded from below by the cost of any cost matrix $C_{i,j}^l$ that is smaller than or equal to it, $C_{i,j}^l\le C_{i,j}$. We define $C_{i,j}^h=C_{i,i}$, a constant-row matrix for which the elements in each row are equal to the diagonal elements of the matrix $C_{i,j}$. We then define $C_{i,j}'=C_{i,j}-C_{i,j}^h$, which has the same minimum-cost measurement as $C_{i,j}$, but with the minimum cost reduced by $C^{h}=1/4\sum_iC_{i,i}$. Finally we define the cost matrix $C_{i,j}^l$, smaller than or equal to $C_{i,j}'$, with zeros on the diagonal and $C_{i,j}^l=\min_{i\ne j}C_{i,j}'$ for all $i\ne j$. This final cost matrix $C_{i,j}^l$ is of error-type, for which the minimum cost $C_{min}^l$ is proportional to the minimum error probability $p_{min}$. Using this argument we can lower bound the minimum cost of the cost matrix \eqref{ECM} as \begin{equation} C_{min}\ge C^h+C_{min}^l. 
\end{equation} Starting from \eqref{ECM}, the subsequent cost matrices are \begin{equation} C^h=\left(\begin{array}{cccc} 0.3767&0.3767&0.3767&0.3767\\ 0.3682&0.3682&0.3682&0.3682\\ 0.4021&0.4021&0.4021&0.4021\\ 0.3796&0.3796&0.3796&0.3796\\ \end{array}\right),\label{constantrow}\end{equation} \begin{equation} C'=\left(\begin{array}{cccc} 0&0.1261&0.2466&0.1205\\ 0.1247&0&0.1389&0.2636\\ 0.1958&0.0939&0&0.1019\\ 0.1161&0.2408&0.1247&0\\ \end{array}\right),\end{equation} \begin{equation} C^l=\left(\begin{array}{cccc} 0&0.0939&0.0939&0.0939\\ 0.0939&0&0.0939&0.0939\\ 0.0939&0.0939&0&0.0939\\ 0.0939&0.0939&0.0939&0\\ \end{array}\right).\label{Cl}\end{equation} From \eqref{constantrow}, $C^h=0.3817$. This is the cost for an honest scenario; it is the probability that Charlie will eliminate a state that Alice sent if all parties are honest. From \eqref{Cl}, the minimum difference between the probability of eliminating the sent state, and the probability of eliminating another state is 0.0939. This difference therefore gives the advantage of declaring the sent state at the messaging stage. The minimum cost for matrix \eqref{Cl} is the product of that advantage and the minimum probability to incorrectly identify a state $p_{min}$. For this state $\alpha=0.48$, so from \eqref{pmin}, $p_{min}=0.4373$. The minimum cost of the matrix $C_{i,j}$ is finally \begin{equation} C_{min}=0.3817+0.0939\times0.4373=0.42276, \end{equation} and the parameter $g$ used to calculate the signature length is \begin{equation} g=C_{min}-C^h=0.04106. \end{equation} This corresponds to a required signature length of $L=94000$ for a security level of 0.01\%. In Figs. \ref{fig2} and \ref{fig3}, we plot similarly calculated signature lengths as a function of the channel transmission, for $\alpha=0.93$ and $\alpha=1.63$. The graph for $\alpha=0.48$ is already included in the main paper. 
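The chain of matrix manipulations above is easy to reproduce numerically. The sketch below (our own code, with $p_{min}=0.4373$ taken at the value quoted above for $\alpha=0.48$) recovers $C^h$, the 0.0939 advantage, $C_{min}$ and $g$ up to rounding:

```python
def cost_bound(C, p_min):
    """Lower-bound the minimum cost of cost matrix C (rows: sent state,
    columns: eliminated state) via the reduction to an error-type matrix:
    C_min >= C^h + p_min * min_{i!=j} (C_ij - C_ii)."""
    n = len(C)
    c_h = sum(C[i][i] for i in range(n)) / n          # honest cost C^h
    advantage = min(C[i][j] - C[i][i]                 # min off-diagonal of C'
                    for i in range(n) for j in range(n) if i != j)
    c_min = c_h + advantage * p_min
    return c_h, advantage, c_min, c_min - c_h         # last entry is g

# Bob's measured cost matrix for alpha = 0.48, T = 0.600 (Eq. ECM)
C = [[0.3767, 0.5028, 0.6233, 0.4972],
     [0.4929, 0.3682, 0.5071, 0.6318],
     [0.5979, 0.4960, 0.4021, 0.5040],
     [0.4957, 0.6204, 0.5043, 0.3796]]

c_h, advantage, c_min, g = cost_bound(C, p_min=0.4373)
```

The same routine applies unchanged to the cost matrix of every $\alpha$ and transmission bin.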
In all experimental graphs, errors in the signature length were calculated using the statistical errors of the elements in the cost matrices. The errors in the length were calculated by first adding the errors of the diagonal elements, and subtracting the errors of the off-diagonal elements. This gives a new cost matrix $C'$ from which a new parameter $g'$ can be calculated as above, with $g'<g$. This new $g'$ is then used to calculate a new length $L'>L$ that is the worst-case scenario for the required signature length. The length $L'$ is taken as the top of the error bar in the experimental graph. Second, the error bars in the diagonal elements are subtracted, and the errors in the off-diagonal elements are added to give a new cost matrix $C''$ that has a new parameter $g''>g$. This new $g''$ is then used to calculate a new length $L''<L$ that gives a best-case scenario for the required signature length. The length $L''$ is used for the bottom of the error bar in the experimental graph. Note, that to ensure the required security when running a full signature protocol, the longest length $L'$ should be used for the signature length, as this is the worst-case scenario. This means it is important to minimize the errors in the cost matrix by taking a large number of measurements to calculate the cost matrix. In this experiment, insufficient data was available at some transmission levels, which led to the large error bars seen. \begin{figure}[tb] \includegraphics[width=8cm]{Croal_fig1_supplemental.pdf} \caption{Signature length for $\alpha=0.93$. Blue curve: theoretical model. Blue dots/bars: results from the data attributed to Bob. Red dots/bars: results from the data attributed to Charlie. The error bars calculated are statistical.}\label{fig2} \end{figure} \begin{figure}[tb] \includegraphics[width=8cm]{Croal_fig2_supplemental.pdf} \caption{Signature length for $\alpha=1.63$. Blue curve: theoretical model. Blue dots/bars: results from the data attributed to Bob. 
Red dots/bars: results from the data attributed to Charlie. The error bars calculated are statistical.}\label{fig3} \end{figure} \section{Theoretical Models} In Fig. 3 of the main paper, the signature length required with respect to transmission $T$ for three different theoretical models is shown. Here we describe how those curves were calculated. The black (lower) curve shows the case where heterodyne detection is used by the honest recipients and there are no experimental imperfections. In this case, the ideal cost matrix is \begin{equation} C=\left(\begin{array}{cccc} p_{err}&1/2&1-p_{err}&1/2\\ 1/2&p_{err}&1/2&1-p_{err}\\ 1-p_{err}&1/2&p_{err}&1/2\\ 1/2&1-p_{err}&1/2&p_{err}\\ \end{array}\right),\label{theoryCM}\end{equation} where $p_{err}=\frac{1}{2}\mbox{erfc}\left(\sqrt{\frac{T}{2}}\alpha\right).$ From this cost matrix, the minimum cost is bounded as described for the experimental cost matrix in the previous section. In this way, the minimum cost is found to be $C_{min}=p_{err}+p_{min}(\frac{1}{2}-p_{err})$, and the parameter $g$ is thus $g=p_{min}(\frac{1}{2}-p_{err})$. Note that the $\alpha$ used to calculate $p_{min}$ is the unattenuated $\alpha$ prepared by Alice. A higher $g$ gives a shorter signature length and therefore the optimal $\alpha$ is the one that gives the highest $g$. In this case $g$ is maximal when $\alpha\approx0.5$. The black curve in Fig. 3 of the main paper is plotted by fixing $\alpha=0.5$ and calculating $L$ from the resulting $g$. The red (middle) curve in Fig. 3 of the main paper shows the case where heterodyne detection is used by the honest recipients and some experimental imperfections are taken into account. The experimental imperfections considered are imperfect detection efficiency, additional variance introduced by the EOM that displaces the coherent states, and electronic noise that increases the variance at the measurement stage. 
When these imperfections are taken into account, the cost matrix is the same as in Eq. \eqref{theoryCM}, but with a new $p_{err}$, \begin{equation}p_{err}=\frac{1}{2}\mbox{erfc}\left(\frac{\frac{1}{2}\eta T\alpha}{\sqrt{\frac{1}{2}\eta T\epsilon+elect}}\right),\end{equation} where $\eta$ is the detection efficiency, $\epsilon$ is the additional variance that comes in from the state preparation and $elect$ is the electronic noise that increases the variance of the states. In all experiments, $\eta=0.856$ and $\epsilon=1.01$, and $elect$ varies between 0.04 and 0.08. The value of $elect$ is determined from the measured variances of the states. The theoretical model also takes into account the fact that the modulation of the Stokes operators $\hat{S}_1$ and $\hat{S}_2$ had a slightly different amplitude. The lower amplitude of $\hat{S}_1$ was used to calculate the guaranteed advantage from the cost matrix, and the higher amplitude of $\hat{S}_2$ was used to calculate $p_{min}$. The encoding always has some phase imperfections; however, since these have only a small effect on the signature length, they are not included in the model for simplicity. The signature length is calculated from the cost matrix in the same way as for the black curve, and the result is plotted in Fig. 3 of the main paper. This model is the same one that was used to plot the theoretical curve for Fig. 2 of the main paper and Figs. \ref{fig2} and \ref{fig3} of the supplementary material. The blue (upper) curve shows the case where single-photon detection is used for unambiguous state elimination as in \cite{Collins_14}, and there are no experimental imperfections. This represents the optimum length achievable for these states using unambiguous state elimination. 
In this case the ideal cost matrix is \begin{equation} C=\left(\begin{array}{cccc} 0&q&p&q\\ q&0&q&p\\ p&q&0&q\\ q&p&q&0\\ \end{array}\right),\label{theoryCM2}\end{equation} where $$p=1-\exp(-|\sqrt{T}\alpha|^2), ~~q=1-\exp(-|\sqrt{T}\alpha|^2/2).$$ From this, the minimum cost is bounded as before to be $C_{min}=p_{min}q$. Since the diagonal elements are 0, $C_{min}=g$, and $g$ is used to calculate the required signature length. Again, a higher $g$ gives a shorter signature length and therefore the optimal $\alpha$ is the one that gives the highest $g$. The blue curve is plotted by using the optimal $\alpha$ at each level of transmission, which in this case is $\alpha\approx0.7$. The black and blue curves can be compared to study which measurement scheme is most efficient for this set of states. They both assume ideal experimental conditions and so remove any technical considerations. Since the black curve is always lower than the blue curve, this shows that heterodyne detection has a fundamental advantage over single photon detection for this protocol. In fact, even when taking into account realistic experimental imperfections, the scheme based on heterodyne detection performs better than the one based on single photon detection could ever do. \section{Introduction} Digital signatures \cite{Diffie_77} are ubiquitous in electronic communication, used in, for example, e-mail and digital banking. They guarantee the provenance, integrity and transferability of messages. Currently used classical digital signature schemes, however, rely on unproven computational assumptions~\cite{Knuth_69}, and may become insecure, especially if quantum computers can be built \cite{Shor_97}. 
Quantum digital signatures (QDS) \cite{Gottesman_01, Andersson_06, Clarke_12, Dunjko_14, Collins_14, Amiri_15, Yin_15}, on the other hand, give information-theoretic security \cite{Gottesman_01}, loosely speaking based on the fact that non-orthogonal quantum states cannot be perfectly distinguished from each other. The first quantum signature schemes assumed tamper-proof, ``authenticated'' quantum communication links. Intuitively, this could be accomplished using parameter estimation techniques similar to those used in quantum key distribution (QKD). How to achieve this was explicitly shown only recently~\cite{Amiri_15b, Yin_15}. In addition, recent quantum signature schemes \cite{Dunjko_14, Collins_14}, including our protocol, do not require long-term quantum memory. Importantly, this means that quantum signatures can be implemented with current technology, essentially similar to QKD setups. ``Classical'' signature schemes with information-theoretic security also exist~\cite{chaum, hanaoka, swansonstinson}, but rely on secret shared keys, which could be accomplished using QKD. Quantum signature schemes have some advantages over such classical schemes~\cite{Amiri_15b}, but exactly what signature schemes are the most efficient remains an open problem. Since messages may be forwarded between recipients, a signature protocol has at least three parties, a sender Alice and two recipients Bob and Charlie. In QKD, the communicating parties Alice and Bob are assumed to be honest. In signature protocols, however, any of the involved parties could be dishonest. Signature schemes should be secure against forging (with high probability, only messages sent by Alice should be accepted) and against repudiation (it is unlikely that Alice could successfully deny having sent a message that she did send). Repudiation is closely related to message transferability. 
Transferability means that it is unlikely that one recipient accepts a message as genuine, but that this message then is rejected if it is forwarded to another recipient. If there is no trusted third party, one way to settle disputes is by majority voting. For three parties, which is the case we will consider, non-repudiation and message transferability then become equivalent. In principle, quantum signature schemes are based on a ``quantum one-way function'' which maps classical information (a ``private key'') to non-orthogonal quantum states (a ``public key'') \cite{Gottesman_01}. In the simplest case, Alice wants to be able to later on send a one-bit message ``0'' or ``1''. For longer messages, the scheme could be suitably iterated. Generically, signature schemes have a {\it distribution stage}, where the scheme is set up, and a {\it messaging stage}, when messages are sent and received. The distribution stage could be compared to leaving a sample of a handwritten signature e.g. when first opening a bank account. The messaging stage typically takes place much later. In our quantum signature scheme, the messaging stage is entirely ``classical''. {In the distribution stage, Alice selects sequences of quantum states, one sequence for each possible future message ``0'' and ``1''. The states in the sequences are selected from some set of non-orthogonal quantum states. The classical information about what states Alice has selected forms her ``private keys'' for the possible messages ``0'' or ``1''. The quantum state sequences are the corresponding ``public keys''. Alice then sends copies of the ``public key'' sequences to Bob and Charlie, who measure the states they receive. 
Since it is impossible to perfectly discriminate non-orthogonal quantum states, Bob and Charlie, or any other party, can never obtain full information about Alice's ``private keys''.} Later on, in the messaging stage, when Alice wants to send a message to Bob or Charlie, she sends the message together with the corresponding ``private key''. The recipient of a message checks that the appended private key sufficiently well matches the measurement results he obtained in the distribution stage for the respective message. In a real implementation, there will be mismatches even for a private key sent by an honest Alice. However, if imperfections are not too high, then anyone other than Alice would cause a higher level of mismatches than Alice. This guarantees security against message forging. Similarly, to forward a message, a recipient forwards the message together with its private key, received from Alice, and the new recipient checks for mismatches with his measurement record. Related to this, Bob and Charlie also need to ensure that Alice cannot cheat, which would mean that she could make them disagree about the validity of a message. They achieve this by some kind of symmetrization procedure, done in the distribution stage~\cite{Gottesman_01, Andersson_06, Wallden_15}. In our protocol, as in~\cite{Wallden_15}, Bob and Charlie randomly forward half of their obtained measurement results to each other using a classical communication channel, secret from Alice. This channel could be realized using standard quantum key distribution. To ensure that Alice is unlikely to make Bob and Charlie disagree about the validity of a signature, the threshold for accepting a message directly from Alice should be stricter than for accepting a forwarded message. For more details see \cite{supplementary}. In this paper, we implement a quantum signature scheme using continuous variable (CV) heterodyne quantum measurements. 
Previous quantum signature schemes~\cite{Clarke_12, Collins_14, Donaldson_15} have instead used unambiguous quantum measurements. We demonstrate that our scheme is viable in a noisy environment using a free-space urban optical communication link. Finally, we show that, even when experimental imperfections are taken into account, this scheme outperforms {a recent scheme that uses unambiguous state elimination measurements~\cite{Donaldson_15}.} \begin{figure}[tb] \includegraphics[width=8cm]{Croal_fig1.pdf} \caption{Depiction of the scheme. The numbered parts relate to the corresponding stages in the main text. Green dashed lines indicate classical communication. Red lines indicate communication with quantum states. }\label{schematic} \end{figure} Our QDS scheme is represented in Fig. \ref{schematic}, with the protocol described below. The stages in the text correspond to the respective numbers in the figure. We use a discrete set of CV states, four phase-encoded coherent states $|\alpha\rangle$, $|i\alpha\rangle$, $|-\alpha\rangle$, $|-i\alpha\rangle$, and {heterodyne} CV measurements~\cite{Leonhardt_10}. These same states {were also} used in previous QDS schemes \cite{Clarke_12, Collins_14, Donaldson_15} and are similar to those used in some types of CV QKD \cite{Leverrier_09,Lorenz_04}. In \cite{Clarke_12, Collins_14, Donaldson_15}, however, recipients made ``discrete" quantum measurements with error-free (unambiguous) results, at the expense of sometimes obtaining no result. Here we instead perform {heterodyne} measurements, which always give a result, at the expense of increased errors in the results. In many cases, unambiguous results are required for a protocol to perform efficiently \cite{Bennett_92, Bergou_03}. Surprisingly, we find that for this particular QDS protocol, {heterodyne} measurements provide an advantage. \noindent {\it Distribution stage: 1-4} \noindent 1. 
For each possible future one-bit message $k=0,1$, Alice generates two identical copies of sequences of phase-encoded coherent states, $QuantSig_k=\otimes_{l=1}^L |\psi_l^k\rangle\langle\psi_l^k|$, where $|\psi_l^k\rangle$ is a randomly chosen phase-encoded coherent state, $|\psi_l^k\rangle=|\alpha e^{i\phi_l^k}\rangle$, $\phi_l^k\in \{0$, $\pi/2$, $\pi$, $3\pi/2\}$, and $L$ is a suitably chosen integer. The state $QuantSig_k$ is called the quantum signature, and the sequence of phases $PrivKey_k=(\phi_1^k,...\phi_L^k)$ is called the private key. \noindent 2. Alice sends one copy of $QuantSig_k$ to Bob and one to Charlie, for each possible message $k=0$ and $k=1$. \noindent 3. Bob (Charlie) measures the states received from Alice by performing a heterodyne detection \cite{Leonhardt_10,Leonhardt_95} of the $\hat{x}$- and $\hat{p}$-quadratures. He records the result of the measurement and the associated position in the sequence $l$. For each quadrature, the sign of the measured result determines which state is eliminated. For example if a positive result is measured, then the state $|-\alpha\rangle$ or $|-i\alpha\rangle$ is eliminated, depending on the measured quadrature. In this way, Bob (Charlie) eliminates two states, one for each quadrature, for each signature element. \noindent 4. Symmetrization: Bob (Charlie), for each element $l$ of $QuantSig_k$, randomly chooses with equal probability to either forward the measurement results and position to Charlie (Bob) or not, secret from Alice, who should not learn the positions of the forwarded results. The resulting sequences of measurement outcomes, after the forwarding procedure, form Bob's and Charlie's ``eliminated signatures''. Bob (Charlie) keeps the results obtained directly from Alice, and the results forwarded to him by Charlie (Bob) separate. Therefore, he has an eliminated signature in two parts, each of length $L/2$. 
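Step 4 can be sketched in a few lines. The helper below is ours and only illustrates the bookkeeping; we forward exactly half of the positions so that the two parts have length $L/2$ each, and in practice the forwarding must go over the classical channel kept secret from Alice:

```python
import random

def symmetrize(results, rng):
    """Randomly forward half of the elimination records to the other
    recipient, keeping the rest.  `results` maps position l -> record."""
    positions = sorted(results)
    forwarded_pos = set(rng.sample(positions, len(positions) // 2))
    forwarded = {l: results[l] for l in forwarded_pos}
    kept = {l: results[l] for l in positions if l not in forwarded_pos}
    return kept, forwarded

# Toy run with L = 10 dummy elimination records (illustrative only).
rng = random.Random(7)
bob_results = {l: ("elim", l) for l in range(10)}
bob_kept, bob_to_charlie = symmetrize(bob_results, rng)
```

Bob's eliminated signature then consists of `bob_kept` together with the part Charlie forwards to him, each half being checked separately in the messaging stage.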
Heterodyne measurements will, even in the ideal case, sometimes eliminate the sent state. If everybody follows the protocol, the probability for this depends on the overlap of the coherent states, and would be equal to $\frac{1}{2}\mbox{erfc}\left(\alpha/\sqrt{2}\right)$ in the ideal case with no loss or experimental imperfections, where erfc($x$) is the complementary error function. For $\alpha=0$, this probability equals one half, and quickly approaches zero as $\alpha$ increases. Due to the unavoidable errors, this measurement protocol is an example of ``ambiguous state elimination''. Since measurements are performed immediately on receipt of the states, no quantum memory is required, just as in \cite{Collins_14,Dunjko_14}. \noindent {\it Messaging stage: 5-7} \noindent 5. To send a signed one-bit message $m$, Alice sends $(m, PrivKey_m)$ to Bob. \noindent 6. Bob checks whether $(m,PrivKey_m)$ matches both parts of his stored eliminated signature by counting how many elements of Alice's private key were eliminated during the distribution stage. If there are fewer than $s_aL/2$ mismatches in each of the two parts of his eliminated signature, where $s_a$ is the authentication threshold, Bob accepts the message. \noindent 7. If Bob wishes to forward a message, he forwards the message and its corresponding private key. Charlie tests for mismatches in the same way as Bob, but with a higher verification threshold $s_v$, to protect against repudiation. Charlie accepts the message if there are fewer than $s_vL/2$ mismatches in each of the two parts of his eliminated signature, with $p_{err}< s_a<s_v<\frac{1}{2}$. \begin{figure}[tb] \includegraphics[width=8cm]{Croal_fig2.pdf} \caption{Signature length for $\alpha=0.48$. Blue curve: theoretical model. Blue dots/bars: results from the data attributed to Bob. Red triangles/bars: results from the data attributed to Charlie. 
The error bars are derived from the standard deviation of ten subsets of the entire dataset. The errors naturally increase with decreasing transmission, since $g$ from Eq. \eqref{Pfailure} decreases. In addition, less data was available at lower transmission values (see the inset histogram of signals received by Bob per transmission sub-channel). The data used for each point comes from a small range of transmissions, but horizontal error bars are omitted for clarity.}\label{fig1} \end{figure} In essence, the security of this scheme comes from two sources. First, it is impossible for a forger to perfectly determine the private key, since the quantum states used are non-orthogonal. If noise is sufficiently low, the distributor Alice has an advantage over any other party. Second, the forwarding of measurement results ensures that, from Alice's point of view, Bob's and Charlie's measurement records follow the same statistics. This means that if Charlie uses a higher verification threshold $s_v$ than Bob's authentication threshold $s_a$, then Alice's probability to repudiate can be made arbitrarily small by choosing the signature length $L$ large enough. An upper bound on the repudiation probability is calculated using the Hoeffding inequality \cite{Hoeffding_63} in the supplemental material \cite{supplementary}. Security against collective attacks follows from the fact that different signature states are completely uncorrelated, meaning that the optimal collective attack is an individual attack on each signature element \cite{Collins_14}. Security against coherent attacks is left for future work, noting that, due to the forwarding of measurement results amongst other things \cite{Wallden_14}, methods from the security of QKD cannot be directly carried over.
Security against coherent attacks has nevertheless been analysed for a related quantum signature protocol~\cite{Wallden_15,Amiri_15b}. We also assume that there are authenticated quantum channels between Alice, Bob and Charlie. Some kind of parameter-estimation procedure should be used to replace this assumption, analogous to~\cite{Amiri_15b, Yin_15}. To successfully forge, Bob must guess a sequence of states that meets Charlie's verification threshold. For individual and collective forging, the optimal forging attack is to perform a minimum-cost measurement on the individual signature states \cite{Wallden_14}. The minimum cost $C_{min}$ is the minimum probability that an honest party will detect an error in an individual signature element coming from the forger, and is calculated in the supplemental material \cite{supplementary}. As long as $C_{min}$ is larger than $p_{err}$, which denotes the probability of a mismatch with the sent signature when all parties are honest, the signature scheme can be made secure by appropriately choosing the other protocol parameters, such as the length $L$. Note that $p_{err}$ is determined from experimental data. A final condition for a useful QDS scheme is that it must be robust, i.e., it must succeed with high probability if all parties are honest. The exact security definitions can vary and depend on whether one party is more likely to be dishonest than the others. As detailed in the supplemental material~\cite{supplementary}, we set the protocol parameters so that the repudiation probability, the forging probability and the failure probability are all approximately equal.
In this way, the probability that the scheme will fail in any one of these ways is bounded by \begin{equation} P(\mbox{failure})\le 2\exp\left(-\frac{g^2}{16}L\right), \label{Pfailure} \end{equation} where $g=C_{min}-p_{err}$ is the advantage that the legitimate sender Alice has over a forger for a single position of the signature sequence \cite{Donaldson_15,supplementary}. Since the failure probability decays exponentially with the signature length $L$, the scheme is secure, and any required security level can be achieved with sufficiently large $L$. The figure of merit we use to characterise the quality of our QDS schemes is the length $2L$ required to sign a one-bit message with a failure probability of 0.01$\%$. \begin{figure}[tb] \includegraphics[width=8cm]{Croal_fig3.pdf} \caption{Black (solid) curve: Signature length for an ideal ambiguous measurement scheme. Red (dotted) curve: Signature length for an ambiguous measurement scheme with realistic imperfections. Blue (dot-dashed) curve: Signature length for an ideal unambiguous measurement scheme.}\label{fig3} \end{figure} {To show the robustness of the protocol}, the experiment was carried out {over a real free-space urban link \cite{Peuntinger_14,Heim_14}}. {The signal states $\lvert \pm \alpha\rangle$, $\lvert \pm i\alpha\rangle$} were then repeatedly transmitted, polarization multiplexed with the local oscillator, which is needed for later detection, through a free-space channel between the buildings of the Max Planck Institute and the University of Erlangen-N\"{u}rnberg \cite{Peuntinger_14,Heim_14,Korolkova_02}. The length of the channel is approximately 1.6~km. The channel transmission fluctuated between 50\,\% and 85\,\% due to {beam wandering and} scintillation. At the receiver the signal was split on a balanced beam splitter to measure both the $\hat{x}$ and $\hat{p}$ quadratures. Simultaneously, the transmission was recorded for each state {(for more details see~\cite{supplementary})}. 
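Inverting the bound \eqref{Pfailure} gives the signature length needed for a target failure probability, $L \ge (16/g^2)\ln(2/P)$. A minimal sketch (the value of $g$ below is purely illustrative and is not taken from the experimental data):

```python
import math

def required_length(g, p_fail):
    """Smallest integer L with 2 * exp(-g**2 * L / 16) <= p_fail,
    obtained by inverting the failure-probability bound."""
    return math.ceil(16.0 * math.log(2.0 / p_fail) / g**2)

# Illustrative advantage g = 0.04, with the 0.01% failure probability
# used as the figure of merit in the text.
L = required_length(0.04, 1e-4)
```

For this illustrative $g$, the result is of order $10^5$, the same scale as the signature lengths reported for the experiment.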
The experiment was implemented for three different signal amplitudes, $\alpha=0.48$, $\alpha=0.93$, and $\alpha=1.63$, and we attribute the first (second) half of the measurement time to Bob (Charlie). To remedy the channel fading, Bob's (Charlie's) measurement data is then sorted into 32 sub-channels according to the measured transmission \cite{Peuntinger_14,Heim_14}. Depending on the sign of the quadrature measurement values, for each signal state, two of the possible sent states were eliminated. For each set of data, the sequence of eliminated states was used to produce a cost matrix \cite{Wallden_14} that gives the probability that each state was eliminated for a particular signal state. For each cost matrix, we calculate the minimum difference between an off-diagonal element of the cost matrix (the probability of eliminating a state that was not sent) and the diagonal element of that row (the probability of eliminating the sent state). This difference was multiplied by the appropriate $p_{min}$, the minimum probability that a forger will incorrectly identify the state (see~\cite{supplementary}), to obtain the parameter $g$ from \eqref{Pfailure} for that cost matrix. For each $g$, the signature length $2L$ needed to sign a one-bit message with a failure probability of 0.01$\%$ was calculated. In Fig. \ref{fig1}, the length $L$ is plotted against transmission $T$, with $T+R=1$, for $\alpha=0.48$. {To account for experimental imperfections, a theoretical model was developed, using only experimental data, with no free parameters (for details see the supplemental material \cite{supplementary}).
The larger error bars in Fig.~\ref{fig1} are mostly due to the statistical error from the smaller amount of data available at lower transmission. The experiment has a clock rate of about $2.2$~MHz, and the required signature length of about 10$^5$ is easily manageable in the sub-channels; this demonstrates a viable QDS scheme.} The experiment was also carried out at $\alpha=0.93$ and $\alpha=1.63$ (results given in \cite{supplementary}). Increasing $\alpha$ improves the cost matrix but also decreases $p_{min}$, which makes the forger's guess easier. There is a trade-off between these two effects, with the optimal $\alpha$ predicted to be $\alpha\approx0.5$, supported by the experimental results. The main purpose of this experiment is as a test of the measurement procedure used. A calculation of the cost matrix provides all the information relevant for implementing a full scheme. In the experiment, all the quantum steps were carried out; the rest is classical communication and information processing. The experiment is also the first to demonstrate a signature scheme in a free-space setting, in contrast to previous experiments using optical fibers. It is important to compare the performance of this scheme to previous results. In \cite{Donaldson_15}, a similar scheme is presented, but with unambiguous state elimination rather than the ``continuous-variable ambiguous state elimination'' used here. There, the required signature length was about $10^{9}$, for 500~m of optical fiber and a total loss level of 35\%. Comparing this to our results, the signature length here was about $7\times 10^4$ with a similar loss level and a 1.6~km free-space channel. In \cite{Donaldson_15}, the experiment ran at a clock rate of 100~MHz, whereas the clock rate of this experiment was 2.2~MHz. Increasing the clock rate into the GHz range is straightforward with available technology. {Fig.
\ref{fig3} shows how the signature length depends on transmission for the two schemes (details of the models are given in~\cite{supplementary}). Even including experimental errors, our scheme requires a shorter signature than the ideal result for~\cite{Donaldson_15}.} That is, the QDS protocol based on ambiguous state elimination has a fundamental advantage over unambiguous state elimination. This advantage is even more pronounced when experimental inefficiencies are taken into account. Approximately one order of magnitude of the advantage comes purely from the chosen measurement, as shown in Fig.~\ref{fig3}. The rest comes from the improved technical performance of homodyne measurements compared to single-photon detectors. In conclusion, we have presented a QDS scheme that uses CV homodyne measurements. We have experimentally demonstrated that the scheme works over a fluctuating free-space channel. In addition, the signature rate per quantum state sent is orders of magnitude better than in previous work. Interestingly, despite the ambiguity in the measurements, this scheme has a fundamental advantage over corresponding schemes using unambiguous measurements. C. C. and N. K. acknowledge the support from the Scottish Universities Physics Alliance (SUPA) and the Engineering and Physical Sciences Research Council (EPSRC). The project was supported within the framework of the International Max Planck Partnership (IMPP) with Scottish Universities. C.P. and B.H. thank their colleagues at the FAU computer science building for hosting the receiver. E. A. acknowledges the support of EPSRC EP/K015338/1.
\section{Introduction} A non-singular model of a black hole was proposed by Hayward \cite{hayward}. As such, it may be a solution of an ultraviolet-complete theory of gravity such as \cite{biswas}. The geodesics in such a non-singular spacetime would illuminate some basic aspects of the spacetime, as the geodesics in the Schwarzschild spacetime do \cite{chandra}. In this paper, we summarize the results of our study of the properties of geodesics in the geometry described by the Hayward metric in a self-contained manner. Some of the results may be already known (e.g., \cite{wei,schee}), but the detailed study of both the properties of marginally stable circular orbits and the behaviour of null geodesics is new as far as we know. In Sec. \ref{sec2}, after giving the radii of the horizons of the spacetime, we study the properties of timelike geodesics and null geodesics. Sec. \ref{sec3} is devoted to a summary. In Appendix \ref{appendixa}, we summarize the properties of the geodesics in the Reissner-Nordstr\"om metric. We use units in which $G=c=1$. \section{Geodesics in the Hayward Metric} \label{sec2} The Hayward metric is given by \cite{hayward} \beqa ds^2&=&-F(r)dt^2+\frac{dr^2}{F(r)}+r^2d\Omega^2\\ &&F(r)=1-\frac{2Mr^2}{r^3+2\ell^2M} \nonumber \,, \label{metric} \eeqa where $M$ is a mass parameter and $\ell$ is a length-scale parameter. The metric function approaches $1-2M/r$ as $r\rightarrow \infty$ and approaches unity smoothly as $r\rightarrow 0$, and hence the metric is non-singular. The metric is ``minimal'' in the sense that it contains the least number of free parameters ($\ell$ only) with the desired properties (regularity at the center such that $F(r)\rightarrow 1+{\cal O}(r^2)$, and Schwarzschild asymptotic behaviour at large radii). In fact, Frolov has recently shown that if $F(r)$ is a rational function of $r$, the order of the polynomials must be larger than 2, and an $F(r)$ constructed out of polynomials of order 3 contains two free parameters in general \cite{frolov}.
We may compute the effective energy-momentum tensor for the metric Eq. (\ref{metric}) via $T_{\mu\nu}=\frac{1}{8\pi}G_{\mu\nu}$, as done in \cite{hayward}. \footnote{The method of computing $T_{\mu\nu}$ in this way is sometimes called the ``Nariai method'' in the Japanese GR community.} Then, the energy density $\rho=-{T^t}_t$, the radial pressure $p_r={T^r}_r$ and the tangential pressure $p_T={T^{\theta}}_{\theta}={T^{\phi}}_{\phi}$ are given by \cite{hayward} \beqa \rho=-p_r&=&\frac{3\ell^2M^2}{2\pi(r^3+2\ell^2M)^2},\\ p_T&=&\frac{3\ell^2M^2(r^3-\ell^2M)}{\pi(r^3+2\ell^2M)^3}\, , \eeqa and we find that the weak energy condition is satisfied: $\rho>0$, $\rho+p_r=0$, $\rho+p_T\geq 0$, and that the strong energy condition is violated for $r< (\ell^2M)^{1/3}$ since $\rho+p_r+2p_T=2p_T$. Henceforth, we normalize length scales by $M$ and introduce the dimensionless parameter \beqa a\equiv \frac{\ell}{M}\,, \eeqa and use $\r=r/M$ as a dimensionless variable. \subsection{Horizon} The location of a horizon is determined by $F(r)=0$, and horizons (an outer horizon $\r_+$ and an inner horizon $\r_-$) exist for $0\leq a\leq a_{H}$ \cite{hayward}, \beqa &&\r_+=\frac23 +\frac43 \cos\left(\frac13\cos^{-1}\left(1-\frac{27a^2}{8}\right)\right),\\ &&\r_-=\frac23 -\frac43 \cos\left(\frac13\cos^{-1}\left(1-\frac{27a^2}{8}\right)+\frac{\pi}{3}\right), \eeqa where \beqa a_H=\frac{4}{3\sqrt{3}}=0.7698...\,. \eeqa The upper bound on $a$ implies a lower bound on $M$, $M\geq a_H\ell$. The implications of this lower bound for the formation and evaporation of black holes are discussed in \cite{hayward}. In Fig. \ref{fig1}, we show the radii of the horizons (black curve). The causal structure is quite similar to that of the Reissner-Nordstr\"om black hole: $a<a_H$, $a=a_H$, $a>a_H$ correspond to the $Q<M$, $Q=M$, $Q>M$ Reissner-Nordstr\"om cases, the only exception being that the central singularity is replaced with a regular center.
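As a numerical sanity check of the closed-form horizon radii above (a minimal Python sketch with our own function names), one can verify that they indeed satisfy $F=0$, reduce to the Schwarzschild value $r_+\rightarrow 2M$ as $a\rightarrow 0$, and degenerate to $r_\pm=4M/3$ at $a=a_H$:

```python
import math

def F(rho, a):
    """Dimensionless Hayward metric function; rho = r/M, a = l/M."""
    return 1.0 - 2.0 * rho**2 / (rho**3 + 2.0 * a**2)

def horizons(a):
    """Outer and inner horizon radii (in units of M) for 0 < a <= a_H."""
    # Clamp the cosine argument to guard against rounding exactly at a = a_H.
    x = max(-1.0, min(1.0, 1.0 - 27.0 * a**2 / 8.0))
    theta = math.acos(x) / 3.0
    r_plus = 2.0/3.0 + 4.0/3.0 * math.cos(theta)
    r_minus = 2.0/3.0 - 4.0/3.0 * math.cos(theta + math.pi/3.0)
    return r_plus, r_minus

a_H = 4.0 / (3.0 * math.sqrt(3.0))  # extremal value, ~0.7698
```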
\begin{figure} \includegraphics[height=3.5in]{radius.eps} \caption{\label{fig1} The radii of the horizons (black) (outer: solid; inner: dashed), the ISCO or marginally stable circular orbit (blue), the photon sphere (red solid) and the stable circular photon orbit (red dashed). Dotted vertical lines are the critical values of $a=\ell/M$ ($a_H,a_P,a_I$ from left). } \end{figure} \subsection{Timelike Geodesics} From the spherical symmetry, we may restrict ourselves to the equatorial plane without loss of generality. In terms of the two conserved quantities, the energy $E=F(r)\dot t$ and the angular momentum $L=r^2\dot\phi$, the timelike geodesics satisfy the energy equation \beqa \frac12 \dot r^2+V(r)=\frac12 E^2,~~~~~~~~~~V(r)=\frac12\left(1+\frac{L^2}{r^2}\right)F(r)\,, \eeqa where $\dot t=dt/d\tau$ with $\tau$ being the proper time. A marginally stable circular orbit (MSCO) is determined by the condition $V'=V''=0$, which reduces to the following equation for $r$ \footnote{If $F(r)=1-Q_{n-1}(r)/P_n(r)$, where $P_n(r)$ and $Q_{n-1}(r)$ are polynomials of order $n$ and $n-1$, respectively, the degree of the equation becomes $3n-3$.} \beqa \r^6-6\r^5+22 a^2 \r^3-32a^4=0 \,, \label{isco} \eeqa and the innermost such orbit is called the innermost stable circular orbit (ISCO). Then, $L^2$ is determined by \beqa \left(\frac{L}{M}\right)^2=\frac{\r^3{dF}/{d\r}}{2F-\r {dF}/{d\r}}=\frac{\r^7-4 a^2 \r^4}{4 a^4+4 a^2 \r^3+(\r-3) \r^5}\,. \eeqa We find that Eq. (\ref{isco}) has one positive real solution for $a< a_H$, three positive real solutions for $a_H<a<a_I$, and one positive real solution for $a>a_I$, where $a_I$ is given by \beqa a_{I}=\frac{200}{51}\sqrt{\frac{5}{51}}=1.2278... \,. \eeqa However, two of the three solutions for $a_H<a<a_P$, and one solution for $a>a_P$, turn out to have imaginary $L$ and are thus unphysical, where \beqa a_P=\frac{25}{24}\sqrt{\frac{5}{6}}=0.9509...\,, \label{ap} \eeqa so there is no ISCO for $a>a_I$.
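The root structure of Eq. (\ref{isco}) can be checked numerically; a minimal sketch (our own helper names, bisection over the single positive root for $a<a_H$), which recovers the Schwarzschild ISCO at $r=6M$ for $a=0$:

```python
def msco_lhs(rho, a):
    """Left-hand side of the MSCO condition:
    rho^6 - 6 rho^5 + 22 a^2 rho^3 - 32 a^4  (rho = r/M)."""
    return rho**6 - 6.0*rho**5 + 22.0*a**2*rho**3 - 32.0*a**4

def isco_radius(a, lo=1e-6, hi=10.0, iters=200):
    """Bisection for the single positive root when a < a_H.
    The polynomial is negative near rho = 0 and positive at large rho."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if msco_lhs(lo, a) * msco_lhs(mid, a) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```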
In summary, there is one ISCO for $a< a_P$, two MSCOs for $a_P<a<a_I$, and no MSCO for $a>a_I$. In Fig. \ref{fig1} and Fig. \ref{fig3}, we show the radii of the ISCO or MSCOs (blue curve) and the corresponding angular momentum. The angular momentum of the inner MSCO for $a_P<a<a_I$ becomes arbitrarily large as $a\rightarrow a_P$. \begin{figure} \includegraphics[height=2.5in]{angmom.eps} \caption{\label{fig3} The angular momentum of marginally stable circular orbits. The solid (dashed) curve corresponds to the blue solid (dashed) curve in Fig. \ref{fig1}. } \end{figure} \begin{figure} \includegraphics[height=3in]{Hayward_region.eps} \caption{\label{figcircle} The region of stable circular orbits (gray) as a function of $a$. The light gray region in the middle corresponds to unstable orbits. The lower white region is forbidden because either the angular momentum is imaginary or the radius is inside the horizon. The blue (ISCO), red (photon sphere and circular null geodesics) and black (horizons) curves are the same as in Fig. \ref{fig1}. } \end{figure} In Fig. \ref{figcircle}, we show the allowed region for stable circular orbits (gray). The light gray region in the middle corresponds to unstable orbits with $d^2V/dr^2<0$. The lower white region is forbidden because the angular momentum is imaginary or the radius is inside the horizon. In order to study the properties of circular orbits, we calculate the energy of a particle in a circular orbit as a function of the radius for $a=0.5,0.9,1$, as shown in Fig. \ref{fig31}. The extrema of the energy correspond to MSCOs: for $a=1$, the minimum is the outer MSCO and the maximum is the inner MSCO. The region between the two MSCOs (dashed curve) is unstable because $d^2V/dr^2<0$ there. The outer MSCO can be reached by a particle in a circular orbit outside it through the emission of energy and angular momentum.
On the other hand, since the energy of the inner MSCO is larger than $1$ (the energy at large $r$), the inner MSCO is unbound and may be regarded as an ``excited state'' that cannot be reached by a particle in a circular orbit outside it. In this sense, only the outer MSCO may be physically important. For $a=0.9$, there is also a stable circular orbit with large $L$ and $E>1$. \begin{figure} \includegraphics[width=2.10in]{energy05.eps} \includegraphics[width=2.1in]{energy09.eps} \includegraphics[width=2.1in]{energy.eps} \caption{\label{fig31} The energy of a particle in a circular orbit as a function of the radius of the orbit for $a=0.5,0.9,1$ from left to right. Black dots correspond to marginally stable circular orbits. The circular orbit is unstable along the dashed curves. } \end{figure} \subsection{Null Geodesics} As in the case of the timelike geodesics, in terms of $E$ and $L$, the null geodesics satisfy \beqa \frac12 \dot r^2+V_{\rm null}(r)=\frac12 E^2,~~~~~~~~V_{\rm null}(r)=\frac{L^2}{2r^2}F(r)\,, \label{energy:null} \eeqa where $\dot r=dr/d\lambda$ with $\lambda$ being the affine parameter. Since $L/E$ is the impact parameter at large $r$, the (local) maximum of the effective potential $V_{\rm null}(r)$ determines the capture cross section for photons and hence the size of the shadow of a black hole: photons with smaller impact parameters will be captured by the black hole. The location of the (local) maximum of $V_{\rm null}(r)$, or the radius of unstable circular orbits of photons (the photon sphere \cite{photon}), $r_P$, is determined by \beqa \r^6-3\r^5+4a^2\r^3+4a^4=0\,. \eeqa Similarly to the case of the ISCO, we find that the maximum of $V_{\rm null}(r)$ exists if $a < a_P$, where $a_P$ is defined in Eq. (\ref{ap}). In Fig. \ref{fig1}, we show the radius of the photon sphere (red solid). There also appears a stable circular photon orbit (red dashed) for $a<a_P$.
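For $a<a_P$, the unstable photon orbit can be located numerically; a minimal sketch (our own helper names; the root bracket is hand-picked and only used for moderate $a\le 0.5$ here), which recovers the Schwarzschild values $r_P=3M$ and critical impact parameter $3\sqrt{3}\,M$:

```python
import math

def F(rho, a):
    """Dimensionless Hayward metric function; rho = r/M, a = l/M."""
    return 1.0 - 2.0 * rho**2 / (rho**3 + 2.0 * a**2)

def photon_sphere(a):
    """Radius r_P/M of the unstable circular photon orbit, by bisection of
    rho^6 - 3 rho^5 + 4 a^2 rho^3 + 4 a^4 = 0 on the bracket [1, 4]
    (valid for moderate a; the stable photon orbit lies below rho = 1)."""
    f = lambda rho: rho**6 - 3.0*rho**5 + 4.0*a**2*rho**3 + 4.0*a**4
    lo, hi = 1.0, 4.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def shadow_radius(a):
    """Critical impact parameter r_P / sqrt(F(r_P)) in units of M."""
    rp = photon_sphere(a)
    return rp / math.sqrt(F(rp, a))
```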
The appearance of a stable circular photon orbit is inevitable in the presence of the photon sphere (a local maximum of $V_{\rm null}$), since $V_{\rm null}$ diverges positively as $r\rightarrow 0$. \footnote{The presence of a stable photon orbit, however, would be problematic because perturbations can become long-lived and nonlinear effects may destabilize the system \cite{cardoso}. } In Fig. \ref{fig4}, we show the radius of the shadow $b_{P}$, defined by \beqa b_{P}=\sqrt{\frac{L^2}{2V_{\rm null}(r_{P})}}=\frac{r_P}{\sqrt{F(r_P)}}\,. \eeqa \begin{figure} \includegraphics[height=2.2in]{shadowrad.eps} \caption{\label{fig4} The shadow radius $b_{P}$ as a function of $a=\ell/M$. } \end{figure} In order to study the effect of the geometry on the propagation of light rays, we first compute the deflection of light in the Hayward metric. We consider the deflection of light in the equatorial plane. Then, from Eq. (\ref{energy:null}) and $L=r^2\dot\phi$, in terms of the impact parameter $b=L/E$, up to the turning point of the deflection, $\phi$ satisfies \beqa r^2\frac{d\phi}{dr}=\left(b^{-2}-\frac{F(r)}{r^2}\right)^{-1/2}\,. \eeqa Denoting the turning point of the deflection by $r_0$, at which $V_{\rm null}(r_0)=E^2/2$ or $b^{-2}=F(r_0)/r_0^2$, the deflection angle $\Delta\phi$ is then given by \beqa \Delta\phi=2\int^{1/r_0}_0\frac{du}{\sqrt{b^{-2}-u^2F(1/u)}}-\pi\,, \eeqa where we have made the change of variables $u=1/r$ as usual.
Expanding in terms of $M/r_0$ under the assumption that $\Delta\phi$ is small (weak deflection), the result is \beqa \Delta\phi&=&\frac{4 M}{{r_0}}+\frac{(15 \pi -16) }{4} \left(\frac{M}{r_0}\right)^2+ \frac{(244-45 \pi ) }{6} \left(\frac{M}{r_0}\right)^3+ \left(-130+\frac{3465\pi}{64} -\frac{15\pi}{4}a^2\right) \left(\frac{M}{r_0}\right)^4\nonumber\\ &&+\left(\frac{7783}{10}-\frac{3465\pi}{16} +\frac{75 \pi -472 }{5}a^2\right) \left(\frac{M}{r_0}\right)^5\nonumber\\ &&+ \left(\frac{310695 \pi }{256}-\frac{21397}{6} +\frac{5}{16} (1664-693 \pi ) a ^2\right) \left(\frac{M}{r_0}\right)^6 +O\left(\left(\frac{M}{r_0}\right)^7\right) . \eeqa We show the result only up to $O((M/r_0)^6)$, although arbitrarily high orders can be calculated. The coefficients for $a=0$ fully agree with \cite{keeton}, but the sign of the coefficient of $(M/r_0)^4$ involving $a$ differs from that in \cite{wei}, in which only the numerical values of the coefficients are given and higher-order terms are not given. In terms of the impact parameter $b$, using $b^{-2}=F(r_0)/r_0^2$, the deflection angle up to $O((M/b)^6)$ is given by \beqa \Delta\phi&=&\frac{4 M}{b}+\frac{15 \pi }{4}\left(\frac{M}{b}\right)^2 +\frac{128 }{3 }\left(\frac{M}{b}\right)^3 +\left(\frac{3465 \pi }{64}-\frac{15\pi}{4} a^2\right)\left(\frac{M}{b}\right)^4 \nonumber\\ &&+\left(\frac{3584 }{5}-\frac{512 a ^2}{5}\right)\left(\frac{M}{b}\right)^5 +\left(\frac{255255 \pi }{256}-\frac{3465\pi}{16} a^2\right)\left(\frac{M}{b}\right)^6 + O\left(\left(\frac{M}{b}\right)^7\right) . \eeqa Again, we find complete agreement with \cite{keeton} up to this order if $a=0$. Since the effect of nonzero $a$ appears only at $O((M/b)^4)$ and beyond, it is difficult to detect the effect through the (weak) deflection angle.
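The smallness of the $\ell$-dependence can be checked directly by evaluating the truncated $M/b$ series (a minimal sketch; the coefficients are copied from the expansion above):

```python
import math

def deflection_series(x, a):
    """Weak-field deflection angle from the M/b expansion,
    truncated at O((M/b)^6); x = M/b, a = l/M."""
    return (4.0 * x
            + (15.0 * math.pi / 4.0) * x**2
            + (128.0 / 3.0) * x**3
            + (3465.0 * math.pi / 64.0 - 15.0 * math.pi / 4.0 * a**2) * x**4
            + (3584.0 / 5.0 - 512.0 * a**2 / 5.0) * x**5
            + (255255.0 * math.pi / 256.0 - 3465.0 * math.pi / 16.0 * a**2) * x**6)

# The a-dependence first enters at fourth order, so for weak deflection
# the relative change between a = 0 and a = 1 is tiny.
x = 0.01
rel = (deflection_series(x, 0.0) - deflection_series(x, 1.0)) / deflection_series(x, 0.0)
```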
From the requirement that the fourth-order term should not exceed $0.001\%$ of the leading term \cite{nature}, $a$ is constrained only as $a\siml 10^{5}(b/R_{\odot})^{3/2}(M/M_{\odot})^{-3/2}$, or $\ell\siml 10^{5}{\rm km}\,(b/R_{\odot})^{3/2}(M/M_{\odot})^{-1/2}$. \begin{figure} \includegraphics[width=1.55in]{nulla0.eps} \includegraphics[width=1.55in]{nulla05.eps} \includegraphics[width=1.55in]{nulla09.eps} \includegraphics[width=1.55in]{nulla1.eps} \caption{\label{fig5} Trajectories of light rays (coming in from the upper right) for $a=0, 0.5, 0.9, 1$ from left to right. The black circle is the (outer) horizon; the red circles are the radii of the photon spheres. } \end{figure} \begin{figure} \includegraphics[width=1.55in]{deltaphi_a_0.eps} \includegraphics[width=1.55in]{deltaphi_a_0_5.eps} \includegraphics[width=1.55in]{deltaphi_a_0_9.eps} \includegraphics[width=1.55in]{deltaphi_a_1.eps} \caption{\label{delphi} The deflection angle as a function of the impact parameter $b$ for $a=0, 0.5, 0.9, 1$ from left to right. } \end{figure} \begin{figure} \includegraphics[height=1.55in]{zoom_a_0.eps} \includegraphics[height=1.55in]{zoom_a_0_5.eps} \includegraphics[height=1.55in]{zoom_a_0_9.eps} \includegraphics[height=1.55in]{zoom_a_1.eps} \caption{\label{shadow} Images of a non-singular black hole described by the Hayward metric for $a=0, 0.5, 0.9, 1$ from left to right. Photons in black regions never reach the observer. Photons in white regions travel around the central region more than once, while photons in gray regions reach the observer directly. } \end{figure} However, the presence of $a$ does affect the existence or nonexistence of the photon sphere and hence affects the behaviour of light rays traveling around the photon sphere. In Fig. \ref{fig5}, we show the trajectories of light rays coming in from the upper right for $a=0,0.5,0.9,1$. The black circle is the (outer) horizon; the red circles are the radii of the photon spheres. In Fig.
\ref{delphi}, we show the deflection angle as a function of the impact parameter $b$ for $a=0,0.5,0.9,1$. If the impact parameter is slightly larger than the shadow radius $b_{P}$, the deflection angle $\Delta\phi$ diverges logarithmically as \beqa \Delta\phi & \sim & \frac{1}{\sqrt{-5 + 2\sqrt{3 r_P/M}}} \ln(b - b_{P})^{-1}\,. \eeqa If $a = a_P$, {\it i.e.,} $r_P = 25M/12$, the divergent behavior changes to \beqa \Delta\phi & \sim & c_0 M^{1/6}(b - b_{P})^{-1/6}, \\ c_0 & = & 2^{11/6} 3^{2/3} 5^{5/4} \int_0^\infty dy\frac{1}{\sqrt{432y + 900y^2 +625 y^3}} \simeq 7.771. \eeqa Quite similar behaviour is also found for the Reissner-Nordstr\"om metric, where critical values corresponding to $a_H$ and $a_P$ appear depending on the value of the charge (Appendix \ref{appendixa}). The images of a ``black hole'' are shown in Fig. \ref{shadow}. Photons in black regions never reach the observer, due either to the presence of a horizon or to the deflection angle being $\pm \pi/2$ (modulo $2\pi$). Photons in white regions travel around the central region more than once, while photons in gray regions reach the observer directly. The image for $a=0.5$ is little different from that of the Schwarzschild black hole ($a=0$): a bright ring surrounding a black disk (the shadow of the black hole) appears. Interestingly, for $a=0.9$ ($a_H<a<a_P$), the photon sphere exists even though the horizon is absent; the black disk disappears, a black doughnut appears instead, and a ring image persists. More interestingly, even for $a=1$ ($>a_P$), the ring persists although the photon sphere is absent, as shown in Fig. \ref{shadow}. Therefore, the existence of a bright ring image does not necessarily imply the existence of a photon sphere. Of course, for astrophysical black holes $a\simeq 10^{-38}(\ell/\ell_{\rm p})(M_{\odot}/M)\ll 1$, where $\ell_{\rm p}$ is the Planck length, and these phenomena would be expected only for Planck-scale primordial black holes.
\section{Summary} \label{sec3} In this paper, we have studied the timelike and null geodesics in the non-singular black hole geometry proposed by Hayward, which involves a parameter $\ell$, and found several interesting features of the geometry concerning the existence or non-existence of the horizon, the photon sphere and the ISCO. We have also found that two marginally stable circular orbits appear for $a_P<\ell/M<a_I$, although the inner orbit is unbound. The existence of a horizon and/or a photon sphere significantly affects the behaviour of the null geodesics. We have found that a black doughnut appears if the horizon is absent, and that bright rings can appear even if the photon sphere is absent. One may think that bright regions in a shadow image are due to the existence of the photon sphere. However, as shown in Fig. \ref{shadow}, bright regions can also appear in a spacetime with no photon sphere if the parameter $a$ is slightly larger than $a_P$. For such parameters, while there is no photon sphere, $\Delta \phi$ can still take values larger than $2\pi$ (see Fig. \ref{delphi}). Similar behavior can be found in the Reissner-Nordstr\"om metric. These results suggest that such shadow images are universal behavior for parameters slightly larger than the critical value at which the photon sphere marginally exists. The parameter $\ell$ is currently only loosely constrained by solar-system experiments, since the deflection angle at the post-Newtonian order is the same as that of Schwarzschild. The Event Horizon Telescope, a long-baseline interferometer experiment, will be able to resolve black holes at horizon scales with an angular resolution of $20\,\mu{\rm as}$, which corresponds to a size of $3.6 M$ for Sgr A* at the galactic center \cite{eht}. If the shadow of a black hole is observed by such a telescope, we may put an $O(1)$ constraint on $\ell/M$.
\section*{ACKNOWLEDGEMENTS} This work is supported by the MEXT Grant-in-Aid for Scientific Research on Innovative Areas (15H05894) and in part by Nihon University. M.K. acknowledges financial support provided under the European Union's H2020 ERC Consolidator Grant ``Matter and strong-field gravity: New frontiers in Einstein's theory'' grant agreement no. MaGRaTh-646597, and under the H2020-MSCA-RISE-2015 Grant No. StronGrHEP-690904.
\section{Introduction}\label{intro} Magnetic reconnection is a common phenomenon in which the topology of magnetic field lines is changed and magnetic energy is converted to kinetic energy. Interpretations of space plasma measurements \citep[e.g.,][] {2008NatPh...4...19C, 2011PhRvL.107p5007O} and astronomical observations suggest that reconnection occurs in many places in the Universe. Because the length scale of magnetic fields in astrophysical plasmas is extremely large, of order the size of astrophysical sources, while low plasma resistivity means that the characteristic scale of dissipation is very small, magnetic field lines are typically ``frozen'' into the astrophysical plasma, inhibiting dissipation. The topological change in the field lines produced by reconnection can break flux freezing and facilitate dissipative energy conversion. In this review, we focus on reconnection in pair plasmas in the relativistic regime, in which the magnetic energy before the fields reconnect is significantly greater than the total enthalpy of the particles, so that the particles become relativistic when they enter the reconnection region. This condition is precisely stated as \begin{equation} \sigma\equiv \frac{B^2}{4\pi m n c^2 w_n}>1 \label{eq:sigma} \end{equation} where $B$ is the magnetic field, $n$ is the total particle number density including all species, and $w_n$ is the enthalpy per particle (assumed to be the same for both species), given by $w_n=\gamma +P/(m n c^2)$, where $\gamma$ is the mean particle Lorentz factor and $P$ is the particle pressure. 
\footnote{If $w_n\gg1$ but $\sigma<1$, the plasma is initially relativistic but reconnection is typically weak, so the relativistic reconnection discussed in this review typically fulfils condition (\ref{eq:sigma}).} Relativistic reconnection may be of importance in astrophysical magnetically dominated systems such as Pulsar Wind Nebulae (PWN), as well as relativistic jets in Active Galactic Nuclei (AGN) or Gamma Ray Bursts (GRB), which may be magnetically dominated. The observed radiation from such systems is typically highly energetic and nonthermal. Because shock acceleration of particles through the Fermi process is likely to be inefficient in magnetically dominated flows \citep{2011ApJ...726...75S,sironi_13}, it is expected that reconnection is responsible for the acceleration of high-energy particles and the production of radiation in these magnetically dominated systems. The role of relativistic reconnection in particle acceleration and radiation is a primary subject of this review. This paper is organised as follows. In the remainder of Section \ref{intro}, we review simple models of relativistic reconnection (Section \ref{models}) and discuss the physics of particle acceleration in relativistic reconnection (Section \ref{intro_rad}). In Section \ref{simulations}, we discuss simulations of relativistic reconnection and the resulting particle acceleration, anisotropies, and bulk flows. In Section \ref{applications}, we explore the application of relativistic reconnection in astrophysical systems; this section includes predictions of the radiation spectrum resulting from reconnection in those systems. Finally, in Section \ref{conclusions} we present our conclusions. \subsection{Models of Reconnection}\label{models} We now discuss models of reconnection in detail. Whenever regions of opposite magnetic polarity are present, Maxwell's equations imply that there will be a current sheet in between.
In this current layer, magnetic field lines can diffuse across the plasma to reconnect at one or more X-lines. During reconnection, magnetized plasma approaches the central plane of the current layer with an asymptotic inflow velocity $v_{\rm in}$, which is also known as the reconnection velocity. After passing the X-line, plasma is expelled from the vicinity of the X-line to either side at the outflow velocity $v_{\rm out}$, which is typically assumed to equal the characteristic speed of magnetic disturbances in plasma, the Alfv\'en velocity $v_{\rm A}$. In the relativistic regime, $v_{\rm A} =c\sqrt{\sigma/(1+\sigma)}\sim c$. The dimensionless reconnection rate is usually defined as $r_{\rm rec}\equiv v_{\rm in}/v_{\rm out}$. Outside of the current sheet, non-ideal effects are negligible and the magnetohydrodynamic (MHD) condition \begin{equation} \mathbf{E} +\frac{1}{c} \langle \mathbf{v} \rangle \times \mathbf{B}=0, \label{eq:mhd} \end{equation} holds, where $\mathbf{E}$ is the electric field, $\mathbf{B}$ is the magnetic field, and $\langle\mathbf{v}\rangle$ is the mean particle velocity. In a steady-state configuration which is quasi-two dimensional and does not vary strongly perpendicular to the plane of reconnection, the electric field throughout the reconnection region $\mathbf{E}_{\rm rec}$ is uniform and can be found by applying the condition (\ref{eq:mhd}) outside the current sheet, giving \begin{equation} {\mathbf E}_{\rm rec}=-\frac{1}{c} ({\mathbf v}_{\rm in}\times {\mathbf B_0}), \label{eq:estructure} \end{equation} where ${\mathbf B_0}$ is the reversing magnetic field outside the current sheet. Because there is no velocity flow inside the current sheet, the electric field there is sustained by some non-ideal effect which is responsible for dissipation. 
The reconnection rate $r_{\rm rec}$ may be related to the electric field by the equation \begin{equation} \label{recE} r_{\rm rec}\equiv \frac{v_{\rm in}}{v_{\rm out}}=\frac{E_{\rm rec}}{(v_{\rm A}/c)B_0} . \end{equation} \subsubsection{Sweet-Parker resistive and kinetic relativistic reconnection} Defining $\delta$ and $L$ to be the thickness and length of the current sheet, the conservation of mass from the reconnection inflow to the outflow in an incompressible plasma requires \begin{equation} \frac{\delta}{L}=r_{\rm rec}=\frac{v_{\rm in}}{v_{\rm out}} \sim \frac{v_{\rm in}}{v_{\rm A}} . \label{sprec} \end{equation} This equation is not always applicable to relativistic reconnection due to the possible presence of relativistic bulk flows which violate the incompressibility assumption, but it does apply in the simple steady-state models we discuss in this section. In the Sweet-Parker resistive model of reconnection, $L$ is taken to be the macroscopic length scale of the magnetic field, while the thickness $\delta$ is determined by the dissipation rate that can be sustained by resistivity. The dimensionless parameter that determines the importance of collisional resistivity is the Lundquist number $S\equiv v_{\rm A} L/\eta$, where $\eta$ is the magnetic diffusivity produced by resistivity. \citet{2005MNRAS.358..113L} has shown that the reconnection rate for relativistic Sweet-Parker resistive reconnection is \begin{equation} r_{\rm rec}=\frac{\delta}{L}\sim \frac{1}{\sqrt{S}}, \end{equation} which is identical to the result for non-relativistic Sweet-Parker resistive reconnection. Since the Lundquist number $S$ is very large in astrophysical plasmas (depending on the application, $S\sim 10^{20}$ may be a typical value), Sweet-Parker reconnection is extremely slow. By contrast, solar flares are believed to be powered by magnetic reconnection at rates of order $v_{\rm in}/v_{\rm A}\sim 0.1$, many orders of magnitude faster.
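To make the slowness of Sweet-Parker reconnection concrete, the scaling $r_{\rm rec}\sim 1/\sqrt{S}$ can be evaluated numerically; the Lundquist number below is the illustrative value quoted above, and the solar-flare rate is the $\sim 0.1$ inferred from observations:

```python
import math

def sweet_parker_rate(S):
    """Dimensionless Sweet-Parker reconnection rate, r_rec = delta/L ~ 1/sqrt(S)."""
    return 1.0 / math.sqrt(S)

S = 1e20          # illustrative astrophysical Lundquist number (see text)
r_sp = sweet_parker_rate(S)
r_flare = 0.1     # reconnection rate inferred for solar flares

print(f"Sweet-Parker rate at S = {S:.0e}: r_rec ~ {r_sp:.0e}")
print(f"Shortfall relative to the solar-flare rate: factor {r_flare / r_sp:.0e}")
```

At $S=10^{20}$ the predicted rate is $r_{\rm rec}\sim 10^{-10}$, nine orders of magnitude below the observed flare rate, which is the discrepancy motivating the fast-reconnection models discussed next.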
Since the collisional resistivity is often extremely small in magnetically dominated astrophysical plasmas, kinetic effects resulting from individual particle motions are likely to be more important than resistivity in many systems. The characteristic frequency of kinetic effects is the plasma oscillation frequency $\omega_{\rm p}$, given by \begin{equation} \omega_{\rm p}=\sqrt{\frac{4 \pi n q^2 }{w_n m}}, \label{plasma} \end{equation} where $q$ is the charge of the particles. Kinetic effects become important on spatial scales smaller than the corresponding inertial length $c/\omega_{\rm p}$ (also known as the ``skin depth''). \citet{2014PhRvL.113d5001C} have shown that when kinetic effects are important, the reconnection rate in the relativistic case is given by \begin{equation} r_{\rm rec}=\frac{c}{\omega_{\rm p}L}. \end{equation} Because $c/ \omega_{\rm p}$ is small compared to the macroscopic scale $L$ of the field lines, steady-state Sweet-Parker kinetic reconnection is still relatively slow. \subsubsection{Fast reconnection and the tearing and plasmoid instabilities} There have been many attempts to identify effects that would result in current sheets with smaller aspect ratios $L/\delta$, to allow for faster reconnection. The most basic of these models is the Petschek mechanism \citep{1964NASSP..50..425P}, which assumes that oblique slow shocks are present around a central X-point, and that they effectively limit the length of the reconnection region. Simulations in the non-relativistic regime have found that this configuration is unstable unless an anomalous localised resistivity is present in the center of the reconnection layer, i.e., at the X-line \citep{2000PhPl....7.4018U}. If the aspect ratio of the reconnection region is larger than $\sim100$, oblique slow shocks can form at the end of the reconnection exhausts \citep{2012PhPl...19b2110L,2012JGRA..117.1220H}, but it is uncertain whether these shocks are analogous to those in the Petschek model.
Despite the difficulty in confirming the viability of this mechanism, the name ``Petschek reconnection'' is often used to describe fast reconnection because kinetic effects can produce an effective anomalous resistivity. Below, we occasionally use the relativistic ``Petschek'' model derived by \citet{2005MNRAS.358..113L} to parameterise the properties of fast reconnection in the relativistic regime. Most other models of fast reconnection focus on the effects of instabilities in the current layer. In any current sheet, the oppositely oriented fields constitute a source of free energy. An important instability that draws on this energy is the tearing instability, which at the same time mediates and is mediated by reconnection. The tearing instability produces an alternating series of narrow X-lines where reconnection can occur, separated by large flux ropes. In turn, steady reconnection equilibria contain thin current sheets, which themselves can be unstable to the tearing instability. The non-ideal effect that violates flux freezing to produce reconnection at these X-lines may be collisional resistivity, or it may arise from kinetic effects, so the tearing instability, like reconnection, can take both resistive and kinetic forms. The growth rate of the tearing instability depends strongly on the width of the current sheet: for fast growth, the sheet width must be comparable to the widths associated with resistive or kinetic reconnection \citep{2000mrp..book.....B, 2007PPCF...49.1885P}.
A Sweet-Parker current sheet is thin enough that a resistive instability, called the plasmoid instability, may break the sheet into X-lines and magnetic islands, thus lowering its aspect ratio $L/\delta$ and leading to relatively fast reconnection rates $r_{\rm rec}\sim 0.01$ even at high Lundquist numbers, for which the unperturbed Sweet-Parker reconnection would be extremely slow \citep{2007PhPl...14j0703L, 2009PhRvL.103j5004S, 2010PhPl...17f2104H}. However, the corresponding reconnection rate in the relativistic case is significantly lower, $r_{\rm rec}\sim 0.0001$ \citep{2011MNRAS.418.1004Z}. In a long kinetic current sheet whose width is comparable to the skin depth, the kinetic tearing instability can grow quickly and break up the current sheet into X-lines and flux ropes, which can result in fast reconnection at $r_{\rm rec}\sim 0.1$ \citep[e.g.,][]{2001JGR...106.3737B}. A phase diagram of reconnection has been proposed uniting Sweet-Parker and plasmoid configurations for resistive and kinetic reconnection, with the transition from resistive to kinetic reconnection occurring when the Sweet-Parker sheet width approaches the skin depth, and the transition from Sweet-Parker to plasmoid configurations occurring as the aspect ratio of the reconnection region increases \citep{2011PhPl...18k1207J, 2014PhRvL.113d5001C}. The transition between resistive and kinetic regimes has been proposed as a possible explanation of observed variability in reconnection sites \citep{2008ApJ...688..555G} and of the onset of fast reconnection far from the central engine in a Poynting flux model of GRBs \citep{2012MNRAS.419..573M}. In this review, we focus on the study of kinetic relativistic reconnection and the particle acceleration and radiation that can be produced by such reconnection, because kinetic effects will often dominate in relativistic magnetically dominated astrophysical plasmas, which are typically nearly collisionless.
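The phase-diagram logic above can be summarised as a simple classifier: kinetic physics takes over from resistive physics once the Sweet-Parker sheet width $\delta_{\rm SP}=L/\sqrt{S}$ drops to the skin depth $c/\omega_{\rm p}$. The sketch below is ours, with arbitrary illustrative numbers (all lengths in units of the skin depth), and is not a reproduction of the cited calculations:

```python
import math

def reconnection_regime(L, S, skin_depth):
    """Classify a current sheet following the phase-diagram logic: kinetic
    effects take over once the Sweet-Parker width delta_SP = L / sqrt(S)
    drops below the skin depth c / omega_p."""
    delta_sp = L / math.sqrt(S)
    if delta_sp > skin_depth:
        return "resistive (Sweet-Parker / resistive plasmoid)"
    return "kinetic (collisionless tearing, r_rec ~ 0.1)"

# Illustrative numbers: all lengths in units of the skin depth.
print(reconnection_regime(L=1e6, S=1e8, skin_depth=1.0))    # delta_SP = 100 -> resistive
print(reconnection_regime(L=1e6, S=1e14, skin_depth=1.0))   # delta_SP = 0.1 -> kinetic
```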
\subsection{Particle acceleration and radiation in reconnection}\label{intro_rad} As discussed earlier, it is thought that magnetic reconnection is likely to be responsible for the acceleration of particles in systems that are magnetically dominated. As particles cross the current sheet at the X-line, they are forced to return into the current sheet by the reversing magnetic field, following Speiser orbits \citep{1965JGR....70.4219S}. Particles following such orbits can be accelerated in the direction perpendicular to the plane of reconnection \citep[e.g. ][]{2001ApJ...562L..63Z} by the reconnection electric field. Other acceleration mechanisms, in both X-lines and flux ropes, have been found in kinetic simulations; for a review of these mechanisms, see \citet{2010ApJ...714..915O}, as well as the discussion in Section \ref{acceleration}. The energy gain per unit time for a charged particle accelerated electromagnetically is given in general by \begin{equation} \frac{dW}{dt} =q \mathbf{E} \cdot \mathbf{v}\sim qEc. \label{eq:accel} \end{equation} Particles accelerated in relativistic magnetically dominated systems are typically thought to radiate via the synchrotron mechanism, which tends to place a fundamental constraint on the maximum energy of electromagnetically accelerated particles. The total synchrotron power emitted by a particle is given approximately by \begin{equation} \frac{dW}{dt}\sim \frac{2q^4B^2\gamma^2}{3m^2c^3}. \label{eq:synch} \end{equation} In regions where the MHD condition (\ref{eq:mhd}) holds, $E\le B$, and setting $E=B$ allows the derivation of a maximum $\gamma$ for charged particles, which corresponds to a maximum radiation frequency referred to as the synchrotron burnoff limit. However, during reconnection, particles experiencing extreme acceleration at the X-line can spend most of their time deep in the reconnection layer where $E>B$ \citep{2011ApJ...737L..40U}.
Thus, they are able to evade this restriction and produce radiation beyond the burnoff limit, as we demonstrate in Section \ref{PWN}. \section{Particle-in-cell simulations of reconnection}\label{simulations} \subsection{Numerical setup}\label{setup} \subsubsection{Numerical techniques} The most common method for simulating the kinetic dynamics of a reconnecting plasma involves the use of a particle-in-cell (PIC) code that evolves the discretized equations of electrodynamics -- Maxwell's equations and the Lorentz force law. See \citet{1991ppcs.book.....B} for a detailed discussion of this method. PIC codes can model astrophysical plasmas from first principles, as a collection of charged macro-particles that are moved by integration of the Lorentz force. Each macro-particle represents many physical particles. Currents associated with the macro-particles are deposited on a grid on which Maxwell's equations are discretized. Electromagnetic fields are then advanced via Maxwell's equations, with particle currents as the source term. Finally, the updated fields are extrapolated to the particle locations and used for the computation of the Lorentz force, so the loop is closed self-consistently. So long as current deposition is the only effect of the macro-particles on the field quantities, charge conservation is ensured. This approach is capable of treating all effects present in collisionless plasmas, including particle acceleration to high energies. To ensure that kinetic effects are resolved in the simulation, it is necessary that the grid spacing be much smaller than the skin depth $c/\omega_{\rm p}$, and that the timestep be much smaller than the corresponding timescale $\omega_{\rm p}^{-1}$.
To ensure that the momentum space distribution is adequately sampled, keep particle noise at a low level, and reduce the effects of unphysical collisions due to the relatively small number of particles in a Debye sphere, it is necessary that there be several particles per cell for each particle species. \subsubsection{The Harris current sheet} The starting equilibrium of most reconnection simulations is the Harris current sheet, which is an exact 1D equilibrium of plasma physics \citep{harris62}. It is characterised by the field profile \begin{equation} {\mathbf B}=B_0 \tanh \frac{y}{\delta}\ \hat{{\mathbf x}} +\kappa B_0 \hat{{\mathbf z}}, \label{eq:harris_field} \end{equation} where $\delta$ is the half-thickness of the current sheet, which must be of the same order as $c/\omega_{\rm p}$ for fast reconnection to occur. The quantity $\kappa$ sets the relative strength of a uniform ``guide'' field (orthogonal to the reconnection plane) which may be present in realistic reconnection configurations. For most of the discussion below, we will assume $\kappa=0$ for the sake of simplicity. The particles within the current sheet in the Harris equilibrium are initialised in a drifting Maxwell-J\"uttner thermal distribution in which positively and negatively charged particles have equal and opposite bulk velocities $\boldsymbol{\beta}_+=-\boldsymbol{\beta}_-=\boldsymbol{\beta}$ (in units of the speed of light) and drifting Lorentz factor $\gamma_d=1/\sqrt{1-\beta^2}$. The density profile of the Harris current sheet including both electrons and positrons in the simulation frame is \begin{equation} \label{eq:density_profile} n=n_0 \ {\rm sech}^2\ \frac{y}{\delta}. \end{equation} Pressure equilibrium requires that $B_0^2=8 \pi n_0 T_0$, where $T_0$ is the temperature of the particles in the current sheet in the simulation frame, expressed in units of $m c^2$ with the Boltzmann constant absorbed.
Amp\`ere's Law requires that \begin{equation} \boldsymbol{\beta}_+=-\boldsymbol{\beta}_-=B_0 /(4\pi n_0 q \delta) (-\hat{{\mathbf z}}). \end{equation} This simple configuration is unstable to the tearing instability and is useful for studying reconnection. An additional uniform background population of particles with rest-frame density $n_b$ and no drift velocity is typically added to the current sheet population. Thus, the total density in the simulation frame of all particles in the middle of the current sheet is $n_0 +n_{\rm b}$, whereas the total density in the background plasma away from the current sheet is $n_{\rm b}$. Using the expression for pressure equilibrium above allows us to express the value of $\sigma$ far from the current sheet as \begin{equation} \sigma=\frac{2 n_0 T_0 }{n_{\rm b} w_{n,\rm b}}, \end{equation} where $w_{n,\rm b}$ is the mean enthalpy of the particles in the background plasma. Note that the value of $n_0 T_0$ is a Lorentz invariant. This equilibrium can be modified while retaining the same value of $\sigma$ by increasing the temperature $T_0$ and decreasing the value of $n_0/n_b$ to produce an equilibrium with less density contrast but a difference in temperature between the populations; for a detailed discussion, see \citet{2013A&A...558A.133M}. This modification is used in the simulations in this paper. While the Harris sheet is the most common initial condition for studying reconnection, it should be mentioned that there are other possibilities. Reconnection can be initialised with a force-free current sheet \citep{guo14}, or with dynamical scenarios such as X-point collapse \citep{2014PhPl...21a2901G}. Finally, fully three dimensional configurations \citep[][ and references therein]{2011AdSpR..47.1508P} are likely to be the most realistic starting points for simulation, but only a few PIC simulations have used such configurations \citep{2012ApJ...759L...9B,PhysRevLett.111.045002}.
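The Harris equilibrium above can be verified numerically: with $B=B_0\tanh(y/\delta)$ and $n=n_0\,{\rm sech}^2(y/\delta)$, the total pressure $B^2/8\pi+nT_0$ (in units where $m=c=1$) is constant across the layer, and Amp\`ere's law fixes the drift speed. The following minimal check uses arbitrary code-unit values for $n_0$, $T_0$ and $\delta$:

```python
import numpy as np

# Harris-sheet check in code units (m = c = 1; temperatures in units of m c^2).
# n0, T0 and delta below are arbitrary illustrative values.
n0, T0, delta = 1.0, 1.0, 1.0
B0 = np.sqrt(8.0 * np.pi * n0 * T0)       # pressure balance: B0^2 = 8 pi n0 T0

y = np.linspace(-5.0 * delta, 5.0 * delta, 1001)
B = B0 * np.tanh(y / delta)               # reversing field (kappa = 0)
n = n0 / np.cosh(y / delta) ** 2          # sheet density (both species)

# Total pressure (magnetic + particle) should be constant across the layer.
p_total = B ** 2 / (8.0 * np.pi) + n * T0
assert np.allclose(p_total, n0 * T0)

# Ampere's law fixes the drift speed of each species in the sheet.
q = 1.0
beta_drift = B0 / (4.0 * np.pi * n0 * q * delta)
print(f"drift speed beta = {beta_drift:.3f}")
```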
\begin{figure*}[!tbp] \begin{center} \includegraphics[width=1.05\textwidth]{Fig1-eps-converted-to.pdf} \caption{Structure of the particle density in the reconnection layer at $\omega_{\rm p}t=3000$, from a 2D simulation of $\sigma=10$ reconnection presented in \citet{2014ApJ...783L..21S}.} \label{fig:fluid2da} \end{center} \end{figure*} \begin{figure*}[!tbp] \begin{center} \includegraphics[width=1.05\textwidth]{Fig2-eps-converted-to.pdf} \caption{Structure of the reconnection layer at $\omega_{\rm p}t=3000$, from a 2D simulation of $\sigma=10$ reconnection discussed in \citet{2014ApJ...783L..21S}. This figure is a zoom-in at $0\leq x\leq 2500\,c/\omega_{\rm p}$ of \fig{fluid2da}. We present (a) particle density, in units of the number density far from the current sheet (with overplotted magnetic field lines), (b) magnetic energy fraction $\epsilon_B=B^2/8\pi m n_{\rm b} c^2$ and (c) mean kinetic energy per particle.} \label{fig:fluid2d} \end{center} \end{figure*} \begin{figure}[!tbp] \begin{center} \includegraphics[width=0.7\textwidth]{Fig3-eps-converted-to.pdf} \caption{Structure of the particle density at two different times: (a) $\omega_{\rm p}t=250$ and (b) $\omega_{\rm p}t=1600$. The plot refers to a 3D simulation of $\sigma=10$ reconnection without a guide field, presented in \citet{2014ApJ...783L..21S}. The 2D slices in the top and bottom panels (at $x=0$ and $z=130\,c/\omega_{\rm p}$, respectively) show the particle number density in that plane.} \label{fig:fluid3d} \end{center} \end{figure} \subsection{Structure of the reconnection layer}\label{structure} We now present the structure and the dynamics of the reconnection layer, discussing the results of 2D and 3D PIC simulations. 
We concentrate on the case of an electron-positron plasma, which has been most widely explored in the literature, both in 2D \citep[][]{2001ApJ...562L..63Z,zenitani_hoshino_05b,zenitani_07,zenitani_hesse_08b,jaroschek_04,jaroschek_08b,bessho_05,bessho_07,bessho_12,daughton_07,lyubarsky_liverts_08, 2012ApJ...754L..33C, 2013ApJ...770..147C, 2014arXiv1409.8262W} and in 3D \citep[][]{zenitani_08,yin_08,liu_11,2011ApJ...741...39S, sironi_spitkovsky_12,kagan_13,2014ApJ...782..104C,2014ApJ...783L..21S,guo14}. The physics of relativistic electron-proton reconnection, though still at an early stage of investigation, shows remarkable similarities with electron-positron reconnection \citep{melzani14}. As described above, the reconnection layer is set up in Harris equilibrium, with the magnetic field reversing at $y=0$. For the sake of simplicity, we discuss here the case of anti-parallel fields, without a guide field component. The strength of the alternating fields is parameterized by the magnetization $\sigma$ defined in Eq. \ref{eq:sigma}. Here, we assume that the background plasma far from the current sheet is cold, so $w_n\sim1$ and $\sigma=B_0^2/4\pi m n_{\rm b} c^2$. As a result of the tearing instability, the reconnection layer fragments into a series of magnetic islands (or flux tubes), separated by X-points. Over time, the islands coalesce and grow to larger scales (\citealt{daughton_07} have described a similar evolution in non-relativistic reconnection). The structure of the reconnection region at late times is presented in \fig{fluid2da}, from a large-scale 2D simulation in a $\sigma=10$ pair plasma presented in \citet{2014ApJ...783L..21S}. By zooming into the region $0\lesssim x\lesssim 2500\,c/\omega_{\rm p}$ (here, the inertial length $\,c/\omega_{\rm p}$ is measured taking the density far from the current sheet), we see that each X-line is further fragmented into a number of smaller islands.
This is a result of the secondary tearing mode (or ``plasmoid instability'') discussed by \citet{2010PhRvL.105w5002U}. The secondary islands lie at $700\,c/\omega_{\rm p}\lesssim x \lesssim 1400\,c/\omega_{\rm p}$ in \fig{fluid2d}. They are overdense (\fig{fluid2d}a), filled with hot particles (\fig{fluid2d}c) and confined by strong fields (\fig{fluid2d}b). In between each pair of secondary islands, a secondary X-point mediates the transfer of energy from the fields to the particles. As shown in the next section, efficient particle acceleration occurs at the X-points. The reconnection rate is $r_{\rm rec}\equiv v_{\rm in}/v_{\rm out}\sim v_{\rm in }/c\simeq 0.08$ for $\sigma=10$, nearly constant at late times. The reconnection rate depends on the plasma magnetization. In the case of vanishing guide field, \citet{2014ApJ...783L..21S} quote that the reconnection rate in 2D increases from $r_{\rm rec}\simeq 0.03$ for $\sigma=1$ to $r_{\rm rec}\simeq 0.12$ for $\sigma=30$, and it is nearly independent of $\sigma$ for larger magnetizations, in agreement with the analytical model by \citet{2005MNRAS.358..113L}. After entering the current sheet, the flow is advected towards the large magnetic islands by the tension force of the reconnected magnetic field (in \fig{fluid2d}a-c, the major islands are at $200\,c/\omega_{\rm p}\lesssim x\lesssim500\,c/\omega_{\rm p}$ and $1600\,c/\omega_{\rm p}\lesssim x\lesssim1900\,c/\omega_{\rm p}$). Pushed by the ram pressure of the reconnection outflows, the major islands move along the layer, merging with neighboring islands. A merger event is indeed seen at $x\sim 1800\,c/\omega_{\rm p}$ in \fig{fluid2d}. The current sheet formed between the two merging islands is unstable to the tearing mode, and it breaks into a series of secondary islands along the $y$ direction (orthogonal to the primary current sheet).
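As a quick numerical illustration, the 2D reconnection rates quoted above can be converted into inflow speeds using $v_{\rm in}=r_{\rm rec}\,v_{\rm A}$ with the relativistic Alfv\'en speed $v_{\rm A}=c\sqrt{\sigma/(1+\sigma)}$ from Section \ref{models}:

```python
import math

def alfven_speed(sigma, c=1.0):
    """Relativistic Alfven speed v_A = c * sqrt(sigma / (1 + sigma))."""
    return c * math.sqrt(sigma / (1.0 + sigma))

# 2D reconnection rates quoted in the text for vanishing guide field.
rates = {1: 0.03, 10: 0.08, 30: 0.12}

for sigma, r_rec in rates.items():
    v_in = r_rec * alfven_speed(sigma)
    print(f"sigma = {sigma:>2}: v_A/c = {alfven_speed(sigma):.3f}, v_in/c = {v_in:.3f}")
```

For $\sigma\gtrsim10$ the Alfv\'en speed is already close to $c$, so the inflow speed is essentially set by the dimensionless rate itself.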
The evolution of 3D reconnection at late times parallels closely the 2D physics described above, even in the absence of a guide field.\footnote{The presence of a strong guide field orthogonal to the reconnecting plane guarantees that the 3D physics will resemble the 2D results, see \citet{guo14}.} As shown in \fig{fluid3d}a, the early phases of evolution are governed by the so-called drift-kink (DK) mode \citep{zenitani_08,2014ApJ...782..104C,2014ApJ...783L..21S}. The DK instability corrugates the current sheet in the $z$ direction, broadening the layer and inhibiting the growth of the tearing mode at early times. However, at later times the evolution is controlled by the tearing instability \citep{2014ApJ...783L..21S}, that produces in the $xy$ plane a series of magnetic islands (or rather, flux tubes), in analogy to the 2D physics. The reconnection layer at late times is organized into a few major islands (see the overdense plasmoids in \fig{fluid3d}b), separated by underdense regions (transparent in \fig{fluid3d}b) where field dissipation by reconnection is most efficient. In short, at late times the 3D physics parallels closely the 2D evolution presented above (yet, with a smaller reconnection rate, $r_{\rm rec}\simeq0.02$ in 3D versus $r_{\rm rec}\simeq0.08$ in 2D). As discussed in the next section, this has important implications for the acceleration performance of relativistic reconnection in 3D. \begin{figure}[tbp] \begin{center} \includegraphics[width=0.8\textwidth]{Fig4-eps-converted-to.pdf} \caption{Temporal evolution of the particle energy spectrum, from a 2D simulation of $\sigma=10$ reconnection by \citet{2014ApJ...783L..21S}. 
The spectrum at late times resembles a power-law with slope $p=2$ (dotted red line), and it clearly deviates from a Maxwellian with mean energy $(\sigma+1)\,mc^2$ (dashed red line, which assumes complete field dissipation).} \label{fig:spec2d} \end{center} \end{figure} \begin{figure}[tbp] \begin{center} \includegraphics[width=0.8\textwidth]{Fig5-eps-converted-to.pdf} \caption{Temporal evolution of the particle energy spectrum, from a 3D simulation of $\sigma=10$ reconnection by \citet{2014ApJ...783L..21S}. The spectra from two 2D simulations with in-plane (out-of-plane, respectively) anti-parallel fields are shown with red dotted (dashed, respectively) lines. } \label{fig:spec3d} \end{center} \end{figure} \begin{figure}[tbp] \begin{center} \includegraphics[width=0.8\textwidth]{Fig6-eps-converted-to.pdf} \caption{Dependence of the spectrum on the magnetization, as indicated in the legend. The dotted lines refer to power-law slopes of $-4$, $-3$, $-2$ and $-1.5$ (from black to green).} \label{fig:spec2db} \end{center} \end{figure} \subsection{Non-thermal particle acceleration}\label{acceleration} Relativistic reconnection is an efficient source of non-thermal particles. In \fig{spec2d} we present the time evolution of the particle energy spectrum, from a 2D simulation of reconnection with $\sigma=10$ performed by \citet{2014ApJ...783L..21S}. A generic by-product of relativistic reconnection is the generation of a broad non-thermal spectrum extending to ultra-relativistic energies. For $\sigma=10$, the spectrum at $\gamma\gtrsim 1.5$ can be fitted with a power-law of slope $p\equiv - d\log N/d\log \gamma\sim2$ (dotted red line). The spectrum clearly departs from a Maxwellian distribution with mean energy $(\sigma+1)\,mc^2$ (red dashed line, which assumes complete field dissipation). As shown in \fig{spec2db}, the power-law slope depends on the flow magnetization, being harder for higher $\sigma$ ($p\sim1.5$ for $\sigma=50$, compare solid and dotted green lines). 
The slope is steeper for lower magnetizations ($p\sim4$ for $\sigma=1$, solid and dotted black lines), approaching the result of non-relativistic reconnection, yielding poor acceleration efficiencies \citep[][]{drake_10}. In the limit $\sigma\gg1$, \citet{guo14} and \citet{2014arXiv1409.8262W} have confirmed the trend described above, arguing that the non-thermal slope asymptotes to $p\simeq 1$ for highly magnetized flows. For magnetizations $\sigma\gtrsim10$ that yield $p\lesssim2$, the increase in maximum energy over time is expected to terminate, since the mean energy per particle cannot exceed $(\sigma+1)\,mc^2$. For a power-law of index $1<p<2$ starting from $\gamma_{\rm min}=1$, the maximum Lorentz factor should saturate at $\gamma_{\rm max}\sim[(\sigma+1)(2-p)/(p-1)]^{1/(2-p)}$. For $\sigma\lesssim 10$ (where $p\gtrsim 2$), the increase in maximum energy does not stop, but it slows down at late times. In short, 2D simulations of relativistic reconnection produce hard populations of non-thermal particles. However, the structure of X-points in 3D is different from 2D, as emphasized in the previous section. In particular, the DK mode is expected to result in heating, not in particle acceleration \citep{zenitani_07}. \fig{spec3d} presents the temporal evolution of the particle spectrum in a 3D simulation with $\sigma=10$, by \citet{2014ApJ...783L..21S}. The spectrum at early times is quasi-thermal (black to blue lines in \fig{spec3d}), and it resembles the distribution resulting from the DK mode (the red dashed line shows the spectrum from a 2D simulation with out-of-plane anti-parallel fields, to target the contribution of the DK mode). As discussed above, the DK mode is the fastest to grow, but the sheet evolution at late times is controlled by the tearing instability, in analogy to the 2D physics with in-plane fields. In fact, the spectrum at late times (cyan to red lines in \fig{spec3d}) presents a pronounced high-energy power-law. 
The power-law slope is $p\sim2.3$, close to the $p\sim2$ index of 2D simulations with in-plane fields. With respect to the 2D spectrum (dotted red line in \fig{spec3d}), the normalization and the upper energy cutoff of the 3D spectrum are smaller, due to the lower reconnection rate ($r_{\rm rec}\simeq 0.02$ in 3D versus $r_{\rm rec}\simeq 0.08$ in 2D), so that fewer particles enter the current sheet per unit time, where they get accelerated by a weaker electric field $E_{\rm rec}\sim r_{\rm rec}\, B_0$. The mechanism of particle acceleration at X-points has been the subject of various investigations, with analytical \citep{larrabee_03,bessho_12} or numerical methods.\footnote{{ Particle acceleration in magnetic islands (as opposed to X-lines or X-points) is also widely discussed in the literature, both in non-relativistic reconnection \citep[e.g.,][]{drake_06,2010ApJ...714..915O} --- where the particles are adiabatic, and they bounce several times between the two edges of an island --- and relativistic reconnection \citep{liu_11,guo14}, where the energy gain might come just from a single bounce. However, the inflowing particles interact at first with the X-points, where they get energy from the dissipating fields. It is this first acceleration episode (that we describe below) which will establish the spectral slope and strongly affect the future history of the inflowing particles. In fact, particles accelerated to high energies at the X-point are likely to experience further acceleration via reflection off of moving magnetic disturbances (e.g., in contracting islands or in between two merging islands), which might eventually dominate the overall energy gain.}} Using test particle simulations in prescribed electromagnetic fields, \citet{2003PhPl...10..835N, 2011ApJ...737L..40U, 2012ApJ...746..148C} found that reconnection naturally produces beams of high-energy particles aligned with the reconnection electric field present within the current layer. 
These particles follow relativistic Speiser orbits as they move back and forth across the reconnection layer. For a steady Sweet-Parker configuration, \citet{2011ApJ...737L..40U} showed that the meandering width of the Speiser orbit decreases as the energy of the particle increases, i.e., the most energetic particles, with the largest Lorentz factors, are also the most focused along the electric field (see also \citealt{2004PhRvL..92r1101K, 2007A&A...472..219C}). The properties of these special orbits are also well captured by PIC simulations \citep{2012ApJ...746..148C, 2013ApJ...770..147C}. Fig.~\ref{fig_orbits} shows the trajectory of a sample of 150 particles chosen randomly in a 2D PIC simulation with $\sigma=10$. The particle orbits are projected in the plane perpendicular to the reconnecting field, i.e., here in the $(yz)$-plane (reconnection happens in the $xy$-plane). Away from the two layers (located at $y/\rho_{\rm c}\sim125$ and $375$, with $\rho_{\rm c}=mc^2/eB_0$), the particles are well magnetized: they gyrate along the field lines and remain at $z=0$. In contrast, the particles that enter the layer are efficiently boosted along the direction of the electric field (the $z$-axis) and follow relativistic Speiser orbits. The further the particle gets along the $z$-direction, the more energetic it will be. \begin{figure}[] \centering \includegraphics[width=8.0cm]{Fig7-eps-converted-to.pdf} \caption{Trajectories of a sample of 150 particles projected in the $(yz)$-plane from a 2D PIC simulation of relativistic reconnection with $\sigma=10$, and without guide field. Each orbit is drawn in a different color to increase the readability of this figure. The simulation starts with two anti-parallel Harris sheets of temperature $kT=mc^2$ located at $y/\rho_{\rm c}\sim125$ and $375$, where $\rho_{\rm c}=mc^2/eB_0$.
Particles are accelerated along the $z$-axis within the current layers where the electric field is maximum, and they follow special orbits known as relativistic Speiser orbits. The further the particle gets along the $z$-axis, the more energetic the particle will become.} \label{fig_orbits} \end{figure} \begin{figure}[tbp] \begin{center} \includegraphics[width=0.8\textwidth]{Fig8-eps-converted-to.pdf} \caption{(a) Energy evolution of a sample of selected particles interacting with a major X-point, as a function of the location $x$ along the current sheet. Colors are scaled with $\gamma_{\rm X\text{-}line}$, the Lorentz factor attained at the outflow boundary of the X-line (at $x=0$ or $280\,c/\omega_{\rm p}$, depending on the particle). (b) $\epsilon_B-\epsilon_E$ at the time when the particles interact with the X-point (here, $\epsilon_E=E^2/8\pi m n_{\rm b} c^2$ is the electric energy fraction).} \label{fig:accel} \end{center} \end{figure} The trajectories of a sample of particles extracted from a 2D simulation with $\sigma=10$ (in \fig{accel}, from \citet{2014ApJ...783L..21S}) also illustrate the mechanism for the formation of the power-law tail in the particle spectrum. At the X-point located at $x\sim 135\,c/\omega_{\rm p}$ the magnetic energy is smaller than the electric energy (blue region in \fig{accel}b), so the particles become unmagnetized and they get accelerated along $z$ by the reconnection electric field. The final energy of the particles -- the color in \fig{accel}a indicates the Lorentz factor measured at the outflow boundary of the X-line -- directly correlates with the location at the moment of interaction with the current sheet \citep[as argued in the analytical models by][]{larrabee_03,bessho_12}. 
Particles interacting closer to the center of the X-point (darkest blue in \fig{accel}b) are less prone to being advected away along $x$ by the reconnected magnetic field, so they can stay longer in the acceleration region and reach higher Lorentz factors (orange and red lines in \fig{accel}a). In other words, energetic particles turn slowly into the reconnected field ($B_y$ in \fig{accel}), because the Larmor radius is proportional to $\gamma$, so that they spend even more time at the X-point than particles with lower energies. This is an argument originally proposed by \citet{2001ApJ...562L..63Z}, which may also explain the power-law nature of the spectrum (along with the impact parameter of the particles in the current sheet). Indeed, a broad power-law distribution is then established, as a result of the different energy histories of particles interacting at different distances from the X-point. We point out that the most energetic particles (red and orange curves in \fig{accel}) are slowly turning around the reconnected magnetic field $B_y$, and still have a positive $q\, \bmath{E}\cdot \bmath{v}$, so that they gain energy even outside of the blue region (where $|\bmath{E}|>|\bmath{B}|$). On the other hand, the green and blue particles also experience the electric fields surrounding the secondary islands, which explains the oscillations in their energy curves. \subsection{Particle anisotropy and bulk motions}\label{anisotropy} It is now well established that relativistic reconnection is an efficient source of non-thermal particle acceleration (see previous section). In typical astrophysical environments, these energetic particles would emit non-thermal radiation via, e.g., synchrotron or inverse Compton scattering. Due to relativistic aberration, the radiation emitted by highly relativistic particles (with $\gamma\gg 1$) is beamed within a cone of semi-aperture angle $\sim 1/\gamma\ll 1$ along the direction of motion of the emitting particle.
As a result, any anisotropy in the particle distribution results in an anisotropic distribution of radiation, which is of critical importance in astronomy because the observer probes only one direction at a time. The overall energetic budget or even the shape of the particle spectrum inferred from observations could differ significantly from the isotropically averaged quantities. \begin{figure}[] \centering \includegraphics[width=10cm]{Fig9a-eps-converted-to.pdf} \includegraphics[width=10cm]{Fig9b-eps-converted-to.pdf} \includegraphics[width=10cm]{Fig9c-eps-converted-to.pdf} \caption{Angular distribution of the particle 4-velocity vectors $\mathbf{u}$, $d{\rm n}/d\Omega d\gamma$ (contour plot), in three energy bins: $\mathbf{\gamma=1.5\pm 0.1}$ (top), $\mathbf{\gamma=6\pm 0.3}$ (middle), and $\mathbf{\gamma=25\pm 1.2}$ (bottom). In this projection (Aitoff), each direction is given by the latitude angle ($\sin\phi=u_{\rm y}/|u|$ with $-90^{\circ}<\phi<+90^{\circ}$, vertical axis) and the longitude angle ($\cos\lambda=u_{\rm z}/\sqrt{u_{\rm x}^2+u_{\rm z}^2}$ with $-180^{\circ}<\lambda<+180^{\circ}$, horizontal axis). The precise geometry of the simulation is shown in Fig.~\ref{fig_bulk1}. These results were obtained from a 2D PIC simulation with $\sigma=10$ with no guide field (see also \citealt{2012ApJ...754L..33C, 2013ApJ...770..147C}, and \citealt{2014ApJ...782..104C} in 3D).} \label{fig_anis} \end{figure} Fig.~\ref{fig_anis} presents the angular distribution of the particle 4-velocity vectors as a function of the particle energy, from a 2D PIC simulation with $\sigma\approx 10$ and with no guide field, as first reported by \citet{2012ApJ...754L..33C}. The low-energy particles ($\gamma\sim 1$, top panel) present little anisotropy because these particles have not been accelerated at X-points.
At higher energies ($\gamma\gtrsim\sigma$, middle and bottom panel), the particles exhibit clear signs of anisotropy with two beams pointing roughly towards the $\pm x$-directions, i.e., along the reconnection exhausts. Hence, the beams are not necessarily pointing along the reconnection electric field, because the tension of the reconnected field lines pushes the particles away from the X-points in the form of a reconnection outflow towards the magnetic islands (see \fig{fluid2d}a, and top panel in Fig.~\ref{fig_bulk1}). Nonetheless, the direction of the beam of energetic particles is not static: it wiggles rapidly within the $(xz)$-plane (along the horizontal axis in Fig.~\ref{fig_anis}), which results in rapid flares of energetic radiation when the beam crosses the line of sight of a distant observer \citep{2012ApJ...754L..33C}. This result has interesting applications to astrophysical flares, and in particular to the $>100$~MeV gamma-ray flares recently discovered in the Crab Nebula \citep{2013ApJ...770..147C, 2014ApJ...782..104C} (see Sect.~\ref{PWN}). The Crab flare case is quite extreme in the sense that the particles emitting $>100$~MeV synchrotron radiation should be accelerated and radiating over a sub-Larmor timescale, so the highest energy radiation should keep the imprint of the particle anisotropy (regardless of the acceleration process), while the low-energy radiation should be more isotropic. \begin{figure}[tbp] \begin{center} \includegraphics[width=0.8\textwidth]{Fig10-eps-converted-to.pdf} \caption{Positron momentum spectrum along $x$ (green), $y$ (blue), $+z$ (red solid) and $-z$ (red dashed), for 2D and 3D, as indicated in the legend.} \label{fig:spec3db} \end{center} \end{figure} The pronounced anisotropy discussed above lasts for some limited amount of time.
Indeed, when the high-energy particles reach the magnetic islands, they isotropize quickly in the strong fields shown in \fig{fluid2d}c and they no longer contribute to the beamed emission. Since most of the particles at late times are contained in the major islands, it is not surprising that the long-term momentum spectra show little sign of anisotropy (see \fig{spec3db}). Even the residual difference between the momentum spectra along $+z$ and $-z$ (red solid and dashed lines, respectively) diminishes at later times (the 2D momentum spectra at $\omega_{\rm p}t=1800$ were similar to the 3D results in \fig{spec3db}, showing that the anisotropy decays over time). \begin{figure}[] \centering \includegraphics[width=13.5cm]{Fig11-eps-converted-to.pdf} \caption{Positron fluid velocity $\boldsymbol{\beta}=\mathbf{v}/c$ in the $x$- (top), $y$- (middle), and $z$-directions (bottom), for the same simulation as in Fig.~\ref{fig_anis} ($\sigma=10$, no guide field). The black solid lines show the magnetic field lines. The electron fluid velocity maps are identical, except that $\beta_{\rm z,electrons}=-\beta_{\rm z,positrons}$.} \label{fig_bulk1} \end{figure} \begin{figure}[] \centering \includegraphics[width=13.5cm]{Fig12-eps-converted-to.pdf} \caption{Total Lorentz factor of the positron fluid, $\Gamma=1/\sqrt{1-\beta^2}$, computed from Fig.~\ref{fig_bulk1}. The white solid lines are magnetic field lines.} \label{fig_bulk2} \end{figure} It is important to stress that this beaming mechanism is strongly energy-dependent. It should be distinguished from the Doppler boosting due to a relativistic bulk motion in the flow, which beams all the particles and radiation by the same factor. In fact, relativistic reconnection also produces relativistic bulk flows, as anticipated by \citet{2005MNRAS.358..113L}; these flows constitute the cornerstone of the fast-variability models for blazar jets by \citet{2009MNRAS.395L..29G} (see Sect.~\ref{AGN}).
Fig.~\ref{fig_bulk1} shows the three components of the fluid velocity vector normalized by the speed of light, $\boldsymbol{\beta}=\mathbf{v}/c$, for the same simulation as in Fig.~\ref{fig_anis} (where $\sigma=10$ and with no guide field) and at the same stage. The $x$-component presents the characteristic signature of a dipolar relativistic flow at every X-point where $\beta_{\rm x}\approx \pm 0.5$, which corresponds to the reconnection outflow accelerated by the tension of the newly reconnected field lines (i.e., $v_{\rm out}/c$ defined in Sect.~\ref{models}). The $y$-component shows the inflow of particles from the upstream towards the X-point that feeds the reconnection process with fresh plasma (i.e., $v_{\rm in}/c$ in Sect.~\ref{models}). This motion is due to the $\mathbf{E_{\rm z}}\times \mathbf{B_{\rm x}}$ drift velocity, and is about $\beta_{\rm y}\approx\pm 0.3$ in this particular simulation. The $z$-component is related to the electric current carried by counter-streaming electrons and positrons around the X-points. The corresponding fluid velocity is about $\beta_{\rm z}=\beta_+=-0.6$ for the positrons and $\beta_{\rm z}=\beta_-=+0.6$ for the electrons, but the net velocity is close to zero if both fluids are combined. Overall, the bulk Lorentz factor remains close to unity in this simulation (see Fig.~\ref{fig_bulk2}), which demonstrates that the anisotropic particle distribution is not related to relativistic Doppler beaming. This being said, according to \citet{2005MNRAS.358..113L}, the bulk Lorentz factor of the outflow in relativistic Petschek-like reconnection should scale as $\gamma_{\rm out}\sim\sqrt{\sigma}$. Indeed, it is hard to envision a scenario of fast reconnection (in the high-$\sigma$ regime) where the outflowing material is not in relativistic bulk motion.
PIC simulation runs that follow the evolution of the current sheet on a longer time scale typically find that the $\gamma_{\rm out}\sim\sqrt{\sigma}$ scaling works in the high-$\sigma$ regime (\citealt{2014ApJ...783L..21S}, K.~Nalewajko 2013, private communication). \section{Astrophysical applications}\label{applications} \subsection{Pulsars and pulsar wind nebulae}\label{PWN} Pulsars are often regarded as one of the most suitable astrophysical environments for relativistic pair plasma reconnection. These objects are known to generate an extremely magnetized pair plasma within their co-rotating magnetosphere. The plasma is released in the form of a relativistic magnetized wind beyond the light-cylinder surface, which is defined as the surface where the velocity of co-rotation with the star equals the speed of light. In the wind region, the magnetic field lines open up and become mostly toroidal due to the fast rotation of the neutron star. This configuration naturally results in the formation of an equatorial current sheet (or ``striped wind'') that separates the two magnetic polarities. This is the relativistic analog of the well-known ballerina-skirt-shaped heliospheric current sheet. Reconnection in the equatorial current sheet was first proposed by \citet{1990ApJ...349..538C} and \citet{1994ApJ...431..397M} as a remedy to the ``sigma-problem'', i.e., to explain the transition from a Poynting-flux-dominated flow formed close to the neutron star ($\sigma\gg 1$) to the observed low-$\sigma$ pulsar wind nebulae. However, \citet{2001ApJ...547..437L} noticed that the dissipation of the current sheet would be followed by the acceleration of the wind. In the Crab pulsar, the wind would reach the termination shock before reconnection could proceed, unless the pulsar injects pairs at a higher rate than usually expected \citep{2003ApJ...591..366K}.
As an alternative to the classical magnetospheric models (e.g., polar-cap, outer-gap, slot-gap), \citet{1996A&A...311..172L} suggested that reconnection in the striped wind could also explain the high-energy gamma-ray emission observed in pulsars \citep{2002A&A...388L..29K, 2012MNRAS.424.2023P, 2013A&A...550A.101A, 2014ApJ...780....3U}. If, however, reconnection is inefficient in the wind zone, the striped wind is forced to dissipate at the termination shock \citep{2003MNRAS.345..153L}. Using particle-in-cell simulations, \citet{2007A&A...473..683P} in 1D and \citet{2011ApJ...741...39S} in 2D and 3D showed that shock-driven reconnection is able to annihilate the magnetic structure and efficiently accelerate particles at large magnetizations, regardless of the wind properties. Whether the dissipation occurs in the wind or at the termination shock, it only partially solves the sigma-problem, because the striped wind covers only a fraction of the solid angle set by the inclination angle between the rotation axis and the magnetic axis. Hence, the wind and the nebula should remain magnetically dominated at high latitudes (except for an orthogonal rotator). But, as we know from observations, pulsar wind nebulae are flows dominated by particle kinetic energy, so there must be an extra mechanism to dissipate the remaining Poynting flux. \citet{1992SvAL...18..356L} and \citet{1998ApJ...493..291B} argued that pulsar wind nebulae should be subject to non-axisymmetric kink-like instabilities. Their hypothesis was recently corroborated by 3D relativistic MHD simulations by \citet{2011ApJ...728...90M} and \citet{2013MNRAS.431L..48P, 2014MNRAS.438..278P}. The dissipation of the magnetic energy could occur during the non-linear development of these instabilities via non-ideal MHD effects such as magnetic reconnection.
The surprising discovery of short-lived, bright gamma-ray flares from the Crab Nebula \citep{2011Sci...331..736T, 2011Sci...331..739A} could be the direct evidence of magnetic reconnection in the Nebula \citep{2011ApJ...737L..40U, 2012MNRAS.426.1374C, 2014PhPl...21e6501C}. Using 2D and 3D PIC simulations, \citet{2013ApJ...770..147C, 2014ApJ...782..104C} showed that most of the features of the flares can be explained with relativistic reconnection (timescale, energetics, particle and photon spectra). In particular, these studies demonstrated that reconnection can accelerate particles above the synchrotron radiation burn-off limit \citep{1983MNRAS.205..593G, 1996ApJ...457..253D} deep inside the reconnection layer where the electric field overcomes the magnetic field (see Fig.~\ref{fig_burnoff}), as anticipated by \citet{2004PhRvL..92r1101K} and \citet{2007A&A...472..219C} (Sect.~\ref{intro_rad}). This result is crucial because it can explain the emission of $>100~$MeV synchrotron radiation emitted during every Crab flare, which would be impossible to achieve in ideal MHD. The reconnection scenario would work best in the most magnetized regions of the nebula, i.e., near the poles and possibly in the jets \citep{2012ApJ...746..148C, 2012MNRAS.427.1497L, 2013MNRAS.428.2459K, 2013MNRAS.436.1102M}. Unfortunately, the current gamma-ray telescopes do not have the angular resolution to pin down the precise location of the flares within the Nebula. \begin{figure}[] \centering \includegraphics[width=12cm]{Fig13-eps-converted-to.pdf} \caption{Isotropically-averaged particle spectrum ($\gamma d{\rm N}/d\gamma$, left panel) and synchrotron radiation energy distribution ($\nu F_{\nu}$, right panel) in 2D (solid line) and 3D (dashed line) PIC simulations of relativistic reconnection, including the effect of the radiation reaction force on the particles.
The vertical dotted lines show the radiation-reaction limited energy of a particle if $E=B_0$ ($\gamma=\gamma_{\rm rad}$, left), and the corresponding maximum synchrotron photon energy ($\epsilon=160~$MeV independent of $E$ and $B_0$, right). Figure adapted from \citet{2014PhPl...21e6501C}.} \label{fig_burnoff} \end{figure} \subsection{Jets from Active Galactic Nuclei}\label{AGN} Jets from active galactic nuclei have been monitored for decades at practically all accessible electromagnetic wavelengths, resulting in a very rich phenomenology \citep{1995PASP..107..803U}. When the jet is pointing close to our line of sight, it is referred to as a ``blazar''. Recent observational progress in the blazar field has been immense. In particular, Cherenkov telescopes can now detect minute-timescale variability in an increasing number of blazars \citep{2007ApJ...664L..71A}. These novel results strongly constrain the hydrodynamical models for the jet emission. A broad consensus has emerged regarding the qualitative nature of the ``central engine''. The energy source in this view is a spinning black hole or the inner accretion disk threaded by a strong magnetic field (see, e.g., \citealt{1982MNRAS.199..883B}). This field transfers rotation energy outward as a Poynting flux. While part of the magnetic energy is used for the bulk acceleration of the jet, much of the energy remains in the magnetic field \citep{2010MNRAS.402..353L} and is available to power the jet emission through dissipation by instabilities and magnetic reconnection \citep{2009MNRAS.395L..29G}. In this picture, the jet is expected to be magnetically dominated in the emitting region, i.e., one deals with relativistic reconnection.
In the following we show that, applying our current understanding of relativistic reconnection to the physical conditions expected in blazar jets, reconnection can account for the extreme energetics and timescales inferred from blazar observations (for a similar approach to the modeling of the emission from gamma-ray bursts, see \citealt{2005A&A...430....1G, 2006NJPh....8..119L, 2011ApJ...726...90Z}). The possibility that ultra-high-energy cosmic ray acceleration takes place at the current sheets of the reconnection regions of powerful jets is investigated in \citet{2010MNRAS.408L..46G}. \paragraph{The magnetic reconnection model for blazar emission:} Blazar emission varies on time\-scales typically ranging from hours to years and is thought to reflect, in part, variations of the gas properties in the black-hole vicinity\footnote{Several hours is the event-horizon light-crossing time of a billion solar-mass black hole -- the mass typically inferred for the central engine in blazars: $t_{\rm cross}=2GM_{\rm BH}/c^3\simeq 10^4 M_9$ s.}. The recently discovered ultra-fast TeV flares from several blazars\footnote{Including Mrk 421, Mrk 501, PKS 2155-304, PKS 1222-216, and BL Lac.} (see, e.g., \citealt{2007ApJ...664L..71A, 2007ApJ...669..862A}) strongly challenge the models for the blazar emission (\citealt{2008MNRAS.386L..28G,2009MNRAS.395L..29G}). This rare but generic blazar activity has several very revealing properties. (i) Fast flares have a $\sim 10$ minute variability timescale, i.e., a factor $\sim 100$ shorter than the light-crossing time of the black hole, pointing to extremely compact emitting regions.
(ii) The emitting material must move with $\Gamma_{\rm em}\gtrsim 50-100$ for the TeV radiation to avoid annihilation by soft radiation fields at the source \citep{2008MNRAS.384L..19B, 2008ApJ...686..181F}; these values of $\Gamma_{\rm em}$ are much larger than the bulk jet motion $\Gamma_j\sim 10$ typically inferred in blazars from radio observations (see \citealt{2009AJ....138.1874L}). (iii) For $\gtrsim 100$ GeV photons to escape the {\it observed} broad line region of the blazar PKS 1222-216, the emitting region must be located at scales $\gtrsim 0.5$ pc \citep{2011A&A...534A..86T}. (iv) Simultaneous TeV and GeV ({\it Fermi-LAT}) observations indicate that the TeV flaring takes place on top of longer, day-long blazar activity (e.g. \citealt{2011ApJ...733...19T}). (v) Fast flares may come in a repetitive fashion of similar events, as observed in PKS 2155-304 \citep{2007ApJ...664L..71A}. Taken together, these inferences are extremely constraining for models of the blazar emission. \citet{2009MNRAS.395L..29G} argued that the ultra-fast variability must be generated internally in the jet by MHD instabilities. In strongly magnetized jets, the reconnection process injects energetic particles in compact, fast moving regions. These regions are natural emitters of powerful flares. Furthermore, the emitting material is expected to be faster than the jet on average, allowing the TeV photons to escape the source. For a jet moving with bulk $\Gamma_{\rm j}\sim 10-20$ and a plasmoid being ejected with bulk $\gamma_{\rm out}\simeq \sqrt{\sigma}$ (as measured in the rest frame of the jet), the emitting region moves with $\Gamma_{\rm em}\simeq 2 \Gamma_{\rm j} \gamma_{\rm out}$ (in the frame of the host galaxy). For $\sigma\sim$ several, one can easily account for the required $\Gamma_{\rm em}\gtrsim 50$. Applications of the model to fit spectra of specific sources are reported in \citet{2010MNRAS.402.1649G,2011MNRAS.413..333N}.
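The Lorentz-factor bookkeeping above ($\Gamma_{\rm em}\simeq 2\Gamma_{\rm j}\gamma_{\rm out}$ with $\gamma_{\rm out}\simeq\sqrt{\sigma}$) can be checked with a short numerical sketch. The fiducial $\Gamma_{\rm j}$ and $\sigma$ below are illustrative choices within the ranges quoted in the text ($\Gamma_{\rm j}\sim 10$--$20$, $\sigma\sim$ several), not values fixed by the model:

```python
import math

# Illustrative fiducial values (assumptions): bulk jet Lorentz factor
# Gamma_j ~ 10-20 and magnetization sigma ~ several, as quoted in the text.
Gamma_j = 15.0
sigma = 6.0

# Plasmoid bulk Lorentz factor in the jet rest frame, gamma_out ~ sqrt(sigma).
gamma_out = math.sqrt(sigma)

# Lorentz factor of the emitting region in the frame of the host galaxy,
# Gamma_em ~ 2 * Gamma_j * gamma_out (plasmoid moving along the jet axis).
Gamma_em = 2.0 * Gamma_j * gamma_out

print(f"gamma_out ~ {gamma_out:.1f}")
print(f"Gamma_em ~ {Gamma_em:.0f}")  # ~73, above the Gamma_em >~ 50 required
```

Even a modest magnetization thus lifts the effective Lorentz factor of the emitting region well above the $\Gamma_{\rm em}\gtrsim 50$ demanded by TeV opacity, without requiring an implausibly fast jet.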
The \citet{2009MNRAS.395L..29G} model is based on a simplified picture for the reconnection geometry, adopting a steady-state reconnection model. As pointed out by \citet{2012MNRAS.420..604N}, steady reconnection cannot account for the fastest evolving blazar flares because the variability timescale is limited by the reconnection speed $\beta_{\rm in}<1$. However, assuming steady reconnection is over-simplistic. Solar and Earth magnetosphere observations and recent advances in theory and numerical simulations (see previous Sections) have revealed that reconnection is an inherently time-dependent, highly dynamic process (see, e.g., \citealt{2005ApJ...622.1251L, 2006PhRvL..96s5003P, 2010SoPh..266...71K}). These time-dependent aspects of reconnection are crucial in understanding the fastest timescales involved in blazar flaring. For the physical conditions prevailing in jets, the reconnection current sheets are expected to suffer from tearing instabilities that lead to their fragmentation into a large number of plasmoids \citep{2007PhPl...14j0703L, 2009PhPl...16k2102B}. The plasmoids grow rapidly through mergers before leaving the reconnection region. Occasionally, plasmoids grow to a scale of the order of that of the reconnection region, forming ``monster'' plasmoids (\citealt{2010PhRvL.105w5002U}; see Fig.~\ref{fig:dimitrios}, left panel). The relativistic motion of the plasmoids in the rest frame of the jet results in additional beaming of their emission (i.e., beyond that induced by the jet motion). When the layer's orientation is such that plasmoids beam their emission towards the observer, powerful and fast evolving flares emerge.
{\it Here we focus on the characteristic observed timescales and luminosities resulting from plasmoids that form in the reconnection region.} For simplicity, we assume that the dissipated energy is efficiently converted into radiation.\footnote{In practice the blazar emission is likely to result from ultrarelativistic electrons cooling via synchrotron radiation and Compton scattering. As discussed in previous sections, relativistic reconnection is an effective means of accelerating particles to such extreme energies.} \citet{2013MNRAS.431..355G} demonstrated that a broad range of blazar phenomenology can be qualitatively understood in the context of plasmoid-dominated reconnection. The virtue of the model is that it can be applied to all blazar sources with observed fast flaring for similar adopted parameters. The model favors pc-scale dissipation for the origin of the fast flaring and provides theoretical motivation for such a dissipation distance. Another interesting aspect of the model is that a sequence of fast flares is expected to have a similar timescale, set by the size of the reconnection layer, as observed in PKS 2155-304. This work has demonstrated that the tight energetics, emitter Lorentz factor, and timescale constraints (i)-(v) are satisfied in the reconnection model. {\it More importantly, the basic assumptions of the Giannios 2013 analysis on the properties of the reconnection layer have been fully verified by PIC simulations since then (see previous Sections).} \begin{figure}[t] \includegraphics[width=6cm]{Fig14a-eps-converted-to.pdf} \includegraphics[width=6cm]{Fig14b-eps-converted-to.pdf} \caption{{\it Left Panel}: Schematic representation of the geometry of the reconnection process, shown in a frame comoving with the jet. Magnetic field lines of opposite polarity annihilate in the $xy$-plane with speed $v_{\rm rec}=\beta_{\rm in} c$. The reconnection layer fragments into a large number of plasmoids.
Regularly, plasmoids undergo multiple mergers resulting in a ``monster'' plasmoid (shaded blob). {\it Right Panel}: Sketch of the emission from plasmoid-dominated reconnection. The reconnection proceeds on a global timescale $t_{\rm rec}=l/\beta_{\rm in} c$, powering $\sim 1$-day long flares (or envelope emission). Regularly, plasmoids grow to become ``monster'' plasmoids (shaded blob), giving rise to powerful, fast-evolving flares of duration $t_{\rm flare}\sim 10$ minutes. Several fast flares are expected from a single reconnection event.} \label{fig:dimitrios} \end{figure} {In the remainder of this Section, we make a plausibility argument for the model: we estimate the characteristic observed timescales and luminosities resulting from plasmoids that form in the reconnection region} (for full derivations, see \citealt{2013MNRAS.431..355G}). To this end, we consider a blob (or plasmoid) emerging from the reconnection layer moving with the Alfv\'en speed of the reconnection upstream, i.e., with a corresponding bulk Lorentz factor $\gamma_{\rm out}\simeq \sqrt{\sigma}$ (measured in the jet rest frame) and of size $R_{\rm p}''=fl'$, where $l'$ is the characteristic scale of the reconnection region and $f$ is a dimensionless parameter of the order of 0.1, as expected for the largest, ``monster'' plasmoids \citep{2010PhRvL.105w5002U}; hereafter, primed (double primed) quantities are measured in the rest frame of the jet (emitting blob).\footnote{We assume that the plasmoid instability operates across the whole length of the current sheet, as opposed to a situation where a central, very compact dissipation region forms and is surrounded by extended magnetic separatrices (the slow shocks in the Petschek model) across which most of the plasma flows.
In the latter case, the monster plasmoids may be smaller.} The observed characteristic variability time for the plasmoid emission is $t_{\rm v}\simeq R''_{\rm p}/\delta_p c$, where $\delta_{\rm p}$ is the Doppler boost of the plasmoid radiation towards the observer. For a central engine in which the magnetic field varies on a dynamical time $\sim R_{\rm Sch}/c$, the characteristic scale of the reconnection region can be estimated to be $l'\simeq\Gamma_{\rm j} R_{\rm Sch}$, resulting in \begin{equation} t_{\rm v}=\frac{f\Gamma_jR_{\rm Sch}}{\delta_{\rm p}c}=400 f_{-1}\Gamma_{\rm j, 20}M_9\delta_{p, 50}^{-1}\rm \quad s, \label{eq:1} \end{equation} where $\delta_{\rm p}=50\delta_{\rm p,50}$, $f=0.1f_{-1}$, and $\Gamma_{\rm j}=20\Gamma_{\rm j,20}$; $f\sim 0.1$ describes the largest plasmoids expected in the layer \citep{2010PhRvL.105w5002U}. Flaring on a several-minute timescale is therefore expected in this picture. Consider a jet emerging from a supermassive black hole with (isotropic equivalent) power $L_{\rm iso}$, opening angle $\theta_j$ and Lorentz factor $\Gamma_j$. We also assume that $\theta_j \Gamma_j=0.2$, as indicated by observations \citep{2009A&A...507L..33P}. The typical bulk Lorentz factor of gamma-ray active blazars is $\Gamma_j\sim 10-20$ \citep{2010A&A...512A..24S, 2012ApJ...758...84P}. The energy density at the dissipation, or ``blazar'', zone is \begin{equation}U'_{\rm j}=\frac{L_{\rm iso}}{4\pi (\theta_{\rm j} R_{\rm diss})^2\delta_{\rm j}^4c}.\end{equation} The dissipation distance $R_{\rm diss}$ is estimated by requiring that the reconnection proceeds within the expansion time of the jet ($R_{\rm diss}/\Gamma_jc\sim l'/\beta_{\rm in} c$). Pressure balance across the reconnection layer requires the energy density of the plasmoid to be similar to that of the jet, $U''_p\sim U'_j$.
Assuming efficient conversion of dissipated energy into radiation, the observed luminosity of the plasmoid is $L_{\rm p,obs}=\delta_p^4L''=\delta_p^4U_{\rm p}''4\pi R_{\rm p}''^2c$, where $L''$ is the rest-frame luminosity. Putting everything together, the observed luminosity of the plasmoid is \citep{2013MNRAS.431..355G} \begin{equation} L_{\rm p,obs}=10^{47}\frac{\beta_{\rm in, -1}^2f_{-1}^2\delta_{p,50}^4L_{\rm iso,48}}{\delta_{j,20}^4}\quad \rm erg/s. \label{eq:3} \end{equation} The Doppler factor of the plasmoid $\delta_{\rm p}$ depends on several parameters. It is related to $\Gamma_j$, $\gamma_{\rm out}$, the angle of the plasmoid with respect to the jet motion, and the observer's angle of sight. For typical situations where the reconnection layer is at a large $\theta\sim \pi/2$ angle with respect to the jet propagation (as seen in the jet rest frame) and fairly aligned with the observer (giving powerful flares), $\delta_{\rm p} \sim \Gamma_{\rm j}\gamma_{\rm out}$. {One can see from Eq.~(\ref{eq:3}) that powerful flares on a timescale of $\sim$10 min are possible even with very modest relativistic motions within the jet, $\gamma_{\rm out}\sim 2$.} \paragraph{Ejection of multiple monster plasmoids:} During a reconnection event multiple monster plasmoids are expected to form. 2D simulations \citep{2012PhPl...19d2303L} indicate that monster plasmoids form every few Alfv\'en times $t_A$, or at a rate of $\sim 0.3t_A^{-1}$. It appears likely that 2D simulations underestimate the rate of formation of monster plasmoids. The actual rate may be higher when the 3D structure of the layer is considered \citep{2014ApJ...783L..21S}. If monster plasmoids emerge at a rate $\sim (0.3-3)t_A^{-1}$, some $(3-30)/\beta_{\rm in, -1}$ plasmoids are expected from a single reconnection layer, powering multiple flares. A sketch of such a pattern is shown in Fig.~\ref{fig:dimitrios}.
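The normalization of Eq.~(\ref{eq:1}), and the engine light-crossing time quoted in the footnote above, can be verified numerically. The sketch below uses standard CGS constants and the fiducial parameters of the text; it is only a sanity check of the order-of-magnitude scalings, not a new result:

```python
# Sanity check of Eq. (1), t_v = f * Gamma_j * R_Sch / (delta_p * c), at the
# fiducial parameters of the text: f = 0.1, Gamma_j = 20, delta_p = 50,
# M_BH = 1e9 M_sun. Constants are standard CGS values.
G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
c = 2.998e10        # speed of light [cm s^-1]
M_sun = 1.989e33    # solar mass [g]

M_BH = 1e9 * M_sun
R_Sch = 2.0 * G * M_BH / c**2       # Schwarzschild radius, ~3e14 cm

f, Gamma_j, delta_p = 0.1, 20.0, 50.0
t_v = f * Gamma_j * R_Sch / (delta_p * c)

t_cross = R_Sch / c                 # engine light-crossing time (footnote)

print(f"R_Sch   ~ {R_Sch:.1e} cm")
print(f"t_v     ~ {t_v:.0f} s")     # ~400 s, i.e. minute-timescale flares
print(f"t_cross ~ {t_cross:.0f} s") # ~1e4 s, as in the footnote
```

The result, $t_{\rm v}\approx 400$~s and $t_{\rm cross}\approx 10^4$~s, reproduces the quoted normalizations of Eq.~(\ref{eq:1}) and of the footnote.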
\paragraph{The ``envelope emission'' from the reconnection region:} The bulk motion of a monster plasmoid is expected to be similar to the speed of other structures (e.g. smaller plasmoids) leaving the reconnection region. When the plasmoid emission is beamed towards the observer (powering a fast flare), the overall emission from the current layer is also beamed by a similar factor. The emission from the layer forms a slower-evolving ``envelope''. In the following we estimate the timescale and luminosity of the emission from the reconnection layer. At the dissipation distance $R_{\rm diss}$, the reconnection proceeds within the expansion time of the jet ($R_{\rm diss}/\Gamma_jc\sim l'/\beta_{\rm in} c$), which is observed to last for $t_{\rm exp,obs}\simeq R_{\rm diss}/\Gamma_j^2c$. Therefore, $t_{\rm exp,obs}$ corresponds to the observed duration of the envelope emission, which is simply (using also Eq.~(\ref{eq:1})): \begin{equation} t_{\rm env}=\frac{R_{\rm diss}}{\Gamma_j^2c}=10^5\frac{M_9}{\beta_{\rm in, -1}} \quad \rm s. \end{equation} The duration of the envelope emission is $\sim$days. Such a timescale is characteristic of blazar flares. The (lab frame) energy available to power the envelope emission is $E_{\rm env}=U_{\rm j}2l'^3/\Gamma_{\rm j}$, where $U_j=\Gamma_j^2U'_j$ is the energy density of the jet and $2l'^3/\Gamma_{\rm j}$ accounts for the (lab frame) volume of the reconnection region that powers each minijet (see Fig.~\ref{fig:dimitrios}). The emitted luminosity of the reconnection region is $E_{\rm env}/t_{\rm env}$. It can be converted into {\it observed} luminosity by accounting for the beaming factor of the emission, $\sim \delta_p^2$: \begin{equation} L_{\rm env,obs}\simeq 2\Gamma_{\rm j}^2\delta_{\rm p}^2l'^2U_{\rm j}'\beta_{\rm in} c=3\times 10^{46}\frac{\Gamma_{\rm j,20}^2\delta_{\rm p,50}^2 \beta^3_{\rm in, -1}L_{\rm iso, 48}}{\delta_{j,20}^4} \quad \rm erg/s. \label{eq:4} \end{equation} The envelope emission is quite bright.
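The envelope timescale can also be checked numerically: with $R_{\rm diss}\sim\Gamma_{\rm j}l'/\beta_{\rm in}$ and $l'\simeq\Gamma_{\rm j}R_{\rm Sch}$, the estimate $t_{\rm env}=R_{\rm diss}/\Gamma_{\rm j}^2c$ reduces to $R_{\rm Sch}/\beta_{\rm in}c$, independent of $\Gamma_{\rm j}$. The sketch below assumes standard CGS constants and the fiducial $M_9=1$, $\beta_{\rm in}=0.1$:

```python
# Check of the envelope timescale: with R_diss ~ Gamma_j * l'/beta_in and
# l' ~ Gamma_j * R_Sch, t_env = R_diss/(Gamma_j^2 c) reduces to
# R_Sch/(beta_in * c), independent of Gamma_j. Standard CGS constants; M_9 = 1.
G, c, M_sun = 6.674e-8, 2.998e10, 1.989e33
R_Sch = 2.0 * G * (1e9 * M_sun) / c**2   # Schwarzschild radius [cm]

beta_in = 0.1                            # fiducial reconnection speed
t_env = R_Sch / (beta_in * c)

print(f"t_env ~ {t_env:.1e} s (~{t_env / 86400.0:.1f} days)")
# ~1e5 s, i.e. ~1 day, characteristic of the observed blazar flare envelopes
```

The $\sim 10^5$~s ($\sim$1 day) result matches the normalization of the $t_{\rm env}$ equation above.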
Dividing Eqs.~(\ref{eq:3}) and (\ref{eq:4}), one arrives at a fairly simple expression for the ratio of the plasmoid to envelope luminosities, $L_p/L_{\rm env}\sim 3f_{-1}^2\delta_{\rm p,50}^2/(\Gamma_{\rm j,20}^2\beta_{\rm in, -1})$. The luminosity contrast depends only on the Lorentz factor of the minijet in the rest frame of the jet, $\gamma_{\rm p}\simeq \delta_{\rm p}/\Gamma_{\rm j}$, the size of the plasmoid parametrized by $f$, and the reconnection speed $\beta_{\rm in}$. The observed luminosity ratio is of order unity, constraining $\delta_{\rm p,50}/\Gamma_{\rm j,20} \sim 1$ for $\beta_{\rm in}\sim f\sim 0.1$. The ratio $\delta_{\rm p,50}/\Gamma_{\rm j,20}$ is determined by the reconnection-induced bulk motions in the jet and points to $\gamma_{\rm out}\sim 2$ or, equivalently, a moderately magnetized jet with $\sigma\sim $ several. Most of the current numerical work on relativistic reconnection (and this review so far) has focused on the case of electron-positron plasmas. The composition of the jet flow is still an open question, but an electron-proton jet is a strong possibility. Electron-ion reconnection is more challenging, on a numerical level, than electron-positron reconnection, since the computation has to resolve the small scales of electrons, yet the system evolves on the longer ion timescales. However, the physics of relativistic electron-proton reconnection, though still at an early stage of investigation, shows remarkable similarities with electron-positron reconnection \citep[e.g.,][]{melzani14}. A detailed investigation of relativistic reconnection in the case of unequal-mass charge carriers is of paramount importance to obtaining predictions for the acceleration of electrons and cosmic rays in blazar jets.
One important outcome is that plasma instabilities in current sheets play a crucial role in the dynamics of reconnection. In particular, the tearing instability, which fragments the current sheet, leads to fast reconnection and efficient non-thermal particle acceleration. Particle-in-cell simulations are now large enough to unambiguously identify broad, hard power laws in the particle energy distributions (in the high-magnetization limit). The power-law index is typically harder than the universal $\sim -2$ index expected in shock acceleration. These impressive developments were also motivated by puzzling observations of high-energy phenomena in the Universe, especially flaring gamma-ray sources. Ultra-rapid gamma-ray flares discovered in the Crab Nebula and in several AGN jets are too fast and too bright to be explained by conventional models. Particle beaming and relativistic bulk motions associated with relativistic reconnection can alleviate these difficulties. We expect fast new developments in this field, with more applications to astrophysical objects. \begin{acknowledgements} We thank the referees for useful comments that helped to improve the manuscript. LS is supported by NASA through Einstein Postdoctoral Fellowship grant number PF1-120090 awarded by the Chandra X-ray Center, which is operated by the Smithsonian Astrophysical Observatory for NASA under contract NAS8-03060. BC acknowledges support from the Lyman Spitzer Jr. Fellowship awarded by the Department of Astrophysical Sciences at Princeton University, and the Max-Planck/Princeton Center for Plasma Physics. DG acknowledges support from the NASA grant NNX13AP13G. \end{acknowledgements} \bibliographystyle{aps-nameyear}
\section{I. Introduction} Thermopower ($S$) of Ce-based Kondo lattice systems (KLS) exhibits a variety of unusual features depending on the coupling strength ($J$) between Ce-4$f$ and conduction electrons, which is of both fundamental and applied interest \cite{Zlaticbook,Sakurai}. Particularly, as $J$ is enhanced by external pressure ($p$), typical features in $S(T)$ from the Kondo to the intermediate-valence (IV) regime can be observed even in a single compound, examples of which include CeCu$_{2}$Si$_{2}$ \cite{CeCu2Si2}, CeAl$_{3}$ \cite{CeAl3}, CeCu$_{2}$Ge$_{2}$ \cite{CeCu2Ge2andCePd2Si2}, CePd$_{2}$Si$_{2}$ \cite{CeCu2Ge2andCePd2Si2}, CeCu$_{2}$ \cite{CeCu2}, and CeRu$_{2}$Ge$_{2}$ \cite{CeRu2Ge2}. At $p$ = 0, $S(T)$ of these materials displays a positive maximum at high $T$ but a negative minimum at low $T$. The increase of $p$ tends to upshift the whole $S(T)$ curve. Once $J$ becomes large enough for Ce to be in an IV state, one broad positive $S(T)$ maximum of the order of $k_{\rm B}$/$e$ (= 87 $\mu$V/K) persists, often with a shoulder at low $T$ \cite{CePd3}. Theoretically, while at high and intermediate $T$ the $S$ behavior is understood as resulting from the interplay between Kondo and crystal field (CF) effects \cite{Coqblin,Zlatic1,Zlatic2}, the low-$T$ negative $S$ remains mysterious. However, since in the aforementioned cases the long-range magnetic order (if any) disappears at $p$ $<$ 8 GPa \cite{CeCu2Ge2andCePd2Si2,CeCu2,CeRu2Ge2}, the way in which $S(T)$ evolves with $p$ remains largely unexplored in the weak-Kondo-coupling (small $J$) regime. In this context, it is worth noting that we recently found a giant overlap in $p$ between the magnetic and superconducting phases of CeAu$_{2}$Si$_{2}$, which contrasts radically with the observations made on its sister compounds CeCu$_{2}$$X$$_{2}$ ($X$ = Si, Ge) \cite{RenPRX}.
Strikingly, the magnetic order of CeAu$_{2}$Si$_{2}$ persists up to $p_{\rm c}$ $\sim$ 22 GPa \cite{RenPRX}, which, together with the small magnitude of $\mid$$S$$\mid$ ($\sim$2 $\mu$V/K) \cite{DidierCeCu2Ge2,CeAu2Si2TEP1,CeAu2Si2TEP2}, places the compound far from magnetic instabilities. Thus, the high-$p$ $S$($T$) measurement of CeAu$_{2}$Si$_{2}$ not only allows us to look for possible differences with the normal-state properties of CeCu$_{2}$$X$$_{2}$, but also offers a rare opportunity to study the $p$-evolution of the characteristic features in $S$($T$) starting from a small $J$. In this paper, we present thermopower $S(T)$ and electrical resistivity $\rho(T)$ measurements on the very same CeAu$_{2}$Si$_{2}$ crystal at $p$ up to 27.8 GPa. Thanks to its large $p_{\rm c}$, a remarkable scaling behavior is uncovered: the $S(T)$ data for $p$ $\leqslant$ 20.5 GPa, when normalized by a proper factor, collapse onto a single curve when plotted against $T$/$T^{\rm *}$, where $T^{\rm *}$ is the temperature at which the first sign change of $S$ occurs with decreasing $T$. The comparison with the results of $\rho$ measurements shows that the normalization factor and $T^{\rm *}$ follow nearly the same $p$-dependence as the $-$ln$T$ slope of $\rho(T)$ and the overall CF splitting $\Delta_{\rm CF}$, respectively. In the $p$-range where superconductivity (SC) is enhanced with $p$, a large negative minimum in $S(T)$ develops below $T^{\rm *}$. Further on, it is shown that the scaling relation is commonly applicable to Ce-based KLS provided that the coupling strength $J$ is small enough. These results are discussed in regard to existing theories and their possible implication for Ce-based heavy fermion SC. \section{II. Experimental} We grew CeAu$_{2}$Si$_{2}$ crystals from Sn flux as described elsewhere \cite{RenPRX,OnukiCeAu2Si2}.
A sample cut from a crystal with low residual resistivity $\rho_{\rm 0}$ $\approx$ 2 $\mu$$\Omega$cm is used for in-plane $S$($T$) and $\rho$($T$) measurements, which are carried out in a Bridgman-type anvil pressure cell in the temperature range 1.3 $<$ $T$ $<$ 300 K with lead (Pb) as the $p$-gauge \cite{Bridgmananvil}. A photograph of the high-$p$ chamber is shown in S1 of the Supplemental Material \cite{SM}. Compared with the previous study \cite{RenPRX}, we succeed in further miniaturizing the setup by reducing the sample size, the $p$-cell thickness ($d$) and the cross section of the thermocouple (TC) wires. Due to the rapid relaxation of the thermal gradient $\Delta$$T$ $\propto$ exp(-1/$d$), where $d$ $\approx$ 40 $\mu$m, special care must be taken to position the TC wires as close as possible to the heater. Following Ref. \cite{CeRu2Ge2}, $S(T)$ of the TC wires is assumed to be $p$-independent, and we check that our results are free from their (possible) small variations under $p$. Corrections due to the misplacement of the TC wires are introduced by examining the chamber after depressurization. Within the experimental error of $\sim$ 20\%, the results presented here are in good agreement with those obtained in another cell for the overlapping $T$-range (1.3 $<$ $T$ $<$ 7 K) \cite{GernotCeAu2Si2}. \begin{figure} \includegraphics*[width=9cm]{Fig1.eps} \caption{(Color online) (a) Logarithmic $T$-dependence of the non-phononic contribution ($\Delta$$\rho$) to the in-plane resistivity of CeAu$_{2}$Si$_{2}$ for typical pressures. The arrows indicate the two resistivity maxima at $T_{1}^{\rm max}$ and $T_{2}^{\rm max}$, respectively, for 12.9 GPa. The two dashed lines evidence the $-$ln$T$ dependence of the resistivity. At 16.7 GPa, the magnetic and onset superconducting transitions are denoted by $T_{\rm M}$ and $T_{\rm c}$, respectively. (b) The $-$ln$T$ slopes $k_{\rm 1}$ and $k_{\rm 2}$ above $T_{1}^{\rm max}$ and $T_{2}^{\rm max}$ as a function of $p$.
At $p$ = 0, the large error in $k_{\rm 2}$ is due to the uncertainty in $\rho_{\rm ph}$; for the other pressures, the error is within the size of the symbol. The solid lines are a guide to the eyes. (c) $\Delta$$\rho$ at 27.8 GPa plotted as a function of $T^{2}$. $T_{\rm FL}$ denotes the Fermi liquid temperature below which $\Delta$$\rho$ $\propto$ $T^{2}$ (solid line). } \label{fig1} \end{figure} \section{III. Results and Discussion} Let us first briefly summarize the $\rho$($T$) results, which are essentially the same as in previous studies \cite{RenPRX,RenPRB}. Figure 1(a) shows the $T$-dependence of the non-phononic resistivity $\Delta$$\rho$ = $\rho$ $-$ $\rho_{\rm ph}$ for typical pressures, where $\rho_{\rm ph}$ is a phonon term derived from $\rho$($T$) of LaPd$_{2}$Si$_{2}$ \cite{LaPd2Si2} and assumed to be $p$-independent. Note that such a $\rho_{\rm ph}$ is a better approximation at low $T$ than the linear one used previously \cite{RenPRX,RenPRB}, which overestimates the actual $\rho_{\rm ph}$ contribution. At an intermediate $p$, say 12.9 GPa, and as observed in numerous KLS, $\Delta$$\rho$($T$) exhibits two maxima at $T_{\rm 1}^{\rm max}$ and $T_{\rm 2}^{\rm max}$. Moreover, above each maximum $\Delta$$\rho$($T$) $\propto$ $-$ln$T$, reflecting the incoherent Kondo scattering of the ground state and excited CF levels \cite{scattering1}. The increase of $p$ has little effect on $T_{\rm 2}^{\rm max}$, but enhances both $T_{\rm 1}^{\rm max}$ and the $-$ln$T$ slope $k_{i}$ = $-$$d$($\Delta$$\rho$)/$d$(ln$T$) for $T$ $>$ $T_{i}^{\rm max}$ ($i$ = 1, 2) \cite{slope}. Strikingly, as shown in Fig. 1(b), $k_{\rm 1}$ and $k_{\rm 2}$ share almost the same exponential dependence on $p$ up to 16.7 and 20.5 GPa \cite{Note1}, respectively, pointing to a $p$-independent ratio of $k_{\rm 2}$/$k_{\rm 1}$ $\approx$ 1.5.
It must be emphasized that the $p$-dependence of the $k_{\rm 1}$ and $k_{\rm 2}$ slopes is practically independent of the choice of $\rho_{\rm ph}$, but their ratio ($k_{\rm 2}$/$k_{\rm 1}$) can vary from 1.5 to 2.5 for different $\rho_{\rm ph}$ terms. Well below $T_{\rm 1}^{\rm max}$, a signature of $p$-induced SC is detected at 16.7 and 18.8 GPa in addition to the magnetic ordering, though the transition is broadened, likely due to the $p$-gradient. At the highest $p$ of 27.8 GPa, Fermi liquid (FL) behavior extends up to $T_{\rm FL}$ $\approx$ 25 K [Fig. 1(c)], and the Kondo temperature $T_{\rm K}$ $\approx$ 240 K, estimated from $T_{\rm 1}^{\rm max}$ \cite{SunCeCu2Si2}, is $\sim$140 times the ambient-$p$ value (1.7 K) deduced from neutron scattering experiments \cite{CeAu2Si2TK}. \begin{figure} \includegraphics*[width=8cm]{Fig2.eps} \caption{(Color online) In-plane $S$($T$) of CeAu$_{2}$Si$_{2}$ for (a) $p$ $\leqslant$ 17.6 GPa and (b) 18.8 $\leqslant$ $p$ $\leqslant$ 27.8 GPa. The vertical arrows and dashed lines are a guide to the eyes. The inset shows the low temperature data for typical pressures. The temperatures corresponding to the superconducting transition and the maximum in thermopower are marked by arrows. } \label{fig2} \end{figure} We will now discuss the in-plane $S$($T$) data. At $p$ = 0, $S(T)$ of CeAu$_{2}$Si$_{2}$ undergoes a sign change at $T^{\rm *}$ $\approx$ 100 K and shows a minimum at $T_{\rm min}$ $\approx$ 25 K. This nonmonotonic $T$-dependence is in stark contrast to that found in polycrystalline non-magnetic LaAu$_{2}$Si$_{2}$ \cite{CeAu2Si2TEP2}, suggesting that the dominant contribution to $S(T)$ already stems from weak Kondo scattering. As can be seen in Fig. 2(a) and (b), the evolution of $S$($T$) at low $T$ is qualitatively different at $p$ below and above 17.6 GPa. With increasing $p$ below 17.6 GPa, it is observed for the first time that the magnitude of the negative $S(T)$ is boosted from small to giant values typical of KLS.
Concomitantly, $T_{\rm min}$ increases to $\sim$50 K for $p$ $\geqslant$ 9 GPa. On closer examination, a bump at $\sim$25 K is still discernible, suggesting that there are actually two superimposed negative contributions (see dashed lines in Fig. 2). At 17.6 GPa, the $S_{\rm min}$ value $\sim$$-$30 $\mu$V/K is close to that of CeCu$_{2}$Si$_{2}$ at $p$ = 0 \cite{CeCu2Si2,SunCeCu2Si2}, an expected result given that at this $p$ the volume of CeAu$_{2}$Si$_{2}$ is reduced to that of CeCu$_{2}$Si$_{2}$ \cite{RenPRX}. Since the overall $S$($T$) behavior of CeCu$_{2}$Si$_{2}$ is very similar along the $ab$ plane and $c$-axis with nearly isotropic $S_{\rm min}$ \cite{unpublished}, a weak anisotropy in $S$($T$) is expected for CeAu$_{2}$Si$_{2}$ around this $p$. In addition, it is worth mentioning that, after a partial depressurization from 27.8 down to $\sim$17.6 GPa, while $T^{\rm *}$ is almost unaffected, the $S$($T$) magnitude is reduced concomitantly with an increase of $\rho_{\rm 0}$, in line with a Nordheim-Gorter-type relation \cite{Note5}. Let us note the weak $p$-dependence of the temperature $T^{\rm *}$, to which we will return below. For $p$ $\geqslant$ 18.8 GPa, the trend in $S(T)$ versus $p$ is reminiscent of previous observations in Ce-based KLS \cite{CeCu2Si2,CeAl3,CeCu2Ge2andCePd2Si2,CeCu2,CeRu2Ge2}. While the positive $S$($T$) keeps growing, the negative contribution decays and finally vanishes for $p$ $>$ 23.2 GPa. In fact, $S$($T$) starts to change back to a positive value at low $T$ already for $p$ $\geqslant$ 16.7 GPa. Following the sign change, $S$($T$) increases with decreasing $T$ and then drops to zero due to the superconducting transition. However, since such a sign change may already occur below 1.3 K at lower $p$, its detailed study is beyond the scope of the present paper. Incidentally, the anomaly in $S$($T$) associated with magnetic ordering is only observed at $p$ = 0.
At room $T$, $S$ at 27.8 GPa ($\sim$ 50$\mu$V/K) is $\sim$20 times higher than at $p$ = 0. By contrast, there is only a fivefold enhancement in $\rho$. Together, these results signify a giant $p$-induced increase ($\sim$80 times, i.e. $\sim$$20^{2}$/5) in the power factor $S^{2}$/$\rho$. \begin{figure} \includegraphics*[width=8.3cm]{Fig3.eps} \caption{(Color online) (a) The normalized $S$($T$) data for $p$ $\leqslant$ 22.1 GPa as a function of $T$/$T^{\rm *}$. Data at 7.5 GPa are not considered due to possible gasket relaxation after initial pressurization. The red dashed line denotes the fit from an empirical formula for dilute Kondo systems (see text). The inset shows the $p$-dependence of $S_{\rm N}$, $S_{\rm 280K}$, and $\mid$$S_{\rm min}$$\mid$, together with the $k_{\rm 2}$ slope above $T_{2}^{\rm max}$. Note that both vertical axes are in logarithmic scale, and the solid lines are a guide to the eyes. (b) The normalized $S$($T$) versus $T$/$T^{\rm *}$ for a number of Ce-based Kondo-lattice compounds at ambient pressure. The letter in parentheses denotes the crystal symmetry, with T for tetragonal, O for orthorhombic, and C for cubic. } \label{fig3} \end{figure} It is known that $S$($T$) of dilute Kondo alloys with 3$d$-impurities, when normalized by a factor of 1/$S_{\rm N}$, follows a universal function $f$($T$/$\Theta$), where the temperature $\Theta$ characterizes the coupling between the impurity local moments and the conduction electrons \cite{Kondoalloyscaling}. A similar situation is expected for 4$f$ impurities, although no experimental observations have been reported to date \cite{Zlaticbook}. The above $\rho$($T$) results, which clearly demonstrate that CeAu$_{2}$Si$_{2}$ behaves as a Kondo alloy above the temperature $T_{i}^{\rm max}$ over a broad $p$ range, lead us to examine whether such a scaling exists in this compound. As shown in the main panel of Fig.
3(a), it turns out that for $T$ $\geqslant$ $T^{\rm *}$ ($S$ $\geqslant$ 0) and $p$ up to 20.5 GPa, the normalized $S$/$S_{\rm N}$ data fall on a single curve when plotted as a function of $t$ = $T$/$T^{\rm *}$. Note that, in our case, $S_{\rm N}$ is set as $S$(1.79$T^{\rm *}$), which is the same as, or very close to, the value of $S_{\rm 280 K}$ for different $p$. At $t$ $>$ 1.8, the scaling curve can be fitted by the empirical formula $S$ $\sim$ $T$/($T$ + $T^{\rm *}$) for dilute Kondo systems \cite{Kondoformula}, indicating that $S$($T$) is ascribed to the incoherent Kondo effect at sufficiently high $T$. At $t$ $<$ 1, where $S(T)$ is likely governed by the thermal depopulation of the two upper CF doublets, a scaling is found when plotting $S$/$\mid$$S_{\rm min}$$\mid$ instead of $S$/$S_{\rm N}$ against $t$. The quality of the data collapse is not as good as for $t$ $>$ 1, probably due to the interference of the two contributions to the negative $S(T)$ minimum mentioned above. For $p$ $\geqslant$ 22.1 GPa, no such scalings are observed at either low or high $T$, which we ascribe to the delocalization of Ce-4$f$ electrons \cite{RenPRX}. The inset of Fig. 3(a) shows the resulting $S_{\rm N}$, $S_{\rm 280K}$, and $\mid$$S_{\rm min}$$\mid$ plotted as a function of $p$, together with the slope $k_{\rm 2}$ for comparison. It is striking that $S_{\rm N}$, $S_{\rm 280K}$ and $k_{\rm 2}$ exhibit the same exponential dependence on $p$ for 9 $\leqslant$ $p$ $\leqslant$ 20.5 GPa, and so does $\mid$$S_{\rm min}$$\mid$ for $p$ $\leqslant$ 9 GPa. According to Ref. \cite{scattering1}, $k_{\rm 2}$ $\propto$ $n^{2}(E_{\rm F})$$\mid$$J$$\mid^{3}$, where $n(E_{\rm F})$ is the density of states at the Fermi level.
Assuming a $p$-independent $n(E_{\rm F})$, we have $S_{\rm N}$ $\sim$ $S_{\rm 280K}$ $\propto$ $\mid$$S_{\rm min}$$\mid$ $\propto$ $\mid$$J$$\mid^{3}$ over a given $p$-range, which provides strong evidence that, at both low and high $T$, the enhancement of the $S(T)$ magnitude is due to the increase of $J$. In the high-$T$ limit, this is consistent with the theoretical calculation of the Kondo $S(T)$ for non-interacting $f$-electron spins \cite{Fischer}. Above 9 GPa, $\mid$$S_{\rm min}$$\mid$ deviates from the exponential behavior and tends to saturate, reflecting the competition between the positive and negative $S$($T$). Similarly, $S_{\rm 280K}$ saturates above 26.7 GPa on approaching the value of $k_{\rm B}$/$e$. In Fig. 3(b), we compare the scaled $S(T)$ at $p$ = 0 of various Ce-based KLS, including CeAu$_{2}$Si$_{2}$, CeCu$_{2}$ \cite{CeCu2}, CeCu$_{2}$Si$_{2}$ \cite{unpublished}, CeRhIn$_{5}$ \cite{unpublished}, CeCu$_{2}$Ge$_{2}$ \cite{CePb3TEP}, CePb$_{3}$ \cite{CePb3TEP}, CePdSn \cite{CePdSnTEP}, and CePtSn \cite{CePdSnTEP}. One can see that all data sets collapse on the same curve up to $t$ $\sim$ 2, despite the fact that $T^{\rm *}$ changes by nearly one order of magnitude. The same trend exists for CeAl$_{2}$ \cite{CeAl2TEP}, CeAl$_{3}$ \cite{CeAl3TEP}, CeCu$_{5}$Au \cite{CeCu5AuTEP} and even $\beta$-Ce \cite{betaCe}, this list being by no means exhaustive. These results clearly demonstrate that the scaling of $S(T)$ is widely applicable to Ce-based KLS when $J$ is sufficiently small, independently of the crystal structure or the local environment of the Ce ions. Furthermore, as found in CeAu$_{2}$Si$_{2}$, it is expected that such a scaling still holds for these systems within a certain $p$-range. For example, this is the case for CeCu$_{2}$Ge$_{2}$ \cite{CeCu2Ge2andCePd2Si2}. \begin{figure} \includegraphics*[width=8.7cm]{Fig4.eps} \caption{(Color online) $p-T$ phase diagram of CeAu$_{2}$Si$_{2}$ determined from the combination of resistivity and thermopower data.
A contour map of the thermopower data is also included. Closed symbols represent the results from Ref. \cite{RenPRX}. $T_{\rm K}$ is calculated from $T_{\rm K}$ $\sim$ $D$exp[$-$$\frac{1}{NJn(E_{\rm F})}$] (see text). The determination of $T_{\rm FL}$ and the crossover (COV) line can be found in the Supplemental Material \cite{SM}. Note that the vertical axis is in logarithmic scale. } \label{fig4} \end{figure} To gain more insight into the $S(T)$ evolution, we constructed a comprehensive $p$-$T$ phase diagram (PD) of CeAu$_{2}$Si$_{2}$ by combining both $S$($T$) and $\rho$($T$) data, as shown in Fig. 4. We will first discuss the driving parameter $T_{\rm K}$. As stated previously \cite{scattering1,RenPRX}, the temperature $T_{2}^{\rm max}$ scales approximately with the overall CF splitting $\Delta_{\rm CF}$, while $T_{1}^{\rm max}$ gives an indication of $T_{\rm K}$ above 16 GPa, where it rapidly grows to become much larger than $T_{\rm M}$. With increasing $p$, the Kondo contribution to $\rho$($T$) at $T_{1}^{\rm max}$ finally dominates over that at $T_{2}^{\rm max}$, and the system enters the IV regime with a trend to recover the full degeneracy of the Ce-4$f$$^{1}$ multiplet. At lower $T$, this trend is marked by the crossover (COV) line defined from the $\rho$($p$) drop due to the 4$f$ electron delocalization \cite{RenPRX,SM,seyfarth}. On the other hand, $T_{\rm K}$($p$) can be estimated as $T_{\rm K}$($p$) $\sim$ $D$exp[$-$$\frac{1}{NJ(p)n(E_{\rm F})}$] \cite{bandwith}, where $D$ is the bandwidth, $N$ is the degeneracy of the 4$f$-state and $J$($p$) = $J_{0}$$\times$$\sqrt[3]{10^{0.073p}}$ is obtained from the fitting of the $k_{\rm 2}$ data in Fig. 1(b). In order to qualitatively take into account the COV, $N$ is assumed to increase linearly with $p$ from $N$ = 2 at $p_{\rm c}$ to $N$ = 6 at 30 GPa.
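This $T_{\rm K}(p)$ estimate can be sketched numerically. The snippet below is only an illustrative evaluation under the stated assumptions: it takes $p_{\rm c} = 22$ GPa and the adjusted parameters given in the text ($D$ = 700 K, $J_{0}n(E_{\rm F})$ = 0.05); it is not meant to reproduce the published curve.

```python
import math

def kondo_temperature(p, D=700.0, J0n=0.05, p_c=22.0):
    """Illustrative T_K(p) ~ D * exp(-1/(N * J(p) * n(E_F))), with
    J(p)n(E_F) = J0n * (10**(0.073*p))**(1/3) from the k_2 fit, and
    N interpolated linearly from 2 at p_c to 6 at 30 GPa, as assumed
    in the text (p in GPa, T_K in K)."""
    Jn = J0n * 10.0 ** (0.073 * p / 3.0)
    N = 2.0 + 4.0 * (p - p_c) / (30.0 - p_c)  # degeneracy crossover
    return D * math.exp(-1.0 / (N * Jn))

# At the highest pressure of the study (27.8 GPa) this gives a few
# hundred K, of the same order as T_K ~ 240 K quoted from T_1^max.
t_k = kondo_temperature(27.8)
```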
With the adjusted parameters $D$ = 700 K and $J_{0}$$n(E_{\rm F})$ = 0.05, the calculated $T_{\rm K}$ reproduces the $T_{\rm 1}^{\rm max}$ values reasonably well for $p$ $>$ 16 GPa \cite{Note2}, confirming that $T_{\rm K}$ is increased by a factor of $\sim$40 over the huge $p$-range of SC in CeAu$_{2}$Si$_{2}$. Surprisingly, whereas no anomaly associated with the resistivity maximum at $T_{\rm 1}^{\rm max}$ ($T_{\rm K}$) can be observed in $S$($T$) for $p$ $<$ $p_{\rm c}$, this anomaly seems to be present in CeRu$_{2}$Ge$_{2}$ \cite{CeRu2Ge2}. In fact, the above results strongly suggest that the Kondo effect is mostly reflected in the magnitude, rather than the $T$-dependence, of $S(T)$ in CeAu$_{2}$Si$_{2}$ below $p_{\rm c}$. For $p$ $\geqslant$ 25.4 GPa, the position of the high-$T$ maximum in $S$($T$) agrees roughly with $T_{1}^{\rm max}$ of the resistivity, and gives an independent estimation of $T_{\rm K}$. By contrast, the temperature $T_{\rm S}^{\rm low}$ of the low-$T$ maximum is only half of $T_{\rm FL}$ and $\sim$5\% of $T_{\rm K}$, where the $T$-dependent $\rho$($T$) term is still very small. Hence its origin is unlikely to be linked to the Kondo effect, but may be ascribed to short-range magnetic correlations or to the crossover between the zero-$T$ term $S_{\rm 0}$ \cite{S01,S02} and the usual Kondo term $S_{\rm mag}$. To better clarify this issue, the sister compounds CeCu$_{2}$$X$$_{2}$ should be re-investigated, given that such a maximum appears at a much lower $p$ and hence can be studied in a more extended $p$-range \cite{CeCu2Si2,CeCu2Ge2andCePd2Si2}. As already noted above, both $T^{\rm *}$ and $T_{\rm min}$ vary weakly with $p$ and behave very similarly to $T_{2}^{\rm max}$, unlike $T_{\rm K}$, indicating that these quantities are controlled by $\Delta_{\rm CF}$.
In this respect, the two terms contributing to the negative $S$($T$) minimum might correspond to the two excited CF levels at $\sim$190 and $\sim$250 K \cite{deltaCF}, each of them producing a minimum at $T_{\rm min}$ $\sim$ $\Delta_{\rm CF}$/6. If this is the case, it may also help to understand why our $k_{\rm 2}$/$k_{\rm 1}$ ratio is much smaller than the predicted value of 11.7 \cite{Note3}. On the other hand, the weak $p$-variation of $T^{\rm *}$ explains the difference in the observed scaling behavior between our case and that of Ref. \cite{Kondoalloyscaling}, where the CF effect is absent. To our knowledge, there is currently no theory that can account for the observed features in $S(T)$, especially at low $T$. The most elaborate calculations, by Zlatic \emph{et al.}, indicate that the sign change in $S(T)$ is a manifestation of the crossover from the weak-coupling local-moment regime to the strong-coupling Fermi-liquid regime with decreasing $T$ \cite{Zlatic2}. While this is in qualitative agreement with the experimental results at high $T$, it is difficult to reconcile the weak $p$-dependence of $T^{\rm *}$ and $T_{\rm min}$ with the rapid growth of $T_{\rm K}$ over the broad $p$-range. Furthermore, the shape of $S$($T$) in the crossover region cannot be determined in their calculations, so that a direct comparison with the scaling behavior is not possible. On the other hand, according to the semiphenomenological model developed by Fischer \cite{Fischer}, the dominant contribution to the negative $S(T)$ stems from interacting spin pairs. However, the model predicts that $T^{\rm *}$ should scale with $T_{\rm K}$, which is at odds with our observations. Nevertheless, since the model considers only spin interactions, it will be interesting to investigate how the situation changes when the CF effect is included. Another salient feature illustrated in Fig.
4 is that the large negative $S(T)$ minimum (the blue region) is located right above the superconducting domain up to almost $p_{\rm c}$. A very similar situation is found when $S(T)$/$T$ is considered \cite{Note4}. Actually, as suggested in Fig. 3, this appears to be a common feature of the normal state of prototypical Ce-based $p$-induced superconductors. For CeAu$_{2}$Si$_{2}$, $\mid$$S_{\rm min}$$\mid$ and $T_{\rm c}$ exhibit a qualitatively similar $p$-dependence and hence one can speculate that this $S$($T$) minimum is intimately linked to SC. Here it is noted that $T^{\rm *}$ vanishes at a $p$ considerably higher than $p_{\rm c}$ and shows a very different $p$-dependence from that of $T_{\rm M}$, indicating that the negative $S(T)$ is not related straightforwardly to magnetic fluctuations. In this regard, the understanding of its physical origin may help to elucidate the pairing mechanism for these materials. Also, the scaling relation shown in Fig. 3 could serve as an empirical guide for the search of new Ce-based $p$-induced superconductors. For example, SC might be expected in Ce(Pt/Pd)Sn ($T_{\rm N}$ $\sim$ 7 K) and CePb$_{3}$ ($T_{\rm N}$ $\sim$ 1.1 K) under $p$, provided sufficiently high-quality crystals can be obtained. Finally, it should be noted that substantial differences exist between the normal-state properties of CeAu$_{2}$Si$_{2}$ and CeCu$_{2}$$X_{2}$ despite their similarities \cite{RenPRX}. In particular, despite its smaller overall $\Delta_{\rm CF}$, $T^{\rm *}$ of CeAu$_{2}$Si$_{2}$ ($\sim$120-150 K) is almost twice that of CeCu$_{2}$$X_{2}$ ($\sim$40-80 K). Moreover, the low $\rho_{\rm 0}$ of CeAu$_{2}$Si$_{2}$ in comparison with CeCu$_{2}$Ge$_{2}$ suggests a longer mean free path, which is more favorable for unconventional SC \cite{RenPRB}.
Given that the interaction leading to the negative $S(T)$ may also be involved in the superconducting pairing, these factors could be the key to understanding the exotic ground-state properties of CeAu$_{2}$Si$_{2}$ under $p$. \section{IV. Conclusion} In summary, we have studied systematically the high-$p$ thermopower and resistivity of CeAu$_{2}$Si$_{2}$ up to 27.8 GPa. For the first time, a scaling behavior is found in the $S(T)$ data below 20.5 GPa as a function of the reduced temperature $T$/$T^{\rm *}$. The comparison with the $\rho(T)$ results shows that the $S(T)$ magnitude is determined by the Kondo coupling $J$, while the CF splitting $\Delta_{\rm CF}$ controls the characteristic temperatures of the sign change ($T^{\rm *}$) and the negative minimum ($T_{\rm min}$). Up to almost $p_{\rm c}$, a large negative $S(T)$ minimum regularly precedes the superconducting transition, suggesting that the two phenomena are closely related. Furthermore, the scaling relation is shown to hold up to 2$T^{\rm *}$ for $S$($T$) at ambient $p$ of related systems with diverse crystal structures, testifying to its wide applicability. Our work demonstrates that thermopower can be useful in probing the $p$-evolution of the CF energy scale in Ce-based KLS, but should be used with caution in estimating the $T_{\rm K}$ of these materials. This calls for new theoretical understanding. \section{ACKNOWLEDGEMENT} \begin{acknowledgments} We acknowledge enlightening discussions with V. Zlatic and R. Monnier, as well as financial support from the Swiss National Science Foundation through Grant No. 200020-137519. \end{acknowledgments}
\section{Introduction}\label{sec:intro} Supernova (SN) explosions expel the various heavy elements generated by the nucleosynthesis process inside the progenitor stars. Meanwhile, the blast wave generated by the SN explosion sweeps up and heats the interstellar matter (ISM), forming the characteristic shell structure of each supernova remnant (SNR). In this way, the morphology of the shell structure provides information about the ambient density of the ISM. The \object{Cygnus Loop} is one of the brightest SNRs in the X-ray sky. Its age is estimated to be $\sim$10,000 yr and its distance is comparatively small (540 pc; Blair et al. 2005). The large apparent size (2.5\arcdeg$\times$3.5\arcdeg; Levenson et al. 1997) enables us to study the plasma structure of the Loop. The origin of the \object{Cygnus Loop} is thought to be a cavity explosion \citep{McCray79, Hester86, Hester94, Levenson97}. The progenitor is presumed to be a B0, 15$M_\odot$ \ star \citep{Levenson98} and some X-ray studies also estimated the progenitor mass to be 12-15$M_\odot$ \ (\textit{e.g.}, Tsunemi et al. 2007). From the morphological point of view, the \object{Cygnus Loop} is a typical shell-like SNR and it is almost circular in shape. However, a large break is seen in the south, known as the ``blowout'' region \citep{Aschenbach99}. The origin of the ``blowout'' is not well understood. \citet{Aschenbach99} explained this extended structure as a breakout into a lower density ISM. On the other hand, based on a radio observation, \citet{Uyaniker02} suggested the existence of a secondary SNR in the south. Some other radio observations support this conclusion \citep{Uyaniker04, Sun06}. Recently, \citet{Uchida08} observed this region with the \textit{XMM-Newton} observatory and found that the X-ray spectra of this region consist of two plasma components with different electron temperatures.
Judging from the plasma structures and the metal distributions, they concluded that the X-ray emission is consistent with a \object{Cygnus Loop} origin and that the two plasma components are derived from the ejecta and the cavity material, respectively. They also showed that the X-ray shell is thin in their fields of view (FOV) and concluded that the origin of the blowout can be explained as a breakout into a lower density ISM, as proposed by \citet{Aschenbach99}. It is natural to consider that such a breakout may also exist along the line of sight. \citet{Tsunemi07} observed the \object{Cygnus Loop} along its diameter with \textit{XMM-Newton} and discussed this possibility. They divided their FOV into a north path and a south path and found the thin shell region to be 5\arcmin \ in the south path and 20\arcmin \ in the north path. They estimated this thin shell region to have a diameter of 1\arcdeg, centered on $\alpha = 20^{\mathrm h}49^{\mathrm m}11^{\mathrm s}$, $\delta = 31\arcdeg05\arcmin20\arcsec$. \citet{Kimura09} expanded their observation northward with \textit{Suzaku} and found that the flux of the swept-up matter in the southwest is about a third of that in the northeast. The width of this region is $\sim50\arcmin$. \citet{Kimura09} presumed that there is a blowout region along our line of sight in the middle west of the Loop. In this paper, we used the data of 41 observations obtained by the \textit{Suzaku} and \textit{XMM-Newton} observatories. We reanalyzed all the data to conduct a comprehensive study of the shell structure of the \object{Cygnus Loop}. \section{Observations} We summarize the 41 observations in table \ref{tab:sum}. All the data were taken between 2002 and 2008. The observing regions are shown in figure \ref{fig:HRI} (top). The circles and rectangles represent the FOV of the \textit{XMM-Newton} MOS and the \textit{Suzaku} XIS, respectively. All of the \textit{Suzaku} data were analyzed with version 6.5 of the HEAsoft tools.
For reduction of the \textit{Suzaku} data, we used version 9 of the Suzaku Software. The calibration database (CALDB) used was updated in July 2008. We used revision 2.2 of the cleaned event data and combined the 3$\times$3 and 5$\times$5 event files. We applied the spaced-row charge injection (SCI) method \citep{Prigozhin08} to the P1, P2, P3, P4, P5, P6, P7, P9, P10, P11, P18, P19, P20, P21, P22, P23, P24 and P25 data. This method reduces the effect of radiation damage of the XIS and recovers the energy resolution, for example, from 205$\pm6$ eV to 157$\pm4$ eV at 5.9 keV. In order to exclude the background flare events, we obtained good time intervals (GTIs) including only times at which the count rates were within $\pm2\sigma$ of the mean count rates. Since the \object{Cygnus Loop} is a large diffuse source and our FOV are filled with the SNR's emission, we could not obtain background spectra from our FOV. We also had no background data from the neighborhood of the \object{Cygnus Loop}. We therefore applied the Lockman Hole data for background subtraction. We examined the effect of the galactic ridge X-ray emission (GRXE). The flux of the GRXE at $l = 62\arcdeg$, $|b| < 0\arcdeg.4$ is $6\times10^{-12}$erg cm$^{-2}$s$^{-1}$deg$^{-2}$ (0.7-2.0 keV) \citep{Sugizaki01}. Although the \object{Cygnus Loop} ($l = 74\arcdeg$, $b = -8\arcdeg.6$) is located outside the FOV of \citet{Sugizaki01}, this value gives us an upper limit on the GRXE at the \object{Cygnus Loop}. Meanwhile, the flux of the \object{Cygnus Loop} is estimated to be $8.2\times10^{-10}$erg cm$^{-2}$s$^{-1}$deg$^{-2}$ (0.7-2.0 keV), assuming that the \object{Cygnus Loop} is a circle with a diameter of $3\arcdeg.0$. Therefore, we concluded that the effect of the GRXE on the \object{Cygnus Loop} is vanishingly small ($\lesssim$1\% of the Loop's surface brightness). The solar wind charge exchange (SWCX) is also considered to correlate with the soft X-ray background below 1 keV \citep{Fujimoto07}.
However, in terms of the \object{Cygnus Loop}, we consider that the SWCX is negligible because of the prominent surface brightness of the Loop. Thus, the Lockman Hole data obtained in 2006, 2007 and 2008 were applied for background subtraction. We selected data whose observation dates were close to those of the \object{Cygnus Loop} observations. Since there were no photons above 3.0 keV after background subtraction, the energy ranges of 0.2-3.0 keV and 0.4-3.0 keV were used for XIS1 (back-illuminated CCD; BI CCD) and XIS0,2,3 (front-illuminated CCD; FI CCD), respectively \citep{Koyama07}. All the \textit{XMM-Newton} data were processed with version 7.1.0 of the \textit{XMM} Science Analysis System (SAS). The current calibration files (CCFs) used were updated in June 2008. We used data obtained with the EPIC MOS and pn cameras. These data were taken by using the medium filters and the prime full-window mode. We selected X-ray events corresponding to patterns 0-12 and flag = 0 for MOS 1 and 2, and patterns 0-4 and flag = 0 for pn. In order to exclude background flare events, we determined the GTIs in the same way as those of the \textit{Suzaku} data. After filtering the data, they were vignetting-corrected by using the SAS task \textbf{evigweight}. For background subtraction, we employed blank-sky observations prepared by \citet{Read03}, for the same reason as in the \textit{Suzaku} case. After the background subtraction, the energy range of 0.3-3.0 keV was used for each instrument. \section{Spectral Analysis}\label{sec:specana} To investigate the plasma structure of the \object{Cygnus Loop}, we divided the entire FOV into small box regions. In order to equalize the statistics, we initially divided all images of XIS1 or MOS2 into two parts; if a divided region still had more than 10,000 photons, it was divided again. In this way, we obtained 1042 box regions. Each region contained 5,000-10,000 photons for XIS1 and MOS2.
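The adaptive binning described above can be sketched as a recursive bisection that halves a box along its longer side until the photon count drops below the threshold. The snippet below is an illustrative reconstruction with synthetic photon positions, not the authors' actual pipeline (the real procedure also enforces the 5,000-photon floor and works on detector images):

```python
import numpy as np

MAX_PHOTONS = 10_000  # split threshold quoted in the text

def split_regions(x, y, x0, x1, y0, y1):
    """Recursively bisect a box along its longer side until each box
    holds at most MAX_PHOTONS photons; returns (x0, x1, y0, y1, n) tuples."""
    inside = (x >= x0) & (x < x1) & (y >= y0) & (y < y1)
    n = int(inside.sum())
    if n <= MAX_PHOTONS:
        return [(x0, x1, y0, y1, n)]
    if (x1 - x0) >= (y1 - y0):            # cut the longer side in half
        xm = 0.5 * (x0 + x1)
        return (split_regions(x, y, x0, xm, y0, y1)
                + split_regions(x, y, xm, x1, y0, y1))
    ym = 0.5 * (y0 + y1)
    return (split_regions(x, y, x0, x1, y0, ym)
            + split_regions(x, y, x0, x1, ym, y1))

rng = np.random.default_rng(0)
x, y = rng.uniform(0.0, 1.0, size=(2, 60_000))  # synthetic uniform photon field
boxes = split_regions(x, y, 0.0, 1.0, 0.0, 1.0)
```

On a uniform field of 60,000 photons this terminates after three levels of bisection with eight boxes of roughly 7,500 photons each, mimicking the 5,000-10,000 photon range per region quoted above.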
The side length of each box ranges from 2.2\arcmin \ to 14\arcmin. Therefore, the box sizes are not smaller than the angular resolution capability of the \textit{Suzaku} XIS. We grouped the 1042 spectra into bins with a minimum of 20 counts such that $\chi^2$ statistics are appropriate. In order to generate a response matrix file (RMF) and an ancillary response file (ARF), we employed xisrmfgen \citep{Ishisaki07} and xissimarfgen for the \textit{Suzaku} data, and rmfgen and arfgen for the \textit{XMM-Newton} data. Firstly, we fitted all the spectra by a single-component variable abundance non-equilibrium ionization (VNEI) model. We employed \textbf{TBabs} (T\"{u}bingen-Boulder ISM absorption model; Wilms et al. 2000) and \textbf{VNEI} (NEI ver.2.0; Borkowski et al. 2001) in XSPEC version 12.5.0 \citep{Arnaud96}. In this model, the abundances for C, N, O, Ne, Mg, Si and Fe were free, while we set the abundance of S equal to that of Si and the abundance of Ni equal to that of Fe. The other elements were fixed to their solar values. The other parameters were all free, such as the electron temperature, $kT_e$, the ionization timescale, $\tau$ (a product of the electron density and the elapsed time after the shock heating), and the emission measure, EM ($=\int n_e n_{\rm H} dl$, where $n_e$ and $n_{\rm H}$ are the number densities of electron and hydrogen and $dl$ is the plasma depth). We also set the absorption column density, $\rm\textit{N}_H$, to be free. As a result, the spectra from the limb regions are well fitted by the single-component VNEI model. As shown by earlier observations of the northeast and the southeast limb \citep{Tsunemi07, Kimura09, Uchida09Nrim, Tsunemi09}, the spectra obtained from the limb regions of the Cygnus Loop are typically described by a single-component VNEI model. On the other hand, the spectra from the inner regions are generally not fitted by the single-component VNEI model.
From earlier observations of the northeast to the southwest regions along the diameter, \citet{Tsunemi07} found that the spectra from the inner regions of the \object{Cygnus Loop} consist of a two-component VNEI plasma. They described the plasma structure of the \object{Cygnus Loop} as follows: the high-$kT_e$ ejecta component is surrounded by a low-$kT_e$ ISM component. \citet{Uchida09ejecta} showed that the two-component VNEI model is wholly applicable to the inner regions of the \object{Cygnus Loop}. Therefore, we next added a high-$kT_e$ VNEI component to the single-component VNEI model. In this model, we fixed the metal abundances of the low-$kT_e$ component to the values obtained from the result of \citet{Tsunemi07}, since a model with all abundances left free did not yield physically meaningful results. \citet{Tsunemi07} gave the abundances of the ISM component relative to the solar values as follows: C=0.27, N=0.10, O=0.11, Ne=0.21, Mg=0.17, Si=0.34, S=0.17, Fe(=Ni)=0.20. In addition, they fixed the other elements to the solar values \citep{Anders89}. Meanwhile, in the high-$kT_e$ component, the abundances for O, Ne, Mg, Si, and Fe were free, while we set the abundances of C and N equal to that of O, S equal to Si, and Ni equal to Fe. The other elements were fixed to their solar values. The other parameters such as $kT_e$, $\tau$, EM, and $\rm\textit{N}_H$ were all free. We applied both the single-component VNEI model and the two-component VNEI model to all the spectra and determined which model is acceptable by using the F-test with a significance level of 99\%. As a result, roughly $<0.80R_{\rm s}$ of the northeast region and $<0.85R_{\rm s}$ of the southwest region need an additional component, where $R_{\rm s}$ is the shock radius. Here, we define the ``limb observations'' as the regions where the single-component VNEI model is acceptable and the ``inside observations'' as the remaining regions. Figure \ref{fig:spec} shows two example XIS1 spectra.
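The F-test selection between the nested models can be reproduced from the fit statistics alone. The sketch below uses the $\chi^2$/dof values quoted for region A in table \ref{tab:spec} and the common additional-component form of the F-test; it assumes scipy is available and is an illustration of the selection criterion, not the authors' actual code:

```python
from scipy.stats import f as f_dist

def f_test_extra_component(chi2_1, dof_1, chi2_2, dof_2, alpha=0.01):
    """F-test for whether the chi^2 improvement gained by the extra
    model component is significant at level alpha (nested models)."""
    dfn = dof_1 - dof_2                      # number of extra free parameters
    F = ((chi2_1 - chi2_2) / dfn) / (chi2_2 / dof_2)
    p = f_dist.sf(F, dfn, dof_2)             # survival function = p-value
    return F, p, p < alpha

# Region-A statistics from the spectral-fit table:
# single-component 1043/739, two-component 868/738
F, p, prefer_two = f_test_extra_component(1043, 739, 868, 738)
```

With these numbers the improvement is highly significant, so the two-component model would be selected at the 99\% level, matching the classification of region A as an inside observation.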
The spectral extraction regions are shown in figure \ref{fig:HRI}. Both regions belong to the inside observations, where the two-component VNEI model is applicable. The bottom two panels show the best-fit results of the two-component VNEI model. Blue and red lines represent the high-$kT_e$ and the low-$kT_e$ components, respectively. We also show the results with the single-component VNEI model in the top two panels for comparison. The best-fit parameters are shown in table \ref{tab:spec}. These results show that the reduced $\chi^2$ values are significantly improved with the two-component VNEI models. \section{Discussion} \subsection{Temperature distribution of the low-$kT_e$ component} All the spectra are well fitted by either the single-component VNEI model or the two-component VNEI model. From the best-fit parameters of the inside observations, we found that the electron temperature of the low-$kT_e$ component is almost uniform. The averaged value is 0.23 keV ($\sigma = 0.08$ keV), which is clearly lower than that of the high-$kT_e$ component (0.52 keV, $\sigma = 0.17$ keV). The temperature of the low-$kT_e$ component is close to that of the limb observations (0.29 keV, $\sigma = 0.07$ keV). Therefore we collectively call these components the ``low-$kT_e$ component'' hereafter. Figure \ref{fig:kTe} shows our FOV and the electron temperature distribution of the low-$kT_e$ component overlaid with the white contour from the \textit{ROSAT} HRI image. The averaged value is $\sim0.28$ keV and it ranges from 0.12 keV to 0.35 keV. Meanwhile, the temperature of the high-$kT_e$ component ranges from 0.4 keV to 0.9 keV, which is consistent with the previous observations \citep{Tsunemi07, Katsuda08ejecta, Kimura09, Uchida09ejecta}. Thus, we confirmed that the temperatures of the two components are clearly separated.
\citet{Uchida09ejecta} also showed that the temperature distribution of the high-$kT_e$ component is not uniform and that it is lower in the southwest part than in the northeast part. On the other hand, the temperature of the low-$kT_e$ component is relatively uniform (see figure \ref{fig:kTe}). The detailed distribution shows that the temperature near the center is lower than that of the surroundings. We also found that the temperature distribution is seamless at the boundary between the limb observations and the inside observations. Therefore, the low-$kT_e$ components of these regions must have the same origin. The spectra from the limb observations are obviously of swept-up ISM origin, and thus, we concluded that any low-$kT_e$ component originates from the ISM. \subsection{Line-of-sight Shell Structure of the Cygnus Loop} Taking into account the age of the \object{Cygnus Loop}, the reverse shock should have already reached its center. Therefore, on the assumption that the density of the ejecta-origin plasma is homogeneous, the X-ray flux depends exclusively on its plasma depth. In figure \ref{fig:spec}, the blue line represents the high-$kT_e$ component of the two-component VNEI model. Since the region-A and the region-B are located at the same radial distance from the center ($R\sim50\arcmin$, where we define $R$ as a distance from the ``geometric center'' determined by Levenson et al. 1998), they should have almost the same plasma depths. Accordingly, the fluxes of the high-$kT_e$ components are indeed similar, even though the spectral extraction regions are well separated. Meanwhile, the contributions of the low-$kT_e$ components are quite different, as shown with the red lines in figure \ref{fig:spec}. From the bottom left panel of figure \ref{fig:spec}, the flux of the low-$kT_e$ component in the region-A overwhelms that of the high-$kT_e$ component at 0.2-1.0 keV.
On the other hand, the contribution of the low-$kT_e$ component in the region-B is clearly smaller than that in the region-A. Such a difference should be attributed to the difference of the surrounding shell of each region. The value of the flux is proportional to EM ($\propto n_{\rm H}^2l$), which means that the surface brightness is sensitive to changes of the density and the plasma depth there. In order to estimate the ambient density of the \object{Cygnus Loop}, we calculated the fluxes of the low-$kT_e$ component from all regions. Figure \ref{fig:flux} left shows the 0.2-3.0 keV flux distribution of the low-$kT_e$ component. We also show that of the high-$kT_e$ component in the right panel. The flux distribution of the high-$kT_e$ component is relatively uniform compared with that of the low-$kT_e$ component. This reflects the fact that the ejecta component uniformly fills the inside of the Loop. In contrast, from the left panel, we clearly see the ``limb brightening'' which reflects the spherical shell structure. Therefore, we confirmed that the low-$kT_e$ component comes from the surrounding ISM. We also found that the northeast flux is higher than that in the southwest. It suggests that the density is higher in the direction of the northeast than in that of the southwest. The detailed shell structures are also seen in the left panel, for example, the ``V-shape'' knot at the southwest \citep{Aschenbach99, Leahy04}. From the left panel of figure \ref{fig:flux}, we found that the flux distribution inside the Loop is far from what we expect for a uniform shell structure. This suggests that the ambient density and the shell thickness vary considerably from region to region. Thus, we can study the line-of-sight shell structure of the Loop. Considering the relation between the surface brightness and the plasma density, the flux of the low-$kT_e$ component reflects the local density of the ISM.
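Since EM $= \int n_e n_{\rm H} dl$, a measured EM translates into a local hydrogen density once a plasma depth is assumed. The sketch below illustrates this inversion with purely hypothetical numbers (neither the EM value nor the depth is a fitted value from this work); $n_e \approx 1.2\,n_{\rm H}$ is assumed for a fully ionized plasma of roughly cosmic abundances:

```python
import math

PC_TO_CM = 3.086e18          # one parsec in cm

def n_h_from_em(em_cm5, depth_pc, ne_over_nh=1.2):
    """Hydrogen density (cm^-3) from EM = n_e * n_H * l, assuming a
    uniform plasma of the given line-of-sight depth."""
    return math.sqrt(em_cm5 / (ne_over_nh * depth_pc * PC_TO_CM))

# Illustrative numbers only: EM = 1e18 cm^-5 through a 13 pc chord
n_h = n_h_from_em(em_cm5=1.0e18, depth_pc=13.0)   # ~0.14 cm^-3
```

The square-root dependence is why the flux map is such a sensitive tracer: a factor-of-few flux contrast between neighboring regions already implies either a substantial density contrast or a very different shell depth along the line of sight.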
For example, in the bright region in the northeast part, the blast waves are considered to be expanding into dense ISM there. In contrast, there is a low-flux region at the south of the Loop (see figure \ref{fig:flux}). It suggests that the ambient density there is extremely low compared with other areas of the Loop. As shown by \citet{Uchida08}, we noticed that there is a large break in the south where the ISM density is very low. In general, the velocity of the blast wave toward such tenuous ISM should become higher than in other regions. Therefore, it forms a blowout, where the shell must be thin. From figure \ref{fig:flux}, we also found a large low-flux region slightly west of the \object{Cygnus Loop} center. Although our FOV does not cover the whole region, the structure is close to a circular form, and we estimated the diameter to be $\sim1.3\arcdeg$. The size is comparable to that of the south blowout. The existence of such a large low-flux region suggests that it has a blowout structure along the line of sight like the south blowout. This result confirms the prediction by \citet{Tsunemi07} and \citet{Kimura09}. From figure \ref{fig:flux}, the northeast of the center also has a lower flux than that of the surrounding region. It strongly indicates that the line-of-sight ambient density there is locally low, as in the south blowout. This region has a C-shaped structure which could be explained by the superposition of the circular low-flux region and the bright region where the blast wave interacts with a small cloud. We estimate the diameter of this low-flux region to be $\sim30\arcmin$. These results show that the ambient density of the \object{Cygnus Loop} is quite different from region to region. \subsection{Evidence of Cavity Explosion} To put our result into perspective, we plotted the flux of each component as a function of radius $R$ as shown in figure \ref{fig:flux_plot}.
From this figure, we found that the flux of the high-$kT_e$ component (shown as crosses) decreases from the center to the outside, which reflects the spherical structure of the ejecta filled inside the \object{Cygnus Loop}. On the other hand, the flux of the low-$kT_e$ component (circles) has a limb-brightening structure, as mentioned in the previous section. Furthermore, the low-$kT_e$ flux in the southwest ($R>0$) is systematically lower than that in the northeast. While the high-$kT_e$ flux distribution is approximately symmetric, the low-$kT_e$ flux is a few times higher in the northeast than in the southwest. In addition, looking at the inner region of the Loop, the flux distribution of the low-$kT_e$ component declines from $R=-50$ to $R=50$. This fact suggests that the ambient density of the \object{Cygnus Loop} globally decreases from the northeast to the southwest. In order to estimate the ambient density more quantitatively, we calculated the EM of the low-$kT_e$ component and plotted it as a function of $R$. Figure \ref{fig:EM_region} shows the EM distribution of the low-$kT_e$ component. We plotted the EM profiles from six rectangular regions with different azimuthal angles as shown in figure \ref{fig:EM_region} (NE-A to NE-E and SW). The resulting profiles are shown in figure \ref{fig:EM_plots}. We simulated the EM profile of the shell component derived from the Sedov solution with different ambient densities $n_0$ and estimated $n_0$ by comparing our observations with the EM models. In this model, we assume the shock radius of the \object{Cygnus Loop} to be 13 pc and that the ejecta fill 90$\%$ of it. The results are shown in figure \ref{fig:EM_plots} with red lines. We also show the best-fit models using the data only in the limb-brightening regions with green lines. As for the northeast regions, the EM profiles inside the Loop are close to the models of $n_0$=0.3-0.4 cm$^{-3}$ (red), while the EM values at the limb-brightening regions are higher than these models.
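The qualitative shape of such shell EM models can be sketched with a much simpler toy: a uniform shell of density $4n_0$ (strong-shock compression) between $0.9R_{\rm s}$ and $R_{\rm s}$, integrated along the line of sight. This stands in for the full Sedov interior profile used in the paper and assumes $n_e \approx n_{\rm H}$ in the shell; the $n_0$ value below is illustrative:

```python
import math

PC_TO_CM = 3.086e18          # one parsec in cm

def shell_em(r_pc, n0, r_s=13.0, fill=0.9):
    """EM (cm^-5) at projected radius r_pc through a uniform shell of
    density 4*n0 between fill*r_s and r_s (lengths in pc)."""
    if r_pc >= r_s:
        return 0.0
    r_in = fill * r_s
    chord = 2.0 * math.sqrt(r_s**2 - r_pc**2)        # chord through sphere
    if r_pc < r_in:
        chord -= 2.0 * math.sqrt(r_in**2 - r_pc**2)  # remove the interior
    return (4.0 * n0) ** 2 * chord * PC_TO_CM

em_center = shell_em(0.0, n0=0.35)   # line of sight crosses front and back walls
em_limb = shell_em(11.7, n0=0.35)    # tangent to the inner shell edge
```

Even this crude model reproduces the limb brightening (the tangent line of sight at the inner shell edge yields an EM several times the central value), which is the feature the Sedov comparison above exploits.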
On the other hand, applying the data only in the limb-brightening regions (green), $n_0$ increases to 0.7-0.9 cm$^{-3}$. In any case, there is no Sedov model which agrees with the EM profiles of the northeast part of the \object{Cygnus Loop}. The result is the same for the southwest region, although the ambient density $n_0$ is less than half of the northeast value. These results clearly show that the \object{Cygnus Loop} cannot be explained by a simple Sedov evolution model. \citet{McCray79} proposed that the \object{Cygnus Loop}'s SN explosion had occurred in a preexisting cavity, and some other studies also supported this scenario \citep{Hester86, Hester94, Levenson97}. Considering their results, it is natural that the EM distribution disagrees with a simple Sedov model, and thus, we concluded that our result also supports the cavity explosion as the origin of the \object{Cygnus Loop} from the standpoint of the X-ray spectral analysis. It should be noted that the Cygnus Loop is almost perfectly circular in shape, although the EM (or flux) is globally higher in the northeast than in the southwest. This fact strongly suggests that the northeast and the southwest blast waves should have hit the cavity wall very recently, and that the cavity-wall density is higher in the northeast than in the southwest. \section{Conclusion} By analyzing the X-ray spectra, we clearly distinguished the ISM component from the ejecta component, and established a method to investigate the line-of-sight shell structure. From the flux distribution of the ISM component, we found three low-flux regions in the FOV; one is the well-known south blowout, which is evidence of a cavity-wall break, and we also found other low-flux regions at the west and the northeast of the \object{Cygnus Loop} center. From the EM distribution of the ISM component, we conclude that the \object{Cygnus Loop} originated from a cavity explosion.
Moreover, the ISM component, or cavity wall, does not have a uniform structure but has many breaks and tenuous regions. We also found that the condition of the surrounding cavity wall is not uniform; its density is globally higher in the northeast than in the southwest. \acknowledgments H.U. would like to thank Professor Jacco Vink and his students for many useful discussions and their hospitality at Utrecht University. This work is partly supported by a Grant-in-Aid for Scientific Research by the Ministry of Education, Culture, Sports, Science and Technology (16002004). H.U. and S.K. are supported by the JSPS Research Fellowship for Young Scientists. \begin{figure} \begin{center} \includegraphics[width=120mm]{f1a.eps} \includegraphics[width=120mm]{f1b.eps} \end{center} \caption{\textit{Top}: \textit{ROSAT} HRI image of the entire Cygnus Loop. The circles and rectangles represent our FOV of the \textit{XMM-Newton} MOS and the \textit{Suzaku} XIS, respectively. \textit{Bottom}: Same as the top panel, but overlaid with the spectral extraction regions shown as small rectangles.}\label{fig:HRI} \end{figure} \begin{figure} \begin{center} \includegraphics[width=80mm]{f2a.eps} \includegraphics[width=80mm]{f2b.eps} \includegraphics[width=80mm]{f2c.eps} \includegraphics[width=80mm]{f2d.eps} \end{center} \caption{Example XIS1 spectra from the regions where the flux of the swept-up matter is high (region-A: left two panels) and low (region-B: right two panels), respectively (see figure \ref{fig:HRI}). The best-fit curves for the single-component VNEI models are shown by solid black lines in the top two panels. The bottom two panels are the same as the top panels, but show the fitting results with the two-component VNEI models. In the bottom panels, blue and red lines represent the high-$kT_e$ component and the low-$kT_e$ component, respectively.
The residuals are shown in lower panels.}\label{fig:spec} \end{figure} \clearpage \begin{figure} \begin{center} \includegraphics[width=120mm]{f3a.eps} \end{center} \caption{Our FOV and the electron temperature distribution of the low-$kT_e$ component overlaid with the white contour from the \textit{ROSAT} HRI image. The images are smoothed by Gaussian kernel of $\sigma=2.8\arcmin$. The values are in units of keV.}\label{fig:kTe} \end{figure} \clearpage \begin{figure} \begin{center} \includegraphics[width=160mm]{f4a.eps} \end{center} \caption{0.2-3.0 keV flux distribution of the low-$kT_e$ (left) and the high-$kT_e$ (right) component in logarithmic scales overlaid with the white contour of the \textit{ROSAT} HRI image. The images are smoothed by Gaussian kernel of $\sigma=2.8\arcmin$. The values are in units of counts cm$^{-2}$s$^{-1}$arcmin$^{-2}$ and the scale parameters correspond with each other. Blue and red correspond to $\sim10^{-4}$ and $\sim10^{-3}$ counts cm$^{-2}$s$^{-1}$arcmin$^{-2}$, respectively.}\label{fig:flux} \end{figure} \begin{figure} \begin{center} \includegraphics[width=150mm]{f5a.eps} \end{center} \caption{Averaged flux profile as a function of $R$. The circles and triangles represent the flux of low-$kT_e$ and high-$kT_e$ components, respectively.}\label{fig:flux_plot} \end{figure} \begin{figure} \begin{center} \includegraphics[width=160mm]{f6a.eps} \end{center} \caption{EM distribution of the low-$kT_e$ component in logarithmic scales overlaid with the white contour of the \textit{ROSAT} HRI image.}\label{fig:EM_region} \end{figure} \begin{figure} \begin{center} \includegraphics[width=75mm]{f7a.eps} \includegraphics[width=75mm]{f7b.eps} \includegraphics[width=75mm]{f7c.eps} \includegraphics[width=75mm]{f7d.eps} \includegraphics[width=75mm]{f7e.eps} \includegraphics[width=75mm]{f7f.eps} \end{center} \caption{EM profiles as a function of $R$ calculated from the data in the rectangular regions shown in figure \ref{fig:EM_region}. 
The EM profiles based on the Sedov model and the estimated ambient densities $n_0$ are shown in red and green (see text).}\label{fig:EM_plots} \end{figure} \clearpage \begin{deluxetable}{lcccc} \tablewidth{0pt} \tablecaption{Summary of the 41 observations\label{tab:sum}} \tablehead{\colhead{Obs. ID} & \colhead{Obs. Date}& \colhead{RA, DEC (J2000)} & \colhead{Position Angle} & \colhead{Effective Exposure}} \startdata \sidehead{\textit{Suzaku Observations}} 501012010 (P1) & 2007-11-14 & 20$^{\mathrm h}$54$^{\mathrm m}$07.6$^{\mathrm s}$, 31\arcdeg57\arcmin22.0\arcsec & 240$^\circ$.0 & 9.8 ksec\\ 501013010 (P2) & 2007-11-14 & 20$^{\mathrm h}$53$^{\mathrm m}$08.5$^{\mathrm s}$, 31\arcdeg45\arcmin40.3\arcsec & 240$^\circ$.0 & 16.4 ksec\\ 501014010 (P3) & 2007-11-14 & 20$^{\mathrm h}$52$^{\mathrm m}$09.9$^{\mathrm s}$, 31\arcdeg36\arcmin43.4\arcsec & 240$^\circ$.0 & 16.9 ksec\\ 501015010 (P4) & 2007-11-14 & 20$^{\mathrm h}$51$^{\mathrm m}$11.8$^{\mathrm s}$, 31\arcdeg22\arcmin08.4\arcsec & 240$^\circ$.0 & 18.3 ksec\\ 501016010 (P5) & 2007-11-15 & 20$^{\mathrm h}$50$^{\mathrm m}$11.3$^{\mathrm s}$, 31\arcdeg10\arcmin48.0\arcsec & 240$^\circ$.0 & 19.3 ksec\\ 501017010 (P6) & 2007-11-11 & 20$^{\mathrm h}$49$^{\mathrm m}$11.3$^{\mathrm s}$, 30\arcdeg59\arcmin27.6\arcsec & 240$^\circ$.0 & 28.7 ksec\\ 501018010 (P7) & 2007-11-12 & 20$^{\mathrm h}$48$^{\mathrm m}$18.7$^{\mathrm s}$, 30\arcdeg46\arcmin33.6\arcsec & 240$^\circ$.0 & 21.0 ksec\\ 501028010 (P8) & 2006-05-13 & 20$^{\mathrm h}$55$^{\mathrm m}$56.3$^{\mathrm s}$, 31\arcdeg28\arcmin56.2\arcsec & 62$^\circ$.5 & 4.9 ksec\\ 501019010 (P9) & 2007-11-12 & 20$^{\mathrm h}$47$^{\mathrm m}$14.2$^{\mathrm s}$, 30\arcdeg36\arcmin10.8\arcsec & 240$^\circ$.0 & 16.2 ksec\\ 501020010 (P10) & 2007-11-13 & 20$^{\mathrm h}$46$^{\mathrm m}$20.8$^{\mathrm s}$, 30\arcdeg23\arcmin22.6\arcsec & 240$^\circ$.0 & 14.7 ksec\\ 503055010 (P11) & 2008-05-09 & 20$^{\mathrm h}$49$^{\mathrm m}$48.7$^{\mathrm s}$, 31\arcdeg30\arcmin18.0\arcsec & 
50$^\circ$.0 & 22.2 ksec\\ 501029010 (P12) & 2006-05-09 & 20$^{\mathrm h}$55$^{\mathrm m}$00.0$^{\mathrm s}$, 31\arcdeg15\arcmin46.8\arcsec & 62$^\circ$.1 & 13.2 ksec\\ 501030010 (P13) & 2006-05-10 & 20$^{\mathrm h}$53$^{\mathrm m}$59.3$^{\mathrm s}$, 31\arcdeg03\arcmin39.6\arcsec & 68$^\circ$.2 & 13.9 ksec\\ 501031010 (P14) & 2006-05-12 & 20$^{\mathrm h}$52$^{\mathrm m}$58.8$^{\mathrm s}$, 30\arcdeg51\arcmin32.4\arcsec & 62$^\circ$.4 & 18.2 ksec\\ 501032010 (P15) & 2006-05-25 & 20$^{\mathrm h}$51$^{\mathrm m}$58.6$^{\mathrm s}$, 30\arcdeg39\arcmin10.8\arcsec & 62$^\circ$.0 & 17.4 ksec\\ 501033010 (P16) & 2006-05-22 & 20$^{\mathrm h}$50$^{\mathrm m}$58.8$^{\mathrm s}$, 30\arcdeg27\arcmin00.0\arcsec & 62$^\circ$.0 & 20.0 ksec\\ 501034010 (P17) & 2006-05-22 & 20$^{\mathrm h}$48$^{\mathrm m}$49.7$^{\mathrm s}$, 30\arcdeg00\arcmin21.6\arcsec & 62$^\circ$.0 & 13.9 ksec\\ 501035010 (P18) & 2006-12-18 & 20$^{\mathrm h}$48$^{\mathrm m}$16.2$^{\mathrm s}$, 29\arcdeg42\arcmin07.2\arcsec & 237$^\circ$.5 & 11.2 ksec\\ 501036010 (P19) & 2006-12-18 & 20$^{\mathrm h}$47$^{\mathrm m}$17.3$^{\mathrm s}$, 30\arcdeg04\arcmin21.4\arcsec & 237$^\circ$.5 & 11.8 ksec\\ 503056010 (P20) & 2008-05-10 & 20$^{\mathrm h}$48$^{\mathrm m}$00.0$^{\mathrm s}$, 31\arcdeg10\arcmin30.0\arcsec & 50$^\circ$.0 & 22.5 ksec\\ 503057010 (P21) & 2008-06-02 & 20$^{\mathrm h}$52$^{\mathrm m}$43.8$^{\mathrm s}$, 32\arcdeg26\arcmin19.0\arcsec & 61$^\circ$.9 & 16.2 ksec\\ 503058010 (P22) & 2008-06-03 & 20$^{\mathrm h}$51$^{\mathrm m}$17.2$^{\mathrm s}$, 32\arcdeg25\arcmin24.6\arcsec & 61$^\circ$.4 & 19.3 ksec\\ 503059010 (P23) & 2008-06-03 & 20$^{\mathrm h}$49$^{\mathrm m}$50.6$^{\mathrm s}$, 32\arcdeg21\arcmin50.8\arcsec & 61$^\circ$.9 & 19.5 ksec\\ 503060010 (P24) & 2008-06-04 & 20$^{\mathrm h}$48$^{\mathrm m}$28.2$^{\mathrm s}$, 32\arcdeg17\arcmin44.5\arcsec & 61$^\circ$.4 & 18.5 ksec\\ 503061010 (P25) & 2008-06-04 & 20$^{\mathrm h}$47$^{\mathrm m}$22.7$^{\mathrm s}$, 32\arcdeg10\arcmin22.8\arcsec & 
60$^\circ$.9 & 26.0 ksec\\ 503062010 (P26) & 2008-05-13 & 20$^{\mathrm h}$56$^{\mathrm m}$26.5$^{\mathrm s}$, 30\arcdeg19\arcmin55.2\arcsec & 49$^\circ$.8 & 16.9 ksec\\ 503063010 (P27) & 2008-05-13 & 20$^{\mathrm h}$55$^{\mathrm m}$16.3$^{\mathrm s}$, 30\arcdeg01\arcmin44.0\arcsec & 49$^\circ$.6 & 22.8 ksec\\ 503064010 (P28) & 2008-05-14 & 20$^{\mathrm h}$53$^{\mathrm m}$51.6$^{\mathrm s}$, 29\arcdeg54\arcmin42.5\arcsec & 49$^\circ$.1 & 18.2 ksec\\ 500020010 (NE1) & 2005-11-23 & 20$^{\mathrm h}$56$^{\mathrm m}$48.9$^{\mathrm s}$, 31\arcdeg56\arcmin54.8\arcsec & 223$^\circ$.0 & 20.4 ksec\\ 500021010 (NE2) & 2005-11-24 & 20$^{\mathrm h}$55$^{\mathrm m}$56.0$^{\mathrm s}$, 31\arcdeg56\arcmin53.2\arcsec & 223$^\circ$.0 & 21.4 ksec\\ 500022010 (NE3) & 2005-11-29 & 20$^{\mathrm h}$55$^{\mathrm m}$05.6$^{\mathrm s}$, 32\arcdeg10\arcmin35.4\arcsec & 222$^\circ$.9 & 21.7 ksec\\ 500023010 (NE4) & 2005-11-30 & 20$^{\mathrm h}$54$^{\mathrm m}$03.8$^{\mathrm s}$, 32\arcdeg21\arcmin47.9\arcsec & 221$^\circ$.2 & 25.3 ksec\\ \sidehead{\textit{XMM-Newton Observations}} 0082540101 (Pos-1) & 2002-11-25 & 20$^{\mathrm h}$55$^{\mathrm m}$23.6$^{\mathrm s}$, 31\arcdeg46\arcmin17.0\arcsec & 241$^\circ$.7 & 14.7 ksec\\ 0082540201 (Pos-2) & 2002-12-03 & 20$^{\mathrm h}$54$^{\mathrm m}$07.2$^{\mathrm s}$, 31\arcdeg30\arcmin51.1\arcsec & 241$^\circ$.7 & 14.4 ksec\\ 0082540301 (Pos-3) & 2002-12-05 & 20$^{\mathrm h}$52$^{\mathrm m}$51.1$^{\mathrm s}$, 31\arcdeg15\arcmin25.7\arcsec & 241$^\circ$.7 & 11.6 ksec\\ 0082540401 (Pos-4) & 2002-12-07 & 20$^{\mathrm h}$51$^{\mathrm m}$34.7$^{\mathrm s}$, 31\arcdeg00\arcmin00.0\arcsec & 241$^\circ$.7 & 4.9 ksec\\ 0082540501 (Pos-5) & 2002-12-09 & 20$^{\mathrm h}$50$^{\mathrm m}$18.4$^{\mathrm s}$, 30\arcdeg44\arcmin34.3\arcsec & 231$^\circ$.4 & 12.6 ksec\\ 0082540601 (Pos-6) & 2002-12-11 & 20$^{\mathrm h}$49$^{\mathrm m}$02.0$^{\mathrm s}$, 30\arcdeg29\arcmin08.6\arcsec & 241$^\circ$.7 & 11.5 ksec\\ 0082540701 (Pos-7) & 2002-12-13 & 20$^{\mathrm 
h}$47$^{\mathrm m}$45.8$^{\mathrm s}$, 30\arcdeg13\arcmin42.9\arcsec & 241$^\circ$.7 & 13.7 ksec\\ 0405490101 (Pos-8) & 2006-05-13 & 20$^{\mathrm h}$50$^{\mathrm m}$32.2$^{\mathrm s}$, 30\arcdeg11\arcmin00.0\arcsec & 69$^\circ$.9 & 6.5 ksec\\ 0405490201 (Pos-9) & 2006-05-13 & 20$^{\mathrm h}$49$^{\mathrm m}$54.2$^{\mathrm s}$, 29\arcdeg42\arcmin25.0\arcsec & 69$^\circ$.8 & 3.6 ksec\\ \enddata \end{deluxetable} \begin{table} \caption{Spectral fit parameters}\label{tab:spec} \begin{center} \begin{tabular}{ccccc} \tableline \tableline & \multicolumn{2}{c}{\textit{single-component VNEI model}} & \multicolumn{2}{c}{\textit{two-component VNEI model}} \\ \tableline & region A & region B & region A & region B\\ \tableline N$\rm _H$ [10$^{20}$cm$^{-2}$] & 1.8 $\pm$ 0.3 & 3.4 $\pm$ 0.3 & 5.2 $\pm$ 0.2 & 7.0 $\pm$ 0.3 \\ & & & \multicolumn{2}{c}{\textit{Low-$kT_e$ component:}} \\ \ \ $kT_e$ [keV] & 0.59 $\pm$ 0.03 & 0.42 $\pm$ 0.02 & 0.24 $\pm$ 0.01 & 0.12 $\pm$ 0.01 \\ \ \ C & 0.24 $\pm$ 0.05 & 0.96 $\pm$ 0.21 & \multicolumn{2}{c}{0.27 (fixed)}\\ \ \ N & 0.22 $\pm$ 0.05 & 0.09 $\pm$ 0.03 & \multicolumn{2}{c}{0.10 (fixed)}\\ \ \ O & 0.23 $\pm$ 0.02 & 0.14 $\pm$ 0.02 & \multicolumn{2}{c}{0.11 (fixed)}\\ \ \ Ne & 0.44 $\pm$ 0.04 & 0.31 $\pm$ 0.03 & \multicolumn{2}{c}{0.21 (fixed)}\\ \ \ Mg & 0.26 $\pm$ 0.03 & 0.24 $\pm$ 0.03 & \multicolumn{2}{c}{0.17 (fixed)}\\ \ \ Si & 0.27 $\pm$ 0.06 & 0.30 $\pm$ 0.06 & \multicolumn{2}{c}{0.34 (fixed)}\\ \ \ S & (=Si) & (=Si) & \multicolumn{2}{c}{0.17 (fixed)}\\ \ \ Fe(=Ni) & 0.36 $\pm$ 0.04 & 0.22 $\pm$ 0.02 & \multicolumn{2}{c}{0.20 (fixed)}\\ \ \ log $\tau$ & 10.42 $\pm$ 0.03 & 10.82 $^{+ 0.07}_{- 0.08}$ & 11.32 $^{+ 0.12}_{- 0.16}$ & $<$ 12\\ \ \ flux [counts cm$^{-2}$s$^{-1}$arcmin$^{-2}$] & $8.90 \times 10^{-4}$ & $4.34 \times 10^{-4}$ & $7.49 \times 10^{-4}$ & $2.18 \times 10^{-4}$\\ & & & \multicolumn{2}{c}{\textit{High-$kT_e$ component:}} \\ \ \ $kT_e$ [keV] & \nodata & \nodata & 0.88 $\pm$ 0.13 & 0.43 $\pm$ 0.02\\ \ \ O(=C=N) & 
\nodata & \nodata & 0.34 $\pm$ 0.13 & 0.38 $\pm$ 0.07\\ \ \ Ne & \nodata & \nodata & 0.82 $\pm$ 0.26 & 0.74 $\pm$ 0.12\\ \ \ Mg & \nodata & \nodata & 0.56 $\pm$ 0.19 & 0.55 $\pm$ 0.10\\ \ \ Si(=S) & \nodata & \nodata & 1.28 $\pm$ 0.42 & 0.79 $\pm$ 0.16\\ \ \ Fe(=Ni) & \nodata & \nodata & $<$ 1.43 & 0.48 $\pm$ 0.08\\ \ \ log $\tau$ & \nodata & \nodata & 10.66 $^{+ 0.09}_{- 0.12}$ & 11.11 $\pm$ 0.06\\ \ \ flux [counts cm$^{-2}$s$^{-1}$arcmin$^{-2}$] & \nodata & \nodata & $1.41 \times 10^{-4}$ & $2.16 \times 10^{-4}$\\ $\chi ^2$/dof & 1043/739 & 728/548 & 868/738 & 637/547\\ \tableline \end{tabular} \tablecomments{Other elements are fixed to solar values.} \end{center} \end{table} \clearpage
\section{Proof of Lemma \lowercase{\ref{lem:noisy_unknown_cdf}} and Auxiliary Lemmas}\label{sec:proof_noisy_unknown_CDF} The purpose of this section is to prove Lemma \ref{lem:noisy_unknown_cdf}, which provides a probabilistic uniform bound on the CDF estimate $\hat{F}^{(i)}$. We will prove the desired result by showing (1) $\hat{F}^{(i)}$ concentrates around its expectation; (2) the expectation of $\hat{F}^{(i)}$ is close to that of $\tilde{F}^{(i)}$ under a high-probability conditioning event; and (3) the expectation of $\tilde{F}^{(i)}$ is uniformly close to the true CDF as shown in Lemma \ref{lem:mean_difference_tilde}. Claim (1) is proved in Appendix \ref{sec:concentration_hat} by essentially the same argument as in the known-noise case (see Lemma \ref{lem:sup_tilde}), and claim (3) has already been shown. It is the proof of claim (2) to which most of this section is devoted. Throughout the first three subsections (\ref{sec:base_size}, \ref{sec:control_discrepancy}, \ref{sec:est_characteristic_ftn}), we show that the size of the set for noise density estimation, $\cT_i$, is neither too big nor too small. With the aid of auxiliary lemmas, we show that the estimated characteristic function of the noise is accurate enough for the modified kernel estimator to be sufficiently precise. The summarized result can be found in Appendix \ref{sec:bias_hat_tilde}, which characterizes the bias between $\tilde{F}^{(i)}$ and $\hat{F}^{(i)}$. In Appendix \ref{sec:conditioning}, we introduce appropriate conditioning events which are used to prove claim (2), all of which are high-probability events according to the lemmas proved here. In the end, Lemma \ref{lem:noisy_unknown_cdf} is proved by applying a union bound. \subsection{The size of the base set $\cT_i$ for noise density estimation}\label{sec:base_size} We defined the set $\cT_i$ to estimate the distribution of additive noise by emulating the setup of repeated measurements.
In this section, we present two lemmas: on the one hand, Lemma \ref{lem:T_large} shows there are plenty of triples in $\cT_i$ enabling the estimation; on the other hand, Lemma \ref{lem:T_not_too_big} claims that there are not too many triples in $\cT \supset \cT_i$. Later, these lemmas will be used in combination with Lemma \ref{lem:dN_max} to ensure that the noise distribution can be estimated from triples in $\cT_i$ with high probability. \begin{lemma}\label{lem:IandJ} The sets $J$ and $I$ defined in Algorithm \ref{alg:setT} are sufficiently large with high probability. Specifically, \begin{align*} \mathbb{P}\bigg(|J| \leq \frac{n \left[ 1 - \exp\left( - \frac{mp}{8} \right) \right]}{2} \bigg) &\leq \exp\bigg( - \frac{n \left[ 1 - \exp\left( - \frac{mp}{8} \right) \right]}{8} \bigg),\\ \mathbb{P}\bigg(|I| \leq \frac{m \left[ 1 - \exp\left( - \frac{|J|p}{8} \right) \right]}{2} \bigg) &\leq \exp\bigg( - \frac{m \left[ 1 - \exp\left( - \frac{|J|p}{8} \right) \right]}{8} \bigg). \end{align*} \end{lemma} \begin{proof} Recall the construction procedure of the set $\cT$ (see Algorithm \ref{alg:setT}). The number of column indices in $J$ is given as the sum of indicator variables \begin{equation}\label{eqn:J} |J| := \sum_{j \in [n]} \Ind{\left| \cB^{j} \right| \geq \frac{mp}{2}}. \end{equation} Note that $|\cB^j| = \sum_{i \in [m]} M(i,j)$ is distributed as $Binomial(m, p)$. It follows from the binomial Chernoff bound that \[ \Prob {\left| \cB^{j} \right| \geq \frac{mp}{2} } \geq 1 - \exp\left( - \frac{mp}{8} \right). \] Hence, the $n$ indicator variables in Eq. \eqref{eqn:J} are independent Bernoulli variables, each of which takes value $1$ with probability at least $1 - \exp\left( - \frac{mp}{8} \right)$. Therefore, $|J| \sim Binomial(n, p_2)$ with $p_2 \geq 1 - \exp\left( - \frac{mp}{8} \right)$.
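For reference, every tail estimate in this proof instantiates the multiplicative Chernoff lower-tail bound for a binomial variable: for $X \sim Binomial(N, q)$ with mean $\mu = Nq$ and any $0 < \delta < 1$,
\[
\Prob{X \leq (1-\delta)\mu} \leq \exp\left( - \frac{\delta^2 \mu}{2} \right),
\]
so that taking $\delta = \frac{1}{2}$ yields $\Prob{X \leq \mu/2} \leq \exp\left( - \frac{\mu}{8} \right)$, the form used above and below.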
It follows that \begin{align*} \Prob{|J| \leq \frac{n \left[ 1 - \exp\left( - \frac{mp}{8} \right) \right]}{2} } &\leq \Prob{ |J| \leq \frac{np_2}{2}}\\ & \leq \exp\left( - \frac{np_2}{8} \right) \\ & \leq \exp\left( - \frac{n \left[ 1 - \exp\left( - \frac{mp}{8} \right) \right]}{8} \right). \end{align*} In the same vein, the number of row indices in $I$ is given as the sum of indicator variables \[ |I| := \sum_{i \in [m]} \Ind{\left| \cB_{i} \cap J \right| \geq \frac{|J|p}{2}}. \] Now $|\cB_i \cap J| = \sum_{j \in J} M(i,j)$ is distributed as $Binomial(|J|, p')$ with $p' \geq p$, because $p' = \Prob{M(i,j) = 1 \left| j \in J \right.} \geq \Prob{M(i,j) = 1} = p$. These $m$ indicator variables are independent Bernoulli variables, each of which takes value $1$ with probability at least $1 - \exp\left( - \frac{|J|p}{8} \right)$, since \[ \Prob {\left| \cB_{i} \cap J \right| \geq \frac{|J|p}{2} } \geq 1 - \exp\left( - \frac{|J|p}{8} \right). \] Therefore, $|I| \sim Binomial(m, p_3)$ with $p_3 \geq 1 - \exp\left( - \frac{|J|p}{8} \right)$. It follows that \begin{align*} \mathbb{P}\Bigg( |I| \leq \frac{m \left[ 1 - \exp\left( - \frac{|J|p}{8} \right) \right]}{2} \Bigg) &\leq \mathbb{P}\Big( | I | \leq \frac{mp_3}{2} \Big)\\ & \leq \exp\left( - \frac{mp_3}{8} \right) \\ & \leq \exp\Bigg( - \frac{m \left[ 1 - \exp\left( - \frac{|J|p}{8} \right) \right]}{8} \Bigg). \end{align*} \end{proof} \begin{lemma}\label{lem:T_large} For any $i \in [m]$, \[ \left| \cT_i \right| \geq \Bigg(\frac{m \left[ 1 - \exp\left( - \frac{|J|p}{8} \right) \right]}{2} - 1 \Bigg) \Bigg\lceil \frac{\frac{|J|p}{2} - 1 - \lfloor \sqrt{\frac{|J|p}{2} }\rfloor}{2}\Bigg\rceil, \] with probability at least $1 - \exp\bigg( - \frac{m \left[ 1 - \exp\left( - \frac{|J|p}{8} \right) \right]}{8} \bigg)$. \end{lemma} \begin{proof} Recall the construction procedure of the sets $\cT$ and $\cT_i$ (see Algorithm \ref{alg:setT}).
Given $i' \in I$, we let $\sigma_{i'}: \cB_{i'} \cap J \to \left[ |\cB_{i'} \cap J | \right]$ denote a map from the column indices in $\cB_{i'} \cap J \subseteq [n]$ to the integers $1, 2, \ldots, |\cB_{i'} \cap J |$ such that $\sigma_{i'}(j_1) < \sigma_{i'}(j_2)$ implies $\hat{q}_{\marg}\left( j_1 \right) \leq \hat{q}_{\marg}\left( j_2 \right)$. Note that $\sigma_{i'}$ is a bijection, with inverse $\sigma_{i'}^{-1}: [|\cB_{i'} \cap J |] \to \cB_{i'} \cap J \subseteq [n]$. First of all, we show that there cannot exist more than $\left\lfloor \sqrt{\left| \cB_{i'} \cap J \right|} \right\rfloor$ indices $k \in \left[ \left| \cB_{i'} \cap J \right| -1 \right]$ such that \begin{equation}\label{eqn:bad_condition} \Big| \hat{q}_{\marg}\left( \sigma_{i'}^{-1}(k+1) \right) - \hat{q}_{\marg}\left( \sigma_{i'}^{-1}(k) \right) \Big| > \frac{1}{\sqrt{\left|\cB_{i'} \cap J \right| }}. \end{equation} Let $[a,b)$ denote the half-open interval, that is to say, $[a,b):= \left\{ x \in \Reals : a \leq x < b \right\}$.
If $k_1 \neq k_2$, $$\Big[ \hat{q}_{\marg}\left( \sigma_{i'}^{-1}(k_1) \right), \hat{q}_{\marg}\left( \sigma_{i'}^{-1}(k_1 + 1) \right)\Big) \cap \Big[ \hat{q}_{\marg}\left( \sigma_{i'}^{-1}(k_2) \right), \hat{q}_{\marg}\left( \sigma_{i'}^{-1}(k_2 + 1) \right)\Big) = \emptyset,$$ and hence, \begin{align*} &\mu \left( \Big[ \hat{q}_{\marg}\left( \sigma_{i'}^{-1}(k_1) \right), \hat{q}_{\marg}\left( \sigma_{i'}^{-1}(k_1 + 1) \right)\Big) \cup \Big[ \hat{q}_{\marg}\left( \sigma_{i'}^{-1}(k_2) \right), \hat{q}_{\marg}\left( \sigma_{i'}^{-1}(k_2 + 1) \right)\Big) \right)\\ &\qquad = \mu \left( \Big[ \hat{q}_{\marg}\left( \sigma_{i'}^{-1}(k_1) \right), \hat{q}_{\marg}\left( \sigma_{i'}^{-1}(k_1 + 1) \right)\Big) \right)\\ &\qquad\quad + \mu \left(\Big[ \hat{q}_{\marg}\left( \sigma_{i'}^{-1}(k_2) \right), \hat{q}_{\marg}\left( \sigma_{i'}^{-1}(k_2 + 1) \right)\Big) \right), \end{align*} where $\mu$ is the Lebesgue measure on $\Reals$, and $\mu \left( [a, b) \right) = ( b -a ) \Ind{b \geq a}$. Let $\cS_{i'}$ denote the set of indices $k \in \left[ \left| \cB_{i'} \cap J \right| -1 \right]$ that satisfy Eq. \eqref{eqn:bad_condition}. Assume, for the sake of contradiction, that $ \left| \cS_{i'} \right| \geq \left\lfloor \sqrt{\left| \cB_{i'} \cap J \right|} \right\rfloor + 1$.
Since $\hat{q}_{\marg}\left( \sigma_{i'}^{-1}(k) \right) \in [0,1], \forall k \in \big[|\cB_{i'} \cap J | \big]$, \begin{align*} 1 = \mu \left( [0,1 ) \right) &\geq \mu \left( \bigcup_{k \in \left[ \left| \cB_{i'} \cap J \right| - 1 \right]} \Big[ \hat{q}_{\marg}\left(\sigma_{i'}^{-1}(k)\right), \hat{q}_{\marg}\left(\sigma_{i'}^{-1}(k+1)\right) \Big)\right)\\ &\geq \mu \left( \bigcup_{k \in \cS_{i'}} \Big[ \hat{q}_{\marg}\left(\sigma_{i'}^{-1}(k)\right), \hat{q}_{\marg}\left(\sigma_{i'}^{-1}(k+1)\right) \Big) \right)\\ &= \sum_{k \in \cS_{i'}} \Big( \hat{q}_{\marg}\left(\sigma_{i'}^{-1}(k+1)\right)- \hat{q}_{\marg}\left(\sigma_{i'}^{-1}(k)\right) \Big)\\ &\geq \left( \left\lfloor \sqrt{\left| \cB_{i'} \cap J \right|} \right\rfloor + 1 \right) \left( \frac{1}{\sqrt{\left|\cB_{i'} \cap J \right| }} \right)\\ &> 1, \end{align*} which is a contradiction. Therefore, it is proved that $\left| \cS_{i'} \right| \leq \left\lfloor \sqrt{\left|\cB_{i'} \cap J \right| } \right\rfloor.$ For those $k \in \left[ \left|\cB_{i'} \cap J \right| - 1 \right] \setminus \cS_{i'}$, we have $$\hat{q}_{\marg}\left(\sigma_{i'}^{-1}(k+1)\right)- \hat{q}_{\marg}\left(\sigma_{i'}^{-1}(k)\right) \leq \frac{1}{\sqrt{|\cB_{i'} \cap J |}}.$$ In case both $k, k+1\in \left[ \left|\cB_{i'} \cap J \right| - 1 \right] \setminus \cS_{i'}$, either $\Big(i', \sigma_{i'}^{-1}(k), \sigma_{i'}^{-1}(k+1)\Big) \in \cT$ or $\Big(i', \sigma_{i'}^{-1}(k+1), \sigma_{i'}^{-1}(k+2)\Big) \in \cT$, but not both. However, no more than half of the indices $k \in \big[ \left|\cB_{i'} \cap J \right| - 1 \big] \setminus \cS_{i'}$ are excluded, so there exist at least $ \Big\lceil \frac{\left|\cB_{i'} \cap J \right| - 1 - \left\lfloor \sqrt{\left|\cB_{i'} \cap J \right| }\right\rfloor}{2}\Big\rceil $ values of $k$ such that $\left(i', \sigma_{i'}^{-1}(k), \sigma_{i'}^{-1}(k+1)\right) \in \cT$. From Lemma \ref{lem:IandJ}, we know that $|I| > \frac{m \left[ 1 - \exp\left( - \frac{|J|p}{8} \right) \right]}{2}$ with high probability (the $-1$ in the bound below accounts for the fact that $i$ itself may belong to $I$).
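The pigeonhole step above, that among $K$ sorted values in $[0,1]$ at most $\left\lfloor \sqrt{K} \right\rfloor$ consecutive gaps can exceed $\frac{1}{\sqrt{K}}$, can also be checked numerically. The snippet below is an illustration only (the function names are ours), not part of the proof:

```python
import math
import random

def count_big_gaps(values):
    """Count consecutive gaps exceeding 1/sqrt(K) among K sorted points in [0, 1]."""
    xs = sorted(values)
    threshold = 1.0 / math.sqrt(len(xs))
    return sum(1 for a, b in zip(xs, xs[1:]) if b - a > threshold)

random.seed(0)
for K in [10, 50, 200]:
    xs = [random.random() for _ in range(K)]
    # Gaps sum to at most 1, so at most floor(sqrt(K)) of them can exceed 1/sqrt(K).
    assert count_big_gaps(xs) <= math.floor(math.sqrt(K))
```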
We also know from the argument above that for each $i' \in I$, there exist at least $ \Big\lceil \frac{\left|\cB_{i'} \cap J\right| - 1 - \left\lfloor \sqrt{\left|\cB_{i'} \cap J \right| }\right\rfloor}{2}\Big\rceil \geq \Big\lceil \frac{\frac{|J|p}{2} - 1 - \lfloor \sqrt{\frac{|J|p}{2} }\rfloor}{2}\Big\rceil $ values of $k$ such that $\left(i', \sigma_{i'}^{-1}(k), \sigma_{i'}^{-1}(k+1)\right) \in \cT$. All in all, we can conclude that \[ \left| \cT_i \right| \geq \Bigg(\frac{m \left[ 1 - \exp\left( - \frac{|J|p}{8} \right) \right]}{2} - 1 \Bigg) \Bigg\lceil \frac{\frac{|J|p}{2} - 1 - \lfloor \sqrt{\frac{|J|p}{2} }\rfloor}{2}\Bigg\rceil, \] with probability at least $1 - \exp\bigg( - \frac{m \left[ 1 - \exp\left( - \frac{|J|p}{8} \right) \right]}{8} \bigg)$. \end{proof} We have shown that $\cT_i$ is sufficiently large with high probability. On the other hand, we can also show that $\cT$ is not too large compared to the expected number of observed entries in the matrix ($= mnp$) with high probability. \begin{lemma}\label{lem:T_not_too_big} The set $\cT$ is not too large with high probability. Specifically, \[ \Prob{|\cT| > mnp} \leq \exp \left( - \frac{mnp}{3} \right). \] \end{lemma} \begin{proof} It is clear from the description of the algorithm (see Algorithm \ref{alg:setT}) that for each $(i,j)$, there can exist at most one element $(i', j_1, j_2) \in \cT$ such that either $(i,j)=(i', j_1)$ or $(i,j) = (i', j_2)$. Moreover, if there exists $(i', j_1, j_2)$ satisfying either of those two conditions, $M(i,j) = 1$. As a result, $|\cT| \leq \frac{1}{2} \sum_{i,j} M(i,j)$, where $\sum_{i,j} M(i,j)$ is the sum of $mn$ independent and identically distributed Bernoulli random variables with parameter $p$. Applying the binomial Chernoff bound yields \[ \Prob{|\cT| > mnp} \leq \Prob{\sum_{i,j} M(i,j) > 2 mnp} \leq \exp \left( - \frac{mnp}{3} \right).
\] \end{proof} \subsection{Useful properties for noise density estimation}\label{sec:control_discrepancy} The set $\cT$ is carefully constructed for estimating the noise distribution. To analyze the quality of the estimated characteristic function of the noise, we introduce the following notation: \begin{align}\label{eq:notation} \dAi &= \max_{\left(i', j_1, j_2 \right) \in \cT_i} \big| A(i', j_1) - A(i', j_2) \big|, \quad\text{and}\\ \dNi &= \max_{\left(i', j_1, j_2 \right) \in \cT_i} \big| N(i', j_1) - N(i', j_2) \big|. \end{align} The following two lemmas show that these two quantities are not too large with high probability. In particular, Lemma \ref{lem:dA_max} shows that $\dAi$ is vanishingly small as $m, n \to \infty$, while Lemma \ref{lem:dN_max} shows that $\dNi$ scales only logarithmically with respect to $m, n$ and $p$. \begin{lemma}\label{lem:dA_max} For $t > L\sqrt{\frac{2}{\left| J \right| p}}+ 4L \Qf\left(\frac{mp}{2}\right)$, \begin{align*} \Prob{ \dAi > t} &\leq \big| J \big| \exp\left( - \frac{n}{8L^2} \left( t - L\sqrt{\frac{2}{\left| J \right| p}} \right)^2 \right)\\ &\quad + \big| J \big|\exp\left( -\frac{n}{12L} \left( t - L\sqrt{\frac{2}{\left| J \right| p}} - 4L \Qf\left(\frac{mp}{2}\right) \right)\right) . \end{align*} \end{lemma} \begin{proof} From the Lipschitz assumption on the latent function, we have \begin{align*} \big| A(i', j_1) - A(i', j_2) \big| &\leq L \left| \fcol{j_1} - \fcol{j_2} \right|. \end{align*} It suffices to find an upper bound on $\left| \fcol{j_1} - \fcol{j_2} \right|$ to control $\dAi$. However, this is a latent quantity, which is not observable from data. Instead, we take a detour using the triangle inequality: \begin{align*} \left| \fcol{j_1} - \fcol{j_2} \right| &\leq \left| \fcol{j_1} - \hat{q}_{\marg}(j_1) \right| + \Big| \hat{q}_{\marg}(j_1) - \hat{q}_{\marg}(j_2) \Big| + \left| \hat{q}_{\marg}(j_2) - \fcol{j_2} \right|.
\end{align*} We will show that $\Big| \hat{q}_{\marg}(j_1) - \hat{q}_{\marg}(j_2) \Big|$ is small for all $(i', j_1, j_2) \in \cT$ by careful construction of $\cT$, and that $\left| \fcol{j} - \hat{q}_{\marg}(j) \right|$ is small for all $j \in J$ due to the concentration of quantile estimates (see Lemma \ref{lem:noisy_quantile}). First of all, note that $|\cB_{i'} \cap J| \geq \frac{|J|p}{2}$ for any $i' \in I$ by construction of $\cT$. Therefore, for any $(i', j_1, j_2) \in \cT$, \[ \Big| \hat{q}_{\marg}\left( j_1 \right) - \hat{q}_{\marg}\left( j_2 \right) \Big| \leq \frac{1}{\sqrt{\left| \cB_{i'} \cap J \right|}} \leq \sqrt{\frac{2}{\left| J \right| p}}. \] In other words, \[ \Prob{ \bigcup_{\left(i', j_1, j_2 \right) \in \cT_i} \left\{ \Big| \hat{q}_{\marg}\left( j_1 \right) - \hat{q}_{\marg}\left( j_2 \right) \Big| > \sqrt{\frac{2}{\left| J \right| p}} \right\}} = 0. \] Next, recall that we defined the function $\Qf: \Reals_+ \to \Reals_+$ as (see Eq. \eqref{eqn:qstar}) \[ \Qf\left(x\right) = 2\sqrt{\pi} \left( \frac{1}{\sqrt{C_1 x}} + \frac{1}{\sqrt{C_2 x}} + \frac{1}{\sqrt{mp C_1 e^{-C_1}}} + \frac{1}{\sqrt{mp C_2 e^{-C_2}}} \right), \] where $C_1 = \frac{l^2}{2(D_{max} - D_{min})^2}$ and $C_2 = \frac{l^2}{8\sigma^2}$ are model dependent constants. Note that the set $J$ is defined as $J = \left\{ j \in [n]: |\cB^j| \geq \frac{mp}{2}\right\}$ (see Algorithm \ref{alg:setT} in Section \ref{sec:algorithm_unknown_hat}). By Lemma \ref{lem:noisy_quantile}, for any $t \geq 4 \Qf(\frac{mp}{2}) = \Theta\Big(\frac{1}{\sqrt{mp}}\Big)$, \begin{align*} \Prob{ \left| \hat{q}_{\marg}(j) - \fcol{j} \right| > t ~\bigg|~ j \in J} \leq \exp\left( - \frac{nt^2}{2} \right) + \exp\left( -\frac{n( \frac{t}{2} - \Qf\left(\frac{mp}{2}\right))}{3} \right). \end{align*} It is worthwhile to remark that the $\exp\left( - \frac{mp}{8} \right)$ term is removed from the original statement of Lemma \ref{lem:noisy_quantile}.
That term originally came from $\Prob{ \Ecol^c }$ (see Claim 2 in the proof of that lemma); however, it disappears once we condition on $j \in J$. By applying the union bound, it follows that \begin{align*} &\Prob{ \exists j \in J: \left| \hat{q}_{\marg}(j) - \fcol{j} \right| > t}\\ &\qquad\leq \sum_{j \in J} \Prob{ \left| \hat{q}_{\marg}(j) - \fcol{j} \right| > t ~\bigg|~ j \in J}\\ &\qquad\leq \big| J \big| \left[ \exp\left( - \frac{nt^2}{2} \right) + \exp\left( -\frac{n( \frac{t}{2} - \Qf\left(\frac{mp}{2}\right))}{3} \right) \right]. \end{align*} From the argument above, if $\left| \hat{q}_{\marg}(j) - \fcol{j} \right| \leq t_1$ for all $j \in J$ and $\Big| \hat{q}_{\marg}\left( j_1 \right) - \hat{q}_{\marg}\left( j_2 \right) \Big| \leq t_2$ for every triple $(i', j_1, j_2) \in \cT$, then $\big| A(i', j_1) - A(i', j_2) \big| \leq L (2t_1 + t_2)$ for all $(i', j_1, j_2) \in \cT$. Consequently, for $t > L\sqrt{\frac{2}{\left| J \right| p}}+ 4L \Qf\left(\frac{mp}{2}\right)$, \begin{align*} \Prob{ \dAi > t} &= \Prob{ \max_{\left(i', j_1, j_2 \right) \in \cT_i} \big| A(i', j_1) - A(i', j_2) \big| > t }\\ &\leq \Prob{ \bigcup_{\left(i', j_1, j_2 \right) \in \cT_i} \left\{ \left| \fcol{j_1} - \fcol{j_2} \right| > \frac{t}{L} \right\}}\\ &\leq \Prob{ \bigcup_{\left(i', j_1, j_2 \right) \in \cT_i} \left\{ \Big| \hat{q}_{\marg}\left( j_1 \right) - \hat{q}_{\marg}\left( j_2 \right) \Big| > \sqrt{\frac{2}{\left| J \right| p}} \right\}}\\ &\quad + \Prob{ \exists j \in J: \left| \hat{q}_{\marg}(j) - \fcol{j} \right| > \frac{1}{2} \left[ \frac{t}{L} - \sqrt{\frac{2}{\left| J \right| p}} \right]}\\ &\leq \big| J \big| \exp\left( - \frac{n}{8L^2} \left( t - L\sqrt{\frac{2}{\left| J \right| p}} \right)^2 \right)\\ &\quad + \big| J \big|\exp\left( -\frac{n}{12L} \left( t - L\sqrt{\frac{2}{\left| J \right| p}} - 4L \Qf\left(\frac{mp}{2}\right) \right)\right) .
\end{align*} \end{proof} \begin{lemma}\label{lem:dN_max} $\dNi$ does not exceed $4\sigma \sqrt{\log (4 |\cT|) }$ with high probability. Specifically, \[ \Prob{ \dNi > 4\sigma \sqrt{\log (4 |\cT|) }} \leq \frac{1}{4 |\cT|}. \] \end{lemma} Combined with Lemmas \ref{lem:T_large} and \ref{lem:T_not_too_big}, this lemma asserts that $ \dNi < 4\sigma \sqrt{\log (4 mnp) }$ with high probability, i.e., with probability at least $1 - O\Big(\frac{1}{mnp}\Big)$. \begin{proof} For any $t > 0$, if $\big| N(i',j_1) \big|, \big| N(i',j_2) \big| \leq \frac{t}{2}$ for all $(i', j_1, j_2) \in \cT$, then $\dNi \leq t$. Considering its contrapositive, \begin{align*} \Prob{ \dNi > t } &\leq \Prob{\exists (i', j_1, j_2) \in \cT: \big| N(i',j_1) \big| \geq \frac{t}{2} \text{ or } \big| N(i',j_2) \big| \geq \frac{t}{2}}\\ &\leq \sum_{(i', j_1, j_2) \in \cT} \left[ \Prob{ \big| N(i',j_1) \big| \geq \frac{t}{2} } + \Prob{ \big| N(i',j_2) \big| \geq \frac{t}{2}}\right]\\ &\leq 2 \big| \cT \big| \Prob{ \big| N(i,j) \big| \geq \frac{t}{2} }\\ &\leq 4 \big| \cT \big| \exp \left( - \frac{t^2}{8 \sigma^2} \right). \end{align*} The last line follows from the sub-Gaussian assumption on the noise and the Chernoff bound. With the choice of $t = 4\sigma \sqrt{\log (4 |\cT|)}$, \begin{align*} \Prob{ \dNi > 4\sigma \sqrt{\log (4 |\cT|) }} &\leq 4 \big| \cT \big| \exp \left( - 2 \log (4 |\cT|) \right) = \frac{1}{4 |\cT|}. \end{align*} \end{proof} \subsection{Uniform convergence of $\hat{\phi}(t)$ to $\phi(t)$: step 2-1 in Section \ref{sec:algorithm_unknown_hat}}\label{sec:est_characteristic_ftn} Recall that the estimator $\hat{F}$ of interest differs from the already-analyzed $\tilde{F}$ in only one respect: $\hat{L}$ is defined with the estimated characteristic function of the noise $\hat{\phi}_{N,i}$, together with a ridge parameter to avoid division by zero (see Eqs. \eqref{eqn:unknown_density}, \eqref{eqn:kernel_estimated}), while $L$ is defined with the true noise characteristic function $\phi_N$.
\[ \hat{f}^{(i)}(z) = \frac{1}{h |\cB_i|} \sum_{j\in \cB_i} \hat{L} \left( \frac{z- Z(i,j)}{h} \right), ~\text{where}~ \hat{L}(z) = \frac{1}{2\pi} \int e^{-\img tz} \frac{\phi_K(t)}{\hat{\phi}_{N,i}\left(\frac{t}{h}\right) + \rho}dt. \] The goal of this section is to show that, for any $i \in [m]$, $\hat{\phi}_{N,i} \approx \phi_N$, so that $\hat{f} \approx \tilde{f}$, which will be shown in the next section. Recall that the noise density is estimated from the base set $\cT_i$ as described in Algorithm \ref{alg:setT} and that the estimated characteristic function is defined as follows (see Eq. \eqref{eqn:chN_est}): \[ \hat{\phi}_{N, i}(t) = \left| \frac{1}{\left| \cT_i \right|} \sum_{ \left(i, j_1, j_2 \right) \in \cT_i} \cos \big[ t \left( Z(i, j_1) - Z(i, j_2) \right) \big] \right|^{1/2}. \] For analytical purposes, we define an imaginary estimator of the characteristic function of the noise as \begin{equation}\label{eqn:noise_ideal} \hat{\phi}_{N, i}^*(t) = \left| \frac{1}{\left| \cT_i \right|} \sum_{ \left(i, j_1, j_2 \right) \in \cT_i} \cos \big[ t \left( N(i, j_1) - N(i, j_2) \right) \big] \right|^{1/2}. \end{equation} We label the argument inside the absolute value as follows so that $\hat{\phi}_{N, i}^*(t) = \big| \hat{\Phi}_{N,i}^*(t) \big|^{\frac{1}{2}}$: \begin{equation}\label{eqn:Phi_star} \hat{\Phi}_{N, i}^*(t) = \frac{1}{\left| \cT_i \right|} \sum_{ \left(i, j_1, j_2 \right) \in \cT_i} \cos \big[ t \left( N(i, j_1) - N(i, j_2) \right) \big]. \end{equation} \begin{lemma}\label{lem:ideal_true} For any $i \in [m]$, $\hat{\phi}_{N, i}^*$ is close to $\phi_N$ with high probability. Specifically, for any $t \in \Reals$ and for any $s > 0$, \begin{align*} \Prob{\big| \hat{\phi}_{N, i}^*(t) - \phi_N(t) \big| > s} &\leq \Prob{ \big|\hat{\Phi}_{N, i}^*(t) - \phi_N(t)^2 \big| > s^2 }\\ &\leq 2 \exp \left(- \frac{\big| \cT_i \big| s^4}{2} \right). \end{align*} \end{lemma} \begin{proof} By the assumption of supersmooth noise (see Eq.
\eqref{eqn:model_supersmooth}), $\phi_N(t) \geq B^{-1} \exp \left( -\gamma |t|^{\beta} \right) > 0$ for all $t \in \Reals$. Also, by definition of the estimator (see Eq. \eqref{eqn:noise_ideal}), $\hat{\phi}_{N, i}^*(t) \geq 0$ for all $t \in \Reals$. Since $|a-b| \leq |a+b|$ for $a, b \geq 0$, we have for any $t \in \Reals$, \begin{align*} \big| \hat{\phi}_{N, i}^*(t) - \phi_N(t) \big| &\leq \Big(\big| \hat{\phi}_{N, i}^*(t) - \phi_N(t) \big| \big| \hat{\phi}_{N, i}^*(t) + \phi_N(t) \big| \Big)^{\frac{1}{2}}\\ &= \big| \hat{\phi}_{N, i}^*(t)^2 - \phi_N(t)^2 \big|^{\frac{1}{2}}\\ &\leq \big| \hat{\Phi}_{N, i}^*(t) - \phi_N(t)^2 \big|^{\frac{1}{2}}. \end{align*} The last inequality follows from $\Big| \big| \hat{\Phi}_{N, i}^*(t) \big| - \phi_N(t)^2 \Big| \leq \Big| \hat{\Phi}_{N, i}^*(t) - \phi_N(t)^2 \Big|$, because $\phi_N(t) > 0$. From the symmetry of the noise distribution and the independence between $N(i,j_1)$ and $N(i,j_2)$ for $(i,j_1, j_2) \in \cT_i$, \begin{align*} &\Exp{ \cos \big[ t \left( N(i, j_1) - N(i, j_2) \right) \big]}\\ &\qquad= \Exp{ \frac{1}{2} \exp \big( \img t \left( N(i, j_1) - N(i, j_2) \right) \big) + \frac{1}{2} \exp \big( -\img t \left( N(i, j_1) - N(i, j_2) \right) \big) }\\ &\qquad= \frac{1}{2} \Exp{e^{\img t N(i, j_1)}} \Exp{e^{-\img t N(i, j_2)}} + \frac{1}{2} \Exp{e^{-\img t N(i, j_1)}} \Exp{e^{\img t N(i, j_2)}}\\ &\qquad= \phi_N(t)^2. \end{align*} Therefore, $\Exp{\hat{\Phi}_{N, i}^*(t)} = \phi_N(t)^2$ for all $t \in \Reals$. Since $\Big| \cos \big[ t \left( N(i, j_1) - N(i, j_2) \right) \big] \Big| \leq 1$, we can apply Hoeffding's inequality to obtain \[ \Prob{ \big|\hat{\Phi}_{N, i}^*(t) - \phi_N(t)^2 \big| > s } \leq 2 \exp \left(- \frac{\big| \cT_i \big| s^2}{2} \right), \quad \text{for all }t \in \Reals.
\] All in all, for any $t \in \Reals$ and for any $s > 0$, \begin{align*} \Prob{\big| \hat{\phi}_{N, i}^*(t) - \phi_N(t) \big| > s} &\leq \Prob{\big| \hat{\phi}_{N, i}^*(t)^2 - \phi_N(t)^2 \big| > s^2}\\ &\leq \Prob{ \big|\hat{\Phi}_{N, i}^*(t) - \phi_N(t)^2 \big| > s^2 }\\ &\leq 2 \exp \left(- \frac{\big| \cT_i \big| s^4}{2} \right). \end{align*} \end{proof} \begin{lemma}\label{lem:sup_ideal} For any $i \in [m]$, $\hat{\phi}_{N, i}^*$ is uniformly close to $\phi_N$ with high probability. Specifically, for any $\Lambda > 0$, any $N \in \Nats$ and any $s > \left\| \Delta^{* (i)}_{N, \Lambda} \right\|_{\infty}^{\frac{1}{2}}$, \begin{equation} \Prob{ \sup_{t \in [-\Lambda, \Lambda]} \big| \hat{\phi}_{N, i}^*(t) - \phi_N(t) \big| > s}\leq 2N \exp \left(- \frac{\big| \cT_i \big| }{2} \left( s^2 - \left\| \Delta^{* (i)}_{N, \Lambda} \right\|_{\infty} \right)^2 \right), \end{equation} where $\left\| \Delta^{* (i)}_{N, \Lambda} \right\|_{\infty} = \frac{\Lambda}{N} \left[ |\Lambda| \dNi^2 + 2\sigma B \right]$. \end{lemma} \begin{proof} First, we discretize the interval $[-\Lambda, \Lambda]$ by constructing a finite $\varepsilon$-net. For any $N \geq 1$, define the set \[ \cT_N := \left\{ \frac{(2k - 1 - N)\Lambda}{N},~\forall k \in [N] \right\}. \] Then for any $N \geq 1$, $\cT_N \subset [-\Lambda, \Lambda]$ and it forms a $\frac{\Lambda}{N}$-net with $\left| \cT_N \right| = N$, i.e., for any $z$ with $|z| \leq \Lambda$, there exists $z' \in \cT_N$ such that $\left| z - z' \right| \leq \frac{\Lambda}{N}$. Next, we consider the maximum rate of change of the function $\hat{\Phi}_{N, i}^*(t) - \phi_N^2(t)$ to determine the resolution of the net. For brevity, we let $\Delta N \equiv N(i, j_1) - N(i, j_2)$.
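The covering property of the net just constructed can be verified numerically; the snippet below is an illustration only (taking $\cT_N$ to be the midpoints of $N$ equal subintervals of $[-\Lambda, \Lambda]$, with function names of our choosing) and checks that every point of the interval is within $\frac{\Lambda}{N}$ of the net:

```python
def eps_net(lam, n):
    """Midpoints of n equal subintervals of [-lam, lam]: a (lam/n)-net of the interval."""
    return [(2 * k - 1 - n) * lam / n for k in range(1, n + 1)]

lam, n = 3.0, 25
net = eps_net(lam, n)
assert len(net) == n and all(-lam <= t <= lam for t in net)
# Every z in [-lam, lam] is within lam/n of some net point.
for i in range(1001):
    z = -lam + 2 * lam * i / 1000
    assert min(abs(z - t) for t in net) <= lam / n + 1e-12
```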
We can observe, using $\big| \sin x \big| \leq \big| x \big|$, that \begin{align*} &\Bigg| \frac{d}{dt} \hat{\Phi}_{N, i}^*(t) \Bigg|\\ &\qquad= \Bigg| \frac{1}{\left| \cT_i \right|} \sum_{ \left(i, j_1, j_2 \right) \in \cT_i} \frac{d}{dt} \cos \big[ t \left( N(i, j_1) - N(i, j_2) \right) \big] \Bigg| \\ &\qquad= \Bigg| \frac{-1}{\left| \cT_i \right|} \sum_{ \left(i, j_1, j_2 \right) \in \cT_i} \sin \big[ t \left( N(i, j_1) - N(i, j_2) \right) \big] \left( N(i, j_1) - N(i, j_2) \right) \Bigg| \\ &\qquad\leq \max_{ \left(i, j_1, j_2 \right) \in \cT_i} \big| t \big| \Big| N(i, j_1) - N(i, j_2) \Big|^2\\ &\qquad= |t| \dNi^2. \end{align*} and \begin{align*} \Bigg| \frac{d}{dt} \phi_N^2(t) \Bigg| &= 2 \Bigg| \phi_N(t) \frac{d}{dt} \phi_N(t) \Bigg|\\ &\leq 2 \big| \phi_N(t) \big| \Bigg| \frac{d}{dt} \int_{-\infty}^{\infty} e^{\img t x} dF_N(x) \Bigg|\\ &\leq 2 \big| \phi_N(t) \big| \Bigg| \int_{-\infty}^{\infty} \img x e^{\img t x} dF_N(x) \Bigg|\\ &\leq 2 \big| \phi_N(t) \big| \int_{-\infty}^{\infty} \big| x \big| dF_N(x)\\ &\leq 2\sigma B \exp\left( - \gamma |t|^{\beta} \right). \end{align*} The last line follows from the supersmooth bound on $\phi_N$ together with the sub-Gaussian noise assumption: \[ \int_{-\infty}^{\infty} \big| x \big| dF_N(x) = \Exp{ \big| N \big| } \leq \Exp{ N^2 }^{\frac{1}{2}} \leq \sigma. \] Therefore, \begin{align*} \sup_{t \in [-\Lambda, \Lambda]} \Bigg| \frac{d}{dt} \left( \hat{\Phi}_{N, i}^*(t) - \phi_N^2(t) \right) \Bigg| &\leq \sup_{t \in [-\Lambda, \Lambda]} \Bigg| \frac{d}{dt} \hat{\Phi}_{N, i}^*(t) \Bigg| + \sup_{t \in [-\Lambda, \Lambda]} \Bigg| \frac{d}{dt} \phi_N^2(t) \Bigg|\\ &\leq |\Lambda| \dNi^2 + 2\sigma B. \end{align*} It then follows from this derivative bound, via the mean value theorem, that \[ \sup_{t \in [-\Lambda, \Lambda]} \Big| \hat{\Phi}_{N, i}^*(t) - \phi_N^2(t) \Big| \leq \sup_{t \in \cT_N} \Big| \hat{\Phi}_{N, i}^*(t) - \phi_N^2(t) \Big| + \frac{\Lambda}{N} \left[ |\Lambda| \dNi^2 + 2\sigma B \right].
\] We let $\left\| \Delta^{* (i)}_{N, \Lambda} \right\|_{\infty}$ denote the upper bound on the discretization error, i.e., \[ \left\| \Delta^{* (i)}_{N, \Lambda} \right\|_{\infty}:= \frac{\Lambda}{N} \left[ |\Lambda| \dNi^2 + 2\sigma B \right]. \] Therefore, if $\Big| \hat{\Phi}_{N, i}^*(t) - \phi_N^2(t) \Big| \leq s$ for all $t \in \cT_N$, the supremum over the entire domain $[-\Lambda, \Lambda]$ is bounded above up to an additional term as $\sup_{t \in [-\Lambda, \Lambda]} \Big| \hat{\Phi}_{N, i}^*(t) - \phi_N^2(t) \Big| \leq s + \left\| \Delta^{* (i)}_{N, \Lambda} \right\|_{\infty}$. An application of the union bound on the contraposition of the previous statement yields \begin{align*} &\Prob{ \sup_{t \in [-\Lambda, \Lambda]} \big| \hat{\phi}_{N, i}^*(t) - \phi_N(t) \big| > s}\\ &\qquad\leq \Prob{ \sup_{t \in [-\Lambda, \Lambda]} \big| \hat{\Phi}_{N, i}^*(t) - \phi_N^2(t) \big| > s^2 }\\ &\qquad\leq \Prob{ \sup_{t \in \cT_N} \big| \hat{\Phi}_{N, i}^*(t) - \phi_N^2(t) \big| > s^2 - \left\| \Delta^{* (i)}_{N, \Lambda} \right\|_{\infty}}\\ &\qquad\leq \sum_{t \in \cT_N} \Prob{ \big| \hat{\Phi}_{N, i}^*(t) - \phi_N^2(t) \big| > s^2 - \left\| \Delta^{* (i)}_{N, \Lambda} \right\|_{\infty} }\\ &\qquad\leq 2 \sum_{t \in \cT_N} \exp \left(- \frac{\big| \cT_i \big| }{2} \left( s^2 - \left\| \Delta^{* (i)}_{N, \Lambda} \right\|_{\infty} \right)^2 \right)\\ &\qquad\leq 2N \exp \left(- \frac{\big| \cT_i \big| }{2} \left( s^2 - \left\| \Delta^{* (i)}_{N, \Lambda} \right\|_{\infty} \right)^2 \right). \end{align*} \end{proof} As in Eq. \eqref{eqn:Phi_star}, we let \begin{equation}\label{eqn:Phi} \hat{\Phi}_{N, i}(t) = \frac{1}{\left| \cT_i \right|} \sum_{ \left(i, j_1, j_2 \right) \in \cT_i} \cos \big[ t \left( Z(i, j_1) - Z(i, j_2) \right) \big], \end{equation} so that $\hat{\phi}_{N, i}(t) = \big| \hat{\Phi}_{N,i}(t) \big|^{\frac{1}{2}}$. \begin{lemma}\label{lem:practical_ideal} For any $i \in [m]$, $\hat{\phi}_{N,i}$ is close to $\hat{\phi}_{N,i}^*$ with high probability.
Specifically, for any $t \in \Reals$ and for any $s > \frac{|t|}{\sqrt{2}}\dAi$, \begin{align*} \Prob{\big| \hat{\phi}_{N, i}(t) - \hat{\phi}_{N, i}^*(t) \big| > s} &\leq \Prob{ \big| \hat{\Phi}_{N, i}(t) - \hat{\Phi}_{N, i}^*(t) \big| > s^2 }\\ &\leq 2 \exp \left( - \frac{ |\cT_i| }{ 2 t^2 \dAi^2} \left( s^2 - \frac{t^2 \dAi^2}{2} \right)^2 \right). \end{align*} \end{lemma} \begin{proof} We know that $\hat{\phi}_{N, i}(t), \hat{\phi}_{N, i}^*(t) \geq 0$ for all $t \in \Reals$ (see Eqs. \eqref{eqn:chN_est}, \eqref{eqn:noise_ideal}). By the same argument as in the proof of Lemma \ref{lem:ideal_true}, for any $t \in \Reals$, \begin{align*} \Big| \hat{\phi}_{N, i}(t) - \hat{\phi}_{N, i}^*(t) \Big| &\leq \Big(\Big| \hat{\phi}_{N, i}(t) - \hat{\phi}_{N, i}^*(t) \Big| \Big| \hat{\phi}_{N, i}(t) + \hat{\phi}_{N, i}^*(t) \Big| \Big)^{\frac{1}{2}}\\ &= \Big| \hat{\phi}_{N, i}(t)^2 - \hat{\phi}_{N, i}^*(t)^2 \Big|^{\frac{1}{2}}. \end{align*} Note that for any $a, b \in \Reals$, $\big||a| - |b| \big| \leq \big| a - b \big|$. \[ \Big| \hat{\phi}_{N, i}(t)^2 - \hat{\phi}_{N, i}^*(t)^2 \Big| = \Big| \big|\hat{\Phi}_{N, i}(t)\big| - \big| \hat{\Phi}_{N, i}^*(t)\big| \Big| \leq\Big| \hat{\Phi}_{N, i}(t) - \hat{\Phi}_{N, i}^*(t) \Big|. \] By the model assumption, $Z(i,j) = A(i,j) + N(i,j)$. Changing the perspective, we now consider $ Z(i, j_1) - Z(i, j_2) $ as a perturbed instance of the noise $N(i, j_1) - N(i, j_2)$ by the signal difference $A(i, j_1) - A(i, j_2)$, which is assumed to be small for $(i, j_1, j_2) \in \cT_i$. For brevity, we let $\Delta N \equiv N(i, j_1) - N(i, j_2)$, $\Delta A \equiv A(i, j_1) - A(i, j_2)$ and $\Delta Z \equiv Z(i, j_1) - Z(i, j_2)$. 
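The manipulation below, and the expectation computation that follows it, rest on two elementary product-to-sum identities: $\cos a - \cos b = -2 \sin \frac{a+b}{2} \sin \frac{a-b}{2}$ and $\sin(a+b) + \sin(a-b) = 2 \sin a \cos b$. A quick numerical check of both (an illustration only, not part of the proof):

```python
import math
import random

random.seed(1)
for _ in range(1000):
    a = random.uniform(-10.0, 10.0)
    b = random.uniform(-10.0, 10.0)
    # cos a - cos b = -2 sin((a + b) / 2) sin((a - b) / 2)
    assert abs((math.cos(a) - math.cos(b))
               - (-2 * math.sin((a + b) / 2) * math.sin((a - b) / 2))) < 1e-9
    # sin(a + b) + sin(a - b) = 2 sin(a) cos(b)
    assert abs((math.sin(a + b) + math.sin(a - b))
               - 2 * math.sin(a) * math.cos(b)) < 1e-9
```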
Since it is known that $\cos a - \cos b = -2 \sin \frac{a+b}{2} \sin \frac{a-b}{2}$, \begin{align*} \Big| \hat{\Phi}_{N, i}(t) - \hat{\Phi}_{N, i}^*(t) \Big| &= \left| \frac{1}{\left| \cT_i \right|} \sum_{ \left(i, j_1, j_2 \right) \in \cT_i} \Big\{ \cos \big[ t \Delta Z \big] - \cos \big[ t \Delta N \big] \Big\} \right|\\ &= \left| \frac{-2}{\left| \cT_i \right|} \sum_{ \left(i, j_1, j_2 \right) \in \cT_i} \sin \left( t\Delta N+ \frac{t \Delta A}{2} \right) \sin \left( \frac{t \Delta A}{2}\right) \right|. \end{align*} We will find an upper bound on this last term by showing that it sharply concentrates around its expectation, which is small. Note that the distribution of $\Delta N$ is governed by the randomness in $\big\{ N(i',j_1), N(i',j_2) \big\}_{(i',j_1, j_2) \in \cT_i}$ and that of $\Delta A$ is by $\big\{ \frow{i'}, \fcol{j_1}, \fcol{j_2} \big\}_{(i', j_1, j_2) \in \cT_i}$. Conditioned on $\big\{ \frow{i'}, \fcol{j_1}, \fcol{j_2} \big\}_{(i', j_1, j_2) \in \cT_i}$, the summands, $ \sin \left( t \Delta N+ \frac{t \Delta A}{2} \right)$ $\times \sin \left( \frac{t \Delta A}{2}\right)$, are independent of each other, so that we can apply Hoeffding's inequality. Let $\Delta \hat{\Phi}_{N,i} (t) \equiv \hat{\Phi}_{N, i}(t) - \hat{\Phi}_{N, i}^*(t)$ and note that $\big| \sin x \big| \leq \big| x \big|$ for $x \in \Reals$. Then for any $t \in \Reals$ and any $s > 0$, \begin{align} \mathbb{P} \Big( \big| \Delta \hat{\Phi}_{N,i} (t) - \Exp{\Delta \hat{\Phi}_{N,i} (t)} \big| > s \Big) &\leq 2 \exp \left( - \frac{ 2 \big( \frac{|\cT_i| s}{2} \big)^2}{ \sum_{(i,j_1, j_2) \in \cT_i} \big( t \Delta A \big)^2 } \right) \nonumber\\ &\leq 2 \exp \left( - \frac{ |\cT_i| s^2}{ 2 \max_{(i,j_1, j_2) \in \cT_i} \big( t \Delta A \big)^2 } \right) \nonumber\\ &= 2 \exp \left( - \frac{ |\cT_i| s^2}{ 2 t^2 \dAi^2 } \right).
\label{eqn:conc_bound} \end{align} Now we consider the expectation $\Exp{\Delta \hat{\Phi}_{N,i} (t)}$, where the expectation is with respect to the first source of randomness, $\big\{ N(i',j_1), N(i',j_2) \big\}_{(i',j_1, j_2) \in \cT_i}$. From the symmetry in the noise distribution, \begin{align*} & \mathbb{E} \big[\Delta \hat{\Phi}_{N,i}(t) \big] ~=~ \Exp{\frac{-2}{\left| \cT_i \right|} \sum_{ \left(i, j_1, j_2 \right) \in \cT_i} \sin \left( t \Delta N+ \frac{t \Delta A}{2} \right) \sin \left( \frac{t \Delta A}{2}\right)}\\ &= \Exp{\frac{-1}{\left| \cT_i \right|} \sum_{ \left(i, j_1, j_2 \right) \in \cT_i} \left[ \sin \left( t \Delta N+ \frac{t \Delta A}{2} \right) + \sin \left( -t \Delta N+ \frac{t \Delta A}{2} \right) \right] \sin \left( \frac{t \Delta A}{2}\right)}\\ &= \Exp{\frac{-2}{\left| \cT_i \right|} \sum_{ \left(i, j_1, j_2 \right) \in \cT_i} \cos \big( t \Delta N \big) \sin^2 \left( \frac{t \Delta A}{2}\right)}. \end{align*} We used the fact that $\sin (a + b) + \sin (a - b) = 2 \sin a \cos b$ with $a = \frac{t \Delta A}{2}$ and $b = t \Delta N$. Since $\Big| \cos \big( t \Delta N \big) \Big| \leq 1$ and $ \Big| \sin \left( \frac{t \Delta A}{2}\right) \Big| \leq \Big| \frac{t \Delta A}{2} \Big|$, \begin{equation}\label{eqn:mean_bound} \Big| \mathbb{E} \big[\Delta \hat{\Phi}_{N,i}(t) \big] \Big| \leq \frac{2}{|\cT_i|} \sum_{\left(i, j_1, j_2 \right) \in \cT_i} \bigg| \frac{t \Delta A}{2} \bigg|^2 \leq \max_{\left(i, j_1, j_2 \right) \in \cT_i} \frac{\big(t \Delta A \big)^2}{2} = \frac{t^2}{2}\dAi^2. \end{equation} Combining the upper bound on $\Big| \mathbb{E} \big[\Delta \hat{\Phi}_{N,i}(t) \big] \Big|$ in Eq. \eqref{eqn:mean_bound} together with the concentration inequality Eq. \eqref{eqn:conc_bound} yields the following result: for any $t \in \Reals$ and any $s > \frac{t^2}{2} \dAi^2$, \[ \Prob{ \big| \Delta \hat{\Phi}_{N,i} (t) \big| > s } \leq 2 \exp \left( - \frac{ |\cT_i| }{ 2 t^2 \dAi^2 } \left( s - \frac{t^2 \dAi^2}{2} \right)^2 \right).
\] All in all, for any $t \in \Reals$ and for any $s > \frac{|t|}{\sqrt{2}}\dAi$, \begin{align*} &\Prob{\big| \hat{\phi}_{N, i}(t) - \hat{\phi}_{N, i}^*(t) \big| > s}\\ &\qquad\leq \Prob{ \big| \hat{\Phi}_{N, i}(t) - \hat{\Phi}_{N, i}^*(t) \big| > s^2 }\\ &\qquad\leq 2 \exp \left( - \frac{ |\cT_i| }{ 2 t^2 \dAi^2 } \left( s^2 - \frac{t^2 \dAi^2}{2} \right)^2 \right). \end{align*} \end{proof} We can refine the result obtained so far to get a uniform upper bound with the $\varepsilon$-net argument. Recall that \begin{align*} \Big| \hat{\phi}_{N, i}(t) - \hat{\phi}_{N, i}^*(t) \Big| &\leq \Big| \hat{\phi}_{N, i}(t)^2 - \hat{\phi}_{N, i}^*(t)^2 \Big|^{\frac{1}{2}} \leq\Big| \hat{\Phi}_{N, i}(t) - \hat{\Phi}_{N, i}^*(t) \Big|^{\frac{1}{2}}. \end{align*} It suffices to find a uniform upper bound on $\Big| \hat{\Phi}_{N, i}(t) - \hat{\Phi}_{N, i}^*(t) \Big|$. \begin{lemma}[Uniform convergence of the noise estimate]\label{lem:sup_noise} For any $i \in [m]$, $\hat{\phi}_{N,i}$ is uniformly close to $\hat{\phi}_{N,i}^*$ with high probability. Specifically, for any $\Lambda > 0$, any $N \in \Nats$ and $s > \left\| \Delta^{(i)}_{N, \Lambda} \right\|_{\infty}^{\frac{1}{2}}$, \begin{align*} &\Prob{ \sup_{t \in [-\Lambda, \Lambda]} \big| \hat{\phi}_{N, i}(t) - \hat{\phi}_{N, i}^*(t) \big| > s}\\ &\qquad\leq \Prob{ \sup_{t \in [-\Lambda, \Lambda]} \big| \hat{\Phi}_{N, i}(t) - \hat{\Phi}_{N, i}^*(t) \big| > s^2 }\\ &\qquad\leq 2N \exp \left( - \frac{ |\cT_i| }{ 2 \Lambda^2 \dAi^2} \left( s^2 - \left\| \Delta^{(i)}_{N, \Lambda} \right\|_{\infty} \right)^2 \right), \end{align*} where $\left\| \Delta^{(i)}_{N, \Lambda} \right\|_{\infty} = \frac{\Lambda^2 \dAi}{2N} \Big[ (N+2) \dAi + 4 \dNi \Big]$. \end{lemma} We note that, as we refine the net by letting $N \to \infty$, $\left\| \Delta^{(i)}_{N, \Lambda} \right\|_{\infty} \to \frac{\Lambda^2 \dAi^2}{2}$, which sets the fundamental lower bound on $\sup_{t \in [-\Lambda, \Lambda]} \big| \hat{\phi}_{N, i}(t) - \hat{\phi}_{N, i}^*(t) \big|$.
That is to say, $\big\| \hat{\phi}_{N, i} - \hat{\phi}_{N, i}^* \big\|_{\infty} \approx \Lambda \dAi$. This is an intrinsic limit of the deconvolution: the residual signal term $\dAi$ acts as noise in the estimate of $\phi_N$, so a floor of this order is to be expected. \begin{proof} [Proof of Lemma \ref{lem:sup_noise}] First, we discretize the interval $[-\Lambda, \Lambda]$ by constructing a finite $\varepsilon$-net. For any $N \geq 1$, define the set \[ \cT_N := \left\{ \frac{(2k - 1 - N)\Lambda}{N},~\forall k \in [N] \right\}. \] Then for any $N > 0$, $\cT_N \subset [-\Lambda, \Lambda]$ and it forms a $\frac{\Lambda}{N}$-net with $\left| \cT_N \right| = N$: consecutive points of $\cT_N$ are $\frac{2\Lambda}{N}$ apart and its extreme points lie within $\frac{\Lambda}{N}$ of $\pm \Lambda$, so for any $z$ with $|z| \leq \Lambda$, there exists $z' \in \cT_N$ such that $\left| z - z' \right| \leq \frac{\Lambda}{N}$. Next, we consider the maximum rate of change of the function $\Delta \hat{\Phi}_N (t) \equiv \hat{\Phi}_{N, i}(t) - \hat{\Phi}_{N, i}^*(t)$ to determine the resolution of the net. We can observe that \begin{align*} &\frac{d}{dt} \Delta \hat{\Phi}_N (t)\\ &\qquad=\frac{d}{dt}\left[ \hat{\Phi}_{N, i}(t) - \hat{\Phi}_{N, i}^*(t) \right] \\ &\qquad= \frac{d}{dt} \left[ \frac{-2}{\left| \cT_i \right|} \sum_{ (i, j_1, j_2) \in \cT_i} \sin \left( t \left( \Delta N+ \frac{\Delta A}{2} \right) \right) \sin \left( \frac{t \Delta A}{2}\right) \right]\\ &\qquad= \frac{-2}{\left| \cT_i \right|} \sum_{ (i, j_1, j_2) \in \cT_i} \Bigg[ \left( \Delta N+ \frac{\Delta A}{2} \right)\cos \left( t \left( \Delta N+ \frac{\Delta A}{2} \right) \right) \sin \left( \frac{t \Delta A}{2}\right)\\ &\qquad\qquad\qquad\qquad\qquad\qquad + \frac{\Delta A}{2}\sin \left( t \left( \Delta N+ \frac{\Delta A}{2} \right) \right) \cos \left( \frac{t \Delta A}{2}\right)\Bigg], \end{align*} and hence, \begin{align*} & \sup_{t \in [-\Lambda, \Lambda]} \left| \frac{d}{dt} \Delta \hat{\Phi}_N (t) \right| \\ &\leq \sup_{t \in [-\Lambda, \Lambda]} \frac{2}{\left| \cT_i \right|} \sum_{ (i, j_1, j_2) \in
\cT_i} \Bigg| \left( \Delta N+ \frac{\Delta A}{2} \right)\cos \left( t \left( \Delta N+ \frac{\Delta A}{2} \right) \right) \sin \left( \frac{t \Delta A}{2}\right)\\ &\qquad\qquad\qquad\qquad\qquad + \frac{\Delta A}{2}\sin \left( t \left( \Delta N+ \frac{\Delta A}{2} \right) \right) \cos \left( \frac{t \Delta A}{2}\right)\Bigg|\\ &\leq \sup_{t \in [-\Lambda, \Lambda]} 2 \max_{ (i, j_1, j_2) \in \cT_i} \Bigg[ \bigg| \Delta N+ \frac{\Delta A}{2} \bigg| \bigg| \cos \left( t \left( \Delta N+ \frac{\Delta A}{2} \right) \right) \bigg| \bigg| \sin \left( \frac{t \Delta A}{2}\right) \bigg| \\ &\qquad\qquad\qquad\qquad\qquad + \bigg| \frac{\Delta A}{2} \bigg| \bigg| \sin \left( t \left( \Delta N+ \frac{\Delta A}{2} \right) \right) \bigg| \bigg| \cos \left( \frac{t \Delta A}{2}\right) \bigg| \Bigg]\\ &\leq \sup_{t \in [-\Lambda, \Lambda]} 2 \max_{ (i, j_1, j_2) \in \cT_i} \left| \Delta N+ \frac{\Delta A}{2} \right| \left| \frac{t \Delta A}{2}\right| + \left| \frac{\Delta A}{2} \right| \left| t \left( \Delta N+ \frac{\Delta A}{2} \right) \right|\\ &\leq \sup_{t \in [-\Lambda, \Lambda]} \big|t\big| \Big(2 \dNi + \dAi \Big) \dAi\\ &\leq \big| \Lambda \big| \Big(2 \dNi + \dAi \Big) \dAi. \end{align*} Let $\Delta^{(i)} = \big| \Lambda \big| \Big(2 \dNi + \dAi \Big) \dAi$, the upper bound in the last line. Then it follows from this derivative bound, via the mean value theorem, that \[ \sup_{t \in [-\Lambda, \Lambda]} \Big| \Delta \hat{\Phi}_N(t) \Big| \leq \sup_{t \in \cT_N} \Big| \Delta \hat{\Phi}_N(t) \Big| + \Delta^{(i)} \frac{\Lambda}{N}. \] Therefore, if $\Big| \Delta \hat{\Phi}_N(t) \Big| \leq s$ for all $t \in \cT_N$, the supremum over the entire domain $[-\Lambda, \Lambda]$ is bounded above up to an additional term as $\sup_{t \in [-\Lambda, \Lambda]} \Big| \Delta \hat{\Phi}_N (t) \Big| \leq s + \Delta^{(i)} \frac{\Lambda}{N}$.
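The Lipschitz-plus-net bound above is easy to sanity-check numerically. The short Python sketch below (not part of the formal argument) uses an arbitrary smooth function with a known derivative bound as a stand-in for $\Delta \hat{\Phi}_N$, builds a midpoint net of the interval, and checks that the supremum over the whole interval never exceeds the maximum over the net plus the slack term:

```python
import numpy as np

# Sanity check of the epsilon-net step. f is an arbitrary smooth stand-in
# for Delta Phi-hat; its derivative is bounded by 3 + 1/2 = 3.5 everywhere.
f = lambda t: np.sin(3 * t) * np.sin(t / 2)
lip = 3.5

Lam, N = 2.0, 50
k = np.arange(1, N + 1)
net = -Lam + (2 * k - 1) * Lam / N      # N midpoints, covering radius Lam/N
dense = np.linspace(-Lam, Lam, 100001)  # dense proxy for the full interval

sup_dense = np.max(np.abs(f(dense)))
sup_net = np.max(np.abs(f(net)))

# sup over [-Lam, Lam]  <=  max over the net  +  lip * (covering radius)
assert sup_dense <= sup_net + lip * Lam / N
```

Such a check does not replace the proof; it only confirms that the discretization bookkeeping (net size $N$, covering radius, Lipschitz constant) is consistent.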
An application of the union bound on the contraposition of the previous statement yields \begin{align*} &\Prob{ \sup_{t \in [-\Lambda, \Lambda]} \big| \hat{\phi}_{N, i}(t) - \hat{\phi}_{N, i}^*(t) \big| > s}\\ &\qquad\leq \Prob{ \sup_{t \in [-\Lambda, \Lambda]} \big| \hat{\Phi}_{N, i}(t) - \hat{\Phi}_{N, i}^*(t) \big| > s^2 }\\ &\qquad\leq \Prob{ \sup_{t \in \cT_N} \big| \hat{\Phi}_{N, i}(t) - \hat{\Phi}_{N, i}^*(t) \big| > s^2 - \Delta^{(i)} \frac{\Lambda}{N}}\\ &\qquad\leq \sum_{t \in \cT_N} \Prob{ \big| \hat{\Phi}_{N, i}(t) - \hat{\Phi}_{N, i}^*(t) \big| > s^2 - \Delta^{(i)} \frac{\Lambda}{N}}\\ &\qquad\leq 2 \sum_{t \in \cT_N} \exp \left( - \frac{ |\cT_i| }{ 2 t^2 \dAi^2} \left( s^2 - \Delta^{(i)} \frac{\Lambda}{N} - \frac{t^2 \dAi^2}{2} \right)^2 \right)\\ &\qquad\leq 2N \exp \left( - \frac{ |\cT_i| }{ 2 \Lambda^2 \dAi^2} \left( s^2 - \Delta^{(i)} \frac{\Lambda}{N} - \frac{\Lambda^2 \dAi^2}{2} \right)^2 \right). \end{align*} We can simplify the last line by defining \begin{align*} \left\| \Delta^{(i)}_{N, \Lambda} \right\|_{\infty} &= \Delta^{(i)} \frac{\Lambda}{N} + \frac{\Lambda^2 \dAi^2}{2} \\ &= \frac{\Lambda^2 \dAi}{2N} \Big[ (N+2) \dAi + 4 \dNi \Big], \end{align*} because $\Delta^{(i)} = \big| \Lambda \big| \Big(2 \dNi + \dAi \Big) \dAi$. \end{proof} \begin{lemma}\label{lem:Ephi} For any $i \in [m]$, $\hat{\phi}_{N,i}$ is uniformly close to $\phi_N$ with high probability. 
Specifically, for any $\Lambda > 0$, any $N_1, N_2 \in \Nats$ and for any $s_1 > \left\| \Delta^{* (i)}_{N_1, \Lambda} \right\|_{\infty}^{\frac{1}{2}}$ and $s_2 > \left\| \Delta^{(i)}_{N_2, \Lambda} \right\|_{\infty}^{\frac{1}{2}}$, \begin{align*} & \Prob{ \sup_{t \in [-\Lambda, \Lambda]} \big| \hat{\phi}_{N, i}(t) - \phi_N(t) \big| > s_1 + s_2}\\ &\qquad\leq 2N_1 \exp \left(- \frac{\big| \cT_i \big| }{2} \left( s_1^2 - \left\| \Delta^{* (i)}_{N_1, \Lambda} \right\|_{\infty} \right)^2 \right)\\ &\qquad\quad + 2N_2 \exp \left( - \frac{ |\cT_i| }{ 2 \Lambda^2 \dAi^2} \left( s_2^2 - \left\| \Delta^{(i)}_{N_2, \Lambda} \right\|_{\infty} \right)^2 \right), \end{align*} where \begin{align*} \left\| \Delta^{* (i)}_{N_1, \Lambda} \right\|_{\infty} &= \frac{\Lambda}{N_1} \left[ |\Lambda| \dNi^2 + 2\sigma B \right]\quad\text{and}\\ \left\| \Delta^{(i)}_{N_2, \Lambda} \right\|_{\infty} &= \frac{\Lambda^2 \dAi}{2N_2} \Big[ (N_2+2) \dAi + 4 \dNi \Big]. \end{align*} \end{lemma} \begin{proof} If $ \sup_{t \in [-\Lambda, \Lambda]} \big| \hat{\phi}_{N, i}^*(t) - \phi_N(t) \big| \leq s_1 $ and $ \sup_{t \in [-\Lambda, \Lambda]} \big| \hat{\phi}_{N, i}(t) - \hat{\phi}_{N, i}^*(t) \big| \leq s_2$, then $\sup_{t \in [-\Lambda, \Lambda]} \big| \hat{\phi}_{N, i}(t) - \phi_N(t) \big| \leq s_1 + s_2$ by the triangle inequality. Therefore, \begin{align*} &\Prob{ \sup_{t \in [-\Lambda, \Lambda]} \big| \hat{\phi}_{N, i}(t) - \phi_N(t) \big| > s_1 + s_2}\\ &\qquad\leq \Prob{ \sup_{t \in [-\Lambda, \Lambda]} \big| \hat{\phi}_{N, i}^*(t) - \phi_N(t) \big| > s_1}\\ &\qquad\quad+\Prob{ \sup_{t \in [-\Lambda, \Lambda]} \big| \hat{\phi}_{N, i}(t) - \hat{\phi}_{N, i}^*(t) \big| > s_2}. \end{align*} Applying Lemmas \ref{lem:sup_ideal} and \ref{lem:sup_noise} concludes the proof. \end{proof} \subsection{Bias from $\tilde{F}$ to $\hat{F}$}\label{sec:bias_hat_tilde} We show that the CDF estimated by the modified kernel estimator is uniformly close to that estimated by the traditional kernel estimator.
For simplicity of the lemma statement, we introduce a conditioning event indexed by $i \in [m]$ as $$ \Ephi \equiv \bigg\{ \sup_{t \in [-\frac{1}{h}, \frac{1}{h}]} \big| \hat{\phi}_{N, i}(t) - \phi_N(t) \big| \leq s_{\phi} \bigg\}. $$ We show in Appendix \ref{sec:conditioning} that this event holds with high probability. \begin{lemma}[Bias is small]\label{lem:hat_tilde} The expectation of $\hat{F}$ is close to the expectation of $\tilde{F}$. Specifically, for any $i \in [m]$, conditioned on the event $\Ephi$, \begin{align*} &\sup_{z \in \Reals}\left| \Exp{\hat{F}^{(i)} (z)} - \Exp{\tilde{F}^{(i)}(z)} \right|\\ &\qquad \leq \frac{2K_{max}(D_2 - D_1)}{ \pi h } \left( \max_{t \in [-\frac{1}{h}, \frac{1}{h}]} \left|\phi_N \left(t\right) - \hat{\phi}_{N,i}\left(t\right) \right| + \rho\right). \end{align*} Recall that the kernel bandwidth parameter $h = (4\gamma)^{\frac{1}{\beta}}(\log |\cB_i|)^{-\frac{1}{\beta}}$. \end{lemma} \begin{proof}[Proof of Lemma \ref{lem:hat_tilde}] We want to show that $$\sup_{z \in [D_1, D_2]} \Big| \Exp{\hat{F}^{(i)}(z) - \tilde{F}^{(i)}(z)} \Big|$$ is small. Here, the expectation is taken with respect to the data generation process, which can be subdivided into the generation of $\{ Z(i,j) \}_{j \in \cB_i}$ and of $\{ N(i',j_1) - N(i',j_2)\}_{(i',j_1, j_2) \in \cT_i}$; these two are independent of each other (see the construction of the set $\cT_i$).
\begin{align} &\Exp{\hat{F}^{(i)}(z) - \tilde{F}^{(i)}(z)} \nonumber\\ &\qquad= \Exp{ \int_{D_1}^{z \wedge D_2} \hat{f}^{(i)}(w) - \tilde{f}^{(i)}(w) dw} \nonumber\\ &\qquad= \Exp{ \int_{D_1}^{z \wedge D_2} \frac{1}{ h |\cB_i|} \sum_{j \in \cB_i} \hat{L} \left( \frac{w - Z(i,j)}{h} \right) - L \left( \frac{w - Z(i,j)}{h} \right) dw} \nonumber\\ &\qquad= \frac{1}{2\pi h |\cB_i|} \nonumber\\ &\qquad\quad\times \bbE\Bigg[ \int_{D_1}^{z \wedge D_2} \sum_{j \in \cB_i} \int_{-\infty}^{\infty} e^{-\img t \frac{w - Z(i,j)}{h}} \left[ \frac{\phi_K(t)}{\hat{\phi}_{N,i}(\frac{t}{h}) + \rho} - \frac{\phi_K(t)}{\phi_N(\frac{t}{h}) } \right] dt~dw \Bigg] \label{eqn:term.integral} \end{align} because \begin{align*} \hat{L} \left( \frac{w - Z(i,j)}{h} \right) &= \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-\img t \frac{w - Z(i,j)}{h}} \frac{\phi_K(t)}{\hat{\phi}_{N,i}(\frac{t}{h}) + \rho} dt, \quad\text{and}\\ L \left( \frac{w - Z(i,j)}{h}\right) &= \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-\img t \frac{w - Z(i,j)}{h}} \frac{\phi_K(t)}{\phi_N(\frac{t}{h}) } dt. 
\end{align*} Noting that the support of $\phi_K$ is contained in $[-1,1]$ and that the integrand is a bounded continuous function, we exchange the order of integrals in Eq. \eqref{eqn:term.integral}: \begin{align} &Eq.\eqref{eqn:term.integral}\nonumber\\ &= \int_{D_1}^{z \wedge D_2} \Exp{ \sum_{j \in \cB_i} \int_{-\infty}^{\infty} e^{-\img t \frac{w - Z(i,j)}{h}} \phi_K(t) \frac{ \phi_N(\frac{t}{h}) -\left[ \hat{\phi}_{N,i}(\frac{t}{h}) + \rho \right]}{\phi_N(\frac{t}{h}) \left[ \hat{\phi}_{N,i}(\frac{t}{h}) + \rho \right]}dt}dw \nonumber\\ &= \int_{D_1}^{z \wedge D_2} \sum_{j \in \cB_i} \Exp{ \int_{-\infty}^{\infty} e^{-\img t \frac{w - Z(i,j)}{h}} \phi_K(t) \frac{ \phi_N(\frac{t}{h}) -\left[ \hat{\phi}_{N,i}(\frac{t}{h}) + \rho \right]}{\phi_N(\frac{t}{h}) \left[ \hat{\phi}_{N,i}(\frac{t}{h}) + \rho \right]}dt}dw \nonumber\\ &= \int_{D_1}^{z \wedge D_2} \sum_{j \in \cB_i} \int_{-\infty}^{\infty} \Exp{ e^{-\img t \frac{w - Z(i,j)}{h}} \phi_K(t) \frac{ \phi_N(\frac{t}{h}) -\left[ \hat{\phi}_{N,i}(\frac{t}{h}) + \rho \right]}{\phi_N(\frac{t}{h}) \left[ \hat{\phi}_{N,i}(\frac{t}{h}) + \rho \right]}}dt~dw \nonumber\\ &= \int_{D_1}^{z \wedge D_2} \sum_{j \in \cB_i} \int_{-\infty}^{\infty} e^{-\img t \frac{w}{h}}\Exp{ e^{\img \frac{t}{h}Z(i,j)}} \phi_K(t) \frac{ \phi_N(\frac{t}{h}) -\left[ \hat{\phi}_{N,i}(\frac{t}{h}) + \rho \right]}{\phi_N(\frac{t}{h}) \left[ \hat{\phi}_{N,i}(\frac{t}{h}) + \rho \right]}dt~dw. \label{eqn:term.integral2} \end{align} Recall that $\hat{\phi}_{N, i}$ estimates $\phi_N$ using data other than those from the $i$-th row, and hence, $Z(i,j)$ is independent of $\hat{\phi}_{N,i}$. Here, $\mathbb{E}\big[ e^{\img \frac{t}{h}Z(i,j)}\big]$ is the characteristic function of $Z(i,j)$ evaluated at $\frac{t}{h}$.
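The next step rests on the factorization of the characteristic function of an independent sum. A quick Monte Carlo sketch in Python illustrates it; the uniform signal and Gaussian noise below are arbitrary stand-ins for $A$ and $N$ chosen for this illustration, not distributions taken from the model:

```python
import numpy as np

# Check that E[e^{itZ}] = phi_A(t) * phi_N(t) for an independent sum Z = A + N.
rng = np.random.default_rng(0)
n = 200_000
A = rng.uniform(-1.0, 1.0, size=n)   # A ~ Uniform[-1, 1], phi_A(t) = sin(t)/t
Nn = rng.normal(0.0, 0.5, size=n)    # N ~ Normal(0, 0.5^2), phi_N(t) = exp(-0.125 t^2)
Z = A + Nn

for t in (0.3, 1.0, 2.5):
    phi_Z_hat = np.mean(np.exp(1j * t * Z))             # empirical char. function of Z
    phi_prod = (np.sin(t) / t) * np.exp(-0.125 * t**2)  # product of the closed forms
    assert abs(phi_Z_hat - phi_prod) < 1e-2             # within Monte Carlo error
```

The tolerance only accounts for the Monte Carlo sampling error at $2 \times 10^5$ samples; the identity itself is exact.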
Since $Z = A + N$ is the independent sum of $A \sim F^{(i)}$ and $N$, the characteristic function of $Z$ factors as the product of the two, i.e., \[ \Exp{ e^{\img \frac{t}{h}Z(i,j)}} = \phi_{Z(i,j)} \left( \frac{t}{h} \right) = \phi_{X^{(i)}}\left( \frac{t}{h} \right) \phi_N\left( \frac{t}{h} \right), \] where $\phi_{X^{(i)}}$ denotes the characteristic function of the signal distribution $F^{(i)}$. Therefore, \begin{align*} & Eq.\eqref{eqn:term.integral2} \\ &= \int_{D_1}^{z \wedge D_2} \sum_{j \in \cB_i} \int_{-\infty}^{\infty} e^{-\img t \frac{w}{h}}\phi_{X^{(i)}}\Big(\frac{t}{h}\Big)\phi_N\Big(\frac{t}{h}\Big) \phi_K(t) \frac{ \phi_N(\frac{t}{h}) -\left[ \hat{\phi}_{N,i}(\frac{t}{h}) + \rho \right]}{\phi_N(\frac{t}{h}) \left[ \hat{\phi}_{N,i}(\frac{t}{h}) + \rho \right]}dt~dw\\ &= \int_{D_1}^{z \wedge D_2} \sum_{j \in \cB_i} \int_{-\infty}^{\infty} e^{-\img t \frac{w}{h}}\phi_{X^{(i)}}\Big(\frac{t}{h}\Big) \phi_K(t) \frac{ \phi_N(\frac{t}{h}) -\left[ \hat{\phi}_{N,i}(\frac{t}{h}) + \rho \right]}{ \hat{\phi}_{N,i}(\frac{t}{h}) + \rho }dt~dw. \end{align*} In short, \begin{align} &\bigg| \sup_{z \in \Reals} \Exp{\hat{F}^{(i)}(z) - \tilde{F}^{(i)}(z)} \bigg| \nonumber\\ &\qquad\leq \Bigg| \sup_{z \in \Reals} \int_{D_1}^{z \wedge D_2} \frac{1}{ h |\cB_i|} \sum_{j \in \cB_i} \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-\img t \frac{w}{h}}\phi_{X^{(i)}}\Big(\frac{t}{h}\Big) \phi_K(t)\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad \times \frac{ \phi_N(\frac{t}{h}) -\left[ \hat{\phi}_{N,i}(\frac{t}{h}) + \rho \right]}{ \hat{\phi}_{N,i}(\frac{t}{h}) + \rho }dt~dw \Bigg| \nonumber\\ &\qquad \leq \frac{D_2 - D_1}{ 2 \pi h } \int_{-\infty}^{\infty} \left| \phi_{X^{(i)}}\Big(\frac{t}{h}\Big) \phi_K(t) \frac{ \phi_N(\frac{t}{h}) -\left[ \hat{\phi}_{N,i}(\frac{t}{h}) + \rho \right]}{ \hat{\phi}_{N,i}(\frac{t}{h}) + \rho } \right| dt \nonumber\\ &\qquad \leq \frac{D_2 - D_1}{ 2 \pi h } \int_{-\infty}^{\infty} \bigg| \phi_{X^{(i)}}\Big(\frac{t}{h}\Big) \bigg| \bigg| \phi_K(t) \bigg| \left| \frac{ \phi_N(\frac{t}{h}) -\left[ \hat{\phi}_{N,i}(\frac{t}{h}) + \rho \right]}{
\hat{\phi}_{N,i}(\frac{t}{h}) + \rho } \right| dt \nonumber\\ &\qquad \leq \frac{D_2 - D_1}{ 2 \pi h } \int_{-1}^{1} K_{max} \left| \frac{ \phi_N(\frac{t}{h}) -\left[ \hat{\phi}_{N,i}(\frac{t}{h}) + \rho \right]}{ \hat{\phi}_{N,i}(\frac{t}{h}) + \rho } \right| dt \label{eqn:K}\\ &\qquad \leq \frac{K_{max}(D_2 - D_1)}{ \pi h } \max_{t \in [-1, 1]} \left| \frac{ \phi_N(\frac{t}{h}) -\left[ \hat{\phi}_{N,i}(\frac{t}{h}) + \rho \right]}{ \hat{\phi}_{N,i}(\frac{t}{h}) + \rho } \right|. \label{eqn:bias_bound} \end{align} Eq. \eqref{eqn:K} follows from our assumption that the support of $\phi_K$ is contained within $[-1, 1]$ and that there exists $K_{max}=\max_{t \in [-1,1]} \left| \phi_K(t) \right| < \infty$. To further simplify the upper bound in Eq. \eqref{eqn:bias_bound}, we remark that \[ \frac{ \phi_N(\frac{t}{h}) -\left[ \hat{\phi}_{N,i}(\frac{t}{h}) + \rho \right]}{ \hat{\phi}_{N,i}(\frac{t}{h}) + \rho } = \frac{ \phi_N(\frac{t}{h}) - \hat{\phi}_{N,i}(\frac{t}{h}) - \rho }{ \phi_N(\frac{t}{h}) - \left[ \phi_N(\frac{t}{h}) - \hat{\phi}_{N,i}(\frac{t}{h}) - \rho \right] }. \] From the supersmooth assumption on the noise, for any $t \in [-1, 1]$, \begin{align} \phi_N \left(\frac{t}{h} \right) & \geq \frac{1}{B}\exp\left(- \gamma \left|\frac{t}{h}\right|^{\beta} \right) = \frac{1}{B}\exp \left(- \frac{1}{4} |t|^{\beta} \log |\cB_i| \right) \nonumber \\ & = \frac{1}{B} |\cB_i|^{-\frac{1}{4}|t|^{\beta}} \geq \frac{1}{B} |\cB_i|^{-\frac{1}{4}}. \end{align} The kernel bandwidth parameter is chosen as $h = (4\gamma)^{\frac{1}{\beta}}(\log |\cB_i|)^{-\frac{1}{\beta}}$. The ridge parameter is $\rho = |\cB_i|^{-\frac{7}{24}}$, and $\left| \phi_N(\frac{t}{h}) - \hat{\phi}_{N,i}(\frac{t}{h}) \right|$ is sufficiently small when conditioned on $\Ephi$ (see Appendix \ref{sec:conditioning} for the definition of the event $\Ephi$).
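The substitution of the bandwidth $h$ into the supersmooth lower bound can be checked mechanically. The Python sketch below does so for Gaussian noise, which is supersmooth with $\beta = 2$, $\gamma = \sigma^2/2$, $B = 1$; these constants are illustrative choices of ours, not values from the text:

```python
import math

# Verify the chain phi_N(t/h) >= exp(-gamma|t/h|^beta)/B = n^{-|t|^beta/4}/B
#                               >= n^{-1/4}/B  for |t| <= 1,
# with the bandwidth h = (4 gamma)^{1/beta} (log n)^{-1/beta}.
sigma, beta, B = 1.3, 2.0, 1.0
gamma = sigma**2 / 2  # Gaussian noise: phi_N(t) = exp(-sigma^2 t^2 / 2)

for n in (100, 10_000, 1_000_000):            # n plays the role of |B_i|
    h = (4 * gamma) ** (1 / beta) * math.log(n) ** (-1 / beta)
    for t in (-1.0, -0.4, 0.1, 0.7, 1.0):
        phi = math.exp(-(sigma * t / h) ** 2 / 2)          # phi_N(t/h)
        supersmooth = math.exp(-gamma * abs(t / h) ** beta) / B
        after_subst = n ** (-abs(t) ** beta / 4) / B       # same quantity, h plugged in
        worst_case = n ** (-1 / 4) / B                     # bound over |t| <= 1
        assert phi >= supersmooth - 1e-12
        assert abs(supersmooth - after_subst) <= 1e-9 * after_subst
        assert after_subst >= worst_case - 1e-12
```

For the Gaussian case the first inequality holds with equality; the point of the sketch is that the exponent bookkeeping after substituting $h$ comes out exactly as stated.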
Since $\left| \frac{\delta}{1-\delta} \right| \leq 2 \left|\delta\right|$ given that $\left|\delta\right| \leq \frac{1}{2}$, \begin{align*} \max_{t \in [-1, 1]} \left| \frac{ \phi_N(\frac{t}{h}) - \hat{\phi}_{N,i}(\frac{t}{h}) - \rho}{ \hat{\phi}_{N,i}(\frac{t}{h}) + \rho } \right| &\leq 2 \max_{t \in [-1, 1]} \left|\phi_N \left(\frac{t}{h}\right) - \hat{\phi}_{N,i}\left(\frac{t}{h}\right) - \rho \right| \\ &\leq 2 \max_{t \in [-1, 1]} \left|\phi_N \left(\frac{t}{h}\right) - \hat{\phi}_{N,i}\left(\frac{t}{h}\right) \right| + 2\rho\\ &= 2 \max_{t \in [-\frac{1}{h}, \frac{1}{h}]} \left|\phi_N \left(t\right) - \hat{\phi}_{N,i}\left(t\right) \right| + 2\rho. \end{align*} Plugging this expression into Eq. \eqref{eqn:bias_bound} concludes the proof. \end{proof} \subsection{Concentration of $\hat{F}$}\label{sec:concentration_hat} \begin{lemma}\label{lem:concentration_hat} For each $i \in [m]$, the kernel smoothed ECDF $\hat{F}^{(i)}$ defined as in Eq. \eqref{eqn:ECDF_unknown_noise} concentrates around its expectation at each point, i.e., $\forall z \in \left[ D_1, D_2 \right]$, \begin{align*} \Prob{\left| \hat{F}^{(i)}(z) - \Exp{\hat{F}^{(i)}(z)} \right| > t} &\leq 2\exp\left( \frac{- |\cB_i|^{5/12} }{2 C_4^2 \left( \log |\cB_i| \right)^{\frac{2}{\beta}} } t^2 \right). \end{align*} \end{lemma} \begin{proof} [Proof of Lemma \ref{lem:concentration_hat}] Recall that when conditioned on $\frow{i}$, the kernel smoothed ECDF $\hat{F}^{(i)}$ evaluated at $z$ is a function of $n_i$ independent random variables $\{Z(i, j)\}_{j \in \cB_i}$, i.e., when $z$ is fixed, $\hat{F}^{(i)}(z): \Reals^{\left| \cB_i\right|} \to \Reals$ such that \begin{align*} \hat{F}^{(i)}(z)\left[ Z(i,j_1), \ldots, Z(i,j_{n_i}) \right] = \int_{D_1}^{z \wedge D_2} \frac{1}{h n_i} \sum_{j\in \cB_i} \hat{L} \left( \frac{w- Z(i,j)}{h} \right) dw, \end{align*} where $\hat{L}(z) = \frac{1}{2\pi} \int e^{-\img tz} \frac{\phi_K(t)}{\hat{\phi}_N\left(\frac{t}{h}\right)+\rho}dt$.
We can show that $\hat{F}^{(i)}(z)$, considered as a function of the measurements $\left\{ Z(i,j_1), \ldots, Z(i,j_{n_i}) \right\}$, satisfies the bounded difference condition (see Eq. \eqref{eqn:bounded_difference}), following the same approach as in the proof of Lemma \ref{lem:concentration_tilde}. Let $\zeta^{n} = (\zeta_1, \ldots, \zeta_n)$ and $\zeta^{n}_j = (\zeta_1, \ldots, \zeta_j', \ldots, \zeta_{n})$ be two $n$-tuples of real numbers, which differ only at the $j$-th position. Then \begin{align} &\hat{F}^{(i)}(z)[\zeta^{n}] -\hat{F}^{(i)}(z)[\zeta^n_j] \nonumber\\ &\qquad = \frac{1}{h n} \int_{D_1}^{z \wedge D_2} \hat{L} \left( \frac{w- \zeta_j}{h} \right) - \hat{L} \left( \frac{w- \zeta'_j}{h} \right) dw \nonumber\\ &\qquad =\frac{1}{h n} \int_{D_1}^{z \wedge D_2} \frac{1}{2\pi} \int \left(e^{-\img t \frac{w- \zeta_j}{h} } - e^{-\img t \frac{w- \zeta'_j}{h} }\right) \frac{\phi_K(t)}{\hat{\phi}_N\left(\frac{t}{h}\right) + \rho}dt~ dw \nonumber\\ &\qquad \leq \frac{1}{2\pi h n} \int_{D_1}^{z \wedge D_2} \int \left|e^{-\img t \frac{w- \zeta_j}{h} } - e^{-\img t \frac{w- \zeta'_j}{h} }\right| \left|\frac{\phi_K(t)}{\hat{\phi}_N\left(\frac{t}{h}\right)+ \rho} \right| dt ~dw. \label{eqn:difference_intermediate_hat} \end{align} Because $e^{-\img tz}$ is on the unit circle in the complex plane for any real numbers $t$ and $z$, we have \begin{align*} \left| e^{-\img t \frac{w- \zeta_j}{h} } - e^{-\img t \frac{w- \zeta'_j}{h} } \right| &\leq \bigg|e^{-\img t \frac{w- \zeta_j}{h} }\bigg| + \left| e^{-\img t \frac{w- \zeta'_j}{h} }\right| = 2. \end{align*} Since $\phi_K$ is assumed to have compact support (see Appendix \ref{sec:deconv_results}) within $[-1, 1]$, and the Fourier transform of an $L^1$ function is uniformly continuous, there exists $K_{max}=\max_{t \in [-1,1]} \left| \phi_K(t) \right| < \infty$ such that $\left| \phi_K(t) \right| \leq K_{max}, \forall t$.
From the algorithm description in Section \ref{sec:algorithm_unknown_hat}, $\rho = n^{-7/24}$ (here, $n = |\cB_i|$ is the generic variable that stands for the number of samples in a row). By definition, $\hat{\phi}_N \left( \frac{t}{h} \right) \geq 0,~ \forall t$, and hence, $\hat{\phi}_N \left( \frac{t}{h} \right) + \rho \geq \rho,~ \forall t$. We choose the bandwidth parameter $h = \left(4\gamma \right)^{\frac{1}{\beta}}\left( \log n \right)^{-\frac{1}{\beta}}$ following Fan (Theorems \ref{thm:Fan1}, \ref{thm:Fan2}). Plugging these expressions into Eq. \eqref{eqn:difference_intermediate_hat} leads to \begin{align*} Eq. \eqref{eqn:difference_intermediate_hat} &\leq \frac{\left( \log n \right)^{\frac{1}{\beta}}}{2\pi \left( 4\gamma \right)^{\frac{1}{\beta}} n} \int_{D_1}^{z \wedge D_2} \int_{-1}^{1} 2 K_{max} n^{7/24} dt dw\\ &\leq \frac{ K_{max}\left( \log n \right)^{\frac{1}{\beta}}}{\pi \left( 4\gamma \right)^{\frac{1}{\beta}} n^{17/24}} \int_{D_1}^{z \wedge D_2} \left( 1 - (-1) \right)dw\\ &\leq \frac{ 2K_{max} \left( D_2 - D_1 \right) \left( \log n \right)^{\frac{1}{\beta}}}{\pi \left( 4\gamma \right)^{\frac{1}{\beta}} n^{17/24}}\\ &\leq \frac{ 2C_4 \left( \log n \right)^{\frac{1}{\beta}}}{ n^{17/24}}, \quad\text{for any } z \in \left[ D_1, D_2\right]. \end{align*} The last line follows from the definition of $C_4$ and the fact that $B \geq 1$ in our model. Applying McDiarmid's inequality (Lemma \ref{lem:McDiarmid}), we can conclude that for any $z \in \left[ D_1, D_2\right]$, \begin{align*} \Prob{\left| \hat{F}^{(i)}(z)[\zeta^n] - \bbE_{\zeta^n}{\hat{F}^{(i)}(z)[\zeta^n]} \right| \geq t} & \leq 2\exp\left( \frac{- n^{5/12} }{2 C_4^2 \left( \log n \right)^{\frac{2}{\beta}}} t^2 \right). \end{align*} This argument holds for every $i \in [m]$, after replacing the generic variable $n$ with the corresponding $|\cB_i|$. \end{proof} \begin{lemma}[Variance is uniformly small]\label{lem:sup_hat} For each $i \in [m]$, the kernel smoothed ECDF $\hat{F}^{(i)}$ defined as in Eq.
\eqref{eqn:ECDF_unknown_noise} uniformly concentrates to its expectation, i.e., for any positive integer $N$ and for any $t \geq \frac{\Delta^{(i)}\left( D_{2} - D_{1} \right)}{N}$ (we define $\Delta^{(i)} := \frac{K_{max} }{\pi \left( 4\gamma \right)^{\frac{1}{\beta}} } \left| \cB_i \right|^{\frac{7}{24}} \left( \log \left| \cB_i \right| \right)^{\frac{1}{\beta}}$), \begin{align*} &\Prob{ \sup_{z \in [D_1, D_2]} \left| \hat{F}^{(i)}(z) - \Exp{\hat{F}^{(i)}(z)} \right| \geq t}\\ &\qquad \leq 2N \exp\left( \frac{- \left| \cB_i \right|^{5/12} }{2 C_4^2 \left( \log \left| \cB_i \right| \right)^{\frac{2}{\beta}} } \left( t - \frac{\Delta^{(i)}\left( D_{2} - D_{1} \right)}{N} \right)^2 \right), \end{align*} where $\beta, \gamma >0$ are smoothness parameters for the noise, and $K_{max} = \max_{t \in [-1,1]} \left| \phi_K(t) \right|$. \end{lemma} \begin{proof} [Proof of Lemma \ref{lem:sup_hat}] First, we discretize the interval $[D_{1}, D_{2}]$ by constructing a finite $\varepsilon$-net. For any $N \geq 1$, define the set \[ \cT_N := \left\{ D_{1} + \frac{2k - 1}{2N}\left( D_{2} - D_{1} \right),~\forall k \in [N] \right\}. \] Then for any $N > 0$, $\cT_N \subset [D_{1}, D_{2}]$ and it forms a $\frac{\left( D_{2} - D_{1} \right)}{2N}$-net with $\left| \cT_N \right| = N$, i.e., for any $z \in \left[D_{1}, D_{2}\right]$, there exists $k \in [N]$ such that $\left| z - D_{1} - \frac{2k - 1}{2N}\left( D_{2} - D_{1} \right) \right| \leq \frac{ D_{2} - D_{1} }{2N}$.
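The size and covering radius of this net can be verified directly; in the Python sketch below, the endpoints $D_1, D_2$ are arbitrary illustrative values:

```python
import numpy as np

# Check the midpoint net T_N = { D1 + (2k-1)(D2-D1)/(2N) : k = 1..N } from the proof.
D1, D2, N = -0.7, 2.3, 25           # arbitrary endpoints for illustration
k = np.arange(1, N + 1)
T = D1 + (2 * k - 1) * (D2 - D1) / (2 * N)
eps = (D2 - D1) / (2 * N)           # claimed covering radius

z = np.linspace(D1, D2, 100001)     # dense proxy for [D1, D2]
dist = np.min(np.abs(z[:, None] - T[None, :]), axis=1)

assert T.size == N                           # the net has exactly N points
assert np.all((T >= D1) & (T <= D2))         # and lies inside [D1, D2]
assert np.max(dist) <= eps + 1e-12           # every z is within eps of the net
```

The maximum distance is attained at the endpoints $D_1$ and $D_2$, each exactly $\frac{D_2 - D_1}{2N}$ away from the nearest net point.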
We can observe that \begin{align*} \left\| \hat{f}^{(i)} \right\|_{\infty} &= \bigg\|\frac{1}{h \left| \cB_i \right|} \sum_{j \in \cB_i} \hat{L} \left( \frac{z - Z(i,j)}{h} \right) \bigg\|_{\infty}\\ &\leq \frac{1}{h} \left\| \hat{L} \right\|_{\infty}\\ &= \frac{1}{2\pi h} \left\| \int_{-\infty}^{\infty} e^{-\img tz} \frac{\phi_K(t)}{\hat{\phi}_{N,i}\left( \frac{t}{h} \right) + \rho } dt \right\|_{\infty}\\ &\leq \frac{1}{2\pi h} \int_{-\infty}^{\infty} \left| e^{-\img tz} \frac{\phi_K(t)}{\hat{\phi}_{N,i}\left( \frac{t}{h} \right) + \rho } \right| dt\\ &\leq \frac{1}{2\pi h} \int_{-\infty}^{\infty} \left| e^{-\img tz} \frac{\phi_K(t)}{\rho } \right| dt\\ &\leq \frac{1}{2\pi h} \int_{-1}^1 K_{max} \left| \cB_i \right|^{\frac{7}{24}} dt\\ &= \frac{ \left( \log \left| \cB_i \right| \right)^{\frac{1}{\beta}}}{2\pi \left( 4\gamma \right)^{\frac{1}{\beta}} } \int_{-1}^1 K_{max} \left| \cB_i \right|^{\frac{7}{24}} dt \qquad \because h = \left( 4\gamma\right)^{\frac{1}{\beta}} \left( \log \left| \cB_i \right| \right)^{-\frac{1}{\beta}} \\ &\leq \frac{K_{max} }{\pi \left( 4\gamma \right)^{\frac{1}{\beta}} } \left| \cB_i \right|^{\frac{7}{24}} \left( \log \left| \cB_i \right| \right)^{\frac{1}{\beta}} . \end{align*} Let $\Delta^{(i)}$ denote the upper bound in the last line. Since this upper bound is universal for all realizations of the samples, $\left\| \Exp{ \hat{f}^{(i)} } \right\|_{\infty} \leq \Delta^{(i)}$, too. Then $\left\| \hat{f}^{(i)} - \Exp{ \hat{f}^{(i)} } \right\|_{\infty} \leq 2\Delta^{(i)}$ and it follows from the definition of $\hat{F}^{(i)}$ (see Eq. \eqref{eqn:ECDF_unknown_noise}) that \[ \sup_{z \in [D_1, D_2]} \bigg| \hat{F}^{(i)}(z) - \Exp{\hat{F}^{(i)}(z)} \bigg| \leq \sup_{z \in \cT_N} \bigg| \hat{F}^{(i)}(z) - \Exp{\hat{F}^{(i)}(z)} \bigg| + \frac{\Delta^{(i)}\left( D_{2} - D_{1} \right)}{N}.
\] Therefore, if $\left| \hat{F}^{(i)}(z) - \Exp{\hat{F}^{(i)}(z)} \right| \leq \varepsilon$ for all $z \in \cT_N$, the supremum over the whole domain is bounded above up to an additional term as $$\sup_{z \in [D_{1}, D_{2}]} \left| \hat{F}^{(i)}(z) - \Exp{\hat{F}^{(i)}(z)} \right| \leq \varepsilon + \frac{\Delta^{(i)}\left( D_{2} - D_{1} \right)}{N}.$$ An application of the union bound on the contraposition of the previous statement yields \begin{align*} &\Prob{ \sup_{z \in [D_{1}, D_{2}]} \left| \hat{F}^{(i)}(z) - \Exp{\hat{F}^{(i)}(z)} \right| \geq t}\\ &\qquad \leq \Prob{ \sup_{z \in \cT_N} \left| \hat{F}^{(i)}(z) - \Exp{\hat{F}^{(i)}(z)} \right| \geq t - \frac{\Delta^{(i)}\left( D_{2} - D_{1} \right)}{N} }\\ &\qquad \leq \sum_{z \in \cT_N} \Prob{ \left| \hat{F}^{(i)}(z) - \Exp{\hat{F}^{(i)}(z)} \right| \geq t - \frac{\Delta^{(i)}\left( D_{2} - D_{1} \right)}{N} }\\ &\qquad \leq 2N \exp\left( \frac{- \left| \cB_i \right|^{5/12} }{2 C_4^2 \left( \log \left| \cB_i \right| \right)^{\frac{2}{\beta}} } \left( t - \frac{\Delta^{(i)}\left( D_{2} - D_{1} \right)}{N} \right)^2 \right). \end{align*} \end{proof} \subsection{Conditioning Events}\label{sec:conditioning} For the analysis, we define the following conditioning events. \begin{align*} \EJ & \equiv \bigg\{ |J| \geq \frac{1}{4} n \bigg\},\\ \EcTi &\equiv \bigg\{ |\cT_i| \geq \frac{1}{512} mnp \bigg\},\\ \EcT &\equiv \bigg\{ |\cT| \leq mnp \bigg\},\\ \EdA &\equiv \bigg\{ \big| A(i,j_1) - A(i,j_2) \big| \leq \frac{c_{\Delta A}}{\sqrt{mp}} + \frac{2L\sqrt{2}}{\sqrt{np}}(1 + \sqrt[4]{np}) , ~\forall (i,j_1, j_2) \in \cT \bigg\},\\ \EdN &\equiv \bigg\{ \big| N(i,j_1) - N(i,j_2) \big| \leq 4\sigma \sqrt{\log (4 mnp) } ,~ \forall (i,j_1, j_2) \in \cT \bigg\},\\ \Ephi &\equiv \bigg\{ \sup_{t \in [-\frac{1}{h}, \frac{1}{h}]} \big| \hat{\phi}_{N, i}(t) - \phi_N(t) \big| \leq s_{\phi} \bigg\}.
\end{align*} Here, $c_{\Delta A} = 8\sqrt{\pi}\left( \frac{\sqrt{e^{C_1}} + \sqrt{2}}{\sqrt{C_1}} + \frac{\sqrt{e^{C_2}} + \sqrt{2}}{\sqrt{C_2}} \right)$ and $s_{\phi} = s_1 + s_2$, where $s_1 = \frac{8 \sigma (\log | \cB_i |)^{\frac{1}{\beta}} }{(4\gamma)^{\frac{1}{\beta}}} \frac{ \sqrt{ \log (4mnp)} }{(mnp)^{\frac{1}{4}} }$ and $s_2 = \frac{2 (\log | \cB_i |)^{\frac{1}{\beta}}}{(4\gamma)^{\frac{1}{\beta}}} \left[ \frac{c_{\Delta A}}{\sqrt{mp}} + \frac{2L\sqrt{2}}{\sqrt{np}}(1 + \sqrt[4]{np}) \right]$. We analyze the probabilities of these conditioning events, which will be used in the proof of Lemma \ref{lem:noisy_unknown_cdf} in the next section. We may assume $m, n \gg 1$ so that $mp \geq 8 \ln 2$ and $np \geq 48 > 32 \ln 2$. These assumptions are arbitrary and can be removed; their only purpose is to simplify the following probabilistic bounds. \medskip\noindent \em{ 1. $\EJ$: $\Prob{\EJ^c}$ is small. } Since $mp \geq 8 \ln 2$, $ \exp\left( - \frac{mp}{8} \right) \leq \frac{1}{2}$. By Lemma \ref{lem:IandJ}, \begin{align} \Prob{\EJ^c} &\leq \mathbb{P}\bigg(|J| \leq \frac{n \left[ 1 - \exp\left( - \frac{mp}{8} \right) \right]}{2} \bigg) \nonumber\\ &\leq \exp\bigg( - \frac{n \left[ 1 - \exp\left( - \frac{mp}{8} \right) \right]}{8} \bigg) \nonumber\\ &\leq \exp\bigg( - \frac{n}{16} \bigg). \label{eqn:EJc} \end{align} \medskip\noindent \em{ 2. $\EcTi$: $\Prob{\EcTi^c \big| \EJ}$ is small. } Conditioned on $\EJ$, $|J| \geq \frac{1}{4}n$ and $|J|p \geq \frac{np}{4}$. Therefore, \begin{align*} &\frac{m \left[ 1 - \exp\left( - \frac{|J|p}{8} \right) \right]}{2} - 1 \geq \frac{m}{4} - 1 \geq \frac{m}{8}, \quad \text{and }\\ &\Bigg\lceil \frac{\frac{|J|p}{2} - 1 - \lfloor \sqrt{\frac{|J|p}{2} }\rfloor}{2}\Bigg\rceil \geq \Bigg\lceil \frac{\frac{np}{8} - 1 - \lfloor \sqrt{\frac{np}{8} }\rfloor}{2}\Bigg\rceil \geq \frac{1}{4} \left( \frac{np}{8} - 1 \right) \geq \frac{np}{64}.
\end{align*} For any $i \in [m]$, Lemma \ref{lem:T_large} asserts that \begin{align} \Prob{\EcTi^c \big| \EJ} &\leq \Prob{ \left| \cT_i \right| < \Bigg(\frac{m \left[ 1 - \exp\left( - \frac{|J|p}{8} \right) \right]}{2} - 1 \Bigg) \Bigg\lceil \frac{\frac{|J|p}{2} - 1 - \lfloor \sqrt{\frac{|J|p}{2} }\rfloor}{2}\Bigg\rceil \Bigg| \EJ } \nonumber\\ &\leq \exp\Bigg( - \frac{m \left[ 1 - \exp\left( - \frac{|J|p}{8} \right) \right]}{8} \Bigg)\Bigg|_{|J| \geq \frac{1}{4}n} \nonumber\\ &\leq \exp\bigg( - \frac{m}{16} \bigg). \label{eqn:EcTic} \end{align} \medskip\noindent \em{ 3. $\EcT$: $\Prob{\EcT^c}$ is small. } Lemma \ref{lem:T_not_too_big} ensures that \begin{equation}\label{eqn:EcTc} \Prob{\EcT^c} \leq \exp \left( - \frac{mnp}{3} \right). \end{equation} \medskip\noindent \em{ 4. $\EdA$: $\Prob{ \EdA^c | \EJ }$ is small. } Conditioned on $\EJ$, $|J| \geq \frac{n}{4}$. Hence, \begin{align*} & L\sqrt{\frac{2}{\left| J \right| p}}+ 4L \Qf\left(\frac{mp}{2}\right) \\ &\quad \leq \frac{2L\sqrt{2}}{\sqrt{np}} + 8L \sqrt{\pi} \left( \sqrt{\frac{2}{C_1 mp}} + \sqrt{\frac{2}{C_2 mp}} +\sqrt{\frac{e^{C_1}}{C_1 mp}} + \sqrt{\frac{e^{C_2}}{C_2 mp}} \right)\\ &\quad = \frac{2L\sqrt{2}}{\sqrt{np}} + \frac{c_{\Delta A}}{\sqrt{mp}}. \end{align*} Then $\frac{c_{\Delta A}}{\sqrt{mp}} + \frac{2L\sqrt{2}}{\sqrt{np}}(1 + \sqrt[4]{np}) - \left[ L\sqrt{\frac{2}{\left| J \right| p}}+ 4L \Qf\left(\frac{mp}{2}\right) \right] \geq \frac{2L\sqrt{2}}{\sqrt[4]{np}}$. Note that $\Qf\left(\frac{mp}{2}\right) > 0$ and hence, by Lemma \ref{lem:dA_max}, \begin{align} \Prob{ \EdA^c | \EJ } &\leq n \exp\left( - \frac{n}{8L^2} \left( \frac{2L\sqrt{2}}{\sqrt[4]{np}} \right)^2 \right) + n \exp\left( -\frac{n}{12L} \left(\frac{2L\sqrt{2}}{\sqrt[4]{np}} \right)\right) \nonumber\\ &\leq n \exp\bigg( - n^{\frac{1}{2}} \bigg) + n \exp\bigg( -\frac{1}{3\sqrt{2}} n^{\frac{3}{4}} \bigg).
\label{eqn:EdAc} \end{align} We used the fact that $J \subset [n]$ implies $|J| \leq n$, that $|J| \geq \frac{n}{4}$ when conditioned on $\EJ$, and that $p \leq 1$. \medskip\noindent \em{ 5. $\EdN$: $\Prob{ \EdN^c \big| \EcTi, \EcT}$ is small. } Conditioned on $\EcTi$, $|\cT| \geq |\cT_i| \geq \frac{mnp}{512}$, while $\EcT$ ensures $|\cT| \leq mnp$. Recall that Lemma \ref{lem:dN_max} ascertains that $\dNi$ does not exceed $4\sigma \sqrt{\log 4 |\cT| }$ with high probability as \[ \Prob{ \dNi > 4\sigma \sqrt{\log 4 |\cT| }} \leq \frac{1}{4 |\cT|}. \] If we combine this probabilistic bound with the conditioning events, then the following upper bound can be achieved: \begin{align} \Prob{\EdN^c | \EcTi, \EcT} &\leq \Prob{ \dNi > 4\sigma \sqrt{\log 4 |\cT| } | \EcTi, \EcT} \nonumber\\ & \leq \frac{1}{4 |\cT|} \Bigg|_{\EcTi, \EcT} \nonumber\\ &\leq \frac{128}{mnp}. \label{eqn:EdNc} \end{align} \medskip\noindent \em{ 6. $\Ephi$: $\Prob{ \Ephi^c \big| \EcTi, \EdA, \EdN}$ is small. } Conditioned on $\EcTi, \EdA, \EdN$, \begin{align*} |\cT_i| &\geq \frac{mnp}{512}\\ \dAi &\leq \frac{c_{\Delta A}}{\sqrt{mp}} + \frac{2L\sqrt{2}}{\sqrt{np}}(1 + \sqrt[4]{np})\\ \dNi &\leq 4\sigma \sqrt{\log (4 mnp) }. \end{align*} Now the length of our interval is $\Lambda = \frac{1}{h} = \left(\frac{\log | \cB_i | }{4\gamma}\right)^{\frac{1}{\beta}}$. If $mnp \gg 1$ so that $\log (4mnp) (\log | \cB_i |)^{\frac{1}{\beta}} \geq \frac{B (4\gamma)^{\frac{1}{\beta}}}{4 \sigma}$, then \begin{align*} \left\| \Delta^{* (i)}_{N_1, \frac{1}{h}} \right\|_{\infty} &= \frac{1}{N_1} \left[ \frac{1}{h^2} \dNi^2 + \frac{2}{h} \sigma B \right] \leq \frac{32 \sigma^2 (\log | \cB_i |)^{\frac{2}{\beta}}}{N_1 (4\gamma)^{\frac{2}{\beta}}} \log (4mnp) . \end{align*} Let $N_1 = \sqrt{mnp} $, and $s_1 = \frac{8 \sigma (\log | \cB_i |)^{\frac{1}{\beta}} }{(4\gamma)^{\frac{1}{\beta}}} \frac{ \sqrt{ \log (4mnp)} }{(mnp)^{\frac{1}{4}} }$.
\begin{align*} &2N_1 \exp \left(- \frac{\big| \cT_i \big| }{2} \left( s_1^2 - \left\| \Delta^{* (i)}_{N_1, \Lambda} \right\|_{\infty} \right)^2 \right) \\ &\qquad \leq 2 \sqrt{mnp} \exp \left(- \frac{mnp}{1024} \left( \frac{32 \sigma^2 (\log | \cB_i |)^{\frac{2}{\beta}} }{(4\gamma)^{\frac{2}{\beta}}} \frac{\log (4mnp)}{ \sqrt{mnp}} \right)^2 \right)\\ &\qquad = \exp \left(- \frac{\sigma^4 (\log | \cB_i |)^{\frac{4}{\beta}} }{(4\gamma)^{\frac{4}{\beta}}} \log^2 (4mnp) + \log (4mnp) \right). \end{align*} Similarly, (assume $N_2 \geq 2$) \begin{align*} \left\| \Delta^{(i)}_{N_2, \frac{1}{h}} \right\|_{\infty} &= \frac{N_2 + 2}{2N_2 h^2} \dAi^2 + \frac{2}{N_2 h^2} \dAi \dNi\\ &\leq \frac{(\log | \cB_i |)^{\frac{2}{\beta}}}{(4\gamma)^{\frac{2}{\beta}}} \left[ \frac{c_{\Delta A}}{\sqrt{mp}} + \frac{2L\sqrt{2}}{\sqrt{np}}(1 + \sqrt[4]{np}) \right]^2\\ &\quad + \frac{8\sigma}{N_2}\frac{(\log | \cB_i |)^{\frac{2}{\beta}}}{(4\gamma)^{\frac{2}{\beta}}} \left[ \frac{c_{\Delta A}}{\sqrt{mp}} + \frac{2L\sqrt{2}}{\sqrt{np}}(1 + \sqrt[4]{np}) \right] \sqrt{\log (4mnp)}. \end{align*} % Let \begin{align*} N_2 & = 8\sigma \sqrt{\log (4mnp)} \left[ \frac{c_{\Delta A}}{\sqrt{mp}} + \frac{2L\sqrt{2}}{\sqrt{np}}(1 + \sqrt[4]{np}) \right]^{-1} \\ & \leq \frac{8\sigma}{c_{\Delta A} + 2L\sqrt{2}} \sqrt{mnp} \sqrt{\log (4mnp)}, \end{align*} and \begin{align*} s_2 &= \frac{2 (\log | \cB_i |)^{\frac{1}{\beta}}}{(4\gamma)^{\frac{1}{\beta}}} \left[ \frac{c_{\Delta A}}{\sqrt{mp}} + \frac{2L\sqrt{2}}{\sqrt{np}}(1 + \sqrt[4]{np}) \right]. 
\end{align*} Then, \begin{align*} &2N_2 \exp \left( - \frac{ |\cT_i| }{ 2 \Lambda^2 \dAi^2} \left( s_2^2 - \left\| \Delta^{(i)}_{N_2, \Lambda} \right\|_{\infty} \right)^2 \right)\\ &\qquad\leq 2N_2 \exp \left( - \frac{ |\cT_i| }{ 2 \Lambda^2 \dAi^2} \frac{4 (\log | \cB_i |)^{\frac{4}{\beta}}}{(4\gamma)^{\frac{4}{\beta}}} \left[ \frac{c_{\Delta A}}{\sqrt{mp}} + \frac{2L\sqrt{2}}{\sqrt{np}}(1 + \sqrt[4]{np}) \right]^4 \right)\\ &\qquad\leq 2N_2 \exp \left( - 2 |\cT_i| \frac{(\log | \cB_i |)^{\frac{2}{\beta}}}{(4\gamma)^{\frac{2}{\beta}}} \left[ \frac{c_{\Delta A}}{\sqrt{mp}} + \frac{2L\sqrt{2}}{\sqrt{np}}(1 + \sqrt[4]{np}) \right]^2 \right)\\ &\qquad\leq 2N_2 \exp \left( - \frac{mnp}{256} \frac{(\log | \cB_i |)^{\frac{2}{\beta}}}{(4\gamma)^{\frac{2}{\beta}}} \left[ \frac{c_{\Delta A}}{\sqrt{mp}} + \frac{2L\sqrt{2}}{\sqrt{np}}(1 + \sqrt[4]{np}) \right]^2 \right)\\ &\qquad\leq \exp \Bigg( - \frac{(\log | \cB_i |)^{\frac{2}{\beta}}}{256 (4\gamma)^{\frac{2}{\beta}}} \left[ c_{\Delta A}\sqrt{n} + 2L\sqrt{2m} \right]^2\\ &\qquad\qquad\qquad + \frac{1}{2} \Big( \log{mnp} + \log \log (4mnp) \Big) + \log \frac{16\sigma}{c_{\Delta A}+ 2L\sqrt{2}} \Bigg). \end{align*} All in all, \begin{align} & \Prob{ \Ephi^c \big| \EcTi, \EdA, \EdN} \\ & \qquad \leq \exp \left(- \frac{\sigma^4 (\log | \cB_i |)^{\frac{4}{\beta}} }{(4\gamma)^{\frac{4}{\beta}}} \log^2 (4mnp) + \log (4mnp) \right) \nonumber\\ &\qquad\qquad+ \exp \Bigg( - \frac{(\log | \cB_i |)^{\frac{2}{\beta}}}{256 (4\gamma)^{\frac{2}{\beta}}} \left[ c_{\Delta A}\sqrt{n} + 2L\sqrt{2m} \right]^2 \label{eqn:Ephic}\\ &\qquad\qquad\quad +\frac{1}{2} \Big( \log{mnp} + \log \log (4mnp) \Big) + \log \frac{16\sigma}{c_{\Delta A}+ 2L\sqrt{2}} \Bigg). 
\nonumber \end{align} \subsection{Proof of Lemma \ref{lem:noisy_unknown_CDF}} \begin{proof}[Proof of Lemma \ref{lem:noisy_unknown_CDF}] By the triangle inequality, we have \begin{align*} &\sup_{z \in [D_1, D_2]} \left| \hat{F}^{(i)} (z) - F^{(i)} \right|\\ &\quad= \sup_{z \in [D_1, D_2]} \Big| \hat{F}^{(i)} (z) - \Exp{\hat{F}^{(i)} (z)} + \Exp{\hat{F}^{(i)} (z)} - \Exp{\tilde{F}^{(i)} (z)} \\ &\qquad\qquad\qquad + \Exp{\tilde{F}^{(i)} (z)} - F^{(i)} \Big|\\ &\quad\leq \sup_{z \in [D_1, D_2]} \left| \hat{F}^{(i)} (z) - \Exp{\hat{F}^{(i)} (z)} \right| + \sup_{z \in [D_1, D_2]} \left| \Exp{\hat{F}^{(i)} (z)} - \Exp{\tilde{F}^{(i)} (z)} \right|\\ &\quad\quad + \sup_{z \in [D_1, D_2]} \left| \Exp{\tilde{F}^{(i)} (z)} - F^{(i)} \right|. \end{align*} If $\sup_{z \in [D_1, D_2]} \left| \hat{F}^{(i)} (z) - \Exp{\hat{F}^{(i)} (z)} \right| \leq t_1$, $\sup_{z \in [D_1, D_2]} \left| \Exp{\tilde{F}^{(i)} (z)} - F^{(i)} \right| \leq t_2$, and $\sup_{z \in [D_1, D_2]} \left| \Exp{\hat{F}^{(i)} (z)} - \Exp{\tilde{F}^{(i)} (z)} \right| \leq t_3$, then $\sup_{z \in [D_1, D_2]} \left| \hat{F}^{(i)} (z) - F^{(i)} \right| \leq t_1 + t_2 + t_3$. Applying the union bound on the contrapositive yields \begin{align} &\Prob{\sup_{z \in [D_1, D_2]} \left| \hat{F}^{(i)} (z) - F^{(i)} \right| > t_1 + t_2 + t_3} \nonumber\\ &\qquad \leq \Prob{\sup_{z \in [D_1, D_2]}\left| \hat{F}^{(i)} (z) - \Exp{\hat{F}^{(i)} (z)} \right| > t_1 } \label{eqn:hat_term.1}\\ &\qquad \quad+ \Prob{\sup_{z \in [D_1, D_2]}\left| \Exp{\tilde{F}^{(i)} (z)} - F^{(i)} (z) \right| > t_2 } \label{eqn:hat_term.2}\\ &\qquad \quad + \Prob{\sup_{z \in [D_1, D_2]}\left| \Exp{\hat{F}^{(i)} (z)} - \Exp{\tilde{F}^{(i)} (z)} \right| > t_3 }. \label{eqn:hat_term.3} \end{align} \medskip \noindent {\bf 1. Eq. \eqref{eqn:hat_term.1}:} Eq. \eqref{eqn:hat_term.1} is bounded by Lemma \ref{lem:sup_hat}. We take $N = \frac{1}{2} | \cB_i|^{\frac{1}{6}}$ (rounded to an integer).
Then for any $t_1 \geq \frac{2\Delta^{(i)}\left( D_{2} - D_{1} \right)}{N} = \frac{4K_{max} (D_2 - D_1) }{\pi \left( 4\gamma \right)^{\frac{1}{\beta}} } \left| \cB_i \right|^{-\frac{5}{24}} \left( \log \left| \cB_i \right| \right)^{\frac{1}{\beta}}$, \begin{align*} &\Prob{ \sup_{z \in [D_1, D_2]} \left| \hat{F}^{(i)}(z) - \Exp{\hat{F}^{(i)}(z)} \right| \geq t_1}\\ &\qquad \leq 2N \exp\left( \frac{- \left| \cB_i \right|^{5/12} }{2 C_4^2 \left( \log \left| \cB_i \right| \right)^{\frac{2}{\beta}} } \left( t_1 - \frac{\Delta^{(i)}\left( D_{2} - D_{1} \right)}{N} \right)^2 \right)\\ &\qquad \leq | \cB_i |^{\frac{1}{6}}\exp\left( \frac{- \left| \cB_i \right|^{5/12} }{8 C_4^2 \left( \log \left| \cB_i \right| \right)^{\frac{2}{\beta}} }t_1^2 \right), \end{align*} where $\beta, \gamma >0$ are smoothness parameters for the noise, and $K_{max} = \max_{t \in [-1,1]} \left| \phi_K(t) \right|$. \medskip \noindent {\bf 2. Eq. \eqref{eqn:hat_term.2}:} If we take $t_2 = C_3 \left( \log \left| \cB_i \right| \right)^{-1/\beta}$, the probability in Eq. \eqref{eqn:hat_term.2} becomes $0$ by Lemma \ref{lem:mean_difference_tilde}. \medskip \noindent {\bf 3. Eq. \eqref{eqn:hat_term.3}:} We further partition the probability in Eq. \eqref{eqn:hat_term.3} by conditioning events defined in Appendix \ref{sec:conditioning}. \begin{align*} Eq. \eqref{eqn:hat_term.3} & \leq \Prob{\sup_{z \in [D_1, D_2]}\left| \Exp{\hat{F}^{(i)} (z)} - \Exp{\tilde{F}^{(i)} (z)} \right| > t_3 \Bigg| \Ephi} + \Prob{\Ephi^c}. \end{align*} The first term is bounded by Lemma \ref{lem:hat_tilde}: the conditional probability becomes $0$ if we choose $t_3 = \frac{2K_{max}(D_2 - D_1)}{ \pi h } \left( s_{\phi} + \rho\right)$. It remains to analyze $\Prob{\Ephi^c}$. \begin{align*} \Prob{\Ephi^c} &\leq \Prob{\Ephi^c \Big| \EcTi \cap \EdA \cap \EdN} + \Prob{\EcTi^c \cup \EdA^c \cup \EdN^c}\\ &= \Prob{\Ephi^c \Big| \EcTi \cap \EdA \cap \EdN} + \Prob{\EcTi^c \cup \EdA^c} + \Prob{\EdN^c \cap \EcTi}. 
\end{align*} The first term is small (see Eq. \eqref{eqn:Ephic}). The second term: \begin{align*} \Prob{\EcTi^c \cup \EdA^c} &\leq \Prob{\EJ^c} + \Prob{\EcTi^c \cup \EdA^c \Big| \EJ}\\ &\leq \Prob{\EJ^c} + \Prob{\EcTi^c \Big| \EJ} + \Prob{\EdA^c \Big| \EJ}. \end{align*} See Eqs. \eqref{eqn:EJc}, \eqref{eqn:EcTic}, \eqref{eqn:EdAc}. The third term: \begin{align*} \Prob{\EdN^c \cap \EcTi} &\leq \Prob{\EdN^c \cap \EcTi \cap \EcT} + \Prob{\EcT^c}\\ &= \Prob{\EdN^c \big| \EcTi \cap \EcT} \Prob{\EcTi \cap \EcT} + \Prob{\EcT^c}\\ &\leq \Prob{\EdN^c \big| \EcTi \cap \EcT} + \Prob{\EcT^c}. \end{align*} See Eqs. \eqref{eqn:EdNc} and \eqref{eqn:EcTc}. To sum up, let $t_0 = C_3 \left( \log \left| \cB_i \right| \right)^{-1/\beta} + \frac{2K_{max}(D_2 - D_1)}{ \pi h } \left( s_{\phi} + \rho\right)$. Then we can conclude that for any $i \in [m]$, and for any $t \geq t_0 + \frac{4K_{max} (D_2 - D_1) }{\pi \left( 4\gamma \right)^{\frac{1}{\beta}} } \left| \cB_i \right|^{-\frac{5}{24}} \left( \log \left| \cB_i \right| \right)^{\frac{1}{\beta}}$, \begin{align*} &\Prob{ \sup_{z \in [D_1, D_2]} \left| \hat{F}^{(i)} (z) - F^{(i)}(z) \right| > t}\\ &\qquad \leq | \cB_i |^{\frac{1}{6}} \exp\left( \frac{- \left| \cB_i \right|^{5/12} }{8 C_4^2 \left( \log \left| \cB_i \right| \right)^{\frac{2}{\beta}} }(t - t_0)^2 \right) + \tilde{\Psi}_{m,n,p}\left( | \cB_i | \right). \end{align*} For completeness, we note that the remainder term, $\tilde{\Psi}_{m,n,p}\left( | \cB_i | \right)$ (see Eq. \eqref{eqn:Remainder}), is the sum of the upper bounds in Eqs. \eqref{eqn:EJc}--\eqref{eqn:Ephic}, which vanishes as $m, n \to \infty$. \end{proof} \begin{proof}[Proof of Corollary \ref{coro:unknown_CDF_uniform}] Conditioned on event $\Erow$, it holds for all $i \in [m]$ that $\left| \cB_i \right| \geq \frac{np}{2}$. Similarly, $\left| \cB_i \right| \leq 2np$ for all $i \in [m]$, when conditioned on event $\Erowp$. Therefore, for any $i \in [m]$, and any $t \geq \Tos$, \begin{align*} &\Prob{ \left.
\sup_{z \in [D_1, D_2]} \left| \hat{F}^{(i)} (z) - F^{(i)}(z) \right| > t \right| \Erow, \Erowp}\\ &\qquad \leq (2np)^{\frac{1}{6}}\exp\left( \frac{- \left( \frac{np}{2} \right)^{5/12} }{8 C_4^2 \left( \log (2np) \right)^{\frac{2}{\beta}} }(t - \tos)^2 \right) + \tilde{\Psi}_{m,n,p}\left( \frac{np}{2} \right). \end{align*} \end{proof} \section{Proof of the Main Result}\label{sec:notation} First, we present five key lemmas used in the proofs of the main theorems. The first two concern the consistency and concentration of quantile estimation (see Section \ref{sec:quantile_lemmas}), while the other three concern the reliability of CDF estimation (see Section \ref{sec:CDF_lemmas}). Once these five lemmas are established, the proofs of the main theorems are straightforward; combining a pair of relevant lemmas from each category yields the desired tail probability bounds (Appendix \ref{appendix:main_prob}). Integrating these tail bounds yields the main theorems (see Section \ref{sec:thm_proof} for the concise statements; their full statements and proofs are presented in Appendix \ref{appendix:main_proof}). \paragraph{Notation} Recall that the indicator function of a subset $A$ of a set $X$ is a function $\mathbb{I}_A: X \to \{0, 1\}$ defined as \begin{equation}\label{eqn:Indicator} \mathbb{I}_A (x) = \begin{cases} 1, & \text{if }x \in A,\\ 0, &\text{if } x \not\in A. \end{cases} \end{equation} In this paper, we use the indicator function to define auxiliary random variables which identify whether a certain condition is satisfied or not. The fulfillment of a condition is equivalent to the membership of the outcome in the corresponding event (measurable set) in the language of probability theory. Therefore, when the intensional description of an event $A$ is available and the outcome $\omega$ and the sample space $\Omega$ are obvious, we will write $\Ind{\text{description of }A}$ in lieu of $\mathbb{I}_A(\omega)$ by abuse of notation.
The Heaviside step function $H: \Reals \to \left\{0, \frac{1}{2}, 1 \right\}$ is a linear combination of the indicator functions, defined as \begin{equation}\label{eqn:Heaviside} H(x) = \frac{1}{2} \left[ \Ind{x > 0} + \Ind{ x\geq 0 } \right] = \begin{cases} 1, & \text{if }x > 0,\\ \frac{1}{2}, &\text{if } x = 0,\\ 0, &\text{if } x < 0. \end{cases} \end{equation} With the aid of these, we can define useful auxiliary variables. \begin{align} R(i,j) &= \Ind{M(i,j) = 1},\qquad\qquad \forall (i,j) \in [m] \times [n],\\ Q^i(j_1, j_2) &= H\left( Z(i, j_1) - Z(i, j_2) \right),\quad\forall i \in [m], \forall j_1, j_2 \in [n]. \end{align} Note that $\sum_{j_2 = 1}^n Q^i(j_1, j_2)$ is the number of entries $Z(i, j)$ in row $i$ whose value is smaller than $Z(i, j_1)$, while $Z(i, j_1)$ itself is counted with weight $\frac{1}{2}$. For $i \in [m]$, we let $\cB_i$ denote the set of column indices for which $Z(i,j)$ is observed (similarly, $\cB^j$ denotes the set of row indices for $j \in [n]$, respectively), i.e., \[ \cB_i = \{ j' \in [n]: M(i,j') = 1 \},\qquad \cB^j = \{ i' \in [m]: M(i',j) = 1 \}, \] and let \[n_i = \sum_{j =1}^n R(i,j)\] denote the number of observed entries in row $i$. Suppose that the latent function $g$ is given. \begin{align*} D_{max} &= \sup_{x,y \in [0,1]} g(x,y) = \sup_{x \in [0,1]}g(x,1),\\ D_{min} &= \inf_{x,y \in [0,1]} g(x,y) = \inf_{x \in [0,1]}g(x,0),\\ L &= \sup_{x,y_1 \neq y_2 \in [0,1]} \frac{g(x,y_2) - g(x,y_1)}{y_2 - y_1},\\ l &= \inf_{x,y_1 \neq y_2 \in [0,1]} \frac{g(x,y_2) - g(x,y_1)}{y_2 - y_1}. \end{align*} Let $K$ denote the kernel used in density estimation (see Section \ref{sec:alg_noisy}) with finite support within $[-1, 1]$ in the Fourier domain (see Appendix \ref{sec:kernel}). \[ K_{max}=\max_{t \in [-1,1]} \left| \phi_K(t) \right| < \infty \] denotes the maximum modulus of the kernel used.
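As a concrete illustration of these auxiliary variables, the following Python sketch computes $R$, $\cB_i$, $n_i$, and the mid-rank sum $\sum_{j_2} Q^i(j_1, j_2)$ from a small data matrix $Z$ and observation mask $M$; both matrices are invented here purely for illustration.

```python
import numpy as np

def heaviside(x):
    # H(x) = (1/2)[Ind{x > 0} + Ind{x >= 0}]: 1 if x > 0, 1/2 if x == 0, 0 if x < 0
    return 0.5 * (float(x > 0) + float(x >= 0))

# hypothetical data: Z is the m x n data matrix, M the 0/1 observation mask
Z = np.array([[0.2, 0.9, 0.5],
              [0.4, 0.1, 0.8]])
M = np.array([[1, 1, 1],
              [1, 0, 1]])

R = (M == 1).astype(int)                                    # R(i,j) = Ind{M(i,j) = 1}
B_rows = [np.flatnonzero(M[i]) for i in range(M.shape[0])]  # B_i: observed columns in row i
n_obs = R.sum(axis=1)                                       # n_i = sum_j R(i,j)

def Q(i, j1, j2):
    # Q^i(j1, j2) = H(Z(i, j1) - Z(i, j2))
    return heaviside(Z[i, j1] - Z[i, j2])

# sum_{j2} Q^i(j1, j2): entries of row i smaller than Z(i, j1),
# with Z(i, j1) itself counted with weight 1/2
midrank = sum(Q(0, 1, j2) for j2 in range(Z.shape[1]))
```

Here `midrank` evaluates to $2.5$: the two strictly smaller entries of row $0$ contribute $1$ each, and $Z(0,1)$ contributes $\frac{1}{2}$ through $H(0) = \frac{1}{2}$.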
For future reference, we define two conditioning events on $M$: \begin{equation}\label{eqn:sufficient_overlap} \Erow := \left\{ |\cB_i| \geq \frac{np}{2}, \forall i\in [m]\right\},\qquad \Ecol:= \left\{|\cB^j| \geq \frac{mp}{2}, \forall j\in [n]\right\}. \end{equation} \subsection{Key Lemmas}\label{sec:key_lemmas} \subsubsection{Estimating the Column Quantiles}\label{sec:quantile_lemmas} In this section, we present two lemmas which claim that the quantile estimation is consistent. Moreover, the estimates concentrate around the true values, which turn out to be the column features in our model. Lemma \ref{lem:noiseless_quantile} shows the result under the noiseless setting, while Lemma \ref{lem:noisy_quantile} ascertains consistency despite the existence of noise. In short, we can achieve consistent quantile estimates as long as $\omega\left( \max\{ m^{-1}, n^{-1}\} \right)$ observations are available along each row and column. When we assume uniformly random observation, the quantile estimates are consistent as long as $p = \Omega\left( \max\left\{ \frac{\log n}{m}, \frac{\log m}{n} \right\} \right)$. Essentially, the difference in sample complexity corresponds to the cost of randomization in sampling. \begin{lemma}\label{lem:noiseless_quantile} When there is no noise ($N=0$) in the model, the quantile estimator $\hat{q}(j)$ (see Eq. \eqref{eqn:quantile_est}) concentrates to $\theta_{col}^{(j)}$ with high probability: \[ \Prob{ \left| \hat{q}(j) - \theta_{col}^{(j)} \right| \geq t } \leq 2 \exp\left( -\frac{2|\cB^j|^2 t^2}{\sum_{i \in \cB^j} \frac{1}{|\cB_i|}} \right). \] \end{lemma} Assuming each entry is observed with probability $p$ as in Eq. \eqref{eq:masking}, we can achieve the following uniform and universal upper bound, which does not depend on the sizes of the sets $|\cB_i|$ and $|\cB^j|$.
\begin{corollary}\label{coro:noiseless_quantile_uniform} When conditioned on $\Erow \cap \Ecol$, for any $j \in [n]$, \begin{align*} \Prob{ \left. \left| \hat{q}(j) - \theta_{col}^{(j)} \right| \geq t \right| \Erow \cap \Ecol} \leq 2 \exp \left(- \frac{mnp^2}{2} t^2 \right). \end{align*} \end{corollary} Noiseless quantile estimation relies on the concentration of the sum of i.i.d. indicator variables. However, under the influence of nontrivial noise, the indicator may not be reliable any more. Hence, we need a way to control the effect of noise. The main idea is that if we sum up multiple rows, the noise is diluted by the effect of averaging. Also, the portion of uncontrolled rows gets vanishingly small as $m, n \to \infty$, and $\hat{q}_{marg}$ (see Eq. \eqref{eqn:estimate_marg}) concentrates around $\fcol{j}$. We assume sub-Gaussian noise of mean zero and sub-Gaussian parameter $\sigma$ in this paragraph according to Eq. \eqref{eqn:def_subG_noise}, unless otherwise stated. \begin{lemma}\label{lem:noisy_quantile} The marginal quantile estimator $\hat{q}_{marg}(j)$ concentrates to $\theta_{col}^{(j)}$ with high probability. Specifically, for any $s \geq (mp)^{-1/3}$ and $t \geq 16 s + 32 \exp\left( -c (mp)^{1/3} \right)$, \begin{align*} &\Prob{ \left| \hat{q}_{marg}(j) - \fcol{j} \right| \geq t } \leq 2\exp\left( -\frac{n t^2}{2} \right) + \exp\left(-\frac{nt}{12} \right)\\ &\qquad+ n \exp\left( -\frac{mp}{8} \right) + n \exp\left( - \frac{ns}{3} \right) + 4n \exp\left(-c (mp)s^{2}\right), \end{align*} where $c = \min \left\{\frac{l^2}{16\left( D_{max} - D_{min} \right)^2}, \frac{l^2}{64\sigma^2} \right\}$. \end{lemma} \subsubsection{Estimating the Row CDFs}\label{sec:CDF_lemmas} We will claim uniform concentration bounds on the difference between the true CDF and the estimated CDF. 
In other words, the uniform distance between those two objects is well-controlled by the probability bounds given in Lemmas \ref{lem:noiseless_CDF}, \ref{lem:noisy_known_CDF}, and \ref{lem:noisy_unknown_CDF}. \begin{lemma}[Concentration of noiseless CDF estimation]\label{lem:noiseless_CDF} When there is no noise in the model, the empirical cumulative distribution function (ECDF) $\breve{F}^{(i)}$ (Eq. \eqref{eqn:ECDF_noiseless}) uniformly concentrates to the true CDF $F^{(i)} = g^{-1}_{x=\frow{i}}$, i.e., for each $i \in [m]$, \[ \Prob{\sup_{z \in \Reals} \left| \breve{F}^{(i)}(z) - F^{(i)}(z) \right| > t } \leq 2 e^{-2 n_i t^2}. \] \end{lemma} The lemma above states that the empirical CDF concentrates around the true CDF as the sample size grows. \begin{lemma}[Concentration of noisy CDF estimation with known noise distribution]\label{lem:noisy_known_CDF} When the additive noise in the model is supersmooth and its density is known, the kernel smoothed ECDF $\tilde{F}^{(i)}$ defined as in Eq. \eqref{eqn:ECDF_known_noise} uniformly converges to the true CDF $F^{(i)} = g^{-1}_{x=\frow{i}}$ in probability. To be more specific, for any $t > C \left( \log n_i \right)^{-1/\beta}$, and for any $z \in [-n_i^{1/6}, n_i^{1/6}]$, \begin{align*} &\Prob{\left| \tilde{F}^{(i)} (z) - F^{(i)} \right| > t}\\ & \qquad \leq 2\exp\left( \frac{- \pi^2 \left(4\gamma \right)^{\frac{2}{\beta}}n_i^{1/6} }{8B^2 K^2_{max}\left( \log n_i \right)^{\frac{2}{\beta}}} \left( t -C \left( \log n_i \right)^{-1/\beta} \right)^2 \right), \end{align*} where $n_i = \left| \cB_i \right|$, $\beta, \gamma$ are the smoothness parameters of the noise as in Eq. \eqref{eqn:model_supersmooth}, and $C$ is a constant. \end{lemma} Again, we can obtain a simpler upper bound when conditioned on $\Erow$ (see Eq. \eqref{eqn:sufficient_overlap} for the definition of $\Erow$). \begin{corollary}\label{coro:noisy_CDF_uniform} When conditioned on $\Erow$, for any $i \in [m]$, we have \begin{align*} &\Prob{ \left.
\left| \tilde{F}^{(i)} (z) - F^{(i)} \right| > t \right| \Erow}\\ & \quad \leq 2\exp\left( \frac{- \pi^2 \left(4\gamma \right)^{\frac{2}{\beta}}\left( \frac{np}{2} \right)^{1/6} }{8B^2 K^2_{max}\left( \log n \right)^{\frac{2}{\beta}}} \left( t -C \left( \log \frac{np}{2} \right)^{-1/\beta} \right)^2 \right). \end{align*} \end{corollary} \begin{lemma}[Concentration of noisy CDF estimation with unknown noise distribution]\label{lem:noisy_unknown_CDF} Let $t_0 = C_3 \left( \log \left| \cB_i \right| \right)^{-1/\beta} + \frac{2K_{max}(D_2 - D_1)}{ \pi h } \left( s_{\phi} + \rho\right)$. Then for any $i \in [m]$, and for any $t \geq t_0 + \frac{4K_{max} (D_2 - D_1) }{\pi \left( 4\gamma \right)^{\frac{1}{\beta}} } \left| \cB_i \right|^{-\frac{5}{24}} \left( \log \left| \cB_i \right| \right)^{\frac{1}{\beta}}$, \begin{align*} &\Prob{ \sup_{z \in [D_1, D_2]} \left| \hat{F}^{(i)} (z) - F^{(i)}(z) \right| > t}\\ &\qquad \leq | \cB_i |^{\frac{1}{6}} \exp\left( \frac{- \left| \cB_i \right|^{5/12} }{8 C_4^2 \left( \log \left| \cB_i \right| \right)^{\frac{2}{\beta}} }(t - t_0)^2 \right) + \tilde{\Psi}_{m,n,p}\left( | \cB_i | \right), \end{align*} where the remainder term $\tilde{\Psi}_{m,n,p}\left( | \cB_i | \right)$ (see Eq. \eqref{eqn:Remainder}) vanishes as $m, n \to \infty$. \end{lemma} These lemmas are combined in the next subsection to derive probabilistic tail bounds for the estimators. \subsection{Probabilistic Error Bounds}\label{appendix:main_prob} To achieve vanishing MSE bounds, we require $p = \Omega\left(\frac{\log n}{m} \right)$ (or $\omega\left(\frac{1}{m}\right)$ samples) in the noiseless setting, $p = \Omega\left(\frac{\log n}{m} \right)$ when the noise distribution is known, and $p = \Omega\left(\frac{\log n \log^k n}{m} \right)$ when the noise distribution is unknown; the extra polylogarithmic factor arises from the union bound used in noisy quantile estimation. Although the lemmas used in the proof may vary depending on the noise model, the underlying proof idea remains the same. We will let $F^{(i)}$ denote the inverse of $g\left( \frow{i}, \cdot \right)$, i.e., $F^{(i)}\left( z \right) = y$ if and only if $g\left( \frow{i}, y \right) = z$. Also, we let \begin{align*} \breve{A}(i,j) &= \breve{g}^{(i)}\left(\hat{q}(j) \right),\\ \tilde{A}(i,j) &= \tilde{g}^{(i)}\left(\hat{q}_{marg}(j) \right),\\ \hat{A}(i,j) &= \hat{g}^{(i)}\left(\hat{q}_{marg}(j) \right), \end{align*} for all $(i,j) \in [m] \times [n]$, where $\breve{g}^{(i)}, \tilde{g}^{(i)}, \hat{g}^{(i)}$ respectively denote the quantile functions associated with $\breve{F}^{(i)}, \tilde{F}^{(i)}, \hat{F}^{(i)}$.
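Each of these estimates is a plug-in composition: a row-wise quantile function (a right pseudo-inverse of an estimated CDF) evaluated at an estimated column quantile. The following minimal Python sketch illustrates this step for the noiseless case $\breve{A}(i,j) = \breve{g}^{(i)}(\hat{q}(j))$; the sample values and the helper `ecdf_quantile` are illustrative stand-ins, not the exact constructions of Eqs. \eqref{eqn:quantile_est} and \eqref{eqn:ECDF_noiseless}.

```python
import numpy as np

def ecdf_quantile(samples, q):
    # right pseudo-inverse of the ECDF of `samples`:
    # the smallest observed value z with ECDF(z) >= q
    s = np.sort(np.asarray(samples, dtype=float))
    k = int(np.ceil(q * len(s)))     # number of order statistics at or below level q
    k = min(max(k, 1), len(s))       # clamp to a valid index
    return s[k - 1]

# hypothetical observed entries of row i and an estimated column quantile q_hat(j)
row_obs = [0.4, 0.1, 0.8, 0.3]
q_hat_j = 0.5
A_breve_ij = ecdf_quantile(row_obs, q_hat_j)   # plug-in estimate for entry (i, j)
```

With these toy inputs, `A_breve_ij` is $0.3$: the ECDF of the four observations first reaches level $\frac{1}{2}$ at the second order statistic.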
\subsubsection{Noiseless} \begin{theorem}[Probabilistic bound: noiseless]\label{thm:tail_noiseless} For each $(i,j) \in [m] \times [n]$, \begin{align} &\Prob{\left|\breve{A}(i, j) - A(i,j)\right| > t} \leq 2\exp \left(- \frac{mnp^2}{18L^2} t^2 \right) + 2\exp\left(-\frac{2}{9L^2} n_i t^2\right) \nonumber\\ &\qquad + m \exp\left( -\frac{np}{8} \right) + n \exp\left( -\frac{mp}{8} \right). \label{eqn:tail_noiseless} \end{align} \end{theorem} \begin{proof} Let $\breve{g}^{(i)} = \left(\breve{F}^{(i)}\right)^{-1}$ denote the quantile function (right pseudo-inverse) associated with $\breve{F}^{(i)}$. Note that $A(i,j) = g \left( \frow{i}, \fcol{j} \right)$ and $\breve{A}(i,j) = \breve{g}^{(i)}\left(\hat{q}(j) \right)$. Let $\theta^* := F^{(i)}\left( \breve{A}(i,j) \right) = F^{(i)}\left( \breve{g}^{(i)}\left(\hat{q}(j) \right) \right) $. We can observe that $\left|\theta^* - \hat{q}(j) \right| \leq 2\left\| \breve{F}^{(i)} - F^{(i)} \right\|_{\infty}$; at continuity points of $\breve{F}^{(i)}$, this follows directly from the definition of the uniform norm (in fact, $\left| \theta^* - \hat{q}(j)\right| \leq \left\| \breve{F}^{(i)} - F^{(i)} \right\|_{\infty}$ there). When $\breve{g}^{(i)}\left(\hat{q}(j) \right)$ is a jump discontinuity of $\breve{F}^{(i)}$, we can see that for any $\delta >0$, $ \breve{F}^{(i)}\left( \breve{g}^{(i)}\left(\hat{q}(j) \right) - \delta \right) \leq \hat{q}(j) \leq \breve{F}^{(i)}\left(\breve{g}^{(i)}\left(\hat{q}(j) \right) \right) $. Since $F^{(i)}$ is assumed to be continuous, $\left\| \breve{F}^{(i)} - F^{(i)} \right\|_{\infty} \geq \frac{1}{2}\sup_y \lim_{\delta \to 0+}\breve{F}^{(i)}\left(y \right) - \breve{F}^{(i)}\left(y - \delta \right)$. Therefore, $\left|\theta^* - \hat{q}(j)\right| \leq 2\left\| \breve{F}^{(i)} - F^{(i)} \right\|_{\infty}$.
Since $\breve{A}(i,j) = \breve{g}^{(i)}\left(\hat{q}(j) \right)=g \left( \frow{i}, \theta^* \right)$, and $g$ is $(l, L)$-biLipschitz, \begin{align*} \left|A(i, j) - \breve{A}(i,j)\right| &= \left|g \left( \frow{i}, \fcol{j} \right) - g \left( \frow{i}, \theta^* \right)\right| \\ &\leq L \left| \fcol{j} - \theta^* \right|\\ &\leq L \left( \left| \fcol{j} - \hat{q}(j) \right| + \left| \hat{q}(j) - \theta^* \right| \right)\\ &\leq L \left( \left| \fcol{j} - \hat{q}(j) \right| + 2\left\| \breve{F}^{(i)} - F^{(i)} \right\|_{\infty}\right). \end{align*} If $\left| \fcol{j} - \hat{q}(j) \right| \leq \frac{t}{3L}$ and $\left\| \breve{F}^{(i)} - F^{(i)} \right\|_{\infty} \leq \frac{t}{3L}$, then $\left|A(i,j) - \breve{A}(i,j)\right| \leq t$. We can achieve the following upper bound by applying the union bound on the contraposition. Let $E := \Erow \cap \Ecol$. Then \begin{align*} &\Prob{\left|A(i, j) - \breve{A}(i,j)\right| > t}\\ &\quad\leq \Prob{E^c} + \Prob{ \left. \left| \hat{q}(j) - \theta_{col}^{(j)} \right| > \frac{t}{3L} \right| E }\\ &\qquad + \Prob{ \left. \sup_{z \in \Reals} \left| \breve{F}^{(i)}(z) - F^{(i)}(z) \right| > \frac{t}{3L} \right| E}\\ &\leq 2\exp \left(- \frac{mnp^2}{18L^2} t^2 \right) + 2\exp\left(-\frac{np}{9L^2} t^2\right)\\ &\qquad + m \exp\left( -\frac{np}{8} \right) + n \exp\left( -\frac{mp}{8} \right), \end{align*} where the last inequality follows from Lemma \ref{lem:noiseless_quantile} (Corollary \ref{coro:noiseless_quantile_uniform}) and Lemma \ref{lem:noiseless_CDF}. \end{proof} \subsubsection{When noise distribution is known} We assume that $n$ is sufficiently large. Specifically, we assume the support of the parameter matrix is contained in $\left[ -\frac{np}{2}, \frac{np}{2} \right]$ so that $A(i,j) \in [-n_i^{1/6}, n_i^{1/6}]$, for all index pairs $(i,j) \in [m] \times [n]$ with high probability. This is a technical assumption to exploit the deconvolution results.
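To make the deconvolution step concrete, here is a small numerical sketch of a kernel-deconvolution CDF estimate from noisy samples $y_k = x_k + \varepsilon_k$ with known noise. It is only illustrative: the Gaussian noise model, the triangular Fourier window $\phi_K(t) = (1 - |t|)_+$ (supported in $[-1,1]$), and the bandwidth choice $1/h = (\log n / 4\gamma)^{1/\beta}$ with $\beta = 2$, $\gamma = \sigma^2/2$ are assumptions made for this sketch, not the paper's exact construction.

```python
import numpy as np

def deconv_cdf(y, z_grid, sigma, h):
    # CDF estimate from y_k = x_k + eps_k, eps_k ~ N(0, sigma^2) known,
    # via Fourier inversion of (empirical cf / noise cf) * phi_K(h t),
    # where phi_K(t) = (1 - |t|)_+ has support [-1, 1]
    t = np.linspace(-1.0 / h, 1.0 / h, 801)
    phi_K = np.clip(1.0 - np.abs(h * t), 0.0, None)
    phi_eps = np.exp(-0.5 * (sigma * t) ** 2)             # known noise cf
    emp_cf = np.exp(1j * np.outer(t, np.asarray(y))).mean(axis=1)
    x_grid = np.linspace(z_grid[0] - 3.0, z_grid[-1], 800)
    integrand = emp_cf / phi_eps * phi_K
    # density: f(x) = (1/2pi) * integral of e^{-i t x} integrand(t) dt
    f = (np.exp(-1j * np.outer(x_grid, t)) @ integrand).real
    f *= (t[1] - t[0]) / (2.0 * np.pi)
    F = np.cumsum(f) * (x_grid[1] - x_grid[0])            # integrate density up to x
    return np.interp(z_grid, x_grid, F)

rng = np.random.default_rng(0)
n, sigma = 2000, 0.2
x = rng.uniform(0.0, 1.0, size=n)            # latent CDF is F(z) = z on [0, 1]
y = x + sigma * rng.normal(size=n)
h = 1.0 / np.sqrt(np.log(n) / (2.0 * sigma ** 2))
z = np.array([0.25, 0.5, 0.75])
F_tilde = deconv_cdf(y, z, sigma, h)
```

On this toy example `F_tilde` should land near $(0.25, 0.5, 0.75)$: dividing by the noise characteristic function removes the blur in expectation, at the cost of amplifying high-frequency sampling noise, which is exactly why the Fourier window and the logarithmic bandwidth are needed for supersmooth noise.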
We can achieve a similar probabilistic tail bound even when measurements are noisy. The proof idea is almost the same, except for extra care taken in parsing the consistency results for CDF estimation. Note that both $\tilde{F}^{(i)}$ and $\hat{F}^{(i)}$ are continuous because they are defined by integrating estimated densities. \begin{theorem}[Main theorem: noise distribution is known]\label{thm:tail_noisy_known} For each $(i,j) \in [m] \times [n]$, \begin{align*} &\Prob{\left|A(i, j) - \tilde{A}(i,j)\right| > t}\\ &\leq 2\exp\left( -\frac{n t^2}{8L^2} \right) + \exp\left(-\frac{nt}{24L} \right)\\ &+ 2\exp\left( \frac{-\pi^2 \left(4\gamma \right)^{\frac{2}{\beta}}\left( \frac{np}{2} \right)^{1/6} }{8B^2 K^2_{max}\left( \log n \right)^{\frac{2}{\beta}}} \left( \frac{t}{2L} -C \left( \log \frac{np}{2} \right)^{-1/\beta} \right)^2 \right)\\ & + n \exp\left( - \frac{ns}{3} \right) + 4n \exp\left(-c (mp)s^{2}\right) + m \exp\left( -\frac{np}{8} \right) + n \exp\left( -\frac{mp}{8} \right), \end{align*} where $c = \min \left\{\frac{l^2}{16\left( D_{max} - D_{min} \right)^2}, \frac{l^2}{64\sigma^2} \right\}$, and $C$ is a constant which controls the bias of $\tilde{F}^{(i)}$ (see Lemma \ref{lem:mean_difference_tilde}). \end{theorem} Note that given a constant $s > 0$, as long as the sampling probability is sufficiently large, i.e., $p = \Omega \left( \max \left\{ \frac{\log n}{m}, \frac{\log m}{n} \right\} \right)$, the terms in the last line, which are independent of $t$, exponentially decay to $0$ as $m, n \to \infty$. \begin{proof} Let $\theta^* := F^{(i)}\left( \tilde{A}(i,j) \right) = F^{(i)}\left( \tilde{g}^{(i)}\left(\hat{q}_{marg}(j) \right) \right) $. Since $\tilde{F}^{(i)}$ is continuous, $\left|\theta^* - \hat{q}_{marg}(j)\right| \leq \left\| \tilde{F}^{(i)} - F^{(i)} \right\|_{\infty}$.
By the same line of argument as in the proof of Theorem \ref{thm:tail_noiseless}, since $\tilde{A}(i,j) = \tilde{g}^{(i)}\left(\hat{q}_{marg}(j) \right)=g \left( \frow{i}, \theta^* \right)$, and $g$ is $(l, L)$-biLipschitz, \begin{align*} \left|A(i,j) - \tilde{A}(i,j)\right| &= \left|g \left( \frow{i}, \fcol{j} \right) - g \left( \frow{i}, \theta^* \right)\right| \\ &\leq L \left| \fcol{j} - \theta^* \right|\\ &\leq L \left( \left| \fcol{j} - \hat{q}_{marg}(j) \right| + \left| \hat{q}_{marg}(j) - \theta^* \right| \right)\\ &\leq L \left( \left| \fcol{j} - \hat{q}_{marg}(j) \right| + \left\| \tilde{F}^{(i)} - F^{(i)} \right\|_{\infty}\right). \end{align*} Again, if $\left| \fcol{j} - \hat{q}_{marg}(j) \right| \leq \frac{t}{2L}$ and $\left\| \tilde{F}^{(i)} - F^{(i)} \right\|_{\infty} \leq \frac{t}{2L}$, then $\left|A(i,j) - \tilde{A}(i,j)\right| \leq t$. We can achieve the following upper bound by applying the union bound on the contraposition. For $t > 2L C \left( \log n \right)^{-1/\beta}$, \begin{align*} &\Prob{\left|A(i, j) - \tilde{A}(i,j)\right| > t}\\ &\quad\leq \Prob{ \left| \hat{q}_{marg}(j) - \theta_{col}^{(j)} \right| \geq \frac{t}{2L} } + \Prob{\sup_{z \in \Reals} \left| \tilde{F}^{(i)}(z) - F^{(i)}(z) \right| > \frac{t}{2L} }\\ &\quad\leq \Prob{ \left| \hat{q}_{marg}(j) - \theta_{col}^{(j)} \right| \geq \frac{t}{2L} }\\ &\qquad + \Prob{ \left.
\sup_{z \in \Reals} \left| \tilde{F}^{(i)}(z) - F^{(i)}(z) \right| > \frac{t}{2L} \right| \Erow} + \Prob{\Erow^c}\\ &\quad\leq 2\exp\left( -\frac{n t^2}{8L^2} \right) + \exp\left(-\frac{nt}{24L} \right)\\ &\qquad + 2\exp\left( \frac{-\pi^2 \left(4\gamma \right)^{\frac{2}{\beta}}\left( \frac{np}{2} \right)^{1/6} }{8B^2 K^2_{max}\left( \log n \right)^{\frac{2}{\beta}}} \left( \frac{t}{2L} -C \left( \log \frac{np}{2} \right)^{-1/\beta} \right)^2 \right)\\ &\qquad + n \exp\left( - \frac{ns}{3} \right) + 4n \exp\left(-c (mp)s^{2}\right) + m \exp\left( -\frac{np}{8} \right) + n \exp\left( -\frac{mp}{8} \right), \end{align*} where $c = \min \left\{\frac{l^2}{16\left( D_{max} - D_{min} \right)^2}, \frac{l^2}{64\sigma^2} \right\}$, and $C$ is a constant which controls the bias of $\tilde{F}^{(i)}$ (see Lemma \ref{lem:mean_difference_tilde}). \end{proof} \subsubsection{When noise distribution is also estimated} The analogous tail bound for the case where the noise distribution must also be estimated follows from the same argument: combine the quantile concentration of Lemma \ref{lem:noisy_quantile} with the CDF concentration of Lemma \ref{lem:noisy_unknown_CDF}, which handles the deconvolution with an estimated error characteristic function in the spirit of Delaigle et al. (2008). \subsection{Mean Squared Error; Proof of the Main Theorems}\label{appendix:main_proof} Recall the definition of the mean squared error (see Eq.
\eqref{eqn:MSE}) of the estimator $\varphi$: \begin{align} &MSE\left( \varphi \right) \nonumber\\ &=\Exp{\frac{1}{mn} \sum_{i=1}^m \sum_{j=1}^n \left( \hat{A}(i,j) - A(i,j) \right)^2} \nonumber\\ &=\frac{1}{mn} \sum_{i=1}^m \sum_{j=1}^n \Exp{\left( \hat{A}(i,j) - A(i,j) \right)^2} &\text{by linearity of expectation}\nonumber\\ &= \Exp{\left( \hat{A}(i,j) - A(i,j) \right)^2} &\text{by exchangeability}\nonumber\\ &= \int_0^{\infty} \Prob{\left( \hat{A}(i,j) - A(i,j) \right)^2 > t } dt & \because \left(\hat{A}(i,j) - A(i,j) \right)^2 \geq 0\nonumber\\ &= \int_0^{\infty} \Prob{\left| \hat{A}(i,j) - A(i,j) \right| > \sqrt{t} } dt & \text{substitute }s = \sqrt{t}\nonumber\\ &= \int_0^{\infty} 2s\Prob{\left| \hat{A}(i,j) - A(i,j) \right| > s } ds \label{eqn:integration} \end{align} Now it remains to integrate the tail bounds obtained in the previous sections to conclude our main theorems. In general, we can derive the following formulae by integration by substitution: \begin{equation}\label{eqn:integral} \int_0^{\infty} s e^{-a s^2} ds =\int_0^{\infty} \frac{1}{2a}e^{-z} dz = \left. -\frac{1}{2a}e^{-z} \right|_0^{\infty} = \frac{1}{2a}. \end{equation} \begin{equation}\label{eqn:Gamma2} \int_0^{\infty} s e^{-as} ds = \int_0^{\infty} \frac{z}{a^2} e^{-z} dz = \frac{\Gamma(2)}{a^2} = \frac{1}{a^2}. \end{equation} These formulae will be used frequently, because many of our error bounds have such forms. Also, from the model assumption and the construction of the estimators, there exists an upper bound $M$ on the difference $\left| \hat{A}(i,j) - A(i,j) \right|$, independent of $m$ and $n$. \subsubsection{Proof of Theorem \ref{thm:MSE_noiseless}: Noiseless} \begin{proof}[Proof of Theorem \ref{thm:MSE_noiseless}] Integrate the tail bound from Theorem \ref{thm:tail_noiseless}; plug in Eq. \eqref{eqn:tail_noiseless} to Eq. \eqref{eqn:integration}.
\begin{align*} MSE\left( \breve{\varphi} \right) &= \int_0^{2M} 2s \Prob{\left|A(i, j) - \breve{A}(i,j)\right| > s} ds\\ &\leq \int_0^{\infty} 4s\exp \left(- \frac{mnp^2}{18L^2} s^2 \right) ds + \int_0^{\infty} 4s\exp\left(-\frac{2n_i}{9L^2} s^2\right) ds\\ &\qquad + \left[m \exp\left( -\frac{np}{8} \right) + n \exp\left( -\frac{mp}{8} \right) \right] \int_0^{2M} 2s ds\\ &= \frac{36L^2}{mnp^2} + \frac{9L^2}{n_i} + 4M^2 \left[m \exp\left( -\frac{np}{8} \right) + n \exp\left( -\frac{mp}{8} \right) \right]. \end{align*} We may condition on $\Erow$ and assume $n_i \geq \frac{np}{2}$ for all $i \in [m]$, because the probability of failing to fulfill $\Erow$ is already counted in the tail probability in Theorem \ref{thm:tail_noiseless}. Consequently, as long as $p = \omega\left(\frac{\log m}{n} \right)$ and $p = \omega\left(\frac{\log n}{m} \right)$, $MSE\left( \breve{\varphi} \right) \to 0$ as $m, n \to \infty$. \end{proof} \subsubsection{Proof of Theorem \ref{thm:MSE_noisy_known}: Noise Distribution is Known} \begin{proof}[Proof of Theorem \ref{thm:MSE_noisy_known}] In order to achieve an upper bound on the MSE for the kernel density estimator with known noise, $\tilde{\varphi}$, we integrate the tail probability bound from Theorem \ref{thm:tail_noisy_known}.
\begin{align} MSE\left( \tilde{\varphi} \right) &= \int_0^{2M} 2s \Prob{\left|A(i, j) - \tilde{A}(i,j)\right| > s} ds \nonumber\\ &\leq \int_0^{\infty} 4s\exp\left( -\frac{n s^2}{8L^2} \right) ds + \int_0^{\infty}2s\exp\left(-\frac{ns}{80L} \right) ds \nonumber\\ &\quad + \int_0^{\infty}4s\exp\left( \frac{- 2\pi^2 \left(4\gamma \right)^{\frac{2}{\beta}}\left( \frac{s}{2L} -C \left( \log \frac{np}{2} \right)^{-1/\beta} \right)^2 }{16B^2 K^2_{max}\left( \log n \right)^{\frac{2}{\beta}}}\left( \frac{np}{2} \right)^{1/6} \right) ds \nonumber\\ &\quad+ Const.\int_0^{2M} 2s ds, \label{eqn:intermediate_MSE_tilde} \end{align} where $C$ is the constant controlling the bias of $\tilde{F}^{(i)}$ in Lemma \ref{lem:mean_difference_tilde}, which is independent of $n$, and $Const. = m \exp\left( -\frac{np}{8} \right) + n \exp\left( -\frac{mp}{8} \right) + 4 \exp\left(-c (mp)^{\frac{1}{3}}\right) + n \exp\left( - \frac{1}{3}n^{\frac{1}{2}} \right)$ decays exponentially fast as $m, n \to \infty$, and $c = \min \left\{\frac{l^2}{16\left( D_{max} - D_{min} \right)^2}, \frac{l^2}{64\sigma^2} \right\}$. We will divide the region of integration for the third term, while letting $c_1 = \frac{ 2\pi^2 \left(4\gamma \right)^{\frac{2}{\beta}} }{16B^2 K^2_{max}\left( \log n \right)^{\frac{2}{\beta}}}\left( \frac{np}{2} \right)^{1/6}$, and $c_2 = C \left( \log \frac{np}{2} \right)^{-1/\beta}$. Bounding the exponential by $1$ on $[0, 2Lc_2]$ and substituting $u = s - 2Lc_2$ on $[2Lc_2, \infty)$, \begin{align*} &\int_0^{\infty}4s\exp\left( - c_1 \left( \frac{s}{2L} - c_2 \right)^2 \right)ds\\ &\qquad= \int_0^{2L c_2}4s\exp\left( - c_1 \left( \frac{s}{2L} - c_2 \right)^2 \right)ds + \int_{2L c_2}^{\infty} 4s\exp\left( - c_1 \left( \frac{s}{2L} - c_2 \right)^2 \right)ds\\ &\qquad\leq \int_0^{2L c_2} 4s ds + \int_0^{\infty} 4\left( u + 2Lc_2 \right) \exp\left( - \frac{c_1}{4L^2} u^2 \right) du\\ &\qquad= 8L^2 c_2^2 + \frac{8L^2}{c_1} + 8L^2 c_2 \sqrt{\frac{\pi}{c_1}}. \end{align*} Plugging this into Eq. \eqref{eqn:intermediate_MSE_tilde}, we can obtain the following upper bound \begin{align*} MSE\left( \tilde{\varphi} \right) &\leq \frac{16L^2}{n} + \frac{12800L^2}{n^2} + 8L^2 c_2^2 + \frac{8L^2}{c_1} + 8L^2 c_2 \sqrt{\frac{\pi}{c_1}} + Const. (2M)^2. \end{align*} We can observe that when $p = \omega(n^{-1})$, $c_1 \to \infty$ and $c_2 \to 0$ as $n \to \infty$, so all three terms involving $c_1$ and $c_2$ vanish. Also, if $p = \omega\left(\frac{\log m}{n} \right)$ and $p = \omega\left(\frac{\log n}{m} \right)$, $Const. \to 0$ as $m, n \to \infty$. \end{proof} \subsubsection{Proof of Theorem~\ref{thm:MSE_noisy_unknown}: Noise Distribution is Unknown} \DG{***TODO} \section{Proof of Lemma \lowercase{\ref{lem:noisy_known_cdf}} and Auxiliary Lemmas}\label{proof.lem.3} \subsection{Lemmas to Control the Bias and Concentration of $\tilde{F}^{(i)}$} We show that the estimated CDF $\tilde{F}^{(i)}$ is close to the true CDF $F^{(i)}$ by showing that both the bias $\left| \Exp{\tilde{F}^{(i)} (z)} - F^{(i)}(z) \right| $ and the variance of $\tilde{F}^{(i)} (z)$ are small. The following two lemmas assert these claims, based on consistency results for deconvolution (see Appendix \ref{appx:deconvolution} for detail). \begin{lemma}[Bias is small]\label{lem:mean_difference_tilde} For every $i \in [m]$, the expectation of the kernel smoothed ECDF $\tilde{F}^{(i)}$ defined as in Eq. \eqref{eqn:ECDF_known_noise} uniformly converges to the true CDF $F^{(i)}$, and the convergence rate is given as $\left( \log \left| \cB_i \right| \right)^{-1/\beta}$, i.e., there exists a constant $C_3 = C(l) > 0$ such that \[ \sup_{z \in \Reals}\left| \Exp{\tilde{F}^{(i)} (z)} - F^{(i)}(z) \right| \leq C_3 \left( \log \left| \cB_i \right| \right)^{-1/\beta}, \quad \forall i \in [m]. \] Here, $\beta$ is the smoothness parameter of the supersmooth noise.
\end{lemma} \begin{proof}[Proof of Lemma \ref{lem:mean_difference_tilde}] The expectation in the lemma statement is taken with respect to the randomness in the data, i.e., the realization of the samples, which play the role of pivot points for kernel density estimation. Hence, \begin{align} \bigg| \Exp{\tilde{F}^{(i)} (z)} - F^{(i)}(z) \bigg| &= \bigg| \Exp{\tilde{F}^{(i)} (z) - F^{(i)}(z)} \bigg| \nonumber\\ & \leq \Exp{\left( \tilde{F}^{(i)} (z) - F^{(i)}(z) \right)^2}^{1/2}, \label{eqn:mean_tilde} \end{align} since $\Exp{X}^2 \leq \Exp{X^2}$ for any random variable $X$. We will control the term on the right-hand side of Eq. \eqref{eqn:mean_tilde} by applying Theorem \ref{thm:Fan2}. For that purpose, we need to ensure that our density $f^{(i)}(z) = \frac{d}{dz}F^{(i)}(z)$ is in Fan's class for some $m, \alpha$, and $B$ (see Eq. \eqref{eqn:Fan_class} for the definition of Fan's class). Note that $F^{(i)}$ is the inverse function of a slice of the latent function with a fixed row feature, $g_{x=\frow{i}}$, in our model. We assume it admits a probability density $f^{(i)}$. It is easy to see that $\frac{1}{L} \leq f^{(i)} (z) \leq \frac{1}{l}$ for all $z \in supp~f^{(i)}$ (and $f^{(i)}(z) = 0$ outside the support) because the inverse of $F^{(i)}$ is assumed $(l, L)$ bi-Lipschitz in our model. This $f^{(i)}$ belongs to Fan's class \[ \cC_{m, \alpha, B} = \left\{ f(x): \left| f^{(m)}(x) - f^{(m)}\left( x + \delta \right)\right| \leq B \delta^{\alpha} \right\}, \] with the choice of $m = 0, \alpha = 0$, and $B = \frac{1}{l}$. \iffalse{ For simplicity of argument, we will assume $g$ is differentiable. By the inverse function theorem, \[ f^{(i)}(z) = \frac{d}{dz}F^{(i)}(z) = \frac{1}{\frac{\partial g}{\partial y}\left(\frow{i}, y\right)} \] where $y$ satisfies $z = g\left(\frow{i}, y\right)$.
We let $y_{\delta}, y_0$ respectively denote the preimages of $z+ \delta$ and $z$ of the function $g\left(\frow{i}, \cdot\right)$; then \begin{align*} \left| f(z) - f(z+\delta) \right| &= \frac{1}{\frac{\partial g}{\partial y}\left(\frow{i}, y_0\right)} - \frac{1}{\frac{\partial g}{\partial y}\left(\frow{i}, y_{\delta}\right)}\\ &=\frac{\frac{\partial g}{\partial y}\left(\frow{i}, y_{\delta}\right) - \frac{\partial g}{\partial y}\left(\frow{i}, y_0\right)}{\frac{\partial g}{\partial y}\left(\frow{i}, y_0\right) \frac{\partial g}{\partial y}\left(\frow{i}, y_{\delta}\right)}\\ &\leq \frac{L - l}{l^2}. \end{align*} from the bi-Lipschitz assumption on the latent function (see Eq. \eqref{eqn:biLipschitz}). }\fi Therefore, for all $i \in [m]$, the density corresponding to $F^{(i)}$ belongs to a Fan's class, i.e., $f^{(i)} \in \cC_{0,0, \frac{1}{l}}$. As a result, we can apply Theorem \ref{thm:Fan2} on Eq. \eqref{eqn:mean_tilde} to conclude that for any $i \in [m]$, \begin{align*} \sup_{z \in \Reals}\bigg| \Exp{\tilde{F}^{(i)} (z)} - F^{(i)}(z) \bigg| &\leq \sup_{z \in \Reals} \Exp{\left( \tilde{F}^{(i)} (z) - F^{(i)}(z) \right)^2}^{1/2}\\ &\leq \sup_{f \in \cC_{0,0, \frac{1}{l}}} \sup_{z \in \Reals} \Exp{\left( \tilde{F}_{\left|\cB_i\right|} (z) - F(z) \right)^2}^{1/2}\\ &= O\left( \left( \log \left| \cB_i \right| \right)^{-1/\beta} \right). \end{align*} $\tilde{F}_{\left|\cB_i\right|}$ denotes an estimate of $F$ with $\left|\cB_i\right|$ number of samples. Moreover, the constant $C_3$ hidden in the big O notation is dependent on the class $\cC_{0,0, \frac{1}{l}}$, hence, only on the model parameter $l$, because Fan's original result holds uniformly over the whole class $\cC_{0,0, \frac{1}{l}}$. \end{proof} \begin{lemma}[Variance is small]\label{lem:concentration_tilde} For each $i \in [m]$, the kernel smoothed ECDF $\tilde{F}^{(i)}$ defined as in Eq. 
\eqref{eqn:ECDF_known_noise} concentrates to its expectation, i.e., \begin{align*} \mathbb{P} \bigg( \left| \tilde{F}^{(i)}(z) - \Exp{\tilde{F}^{(i)}(z)} \right| \geq t \bigg) &\leq 2\exp\left( \frac{- \left| \cB_i \right|^{1/2} }{2C_4^2 \left( \log \left| \cB_i \right| \right)^{\frac{2}{\beta}} } t^2 \right). \end{align*} \end{lemma} Recall we defined the constant $C_4 = \frac{B K_{max} (D_2 - D_1)}{\pi \left(4\gamma \right)^{\frac{1}{\beta}} }$ where $\beta, \gamma >0$ are smoothness parameters for the noise, and $K_{max} = \max_{t \in [-1,1]} \left| \phi_K(t) \right|$. \begin{proof} [Proof of Lemma \ref{lem:concentration_tilde}] Recall that when conditioned on $\frow{i}$, the kernel smoothed ECDF $\tilde{F}^{(i)}$ evaluated at $z$ is a function of $\left| \cB_i \right|$ independent random variables $\{Z(i, j)\}_{j \in \cB_i}$, i.e., when $z$ is fixed, $\tilde{F}^{(i)}(z): \Reals^{\left| \cB_i \right|} \to \Reals$ such that \begin{align*} \tilde{F}^{(i)}(z)\left[ Z(i,j_1), \ldots, Z(i,j_{\left| \cB_i \right|}) \right] & = \int_{D_1}^{z \wedge D_2} \frac{1}{h \left| \cB_i \right|} \sum_{j\in \cB_i} L \left( \frac{w- Z(i,j)}{h} \right) dw, \end{align*} where $L(z) = \frac{1}{2\pi} \int e^{-itz} \frac{\phi_K(t)}{\phi_N\left(\frac{t}{h}\right)}dt$ and $h$ is the bandwidth parameter for kernel $K$. We will first show that $\tilde{F}^{(i)}(z)$ satisfies the bounded difference condition (see Eq. \eqref{eqn:bounded_difference}). Let $\zeta^{n} = (\zeta_1, \ldots, \zeta_n)$ and $\zeta^{n}_j = (\zeta_1, \ldots, \zeta_j', \ldots, \zeta_{n})$ be two $n$-tuples of real numbers, which differ only at the $j$-th position. 
Then \begin{align} &\tilde{F}^{(i)}(z)[\zeta^{n}] - \tilde{F}^{(i)}(z)[\zeta^n_j] \nonumber\\ &\qquad = \frac{1}{h n} \int_{D_1}^{z \wedge D_2} L \left( \frac{w- \zeta_j}{h} \right) - L \left( \frac{w- \zeta'_j}{h} \right) dw \nonumber\\ &\qquad =\frac{1}{h n} \int_{D_1}^{z \wedge D_2} \frac{1}{2\pi} \int \left(e^{-it \frac{w- \zeta_j}{h} } - e^{-it \frac{w- \zeta'_j}{h} }\right) \frac{\phi_K(t)}{\phi_N\left(\frac{t}{h}\right)}dt dw \nonumber\\ &\qquad \leq \frac{1}{2\pi h n} \int_{D_1}^{z \wedge D_2} \int \Bigg|e^{-it \frac{w- \zeta_j}{h} } - e^{-it \frac{w- \zeta'_j}{h} }\Bigg| \left|\frac{\phi_K(t)}{\phi_N\left(\frac{t}{h}\right)} \right| dt dw. \label{eqn:difference_intermediate} \end{align} Because $e^{-itz}$ is on the unit circle in the complex plane for any real numbers $t$ and $z$, we have \begin{align*} \left|e^{-it \frac{w- \zeta_j}{h} } - e^{-it \frac{w- \zeta'_j}{h} }\right| &\leq \bigg|e^{-it \frac{w- \zeta_j}{h} }\bigg| + \left| e^{-it \frac{w- \zeta'_j}{h} }\right| = 2. \end{align*} Since $\phi_K$ is assumed to have compact support (see Appendix \ref{sec:deconv_results}) within $[-1, 1]$, and the Fourier transform of an $L^1$ function is uniformly continuous, there exists $K_{max}=\max_{t \in [-1,1]} \left| \phi_K(t) \right| < \infty$ such that $\left| \phi_K(t) \right| \leq K_{max}, \forall t$. From the supersmoothness assumption on the noise (Eq. \eqref{eqn:model_supersmooth}), we have $\left| \phi_N\left(\frac{t}{h}\right) \right| \geq B^{-1} \exp\left( -\gamma \left|\frac{t}{h}\right|^{\beta}\right)$. We choose the bandwidth parameter $h = \left(4\gamma \right)^{\frac{1}{\beta}}\left( \log n \right)^{-\frac{1}{\beta}}$ following Fan (Theorems \ref{thm:Fan1}, \ref{thm:Fan2}). Plugging these expressions into Eq. \eqref{eqn:difference_intermediate} leads to \begin{align*} Eq.
\eqref{eqn:difference_intermediate} &\leq \frac{\left( \log n \right)^{\frac{1}{\beta}}}{2\pi \left( 4\gamma \right)^{\frac{1}{\beta}} n} \int_{D_1}^{z \wedge D_2} \int_{-1}^{1} 2 B K_{max} \exp\left( \frac{1}{4} \left|t \right|^{\beta} \log n\right) dt dw\\ &\leq \frac{ B K_{max}\left( \log n \right)^{\frac{1}{\beta}}}{\pi \left( 4\gamma \right)^{\frac{1}{\beta}} n} \int_{D_1}^{z \wedge D_2} \left( 1 - (-1) \right) \max_{t \in [-1,1]}\exp\left( \frac{1}{4} \left|t \right|^{\beta} \log n\right) dw\\ &= \frac{ B K_{max}\left( \log n \right)^{\frac{1}{\beta}}}{\pi \left( 4\gamma \right)^{\frac{1}{\beta}} n} \left( \left(z \wedge D_2 \right) - D_1\right) 2 n^{\frac{1}{4}}\\ &\leq \frac{ 2B K_{max} (D_2 - D_1) \left( \log n \right)^{\frac{1}{\beta}}}{\pi \left( 4\gamma \right)^{\frac{1}{\beta}} n^{\frac{3}{4}}}\\ &= \frac{ 2C_4 \left( \log n \right)^{\frac{1}{\beta}}}{n^{\frac{3}{4}}}, \qquad \text{for any }z \in [D_1, D_2]. \end{align*} Applying McDiarmid's inequality (Lemma \ref{lem:McDiarmid}), we can conclude that \begin{align*} \mathbb{P} \bigg( \left| \tilde{F}^{(i)}(z)[\zeta^n] - \bbE_{\zeta^n}{\tilde{F}^{(i)}(z)[\zeta^n]} \right| \geq t \bigg) &\leq 2\exp\left( \frac{- n^{1/2} }{2C_4^2 \left( \log n \right)^{\frac{2}{\beta}}} t^2 \right). \end{align*} This argument holds for every $i \in [m]$, after replacing the generic variable $n$ with the corresponding $\left| \cB_i \right|$. \end{proof} \begin{lemma}[Variance is uniformly small]\label{lem:sup_tilde} For each $i \in [m]$, the kernel smoothed ECDF $\tilde{F}^{(i)}$ defined as in Eq.
\eqref{eqn:ECDF_known_noise} uniformly concentrates to its expectation, i.e., for any nonnegative integer $N$ and for any $t \geq \frac{\Delta^{(i)}\left( D_{2} - D_{1} \right)}{N}$ (we define $\Delta^{(i)} := \frac{ B K_{max} }{\pi \left( 4\gamma \right)^{\frac{1}{\beta}} } \left| \cB_i \right|^{\frac{1}{4}} \left( \log \left| \cB_i \right| \right)^{\frac{1}{\beta}}$), \begin{align*} &\Prob{ \sup_{z \in [D_1, D_2]} \left| \tilde{F}^{(i)}(z) - \Exp{\tilde{F}^{(i)}(z)} \right| \geq t}\\ &\qquad\leq 2N \exp\left( \frac{- \left| \cB_i \right|^{1/2} }{2C_4^2 \left( \log \left| \cB_i \right| \right)^{\frac{2}{\beta}} } \left( t - \frac{\Delta^{(i)}\left( D_{2} - D_{1} \right)}{N} \right)^2 \right), \end{align*} where $\beta, \gamma >0$ are smoothness parameters for the noise, and $K_{max} = \max_{t \in [-1,1]} \left| \phi_K(t) \right|$. \end{lemma} \begin{proof} [Proof of Lemma \ref{lem:sup_tilde}] First, we discretize the interval $[D_{1}, D_{2}]$ by constructing a finite $\varepsilon$-net. For any $N \geq 1$, define the set \[ \cT_N := \left\{ D_{1} + \frac{2k - 1}{2N}\left( D_{2} - D_{1} \right),~\forall k \in [N] \right\}. \] Then for any $N > 0$, $\cT_N \subset [D_{1}, D_{2}]$ and it forms a $\frac{\left( D_{2} - D_{1} \right)}{2N}$-net with $\left| \cT_N \right| = N$, i.e., for any $z \in \left[D_{1}, D_{2}\right]$, there exists $k \in [N]$ such that $\left| z - \left( D_{1} + \frac{2k - 1}{2N}\left( D_{2} - D_{1} \right) \right) \right| \leq \frac{\left( D_{2} - D_{1} \right)}{2N}$.
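As a quick numerical sanity check of the midpoint net just constructed (the endpoints and net size below are illustrative placeholders, not quantities from the model), one can verify the covering radius directly:

```python
import numpy as np

# Illustrative endpoints D1 < D2 and net size N (hypothetical values).
D1, D2, N = -1.0, 3.0, 10

# Midpoint net T_N = { D1 + (2k - 1)/(2N) * (D2 - D1) : k = 1, ..., N }.
net = D1 + (2 * np.arange(1, N + 1) - 1) / (2 * N) * (D2 - D1)

# Every z in [D1, D2] should lie within (D2 - D1)/(2N) of some net point.
grid = np.linspace(D1, D2, 10001)
dist_to_net = np.abs(grid[:, None] - net[None, :]).min(axis=1)
radius = (D2 - D1) / (2 * N)
print(len(net), dist_to_net.max() <= radius + 1e-12)
```

The maximum distance is attained at the endpoints $D_1$ and $D_2$, where it equals the covering radius exactly; this is why midpoints, rather than the $N+1$ grid points, give an $N$-point net with radius $\frac{D_2 - D_1}{2N}$.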
We can observe that \begin{align*} \left\| \tilde{f}^{(i)} \right\|_{\infty} &= \bigg\| \frac{1}{h \left| \cB_i \right|} \sum_{j \in \cB_i} L \left( \frac{z - Z(i,j)}{h} \right) \bigg\|_{\infty} \\ &\leq \frac{1}{h} \left\| L \right\|_{\infty}\\ &= \frac{1}{2\pi h} \left\| \int_{-\infty}^{\infty} e^{-\img tz} \frac{\phi_K(t)}{\phi_N\left( \frac{t}{h} \right)} dt \right\|_{\infty}\\ &\leq \frac{1}{2\pi h} \int_{-\infty}^{\infty} \left| e^{-\img tz} \frac{\phi_K(t)}{\phi_N\left( \frac{t}{h} \right)} \right| dt\\ &\leq \frac{1}{2\pi h} \int_{-1}^1 \frac{K_{max}}{B^{-1} \exp \left(-\gamma \left| \frac{t}{h} \right|^{\beta} \right)} dt\\ &\leq \frac{B K_{max} \left( \log \left| \cB_i \right| \right)^{\frac{1}{\beta}}}{2\pi \left( 4\gamma \right)^{\frac{1}{\beta}} } \int_{-1}^1 \exp\left( \frac{1}{4} \left| t \right|^{\beta} \log \left| \cB_i \right| \right) dt \\%&\because h = \left( 4\gamma\right)^{\frac{1}{\beta}} \left( \log \left| \cB_i \right| \right)^{-\frac{1}{\beta}} \\ &\leq \frac{ B K_{max} \left( \log \left| \cB_i \right| \right)^{\frac{1}{\beta}}}{2\pi \left( 4\gamma \right)^{\frac{1}{\beta}} } \int_{-1}^1 \left| \cB_i \right|^{\frac{1}{4}} dt\\ &= \frac{ B K_{max} }{\pi \left( 4\gamma \right)^{\frac{1}{\beta}} } \left| \cB_i \right|^{\frac{1}{4}} \left( \log \left| \cB_i \right| \right)^{\frac{1}{\beta}} . \end{align*} Let $\Delta^{(i)}$ denote the upper bound in the last line. Since this upper bound is universal for all realization of samples, $\left\| \Exp{ \tilde{f}^{(i)} } \right\|_{\infty} \leq \Delta^{(i)}$, too. Then $\left\| \tilde{f}^{(i)} - \Exp{ \tilde{f}^{(i)} } \right\|_{\infty} \leq 2\Delta^{(i)}$ and it follows from the definition of $\tilde{F}^{(i)}$ (see Eq. \eqref{eqn:ECDF_known_noise}) that \[ \sup_{z \in [D_1, D_2]} \bigg| \tilde{F}^{(i)}(z) - \Exp{\tilde{F}^{(i)}(z)} \bigg| \leq \sup_{z \in \cT_N} \bigg| \tilde{F}^{(i)}(z) - \Exp{\tilde{F}^{(i)}(z)} \bigg| + \frac{\Delta^{(i)}\left( D_{2} - D_{1} \right)}{N}. 
\] Therefore, if $\left| \tilde{F}^{(i)}(z) - \Exp{\tilde{F}^{(i)}(z)} \right| \leq \varepsilon$ for all $z \in \cT_N$, the supremum over the whole domain is bounded above up to an additional term, that is to say, $\sup_{z \in [D_{1}, D_{2}]} \left| \tilde{F}^{(i)}(z) - \Exp{\tilde{F}^{(i)}(z)} \right| \leq \varepsilon + \frac{\Delta^{(i)}\left( D_{2} - D_{1} \right)}{N}$. An application of the union bound on the contraposition of the previous statement yields \begin{align*} &\Prob{ \sup_{z \in [D_{1}, D_{2}]} \left| \tilde{F}^{(i)}(z) - \Exp{\tilde{F}^{(i)}(z)} \right| \geq t}\\ &\qquad\leq \Prob{ \sup_{z \in \cT_N} \left| \tilde{F}^{(i)}(z) - \Exp{\tilde{F}^{(i)}(z)} \right| \geq t - \frac{\Delta^{(i)}\left( D_{2} - D_{1} \right)}{N} }\\ &\qquad\leq \sum_{z \in \cT_N} \Prob{ \left| \tilde{F}^{(i)}(z) - \Exp{\tilde{F}^{(i)}(z)} \right| \geq t - \frac{\Delta^{(i)}\left( D_{2} - D_{1} \right)}{N} }\\ &\qquad\leq 2N \exp\left( \frac{- \left| \cB_i \right|^{1/2} }{2C_4^2 \left( \log \left| \cB_i \right| \right)^{\frac{2}{\beta}} } \left( t - \frac{\Delta^{(i)}\left( D_{2} - D_{1} \right)}{N} \right)^2 \right). \end{align*} \end{proof} \subsection{Proof of Lemma \ref{lem:noisy_known_cdf}} \begin{proof}[Proof of Lemma \ref{lem:noisy_known_cdf}] By Lemma \ref{lem:mean_difference_tilde}, we have a universal upper bound: for any $i \in [m]$, $\sup_{z \in \Reals} \left| \Exp{\tilde{F}^{(i)} (z)} - F^{(i)}(z) \right|= O\left( \left( \log \left| \cB_i \right| \right)^{-1/\beta} \right)$. In fact, this bound is uniform over all possible realizations of $\frow{i} \in [0,1]$. Therefore, we can explicitly introduce a constant $C_3 = C(l)$, which does not depend on $i \in [m]$, to write \begin{equation}\label{eqn:bias_C} \sup_i \sup_{z \in \Reals} \bigg| \Exp{\tilde{F}^{(i)} (z)} - F^{(i)}(z) \bigg| \leq C_3 \left( \log \left| \cB_i \right| \right)^{-1/\beta}.
\end{equation} The concentration rate obtained in Lemma \ref{lem:sup_tilde} decays faster than the bias rate $\left( \log \left| \cB_i \right| \right)^{-1/\beta}$ as long as $N$ is a subexponential function of $|\cB_i|$: \begin{align*} &\Prob{\sup_{z \in [D_1, D_2]} \left| \tilde{F}^{(i)} (z) - \Exp{\tilde{F}^{(i)} (z)} \right| \geq t}\\ &\qquad \leq 2N \exp\left( \frac{- \left| \cB_i \right|^{1/2} }{2C_4^2 \left( \log \left| \cB_i \right| \right)^{\frac{2}{\beta}} } \left( t - \frac{\Delta^{(i)}\left( D_{2} - D_{1} \right)}{N} \right)^2 \right). \end{align*} Therefore, it is the bias which dominates the discrepancy between the kernel smoothed ECDF $\tilde{F}^{(i)}$ and the true CDF $F^{(i)} = g^{-1}_{x=\frow{i}}$. Now we will combine these two inequalities by applying the union bound. For any $\delta_1, \delta_2 > 0$, suppose that both $\left| F^{(i)} (z) - \Exp{\tilde{F}^{(i)} (z)} \right| \leq \delta_1$ and $\left| \tilde{F}^{(i)} (z) - \Exp{\tilde{F}^{(i)} (z)} \right| \leq \delta_2$ are satisfied. Then $\left| \tilde{F}^{(i)} (z) - F^{(i)}(z) \right| \leq \delta_1 + \delta_2$ follows by the triangle inequality. We can obtain the desired concentration inequality by applying the union bound on the contraposition of this statement with the particular choice of $\delta_1 = C_3 \left( \log \left| \cB_i \right| \right)^{-1/\beta}$ and $\delta_2 = t - \delta_1$. To be more specific, for any nonnegative integer $N$ and for any $t > \frac{\Delta^{(i)}\left( D_{2} - D_{1} \right)}{N} + C_3 \left( \log \left| \cB_i \right| \right)^{-1/\beta}$ (where $C_3$ is the constant as in Eq.
\eqref{eqn:bias_C}), \begin{align*} &\Prob{\sup_{z \in [D_1, D_2]} \left| \tilde{F}^{(i)} (z) - F^{(i)}(z) \right| > t}\\ &\qquad \leq \Prob{\sup_{z \in [D_1, D_2]}\left| F^{(i)} (z) - \Exp{\tilde{F}^{(i)} (z)} \right| > C_3 \left( \log \left| \cB_i \right| \right)^{-1/\beta}}\\ &\qquad \quad+ \Prob{\sup_{z \in [D_1, D_2]}\left| \tilde{F}^{(i)} (z) - \Exp{\tilde{F}^{(i)} (z)} \right| > t - C_3 \left( \log \left| \cB_i \right| \right)^{-1/\beta}}\\ &\qquad\leq 2N\exp\left( \frac{- \left| \cB_i \right|^{1/2} }{2C_4^2 \left( \log \left| \cB_i \right| \right)^{\frac{2}{\beta}}} \left( t - \frac{\Delta^{(i)}\left( D_{2} - D_{1} \right)}{N} - C_3 \left( \log \left| \cB_i \right| \right)^{-1/\beta} \right)^2 \right). \end{align*} Finally, letting $N = \left| \cB_i \right|^{\frac{1}{4}} \left( \log \left| \cB_i \right| \right)^{\frac{2}{\beta}}$ leads to $ \frac{\Delta^{(i)}\left( D_{2} - D_{1} \right)}{N} = C_4 \left( \log \left| \cB_i \right| \right)^{-\frac{1}{\beta}}.$ \end{proof} \begin{proof}[Proof of Corollary \ref{coro:noisy_CDF_uniform}] Conditioned on event $\Erow$, it holds for all $i \in [m]$ that $\left| \cB_i \right| \geq \frac{np}{2}$. Similarly, $\left| \cB_i \right| \leq 2np$ for all $i \in [m]$, when conditioned on event $\Erowp$. Therefore, for any $i \in [m]$, \begin{align*} &\Prob{\sup_{z \in [D_1, D_2]} \left. \left| \tilde{F}^{(i)} (z) - F^{(i)}(z) \right| > t \right| \Erow, \Erowp}\\ & \qquad \leq c_{n,p} \exp\left( \frac{- \left( \frac{np}{2} \right)^{1/2} }{2C_4^2 \left( \log \left( 2np \right) \right)^{\frac{2}{\beta}}} \left( t -C \left( \log \frac{np}{2} \right)^{-1/\beta} \right)^2 \right), \end{align*} where $c_{n,p} = 2(2np)^{\frac{1}{4}} \left( \log \left(2np\right) \right)^{\frac{2}{\beta}}$.
\end{proof} \subsection{Proof of Lemma \ref{lem:noiseless_CDF}} \begin{proof}[Proof of Lemma \ref{lem:noiseless_CDF}] A direct application of the DKW inequality (see Lemma \ref{lem:DKW}) leads to the following concentration inequality: \[ \Prob{\sup_{z \in \Reals} \left| \breve{F}^{(i)}(z) - F^{(i)}(z) \right| \geq t } \leq 2 e^{-2 n_i t^2}, \] where $n_i = \sum_{j =1}^n R(i,j) = | \cB_i|$. \end{proof} \section{Some Known Results from Deconvolution Literature}\label{appx:deconvolution} In this section, we introduce some known results for estimating the unknown density $f_X$ of a random variable $X$ by deconvolution techniques. Suppose that $Z = X+N$ is a measurement of $X$ with additive noise $N$ and we have $n$ i.i.d. observations $Z_1, \ldots, Z_n$. \citet{Fan1991} reported that we can achieve an asymptotically consistent density estimate when the noise density is known and $f_X$ satisfies certain smoothness conditions. Later, \citet{Delaigle2008} showed that consistent estimation is possible even when the noise distribution is unknown, with the aid of repeated measurements. Their estimators and proof techniques rely on the kernel smoothing method. Here we present only an abbreviated version of the concepts, the estimator, and the results, restricted to what we need. We refer interested readers to the relevant references for more detail; for example, \citet{Carroll1988, Fan1991, Delaigle2008}. \subsection{Deconvolution Kernel Density Estimator} We provide a summary of the deconvolution kernel density estimator, which we already discussed in detail to provide intuition for our algorithm in Section \ref{sec:alg_noisy}. For a more detailed explanation of how and why it works, see that discussion. Our goal is to recover the distribution of the random variable $X$, but we observe samples of $Z = X + N$ instead of $X$. We assume we know the distribution of $N$.
Due to independence, we know that $\phi_Z(t) = \phi_X(t) \phi_N(t)$ for all $t \in \Reals$, where $\phi_Z, \phi_X, \phi_N$ denote the characteristic functions of the random variables $Z, X$ and $N$ respectively. Let $\cF$ denote the Fourier transformation operator and $\cF^{-1}$ denote the inverse Fourier transformation operator. By applying these operators, we obtain \begin{equation}\label{eqn:kernel.est} \hat{f}_X(x) = \cF^{-1} \left\{ \frac{ \cF\{ \hat{f}_Z(x) \} (t)}{\phi_N(t)} \right\} = \frac{1}{hn} \sum_{i=1}^n L\Big(\frac{x - Z_i}{h}\Big), \end{equation} where \begin{align*} L & \equiv \cF^{-1} \left\{ \frac{ \phi_K(\, \cdot \,) }{\phi_N(\, \cdot \, h^{-1})} \right\}, \quad \text{i.e.,} \quad L(z) = \frac{1}{2\pi} \int \exp(- \img\, t z ) \frac{\phi_K(t)}{\phi_N\left(\frac{t}{h}\right)} dt, ~~z \in \Reals. \end{align*} A more detailed description of the derivation can be found in Section \ref{sec:alg_noisy}. Indeed, this is known as the deconvolution kernel density estimator in the literature. We shall adopt the prior results of \citet{Fan1991} on its consistency to establish our results. We refer interested readers to \citet{WandJones94} for more details and properties of kernel density estimation. \subsection{Consistency Results for Deconvolution}\label{sec:deconv_results} \subsubsection{Assumptions} \paragraph{Assumptions on the signal density} For constants $m,B \geq0$, and $\alpha \in [0,1)$, Fan defined a class of densities as \begin{equation}\label{eqn:Fan_class} \cC_{m, \alpha, B} = \{ f_X(x): \left| f_X^{(m)}(x) - f_X^{(m)}(x + \delta) \right| \leq B \delta^{\alpha} \}. \end{equation} Intuitively, this implies that $f_X$ is slowly varying, i.e., the density is sufficiently ``smooth'' so that there is a hope to reconstruct it from a finite number of samples by interpolating the empirical density.
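To make Eq. \eqref{eqn:kernel.est} concrete, here is a minimal numerical sketch. It assumes Gaussian noise (so $\phi_N$ is known exactly and is super-smooth with $\beta = 2$, $\gamma = \sigma^2/2$), the compactly supported choice $\phi_K(t) = (1-t^2)^3$ on $[-1,1]$, and Fan's bandwidth $h = (4\gamma)^{1/\beta}(\log n)^{-1/\beta}$; all simulation parameters are illustrative, not values prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: X ~ Uniform[0, 1], noise ~ N(0, sigma^2), observe Z = X + N.
n, sigma = 2000, 0.1
Z = rng.uniform(0.0, 1.0, n) + rng.normal(0.0, sigma, n)

beta, gamma = 2.0, sigma**2 / 2                            # Gaussian noise is super-smooth
h = (4 * gamma) ** (1 / beta) * np.log(n) ** (-1 / beta)   # Fan's bandwidth choice

t = np.linspace(-1.0, 1.0, 1001)                 # phi_K is supported on [-1, 1]
dt = t[1] - t[0]
phi_K = (1 - t**2) ** 3                          # a compactly supported kernel transform
phi_N = np.exp(-gamma * np.abs(t / h) ** beta)   # exact Gaussian characteristic function

def L_deconv(z):
    """L(z) = (1/2pi) * int e^{-itz} phi_K(t)/phi_N(t/h) dt (real, even integrand)."""
    z = np.atleast_1d(z)[:, None]
    integrand = np.cos(t[None, :] * z) * (phi_K / phi_N)[None, :]
    return integrand.sum(axis=1) * dt / (2 * np.pi)

x = np.linspace(-0.5, 1.5, 101)
f_hat = np.array([L_deconv((xi - Z) / h).mean() / h for xi in x])
```

With this bandwidth, $|\phi_N(t/h)| \geq n^{-1/4}$ on the support of $\phi_K$, so the deconvolution integrand stays bounded and the estimate should place most of its mass near the support of $X$.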
\paragraph{Assumptions on the noise}\label{appx:noise} \citet{Fan1991} showed that the difficulty of deconvolution depends on the smoothness of the noise distribution and that of the density to be estimated. Here, the term `smoothness' means the order of the characteristic function as $t \to \infty$. In short, the deconvolution becomes more difficult as the data are corrupted by smoother additive noise. Following \citet{Fan1991}, we call the distribution of a random variable $N$ ordinary-smooth of order $\beta$ if its characteristic function $\phi_N$ satisfies \begin{equation}\label{eqn:ord_smooth} B^{-1} \left( 1 + |t| \right)^{-\beta} \leq \left| \phi_N (t) \right| \leq B \left( 1 + |t| \right)^{-\beta}, \end{equation} for some positive constants $\beta> 0$ and $B > 0$, and for all real $t$. This class of densities with polynomially decaying tails in the Fourier domain is called ordinary-smooth. Examples of ordinary-smooth error distributions include symmetric Gamma and double exponential distributions. There is another interesting class of error distributions, whose tails decay much faster in the Fourier domain. We will call the distribution of a random variable $N$ super-smooth of order $\beta$ if its characteristic function $\phi_N$ satisfies \begin{equation}\label{eqn:supersmooth} B^{-1} \exp \left( -\gamma |t|^{\beta} \right) \leq \left| \phi_N (t) \right| \leq B\exp \left( -\gamma |t|^{\beta} \right), \end{equation} for some positive constants $\beta, \gamma>0$ and $B > 1$, and for all real $t$. Normal, normal-mixture, and Cauchy distributions belong to the super-smooth class. \paragraph{Assumptions on the Kernel} We summarize the required properties of the kernel used in the density estimator, together with the smoothness condition on the noise, before stating the results of \citet{Fan1991}.
\begin{enumerate} \item[(K1)] $\phi_K(t)$ is a symmetric function, which has bounded integrable derivatives up to order $m+2$ on $\Reals$; \item[(K2)] $\phi_K(t) = 1 + O\left(|t|^m\right)$ as $t \to 0$; \item[(K3)] $\phi_K(t) = 0$ for $|t| \geq 1$; \item[(N1)] $\phi_N(t)$ is supersmooth of order $\beta$; see Eq. \eqref{eqn:supersmooth}. \end{enumerate} Note that $\phi_N(t) \neq 0, \forall t$ is subsumed in (N1). \subsubsection{Some Deconvolution Results} The following theorem provides the consistency and the convergence rate of the kernel density estimator with known noise density (Eq. \eqref{eqn:kernel.est}) when the error distribution is supersmooth. We use the subscript $n$ in $\hat{f}_n$ to emphasize that the estimator of $f_X$ is based on $n$ samples. \begin{theorem}[\citet{Fan1991}, Theorem 1]\label{thm:Fan1} Let the kernel satisfy (K1), (K2), (K3), and the error distribution satisfy (N1). With the choice of kernel bandwidth parameter $h_n = \left(4\gamma\right)^{\frac{1}{\beta}}\left( \log n \right)^{-\frac{1}{\beta}}$, we have \[ \sup_{f \in \cC_{m,\alpha,B}}\sup_{x \in \Reals} \Exp{\left( \hat{f}_n (x) - f(x)\right)^2} = O \left( \left( \log n \right)^{-2(m+\alpha)/\beta} \right). \] \end{theorem} There is another result (which is actually a corollary of the above theorem) in the same paper, which serves our purpose better. With $\hat{f}_n$, it is possible to define an estimator of the CDF, $F$, of the random variable $X$ by integration: \begin{equation}\label{eqn:kernel_CDF} \hat{F}_n(x) = \int_{-M_n}^{x} \hat{f}_n(z) dz. \end{equation} Here $M_n$ is a sequence of constants which tends to $\infty$ as $n \to \infty$, so that the lower limit $-M_n$ tends to $-\infty$. The following theorem provides a convergence rate which is better than what na\"ively integrating the bound from Theorem \ref{thm:Fan1} would give. \begin{theorem}[\citet{Fan1991}, Theorem 3]\label{thm:Fan2} Assume the same conditions as in Theorem \ref{thm:Fan1}, except that $m$ is replaced with $m+1$ in (K1) and (K2).
Then by choosing the same bandwidth parameter $h_n = \left(4\gamma\right)^{\frac{1}{\beta}}\left( \log n \right)^{-\frac{1}{\beta}}$ and $M_n = n^{\frac{1}{6}}$, we have \[ \sup_{f \in \cC'_{m,\alpha,B}}\sup_{x \in \Reals} \Exp{\left( \hat{F}_n (x) - F(x)\right)^2} = O \left( \left( \log n \right)^{-2(m+\alpha+1)/\beta} \right), \] where $\cC'_{m,\alpha,B}= \left\{ f \in \cC_{m,\alpha,B}: F(-n) \leq D \left( \log n \right)^{-(m+2)/\beta} \right\}$. \end{theorem} In the original paper, $M_n = n^{\frac{1}{3}}$ is used. However, the theorem still remains valid with the modification to $M_n = n^{\frac{1}{6}}$ (see \citet{Fan1991}, the proof of Theorem 3). \section{Proof of Lemma \lowercase{\ref{lem:quantile_noiseless}} }\label{proof.lem.1} \begin{proof}[Proof of Lemma \ref{lem:quantile_noiseless}] Recall from Eq. \eqref{eqn:quantile_rowwise} that when conditioned on $\frow{i}$, the quantile of $j$ estimated from row $i$ is a function of $|\cB_i| = \sum_{j' = 1}^n M(i,j')$ many independent random variables, $H\big( Z(i, j) - Z(i, j') \big)$: \[\hat{q}_i(j) = \frac{\sum_{j'=1}^n M(i,j') H\big( Z(i, j) - Z(i, j') \big) }{\sum_{j' = 1}^n M(i,j')}.\] Since $H\big( Z(i, j_1) - Z(i, j_2) \big)$ takes values in $\{ 0, \frac{1}{2}, 1\}$, it satisfies the bounded difference condition. To be more specific, let us consider a perturbation on the column feature associated with one index. For any $j_0 \in [n]$, if $j_0 \in \cB_i$ (i.e., if $M(i, j_0) = 1$), then \[ \left| \left.\hat{q}_i(j)\right|_{\fcol{j_0}=a} - \left.\hat{q}_i(j)\right|_{\fcol{j_0}=b} \right| \leq \frac{1}{\left| \cB_i \right|}, \] for any value $a, b \in [0,1]$, while if $j_0 \not\in \cB_i$ (i.e., if $M(i, j_0) = 0$), then obviously \[ \left| \left.\hat{q}_i(j)\right|_{\fcol{j_0} = a} - \left.\hat{q}_i(j)\right|_{\fcol{j_0} = b} \right| = 0.
\] Since $\Exp {\hat{q}_i(j)} = \fcol{j}$, we can achieve the following probabilistic tail bound by an application of McDiarmid's inequality \[ \Prob{ \left| \hat{q}_i(j) - \fcol{j} \right| \geq t } \leq 2 \exp\left( -2 |\cB_i| t^2 \right). \] \end{proof} \iffalse{ Therefore, we can achieve the following universal upper bound, independent of $|\cB_i|$ and $|\cB^j|$: \begin{align*} &\Prob{ \exists j \in [n]: \left| \hat{q}(j) - \theta_{col}^{(j)} \right| \geq t }\\ &\quad= \Prob{ \bigcup_{j=1}^n \left\{ \left| \hat{q}(j) - \theta_{col}^{(j)} \right| \geq t \right\} }\\ &\quad= \Prob{ \left. \bigcup_{j=1}^n \left\{ \left| \hat{q}(j) - \theta_{col}^{(j)} \right| \geq t \right\} \right| \Erow \cap \Ecol}\Prob{\Erow \cap \Ecol} \\ &\qquad+ \Prob{ \left. \bigcup_{j=1}^n \left\{ \left| \hat{q}(j) - \theta_{col}^{(j)} \right| \geq t \right\} \right| \left(\Erow \cap \Ecol\right)^c}\Prob{{\Erow}^c \cup {\Ecol}^c}\\ &\quad\leq \Prob{ \left. \bigcup_{j=1}^n \left\{ \left| \hat{q}(j) - \theta_{col}^{(j)} \right| \geq t \right\} \right| \Erow \cap \Ecol} + \Prob{{\Erow}^c \cup {\Ecol}^c}\\ &\quad\leq \sum_{j=1}^n \Prob{ \left. \left| \hat{q}(j) - \theta_{col}^{(j)} \right| \geq t \right| \Erow \cap \Ecol} + \Prob{{\Erow}^c} + \Prob{{\Ecol}^c}\\ &\quad\leq 2n\exp \left(- \frac{mnp^2}{2} t^2 \right) + m \exp\left( -\frac{np}{8} \right) + n \exp\left( -\frac{mp}{8} \right). \end{align*} \end{proof} }\fi \section{Known Facts about Distribution}\label{appx:distribution} \subsection{Basic Definitions} In this section, we briefly restate some basic facts and functions related to a random variable. We let $(\Omega, \cF, P)$ denote our probability space. \begin{definition}[Random variable] A random variable $X: \Omega \to E$ is a measurable function from a set of possible outcomes $\Omega$ to a measurable space $E$. When $E= \Reals$, we call $X$ a real-valued random variable. 
\end{definition} For a real-valued random variable $X$, we can define its distribution function, whose evaluation at $x$ is the probability that $X$ will take a value less than or equal to $x$. \begin{definition}[Cumulative distribution function (CDF)]\label{defn:CDF} The cumulative distribution function of a real-valued random variable $X$ is defined as a function $F_X: \Reals \to [0,1]$ such that \[ F_X(x) = \Prob{X \leq x}. \] \end{definition} Every cumulative distribution function $F$ is non-decreasing, right-continuous, $\lim_{x \to -\infty} F(x) = 0$, and $\lim_{x \to \infty} F(x) = 1$. Conversely, every function with these four properties is a CDF, i.e., a random variable can be defined so that the function is the CDF of that random variable. We can define a pseudo-inverse of the distribution function, which returns a threshold value $x$ below which random draws from the given CDF would fall with given input probability $p$. \begin{definition}[Quantile function]\label{defn:Quantile} Given a distribution function $F: \Reals \to [0,1]$, the associated quantile function $Q:(0,1) \to \Reals$ is defined as \[ Q(p) = \inf \left\{ x \in \Reals: p \leq F(x) \right\}. \] If the function $F$ is continuous and strictly monotone increasing, then the infimum can be replaced by the minimum and $Q = F^{-1}$. \end{definition} When $F$ is absolutely continuous, then there exists a Lebesgue-integrable function $f(x)$ such that \[ F(b) - F(a) = \Prob{a < X \leq b} = \int_a^b f(x) dx, \] for all real numbers $a$ and $b$. The function $f$ is the (Radon-Nikodym) derivative of $F$, and it is called the probability density function of distribution of $X$. Note that the CDF can be expressed as the expectation of an indicator function, $F_X(x) = \Exp{\Ind{X \leq x}}$. There is an alternative way to describe a random variable. 
\begin{definition}[Characteristic function]\label{defn:ch_ftn} The characteristic function $\phi_X: \Reals \to \Cx$ for a real-valued random variable is defined as the expected value of $e^{itX}$, where $i$ is the imaginary unit, and $t \in \Reals$ is the argument of the characteristic function: \begin{align*} \phi_X(t) &= \Exp{e^{itX}}\\ & = \int_{\Reals} e^{itx} dF_X(x)\\ & = \int_{\Reals} e^{itx} f_X(x) dx\\ & = \int_0^1 e^{itQ_X(p)} dp. \end{align*} \end{definition} If a random variable $X$ has a probability density function $f_X$, then the characteristic function is the Fourier transform with sign reversal in the complex exponential (note that the constant differs from the usual convention for the Fourier transform). \subsection{Empirical CDF and Empirical Characteristic Function} Let $X_1, \ldots, X_n$ ($n$ a natural number) be real-valued independent and identically distributed random variables with common cumulative distribution function $F$. We let $F_n$ denote the empirical distribution function associated with $\{X_1, \ldots, X_n\}$, which is defined as \[ F_n(x) = \frac{1}{n} \sum_{i=1}^n \Ind{X_i \leq x}, \quad \forall x \in \Reals. \] $F_n(x)$ is the fraction of the random variables among $\{X_1, \dots, X_n\}$ that take a value less than or equal to $x$. It is known that the empirical distribution function converges to the distribution function from which the samples are drawn. The following concentration result, known as the Dvoretzky-Kiefer-Wolfowitz (DKW) inequality, quantifies the rate of convergence of $F_n$ to $F$ with respect to the uniform norm as $n$ tends to infinity. It is named after Aryeh Dvoretzky, Jack Kiefer, and Jacob Wolfowitz, who proved the inequality in 1956 with an unspecified multiplicative constant $C$. Later in 1990, Pascal Massart proved the inequality with the sharp constant $C=2$. This result strengthens the Glivenko-Cantelli theorem.
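This convergence is easy to observe numerically. The following sketch (uniform samples on $[0,1]$, so $F(x) = x$; the sample sizes and the $95\%$ confidence level are arbitrary choices for illustration) compares the observed sup-distance with the threshold implied by the $C=2$ bound:

```python
import numpy as np

rng = np.random.default_rng(0)

def ecdf_sup_distance(samples):
    """Sup-norm distance between the empirical CDF of samples drawn
    from U[0, 1] and the true CDF F(x) = x."""
    x = np.sort(samples)
    n = len(x)
    i = np.arange(1, n + 1)
    # The supremum is attained at a jump point of F_n, so it suffices
    # to compare F_n and F at the order statistics (from both sides).
    return max((i / n - x).max(), (x - (i - 1) / n).max())

for n in [100, 1000, 10000]:
    d = ecdf_sup_distance(rng.uniform(size=n))
    # With C = 2: P(sup > eps) <= 2 exp(-2 n eps^2), so the distance
    # exceeds eps = sqrt(log(2/0.05) / (2n)) with probability at most 5%.
    eps = np.sqrt(np.log(2 / 0.05) / (2 * n))
    print(n, round(d, 4), round(eps, 4))
```

The observed distance shrinks at the $O(n^{-1/2})$ rate predicted by the inequality.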
\begin{lemma}[Dvoretzky-Kiefer-Wolfowitz]\label{lem:DKW} Given a natural number $n$, let $X_1, \ldots, X_n$ be real-valued independent and identically distributed random variables with common cumulative distribution function $F$. Then for every $\eps > 0$, \[ \Prob{\sup_{x \in \Reals} \left| F_n(x) - F(x) \right| > \eps} \leq 2 e^{-2n\eps^2}. \] \end{lemma} \section{Sub-Gaussian Random Variables and the Chernoff Bound} First of all, we recall Markov's inequality. \begin{theorem}[Markov's inequality] Given a nonnegative random variable $X$, for all $t > 0$, \[ \Prob{X \geq t} \leq \frac{\Exp{X}}{t}. \] \end{theorem} \begin{proof} For all $t > 0$, $t \Ind{X \geq t} \leq X \Ind{ X \geq t} \leq X$. Taking expectations, $t \Prob{X \geq t} \leq \Exp{X}$, and hence, $\Prob{X \geq t} \leq \frac{\Exp{X}}{t}$. \end{proof} Now let $X$ be a real-valued random variable. Applying Markov's inequality to an exponential function, it follows that for $\lambda > 0$, \[ \Prob{X \geq t } = \Prob{e^{\lambda X} \geq e^{\lambda t}} \leq \frac{\Exp{e^{\lambda X}}}{e^{\lambda t}}. \] Since this inequality holds for all values of $\lambda > 0$, one may optimize over $\lambda$ to obtain the tightest tail bound. Next, we define a class of random variables whose tail behavior is easy to control. \begin{definition}[Sub-Gaussian random variable] A random variable $X$ with mean $\mu = \Exp{X}$ is called sub-Gaussian if there is a positive constant $\sigma$ such that \[ \Exp{e^{\lambda(X-\mu)}} \leq e^{\frac{\lambda^2 \sigma^2}{2}}, \quad \forall \lambda \in \Reals. \] We will call $\sigma$ the sub-Gaussian parameter of $X$. \end{definition} An application of the Chernoff bound leads to \[ \Prob{X-\mu \geq t} \leq \inf_{\lambda} \frac{\Exp{e^{\lambda(X-\mu)}}}{e^{\lambda t}}, \] where $\lambda$ is optimized over the interval $[0, \lambda^*]$ in which the moment generating function of $X$ exists.
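For a sub-Gaussian $X$, this optimization can be carried out in closed form (the moment generating function exists for all $\lambda$, by the definition above). Plugging the sub-Gaussian bound into the Chernoff bound gives
\[
\Prob{X - \mu \geq t} \leq \inf_{\lambda > 0} e^{-\lambda t}\, \Exp{e^{\lambda (X - \mu)}} \leq \inf_{\lambda > 0} \exp\left( \frac{\lambda^2 \sigma^2}{2} - \lambda t \right) = \exp\left( -\frac{t^2}{2\sigma^2} \right),
\]
where the infimum is attained at $\lambda = t / \sigma^2$.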
It is possible to achieve the same upper bound for $\Prob{ X-\mu \leq -t } = \Prob{ -(X-\mu) \geq t }$. We can conclude that a sub-Gaussian random variable satisfies, for all $t \in \Reals$, \[ \Prob{|X - \mu| \geq t} \leq 2 e^{-\frac{t^2}{2\sigma^2}}. \] The class of sub-Gaussian random variables subsumes Gaussian random variables as well as all bounded random variables. \paragraph{Hoeffding-type Inequalities} Now, we present several forms of concentration inequalities for sums of independent random variables. Essentially they are all Chernoff bounds, tailored to specific assumptions on the random variables. We present three lemmas in increasing order of generality, starting from the bound for a sum of independent Bernoulli trials. \begin{lemma}[Binomial Chernoff bound]\label{lem:Chernoff} Let $X = \sum_{i=1}^n X_i$, where $X_i = 1$ with probability $p_i$, and $X_i = 0$ with probability $1 - p_i$, and the $X_i$'s are independent. Let $\mu = \Exp{X} = \sum_{i=1}^n p_i$. Then \begin{enumerate} \item Upper tail: $ \Prob{X \geq (1+\delta) \mu} \leq \exp\left(-\frac{\delta^2}{2+\delta}\mu \right)$ for all $\delta > 0$. \item Lower tail: $ \Prob{X \leq (1-\delta) \mu} \leq \exp\left(-\frac{\delta^2}{2}\mu \right)$ for all $0 < \delta < 1$. \end{enumerate} \end{lemma} Hoeffding derived a more general result for bounded random variables, which is known as (Azuma-)Hoeffding's inequality. \begin{lemma}[Hoeffding's inequality for bounded random variables]\label{lem:Hoeffding_bounded} Let $X_1, \ldots, X_n$ be $n$ independent random variables such that almost surely $X_i \in [a_i, b_i], \forall i$. Let $X = \sum_{i=1}^n X_i$, then for any $t > 0$, \[ \Prob{X - \Exp{X} \geq t} \leq \exp\left( - \frac{2t^2}{\sum_{i=1}^n (b_i - a_i)^2} \right), \] and \[ \Prob{X - \Exp{X} \leq -t} \leq \exp\left( - \frac{2t^2}{\sum_{i=1}^n (b_i - a_i)^2} \right).
\] \end{lemma} Although Hoeffding's inequality is often presented only for the special case of bounded random variables, the same idea applies to sub-Gaussian random variables. \begin{lemma}[Hoeffding's inequality for sub-Gaussian random variables]\label{lem:Hoeffding_subG} Let $X_1, \ldots, X_n$ be $n$ independent random variables such that $X_i$ has mean $\mu_i$ and sub-Gaussian parameter $\sigma_i$. Let $X = \sum_{i=1}^n X_i$, then for any $t > 0$, \[ \Prob{X - \Exp{X} \geq t} \leq \exp\left( - \frac{t^2}{2\sum_{i=1}^n \sigma_i^2} \right), \] and \[ \Prob{X - \Exp{X} \leq -t} \leq \exp\left( - \frac{t^2}{2\sum_{i=1}^n \sigma_i^2} \right). \] \end{lemma} \paragraph{Bounded Difference Condition} While the previous inequalities showed concentration for sums of independent random variables whose tail behavior is well-controlled, McDiarmid's inequality provides concentration results for a general class of functions which depend on independent random variables, but in a limited way, satisfying the so-called ``bounded difference'' condition. \begin{lemma}[McDiarmid's inequality]\label{lem:McDiarmid} Let $X_1, \ldots, X_n$ be independent random variables such that for each $i \in [n]$, $X_i \in \mathcal{X}_i$. Let $\xi: \prod_{i=1}^n \mathcal{X}_i \to \Reals$ be a function of $(X_1, \ldots, X_n)$ that satisfies $\forall i, \forall x_1, \ldots, x_n, \forall x_i' \in \mathcal{X}_i$, \begin{equation}\label{eqn:bounded_difference} \left| \xi\left(x_1, \ldots, x_i, \ldots, x_n\right) - \xi\left(x_1, \ldots, x_i', \ldots, x_n\right) \right| \leq c_i. \end{equation} Then for all $t > 0$, \[ \Prob{\xi - \Exp{\xi} \geq t} \leq \exp \left( \frac{-2 t^2}{\sum_{i=1}^n c_i^2} \right). \] \end{lemma} By considering the negation $-\xi$ in lieu of $\xi$, one can obtain the same tail bound in the opposite direction.
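A quick numerical sanity check of Lemma \ref{lem:Hoeffding_bounded} (a sketch; the uniform variables, sample size, and threshold are illustrative choices, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(1)

# X_i ~ U[0, 1] i.i.d., so a_i = 0, b_i = 1 and E[X] = n / 2.
# Hoeffding's inequality gives P(X - E[X] >= t) <= exp(-2 t^2 / n).
n, trials, t = 200, 20000, 15.0
sums = rng.uniform(size=(trials, n)).sum(axis=1)
empirical = np.mean(sums - n / 2 >= t)
bound = np.exp(-2 * t**2 / n)
print(empirical, bound)  # the empirical tail frequency stays below the bound
```

Since a $[0,1]$-valued variable is sub-Gaussian with parameter $\sigma = 1/2$ (Hoeffding's lemma), the sub-Gaussian version (Lemma \ref{lem:Hoeffding_subG}) recovers the same exponent here.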
\section{Proof of Lemma \lowercase{\ref{lem:noisy_quantile}}}\label{sec:quantile_noisy} \iffalse{ \subsection{Lemmas to Ensure Sufficient Observations.} To begin with, we present two lemmas which guarantee there are sufficiently large number of observations on each row and column, hence, there is sufficient information available with high probability. \begin{lemma} [Sufficient observations in every row]\label{lem:overlap} The number of revealed entries is at least $\frac{np}{2}$ for every row $i \in [m]$ with high probability. That is to say, \[ \Prob{ \left| \cB_i \right| \geq \frac{np}{2}, \forall i\in [m]} \geq 1 - m \exp\left( -\frac{np}{8} \right). \] \end{lemma} \begin{proof}[Proof of Lemma \ref{lem:overlap}] Given $i \in [m]$, $\left\{ M(i,j) \right\}_{j \in [n]}$ is a set of $n$ i.i.d. random variables following Bernoulli distribution with probability $p$. Applying the binomial Chernoff bound on the mean of them, we obtain the following probabilistic tail bound for each $i$: \[ \Prob{\sum_{j=1}^n M(i,j) < \frac{np}{2}} \leq \exp\left( -\frac{np}{8} \right). \] Recall that $\cB_i = \{ j' \in [n]: M(i,j') = 1 \}$, and hence, $\left| \cB_i \right| = \sum_{j=1}^n M(i,j)$. Then an application of the union bound for every row $i \in [m]$ leads to the desired result. \end{proof} \begin{corollary} [Sufficient observations in every column]\label{coro:overlap} For every column $j \in [n]$, the number of revealed entries is at least $\frac{mp}{2}$ with high probability. That is to say, \[ \Prob{ \left| \cB^j \right| \geq \frac{mp}{2}, \forall j\in [n]} \geq 1 - n \exp\left( -\frac{mp}{8} \right). \] \end{corollary} \begin{proof}[Proof of Corollary \ref{coro:overlap}] This directly follows from Lemma \ref{lem:overlap} applied to the transpose of the matrix. \end{proof} }\fi When there is nontrivial noise present, the indicator may no longer be reliable. Hence, we need a way to control the effect of noise. We assume the additive noise is sub-Gaussian.
In addition to the condition defined in \eqref{eqn:sufficient_overlap}, we will use the following notation. \begin{align}\label{eqn:sufficient_overlap.col} \Ecol &\equiv \left\{|\cB^j| \geq \frac{mp}{2} \right\}. \end{align} \begin{proof}[Proof of Lemma \ref{lem:noisy_quantile}] Recall from Section \ref{sec:alg_noisy} (see Eqs. \eqref{eqn:estimate_marg} and \eqref{eqn:Z_marg}) that the quantile estimator is defined as \[ \hat{q}_{\marg}(j) = \frac{1}{n}\sum_{j' = 1}^n H \left( Z_{\marg}(j) - Z_{\marg}(j') \right), \] where \[ Z_{\marg}(j) = \begin{cases} \frac{ \sum_{i=1}^m M(i,j) Z(i,j)}{\sum_{i=1}^m M(i,j)}, & \text{if } \cB^j \neq \emptyset\\ \frac{1}{2}, & \text{if } \cB^j = \emptyset. \end{cases} \] We also note that the marginalization of the latent function, $g_{\marg}(y) := \int_0^1 g(x,y) dx$, is strictly increasing and $(l, L)$-biLipschitz, and hence invertible. We let $\zeta^{(j)} = g_{\marg}^{-1}\left( Z_{\marg}(j)\right)$ for the purpose of analysis. We also define an idealized estimator \[ \hat{q}_{*}(j) = \frac{1}{n}\sum_{j' = 1}^n H \left( \fcol{j} - \fcol{j'} \right), \] which will be used solely for analysis. By the triangle inequality, the error in quantile estimation is upper bounded as \[ \left| \hat{q}_{\marg}(j) - \fcol{j} \right| \leq \Big| \hat{q}_{\marg}(j) - \hat{q}_*(j) \Big| + \left| \hat{q}_*(j) - \fcol{j}\right|. \] If both $\Big| \hat{q}_{\marg}(j) - \hat{q}_*(j) \Big| \leq t_1$ and $\left| \hat{q}_*(j) - \fcol{j}\right| \leq t_2$ are satisfied, then $\left| \hat{q}_{\marg}(j) - \fcol{j} \right| \leq t_1 + t_2$. Therefore, for any $t_1, t_2 > 0$ and $t = t_1 + t_2$, \begin{align} &\Prob{ \left| \hat{q}_{\marg}(j) - \fcol{j} \right| > t } \label{eqn:partition1}\\ & \qquad \leq \Prob{ \Big| \hat{q}_{\marg}(j) - \hat{q}_*(j) \Big| > t_1 } + \Prob{ \left| \hat{q}_*(j) - \fcol{j}\right| > t_2 }.
\nonumber \end{align} Note that $\hat{q}_{*}(j)$ exponentially concentrates to $\fcol{j}$ as $n \to \infty$ by McDiarmid's inequality, for example. Therefore, it suffices to find a probabilistic tail upper bound for $\left| \hat{q}_{\marg}(j) - \hat{q}_*(j) \right|$: \begin{align*} \left| \hat{q}_{\marg}(j) - \hat{q}_*(j) \right| &= \left| \frac{1}{n} \sum_{j' = 1}^n \left[ H \left( Z_{\marg}(j) - Z_{\marg}(j') \right) - H \left( \fcol{j} - \fcol{j'} \right) \right] \right|\\ &\leq \frac{1}{n} \sum_{j' = 1}^n \Bigg| \left[ H \left( Z_{\marg}(j) - Z_{\marg}(j') \right) - H \left( \fcol{j} - \fcol{j'} \right) \right] \Bigg|. \end{align*} For $j' \neq j$, $\left| \left[ H \left( Z_{\marg}(j) - Z_{\marg}(j') \right) - H \left( \fcol{j} - \fcol{j'} \right) \right] \right| = 1$ with probability $\pf$, and $0$ otherwise (it is uniformly $0$ for $j' = j$). Now, if we can find an upper bound $\pfs \geq \pf$, then for $t > \pfs$, \[ \mathbb{P}\Big( \left| \hat{q}_{\marg}(j) - \hat{q}_*(j) \right| > t \Big) \leq \mathbb{P} \Big( Y > nt \Big) \leq \exp\left( -\frac{n( t - \pfs)^2}{t + \pfs} \right), \] where $Y \sim Binomial(n, \pfs)$. We define a monotone decreasing function $\Qf: \Ints_+ \to \Reals_+$ as \[ \Qf(x) = 2\sqrt{\pi} \left( \frac{1}{\sqrt{C_1 x}} + \frac{1}{\sqrt{C_2 x}} + \frac{1}{\sqrt{mp C_1 e^{-C_1}}} + \frac{1}{\sqrt{mp C_2 e^{-C_2}}} \right), \] where $C_1 = \frac{l^2}{2(D_{max} - D_{min})^2}$ and $C_2 = \frac{l^2}{8\sigma^2}$ are some model-dependent constants. \paragraph{Claim 1} We show that $\pf \leq \Qf\left( \left|\cB^j \right|\right)$, i.e., $\pf$ is bounded above by a function of the number of revealed entries on column $j$, $|\cB^j|$. The estimator $\hat{q}_{\marg}(j)$ exploits the pairwise ordering information of column pair $(j,j')$ by taking the sign of $Z_{\marg}(j) - Z_{\marg}(j')$, which might be different from the true ordering $sign\left( \fcol{j} - \fcol{j'} \right)$ due to the presence of noise. 
We analyze the probability that this ordering is disturbed. Note that $sign \Big( Z_{\marg}(j) - Z_{\marg}(j') \Big) = sign \left( \zeta^{(j)} - \zeta^{(j')}\right)$ because $g_{\marg}$ is strictly monotone increasing. Let $X_j := \zeta^{(j)} - \fcol{j}$. Then since $g_{\marg}$ is $(l,L)$-biLipschitz, for any $s > 0$, \begin{align*} \Prob{X_j \geq s} &\leq \Prob{ Z_{\marg}(j) - g_{\marg}\left(\fcol{j}\right) \geq ls}\\ &= \Prob{ \frac{1}{|\cB^j|} \sum_{i' \in \cB^j} Z(i',j) - g_{\marg}\left(\fcol{j}\right) \geq ls }\\ &\leq \Prob{ \frac{1}{|\cB^j|} \sum_{i' \in \cB^j} A(i',j) - g_{\marg}\left(\fcol{j}\right) \geq \frac{ls}{2} }\\ &\quad + \Prob{ \frac{1}{|\cB^j|} \sum_{i' \in \cB^j} N(i',j) \geq \frac{ls}{2} }\\ &\leq \exp\left( - \frac{\left| \cB^j \right| l^2 s^2}{2( D_{max} - D_{min} )^2} \right) + \exp \left( -\frac{\left| \cB^j \right| l^2 s^2}{ 8 \sigma^2} \right). \end{align*} For brevity, we let $C_1 = \frac{l^2}{2( D_{max} - D_{min} )^2}$ and $C_2 = \frac{l^2 }{ 8 \sigma^2}$ throughout the rest of the proof. We can obtain the same upper bound for $\Prob{X_j \leq -s}$. Since $X_j - X_{j'} = \left( \zeta^{(j)} - \zeta^{(j')}\right) - \left( \fcol{j} - \fcol{j'} \right)$, the pairwise order is conserved unless \begin{align*} \begin{cases} X_j - X_{j'} < - \left( \fcol{j} - \fcol{j'}\right), & \text{when } \fcol{j} - \fcol{j'} \geq 0,\\ X_j - X_{j'} > \fcol{j'} - \fcol{j}, & \text{when } \fcol{j} - \fcol{j'} < 0. \end{cases} \end{align*} Given $\fcol{j}$, the probability that $\fcol{j'}$ is smaller than $\fcol{j}$ equals $\fcol{j}$, i.e., $\Prob{\fcol{j} - \fcol{j'} \geq 0} = \fcol{j}$. Therefore, the probability of the problematic event can be partitioned as \begin{align*} & \Prob{ sign\left( \zeta^{(j)} - \zeta^{(j')} \right) \neq sign\left( \fcol{j} - \fcol{j'} \right) } \\ &\qquad= \Prob{\left.
X_j - X_{j'} < - \left( \fcol{j} - \fcol{j'}\right) \right| \fcol{j} - \fcol{j'} \geq 0} \Prob{\fcol{j} - \fcol{j'} \geq 0 }\\ &\qquad\quad+\Prob{\left. X_j - X_{j'} > \fcol{j'} - \fcol{j} \right| \fcol{j} - \fcol{j'} < 0} \Prob{\fcol{j} - \fcol{j'} < 0 }. \end{align*} The first conditional probability can be upper bounded by \begin{align*} & \Prob{\left. X_j - X_{j'} < - \left( \fcol{j} - \fcol{j'}\right) \right| \fcol{j} - \fcol{j'} \geq 0}\\ &\qquad \leq \Prob{\left. X_j < - \frac{ \fcol{j} - \fcol{j'}}{2} \right| \fcol{j} - \fcol{j'} \geq 0}\\ &\qquad\quad + \Prob{\left. X_{j'} > \frac{ \fcol{j} - \fcol{j'}}{2} \right| \fcol{j} - \fcol{j'} \geq 0}. \end{align*} Meanwhile, if we define a new random variable $T := \frac{ \fcol{j} - \fcol{j'}}{2} $ and let $\tau$ denote its realization, we can see that $f_T(\tau) = \frac{2}{\fcol{j}} \Ind{0 \leq T \leq \frac{\fcol{j}}{2}}$, conditioned on $\fcol{j} - \fcol{j'} \geq 0$. \begin{align*} &\Prob{ X_j < - \tau \left| \fcol{j} - \fcol{j'} \geq 0 \right.}\\ &\qquad = \sum_{k=0}^m \Prob{|\cB^j| = k} \Prob{ X_j < - \tau \left| \fcol{j} - \fcol{j'} \geq 0, |\cB^j| = k \right.}\\ &\qquad \leq \sum_{k=0}^m {m \choose k} p^k (1-p)^{m-k} \Big[ \exp \left( - C_1 k \tau^2 \right) + \exp \left( -C_2 k \tau^2 \right) \Big]\\ &\qquad = \left[ p e^{-C_1 \tau^2} + (1-p) \right]^m + \left[ p e^{-C_2 \tau^2} + (1-p) \right]^m\\ &\qquad = \left[ 1 - \frac{mp \left( 1 - e^{-C_1 \tau^2}\right)}{m} \right]^m + \left[ 1 - \frac{mp \left( 1 - e^{-C_2 \tau^2}\right)}{m} \right]^m\\ &\qquad \leq \exp\bigg[ -mp \left( 1 - e^{-C_1 \tau^2}\right) \bigg] + \exp \bigg[ - mp \left( 1 - e^{-C_2 \tau^2}\right) \bigg]. \end{align*} As a result, \begin{align} &\Prob{ \left. sign\left( \zeta^{(j)} - \zeta^{(j')} \right) \neq sign\left( \fcol{j} - \fcol{j'} \right) \right| |\cB^j| =k } \nonumber\\ &\qquad= \Prob{\fcol{j} - \fcol{j'} \geq 0 } \label{eqn:term.1}\\ &\qquad\qquad \times \Prob{\left. 
X_j - X_{j'} < - \left( \fcol{j} - \fcol{j'}\right) \right| \fcol{j} - \fcol{j'} \geq 0, |\cB^j| =k} \nonumber\\ &\qquad\quad+ \Prob{\fcol{j} - \fcol{j'} < 0 } \label{eqn:term.2}\\ &\qquad\qquad \times \Prob{\left. X_j - X_{j'} > \fcol{j'} - \fcol{j} \right| \fcol{j} - \fcol{j'} < 0, |\cB^j| =k}. \nonumber \end{align} Note that $X_j < -\tau$ and $X_{j'} > \tau$ together imply $X_j - X_{j'} < - 2\tau$ for any $\tau \in \Reals$. Therefore, for any $\tau \in \Reals$, it follows that $\Prob{ X_j - X_{j'} < - 2\tau} \leq \Prob{X_j < -\tau } + \Prob{X_{j'} > \tau}$. Now we will obtain an upper bound on Eq. \eqref{eqn:term.1} by finding upper bounds on each term and then taking the union bound. Note that $\frac{d\Prob{ \fcol{j} - \fcol{j'} = 2\tau \left| \fcol{j} - \fcol{j'} \geq 0 \right.}}{d \tau}= \frac{2}{\fcol{j}} \bI \big\{0 \leq \tau \leq \frac{\fcol{j}}{2} \big\}$ and $\Prob{\fcol{j} - \fcol{j'} \geq 0 }=\fcol{j}$. \begin{align*} &\Prob{\left. X_j < - \frac{ \fcol{j} - \fcol{j'}}{2} \right| \fcol{j} - \fcol{j'} \geq 0, |\cB^j| =k} \Prob{\fcol{j} - \fcol{j'} \geq 0 }\\ &\qquad= \int_{\tau} \Prob{ X_j < -\tau \left| \fcol{j} - \fcol{j'} = 2\tau, |\cB^j| =k\right.} \times \\ &\qquad\qquad \frac{d\Prob{ \fcol{j} - \fcol{j'} = 2\tau \left| \fcol{j} - \fcol{j'} \geq 0 \right.}}{d \tau} \Prob{\fcol{j} - \fcol{j'} \geq 0 } d \tau\\ &\qquad = 2\int_0^{\frac{\fcol{j}}{2}} \Prob{ X_j < - \tau \left| \fcol{j} - \fcol{j'} = 2\tau, |\cB^j| =k\right.} d\tau\\ &\qquad \leq 2\int_0^{\frac{\fcol{j}}{2}} \exp\left( -C_1 k \tau^2 \right) + \exp \left( - C_2 k \tau^2 \right) d\tau\\ &\qquad \leq 2\int_0^{\infty} \exp\left( -C_1 k \tau^2 \right) + \exp \left( - C_2 k \tau^2 \right) d\tau\\ &\qquad = \sqrt{\pi} \left( \frac{1}{\sqrt{C_1 k}} + \frac{1}{\sqrt{C_2 k}} \right). \end{align*} Similarly, we can obtain an upper bound for $X_{j'}$.
Note that columns $j$ and $j'$ are independent (because $\fcol{j}$ and $\fcol{j'}$ are independently drawn) and that for $c > 0$, $1 - e^{-cu^2} \geq c e^{-c} u^2, ~\forall u \in [0,1]$. \begin{align*} &\Prob{\left. X_{j'} > \frac{ \fcol{j} - \fcol{j'}}{2} \right| \fcol{j} - \fcol{j'} \geq 0, |\cB^j| =k} \Prob{\fcol{j} - \fcol{j'} \geq 0 }\\ &\qquad= \int_{\tau} \Prob{ X_{j'} > \tau \left| \fcol{j} - \fcol{j'} = 2\tau, |\cB^j| =k\right.} \times \\ &\qquad\qquad \frac{d\Prob{ \fcol{j} - \fcol{j'} = 2\tau \left| \fcol{j} - \fcol{j'} \geq 0 \right.}}{d \tau} \Prob{\fcol{j} - \fcol{j'} \geq 0 } d \tau\\ &\qquad = 2\int_0^{\frac{\fcol{j}}{2}} \Prob{ X_{j'} > \tau \left| \fcol{j} - \fcol{j'} = 2\tau, |\cB^j| =k\right.} d\tau\\ &\qquad = 2\int_0^{\frac{\fcol{j}}{2}} \Prob{ X_{j'} > \tau \left| \fcol{j} - \fcol{j'} = 2\tau\right.} d\tau \\ %&\because \text{column independence}
&\qquad \leq 2\int_0^{\frac{\fcol{j}}{2}} \exp\left[ -mp \left( 1 - e^{-C_1 \tau^2}\right) \right] + \exp \left[ - mp \left( 1 - e^{-C_2 \tau^2}\right) \right] d\tau\\ &\qquad \leq 2\int_0^{\frac{\fcol{j}}{2}} \exp\left( -mp C_1 e^{-C_1} \tau^2\right) + \exp \left( - mp C_2 e^{-C_2} \tau^2 \right) d\tau \\ %&\because 1 - e^{-cu^2} \geq c e^{-c} u^2, ~\forall u \in [0,1]
&\qquad \leq 2\int_0^{\infty} \exp\left( -mp C_1 e^{-C_1} \tau^2\right) + \exp \left( - mp C_2 e^{-C_2} \tau^2 \right) d\tau\\ &\qquad = \sqrt{\pi} \left( \frac{1}{\sqrt{mp C_1 e^{-C_1}}} + \frac{1}{\sqrt{mp C_2 e^{-C_2}}} \right). \end{align*} We used the fact (see Eq. \eqref{eqn:half_normal}) that \[ \int_0^{\infty} e^{-ax^2} dx = \frac{1}{2}\sqrt{\frac{\pi}{a}}. \] From these, we can conclude that \[ \text{Eq. \eqref{eqn:term.1}} \leq \sqrt{\pi} \left( \frac{1}{\sqrt{C_1 k}} + \frac{1}{\sqrt{C_2 k}} + \frac{1}{\sqrt{mp C_1 e^{-C_1}}} + \frac{1}{\sqrt{mp C_2 e^{-C_2}}} \right). \] In the same vein, a similar upper bound can be derived for Eq. \eqref{eqn:term.2}.
It suffices to remark that \begin{align*} \frac{d\Prob{ \fcol{j} - \fcol{j'} = -2\tau \left| \fcol{j} - \fcol{j'} < 0 \right.}}{d \tau} &= \frac{2}{1 - \fcol{j}} \Ind{0 \leq \tau \leq \frac{1 - \fcol{j}}{2} },\\ \Prob{\fcol{j} - \fcol{j'} < 0 } &= 1-\fcol{j}. \end{align*} Then by the same logic, \[ \text{Eq. \eqref{eqn:term.2}} \leq \sqrt{\pi} \left( \frac{1}{\sqrt{C_1 k}} + \frac{1}{\sqrt{C_2 k}} + \frac{1}{\sqrt{mp C_1 e^{-C_1}}} + \frac{1}{\sqrt{mp C_2 e^{-C_2}}} \right). \] Consequently, we can conclude Claim 1: \begin{align*} \pf &= \Prob{ sign\left( \zeta^{(j)} - \zeta^{(j')} \right) \neq sign\left( \fcol{j} - \fcol{j'} \right)}\\ &\leq 2\sqrt{\pi} \left( \frac{1}{\sqrt{C_1 \left| \cB^j \right|}} + \frac{1}{\sqrt{C_2 \left| \cB^j \right|}} + \frac{1}{\sqrt{mp C_1 e^{-C_1}}} + \frac{1}{\sqrt{mp C_2 e^{-C_2}}} \right)\\ &=: \Qf \left( \left|\cB^j \right|\right). \end{align*} \paragraph{Claim 2} Next, we observe that for $t \geq 2\Qf \left( \frac{mp}{2}\right)$, \[ \Prob{ \left| \hat{q}_{\marg}(j) - \hat{q}_*(j) \right| > t } \leq \left.\exp\left( -\frac{n( t - \pfs)}{3} \right) \right|_{\pfs = \Qf(\frac{mp}{2})} + \exp\left( - \frac{mp}{8} \right). \] This follows from the usual union bound argument, conditioning on the event $\Ecol$ (see Eq. \eqref{eqn:sufficient_overlap.col}): \begin{align*} &\Prob{ \left| \hat{q}_{\marg}(j) - \hat{q}_*(j) \right| > t } \\ &\qquad \leq \Prob{ Y > nt }\\ &\qquad = \Prob{ Y > nt \left| \Ecol \right.} \Prob{ \Ecol } + \Prob{ Y > nt \left| \Ecol^c \right.} \Prob{ \Ecol^c }\\ &\qquad \leq \Prob{ Y > nt \left| \Ecol \right.} + \Prob{ \Ecol^c }\\ &\qquad \leq \left. \exp\left( -\frac{n( t - \pfs)^2}{t + \pfs} \right) \right|_{\pfs = \Qf(\frac{mp}{2})} + \exp\left( - \frac{mp}{8} \right). \end{align*} We used the monotonicity of $\Qf$ and the binomial Chernoff bound, respectively, to bound the two terms.
For $t \geq 2\pfs$, $\frac{t-\pfs}{t+\pfs} \geq \frac{1}{3}$ and hence, \[ \Prob{ \left| \hat{q}_{\marg}(j) - \hat{q}_*(j) \right| > t } \leq \left.\exp\left( -\frac{n( t - \pfs)}{3} \right) \right|_{\pfs = \Qf(\frac{mp}{2})} + \exp\left( - \frac{mp}{8} \right). \] Combining the results in Claims 1 and 2 back to Eq. \eqref{eqn:partition1} with the choice of $t_1 = t_2 = \frac{t}{2}$, we have for any $t \geq 4\Qf(\frac{mp}{2}) = \frac{8\sqrt{\pi}}{\sqrt{mp}} \left( \frac{\sqrt{2} + e^{C_1/2}}{\sqrt{C_1}} + \frac{\sqrt{2} + e^{C_2/2}}{\sqrt{C_2}} \right)$, \begin{align*} &\Prob{ \left| \hat{q}_{\marg}(j) - \fcol{j} \right| > t }\\ &\qquad \leq \Prob{\Big| \hat{q}_{\marg}(j) - \hat{q}_*(j) \Big| > \frac{t}{2} } + \Prob{ \left| \hat{q}_*(j) - \fcol{j}\right| > \frac{t}{2} }\\ &\qquad \leq \left.\exp\left( -\frac{n( \frac{t}{2} - \pfs)}{3} \right) \right|_{\pfs = \Qf(\frac{mp}{2})} + \exp\left( - \frac{nt^2}{2} \right)\\ &\qquad\quad + \exp\left( - \frac{mp}{8} \right) . \end{align*} \end{proof} \iffalse{ \subsection{A Stronger Variant of Lemma \ref{lem:noisy_quantile_prelim}} For later use in the proof of Lemma \ref{lem:T_good}, we state an even stronger variant of the lemma \ref{lem:noisy_quantile_prelim} as a corollary. \begin{corollary}\label{coro:noisy_quantile_uniform} The marginal quantile estimator $\hat{q}_{\marg}(j)$ concentrates to $\theta_{col}^{(j)}$ uniformly for all $j \in [n]$ with high probability. Specifically, for any $s \geq (mp)^{-1/3}$ and $t \geq 16 s + 32 \exp\left( -c (mp)^{1/3} \right)$, \begin{align*} &\Prob{ \exists j \in [n]: \left| \hat{q}_{\marg}(j) - \fcol{j} \right| \geq t }\\ &\quad \leq 2n\exp\left( -\frac{n t^2}{2} \right) + n\exp\left(-\frac{nt}{12} \right) + n \exp\left( -\frac{mp}{8} \right)\\ &\qquad + n \exp\left( - \frac{ns}{3} \right) + 4n \exp\left(-c (mp)s^{2}\right), \end{align*} where $c = \min \left\{\frac{l^2}{16\left( D_{max} - D_{min} \right)^2}, \frac{l^2}{64\sigma^2} \right\}$. 
\end{corollary} \begin{proof} Again we will prove the corollary by applying the union bound: \begin{align*} &\Prob{ \exists j \in [n]: \left| \hat{q}_{\marg}(j) - \fcol{j} \right| \geq t }\\ & \leq \Prob{ \exists j \in [n]: \left| \hat{q}_{\marg}(j) - \hat{q}_{\theta}(j) \right| \geq \frac{t}{2} }\\ &\quad + \Prob{ \exists j \in [n]: \left| \hat{q}_{\theta}(j) - \fcol{j} \right| \geq \frac{t}{2} }\\ & \leq \Prob{ \exists j \in [n]: \left| \hat{q}_{\marg}(j) - \hat{q}_{\theta}(j) \right| \geq \frac{t}{2} }\\ &\quad + \Prob{ \left.\exists j \in [n]: \left| \hat{q}_{\marg}(j) - \hat{q}_{\theta}(j) \right| \geq \frac{t}{2} \right| E} + \Prob{E^c}\\ & \leq \sum_{j=1}^n \Prob{ \left| \hat{q}_{\marg}(j) - \hat{q}_{\theta}(j) \right| \geq \frac{t}{2} }\\ &\quad + \sum_{j=1}^n \Prob{ \left. \left| \hat{q}_{\marg}(j) - \hat{q}_{\theta}(j) \right| \geq \frac{t}{2} \right| E} + \Prob{E^c}. \end{align*} The conclusion can be derived by plugging in Eqs. \eqref{eqn:bound1}, \eqref{eqn:conditionE}, and \eqref{eqn:discrepancy} to the last line. \end{proof} }\fi \section{Setup}\label{sec:model} \subsection{Problem Statement}\label{sec:statement} We wish to estimate a matrix $A \in \Reals^{m \times n}$ from partial, and possibly noisy, observations $Z \in \Reals^{m \times n}$. Let $\cO \subset [m] \times [n]$ denote the set of indices for which $Z(i,j)$ is observed; the observations are such that $\Exp{Z(i,j)} = A(i,j)$. In this paper, we assume the additive noise model \[ Z(i,j) = A(i,j) + N(i,j), \quad \forall (i,j) \in \cO, \] where $N(i,j)$ are independent and identically distributed random variables with zero mean: $\Exp{N(i,j)} = 0$. For $(i,j) \in [m] \times [n] \setminus \cO$, $Z(i,j)$ is not observed, denoted as $Z(i,j) = \star$. We shall assume that each entry $(i,j) \in [m] \times [n]$ belongs to $\cO$ with probability $p \in (0,1]$ independently.
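The observation model can be sketched as follows (the dimensions, the noise scale, and the particular latent function are placeholder choices for illustration; \texttt{np.nan} stands in for the symbol $\star$):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p, sigma = 50, 80, 0.3, 0.1   # illustrative sizes and parameters

# Latent row/column features, i.i.d. uniform on [0, 1].
theta_row = rng.uniform(size=m)
theta_col = rng.uniform(size=n)

# A hypothetical latent function g(x, y) = exp(x) * y (an assumption
# made for this sketch; the setup only posits some measurable g).
A = np.exp(theta_row)[:, None] * theta_col[None, :]

N = rng.normal(scale=sigma, size=(m, n))  # zero-mean (sub-Gaussian) noise
M = rng.uniform(size=(m, n)) < p          # each entry observed w.p. p

Z = np.where(M, A + N, np.nan)            # observed entries carry A + N
```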
We assume a nonparametric model for the matrix $A$: each row $i \in [m]$ and column $j \in [n]$ is associated with latent features $\frow{i}, \fcol{j} \in [0,1] \subset \Reals$ which are independent and identically distributed as per some distribution, say, the uniform distribution, and the $(i,j)$-th entry of matrix $A$ takes the form \begin{equation}\label{eqn:latent_ftn} A(i,j) = g\left( \frow{i}, \fcol{j}\right) \end{equation} for some latent measurable function $g: [0,1]^2 \to \Reals$. By the celebrated Aldous-Hoover theorem (\citet{Aldous81, Hoover82}), there always exists such a latent model representation for exchangeable data. However, this representation is not unique, because we can apply an invertible transform to the domain (latent feature space) and take the push-forward of the latent function with respect to the transform, so that $A(i,j)$ remains the same under the new representation. Therefore, estimation of the latent function $g$ is an ill-posed problem, and we would rather focus on prediction of the values $A(i,j)$ for $(i,j) \in [m] \times [n]$. \begin{problem}\label{problem:problem_statement} Given a data matrix $Z \in \Reals^{m \times n}$, can we recover the true parameter matrix $A \in \Reals^{m \times n}$ under the aforementioned setup in an algorithmically efficient manner? \end{problem} As we are released from the burden of identifying the latent features and latent function, we may assume our latent features, $\frow{i}$ for $i \in [m]$, and $\fcol{j}$, $j \in [n]$, are distributed as per the uniform distribution over the unit interval $[0,1]$. \subsection{Performance Metric}\label{sec:MSE} Given an estimator $\varphi: \Reals^{m \times n} \to \Reals^{m \times n}$, which returns the estimate $\hat{A} = \varphi(Z)$ of matrix $A$ using $Z$, we use the mean-squared error (MSE) to evaluate the performance: \begin{align} MSE(\varphi) &= \Exp{\frac{1}{mn} \sum_{i=1}^m \sum_{j=1}^n \left( \hat{A}(i,j) - A(i,j) \right)^2}. 
\label{eqn:MSE} \end{align} We call the estimator $\varphi$ consistent if the MSE vanishes as the problem size $(m,n)$ increases, i.e., \[ \lim_{m, n \to \infty} MSE(\varphi) = 0. \] With these notations, the refined problem of interest is as follows. \begin{problem}\label{problem:problem_statement2} If consistent recovery in Problem \ref{problem:problem_statement} is possible for $p$ large enough, how fast does the MSE converge to $0$ as a function of $p, m$ and $n$? \end{problem} \subsection{Operating Model Assumptions} In addition to the assumptions for the additive noise model presented in Section \ref{sec:statement}, we assume some additional properties for the latent function $g$ (see Eq. \eqref{eqn:latent_ftn}) and the noise distribution. \subsubsection{Assumptions on the latent function} In addition to measurability, certain types of smoothness conditions are usually imposed on the latent function, such as Lipschitz- or H\"older continuity. In this paper, we will focus on the class of functions $g: [0,1]^2 \to \Reals$, which are bounded, monotone increasing (Eq. \eqref{eqn:increasing}) and $(l, L)$ bi-Lipschitz (Eq. \eqref{eqn:biLipschitz}) with respect to the second argument. That is to say, \begin{align} y_1 \leq y_2 \implies g(x,y_1) \leq g(x, y_2), \quad\forall x \in [0,1], \qquad\text{ and} \label{eqn:increasing}\\ \exists l, L > 0 \quad \text{s.t.}\quad 0 < l \leq \frac{g(x,y_2) - g(x,y_1)}{y_2 - y_1} \leq L < \infty, \quad \forall x, \forall y_1 \neq y_2. \label{eqn:biLipschitz} \end{align} However, we impose no further restrictions on $g$ with regard to the first argument. A bi-Lipschitz mapping is injective, and hence a bijection onto its image. Therefore, for each $x \in [0,1]$, we can define the inverse of $g(x, \cdot): [0,1] \to \left[g(x,0), g(x,1) \right]$, as $g^{-1}_{x} : \left[g(x,0), g(x,1) \right] \to [0,1]$. It is easy to check that $g^{-1}_{x}$ is also monotone increasing and $(\frac{1}{L}, \frac{1}{l})$ bi-Lipschitz.
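As a concrete (hypothetical, purely illustrative) instance, $g(x,y) = e^x y$ satisfies these conditions with $(l, L) = (1, e)$, since $\partial g / \partial y = e^x \in [1, e]$ for $x \in [0,1]$. A quick numerical check of Eqs. \eqref{eqn:increasing} and \eqref{eqn:biLipschitz} for this choice:

```python
import numpy as np

rng = np.random.default_rng(2)

def g(x, y):
    # Hypothetical latent function: monotone increasing and
    # (1, e)-biLipschitz in its second argument.
    return np.exp(x) * y

x = rng.uniform(size=100_000)
y1, y2 = rng.uniform(size=(2, 100_000))
gap = np.abs(y2 - y1) > 1e-3              # avoid near-ties for stability
quot = (g(x, y2)[gap] - g(x, y1)[gap]) / (y2 - y1)[gap]
# The difference quotient equals exp(x), so it lies in [1, e] = [l, L].
assert np.all(quot >= 1.0 - 1e-9) and np.all(quot <= np.e + 1e-9)
```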
\subsubsection{Assumptions on the noise}\label{sec:noise} We assume noise is symmetric with mean zero, and sub-Gaussian with parameter $\sigma$, i.e., $ \Exp{e^{tX}} \leq e^{\frac{t^2 \sigma^2}{2}}, ~ \forall t \in \Reals.$ In addition, we assume the noise is supersmooth (see Appendix \ref{appx:noise}, cf. \citet{Fan1991, Delaigle2008} for more detail), i.e., there exist $B > 1$, and $\beta, \gamma > 0$ such that \begin{equation}\label{eqn:model_supersmooth} B^{-1} \exp \left( -\gamma |t|^{\beta} \right) \leq \phi_N (t) \leq B\exp \left( -\gamma |t|^{\beta} \right),\qquad\forall t\in \Reals, \end{equation} where $\phi_N(t)$ is the characteristic function of the noise distribution. For example, Gaussian noise is a typical example of super-smooth noise with parameter $\beta = 2$. As the name suggests, supersmooth noise is smoother than the class of `ordinary-smooth' noise (cf. \citet{Fan1991} for definition), which has polynomially decaying tail in the Fourier domain. \subsection{Recapping the Model} For a succinct representation of the model introduced so far, we introduce three matrices of the same size, $A, N, M \in \Reals^{m \times n}$. Specifically, $A$ is the matrix which we would like to estimate. $N$ is a random matrix of size $(m,n)$, whose entries are drawn i.i.d. as per a noise distribution. $M$ is a random binary masking matrix with each entry being $1$ with probability $p$ and $0$ with probability $1-p$, independently. The observation matrix $Z$ is such that $Z(i,j) = A(i,j) + N(i,j)$ if $M(i,j) = 1$, and $Z(i,j) = \star$ if $M(i,j) = 0$ regardless of the value of $A(i,j) + N(i,j)$. \iffalse{ \subsection{Exchangeability \DG{***TODO***}} Suppose that our data matrix $Z \in \Reals^{m \times n}$ is an instance of the first $m \times n$ entries of a 2-dimensional exchangeable random array $\bZ = \left\{\bZ(i,j) \right\}_{(i,j) \in \Nats^2}$. Recall that a random array is called \emph{exchangeable} if its distribution is invariant under permutations. 
Formally, \begin{equation}\label{eqn:exchangeable} \bZ(i,j) \eqd \bZ\left( \sigma(i), \tau(j) \right) \quad \text{for all }(i,j), \end{equation} for every pair of permutations $\sigma, \tau$ of $\Nats$. We use the symbol $\eqd$ to denote the equivalence in distribution, i.e., random variables on both sides have the same distribution. \citet{Austin12}, \citet{OrbRoy2015} \DG{Cite Austin (2012) and Orbanz and Roy (2015)} Commutatitivity of the algorithm and permutation can be a rationale behind the exchangeability. \DG{Exchangeablility assumption is appropriate for a variety of reasons; it makes sense especially when there is no canonical order, e.g., for anonymized dataset. Should I mention this here, or in Introduction?} Exchangeability leads to so-called latent variable model, which is convenient to work with. According to the Aldous-Hoover representation theorem, a random array $\bZ$ is exchangeable if and only if it can be represented as a random measurable function of independent random variables: \begin{equation}\label{eq:Aldous-Hoover} \bZ(i,j) \eqd g_{\theta} \left( \frow{i}, \fcol{j}, \finter{ij} \right)\quad \text{for all }(i,j), \end{equation} where $\theta, \left\{ \frow{i} \right\}_{i \in \Nats}, \left\{ \fcol{j} \right\}_{j \in \Nats}, \left\{ \finter{ij} \right\}_{(i,j) \in \Nats^2}$ are independent random variables drawn from the uniform distribution over the unit interval $[0,1]$. Here, $g_{\theta}$ is a function indexed by a random parameter $\theta$, such that $g_t: [0,1]^3 \to \Reals$ is measurable for any particular realization $\theta = t$. As described in (e.g., Orbanz and Roy), this model suggests the following hierarchical generative process: \begin{enumerate} \item Sample $\theta \sim U[0,1]$, which determines the underlying governing function $g_{\theta}$. \item For given $(m,n)$, sample independent and identically distributed (i.i.d.) 
uniform random variables $\frow{i}, \fcol{j}, \finter{ij} \sim U[0,1]$ for every row $i \in [m]$ and every column $j \in [n]$. \item Compute the value at the $(i,j)$-th entry of data matrix $Z$ as \[ Z(i,j) = g_{\theta} \left( \frow{i}, \fcol{j}, \finter{ij} \right). \] \end{enumerate} Comparing Eq. \eqref{eq:observation} to \eqref{eq:Aldous-Hoover}, we model $A + N$ as a latent function of row and column features with additive noise coming from $\theta_{int}^{(ij)}$. \begin{align*} A(i,j) &= g_{\theta} \left( \frow{i}, \fcol{j} \right),\\ N(i,j) &= F_{N}^{-1}\left( \finter{ij} \right). \end{align*} $F_N$ is the cumulative distribution function of the noise, and $F_N^{-1}$ denote its right pseudo-inverse (quantile function). If there exists nontrivial noise (i.e., $F_N$ is not a step centered at $x=0$), we assume its distribution is absolutely continuous and let $f_N$ denote its probability density function. We also let $\phi_N(t)$ denote its Fourier transform (i.e., the characteristic function). These concepts are summarized in Appendix \ref{appx:distribution}. \DG{Move it to the proof? ---- under the exchangeability assumption to follow, this expression can be reduced to $\Exp{\left( \hat{A}(i,j) - A(i,j) \right)^2}$.} \subsection{Connection to Graphons \DG{***TODO***}} A graphon is a symmetric measurable function $W: [0,1]^2 \to [0,1]$, which is a widely used model in the study of large graphs. A graphon can be used to define an exchangeable random graph model. Given the number of vertices $n$ (i.e., vertex set $V = [n]$), the edge set $E$ of a random graph $\bG = (V, \bE)$ is generated according to the following 2-step procedure. \begin{enumerate} \item Each vertex $i \in V$ is assigned an independent random value $\theta_i \sim U[0,1]$. \item For each pair $(i,j)$, the corresponding edge is included in $\bE$ with probability $W(\theta_i, \theta_j)$. 
\end{enumerate} This process can be rewritten in the language of exchangeable model discussed in the previous section as follows: \[ \Ind{(i,j) \in E} = \Ind{W_{\theta}(\theta_i, \theta_j) \leq \theta_{int}^{(ij)} }. \] $\Ind{\cdot}$ denotes the indicator function which returns $1$ if the condition is satisfied and $0$ otherwise (see Appendix \ref{sec:notation}, Eq. \eqref{eqn:Indicator} for its definition). The universal parameter $\theta$ determines the graphon $W_{\theta}$ used, and the row and column features are identical $\theta_i = \theta_{row}^{(i)} = \theta_{col}^{(i)}$ because this is a symmetric model. \DG{Cite some related literature?} }\fi \section{Lower bounds: Proofs of Theorems \ref{thm:optimal_noiseless} and \ref{thm:optimal_deconvolution} } \label{sec:full_proof_lower_bounds} \subsection{Lower Bounding MSE by the Squared $L^2$ Distance of Functions} Here, we establish that the MSE of any estimator can be lower bounded by the $L^2$ distance between the ``estimated'' latent function and the actual latent function. This is a useful step towards establishing the desired bounds in Theorems \ref{thm:optimal_noiseless} and \ref{thm:optimal_deconvolution}, since for each setting we will identify hard instances of latent functions that are difficult to estimate in terms of $L^2$ distance. To that end, we shall assume that our estimator $\varphi$ has access to an oracle that provides information about the latent parameter associated with each column. Clearly, a lower bound on the MSE of such a powerful estimator is also a lower bound on the MSE of any valid estimator. Recall that the $L^2$ norm of a function $g$ defined on $[0,1]$ is defined as \begin{equation}\label{eqn:L2_I} \| g \|_{L^2[0,1]} = \left(\int_{0}^1 \left| g(x) \right|^2 dx \right)^{1/2}. \end{equation} We use a subscript to explicitly indicate that the function is estimated from a certain number of sample observations, i.e., we let $\hat{g}_{\nu}$ denote an estimated function for $g$ from $\nu$ sample points.
Also recall the definition of MSE from Eq. \eqref{eqn:MSE}: for estimator $\varphi: Z \mapsto \Aphi$, \[ MSE(\varphi) = \Exp{\frac{1}{mn} \sum_{i=1}^m \sum_{j=1}^n \left( \Aphi(i,j) - A(i,j) \right)^2}. \] \begin{lemma}\label{lem:MSE_L2} For any algorithm $\varphi: Z \mapsto \Aphi$, \[ MSE\left( \varphi \right) \geq (1-p) \bbE_{\frow{1}, \fcol{-1}, \nu} \left[ \left\| \gphi_{\nu}( \frow{1}, \cdot) - g(\frow{1}, \cdot) \right\|^2_{L^2[0,1]} \right], \] where $\nu \sim Binomial(n-1, p)$ and $\fcol{-1}$ denotes $\{ \fcol{j}: j \in [n], j \neq 1\}$. \end{lemma} \begin{proof} Given an algorithm $\varphi: Z \mapsto \Aphi$, we first reduce the expression of MSE as follows: \begin{align} MSE(\varphi) &= \Exp{\frac{1}{mn} \sum_{i=1}^m \sum_{j=1}^n \left( \Aphi(i,j) - A(i,j) \right)^2} \nonumber\\ &= \frac{1}{m}\sum_{i=1}^m \Exp{\frac{1}{n} \sum_{j=1}^n \left( \Aphi(i,j) - A(i,j) \right)^2} \nonumber\\ &= \frac{1}{n}\sum_{j=1}^n \Exp{ \left( \Aphi(1,j) - A(1,j) \right)^2} \qquad\because \text{rows are exchangeable} \nonumber\\ &= \Exp{ \left( \Aphi(1,1) - A(1,1) \right)^2} ~~\qquad\because \text{columns are exchangeable} \nonumber\\ &= \Exp{ \left( \gphi( \frow{1}, \fcol{1}) - g(\frow{1},\fcol{1}) \right)^2}. \label{eqn:single_L2} \end{align} Note that $\gphi$ is a function estimated based on the data $\{ Z(i,j): (i,j) \in \cO \}$. Now suppose that the algorithm $\varphi$ is equipped with an oracle, i.e., it has access to the true values of the latent features for $(i,j) \in \cO$. For an oracle algorithm having access to $\fcol{j}$, the observations $\{ Z(i,j): (i,j) \in \cO, i \neq 1\}$ provide no additional information for estimating $\Aphi(1,1)$. Note that no regularity is assumed over the first coordinate of the latent function $g$ in our model other than the monotonicity assumption.
The restrictions of the function $g$ to two different row features, $g(\frow{1}, \cdot)$ and $g(\frow{2}, \cdot)$, can be very different, and the only information transferable from one row to another is the pairwise order between column features. Since $\varphi$ already has access to the true column features, the mutual information between $\Aphi(1,1)$ and $\{ Z(i,j): (i,j) \in \cO, i \neq 1\}$ is zero. Recall that $M$ is the masking matrix, i.e., $M(i,j) = 1$ if and only if $(i,j) \in \cO$; otherwise, $M(i,j) = 0$. We let $M(1, \cdot)$ denote the first row of the matrix $M$, and let $\nu = \sum_{j=2}^n M(1, j)$ denote the number of observed entries in row $1$, excluding $(1,1)$. Note that $\nu$ is a random variable distributed as per the binomial distribution $Binomial(n-1, p)$. We use $\fcol{-1}$ as a shorthand notation to denote $\{ \fcol{j}: j \in [n], j \neq 1 \}$. Assuming $\varphi$ perfectly restores $g$ at $\left\{ \left(\frow{i}, \fcol{j}\right): (i,j) \in \cO \right\}$, it follows that \begin{align*} &Eq. \eqref{eqn:single_L2}\\ &= \bbE_{\frow{1}}\left[ \bbE_{\fcol{1}, \fcol{-1}, M}\left[ \left.\left( \gphi( \frow{1}, \fcol{1}) - g(\frow{1},\fcol{1}) \right)^2 \right| \frow{1} \right] \right]\\ &= \bbE_{\frow{1}}\left[ \bbE_{\fcol{1}, \fcol{-1}, M(1,\cdot)}\left[ \left.\left( \gphi( \frow{1}, \fcol{1}) - g(\frow{1},\fcol{1}) \right)^2 \right| \frow{1} \right] \right] \quad \because \text{oracle}\\ &\geq \bbE_{\frow{1}}\Big[ \bbE_{\fcol{1}, \fcol{-1}, M(1, \cdot)}\Big[ \left( \gphi( \frow{1}, \fcol{1}) - g(\frow{1},\fcol{1}) \right)^2 \\ &\qquad\times\Ind{M(1,1) \neq 1} \Big| \frow{1} \Big] \Big]\\ &\geq (1-p) \bbE_{\frow{1}, \fcol{-1}, \nu} \left[ \left\| \gphi_{\nu}( \frow{1}, \cdot) - g(\frow{1}, \cdot) \right\|^2_{L^2[0,1]} \right].
\end{align*} \end{proof} We investigate lower bounds on $\left\|\gphi_{\nu}( \frow{1}, \cdot) - g(\frow{1}, \cdot) \right\|^2_{L^2[0,1]}$ in the subsequent sections to establish Theorems \ref{thm:optimal_noiseless} and \ref{thm:optimal_deconvolution}. Without loss of generality, we may assume our matrix is a $1$ by $n$ matrix due to the oracle argument. To further establish the lower bound, we shall suppose that given $\{ \frow{1} \}$, $\{ \fcol{j} \}_{j \in [n]}$, $\{ Z(1,j): j \in [n] \text{ and } M(1,j) = 1 \}$, the algorithm $\varphi$ can perfectly estimate the function $g(\frow{1}, x)$ for $x \in \{\fcol{j} : j \in [n], M(1, j) = 1\}$. Then we show that there exists an adversarial function $g^{\dagger}$ such that \begin{enumerate} \item $g \left( \frow{1}, \fcol{j} \right) = g^{\dagger} \left( \frow{1}, \fcol{j} \right)$ for all $j \in [n]$ such that $M(1,j) = 1$, and \item $ \left\| g(\frow{1}, \cdot) - g^{\dagger}(\frow{1}, \cdot) \right\|_{L^2[0,1]}$ is sufficiently large. \end{enumerate} Since there is no way for $\varphi$ to distinguish $g^{\dagger}$ from $g$ based on the data, $\varphi$ would return the same output $g$ even if the latent function $g$ were replaced with $g^{\dagger}$. Therefore, $ \left\| g(\frow{1}, \cdot) - g^{\dagger}(\frow{1}, \cdot) \right\|_{L^2[0,1]}$ establishes a lower bound on $MSE(\varphi)$. More detailed arguments for the noiseless case (Section \ref{sec:noiseless_LB}) and the noisy case (Section \ref{sec:noisy_LB}) follow.
\subsection{Proof of Theorem \ref{thm:optimal_noiseless}}\label{sec:noiseless_LB} In this section, we show that for any slice of the true latent function $g_1:= g(\frow{1}, \cdot) : [0,1] \to \Reals$ and for any set of sampling points $y_1, \ldots, y_{\nu} \in [0,1]$, there exists an adversarial function $g_1^{\dagger}: [0,1] \to \Reals$ such that $g_1(y) = g_1^{\dagger}(y)$ for all $y \in \{y_1, \ldots, y_\nu\}$, yet $\| g_1 - g_1^{\dagger} \|_2^2 \geq \frac{c}{\nu}$ for some universal constant $c$, independent of $g_1$ and $\nu$. This claim follows from a classical result in function approximation theory. We define some notation before introducing the function approximation lemma. Recall that the $L^1$ norm of a function $g: [0,1] \to \Reals$ is defined as $\| g \|_{L^1[0,1]} := \int_0^1 | g(x)| dx$ (see Eq. \eqref{eqn:L2_I} for comparison with the $L^2$ norm). We let $L^1[0,1] := \left\{ g: [0,1] \to \Reals: \| g \|_{L^1[0,1]} < \infty \right\}$ denote the space of functions with finite $L^1$ norm, i.e., integrable functions. We also recall that $C^{\infty}[0,1]$ is the space of functions defined on $[0,1]$ which are infinitely differentiable. Lastly, we call a function $g$ $\delta$-Lipschitz if $| g(y_1) - g(y_2) | \leq \delta | y_1 - y_2 |$ for any two points $y_1, y_2$ in the domain of $g$. \begin{theorem}[\citet{Kudryavtsev1995}, Lemma 4.4, simplified]\label{thm:Kudryatsev} There exists a universal constant $c$ such that for every $\nu \in \Nats$, and for any $y_1, \ldots, y_{\nu} \in [0,1]$, there exists a $\delta$-Lipschitz function $h \in L^1[0,1] \cap C^{\infty}[0,1]$ for which \begin{enumerate} \item $h(y_i) = 0$, for all $i=1, \ldots, \nu$, and \item $\left\| h \right\|_{L^2[0,1]} \geq c \frac{\delta}{\sqrt{\nu}}$. \end{enumerate} \end{theorem} We use this theorem to prove Theorem \ref{thm:optimal_noiseless}.
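As a toy numerical illustration of such interpolating functions (ours, not the construction of \citet{Kudryavtsev1995}), one can always vanish at arbitrary sample points with a $\delta$-Lipschitz function by placing a triangular bump on each gap between consecutive points; the content of the theorem is that some such function additionally has $L^2$ norm at least $c\,\delta/\sqrt{\nu}$.

```python
import numpy as np

def bump_function(ys, delta):
    """A delta-Lipschitz function h with h(y_i) = 0 at every sample point:
    h(x) = delta * (distance from x to the nearest knot), where the knots
    are the sample points together with the endpoints 0 and 1."""
    knots = np.sort(np.concatenate(([0.0, 1.0], np.asarray(ys, dtype=float))))

    def h(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        # locate the gap [left, right] containing each x
        idx = np.clip(np.searchsorted(knots, x, side="right"), 1, len(knots) - 1)
        left, right = knots[idx - 1], knots[idx]
        return delta * np.minimum(x - left, right - x)

    return h

rng = np.random.default_rng(0)
ys = rng.uniform(size=20)
h = bump_function(ys, delta=0.5)
grid = np.linspace(0.0, 1.0, 2001)
# L^2[0,1] norm of h, by a Riemann sum on the grid
l2_norm = np.sqrt(np.sum(h(grid) ** 2) * (grid[1] - grid[0]))
```

The function vanishes at every $y_i$ and is exactly $\delta$-Lipschitz by construction; note, however, that this naive choice is only piecewise linear (not $C^{\infty}$), and its $L^2$ norm deteriorates when the sample points are evenly spread, which is why the lemma's construction is more delicate.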
\begin{theorem}[Full version of Theorem \ref{thm:optimal_noiseless}]\label{thm:noiseless_LB} In the noiseless scenario, for any estimation algorithm $\varphi$, there exists a hard instance for which $$ MSE(\varphi) \geq (1-p) \frac{c^2 \delta^2}{ (n-1)p}. $$ \end{theorem} \begin{proof}[Proof of Theorem \ref{thm:noiseless_LB}] Choose a positive real number $\delta < \frac{L - l}{2}$. Consider a bounded function $g: [0,1]^2 \to \Reals$, which is $(l + \delta, L - \delta)$ bi-Lipschitz with respect to the second argument. We suppose that given any data $\{ \frow{1} \}$, $\{ \fcol{j} \}_{j \in [n]}$, $\{ g( \frow{1}, \fcol{j}): j \in [n] \text{ and } M(1,j) = 1 \}$, algorithm $\varphi$ can perfectly restore the function $g(\frow{1}, \cdot)$. Let $\nu := \sum_{j = 2}^n M(1,j)$ denote the number of samples observed. By Theorem \ref{thm:Kudryatsev}, there exists a $\delta$-Lipschitz function $h \in L^1[0,1] \cap C^{\infty}[0,1]$ which satisfies \begin{enumerate} \item $h( \fcol{j} ) = 0$, for all $j \in [n]$ such that $M(1,j) =1$, and \item $\left\| h \right\|_{L^2[0,1]} \geq c \frac{\delta}{\sqrt{\nu}}$. \end{enumerate} Now we consider an adversarial function $g^{\dagger}: [0,1]^2 \to \Reals$ (for the given data), which is defined as \[ g^{\dagger}(x, y) = g(x,y) + h(y)\Ind{x = \frow{1}}. \] First of all, we remark that $g^{\dagger}$ is a valid latent function which satisfies every criterion in our model (see Section \ref{sec:model}): since $h$ is continuous and $\delta$-Lipschitz, adding $h$ to the $(l+\delta, L-\delta)$ bi-Lipschitz function $g$ yields a function that is still bounded, $(l, L)$ bi-Lipschitz, and hence monotone increasing in the second argument. If the latent function $g$ were replaced with $g^{\dagger}$ by an adversary, the algorithm $\varphi$ could not recognize that from the given data, because $h( \fcol{j} ) = 0$ for all $j \in [n]$ such that $M(1,j) =1$. Therefore, $\varphi$ would still return $\gphi = g$ instead of yielding $\gphi = g^{\dagger}$ even though the true latent function is now $g^{\dagger}$.
This leads to the following lower bound, regardless of $\frow{1} \in [0,1]$: \begin{align*} \left\| \gphi_{\nu}( \frow{1}, \cdot) - g^{\dagger}(\frow{1}, \cdot) \right\|^2_{L^2[0,1]} &= \left\| g ( \frow{1}, \cdot) - g^{\dagger}(\frow{1}, \cdot) \right\|^2_{L^2[0,1]}\\ &= \left\| h \right\|^2_{L^2[0,1]} \geq \frac{c^2 \delta^2}{\nu}. \end{align*} Inserting this back into Lemma \ref{lem:MSE_L2}, we can conclude the following MSE lower bound, even if $\varphi$ is an algorithm which can perfectly estimate $g$ from a finite number of samples. Recall that $\nu$ denotes the number of observations used to estimate $\gphi$ and that it is a random variable distributed as per $Binomial(n-1, p)$. \begin{align*} MSE\left( \varphi \right) &\geq (1-p) \bbE_{\frow{1}, \fcol{-1}, \nu} \left[ \left\| \gphi_{\nu}( \frow{1}, \cdot) - g^{\dagger}(\frow{1}, \cdot) \right\|^2_{L^2[0,1]} \right]\\ &\geq (1-p) \bbE_{\fcol{-1}, \nu} \left[ \min_{\frow{1} \in [0,1]} \left\| \gphi_{\nu}( \frow{1}, \cdot) - g^{\dagger}(\frow{1}, \cdot) \right\|^2_{L^2[0,1]} \right]\\ &\geq (1-p) \bbE_{\fcol{-1}, \nu} \left[ \frac{c^2 \delta^2}{\nu} \right]\\ &\geq (1-p) \bbE_{\nu} \left[ \frac{c^2 \delta^2}{\nu} \right]\\ &\geq (1-p) \frac{c^2 \delta^2}{ \bbE_{\nu} \left[ \nu \right]} \qquad \because\text{Jensen's inequality}\\ &= (1-p) \frac{c^2 \delta^2}{ (n-1)p}. \end{align*} The lower bound essentially quantifies the uncertainty between two functions $g$ and $g^{\dagger}$ which could have generated the same data to feed algorithm $\varphi$. We have shown a lower bound for an oracle algorithm, which has access to the latent features $\frow{1}$ and $\{\fcol{j} \}_{j \in [n]: M(1,j) = 1}$ and can perfectly restore a certain latent function. Since no algorithm can outperform an oracle, this lower bound holds for any algorithm, i.e., for any algorithm $\varphi$, there exists a hard instance to estimate.
\end{proof} \subsection{Proof of Theorem \ref{thm:optimal_deconvolution}}\label{sec:noisy_LB} When the measurements are convoluted by a supersmooth additive noise (see Eq. \eqref{eqn:model_supersmooth} for the definition), it becomes exponentially harder to estimate the underlying function. We adopt the lower bound result from \citet{Fan1991} to prove our MSE lower bound, which supports this claim. For that purpose, we first remark that we can interpret a slice of the latent function, $g(\frow{1}, \cdot)$, as the (pseudo-) inverse of a cumulative distribution function $F^{(1)}$. That is to say, if $g(\frow{1}, y) = z$ for $y \in [0,1]$, we can rewrite it as $F^{(1)}(z) = y$, with the support of the distribution $F^{(1)}$ being the same as the range of $g(\frow{1}, \cdot)$. Since the latent function $g$ is bi-Lipschitz, the distribution $F^{(1)}$ is absolutely continuous and admits a probability density $f^{(1)}$. \citet{Fan1991} defined the following class of densities, parametrized by three parameters $m, B$, and $0 \leq \alpha < 1$: \[ \cC_{m, \alpha, B} = \left\{ f(x): \left| f^{(m)}(x) - f^{(m)}\left( x + \delta \right)\right| \leq B \delta^{\alpha} \right\}. \] Since our density $f^{(i)}$ is the derivative of $F^{(i)}$, it satisfies $f^{(i)}(z) \leq \frac{1}{l}$ by the inverse function theorem. Therefore, for any valid latent function $g: [0, 1]^2 \to \Reals$, $f^{(1)} = \frac{d}{dz} F^{(1)} = \frac{d}{dz} g^{-1}\left( \frow{1}, \cdot \right)$ belongs to Fan's class $\cC_{0,0, \frac{1}{l}}$. The following hardness result is excerpted from \citet{Fan1991}. We let $\nu$ denote the number of measurements corrupted by additive noise.
\begin{theorem}[\citet{Fan1991}, Theorem 4, simplified]\label{thm:deconv_hard} For any $x_0$, no estimator $\hat{T}_\nu$ can estimate $T(f) = f^{(\lambda)}(x_0)$ under the constraint $f \in \cC_{m,\alpha, B}$ faster than $O \left( \left( \log \nu \right)^{-(m+\alpha-\lambda)/\beta} \right)$, i.e., there is a universal constant $c > 0$ such that \begin{equation}\label{eqn:lower_deconvolution} \sup_{f \in \cC_{m,\alpha, B}} \bbE \left[ \left( \hat{T}_\nu - T(f) \right)^2 \right] > c \left( \log \nu \right)^{-2(m+\alpha - \lambda )/\beta}. \end{equation} \end{theorem} Since the cumulative distribution function can be considered as the anti-derivative of the density, or the derivative of ``order $-1$'' as discussed in \citet{Fan1991}, Theorem 6 and Section 4, we have for any $x \in \Reals$, \begin{equation}\label{eqn:hardness_deconvolution} \sup_{f \in \cC_{0,0, \frac{1}{l}}} \bbE \left[ ( \hat{F}_\nu (x) - F (x) )^2 \right] > c \left( \log \nu \right)^{-2 / \beta}, \end{equation} by inserting $m = 0, \alpha =0, B = \frac{1}{l}$, and $\lambda = -1$ into Eq. \eqref{eqn:lower_deconvolution}. Now we are ready to use this result to prove Theorem \ref{thm:optimal_deconvolution}. \begin{theorem}[Full version of Theorem \ref{thm:optimal_deconvolution}]\label{thm:noisy_LB} In the additive noise scenario, for any estimation algorithm $\varphi$, there exists a hard instance for which \[ MSE(\varphi) \geq \frac{(1-p) l^2 c^{3/2}}{6 \sqrt{2}} \big( \log (n-1)p \big)^{-3 / \beta}. \] \end{theorem} \begin{proof}[Proof of Theorem \ref{thm:noisy_LB}] We let $\hat{F}$ and $F$ denote the pseudo-inverses of $ \hat{g}\left( \frow{1}, \cdot \right)$ and $g\left( \frow{1}, \cdot \right)$, respectively. Since we assumed the latent function $g\left( \frow{1}, \cdot \right)$ is $(l, L)$ bi-Lipschitz for any $\frow{1} \in [0,1]$, its inverse function $F = g^{-1} \left( \frow{1}, \cdot \right)$ is continuous, monotone increasing over $[0,1]$ and $\left( \frac{1}{L}, \frac{1}{l} \right)$ bi-Lipschitz.
Therefore, we can treat $F$ as an absolutely continuous distribution function, and its derivative $f$ belongs to Fan's class $\cC_{m, \alpha, B}$ with $m = 0, \alpha = 0$, and $B = \frac{1}{l}$. Suppose that given any data $\{ \frow{1} \}$, $\{ \fcol{j} \}_{j \in [n]}$, $\{ g( \frow{1}, \fcol{j}): j \in [n] \text{ and } M(1,j) = 1 \}$, algorithm $\varphi$ returns an estimate of the latent function $\gphi_{\nu}(\frow{1}, \cdot)$. Here, $\nu := \sum_{j = 2}^n M(1,j)$ in the subscript denotes the number of samples used for the estimation of $\gphi_{\nu}$. Let $\Fphi_\nu := (\gphi_{\nu})^{-1}(\frow{1}, \cdot)$. We may assume $\gphi_{\nu}$ is nondecreasing, because $g$ is monotone increasing by the model assumption; recall also that $F$ is $\left(\frac{1}{L}, \frac{1}{l}\right)$ bi-Lipschitz since $g\left( \frow{1}, \cdot \right)$ is $(l, L)$ bi-Lipschitz. Let $z^* := \arg \max_{z \in \Reals} \bbE \left[ ( \Fphi_\nu (z) - F (z) )^2 \right]$. Then let $y^* = F(z^*)$ and $\hat{y}^*_\nu := \Fphi_\nu (z^*)$ denote the images of $z^*$ under $F$ and $\Fphi_\nu$, respectively. Note that $\Fphi_\nu$ is a random function, and hence, $\hat{y}^*_\nu$ is a random variable. Subsequently, we define $\hat{z}^*_\nu := F^{-1}(\hat{y}^*_\nu)$. Without loss of generality, we may assume $y^* \leq \hat{y}^*_\nu$, and it follows that $\hat{z}^*_\nu \geq z^*$. Then for $y \in [y^*, \hat{y}^*_\nu]$, \begin{align*} g\left( \frow{1}, y \right) - \gphi_\nu \left( \frow{1}, y \right) &\geq g\left( \frow{1}, y \right) - \gphi_\nu\left( \frow{1}, \hat{y}^*_\nu \right)\\ & = g\left( \frow{1}, y \right) - g\left( \frow{1}, y^* \right)\\ & \geq l (y - y^*).
\end{align*} From the definition of the $L^2$ distance, it follows that \begin{align} \left\| \gphi_\nu \left( \frow{1}, \cdot \right) - g\left( \frow{1}, \cdot \right) \right\|_{L^2[0,1]}^2 &= \int_0^1 \left| \gphi_\nu \left( \frow{1}, y \right) - g\left( \frow{1}, y \right) \right|^2 dy \nonumber\\ &\geq \int_{y^*}^{\hat{y}^*_\nu} \left| \gphi_\nu \left( \frow{1}, y \right) - g\left( \frow{1}, y \right) \right|^2 dy \nonumber\\ &\geq \int_{y^*}^{\hat{y}^*_\nu} l^2 \left| y - y^* \right|^2 dy \nonumber\\ &= \frac{l^2}{3} \left| \hat{y}^*_\nu - y^* \right|^3 \nonumber\\ &= \frac{l^2}{3} \left| \Fphi_{\nu}(z^*) - F(z^*) \right|^3. \label{eqn:L2_LB} \end{align} Recall from Lemma \ref{lem:MSE_L2} that for any algorithm $\varphi: Z \mapsto \Aphi$, \begin{align*} MSE\left( \varphi \right) &\geq (1-p) \bbE_{\frow{1}, \fcol{-1}, \nu} \left[ \left\| \gphi_{\nu}( \frow{1}, \cdot) - g(\frow{1}, \cdot) \right\|^2_{L^2[0,1]} \right], \end{align*} where $\nu \sim Binomial(n-1, p)$ and $\fcol{-1}$ denotes $\{ \fcol{j}: j \in [n], j \neq 1\}$. If we restrict our latent function to take the form $g\left( \frow{i}, \fcol{j} \right) = g_2 ( \fcol{j})$ for some $g_2: [0,1] \to \Reals$, then we can remove the expectation with respect to $\frow{1}$. From Eq. \eqref{eqn:L2_LB}, it follows that \begin{align*} MSE\left( \varphi \right) &\geq (1-p) \bbE_{\fcol{-1}, \nu} \left[ \frac{l^2}{3} \left| \Fphi_{\nu}(z^*) - F(z^*) \right|^3 \right]\\ &\geq \frac{(1-p) l^2}{3} \bbE_{\nu} \left[ \bbE_{\fcol{-1}} \left[ \left| \Fphi_{\nu}(z^*) - F(z^*) \right|^2 \right]^{3/2} \right] &\because \text{Jensen's inequality} \end{align*} By Theorem \ref{thm:deconv_hard}--more precisely, by Eq.
\eqref{eqn:hardness_deconvolution}--for any $\nu$, there exists a latent function $g_2$ (and a corresponding $f \in \cC_{0,0,\frac{1}{l}}$) such that for any oracle algorithm $\varphi$, \[\bbE_{\fcol{-1}} \left[ \left| \Fphi_{\nu}(z^*) - F(z^*) \right|^2 \right] \geq \frac{c}{2} \left( \log \nu \right)^{-2 / \beta}.\] All in all, there exists a hard instance of the latent function $g$ such that \begin{align*} MSE\left( \varphi \right) &\geq \frac{(1-p) l^2}{3} \bbE_{\nu} \left[ \left( \frac{c}{2} \left( \log \nu \right)^{-2 / \beta} \right)^{3/2} \right]\\ &= \frac{(1-p) l^2 c^{3/2}}{6 \sqrt{2}} \bbE_{\nu} \left[ \left( \log \nu \right)^{-3 / \beta} \right]\\ &\geq \frac{(1-p) l^2 c^{3/2}}{6 \sqrt{2}} \big( \log \bbE_{\nu} \left[ \nu \right] \big)^{-3 / \beta} &\because \text{Jensen's inequality}\\ &= \frac{(1-p) l^2 c^{3/2}}{6 \sqrt{2}} \big( \log (n-1)p \big)^{-3 / \beta}. \end{align*} We can apply Jensen's inequality because $\left( \log x \right)^{-3/\beta}$ is convex when $x > 1$, for any $\beta > 0$. \end{proof} \section{Algorithm}\label{sec:algorithm} The primary purpose of the algorithm presented here is to estimate $A(i,j)$ at the entries $(i,j)$ with $M(i,j) = 0$, minimizing the mean squared error defined in Eq. \eqref{eqn:MSE}. However, it can also be used to denoise $Z(i,j)$ to reveal the true $A(i,j)$ even for $(i,j)$ with $M(i,j) = 1$. \subsection{Generic Algorithm}\label{sec:algorithm_generic} The algorithm consists of three steps: \begin{enumerate} \item Estimate the feature $\fcol{j}$ of column $j$, which is equal to the quantile of column $j$ in our model; \item Estimate the distribution $F^{(i)} = g^{-1}_{x=\frow{i}}$ on row $i$, which is the inverse of the latent function $g \left( \theta_{row}^{(i)}, \cdot \right)$, i.e., of $g$ with its first coordinate fixed; \item Plug the estimated column feature $\hat{\theta}_{col}(j)$ into the estimated quantile function $\hat{g}^{(i)} = \left(\hat{F}^{(i)}\right)^{-1}$, associated with the empirically estimated CDF.
\end{enumerate} As a result, we obtain $\hat{A}(i,j) = \hat{g}^{(i)}\left( \hat{q}(j) \right)$ from the algorithm. The complexity involved in steps 1 and 2 may vary depending on the noise model assumed. We present an adaptation for the simplest scenario without noise (Section \ref{sec:alg_noiseless}) first, and then describe another general-purpose variant which adopts smoothing and deconvolution ideas to overcome the noise (Section \ref{sec:alg_noisy}). \subsection{Intuition} From the model assumption, the latent function with the first coordinate fixed at $x$ is continuous and monotone increasing. Therefore, $g(x, \cdot)$ is invertible for every $x \in [0,1]$. For each $i \in [m]$, we consider the hidden latent function restricted to $x = \frow{i}$ as a quantile function, the inverse of the cumulative distribution function along row $i$ (see Appendix \ref{appx:distribution}, Definitions \ref{defn:CDF} and \ref{defn:Quantile} for the definitions of the CDF and the quantile function). \subsection{Adaptations to 3 Noise Scenarios}\label{sec:alg_adapt} \subsubsection{Notation} To begin with, we introduce some notation to be used in the algorithm description and in the analysis. First, we let $\cB_i$ denote the set of column indices for which $Z(i,j)$ is observed (similarly, $\cB^j$ denotes the corresponding set of row indices), i.e., \[ \cB_i = \{ j' \in [n]: M(i,j') = 1 \},\qquad \cB^j = \{ i' \in [m]: M(i',j) = 1 \}. \] With the aid of the indicator and Heaviside step functions (see Appendix \ref{sec:notation}, Eqs. \eqref{eqn:Indicator}, \eqref{eqn:Heaviside} for their definitions), we can define useful auxiliary variables. \begin{align} R(i,j) &= \Ind{M(i,j) = 1},\qquad\qquad \forall (i,j) \in [m] \times [n],\\ Q_i(j_1, j_2) &= H\left[ Z(i, j_1) - Z(i, j_2) \right],\quad\forall i \in [m], \forall j_1, j_2 \in [n].
\end{align} Note that $\sum_{j_2 = 1}^n Q_i(j_1, j_2)$ is the number of entries $Z(i, j)$ in row $i$ whose value is smaller than $Z(i, j_1)$, with $Z(i, j_1)$ itself counted with weight $\frac{1}{2}$. \subsubsection{Noiseless scenario}\label{sec:alg_noiseless} In short, we assume $N(i,j) = 0$ for all $(i,j) \in [m] \times [n]$; hence, $Z = M \circ A$. \paragraph{Estimating the column feature $\fcol{j}$.} Given $Z \in \Reals^{m \times n}$ and $i \in [m]$, we can define the following quantile estimate for $j \in \cB_i$. \begin{equation}\label{eqn:quantile_rowwise} \hat{q}_i(j) = \frac{\sum_{j'=1}^n R(i,j')Q_i(j, j') }{\sum_{j' = 1}^n R(i,j')}. \end{equation} Note that this is the average of $\sum_{j' = 1}^n R(i,j')$ independent random variables, where the independence comes from the Aldous-Hoover theorem. Therefore, it concentrates around its expectation with an exponentially decaying probability tail bound (e.g., by the Chernoff bound; see Lemma \ref{lem:Chernoff}). We can estimate the column feature, or the relative position, of column $j$ by further averaging $\hat{q}_{i'}(j)$ over the rows $i' \in \cB^j$: \begin{equation}\label{eqn:quantile_est} \hat{q}(j) = \frac{\sum_{i'=1}^m R(i',j)\,\hat{q}_{i'}(j) }{\sum_{i' = 1}^m R(i',j)}. \end{equation} \paragraph{Estimating the distribution $F^{(i)} = g^{-1}_{x=\frow{i}}$.} The CDF on row $i$, which will be denoted by $F^{(i)} = g^{-1}_{x=\frow{i}}$, will be estimated from the empirical samples $\left\{ Z(i, j) : j \in \cB_i \right\}$ when there is no noise in the model ($N = 0$). Given $Z \in \Reals^{m \times n}$ and $i \in [m]$, we can define the following empirical distribution function for all $z \in \Reals$, \begin{equation}\label{eqn:ECDF_noiseless} \breve{F}^{(i)} (z) = \frac{\sum_{j = 1}^n R(i,j) \Ind{Z(i,j) \leq z }}{\sum_{j =1}^n R(i,j)}. \end{equation} By the Glivenko--Cantelli theorem, this ECDF is a consistent estimator of the distribution.
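The noiseless pipeline can be sketched in a few lines (a minimal illustration of ours, not the paper's code; \texttt{np.nan} plays the role of the masking symbol $\star$): row-wise rank estimates as in Eq. \eqref{eqn:quantile_rowwise} are averaged over observed rows as in Eq. \eqref{eqn:quantile_est}, and a missing entry $(i,j)$ is predicted by evaluating the empirical quantile function of row $i$ at $\hat{q}(j)$.

```python
import numpy as np

def heaviside(d):
    # H(d) = 1 if d > 0, 1/2 if d = 0, 0 if d < 0
    return (d > 0) + 0.5 * (d == 0)

def estimate_missing(Z):
    """Noiseless pipeline: Z holds A(i, j) where observed, np.nan elsewhere."""
    m, n = Z.shape
    obs = ~np.isnan(Z)
    q_sum, cnt = np.zeros(n), np.zeros(n)
    for i in range(m):
        cols = np.flatnonzero(obs[i])
        if cols.size == 0:
            continue
        vals = Z[i, cols]
        # rank of Z(i, j) among row i's observations (self counted as 1/2)
        q_i = heaviside(vals[:, None] - vals[None, :]).mean(axis=1)
        q_sum[cols] += q_i
        cnt[cols] += 1
    # average the row-wise ranks over the rows where column j is observed
    q_hat = np.divide(q_sum, cnt, out=np.full(n, 0.5), where=cnt > 0)
    A_hat = np.array(Z)
    for i in range(m):
        if not obs[i].any():
            continue
        vals = Z[i, obs[i]]
        miss = np.flatnonzero(~obs[i])
        # plug q_hat(j) into the empirical quantile function of row i
        A_hat[i, miss] = np.quantile(vals, np.clip(q_hat[miss], 0.0, 1.0))
    return A_hat
```

On a toy monotone instance such as $g(x, y) = 2x + y$, this recovers the missing entries with small average error.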
Moreover, the Dvoretzky--Kiefer--Wolfowitz inequality (Appendix \ref{appx:distribution}, Lemma \ref{lem:DKW}) provides an explicit, exponentially decaying tail bound. \subsubsection{General Version of the Algorithm}\label{sec:alg_noisy} The estimates described in the previous subsection are no longer reliable when the observations are noisy ($N(i,j) \neq 0$). For column feature (quantile) estimation, we take a simple approach of smoothing over rows, based on the universal monotonicity in the second coordinate of our model. As a result, the effect of the additive noise terms is suppressed by smoothing. We estimate the distribution with deconvolution techniques. Whether the noise distribution is known or must itself be estimated, we can achieve a consistent estimator of the signal distributions, i.e., the latent function can be reconstructed. \paragraph{Estimating the column feature $\fcol{j}$.} Because $g(x, \cdot)$ is nondecreasing regardless of the specific value of $x$, its marginal $g_{marg}(y) := \int_0^1 g(x,y) dx$ is also nondecreasing. We first marginalize the data matrix by taking the average over the row index: \begin{equation}\label{eqn:Z_marg} Z_{marg}(j) = \frac{ \sum_{i=1}^m R(i,j) Z(i,j)}{\sum_{i=1}^m R(i,j)}. \end{equation} Then we estimate the column feature of $j$ with \begin{equation}\label{eqn:estimate_marg} \hat{q}_{marg}(j) = \frac{1}{n}\sum_{j' = 1}^n Q_{marg}(j, j'), \end{equation} where \begin{equation}\label{eqn:Q_marg} Q_{marg}(j, j') = H \left( Z_{marg}(j) - Z_{marg}(j') \right). \end{equation} \paragraph{Estimating the distribution $F^{(i)} = g^{-1}_{x=\frow{i}}$.} When there is nontrivial noise, the empirical CDF obtained from $\left\{ Z(i, j) : j \in \cB_i \right\}$ needs to be ``de-noised.'' For that purpose, we estimate the density with a kernel $K$ deconvoluted by the noise density; see Appendix \ref{sec:kernel} for more detail.
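A numerical sketch of this deconvolution step follows (our illustration, under explicit assumptions: Gaussian noise, so $\phi_N(s) = e^{-\sigma^2 s^2/2}$ with $\beta = 2$ and $\gamma = \sigma^2/2$, and the common kernel with characteristic function $\phi_K(t) = (1-t^2)^3$ on $[-1,1]$). For efficiency, the estimate is evaluated directly in the Fourier domain, which is equivalent to averaging the deconvolution kernel $L$ over the samples after the substitution $s = t/h$:

```python
import numpy as np

def deconv_density(w, samples, sigma):
    """Deconvolution kernel density estimate on the grid w, assuming
    Gaussian noise with standard deviation sigma.

    Fourier-domain form: f(w) = (1/2pi) Int e^{-isw} phi_emp(s)
    * phi_K(s h) / phi_N(s) ds, where phi_emp is the empirical
    characteristic function of the noisy samples, phi_N(s) =
    exp(-sigma^2 s^2 / 2), and phi_K(t) = (1 - t^2)^3 on [-1, 1].
    """
    k = len(samples)
    gamma, beta = sigma ** 2 / 2.0, 2.0                # Gaussian smoothness
    h = (4.0 * gamma) ** (1.0 / beta) * np.log(k) ** (-1.0 / beta)
    s = np.linspace(-1.0 / h, 1.0 / h, 2001)           # phi_K(s h) = 0 beyond
    ds = s[1] - s[0]
    phi_emp = np.exp(1j * np.outer(samples, s)).mean(axis=0)
    integrand = phi_emp * (1.0 - (s * h) ** 2) ** 3 * np.exp(sigma ** 2 * s ** 2 / 2.0)
    # Riemann sum is adequate: the integrand vanishes at the endpoints
    f = (np.exp(-1j * np.outer(w, s)) * integrand).sum(axis=1) * ds
    return np.real(f) / (2.0 * np.pi)

def deconv_cdf(w, samples, sigma):
    """CDF estimate: cumulative trapezoid of the density, clipped to [0, 1]."""
    f = deconv_density(w, samples, sigma)
    F = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(w))))
    return np.clip(F, 0.0, 1.0)
```

The factor $e^{\sigma^2 s^2/2} = 1/\phi_N(s)$ undoes the noise, while $\phi_K(sh)$ cuts off frequencies beyond $|s| = 1/h$; the bandwidth choice $h = (4\gamma)^{1/\beta}(\log |\cB_i|)^{-1/\beta}$ keeps this amplification bounded.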
We adopt results on the consistency of deconvolution kernel density estimators (see Appendix \ref{appx:deconvolution} for those results; we refer interested readers to \citet{Carroll1988, Fan1991, Delaigle2008}). Although the main idea is the same, the resulting CDF estimator takes a slightly different form depending on whether the noise distribution is known or not. If the noise distribution is unknown, it also has to be estimated by an auxiliary procedure (for example, by the module presented in Section \ref{sec:alg_aux}). \subparagraph{{\bf Case 1:} When the noise distribution is known.} Let $n_i = \left| \cB_i \right| = \sum_{j=1}^n R(i,j)$. We define the kernel smoothed CDF estimator with known noise density as follows: \begin{equation}\label{eqn:ECDF_known_noise} \tilde{F}^{(i)}(z) = \int_{-|\cB_i|^{1/6}}^{z} \tilde{f}^{(i)}(w) dw, \end{equation} where \begin{align} \tilde{f}^{(i)}(z) &= \frac{1}{h |\cB_i|} \sum_{j\in \cB_i} L \left( \frac{z- Z(i,j)}{h} \right) \text{ and } \label{eqn:known_density}\\ L(z) &= \frac{1}{2\pi} \int e^{-itz} \frac{\phi_K(t)}{\phi_N\left(\frac{t}{h}\right)}dt. \nonumber \end{align} The kernel bandwidth parameter is $h = \left(4\gamma \right)^{\frac{1}{\beta}}\left( \log |\cB_i| \right)^{-\frac{1}{\beta}}$, where $\beta$ and $\gamma$ are the smoothness parameters of the known noise distribution $N$ (see Eq. \eqref{eqn:model_supersmooth}). \subparagraph{{\bf Case 2:} When the noise distribution is unknown.} When the noise distribution is unknown, $\phi_N$ is not available, and an estimate $\hat{\phi}_N$ has to be obtained from the measurements.
We define the kernel smoothed CDF estimator with unknown noise density as follows: \begin{equation}\label{eqn:ECDF_unknown_noise} \hat{F}^{(i)}(z) = \int_{D_{min}}^{z} \hat{f}^{(i)}(w) dw, \end{equation} where \begin{align} \hat{f}^{(i)}(z) &= \frac{1}{h |\cB_i|} \sum_{j\in \cB_i} \hat{L} \left( \frac{z- Z(i,j)}{h} \right) \text{ and } \label{eqn:unknown_density}\\ \hat{L}(z) &= \frac{1}{2\pi} \int e^{-itz} \frac{\phi_K(t)}{\hat{\phi}_N\left(\frac{t}{h}\right) + \rho}dt. \label{eqn:kernel_estimated} \end{align} The kernel bandwidth parameter is $h = D \left( \log n_i \right)^{-\frac{1}{\beta}}$ for some $D > \left(4\gamma\right)^{\frac{1}{\beta}}$, where $\beta$ and $\gamma$ are smoothness parameters for the noise (see Eq. \eqref{eqn:model_supersmooth}), even though the exact noise density is unknown. In this paper, we choose the ridge parameter $\rho = C_{\rho}|\cB_i|^{-7/24}$ for some constant $C_{\rho} > 0$. If $D_{min} := \inf_{x, y \in [0,1]} g(x,y)$ is not known a priori, we can replace $D_{min}$ in Eq. \eqref{eqn:ECDF_unknown_noise} with any constant $D' \leq D_{min}$, as long as it does not scale with $m, n$. In the following section, we present a procedure to estimate the noise characteristic function $\hat{\phi}_N(t)$. We show in Lemma \ref{lem:noisy_unknown_CDF} that, with the aid of this noise estimation procedure, one can obtain an estimate of the true CDF that converges uniformly with high probability. \subsubsection{Auxiliary Module for Noise Density Estimation}\label{sec:alg_aux} When the noise distribution is unknown, we need an auxiliary procedure to estimate the noise density. Here we explain an algorithm to estimate the noise characteristic function $\hat{\phi}_N(t)$.
\begin{enumerate} \item Estimate the column features of $j \in [n]$ as $\hat{q}_{marg}(j)$ defined in Eq. \eqref{eqn:estimate_marg}. \item Define a set of index triples \begin{align} \cT &= \Big\{ (i, j_1, j_2) \in [m] \times [n]^2: j_1, j_2 \in \cB_i, j_1 < j_2, \nonumber\\ &\quad\text{ and }\left| \hat{q}_{marg}(j_1) - \hat{q}_{marg}(j_2) \right| \leq \left( \frac{1}{\sqrt{2}} \left| \cB_i \right| \right)^{-2/3} \Big\}. \label{eqn:set_T} \end{align} \item Estimate $\hat{\phi}_N$ with the triples in $\cT$ in the following way: \begin{equation}\label{eqn:chN_est} \hat{\phi}_N(t) = \left| \frac{1}{\left| \cT \right|} \sum_{ (i, j_1, j_2) \in \cT} \cos \big[ t \left( Z(i, j_1) - Z(i, j_2) \right) \big] \right|^{1/2}. \end{equation} \end{enumerate} Roughly speaking, $\cT$ is the set of index triples constructed to mimic repeated measurements. \section{Discussions} Our algorithm exploits similarity between rows through distribution signatures (the moments) instead of measuring similarity through overlap between rows. As a result, we no longer require $np^2 > 1$ to certify that a sufficient overlap exists. Although the convergence rate may be slow, we obtain consistency, and the sample complexity (the fraction of observations required for recovery) can be reduced beyond the previous limit of $\frac{1}{\sqrt{n}}$. In fact, the sample complexity obtained is optimal: below it, some rows or columns become entirely unobserved. In this work, global structures such as co-monotonicity are used; in that sense, our method is `minimally' collaborative. It remains open whether similarity information can also be exploited in a local manner.
This relates to III.4 and to relaxing the co-monotonicity assumption. The idea has some similarity with histogram-based graphon estimators; however, those require strict monotonicity in degree for identifiability. Our model is slightly more restrictive in that respect, but it leaves the first variable free. Moreover, our model is not symmetric, and our method does not need an ensemble of observations; for these reasons, we believe this work can be extended further. Deconvolution and quantile estimation (ranking) both provide fruitful byproducts beyond filling in the missing entries, but they might also be the bottleneck in performance. In particular, deconvolution is known to be computationally costly, and an interesting open question is whether an alternative to deconvolution can be used. As a final remark, the idea of using the distribution (all the moments) as a similarity measure might be extendable to generic noise models, not necessarily additive ones. \section{Proof Sketch of the Main Theorems} We sketch the proofs of our main theorems in this section. \subsection{Proof of the Main Theorems}\label{sec:thm_proof} The noiseless result follows by combining Lemmas 1 and 3, the known-noise result by combining Lemmas 2 and 4, and the unknown-noise result by combining Lemmas 2 and 5. \section{Main Results}\label{sec:results} We present the main results of our work by answering Problems \ref{problem:problem_statement} and \ref{problem:problem_statement2}, respectively. We provide simple estimation algorithms that rely on a robust deconvolution method. The convergence rates for the MSE under these algorithms are contrasted with lower bound results, which primarily follow from the classical literature on function approximation and deconvolution.
\subsection{Algorithmic Upper Bounds on MSE}\label{sec:thm_upper} We build up towards our main result by considering the noise model in increasing order of difficulty: (1) {\em Noiseless}: $N(i,j) = 0$ for all $(i,j) \in [m] \times [n]$; (2) {\em Known noise}: the noise distribution is known; and (3) {\em Unknown noise}: the noise distribution is unknown and has to be estimated as well. The main result is scenario (3) with unknown noise; the other two cases help build up the solution and are presented for completeness. The following three main theorems explicitly state upper bounds on the MSE rate for each noise scenario, which turn out to be (near-) optimal in comparison with Theorems \ref{thm:optimal_noiseless} and \ref{thm:optimal_deconvolution}. We present the theorems in the language of matrix estimation; however, the proposed algorithm essentially recovers the underlying latent function, namely the graphon. \begin{theorem}[Informal version of Theorem \ref{thm:MSE_noiseless}; noiseless]\label{thm:simple_noiseless} In the noiseless scenario, there is a polynomial time algorithm $\breve{\varphi}: Z \mapsto \hat{A}$, which consistently estimates $A$ from a data matrix $Z$ with $ MSE\left( \breve{\varphi} \right) = O \left( \frac{1}{(n-1)p}\right). $ \end{theorem} \begin{theorem}[Informal version of Theorem \ref{thm:MSE_noisy_known}; known noise]\label{thm:simple_known} In the known noise scenario, there is a polynomial time algorithm $\tilde{\varphi}: Z \mapsto \hat{A}$, which consistently estimates $A$ from a data matrix $Z$ with $ MSE\left( \tilde{\varphi} \right) = O \left( \left(\log np \right)^{-\frac{2}{\beta}} \right).
$ \end{theorem} \begin{theorem}[Informal version of Theorem \ref{thm:MSE_noisy_unknown}; unknown noise]\label{thm:simple_unknown} In the unknown noise scenario, there is a polynomial time algorithm $\hat{\varphi}: Z \mapsto \hat{A}$, which consistently estimates $A$ from a data matrix $Z$ with $ MSE\left( \hat{\varphi} \right) = O \left( \left(\log np \right)^{-\frac{2}{\beta}} \right). $ \end{theorem} The full statements and the proofs of these theorems can be found in Sections \ref{sec:full_proof_noiseless}, \ref{sec:full_proof_noisy_known}, and \ref{sec:full_proof_noisy_unknown}, respectively, with corresponding adaptations of the estimation algorithm and their analysis. In a nutshell, the proposed algorithm consists of two separate procedures: estimating the column features (quantiles) of all columns, and then estimating the latent function (the inverse of the signal CDF) for all rows. We show that our proposed algorithm achieves the (near-) optimal rate of MSE in all three noise scenarios. We remark that the MSE converges to $0$ as $m, n \to \infty$ as long as $p = \omega \left( n^{-1} \right)$, regardless of the noise assumption. Even when there is nontrivial noise, our algorithm attains a vanishing MSE upper bound as long as $p = \omega(\max\{m^{-1}, n^{-1}\})$. This provides a positive answer to Problem \ref{problem:problem_statement}. To answer Problem \ref{problem:problem_statement2}, however, we require a technical condition on the aspect ratio between $m$ and $n$ when there is nontrivial noise. It is necessary to have $ \left( \log np \right)^{\frac{2}{\beta}} \ll mp \ll n$ to achieve the MSE upper bound described in Theorems \ref{thm:simple_known} and \ref{thm:simple_unknown}. This condition stems from our analysis; the proposed algorithm itself does not require it. The condition ensures that the error in function estimation dominates the error in column feature estimation in the noisy scenarios.
Note that this condition is easily satisfied in most setups, and that there is no such restriction in the noiseless scenario. \subsection{Information-theoretic Lower Bounds on MSE}\label{sec:thm_lower} To argue the lower bound on the MSE rate for any estimation procedure, we show that there exists a pair of latent functions that no algorithm can distinguish from the given data beyond a certain resolution (the lower bound). Specifically, we show that for any given data $\frow{i}$, $\fcol{j}$, $Z(i,j)$, there exist two functions $g$ and $g^{\dagger}$ which generate identical data at the sampling points, yet are significantly different.
Suppose that there is an oracle algorithm $\varphi^*$ which has access not only to $Z(i,j)$ but also to $\frow{i}$, $\fcol{j}$. Since $g\left( \frow{i}, \fcol{j} \right) = g^{\dagger} \left( \frow{i}, \fcol{j} \right)$ for all $(i,j)$ such that $M(i,j) = 1$, even the oracle cannot tell whether the data was generated by $g$ or by $g^{\dagger}$. No algorithm can outperform the oracle, and therefore the MSE cannot be smaller than the squared $L^2$ distance between $g$ and $g^{\dagger}$. The details of the argument are provided in Section \ref{sec:full_proof_lower_bounds}. \begin{theorem}[Informal version of Theorem \ref{thm:noiseless_LB}; noiseless]\label{thm:optimal_noiseless} In the noiseless scenario, for any estimation algorithm $\varphi$, there exists a hard instance for which $ MSE(\varphi) = \Omega\left( \frac{1-p}{(n-1)p} \right). $ \end{theorem} \begin{theorem}[Informal version of Theorem \ref{thm:noisy_LB}; additive noise]\label{thm:optimal_deconvolution} In the additive noise scenario, for any estimation algorithm $\varphi$, there exists a hard instance for which $ MSE(\varphi) = \Omega\left( (1-p) \big( \log (n-1)p \big)^{-3/\beta} \right). $ \end{theorem} \section{Algorithm}\label{sec:algorithm} \subsection{Generic Recipe} We use a ``generic'' recipe for estimation in all three scenarios considered in this work: noiseless, noisy with known noise distribution, and noisy with unknown noise distribution. The generic algorithm is adapted for each setup to deal with the effect of the noise. Due to space limitations, we provide details in Section \ref{ssec:algo.details} only for the scenario in which the noise distribution is unknown, which is the most challenging case.
However, details for all the scenarios as well as the accompanying analysis for each setup can be found in Appendix \ref{sec:full_proof_noiseless} (noiseless), \ref{sec:full_proof_noisy_known} (known noise), and \ref{sec:full_proof_noisy_unknown} (unknown noise). The generic algorithm for each of the three scenarios is as follows: \begin{algorithm} \label{alg:generic} {\caption{Generic recipe of the algorithm} \begin{enumerate} \item Estimate the latent feature (quantile) $\fcol{j}$ of column $j$; denote the estimate by $\hat{q}(j)$, ~$j \in [n]$. \item Estimate $F^{(i)} = g^{-1}_{x=\frow{i}}$ on row $i$, which is the inverse of the latent function $g \left( \theta_{row}^{(i)}, \cdot \right)$ with the first coordinate fixed; denote the estimate by $\hat{F}^{(i)}$, ~$i \in [m]$. \item Plug in the estimates: $\hat{A}(i,j) = \hat{g}^{(i)}(\hat{q}(j))$, ~$i \in [m], ~j \in [n]$, where $\hat{g}^{(i)} = \left(\hat{F}^{(i)}\right)^{-1}$. \end{enumerate} } \end{algorithm} We note that, by assumption, for any given $x \in [0,1]$ the latent function $g(x, \cdot): [0,1] \to \mathbb{R}$ along the second dimension is continuous and monotone increasing in our model, and hence invertible. The inverse of $g$ (for a fixed $x$), namely $g^{-1}(x, \cdot): \mathbb{R} \to [0,1]$, can be viewed as a cumulative distribution function for a certain distribution on $\mathbb{R}$. In short, for each row $i \in [m]$, we can consider the latent function restricted to $x = \frow{i}$, i.e., $g(\frow{i}, \cdot)$, as the inverse of the cumulative distribution function of the signal along row $i$. The estimation of $F^{(i)}$ changes depending on whether the setting is noiseless or noisy with known or unknown noise distribution. \subsection{Details}\label{ssec:algo.details} We describe details of the steps outlined in the generic algorithm above for the most challenging scenario, with unknown noise distribution. We will be brief here due to space limitations.
Further details can be found in Appendix \ref{sec:full_proof_noisy_unknown}. \subsubsection{Some notation} For $i \in [m], ~j \in [n]$, let \begin{align} \cB_i = \{ j' \in [n]: M(i,j') = 1 \} ~~\text{and}~~ \cB^j = \{ i' \in [m]: M(i',j) = 1 \}. \label{eqn:set_support.0} \end{align} Define the Heaviside step function $H: \Reals \to \left\{0, \frac{1}{2}, 1 \right\}$ using the indicator function $\mathbb{I}$ as $H(x) = \frac{1}{2} \big( \Ind{x > 0} + \Ind{ x\geq 0 } \big).$ That is, $\sum_{j_2 = 1}^n H\big( Z(i, j_1) - Z(i, j_2) \big)$ is the number of entries $Z(i, j)$ in row $i$ whose value is smaller than $Z(i, j_1)$, with entries equal to $Z(i, j_1)$ (including itself) counted with weight $\frac{1}{2}$. \subsubsection{Step 1: Estimating $\fcol{j}$ by $\hat{q}_{marg}(j)$, ~$j \in [n]$.} Given $Z \in \Reals^{m \times n}$, define $Z_{\marg}$ as the column average of observed data. That is, $Z_{\marg}(j) = \frac{ \sum_{i=1}^m M(i,j) Z(i,j)}{\sum_{i=1}^m M(i,j)}$ if $\cB^j \neq \emptyset$. If $\cB^j = \emptyset$, we let $Z_{\marg}(j) = \frac{1}{2}$ by default. Then, for $j \in [n]$ let \begin{align}\label{eqn:estimate_marg.0} \hat{q}_{\marg}(j) & = \frac{1}{n}\sum_{j' = 1}^n H \left( Z_{\marg}(j) - Z_{\marg}(j') \right). \end{align} \subsubsection{Step 2: Estimating $F^{(i)} = g^{-1}_{x=\frow{i}}$ by $\hat{F}^{(i)}$, ~$i \in [m]$.} Each entry in row $i$ can be viewed as a sum of two independent random variables: the first is $g(\frow{i}, \fcol{j})$, with the randomness induced by the column parameter $\fcol{j}$, which is sampled uniformly from $[0,1]$; the second is the additive noise. Therefore, the empirical CDF of the observations gives a good estimate of the distribution of the sum of these two random variables. However, our interest is in recovering the distribution of the first one.
If we knew the distribution of the second random variable, we could remove the effect of the noise with a deconvolution kernel estimator. Put another way, we wish to recover the distribution of a random variable $X$, but we observe samples of $Z = X + N$ instead of $X$, and we do not know the distribution of $N$. Due to independence, we know that $\phi_Z(t) = \phi_X(t) \phi_N(t)$ for all $t \in \Reals$, where $\phi_Z, \phi_X, \phi_N$ denote the characteristic functions of the random variables $Z, X$ and $N$, respectively. To overcome the challenge of the unknown noise distribution, we first estimate the noise characteristic function and then estimate the CDF using kernel deconvolution, with an additional ridge parameter to avoid division by zero; this is known as the deconvolution kernel density estimator in the literature. We adapt prior results of \citet{Carroll1988, Fan1991, Delaigle2008} to our setting. In particular, to estimate the noise distribution in the prior setting, it is assumed that for a given {\em fixed} instance of $X$ we have multiple noisy observations, e.g., $X + N_1, \dots, X + N_k$ with $k$ large enough. In our setting, we effectively have {\em one} sample per instance of $X$, so it is not straightforward to estimate the noise distribution. We overcome this challenge as follows (further details can be found in Appendix \ref{appx:deconvolution}). \paragraph{Noise Density Estimation.} We explain how to produce an estimate $\hat{\phi}_{N}(t)$ of the noise characteristic function using pairs of observations from rows $i \in [m]$. To begin with, suppose that we could repeatedly observe the same instance $X_i$ of the target random variable up to independent additive noise, i.e., $Z_{ij} = X_i + N_{ij}$ with $N_{ij}$ independent.
Although we do not know the value of $X_i$, the difference between two observed entries equals the difference between two independent noise instances: $Z_{i1} - Z_{i2} = \left( X_i + N_{i 1} \right) - \left( X_i + N_{i2}\right) = N_{i1} - N_{i2}$. Assuming the noise distribution is symmetric, $N_{i1} - N_{i2}$ has the same distribution as $N_{i1} + N_{i2}$. Therefore, $\phi_{N_{i1} - N_{i2}}(t) = \phi_N(t)^2$. From the symmetry of $N$, we know that $\phi_N(t)$ is real-valued; moreover, it is positive because $N$ is supersmooth. Therefore, we can estimate $\phi_N(t)$ by taking the square root of the absolute value of the estimate $\hat{\phi}_{N_1 - N_2}(t)$: $$ \hat{\phi}_N(t) = \hat{\phi}_{N_1 - N_2}(t)^{\frac{1}{2}} = \left| \frac{1}{n} \sum_{i=1}^n \cos \left[ t\left(N_{i1} - N_{i2}\right) \right] \right|^{\frac{1}{2}}. $$ However, such repeated measurements are not available in our setting, because we have {\em at most} one measurement for a given index $(i, j)$. Despite this challenge, we can still hope to obtain {\em almost} repeated samples from observations in a given row, if we choose columns $j_1, j_2 \in [n]$ that have {\em very} similar features $\fcol{j_1} \approx \fcol{j_2}$ so that \begin{align*} Z(i,j_1) - Z(i,j_2) &= \underbrace{ \left[ A(i, j_1) - A(i,j_2) \right] }_{\approx 0,~ \because \fcol{j_1} \approx \fcol{j_2} } + \left[ N(i,j_1) - N(i,j_2) \right] \approx N(i,j_1) - N(i,j_2). \end{align*} This intuition leads to the following procedure. For each $i \in [m]$, we produce a separate estimate $\hat{\phi}_{N, i}$ using data only from the rows $i' \in [m] \setminus \{i\}$. \begin{enumerate} \item Let $\cT := \big\{ (i, j_1, j_2) \in [m] \times [n]^2: M(i,j_1) = M(i,j_2) =1 \text{ and } \hat{q}_{marg}(j_1) \approx \hat{q}_{marg}(j_2) \big\}$. \item For $i \in [m]$, define $\cT_i$ as $\cT_i := \Big\{ (i',j_1, j_2) \in \cT: i' \neq i\Big\}$.
\item For $i \in [m]$, estimate $\hat{\phi}_{N, i}(t) = \left| \frac{1}{\left| \cT_i \right|} \sum_{ \left(i', j_1, j_2 \right) \in \cT_i} \cos \Big[ t \left( Z(i', j_1) - Z(i', j_2) \right) \Big] \right|^{1/2}$. \end{enumerate} Intuitively, $\cT$ is the set of index triples constructed to imitate repeated measurements; see Algorithm \ref{alg:setT} in Appendix \ref{sec:full_proof_noisy_unknown} for its construction. For each row $i$, we estimate the noise characteristic function $\hat{\phi}_{N, i}$ using $\cT_i$, a subset of $\cT$ tailored to exclude the data from row $i$. \paragraph{Estimating $\hat{F}^{(i)}$.} Recall $\cB_i = \{ j \in [n]: M(i,j) = 1 \}$ (see Eq. \eqref{eqn:set_support.0}). We define the kernel smoothed CDF estimator with unknown noise density as follows. Given constants $D_1$, $D_2$ such that $D_{1} \leq \inf_{x,y \in [0,1]} g(x,y)$ and $D_{2} \geq \sup_{x,y \in [0,1]} g(x,y)$, \begin{equation}\label{eqn:ECDF_unknown_noise} \hat{F}^{(i)}(z) = \begin{cases} \int_{D_1}^{z } \hat{f}^{(i)}(w) dw, & \text{if } z < D_2,\\ 1, & \text{if } z \geq D_2, \end{cases} \end{equation} where \begin{align*} \hat{f}^{(i)}(z) = \frac{1}{h |\cB_i|} \sum_{j\in \cB_i} \hat{L} \left( \frac{z- Z(i,j)}{h} \right) \quad\text{ and }\quad \hat{L}(z) = \frac{1}{2\pi} \int e^{-\img tz} \frac{\phi_K(t)}{\hat{\phi}_{N,i}\left(\frac{t}{h}\right) + \rho}dt. \end{align*} The kernel bandwidth parameter is $h = \left(4\gamma\right)^{\frac{1}{\beta}} \left( \log \left| \cB_i \right| \right)^{-\frac{1}{\beta}}$, where $\beta$ and $\gamma$ are smoothness parameters for the noise (see Eq. \eqref{eqn:model_supersmooth}). We choose the ridge parameter $\rho = |\cB_i|^{-7/24}$. We choose a kernel $K$ satisfying the following conditions: (i) $K$ is symmetric, i.e., $K(x) = K(-x)$ for all $x \in \Reals$; and (ii) $\phi_K$ is supported within $[-1, 1]$. More details can be found in Remark \ref{rem:kernel} in Appendix \ref{sec:full_proof_noisy_known}.
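A minimal numerical sketch of this kernel-deconvolution CDF estimator is below (illustrative, not the exact analyzed procedure: we take $\phi_K(t) = (1-t^2)^3$ on $[-1,1]$, which satisfies conditions (i) and (ii), and evaluate the Fourier integral defining $\hat{L}$ by quadrature; the function name is ours):

```python
import numpy as np

def deconv_cdf(samples, phi_N_hat, h, rho, z_grid):
    """Kernel-deconvolution CDF estimate for one row (illustrative sketch).

    samples   : observed entries {Z(i, j) : j in B_i} for a fixed row i
    phi_N_hat : callable, (estimated) noise characteristic function
    h, rho    : bandwidth and ridge parameters
    z_grid    : increasing evaluation grid, starting at (or below) D_1
    """
    t = np.linspace(-1.0, 1.0, 801)
    phi_K = (1.0 - t**2) ** 3            # symmetric kernel; phi_K supported on [-1, 1]
    denom = phi_N_hat(t / h) + rho       # the ridge keeps the denominator away from 0

    # L_hat(z) = (1 / 2 pi) int exp(-i t z) phi_K(t) / (phi_N_hat(t / h) + rho) dt,
    # evaluated at (z - Z(i, j)) / h for every grid point z and every sample
    args = (z_grid[:, None] - np.asarray(samples)[None, :]) / h
    integrand = np.exp(-1j * args[..., None] * t) * (phi_K / denom)
    # trapezoid rule; the endpoint terms vanish since phi_K(-1) = phi_K(1) = 0
    L_vals = np.real(integrand.sum(axis=-1)) * (t[1] - t[0]) / (2.0 * np.pi)

    # f_hat(z) = (1 / (h |B_i|)) sum_j L_hat((z - Z(i, j)) / h), integrated to a CDF
    f_hat = L_vals.mean(axis=1) / h
    increments = 0.5 * (f_hat[1:] + f_hat[:-1]) * np.diff(z_grid)
    F_hat = np.concatenate([[0.0], np.cumsum(increments)])
    return np.clip(F_hat, 0.0, 1.0)      # a CDF must stay in [0, 1]
```

With $\hat{\phi}_{N,i} \equiv 1$ (no noise), this reduces to an ordinary kernel-smoothed empirical CDF, up to the ridge term.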
\subsubsection{Step 3: Estimating $A(i,j)$ by $\hat{A}(i,j)$, ~$i \in [m], j \in [n]$.} For each $i \in [m]$, let $\hat{g}^{(i)} = \left(\hat{F}^{(i)}\right)^{-1}$ denote the quantile function (right pseudo-inverse) associated with $\hat{F}^{(i)}$. Plugging Eq. \eqref{eqn:estimate_marg.0} into it leads to the estimate of a matrix entry: \begin{equation}\label{eqn:estimate_noisy_known} \hat{A}(i,j) = \hat{g}^{(i)}\left( \hat{q}_{marg}(j) \right). \end{equation} \section{Introduction} Deconvolution is a statistical inverse problem in which the unknown density $f_X$ of a random variable $X$ is estimated from observations of a random variable $Z$ whose density takes the form $f_Z = T(f_X)$ for some transformation $T$. For example, let the observed random variable be $Z = X + N$, with $N$ being independent, identically distributed noise; then $f_Z = f_X * f_N$, with $f_N$ being the noise density and $*$ denoting convolution. In this case, estimating $f_X$ is effectively a deconvolution. In a large body of such problems, including density deconvolution and errors-in-variables regression, the transformation $T$ is commonly assumed to be known. In the simplest scenario, we have $n$ independent observations of $Z$ from which its density is estimated, thereby leading to an estimate of $f_X = T^{-1}(f_Z)$ since $T$ is known. \citet{Fan1991} discussed how well the unknown density and its cumulative distribution function (CDF) can be estimated by nonparametric kernel methods under certain smoothness conditions imposed on the density $f_X$. This celebrated work not only addresses how to estimate the density and computes the rates of convergence, but also discusses how difficult the deconvolution problem is and how the difficulty depends on the noise characteristics, providing insights on the optimal rates of convergence and the estimators that attain them.
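The factorization of characteristic functions that underlies deconvolution, $\phi_Z(t) = \phi_X(t)\phi_N(t)$ for $Z = X + N$ with independent $X$ and $N$, is easy to verify numerically; in this small sketch, the uniform signal and Gaussian noise are illustrative choices only, not part of our model:

```python
import numpy as np

# Empirical check that characteristic functions factorize for Z = X + N:
# phi_Z(t) = phi_X(t) * phi_N(t) when X and N are independent.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=100_000)   # illustrative signal distribution
N = rng.normal(0.0, 0.3, size=100_000)    # illustrative noise distribution
Z = X + N

def ecf(samples, t):
    # empirical characteristic function: (1/n) sum_k exp(i t s_k)
    return np.exp(1j * np.outer(t, samples)).mean(axis=1)

t = np.linspace(-3.0, 3.0, 13)
phi_Z = ecf(Z, t)
phi_product = ecf(X, t) * np.exp(-0.5 * (0.3 * t) ** 2)  # exact phi_N for N(0, 0.3^2)
max_gap = np.max(np.abs(phi_Z - phi_product))
```

The gap is of order $n^{-1/2}$ from the empirical characteristic functions; replacing the exact $\phi_N$ here with an estimate is precisely what the unknown-noise setting of this paper requires.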
However, the noise density $f_N$, and hence the transformation $T$, may not be known a priori in many real-world applications. To overcome this, it is often assumed that additional samples from replication or validation data are available to estimate $f_N$: for example, replicated contaminated data in the form of repeated measurements as in \citet{Delaigle2008}, or sometimes direct samples from the error distribution. Another line of work suggests estimating the scale of the error distribution, but such methods require a particular parametric model for the noise density, and in some cases restrictive smoothness assumptions on the signal distribution. In this paper, we consider a generalization of the deconvolution problem stated above that arises naturally in the context of matrix estimation. The problem of matrix estimation is as follows. We are given a partial observation of a data matrix $Z = [Z_{ij}] \in \Reals^{m \times n}$ which is generated as per the so-called {\em latent variable model}. Specifically, each row $i \in [m] = \{1,\dots, m\}$ and column $j \in [n]$ are associated with latent parameters $\frow{i}, \fcol{j} \in [0,1]$, respectively. There is also a latent function $g: [0,1]\times [0,1] \to \mathbb{R}$. The random variables $Z_{ij}$ are conditionally independent across $i,j$, given the latent features $\frow{i}, \fcol{j}$, and are generated as $Z_{ij} = g(\frow{i}, \fcol{j}) + N_{ij}$, where the $N_{ij}$ are independent, identically distributed noise random variables. The distribution of the noise random variables is unknown. We observe each $Z_{ij}$ with probability $p \in (0,1]$, independently. The goal is to recover the ``mean'' matrix $A = [A_{ij}]$ where $A_{ij} = \mathbb{E}[Z_{ij}]$ $= g(\frow{i}, \fcol{j})$. Ideally, we wish to retrieve a good estimate of $A$ with $p$ as small as possible. Now consider row $i \in [m]$ of matrix $A$.
Recovering it requires knowing $g(\frow{i}, \cdot)$ at $\cdot \in \{ \fcol{j}, ~j \in [n]\}$. Learning $g(\frow{i}, \cdot), ~\cdot \in [0,1]$ boils down to learning the distribution of the random variable $X^i = g(\frow{i}, U)$, where $U$ is uniform on $[0,1]$. That is, the matrix estimation problem amounts to learning the $m$ distributions of $X^i, ~i \in [m]$, simultaneously from their noisy samples. This resembles the setup of \cite{Delaigle2008}, but it is harder: there, one has {\em repeated} measurements, while here we have only a {\em single} measurement. To articulate this, consider $m = 1$: it is impossible to learn the distribution corresponding to $X^1 = g(\frow{1}, U)$ when the additive noise is unknown, because of the lack of {\em repeated} measurements as required in \cite{Delaigle2008}. For $m$ large enough, as we shall show, even though the above difficulty remains, we can utilize ``commonality'' between columns to create a ``noisy version'' of repeated measurements by looking across a row. This requires a robust version of the method introduced in \cite{Delaigle2008}, which is an important contribution of this work. Using this improved ``collective deconvolution'' method, we show that for the class of matrix estimation problems considered here, our efficient algorithm achieves a nearly minimax-optimal rate. To enable the ``commonality'' mentioned above, we utilize the monotonicity property of the matrix. Precisely, we assume there exists a permutation of the columns which simultaneously rearranges the entries of every row in a monotone nondecreasing manner. This assumption is similar to strong stochastic transitivity in the rank aggregation context (see \citet{SBGW16}) and to degree monotonicity in the graphon estimation context; see \citet{Bickel09} and \citet{chan2014consistent} for example. We note that our model is asymmetric, unlike a graphon, which is symmetric.
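For concreteness, one instance of this latent variable model can be sampled as follows (a hypothetical sketch: the specific $g$, the Gaussian noise, and the function name are illustrative; the estimation algorithm observes only the mask $M$ and the masked entries of $Z$):

```python
import numpy as np

def sample_model(m, n, p, g, noise_std=0.1, seed=0):
    """Draw one instance of the latent variable model Z_ij = g(th_row_i, th_col_j) + N_ij."""
    rng = np.random.default_rng(seed)
    th_row = rng.uniform(size=m)                     # latent row features
    th_col = rng.uniform(size=n)                     # latent column features
    A = g(th_row[:, None], th_col[None, :])          # mean matrix A(i, j) = E[Z(i, j)]
    Z = A + rng.normal(0.0, noise_std, size=(m, n))  # i.i.d. additive noise
    M = (rng.uniform(size=(m, n)) < p).astype(int)   # Bernoulli(p) observation mask
    return A, M, Z, th_row, th_col

# co-monotonicity: g is nondecreasing in its second argument, e.g.
g = lambda x, y: x + y**2
A, M, Z, th_row, th_col = sample_model(m=50, n=60, p=0.3, g=g)
```

Sorting the columns by $\theta_{col}$ rearranges every row of $A$ in nondecreasing order simultaneously, which is exactly the monotonicity assumption described above.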
We note here that our work has a similar flavor to so-called isotonic regression, whose goal is to estimate an unknown nondecreasing regression function that minimizes the average loss at design points. The connection is evident since deconvolution can be viewed as estimating the cumulative distribution function from samples. However, our model differs from that of isotonic regression in a significant way: our objective is also to minimize the average loss, but our model is essentially an errors-in-variables model, as in the deconvolution setup, where covariates are corrupted with noise. This poses a major obstacle to applying pooling algorithms (zeroth-order local smoothing), which are widely studied in the isotonic regression literature, to our setup. Moreover, in our setup one observes only a fraction $p$ of the $n$ noisy samples in each row, instead of accessing all of them, and has to infer the function in the unobserved region entirely from samples in other rows and the `commonality' across rows. \subsection{Related Works} Early works on the problem of density estimation under the assumption of known measurement error distribution focused on addressing how to estimate the unknown density and compute the rates of convergence of the methods for specific error distributions. These early works include \citet{Carroll1988}, \citet{devroye1989consistent}, \citet{fan1993adaptively}, \citet{mendelsohn1982deconvolution}, \citet{Stefanski1990}, \citet{stefanski1990rates}.
Among this vast literature, \citet{Fan1991} discusses how the difficulty of the deconvolution problem depends on the smoothness of the noise distribution by introducing the notions of supersmooth and ordinary smooth noise, thereby providing insight into nonparametric deconvolution. Subsequently, the problem of density estimation with unknown error density, where the error density is itself estimated from samples of the error, has been considered; see \citet{diggle1993fourier} and \citet{neumann1997effect}, for example. In particular, the setup with replicated measurements for each inherently different sample, where the errors are independent and the intrinsic signal is the same across repeated measurements, has drawn much attention. For example, \citet{jaech1985statistical} described an experimental setup where the uranium concentration is repeatedly measured for several fuel pellets, and \citet{biemer2011measurement} discusses repeated observations in a social science context, e.g., in surveys. There is also a wealth of work under this setup in medical and clinical research, for example, \citet{bland1986statistical} on lung function, \citet{dunn1989design} on a brain-related study, and \citet{eliasziw1994statistical} on physiotherapy for the knee. Further medical examples can be found in \citet{carroll2006measurement} and \citet{dunn2009statistical}. \citet{Delaigle2008} argues that even in this setting of unknown error density with repeated measurements, a modified kernel deconvolution estimator, which uses the estimated error density and a ridge parameter to avoid division by zero, achieves the same first-order properties as the original kernel deconvolution estimators of \citet{Carroll1988} and \citet{Fan1991}. Our problem of interest is closely related to, but not limited to, the problem of matrix completion, because we recover not only the matrix as a stack of numbers but also the underlying latent functions and column features.
There has been a huge amount of progress on matrix completion, especially in spectral approaches such as matrix factorization. This method is based on the observation that every matrix admits a singular value decomposition, and its goal is to recover the target matrix by estimating row and column singular vectors from the partial noisy observation. Since \citet{srebro2004generalization} suggested using low-rank matrix approximation in this context, many statistically efficient estimators based on optimization have been proposed. These works prove that $rn \log n$ samples out of $n^2$ entries suffice to impute the missing entries by matrix factorization, where $r$ is the rank of the matrix to recover; see \citet{candes2009exact}, \citet{candes2010power}, \citet{rohde2011estimation}, \citet{keshavan56matrix}, \citet{negahban2012restricted}, \citet{jain2013low}, for example. However, many of these approaches require the matrix to be of low rank ($r \ll n$) to achieve a sensible sample complexity. As \citet{GantiBalzanoWillett2015} pointed out, a simple nonlinear entrywise transformation can produce a matrix of high rank, even though there are only a few free model parameters. The latent variable model is more general and subsumes the low-rank model as the special case where the latent features are $r$-dimensional vectors and the latent function is their inner product (or a bilinear function). \citet{Chatterjee15} proposed the universal singular value thresholding (USVT) estimator, inspired by low-rank matrix approximation, and argued that USVT provides an accurate estimate for any Lipschitz latent function under the latent variable model.
However, with his analysis based on step function approximation (stochastic block model approximation), obtaining a consistent estimate for an $n \times n$ matrix requires $\Omega\left( n^{2 - \frac{2}{r+2}} \right)$ observations out of $n^2$, where $r$ stands for the dimension of the latent spaces from which the row and column latent variables are drawn. The rate of USVT is further investigated in the more recent work of \citet{xu2017rates}. In contrast, \citet{LLSS2016} suggested a similarity-based estimator for collaborative filtering and proved that it requires $\Omega\left( n^{\frac{3}{2} + \delta} \right)$ observations out of $n^2$, for any small $\delta > 0$, for consistency, as long as $r = o \left( \log n \right)$. They reported that the bottleneck in sample complexity is the overlap requirement between pairs of rows, which necessitates $np^2 \gg 1$; this bottleneck is commonly observed in neighbor-based approaches. When interpreted as a matrix completion method, the algorithm suggested in this paper avoids this restrictive overlap requirement by using distribution signatures, such as moments of the distribution (in fact, the characteristic function is used). For that purpose we additionally assume monotonicity of the latent function with respect to the column feature. We will discuss later that this monotonicity assumption is required only 1) when our goal is to estimate the matrix entries, or 2) when the noise density has to be estimated. The assumption is not necessary if we are to estimate only the distributions, or equivalently, the latent function. This flavor of monotonicity assumption is quite common in the crowdsourcing and ranking literature. For example, the Dawid-Skene model of \citet{DS79} and its generalizations (see \citet{ZLPMS15}, \citet{KhetanOh16}) assign each worker $i$ and task $j$ latent features $p_i$ and $q_j$ in the interval $[0,1]$.
Roughly, $p_i$ denotes the competence of worker $i$ and $q_j$ denotes the difficulty of task $j$. Our assumption is in fact weaker than this, because we assume monotonicity only in the column features, while this line of work assumes monotonicity in both directions. Similarly, in the literature on rank aggregation from pairwise comparisons, the Bradley-Terry-Luce model (\citet{BradleyTerry52}, \citet{Luce59}) and the Thurstone model (\citet{Thurstone27}) are the mainstream approaches. Generalizations such as the nonparametric Bradley-Terry model of \citet{Chatterjee15} and Strong Stochastic Transitivity (\citet{SBGW16}) have been suggested, but they still share monotonicity at their core. Another related field of research is graphon estimation. A graphon is a measurable function $W: [0,1]^2 \to [0,1]$, originally introduced as a limit object of the connectivity pattern in sequences of graphs, and now widely used as a generative model in the study of large networks. Suggested as a nonparametric framework for the analysis of networks, graphon estimation has gained huge interest in modern statistics. The framework relates to stochastic blockmodels (\citet{Airoldi08}, \citet{rohe2011spectral}) and degree-based models (\citet{Bickel09}, \citet{chatterjee2011random}, \citet{Bickel11}). Theory and algorithms for the consistency and rates of convergence of graphon estimation have been pursued via numerous approaches, including \citet{wolfe2013nonparametric}, \citet{AiroldiCostaChan13}, \citet{ZhangLevinaZhu15}. Recently, \citet{GaoLuZhou15} and \citet{klopp2017oracle} discussed the optimal minimax rate of convergence, but unfortunately their algorithms are not computationally tractable. Lastly, we note that estimating a monotone nondecreasing regression function is a classical topic in nonparametric statistics and has drawn many researchers' interest in its own right.
In the simplest form, one assumes that the response variables $Y_i$ and covariates $X_i$ satisfy $Y_i = f(X_i) + N_i$, $1 \leq i \leq n$, for some nondecreasing regression function $f$, where the $N_i$'s are i.i.d. noise terms. The objective of isotonic regression is to estimate a nondecreasing function $\hat{f}_n$ that minimizes the average loss at the design points. Since least-squares methods for isotonic estimation were proposed by \citet{ayer1955empirical}, \citet{vaneeden1956maximum}, \citet{grenander1956theory}, there has been an extensive effort to develop algorithms and analyze risk bounds. For example, \citet{rao1969estimation} and \citet{brunk1969estimation} established convergence in distribution at a fixed point at a rate no slower than $n^{-1/3}$. \citet{van1990estimating, van1993hellinger} established rates of convergence in probability for the least squares estimator, with $n^{-1/3}$-consistency in probability under a sub-Gaussian noise assumption. \citet{donoho1990gelfand} obtained the $n^{-1/3}$ upper bound on the mean squared error ($l_2$ risk) for i.i.d. Gaussian noise, and this assumption was weakened to the finiteness of some exponential moment by \citet{birge1993rates}. \citet{meyer2000degrees} obtained risk bounds under i.i.d. Gaussian errors based on Stein's unbiased estimate of the mean squared error, and \citet{zhang2002risk} provided more general $l_p$ risk bounds for maximum-likelihood-type isotonic estimators based on a martingale method, assuming a sufficiently large number of samples. We refer interested readers to \citet{barlow1972statistical}, \citet{barlow1972isotonic}, \citet{grenander1981abstract}, \citet{robertson1988order}, \citet{groeneboom2012information} for a more general discussion of statistical methods under order restrictions.
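As a concrete illustration of the least-squares isotonic estimators referenced above, the following is a minimal pool-adjacent-violators (PAV) sketch in Python; it is a generic textbook implementation, not code from any of the cited works.

```python
def pava(y, w=None):
    """Least-squares fit of a nondecreasing sequence to y via
    pool-adjacent-violators: repeatedly merge adjacent blocks that
    violate monotonicity, replacing them by their weighted mean."""
    w = list(w) if w is not None else [1.0] * len(y)
    level, weight, size = [], [], []
    for yi, wi in zip(y, w):
        level.append(float(yi)); weight.append(wi); size.append(1)
        # merge while the last two block means are out of order
        while len(level) > 1 and level[-2] > level[-1]:
            tot = weight[-2] + weight[-1]
            avg = (weight[-2] * level[-2] + weight[-1] * level[-1]) / tot
            level[-2:] = [avg]; weight[-2:] = [tot]
            size[-2:] = [size[-2] + size[-1]]
    fit = []
    for lv, s in zip(level, size):
        fit.extend([lv] * s)
    return fit

print(pava([1.0, 3.0, 2.0, 4.0]))  # the violating pair (3, 2) is pooled to 2.5
```

The stack-based merge makes the whole fit run in $O(n)$ time, which is why PAV remains the standard solver for the isotonic least-squares problem.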
\subsection{Our Contributions} As the main contribution of this work, as noted earlier, we present a robust extension of the works of \citet{Fan1991} and \citet{Delaigle2008} with a near optimal rate of convergence in terms of mean squared error. Ours is a neighborhood-based matrix completion method that operates on a very sparse data set. Technically, the refined use of concentration inequalities and $\varepsilon$-net arguments in the proofs may be of interest in its own right. The key technical contribution is the noise density estimation algorithm described in Section \ref{sec:full_proof_noisy_unknown} (see Algorithm \ref{alg:setT}) and its analysis. It imitates the setup of repeated measurements by detecting columns whose column features are close to each other. Our estimation algorithm (described in Section \ref{sec:algorithm_unknown_hat}) first estimates the column feature of every column by taking average values, and then estimates the noise density using the estimated column features. The regularity assumption on the latent function with respect to the column features is used in this noise density estimation step. Thereafter, the latent function, or the inverse of the signal CDF, can be restored by exploiting the estimated noise density. We also analyze the consistency and the rate of convergence of the proposed algorithm, summarized as Theorem \ref{thm:MSE_noisy_unknown} (see Theorem \ref{thm:simple_unknown} for a simplified version). A full description of the algorithm and its analysis can be found in Section \ref{sec:full_proof_noisy_unknown}. For comparison with easier noise scenarios, we also discuss the algorithm and analysis adapted to the noiseless setup (Section \ref{sec:full_proof_noiseless}) and to the noisy setup where the noise distribution is known a priori (Section \ref{sec:full_proof_noisy_known}).
We also discuss information-theoretic lower bounds on the mean squared error of matrix estimation algorithms (Section \ref{sec:full_proof_lower_bounds}). Our lower bound for the noiseless setup (Theorem \ref{thm:optimal_noiseless}) follows from the hardness of function approximation, while that for the noisy setup (Theorem \ref{thm:optimal_deconvolution}) is based on the hardness of deconvolution. Note that our lower bound on the convergence rate is not exactly the same as that of \citet{Fan1991}, perhaps due to differences in the objective of the problem and the performance metric. The algorithmic upper bounds and the information-theoretic lower bounds for the rates of convergence under three different noise scenarios are summarized in Table \ref{sample-table}. \begin{table*}[h!] \caption{Mean squared error of function estimation depending on the noise model.} \label{sample-table} \centering \begin{tabular}{lll} \toprule Noise Model & Algorithmic upper bound & Info-theoretic lower bound \\ \midrule Noiseless & $O\left( \frac{1}{(n-1)p} \right)$ & $\Omega\left( \frac{1-p}{(n-1)p} \right)$ \\ & Theorem \ref{thm:simple_noiseless} & Theorem \ref{thm:optimal_noiseless} \\ \midrule Supersmooth & $O\left( \big(\log np \big)^{-\frac{2}{\beta}} \right)$ & $\Omega\left( (1-p)\big(\log (n-1)p \big)^{-\frac{3}{\beta}} \right)$ \\ known distribution & Theorem \ref{thm:simple_known} & Theorem \ref{thm:optimal_deconvolution} \\ \midrule Supersmooth & $O\left( \left(\log np \right)^{-\frac{2}{\beta}} \right)$ & same as above \\ unknown distribution & Theorem \ref{thm:simple_unknown} & \\ \bottomrule \end{tabular} \end{table*} \subsection{Organization} The paper is organized as follows. In Section \ref{sec:statement}, we state the problem of interest and our model assumptions. In Section \ref{sec:results}, we present our main theoretical results, exhibiting the rates of convergence of our algorithm and its near optimality.
The lower bounds on MSE are stated in Theorems \ref{thm:optimal_noiseless} and \ref{thm:optimal_deconvolution}. The proofs of these two theorems can be found in Section \ref{sec:full_proof_lower_bounds}. For comparison with easier noise scenarios, we discuss the algorithm and analysis adapted to the noiseless setup (Section \ref{sec:full_proof_noiseless}) and to the noisy setup when the noise distribution is known a priori (Section \ref{sec:full_proof_noisy_known}). The full details of the analysis and proof for the most general noise scenario are deferred until Section \ref{sec:full_proof_noisy_unknown}. \section{Proof of Theorem \ref{thm:simple_noiseless}: Noiseless Scenario} \label{sec:full_proof_noiseless} In this section, we prove Theorem \ref{thm:simple_noiseless}, which establishes an upper bound on the MSE achievable in the noiseless setup. This is done by evaluating the MSE of a specific algorithm. We first describe the algorithm and then evaluate its performance in terms of MSE. \subsection{Algorithm Description} We shall use a ``generic'' recipe for estimation in all three scenarios considered in this work: noiseless, noisy with known noise distribution, and noisy with unknown noise distribution. The only change in each case is how we handle the noise. \subsubsection{Generic Description}\label{sec:alg_generic} \begin{enumerate} \item Estimate the latent feature (or quantile) $\fcol{j}$ of column $j \in [n]$. Let it be $\hat{q}(j)$. \item Estimate $F^{(i)} = g^{-1}_{x=\frow{i}}$ on row $i$, which is the inverse of the latent function $g \left( \theta_{row}^{(i)}, \cdot \right)$ with its first coordinate fixed. Let it be $\hat{F}^{(i)}$, ~$i \in [m]$. \item Estimate $\hat{g}^{(i)} = \left(\hat{F}^{(i)}\right)^{-1}$, ~$i \in [m]$. \item Plug in the estimates: $\hat{A}(i,j) = \hat{g}^{(i)}(\hat{q}(j))$, ~$i \in [m], ~j \in [n]$.
\end{enumerate} By assumption, the function $g(x, \cdot)$ is invertible for every $x \in [0,1]$ since $g(x, \cdot): [0,1] \to \mathbb{R}$ is continuous and monotonically increasing. Let the inverse (given fixed $x$) be denoted as $g^{-1}(x, \cdot): \mathbb{R} \to [0,1]$. That is, $g^{-1}(x, \cdot)$ can be viewed as a cumulative distribution function for a distribution on $\mathbb{R}$. In short, for each row $i \in [m]$, we can consider the hidden latent function restricted to $x = \frow{i}$, $g(x, \cdot)$, as the inverse of the cumulative distribution function along row $i$ (see Appendix \ref{appx:distribution}, Definitions \ref{defn:CDF} and \ref{defn:Quantile} for details). The first two steps of the algorithm will vary across scenarios to account for noise. \subsubsection{Detailed Description: Noiseless Setup} \paragraph{Notations} For $i \in [m]$, we let $\cB_i$ denote the set of column indices for which $Z(i,j)$ is observed (similarly, $\cB^j$ denotes the set of row indices for $j \in [n]$, respectively), that is \begin{align} \cB_i = \{ j' \in [n]: M(i,j') = 1 \} ~~\text{and}~~ \cB^j = \{ i' \in [m]: M(i',j) = 1 \}. \label{eqn:set_support} \end{align} Define the indicator function \begin{equation}\label{eqn:Indicator} \mathbb{I}\{\text{condition} \} = \begin{cases} 1, & \text{if ~} \text{condition} \mbox{~is~true},\\ 0, &\text{if ~} \text{condition} \mbox{~is~false.} \end{cases} \end{equation} Define the Heaviside step function $H: \Reals \to \left\{0, \frac{1}{2}, 1 \right\}$ as \begin{equation}\label{eqn:Heaviside} H(x) = \frac{1}{2} \big( \Ind{x > 0} + \Ind{ x\geq 0 } \big) = \begin{cases} 1, & \text{if }x > 0,\\ \frac{1}{2}, &\text{if } x = 0,\\ 0, &\text{if } x < 0. \end{cases} \end{equation} That is, $\sum_{j_2 = 1}^n H\big( Z(i, j_1) - Z(i, j_2) \big)$ is the number of entries $Z(i, j)$ in row $i$ whose value is smaller than $Z(i, j_1)$, while $Z(i, j_1)$ itself is counted with weight $\frac{1}{2}$. We now describe each step of the algorithm in detail. \paragraph{1.
$\hat{q}(j)$: Estimate of $\theta_{col}(j)$, ~$j \in [n]$} Given $Z \in \Reals^{m \times n}$ and $j \in [n]$, for $i \in \cB^j$ define \begin{equation}\label{eqn:quantile_rowwise} \hat{q}_i(j) = \frac{\sum_{j'=1}^n M(i,j')H\big( Z(i, j) - Z(i, j') \big) }{\sum_{j' = 1}^n M(i,j')}. \end{equation} Subsequently, define the estimate of $\theta_{col}(j)$ as \begin{equation}\label{eqn:quantile_est} \hat{q}(j) = \begin{cases} \frac{1}{2}, & \text{if } \cB^j = \emptyset, \text{~else~}\\ \hat{q}_{i^*(j)}(j), & \text{where~}i^*(j) \text{ is randomly chosen from~} \cB^j. \end{cases} \end{equation} \paragraph{2. $\breve{F}^{(i)}$: Estimate of $F^{(i)} = g^{-1}_{x=\frow{i}}$, ~$i \in [m]$} For $z \in \Reals$, define \begin{equation}\label{eqn:ECDF_noiseless} \breve{F}^{(i)} (z) = \frac{\sum_{j = 1}^n M(i,j) \Ind{Z(i,j) \leq z }}{\sum_{j =1}^n M(i,j)}. \end{equation} \paragraph{3. and 4. $\breve{A}(i,j)$: Estimate of $A(i,j)$, ~$i \in [m], ~j \in [n]$} For each $i \in [m]$, let $\breve{g}^{(i)} = \left(\breve{F}^{(i)}\right)^{-1}$ denote the quantile function (right pseudo-inverse) associated with $\breve{F}^{(i)}$. Plugging Eq. \eqref{eqn:quantile_est} into it leads to the estimate of the matrix entry: \begin{equation}\label{eqn:estimate_noiseless} \breve{A}(i,j) = \breve{g}^{(i)}\left( \hat{q}(j) \right), \quad \forall(i,j) \in [m] \times [n]. \end{equation} \subsection{Algorithm Analysis}\label{sec:analysis_noiseless} We start by establishing two key results needed to prove Theorem \ref{thm:simple_noiseless}.
To that end, note that $\hat{q}_i(j)$ is the average of $\sum_{j' = 1}^n M(i,j')$ independent random variables as per our model. Therefore, by the Chernoff bound, for each $i$, it concentrates around its expectation, which is the true parameter $\fcol{j}$ of interest. This explains the choice of \eqref{eqn:quantile_rowwise}-\eqref{eqn:quantile_est} and is summarized in Lemma \ref{lem:quantile_noiseless}. By definition, $\breve{F}^{(i)}$ is simply the empirical cumulative distribution function. Hence, by the Glivenko-Cantelli theorem, it is a consistent estimator for $F^{(i)}$. Using the Dvoretzky-Kiefer-Wolfowitz inequality (see Appendix \ref{appx:distribution}, Lemma \ref{lem:DKW}), we obtain concentration of $\breve{F}^{(i)}$ around $F^{(i)}$, summarized in Lemma \ref{lem:noiseless_CDF}. Finally, we obtain the error bound for the estimate $\breve{A}(i,j)$ in Theorem \ref{thm:tail_noiseless}, which further leads to the proof of Theorem \ref{thm:simple_noiseless}. We will use the following definitions in what follows. \begin{align*} D_{max} \equiv \sup_{x,y \in [0,1]} g(x,y) \quad &\text{~and~}\quad D_{min} \equiv \inf_{x,y \in [0,1]} g(x,y),\\ L \equiv \sup_{x,y_1 \neq y_2 \in [0,1]} \frac{g(x,y_2) - g(x,y_1)}{y_2 - y_1}~~ &\text{and}~ l \equiv \inf_{x,y_1 \neq y_2 \in [0,1]} \frac{g(x,y_2) - g(x,y_1)}{y_2 - y_1}. \end{align*} \subsubsection{Concentration of $\hat{q}(j)$ around $\theta_{col}(j)$} We state the following. \begin{lemma}\label{lem:quantile_noiseless} When there is no noise ($N=0$) in the model, for any $j \in [n]$, the quantile estimator $\hat{q}(j)$ (see Eq. \eqref{eqn:quantile_est}) concentrates around $\theta_{col}^{(j)}$ with high probability: \[ \Prob{ \left| \hat{q}(j) - \theta_{col}^{(j)} \right| \geq t } \leq 2 \exp\left( -2 \left| \cB_{i^*} \right| t^2 \right), \] where $i^*$ denotes the row index chosen in Eq. \eqref{eqn:quantile_est}.
\end{lemma} Note that when $\cB^j = \emptyset$ and $\hat{q}(j)$ is set to $\frac{1}{2}$, we take $i^*$ to be an arbitrary index and interpret $\cB_{i^*}$ as $\emptyset$, so that the bound reads $\Prob{ \left| \hat{q}(j) - \theta_{col}^{(j)} \right| \geq t } \leq 2$, which holds trivially. The proof of the above lemma can be found in Appendix \ref{proof.lem.1}. \subsubsection{Concentration of $\breve{F}^{(i)}$ around $F^{(i)}$} We state the following. \begin{lemma}[Concentration of noiseless CDF estimation]\label{lem:noiseless_CDF} When there is no noise in the model, the empirical cumulative distribution function (ECDF) $\breve{F}^{(i)}$ (Eq. \eqref{eqn:ECDF_noiseless}) uniformly concentrates around the true CDF $F^{(i)} = g^{-1}_{x=\frow{i}}$; that is, for each $i \in [m]$, \[ \Prob{\sup_{z \in \Reals} \left| \breve{F}^{(i)}(z) - F^{(i)}(z) \right| > t } \leq 2 \exp\left(-2 |\cB_i| t^2\right). \] \end{lemma} \begin{proof} The proof is a direct application of the Dvoretzky-Kiefer-Wolfowitz inequality (see Lemma \ref{lem:DKW}). \end{proof} \subsection{Completing Proof of Theorem \ref{thm:simple_noiseless}}\label{sec:proof_thm1} We complete the proof of Theorem \ref{thm:simple_noiseless} using Lemmas \ref{lem:quantile_noiseless} and \ref{lem:noiseless_CDF}. To that end, we first state an exponential tail bound on the estimation error $|\breve{A}(i, j) - A(i,j)|$ in Theorem \ref{thm:tail_noiseless}, and then use it to obtain a bound on the mean squared error (MSE), concluding the proof with Theorem \ref{thm:MSE_noiseless}.
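To make the objects in this proof concrete, the following self-contained simulation (pure Python, with an assumed toy latent function $g(x,y) = x + y$; this is an illustrative sketch, not the authors' code) implements the noiseless estimator of Eqs. \eqref{eqn:quantile_rowwise}--\eqref{eqn:estimate_noiseless} and reports its empirical MSE.

```python
import bisect
import math
import random

random.seed(0)
m, n, p = 200, 200, 0.5
g = lambda x, y: x + y          # assumed toy latent function, monotone in y
theta_row = [random.random() for _ in range(m)]
theta_col = [random.random() for _ in range(n)]
M = [[random.random() < p for _ in range(n)] for _ in range(m)]
Z = [[g(theta_row[i], theta_col[j]) if M[i][j] else None for j in range(n)]
     for i in range(m)]
# sorted observed values of each row, used for ranks and for the ECDF
rows = [sorted(Z[i][j] for j in range(n) if M[i][j]) for i in range(m)]

def q_hat(j):
    """Quantile estimate of column j: rank of Z(i*, j) in a random observed row i*."""
    support = [i for i in range(m) if M[i][j]]
    if not support:
        return 0.5
    i_star = random.choice(support)
    vals, z = rows[i_star], Z[i_star][j]
    lo, hi = bisect.bisect_left(vals, z), bisect.bisect_right(vals, z)
    return (lo + hi) / (2.0 * len(vals))    # Heaviside rank; ties get weight 1/2

def A_hat(i, q):
    """Right pseudo-inverse of the row-i ECDF, evaluated at quantile q."""
    vals = rows[i]
    k = min(len(vals) - 1, max(0, math.ceil(q * len(vals)) - 1))
    return vals[k]

qs = [q_hat(j) for j in range(n)]
sq_err, cnt = 0.0, 0
for i in range(m):
    if not rows[i]:
        continue                 # row never observed; skip it
    for j in range(n):
        sq_err += (A_hat(i, qs[j]) - g(theta_row[i], theta_col[j])) ** 2
        cnt += 1
mse = sq_err / cnt
print("empirical MSE:", round(mse, 4))   # small, on the order of 1/(np)
```

Rerunning with larger $m$, $n$, or $p$ shrinks the empirical MSE, matching the $O\big(\frac{1}{np}\big)$ behavior established in this section.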
\subsubsection{Tail Bound on $|\breve{A}(i, j) - A(i,j)|$} \begin{theorem}[Probabilistic bound: noiseless]\label{thm:tail_noiseless} For each $(i,j) \in [m] \times [n]$ and $t \geq 0$, \begin{align} &\Prob{\left|\breve{A}(i, j) - A(i,j)\right| > t} \nonumber\\ &\quad\leq 2 \exp\left( -mp \right) + 4\exp \left( -(n-1)p \left( 1 - \exp \left(- \frac{2 t^2}{9L^2} \right) \right) \right).\label{eqn:tail_noiseless} \end{align} \end{theorem} \begin{proof} Let $\breve{g}^{(i)} = \left(\breve{F}^{(i)}\right)^{-1}$ denote the quantile function (right pseudo-inverse) associated with $\breve{F}^{(i)}$. Note that $A(i,j) = g \left( \frow{i}, \fcol{j} \right)$ and $\breve{A}(i,j) = \breve{g}^{(i)}\left(\hat{q}(j) \right)$. Let $\theta^* := F^{(i)}\left( \breve{A}(i,j) \right) = F^{(i)}\left( \breve{g}^{(i)}\left(\hat{q}(j) \right) \right)$. We claim that $\left|\theta^* - \hat{q}(j) \right| \leq 2\left\| \breve{F}^{(i)} - F^{(i)} \right\|_{\infty}$. If $\breve{g}^{(i)}\left(\hat{q}(j) \right)$ is a point of continuity of $\breve{F}^{(i)}$, then by the definition of the uniform norm, $\left| \theta^* - \hat{q}(j)\right| \leq \left\| \breve{F}^{(i)} - F^{(i)} \right\|_{\infty}$. Otherwise, if $\breve{g}^{(i)}\left(\hat{q}(j) \right)$ is a jump discontinuity of $\breve{F}^{(i)}$, then for any $\delta >0$, $ \breve{F}^{(i)}\left( \breve{g}^{(i)}\left(\hat{q}(j) \right) - \delta \right) \leq \hat{q}(j) \leq \breve{F}^{(i)}\left(\breve{g}^{(i)}\left(\hat{q}(j) \right) \right) $, and since $F^{(i)}$ is assumed to be continuous, $\left\| \breve{F}^{(i)} - F^{(i)} \right\|_{\infty} \geq \frac{1}{2}\sup_y \lim_{\delta \to 0+} \left( \breve{F}^{(i)}\left(y \right) - \breve{F}^{(i)}\left(y - \delta \right) \right)$. Therefore, $\left|\theta^* - \hat{q}(j)\right| \leq 2\left\| \breve{F}^{(i)} - F^{(i)} \right\|_{\infty}$.
Since $\breve{A}(i,j) = \breve{g}^{(i)}\left(\hat{q}(j) \right)=g \left( \frow{i}, \theta^* \right)$, and $g$ is $(l, L)$-biLipschitz, \begin{align*} \left|A(i, j) - \breve{A}(i,j)\right| &= \left|g \left( \frow{i}, \fcol{j} \right) - g \left( \frow{i}, \theta^* \right)\right| \\ &\leq L \left| \fcol{j} - \theta^* \right|\\ &\leq L \left( \left| \fcol{j} - \hat{q}(j) \right| + \left| \hat{q}(j) - \theta^* \right| \right)\\ &\leq L \left( \left| \fcol{j} - \hat{q}(j) \right| + 2\left\| \breve{F}^{(i)} - F^{(i)} \right\|_{\infty}\right). \end{align*} If $\left| \fcol{j} - \hat{q}(j) \right| \leq \frac{t}{3L}$ and $\left\| \breve{F}^{(i)} - F^{(i)} \right\|_{\infty} \leq \frac{t}{3L}$, then $\left|A(i,j) - \breve{A}(i,j)\right| \leq t$. Therefore, \begin{align*} &\Prob{\left| \breve{A}(i, j) - A(i,j)\right| > t}\\ &\qquad\leq \Prob{ \left| \hat{q}(j) - \theta_{col}^{(j)} \right| > \frac{t}{3L} } + \Prob{ \sup_{z \in \Reals} \left| \breve{F}^{(i)}(z) - F^{(i)}(z) \right| > \frac{t}{3L} }\\ &\qquad\leq 2\exp \left(- \frac{2 \left| \cB_{i^*} \right| t^2}{9L^2} \right) + 2\exp\left(- \frac{2 \left| \cB_i \right| t^2}{9L^2} \right), \end{align*} where the last inequality follows from Lemmas \ref{lem:quantile_noiseless} and \ref{lem:noiseless_CDF}. Recall that $i^*$ denotes the row index chosen in the algorithm (see Eq. \eqref{eqn:quantile_est}). Note that $\left| \cB_i \right|$ is the sum of $n$ independent Bernoulli random variables with parameter $p$ under our Bernoulli model. Therefore, it takes integer values in $\{0, 1, \ldots, n \}$, following the Binomial$(n,p)$ distribution. $\left| \cB_{i^*} \right|$ follows a slightly different distribution: by the algorithm description (see Eq. \eqref{eqn:quantile_est}), $\left| \cB_{i^*} \right| = 0$ if and only if $\cB^j = \emptyset$, whose probability is $(1-p)^m$, and for $i \in \cB^j$, the event $M(i,j) = 1$ is already conditioned upon.
Therefore, \begin{align*} &\Prob{\left| \cB_{i^*}\right| = k} = \begin{cases} (1-p)^m, & \text{if } k=0,\\ \left[ 1 - (1-p)^m \right] {n-1 \choose k-1} p^{k-1}(1-p)^{n-k}, & \text{if } k \geq 1. \end{cases} \end{align*} As a last step, we will marginalize out $\left| \cB_i \right|$ and $\left| \cB_{i^*} \right|$. \begin{align} & \Prob{\left| \breve{A}(i, j) - A(i,j)\right| > t} \nonumber \\ &\qquad = \sum_{k_1, k_2} \Big[ \Prob{\left.\left| \breve{A}(i, j) - A(i,j)\right| > t \right| \left| \cB_i \right| = k_1, \left| \cB_{i^*} \right| = k_2} \nonumber\\ &\qquad\qquad \times\Prob{\left| \cB_i \right| = k_1, \left| \cB_{i^*} \right| = k_2} \Big] \nonumber\\ &\qquad \leq \sum_{k_1} 2\exp \left(- \frac{2 k_1 t^2}{9L^2} \right) \Prob{\left| \cB_i \right| = k_1} \label{eqn:cond1}\\ &\qquad\qquad + \sum_{k_2} 2\exp \left(- \frac{2 k_2 t^2}{9L^2} \right) \Prob{\left| \cB_{i^*} \right| = k_2}. \label{eqn:cond2} \end{align} We can further simplify the last two terms as follows: \begin{align*} Eq. \eqref{eqn:cond1} &= \sum_{k_1} 2\exp \left(- \frac{2 k_1 t^2}{9L^2} \right) {n \choose k_1} p^{k_1} (1-p)^{n-k_1} \nonumber\\ &= 2 \sum_{k_1} {n \choose k_1} \left[p\exp \left(- \frac{2 t^2}{9L^2} \right)\right]^{k_1} (1-p)^{n-k_1} \nonumber\\ &= 2\left[1 - p \left( 1 - \exp \left(- \frac{2 t^2}{9L^2} \right) \right) \right]^n &\because \text{binomial theorem}\\ &= 2\left[1 - \frac{np}{n} \left( 1 - \exp \left(- \frac{2 t^2}{9L^2} \right) \right) \right]^n\\ &\leq 2 \exp \left( -np \left( 1 - \exp \left(- \frac{2 t^2}{9L^2} \right) \right) \right). \end{align*} The inequality in the last line holds because $\left( 1 + \frac{a}{n} \right)^n \leq e^{a}$ for any $a \in \Reals$ and any $n \in \Nats$. In a similar manner, \begin{align*} Eq. 
\eqref{eqn:cond2} &= 2(1-p)^m + 2\left[ 1 - (1-p)^m \right] \\ &\qquad \times \sum_{k_2 =1}^n \exp \left(- \frac{2 k_2 t^2}{9L^2} \right) {n-1 \choose k_2-1} p^{k_2-1}(1-p)^{n-k_2}\\ &\leq 2(1-p)^m + 2\left[ 1 - (1-p)^m \right] \\ &\qquad \times \exp \left(- \frac{2 t^2}{9L^2} \right) \exp \left( -(n-1)p \left( 1 - \exp \left(- \frac{2 t^2}{9L^2} \right) \right) \right)\\ &\leq 2 \exp\left( -mp \right) + 2\exp \left( -(n-1)p \left( 1 - \exp \left(- \frac{2 t^2}{9L^2} \right) \right) \right). \end{align*} Putting everything together, \begin{align*} &\Prob{\left| \breve{A}(i, j) - A(i,j)\right| > t}\\ &\qquad \leq 2 \exp\left( -mp \right) + 4\exp \left( -(n-1)p \left( 1 - \exp \left(- \frac{2 t^2}{9L^2} \right) \right) \right). \end{align*} \end{proof} \subsubsection{Mean Squared Error} Let $\breve{\varphi}$ denote the estimator which maps $Z$ to $\breve{A}$. Then \begin{align} MSE\left( \breve{\varphi} \right) &=\Exp{\frac{1}{mn} \sum_{i=1}^m \sum_{j=1}^n \left( \breve{A}(i,j) - A(i,j) \right)^2} \nonumber\\ &=\frac{1}{mn} \sum_{i=1}^m \sum_{j=1}^n \Exp{\left( \breve{A}(i,j) - A(i,j) \right)^2} &\because\text{linearity}\nonumber\\ &= \Exp{\left( \breve{A}(1,1) - A(1,1) \right)^2} &\because\text{exchangeability}\nonumber\\ &= \int_0^{\infty} \Prob{\left( \breve{A}(1,1) - A(1,1) \right)^2 > t } dt &\because\text{nonnegativity}\nonumber\\ &= \int_0^{\infty} \Prob{\left| \breve{A}(1,1) - A(1,1) \right| > \sqrt{t} } dt \nonumber\\ &= \int_0^{\infty} 2u\Prob{\left| \breve{A}(1,1) - A(1,1) \right| > u } du. &\because u = \sqrt{t} \label{eqn:integration} \end{align} Now it remains to integrate the tail bounds obtained in the previous section to conclude our first main theorem. In general, we can derive the following formulae via integration by substitution: \begin{align} \int_0^{\infty} u e^{-a u^2} du &=\int_0^{\infty} \frac{1}{2a}e^{-z} dz = \left.
-\frac{1}{2a}e^{-z} \right|_0^{\infty} = \frac{1}{2a}, \label{eqn:integral}\\ \int_0^{\infty} u e^{-au} du &= \int_0^{\infty} \frac{z}{a^2} e^{-z} dz = \frac{\Gamma(2)}{a^2} = \frac{1}{a^2}. \label{eqn:Gamma2} \end{align} These formulae will be used frequently, because many of our error bounds have this form. Also, from the model assumption and the construction of the estimators, the estimation error is bounded: \[ \left| \breve{A}(i,j) - A(i,j) \right| \leq D_{max} - D_{min} \equiv D, \] where $D$ is a constant independent of $m$ and $n$. \begin{theorem}[Main theorem 1 -- full version of Theorem \ref{thm:simple_noiseless}; noiseless]\label{thm:MSE_noiseless} The mean squared error of the noiseless estimator $\breve{\varphi}$ is bounded above as follows: \begin{align*} MSE\left( \breve{\varphi} \right) &\leq \frac{18L^2 \exp\left(\frac{2}{9L^2}\right)}{(n-1)p }\\ &\quad + D^2 \left[ 2 \exp(-mp) + 4 \exp\left( -(n-1)p \left( 1 - e^{-\frac{2}{9L^2}} \right)\right) \right]. \end{align*} \end{theorem} It can be seen that as long as $mp \gg \log np$, the dominant term on the right-hand side is $ \frac{18L^2 \exp\left(\frac{2}{9L^2}\right)}{(n-1)p} $, which scales as $O\big(\frac{1}{np}\big)$. Moreover, $MSE\left( \breve{\varphi} \right) \to 0$ as long as $ p = \omega\left( \max\left\{ \frac{1}{m}, \frac{1}{n} \right\} \right)$. \begin{proof}[Proof of Theorem \ref{thm:MSE_noiseless}] We prove the MSE upper bound by integrating the probabilistic tail bound in Theorem \ref{thm:tail_noiseless}. We first observe that for $c > 0$, $1 - e^{-cu^2} \geq c e^{-c} u^2$ for $0 \leq u \leq 1$; and for $u \geq 1$, $1 - e^{-cu^2} \geq 1 - e^{-c}$. Plugging Eq. \eqref{eqn:tail_noiseless} into Eq.
\eqref{eqn:integration} leads to (with notation $c = \frac{2}{9L^2}$ below) \begin{align*} MSE\left( \breve{\varphi} \right) &= \int_0^{D} 2u \Prob{\left|A(i, j) - \breve{A}(i,j)\right| > u} du\\ &\leq \int_0^{D} 4u\exp \left(- mp\right) du \\ &\qquad + \int_0^{D} 8u\exp \left( -(n-1)p \left( 1 - \exp \left(- \frac{2 u^2}{9L^2} \right) \right) \right) du\\ &\leq 2D^2 \exp\left(-mp\right) + \int_0^1 8u \exp \left( -(n-1)p c e^{-c} u^2 \right) du\\ &\qquad + \int_1^D 8u \exp\left( -(n-1)p\left( 1 - e^{-c} \right)\right) du\\ &\leq \frac{18L^2 \exp\left(\frac{2}{9L^2}\right)}{(n-1)p }\\ &\qquad + D^2 \left[ 2 \exp(-mp) + 4 \exp\left( -(n-1)p \left( 1 - e^{-\frac{2}{9L^2}} \right)\right) \right]. \end{align*} \end{proof} \section{Sketch of the Proof of Theorem \ref{thm:simple_unknown}} Here we provide a sketch of the proof of main Theorem \ref{thm:simple_unknown}. Details can be found in Appendix \ref{sec:full_proof_noisy_unknown}. The key to establishing this result is arguing that each of the three steps of the algorithm detailed in Section \ref{ssec:algo.details} succeeds. This is what we do next. \subsection{Step 1 Works} Recall $D_1, D_2$ are some constants such that $D_{1} \leq \inf_{x,y \in [0,1]} g(x,y)$ and $D_{2} \geq \sup_{x,y \in [0,1]} g(x,y)$. We define two other constants $C_1 \equiv \frac{l^2}{2(D_{2} - D_{1})^2}$ and $C_2 \equiv \frac{l^2}{8\sigma^2}$, which depend on model parameters $l, \sigma$. We define a threshold for quantile estimation $ t_q^* \equiv \frac{4\sqrt{\pi}}{\sqrt{mp}} \left( \frac{\sqrt{e^{C_1}} + \sqrt{2}}{\sqrt{C_1}} + \frac{\sqrt{e^{C_2}} + \sqrt{2}}{\sqrt{C_2}} \right)$. Next we establish that the quantile estimates concentrate around the true column features. 
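Before stating the bound, a toy Monte Carlo illustration may help build intuition. The snippet below (not the paper's algorithm; the sample sizes are illustrative) checks that, with column features drawn i.i.d.\ from $\mathrm{Uniform}[0,1]$, the ideal rank-based quantile $\hat{q}_{*}(j) = \frac{1}{n}\sum_{j'} H(\fcol{j} - \fcol{j'})$ concentrates around $\fcol{j}$ at roughly the $1/\sqrt{n}$ rate used in the sketch:

```python
import random
import statistics

# Toy Monte Carlo check (not the paper's estimator): with column features
# drawn i.i.d. Uniform[0, 1], the ideal rank-based quantile
# q*(j) = (1/n) * sum_{j'} H(theta_j - theta_j') concentrates around
# theta_j at roughly the 1/sqrt(n) rate.  Sizes here are illustrative.
random.seed(0)

def mean_ideal_quantile_error(n, trials=200):
    """Average |q*(0) - theta_0| over independent draws of the features."""
    errs = []
    for _ in range(trials):
        theta = [random.random() for _ in range(n)]
        # Heaviside with value 1/2 at ties (ties occur with probability 0)
        q = sum(0.5 if t == theta[0] else float(t < theta[0]) for t in theta) / n
        errs.append(abs(q - theta[0]))
    return statistics.mean(errs)

err_small = mean_ideal_quantile_error(100)
err_big = mean_ideal_quantile_error(10000)
print(err_small, err_big)  # the second is roughly 10x smaller
```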
\begin{lemma}\label{lem:noisy_quantile.0} For any $t \geq 2 t_q^* = \Theta\Big(\frac{1}{\sqrt{mp}}\Big)$, \begin{align*} \Prob{ \left| \hat{q}_{\marg}(j) - \fcol{j} \right| > t } &\leq \exp\left( -\frac{n}{6}\Big( t - t_q^*\Big) \right) + \exp\left( - \frac{nt^2}{2} \right) + \exp\left( - \frac{mp}{8} \right). \end{align*} \end{lemma} \begin{proof}[Sketch] Consider an ideal estimator $\hat{q}_{*}(j) = \frac{1}{n}\sum_{j' = 1}^n H \left( \fcol{j} - \fcol{j'} \right)$, which has access to the hidden column features. Now $$ \left| \hat{q}_{\marg}(j) - \fcol{j} \right| \leq \left| \hat{q}_{\marg}(j) - \hat{q}_{*}(j) \right| + \left| \hat{q}_{*}(j) - \fcol{j} \right|.$$ Since the column parameters are uniformly distributed on $[0,1]$, one expects $$ \left| \hat{q}_{*}(j) - \fcol{j} \right| \approx \Theta\Big(\frac{1}{\sqrt{n}}\Big).$$ Therefore, to obtain an error bound of $t \geq t^*_q$ (we assume $mp \ll n$), it suffices to control $\left| \hat{q}_{\marg}(j) - \hat{q}_{*}(j) \right|$. We obtain a probabilistic tail upper bound for $\left| \hat{q}_{\marg}(j) - \hat{q}_{*}(j) \right|$ by rewriting it as a sum of indicators that is dominated by a certain binomial random variable. This leads to the claimed bound. Please see Appendix \ref{sec:quantile_noisy} for details. \end{proof} \subsection{Step 2 Works} We define thresholds $\tos$ and $\Tos$ relevant for CDF estimation in Appendix \ref{sec:conc_CDF_noisy_unknown} (cf. Eqs. \eqref{eqn:t0s}, \eqref{eqn:T0s}). In effect, they are such that $\tos = O \left( \big( \log np \big)^{-1/\beta} \right)$ and $\Tos = \tos + C \frac{\big( \log 2np \big)^{1/\beta}}{(np)^{5/24}}$ for some constant $C$. We define an event which we shall show to hold with high probability: for $i \in [m]$, $$E_{(i)} \equiv \Big\{ \frac{np}{2} \leq |\cB_i| \leq 2np \Big\}.$$ Finally, we note the ``remainder term'' $\tilde{\Psi}_{m,n,p}$, defined precisely in Eq.
\eqref{eqn:Remainder.tilde}, which turns out to be $o(1)$ under the scaling of $m, n, p$. Now we state the main result about Step 2 of the algorithm working. \begin{lemma}\label{lem:noisy_unknown_CDF.0} For any $i \in [m]$, and for any $t \geq \Tos$, \begin{align*} \Prob{ \left. \sup_{z \in [D_1, D_2]} \left| \tilde{F}^{(i)} (z) - F^{(i)}(z) \right| > t \right| E_{(i)}} \leq (2np)^{\frac{1}{6}}\exp\left( \frac{- \left( \frac{np}{2} \right)^{5/12} (t - \tos)^2 }{8 C_4^2 \left( \log (2np) \right)^{\frac{2}{\beta}} }\right) + \tilde{\Psi}_{m,n,p}. \end{align*} The constant $C_4 = \frac{ B K_{max} \left( D_{2} - D_{1} \right) }{\pi \left( 4\gamma \right)^{\frac{1}{\beta}} }$, where $B \geq 1$ is a noise model parameter (see Eq. \eqref{eqn:model_supersmooth}) and $K_{max} = \sup_t | \phi_K(t)|$. \end{lemma} \begin{proof}[Sketch] Details can be found in Appendix \ref{sec:proof_noisy_unknown_CDF}; here we provide a very succinct summary. The main idea of the proof is to decompose the desired probability into three pieces using the triangle inequality and the union bound: (1) the variance of $ \hat{F}^{(i)} (z)$ (Eq. \eqref{eqn:hat_term.1}); (2) the bias of the CDF estimator in the known-noise setup (Eq. \eqref{eqn:hat_term.2}); and (3) the discrepancy between the estimators in the known-noise and unknown-noise scenarios, i.e., $\Exp{\hat{F}^{(i)} (z)} - \Exp{\tilde{F}^{(i)} (z)}$ (letting $\tilde{F}$ denote the estimator with known $\phi_N$) (Eq. \eqref{eqn:hat_term.3}). Since $[D_1, D_2]$ is a compact set, we can obtain the desired bound by a chaining technique whenever the supremum over $[D_1, D_2]$ is considered. Controlling (1) is accomplished by applying McDiarmid's inequality; the result is stated as Lemma \ref{lem:sup_hat}. An upper bound for (2) is stated in Lemma \ref{lem:mean_difference_tilde}; its proof is based on the upper bound result of \citet{Fan1991}.
The most challenging aspect of this lemma (and of this paper) is to establish that (3) is well behaved. This requires us to identify a set of events that hold with sufficiently high probability; conditioned on those events, the desired bound holds. The set of events is listed in Appendix \ref{sec:conditioning}. A sequence of lemmas in the first three subsections of Appendix \ref{sec:proof_noisy_unknown_CDF} makes the above statement precise. All in all, when the dust settles, we obtain the desired claim of this lemma. \end{proof} \subsection{Step 3 Works} Using Lemmas \ref{lem:noisy_quantile.0} and \ref{lem:noisy_unknown_CDF.0}, we establish a probabilistic tail bound on $|\hat{A}(i,j) - A(i,j)|$ and then integrate it to obtain a bound on the mean squared error (MSE). \subsubsection{Probabilistic Tail Bound} For a given choice of parameters $t > 0$ and $L, m, n, p, t_q^*, \Tos$, we define two conditions: \begin{align}\label{eqn:technical_conditions.0} E_1 & =\Big\{ t \leq 4L t_q^* \Big\} \qquad \text{and} \qquad E_2 = \Big\{ t \leq 2L \Tos \Big\}. \end{align} \begin{theorem}\label{thm:tail_noisy_unknown.0} For each $(i,j) \in [m] \times [n]$, for any $t \geq 0$, \begin{align}\label{eq:step3.works} \Prob{\left| \hat{A}(i, j) - A(i,j)\right| > t} &\leq \exp\left( -\frac{n( t - 2L t_q^*)}{12L} \right) \Ind{E_1^c } \nonumber \\ &\quad + (2np)^{\frac{1}{6}} \exp\left( \frac{- \left( \frac{np}{2} \right)^{5/12} }{8 C_4^2 \left( \log (2np) \right)^{\frac{2}{\beta}} }(t - \tos)^2 \right) \Ind{E_2^c } \nonumber \\ &\quad + \exp\left( -\frac{n t^2}{8L^2} \right) + \Ind{E_1 } + \Ind{E_2} + {\Psi}_{m,n,p}, \end{align} where $\tos, \Tos$ and ${\Psi}_{m,n,p}$ are functions of $m, n, p$ which do not depend on $t$. \end{theorem} In the above, ${\Psi}_{m,n,p}$ is defined in Eq. \eqref{eqn:Remainder}. It can be seen that the terms in the last line of Eq. \eqref{eq:step3.works} decay to $0$ at an exponential rate as $\min(m, n) p \to \infty$, independently of $t$.
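Before the proof, a toy numerical check of its key error-propagation step may be helpful: if the quantile estimate is off by $\varepsilon_q$ and the CDF estimate is off by $\varepsilon_F$ uniformly, then the reconstructed entry is off by at most $L(\varepsilon_q + \varepsilon_F)$. The link function $g(x,y) = x + Ly$ and all magnitudes below are illustrative assumptions, not the paper's setup:

```python
# Toy check of biLipschitz error propagation.  The link g(x, y) = x + L*y
# and the error magnitudes are illustrative assumptions.
L = 2.0
x_row, theta_col = 0.3, 0.6
A_true = x_row + L * theta_col          # A(i, j) = g(x_row, theta_col)

def F(z):                                # CDF of g(x_row, Y), Y ~ Uniform[0, 1]
    return min(max((z - x_row) / L, 0.0), 1.0)

def F_inv(q):                            # its quantile function on [0, 1]
    return x_row + L * q

assert abs(F(A_true) - theta_col) < 1e-12   # F inverts g(x_row, .)

eps_q, eps_F = 0.01, 0.02
q_hat = theta_col + eps_q                # perturbed quantile estimate
# a CDF estimate shifted up by eps_F has quantile function F_inv(. - eps_F)
A_hat = F_inv(q_hat - eps_F)
err = abs(A_hat - A_true)
assert err <= L * (eps_q + eps_F) + 1e-12
print(err)
```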
\begin{proof} Let $\theta^* \equiv F^{(i)}\left( \hat{A}(i,j) \right) = F^{(i)}\left( \hat{g}^{(i)}\left(\hat{q}_{marg}(j) \right) \right) $. Now $\left|\theta^* - \hat{q}_{marg}(j)\right| \leq \left\| \hat{F}^{(i)} - F^{(i)} \right\|_{\infty}$ because $\hat{F}^{(i)}$ is continuous. Since $\hat{A}(i,j) = \hat{g}^{(i)}\left(\hat{q}_{marg}(j) \right)=g \left( \frow{i}, \theta^* \right)$, and $g$ is $(l, L)$-biLipschitz, \begin{align*} \left|\hat{A}(i,j) - A(i,j)\right| = \left|g \left( \frow{i}, \fcol{j} \right) - g \left( \frow{i}, \theta^* \right)\right| \leq L \left( \left| \fcol{j} - \hat{q}_{marg}(j) \right| + \Big\| \hat{F}^{(i)} - F^{(i)} \Big\|_{\infty}\right). \end{align*} If $\left| \fcol{j} - \hat{q}_{marg}(j) \right| \leq \frac{t}{2L}$ and $\left\| \hat{F}^{(i)} - F^{(i)} \right\|_{\infty} \leq \frac{t}{2L}$, then $\left| \hat{A}(i,j) - A(i,j)\right| \leq t$. Therefore \begin{align*} &\Prob{\left| \hat{A}(i, j) - A(i,j)\right| > t} \nonumber\\ &\qquad\leq \Prob{ \left| \hat{q}_{marg}(j) - \theta_{col}^{(j)} \right| > \frac{t}{2L} } + \Prob{ \left. \sup_{z \in \Reals} \left| \hat{F}^{(i)}(z) - F^{(i)}(z) \right| > \frac{t}{2L} \right| E_{(i)}} + \Prob{E_{(i)}^c} \end{align*} by applying the union bound. Now, we can conclude the proof by applying Lemmas \ref{lem:noisy_quantile.0} and \ref{lem:noisy_unknown_CDF.0}. \end{proof} \subsubsection{Mean Squared Error} Let $\hat{\varphi}$ denote the estimator which maps $Z$ to $\hat{A}$. As in Eq. \eqref{eqn:integration}, the mean squared error of the estimator $\hat{\varphi}$ is given as \begin{align} MSE\left( \hat{\varphi} \right) &= \int_0^{\infty} 2u\Prob{\left| \hat{A}(i,j) - A(i,j) \right| > u } du.
\label{eqn:integration2} \end{align} Define $$ c(n,p) \equiv \frac{ \left( \frac{np}{2} \right)^{5/12} }{8 C_4^2 \left( \log (2np) \right)^{\frac{2}{\beta}} }.$$ \begin{theorem}[The Full Version of Main Theorem 3]\label{thm:main.0} The mean squared error of the deconvolution kernel estimator $\hat{\varphi}$ is bounded above as follows: \begin{align*} MSE\left( \hat{\varphi} \right) &\leq 4L^2{\Tos}^2 + (4L t_q^*)^2 + 4L t_q^* \sqrt{\frac{3L\pi}{n}}\\ &\quad + 4L^2 (2np)^{\frac{1}{6}} \left[ \frac{1}{ c(n,p)} + \tos \sqrt{\frac{\pi}{c(n,p)}} \right] + \frac{8L^2}{n} + \frac{288L^2}{n^2} + {\Psi}_{m,n,p} \Big(D_2 - D_1\Big)^2. \end{align*} \end{theorem} We remark that $4L^2{\Tos}^2$ is the dominant term, which scales as $O \left( (\log np )^{-\frac{2}{\beta}} \right)$ (see Eq. \eqref{eqn:T0s} for the definition of $\Tos$). As a result, the upper bound diminishes to $0$ at the rate of $(\log np )^{-\frac{2}{\beta}}$ as $mp, np \to \infty$. \section{Discussion} We end this paper with two remarks. First, there is an exponential gap in the mean squared error between the noiseless setup and the noisy setup in which measurements are corrupted by super-smooth additive noise. The gap is natural because recovery from noisy measurements should be more difficult, but it is surprising to observe an exponential gap. We note that the exponential degradation stems from the super-smooth assumption on the noise, and we strongly believe it is possible to obtain a similar result with only a polynomial gap when the noise is ordinary smooth (i.e., the noise characteristic function has a polynomially decaying tail). Second, it is noteworthy that we do not have column features available at hand, unlike the setups in the existing literature. However, we are still able to evaluate our estimated functions at unknown points to reconstruct the matrix, and the asymptotically optimal rate is achieved.
This was possible because we are not estimating a single function, but collectively estimating a set of functions, and a form of collaboration takes place between them. If the column features (or extrinsic covariates) need not be estimated but are available from other sources, our task truly reduces to learning row-wise distributions, and we obtain the same bounds. \section{Proof of the Main Result}\label{sec:notation} First, we present five key lemmas used in the proof of the main theorems. The first two concern the consistency and concentration of quantile estimation (see Section \ref{sec:quantile_lemmas}), while the other three concern the reliability of CDF estimation (see Section \ref{sec:CDF_lemmas}). Once these five lemmas are established, the proofs of the main theorems are straightforward; combining a pair of relevant lemmas from each category yields the desired tail probability bounds (Appendix \ref{appendix:main_prob}). Integrating these tail bounds yields the main theorems (see Section \ref{sec:thm_proof} for the concise statements; their full statements and proofs are presented in Appendix \ref{appendix:main_proof}). \subsection{Recapping Some Notations} Recall that the indicator function of a subset $A$ of a set $X$ is a function $\mathbb{I}_A: X \to \{0, 1\}$ defined as \begin{equation}\label{eqn:Indicator} \mathbb{I}_A (x) = \begin{cases} 1, & \text{if }x \in A,\\ 0, &\text{if } x \not\in A. \end{cases} \end{equation} In this paper, we use the indicator function to define auxiliary random variables which identify whether a certain condition is satisfied. The fulfillment of a condition is equivalent to the membership of the outcome in a specific event (measurable set) in the language of probability theory.
Therefore, when the intensional description of an event $A$ is available and the outcome $\omega$ and the sample space $\Omega$ are obvious, we will write $\Ind{\text{description of }A}$ in lieu of $\mathbb{I}_A(\omega)$ by abuse of notation. The Heaviside step function $H: \Reals \to \left\{0, \frac{1}{2}, 1 \right\}$ is a linear combination of indicator functions, defined as \begin{equation}\label{eqn:Heaviside} H(x) = \frac{1}{2} \left[ \Ind{x > 0} + \Ind{ x\geq 0 } \right] = \begin{cases} 1, & \text{if }x > 0,\\ \frac{1}{2}, &\text{if } x = 0,\\ 0, &\text{if } x < 0. \end{cases} \end{equation} With the aid of these, we can define useful auxiliary variables. \begin{align} R(i,j) &= \Ind{M(i,j) = 1},\qquad\qquad \forall (i,j) \in [m] \times [n],\\ Q^i(j_1, j_2) &= H\left( Z(i, j_1) - Z(i, j_2) \right),\quad\forall i \in [m], \forall j_1, j_2 \in [n]. \end{align} Note that $\sum_{j_2 = 1}^n Q^i(j_1, j_2)$ is the number of entries $Z(i, j)$ in row $i$ whose value is smaller than $Z(i, j_1)$, with $Z(i, j_1)$ itself counted with weight $\frac{1}{2}$. For $i \in [m]$, we let $\cB_i$ denote the set of column indices for which $Z(i,j)$ is observed (similarly, $\cB^j$ denotes the set of row indices for $j \in [n]$, respectively), i.e., \[ \cB_i = \{ j' \in [n]: M(i,j') = 1 \},\qquad \cB^j = \{ i' \in [m]: M(i',j) = 1 \}. \] Suppose that the latent function $g$ is given. \begin{align*} D_{max} &= \sup_{x,y \in [0,1]} g(x,y) = \sup_{x \in [0,1]}g(x,1),\\ D_{min} &= \inf_{x,y \in [0,1]} g(x,y) = \inf_{x \in [0,1]}g(x,0),\\ L &= \sup_{x,y_1 \neq y_2 \in [0,1]} \frac{g(x,y_2) - g(x,y_1)}{y_2 - y_1},\\ l &= \inf_{x,y_1 \neq y_2 \in [0,1]} \frac{g(x,y_2) - g(x,y_1)}{y_2 - y_1}. \end{align*} Let $K$ denote the kernel used in density estimation (see Section \ref{sec:alg_noisy}) with finite support within $[-1, 1]$ in the Fourier domain (see Appendix \ref{sec:kernel}).
\[ K_{max}=\max_{t \in [-1,1]} \left| \phi_K(t) \right| < \infty \] denotes the maximum modulus of the kernel used. For future reference, we define two conditioning events on $M$: \begin{equation}\label{eqn:sufficient_overlap} \Erow := \left\{ |\cB_i| \geq \frac{np}{2}, \forall i\in [m]\right\},\qquad \Ecol:= \left\{|\cB^j| \geq \frac{mp}{2}, \forall j\in [n]\right\}. \end{equation} In a similar way, we define \begin{equation}\label{eqn:not_too_much_overlap} \Erowp := \left\{ |\cB_i| \leq 2np, \forall i\in [m]\right\}. \end{equation} \subsection{Key Lemmas}\label{sec:key_lemmas} \subsubsection{Estimating the Column Quantiles}\label{sec:quantile_lemmas} In this section, we present two lemmas showing that quantile estimation is consistent. Moreover, the estimates concentrate around the true values, which turn out to be the column features in our model. Lemma \ref{lem:noiseless_quantile} shows the result under the noiseless setting, while Lemma \ref{lem:noisy_quantile} ascertains consistency despite the existence of noise. In short, we can achieve consistent quantile estimates as long as $\omega\left( \max\{ m^{-1}, n^{-1}\} \right)$ observations are available along each row and column. Under uniformly random observations, the quantile estimates are consistent as long as $p = \Omega\left(\max\left\{\frac{\log n}{m}, \frac{\log m}{n}\right\} \right)$. Essentially, the difference in sample complexity corresponds to the cost of randomization in sampling. \begin{lemma}\label{lem:noiseless_quantile} When there is no noise ($N=0$) in the model, the quantile estimator $\hat{q}(j)$ (see Eq. \eqref{eqn:quantile_est}) concentrates to $\theta_{col}^{(j)}$ with high probability: \[ \Prob{ \left| \hat{q}(j) - \theta_{col}^{(j)} \right| \geq t } \leq 2 \exp\left( -\frac{2|\cB^j|^2 t^2}{\sum_{i \in \cB^j} \frac{1}{|\cB_i|}} \right). \] \end{lemma} Assuming each entry is observed with probability $p$ as in Eq.
\eqref{eq:masking}, we can achieve the following uniform and universal upper bound, which does not depend on the sizes of the sets $\cB_i$ and $\cB^j$. \begin{corollary}\label{coro:noiseless_quantile_uniform} When conditioned on $\Erow \cap \Ecol$, for any $j \in [n]$, \begin{align*} \Prob{ \left. \left| \hat{q}(j) - \theta_{col}^{(j)} \right| \geq t \right| \Erow \cap \Ecol} \leq 2 \exp \left(- \frac{mnp^2}{2} t^2 \right). \end{align*} \end{corollary} Noiseless quantile estimation relies on the concentration of a sum of i.i.d. indicator variables. However, under the influence of nontrivial noise, the indicators may no longer be reliable. Hence, we need a way to control the effect of noise. The main idea is that if we sum up multiple rows, the noise is diluted by averaging. Also, the portion of uncontrolled rows becomes vanishingly small as $m, n \to \infty$, and $\hat{q}_{marg}$ (see Eq. \eqref{eqn:estimate_marg}) concentrates around $\fcol{j}$. Throughout this paragraph, we assume zero-mean sub-Gaussian noise with parameter $\sigma$ as in Eq. \eqref{eqn:def_subG_noise}, unless otherwise stated. \begin{lemma}\label{lem:noisy_quantile} The marginal quantile estimator $\hat{q}_{marg}(j)$ concentrates to $\theta_{col}^{(j)}$ with high probability. Specifically, for any $s \geq (mp)^{-1/3}$ and $t \geq 16 s + 32 \exp\left( -c (mp)^{1/3} \right)$, \begin{align*} &\Prob{ \left| \hat{q}_{marg}(j) - \fcol{j} \right| \geq t } \leq 2\exp\left( -\frac{n t^2}{2} \right) + \exp\left(-\frac{nt}{12} \right)\\ &\qquad+ n \exp\left( -\frac{mp}{8} \right) + n \exp\left( - \frac{ns}{3} \right) + 4n \exp\left(-c (mp)s^{2}\right), \end{align*} where $c = \min \left\{\frac{l^2}{16\left( D_{max} - D_{min} \right)^2}, \frac{l^2}{64\sigma^2} \right\}$.
\end{lemma} \subsubsection{Estimating the Row CDFs}\label{sec:CDF_lemmas} We claim uniform concentration bounds on the difference between the true CDF and the estimated CDF. In other words, the uniform distance between these two objects is well controlled by the probability bounds given in Lemmas \ref{lem:noiseless_CDF}, \ref{lem:noisy_known_CDF}, and \ref{lem:noisy_unknown_CDF}. \begin{lemma}[Concentration of noiseless CDF estimation]\label{lem:noiseless_CDF} When there is no noise in the model, the empirical cumulative distribution function (ECDF) $\breve{F}^{(i)}$ (Eq. \eqref{eqn:ECDF_noiseless}) uniformly concentrates to the true CDF $F^{(i)} = g^{-1}_{x=\frow{i}}$, i.e., for each $i \in [m]$, \[ \Prob{\sup_{z \in \Reals} \left| \breve{F}^{(i)}(z) - F^{(i)}(z) \right| > t } \leq 2 e^{-2 n_i t^2}. \] \end{lemma} The lemma above states the concentration of the empirical CDF to the true CDF as the sample size grows. \begin{lemma}[Concentration of noisy CDF estimation with known noise distribution]\label{lem:noisy_known_CDF} When the additive noise in the model is supersmooth and its density is known, the kernel-smoothed ECDF $\tilde{F}^{(i)}$ defined as in Eq. \eqref{eqn:ECDF_known_noise} uniformly converges to the true CDF $F^{(i)} = g^{-1}_{x=\frow{i}}$ in probability. To be more specific, for any $t > C \left( \log n_i \right)^{-1/\beta}$, and for any $z \in [-n_i^{1/6}, n_i^{1/6}]$, \begin{align*} &\Prob{\left| \tilde{F}^{(i)} (z) - F^{(i)}(z) \right| > t}\\ & \qquad \leq 2\exp\left( \frac{- \pi^2 \left(4\gamma \right)^{\frac{2}{\beta}}n_i^{1/6} }{8B^2 K^2_{max}\left( \log n_i \right)^{\frac{2}{\beta}}} \left( t -C \left( \log n_i \right)^{-1/\beta} \right)^2 \right), \end{align*} where $n_i = \left| \cB_i \right|$, $\beta, \gamma$ are smoothness parameters of the noise as in Eq. \eqref{eqn:model_supersmooth}, and $C$ is a constant. \end{lemma} Again, we can obtain a simpler upper bound when conditioned on $\Erow$ and $\Erowp$ (see Eq.
\eqref{eqn:sufficient_overlap} and Eq. \eqref{eqn:not_too_much_overlap} for the definitions of $\Erow$ and $\Erowp$). \begin{corollary}\label{coro:noisy_CDF_uniform} When conditioned on $\Erow$ and $\Erowp$, for any $i \in [m]$, we have \begin{align*} &\Prob{ \left. \left| \tilde{F}^{(i)} (z) - F^{(i)}(z) \right| > t \right| \Erow \cap \Erowp}\\ & \quad \leq 2\exp\left( \frac{- \pi^2 \left(4\gamma \right)^{\frac{2}{\beta}}\left( \frac{np}{2} \right)^{1/6} }{8B^2 K^2_{max}\left( \log \left( 2np \right) \right)^{\frac{2}{\beta}}} \left( t -C \left( \log \frac{np}{2} \right)^{-1/\beta} \right)^2 \right). \end{align*} \end{corollary} It is surprising that even if the noise distribution is unknown, we can construct an estimate of the signal distribution that uniformly converges to the true distribution in probability. \begin{lemma}[Concentration of noisy CDF estimation with unknown noise distribution]\label{lem:noisy_unknown_CDF} \DG{Fill in} \DG{***TODO: the only missing auxiliary lemma} \end{lemma} These lemmas will be combined in the following subsections to establish the main theorems. \subsection{Probabilistic Error Bounds}\label{appendix:main_prob} \DG{The current status of analysis is as follows: To achieve vanishing MSE bounds, we require \begin{enumerate} \item noiseless: $p = \Omega\left(\frac{\log n}{m} \right)$ or $\omega(\frac{1}{m})$ samples \item known noise: $p = \Omega\left(\frac{\log n}{m} \right)$ \item unknown noise: $p = \Omega\left(\frac{\left(\log n \right)^{1+\delta}}{m} \right)$; we may require $\delta \geq 2$, which will be settled once the auxiliary lemmas for Lemma 6.7 are completely established \end{enumerate} That $\left(\log n\right)^{\delta}$ is coming from applying the union bound in noisy quantile estimation - I think this can be removed if we apply concentration inequality for the sum of independent random variables..... (time matters..) } Although the lemmas used in the proof may vary depending on the noise model, the underlying proof idea remains the same.
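As a concrete illustration of this shared structure, the sketch below runs the noiseless pipeline end to end on synthetic data: estimate a column quantile from within-row ranks, then invert a row-wise empirical CDF at that quantile. The link function $g(x,y) = x + y$, the matrix sizes, and the sampling rate are all illustrative choices, not the paper's experiments:

```python
import random

# End-to-end sketch of the shared estimator structure on synthetic
# noiseless data.  The link g(x, y) = x + y, the sizes, and the sampling
# rate p are illustrative assumptions.
random.seed(1)
m, n, p = 200, 200, 0.5
rows = [random.random() for _ in range(m)]
cols = [random.random() for _ in range(n)]
A = [[rows[i] + cols[j] for j in range(n)] for i in range(m)]
mask = [[random.random() < p for j in range(n)] for i in range(m)]

def q_hat(j):
    """Average, over rows observing column j, of the within-row rank of Z(i, j)."""
    ranks = []
    for i in range(m):
        if not mask[i][j]:
            continue
        obs = [A[i][jp] for jp in range(n) if mask[i][jp]]
        r = sum(0.5 if v == A[i][j] else float(v < A[i][j]) for v in obs)
        ranks.append(r / len(obs))
    return sum(ranks) / len(ranks)

def A_hat(i, j):
    """Right pseudo-inverse of the row-i empirical CDF, evaluated at q_hat(j)."""
    obs = sorted(A[i][jp] for jp in range(n) if mask[i][jp])
    k = min(int(q_hat(j) * len(obs)), len(obs) - 1)
    return obs[k]

mean_err = sum(abs(A_hat(i, i) - A[i][i]) for i in range(5)) / 5
print(mean_err)  # small relative to the entry scale (entries lie in [0, 2])
```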
We will let $F^{(i)}$ denote the inverse of $g\left( \frow{i}, \cdot \right)$, i.e., $F^{(i)}\left( z \right) = y$ if and only if $g\left( \frow{i}, y \right) = z$. Also, recall that we defined our estimators as \begin{align*} \breve{A}(i,j) &= \breve{g}^{(i)}\left(\hat{q}(j) \right),\\ \tilde{A}(i,j) &= \tilde{g}^{(i)}\left(\hat{q}_{marg}(j) \right),\\ \hat{A}(i,j) &= \hat{g}^{(i)}\left(\hat{q}_{marg}(j) \right), \end{align*} for all $(i,j) \in [m] \times [n]$ in Eq. \eqref{eqn:estimators}, where $\breve{g}^{(i)}, \tilde{g}^{(i)}, \hat{g}^{(i)}$ respectively denote the quantile functions associated with $\breve{F}^{(i)}, \tilde{F}^{(i)}, \hat{F}^{(i)}$. \DG{We may have to clip the noise estimates onto the interval $[-n, n]$...} \subsubsection{Noiseless} \begin{theorem}[Probabilistic bound: noiseless]\label{thm:tail_noiseless} For each $(i,j) \in [m] \times [n]$, \begin{align} &\Prob{\left|\breve{A}(i, j) - A(i,j)\right| > t} \leq 2\exp \left(- \frac{mnp^2}{18L^2} t^2 \right) + 2\exp\left(-\frac{np}{9L^2} t^2\right) \nonumber\\ &\qquad + m \exp\left( -\frac{np}{8} \right) + n \exp\left( -\frac{mp}{8} \right). \label{eqn:tail_noiseless} \end{align} \end{theorem} \begin{proof} Let $\breve{g}^{(i)} = \left(\breve{F}^{(i)}\right)^{-1}$ denote the quantile function (right pseudo-inverse) associated with $\breve{F}^{(i)}$. Note that $A(i,j) = g \left( \frow{i}, \fcol{j} \right)$ and $\breve{A}(i,j) = \breve{g}^{(i)}\left(\hat{q}(j) \right)$. Let $\theta^* := F^{(i)}\left( \breve{A}(i,j) \right) = F^{(i)}\left( \breve{g}^{(i)}\left(\hat{q}(j) \right) \right) $. We can observe that $\left|\theta^* - \hat{q}(j) \right| \leq 2\left\| \breve{F}^{(i)} - F^{(i)} \right\|_{\infty}$; at continuity points of $\breve{F}^{(i)}$, this follows directly from the definition of the uniform norm (in fact, $\left| \theta^* - \hat{q}(j)\right| \leq \left\| \breve{F}^{(i)} - F^{(i)} \right\|_{\infty}$ at such points).
When $\breve{g}^{(i)}\left(\hat{q}(j) \right)$ is a jump discontinuity of $\breve{F}^{(i)}$, we can see that for any $\delta >0$, $ \breve{F}^{(i)}\left( \breve{g}^{(i)}\left(\hat{q}(j) \right) - \delta \right) \leq \hat{q}(j) \leq \breve{F}^{(i)}\left(\breve{g}^{(i)}\left(\hat{q}(j) \right) \right) $. Since $F^{(i)}$ is assumed to be continuous, $\left\| \breve{F}^{(i)} - F^{(i)} \right\|_{\infty} \geq \frac{1}{2}\sup_y \lim_{\delta \to 0+}\left[\breve{F}^{(i)}\left(y \right) - \breve{F}^{(i)}\left(y - \delta \right)\right]$. Therefore, $\left|\theta^* - \hat{q}(j)\right| \leq 2\left\| \breve{F}^{(i)} - F^{(i)} \right\|_{\infty}$. Since $\breve{A}(i,j) = \breve{g}^{(i)}\left(\hat{q}(j) \right)=g \left( \frow{i}, \theta^* \right)$, and $g$ is $(l, L)$-biLipschitz, \begin{align*} \left|A(i, j) - \breve{A}(i,j)\right| &= \left|g \left( \frow{i}, \fcol{j} \right) - g \left( \frow{i}, \theta^* \right)\right| \\ &\leq L \left| \fcol{j} - \theta^* \right|\\ &\leq L \left( \left| \fcol{j} - \hat{q}(j) \right| + \left| \hat{q}(j) - \theta^* \right| \right)\\ &\leq L \left( \left| \fcol{j} - \hat{q}(j) \right| + 2\left\| \breve{F}^{(i)} - F^{(i)} \right\|_{\infty}\right). \end{align*} If $\left| \fcol{j} - \hat{q}(j) \right| \leq \frac{t}{3L}$ and $\left\| \breve{F}^{(i)} - F^{(i)} \right\|_{\infty} \leq \frac{t}{3L}$, then $\left|A(i,j) - \breve{A}(i,j)\right| \leq t$. We can achieve the following upper bound by applying the union bound to the contrapositive. Let $E := \Erow \cap \Ecol$. Then \begin{align*} &\Prob{\left|A(i, j) - \breve{A}(i,j)\right| > t}\\ &\quad\leq \Prob{E^c} + \Prob{ \left. \left| \hat{q}(j) - \theta_{col}^{(j)} \right| > \frac{t}{3L} \right| E }\\ &\qquad + \Prob{ \left.
\sup_{z \in \Reals} \left| \breve{F}^{(i)}(z) - F^{(i)}(z) \right| > \frac{t}{3L} \right| E}\\ &\leq 2\exp \left(- \frac{mnp^2}{18L^2} t^2 \right) + 2\exp\left(-\frac{np}{9L^2} t^2\right)\\ &\qquad + m \exp\left( -\frac{np}{8} \right) + n \exp\left( -\frac{mp}{8} \right), \end{align*} where the last inequality follows from Lemma \ref{lem:noiseless_quantile} (Corollary \ref{coro:noiseless_quantile_uniform}) and Lemma \ref{lem:noiseless_CDF}. \end{proof} \subsubsection{When noise distribution is known} We assume that $n$ is sufficiently large. Specifically, we assume the support of the parameter matrix is contained in $\left[ -\left(\frac{np}{2}\right)^{1/6}, \left(\frac{np}{2}\right)^{1/6} \right]$ so that $A(i,j) \in [-n_i^{1/6}, n_i^{1/6}]$, for all index pairs $(i,j) \in [m] \times [n]$ with high probability. This is a technical assumption to exploit the deconvolution results. We can achieve a similar probabilistic tail bound even when measurements are noisy. The proof idea is almost the same, except for extra care taken in parsing the consistency results for CDF estimation. Note that both $\tilde{F}^{(i)}$ and $\hat{F}^{(i)}$ are continuous because they are defined by integrating estimated densities.
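The continuity remark can be illustrated numerically. The sketch below replaces the ECDF indicator with a smooth sigmoid, a stand-in for integrating a kernel density estimate (the actual estimator also deconvolves the noise, which is omitted here); the bandwidth and sample size are illustrative:

```python
import math
import random

# Smoothed ECDF sketch: averaging sigmoids centered at the sample points
# (a stand-in for integrating a kernel density estimate; the deconvolution
# step is omitted).  Bandwidth h and sample size are illustrative.
random.seed(2)
sample = [random.random() for _ in range(2000)]   # true CDF: F(z) = z on [0, 1]
h = 0.02

def smooth_ecdf(z):
    """Average of sigmoids centered at the sample points; continuous in z."""
    return sum(1.0 / (1.0 + math.exp(-(z - s) / h)) for s in sample) / len(sample)

gap = abs(smooth_ecdf(0.5 + 1e-6) - smooth_ecdf(0.5))   # no 1/n-sized jumps
err = abs(smooth_ecdf(0.5) - 0.5)                       # close to the true CDF
print(gap, err)
```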
\begin{theorem}[Main theorem: noise distribution is known]\label{thm:tail_noisy_known} For each $(i,j) \in [m] \times [n]$, \begin{align*} &\Prob{\left|A(i, j) - \tilde{A}(i,j)\right| > t}\\ &\leq 2\exp\left( -\frac{n t^2}{8L^2} \right) + \exp\left(-\frac{nt}{24L} \right)\\ &+ 2\exp\left( \frac{-\pi^2 \left(4\gamma \right)^{\frac{2}{\beta}}\left( \frac{np}{2} \right)^{1/6} }{8B^2 K^2_{max}\left( \log \left( 2np \right) \right)^{\frac{2}{\beta}}} \left( \frac{t}{2L} -C \left( \log \frac{np}{2} \right)^{-1/\beta} \right)^2 \right)\\ & + n \exp\left( - \frac{ns}{3} \right) + 4n \exp\left(-c (mp)s^{2}\right) + 2m \exp\left( -\frac{np}{8} \right) + n \exp\left( -\frac{mp}{8} \right), \end{align*} where $c = \min \left\{\frac{l^2}{16\left( D_{max} - D_{min} \right)^2}, \frac{l^2}{64\sigma^2} \right\}$, and $C$ is a constant which controls the bias of $\tilde{F}^{(i)}$ (see Lemma \ref{lem:mean_difference_tilde}). \end{theorem} Note that given a constant $s > 0$, as long as the sampling probability is sufficiently large, i.e., $p = \Omega \left( \max \left\{ \frac{\log n}{m}, \frac{\log m}{n} \right\} \right)$, the terms in the last line, which are independent of $t$, decay exponentially to $0$ as $m, n \to \infty$. \begin{proof} Let $\theta^* := F^{(i)}\left( \tilde{A}(i,j) \right) = F^{(i)}\left( \tilde{g}^{(i)}\left(\hat{q}_{marg}(j) \right) \right) $. Since $\tilde{F}^{(i)}$ is continuous, $\left|\theta^* - \hat{q}_{marg}(j)\right| \leq \left\| \tilde{F}^{(i)} - F^{(i)} \right\|_{\infty}$.
By the same line of argument as in the proof of Theorem \ref{thm:tail_noiseless}, since $\tilde{A}(i,j) = \tilde{g}^{(i)}\left(\hat{q}_{marg}(j) \right)=g \left( \frow{i}, \theta^* \right)$, and $g$ is $(l, L)$-biLipschitz, \begin{align*} \left|A(i,j) - \tilde{A}(i,j)\right| &= \left|g \left( \frow{i}, \fcol{j} \right) - g \left( \frow{i}, \theta^* \right)\right| \\ &\leq L \left| \fcol{j} - \theta^* \right|\\ &\leq L \left( \left| \fcol{j} - \hat{q}_{marg}(j) \right| + \left| \hat{q}_{marg}(j) - \theta^* \right| \right)\\ &\leq L \left( \left| \fcol{j} - \hat{q}_{marg}(j) \right| + \left\| \tilde{F}^{(i)} - F^{(i)} \right\|_{\infty}\right). \end{align*} Again, if $\left| \fcol{j} - \hat{q}_{marg}(j) \right| \leq \frac{t}{2L}$ and $\left\| \tilde{F}^{(i)} - F^{(i)} \right\|_{\infty} \leq \frac{t}{2L}$, then $\left|A(i,j) - \tilde{A}(i,j)\right| \leq t$. We can achieve the following upper bound by applying the union bound to the contrapositive. We let $E := \Erow \cap \Erowp$ in this proof. For $t > 2L C \left( \log n \right)^{-1/\beta}$, \begin{align*} &\Prob{\left|A(i, j) - \tilde{A}(i,j)\right| > t}\\ &\quad\leq \Prob{ \left| \hat{q}_{marg}(j) - \theta_{col}^{(j)} \right| \geq \frac{t}{2L} } + \Prob{\sup_{z \in \Reals} \left| \tilde{F}^{(i)}(z) - F^{(i)}(z) \right| > \frac{t}{2L} }\\ &\quad\leq \Prob{ \left| \hat{q}_{marg}(j) - \theta_{col}^{(j)} \right| \geq \frac{t}{2L} }\\ &\qquad + \Prob{ \left.
\sup_{z \in \Reals} \left| \tilde{F}^{(i)}(z) - F^{(i)}(z) \right| > \frac{t}{2L} \right| E} + \Prob{E^c}\\ &\quad\leq 2\exp\left( -\frac{n t^2}{8L^2} \right) + \exp\left(-\frac{nt}{24L} \right)\\ &\qquad + 2\exp\left( \frac{-\pi^2 \left(4\gamma \right)^{\frac{2}{\beta}}\left( \frac{np}{2} \right)^{1/6} }{8B^2 K^2_{max}\left( \log \left( 2np \right) \right)^{\frac{2}{\beta}}} \left( \frac{t}{2L} -C \left( \log \frac{np}{2} \right)^{-1/\beta} \right)^2 \right)\\ &\qquad + n \exp\left( - \frac{ns}{3} \right) + 4n \exp\left(-c (mp)s^{2}\right) + 2m \exp\left( -\frac{np}{8} \right) + n \exp\left( -\frac{mp}{8} \right), \end{align*} where $c = \min \left\{\frac{l^2}{16\left( D_{max} - D_{min} \right)^2}, \frac{l^2}{64\sigma^2} \right\}$, and $C$ is a constant which controls the bias of $\tilde{F}^{(i)}$ (see Lemma \ref{lem:mean_difference_tilde}). We used an upper bound on $\Prob{E^c}$ obtained from the binomial Chernoff bound: \begin{align*} \Prob{E^c} &= \Prob{\exists i \in [m]: \left| \cB_i \right| < \frac{np}{2} \text{ or } \left| \cB_i \right| > 2np }\\ &= \Prob{ \bigcup_{i=1}^m \left\{ \left\{\left| \cB_i \right| < \frac{np}{2}\right\} \cup \left\{ \left| \cB_i \right| > 2np \right\}\right\}}\\ &\leq \sum_{i=1}^m \left(\Prob{\left| \cB_i \right| < \frac{np}{2}} + \Prob{\left| \cB_i \right| > 2np }\right)\\ &\leq m \left( \exp\left(-\frac{np}{8} \right) + \exp\left(-\frac{np}{3} \right) \right)\\ &\leq 2m \exp\left(-\frac{np}{8} \right). \end{align*} \end{proof} \subsubsection{When noise distribution is also estimated} \DG{***TODO: fill in later; cf.\ [Delaigle et al., 2008]} \begin{theorem}[Main theorem: noise is also estimated] For each $(i,j) \in [m] \times [n]$, .... \end{theorem} \newpage \subsection{Mean Squared Error; Proof of the Main Theorems}\label{appendix:main_proof} Recall the definition of the mean squared error (see Eq.
\eqref{eqn:MSE}) of estimator $\varphi$: \begin{align} &MSE\left( \varphi \right) \nonumber\\ &=\Exp{\frac{1}{mn} \sum_{i=1}^m \sum_{j=1}^n \left( \hat{A}(i,j) - A(i,j) \right)^2} \nonumber\\ &=\frac{1}{mn} \sum_{i=1}^m \sum_{j=1}^n \Exp{\left( \hat{A}(i,j) - A(i,j) \right)^2} &\text{by linearity of expectation}\nonumber\\ &= \Exp{\left( \hat{A}(i,j) - A(i,j) \right)^2} &\text{by exchangeability}\nonumber\\ &= \int_0^{\infty} \Prob{\left( \hat{A}(i,j) - A(i,j) \right)^2 > t } dt & \because \left(\hat{A}(i,j) - A(i,j) \right)^2 \geq 0\nonumber\\ &= \int_0^{\infty} \Prob{\left| \hat{A}(i,j) - A(i,j) \right| > \sqrt{t} } dt & \text{substitute }u = \sqrt{t}\nonumber\\ &= \int_0^{\infty} 2u\Prob{\left| \hat{A}(i,j) - A(i,j) \right| > u } du \label{eqn:integration} \end{align} Now it remains to integrate the tail bounds obtained in the previous section to conclude our main theorems. In general, we can derive the following formulae from integration by substitution: \begin{align} \int_0^{\infty} u e^{-a u^2} du &=\int_0^{\infty} \frac{1}{2a}e^{-z} dz = \left. -\frac{1}{2a}e^{-z} \right|_0^{\infty} = \frac{1}{2a}, \label{eqn:integral}\\ \int_0^{\infty} u e^{-au} du &= \int_0^{\infty} \frac{z}{a^2} e^{-z} dz = \frac{\Gamma(2)}{a^2} = \frac{1}{a^2}. \label{eqn:Gamma2} \end{align} These formulae will be frequently used, because many of our error bounds have such forms. Also, from the model assumption and the construction of the estimators, the estimation error is bounded (\DG{???}), \begin{align*} \left| \breve{A}(i,j) - A(i,j) \right| &\leq D_{max} - D_{min},\\ \left| \tilde{A}(i,j) - A(i,j) \right| &\leq D_{2} - D_{1},\\ \left| \hat{A}(i,j) - A(i,j) \right| &\leq ??, \end{align*} independent of $m, n$. Let $M$ indicate the upper bound. \subsubsection{Proof of Theorem \ref{thm:MSE_noiseless}: Noiseless} \begin{proof}[Proof of Theorem \ref{thm:MSE_noiseless}] Integrate the tail bound from Theorem \ref{thm:tail_noiseless}; plugging in Eq. \eqref{eqn:tail_noiseless} to Eq.
\eqref{eqn:integration} leads to \begin{align*} MSE\left( \breve{\varphi} \right) &= \int_0^{M} 2u \Prob{\left|A(i, j) - \breve{A}(i,j)\right| > u} du\\ &\leq \int_0^{M} 4u\exp \left(- \frac{mnp^2}{18L^2} u^2 \right) du + \int_0^{M} 4u\exp\left(-\frac{np}{9L^2} u^2\right) du \\ &\qquad + \int_0^{M} 2u \left[m \exp\left( -\frac{np}{8} \right) + n \exp\left( -\frac{mp}{8} \right) \right] du\\ &\leq \int_0^{\infty} 4u\exp \left(- \frac{mnp^2}{18L^2} u^2 \right) du + \int_0^{\infty} 4u\exp\left(-\frac{np}{9L^2} u^2\right) du \\ &\qquad + \left[m \exp\left( -\frac{np}{8} \right) + n \exp\left( -\frac{mp}{8} \right) \right] \int_0^{M} 2u du\\ &= \frac{36L^2}{mnp^2} + \frac{18L^2}{np} + M^2 \left[m \exp\left( -\frac{np}{8} \right) + n \exp\left( -\frac{mp}{8} \right) \right]. \end{align*} $M = D_{max} - D_{min}$ is a constant independent of $m, n$. Consequently, as long as $p = \Omega\left(\max\left\{\frac{\log n}{m}, \frac{\log m}{n}\right\} \right)$, $MSE\left( \breve{\varphi} \right) \to 0$ as $m, n \to \infty$. \end{proof} \subsubsection{Proof of Theorem \ref{thm:MSE_noisy_known}: Noise Distribution is Known} \begin{proof}[Proof of Theorem \ref{thm:MSE_noisy_known}] In order to achieve an upper bound on the MSE for the kernel density estimator with known noise, $\tilde{\varphi}$, we integrate the tail probability bound from Theorem \ref{thm:tail_noisy_known}.
\begin{align} MSE\left( \tilde{\varphi} \right) &= \int_0^{M} 2u \Prob{\left|A(i, j) - \tilde{A}(i,j)\right| > u} du \nonumber\\ &\leq \int_0^{\infty} 4u\exp\left( -\frac{n u^2}{8L^2} \right) du + \int_0^{\infty}2u\exp\left(-\frac{nu}{24L} \right) du \nonumber\\ &\quad + \int_0^{\infty}4u\exp\left( \frac{-\pi^2 \left(4\gamma \right)^{\frac{2}{\beta}}\left( \frac{np}{2} \right)^{1/6} }{8B^2 K^2_{max}\left( \log \left( 2np \right) \right)^{\frac{2}{\beta}}} \left( \frac{u}{2L} -C \left( \log \frac{np}{2} \right)^{-1/\beta} \right)^2 \right) du \label{eqn:complicated_integral_draft}\\ &\quad+ \psi(m,n,p,s) \int_0^{M} 2u du, \label{eqn:intermediate_MSE_tilde_draft} \end{align} where $C$ is the constant controlling the bias of $\tilde{F}^{(i)}$ in Lemma \ref{lem:mean_difference_tilde}, and $c = \min \left\{\frac{l^2}{16\left( D_{max} - D_{min} \right)^2}, \frac{l^2}{64\sigma^2} \right\}$. Given a constant $s >0$, the function \begin{align*} &\psi(m,n,p,s)\\ &= n \exp\left( - \frac{ns}{3} \right) + 4n \exp\left(-c (mp)s^{2}\right) + 2m \exp\left( -\frac{np}{8} \right) + n \exp\left( -\frac{mp}{8} \right) \end{align*} decays exponentially fast as $m, n \to \infty$. To compute an upper bound of the term Eq. \eqref{eqn:complicated_integral_draft}, we let $c_1 = \frac{ \pi^2 \left(4\gamma \right)^{\frac{2}{\beta}} }{8B^2 K^2_{max}\left( \log \left( 2np \right) \right)^{\frac{2}{\beta}}}\left( \frac{np}{2} \right)^{1/6}$, and $c_2 = C \left( \log \frac{np}{2} \right)^{-1/\beta}$ and divide the region of integration into two parts pivoting on $u = 4L c_2$: \begin{align*} Eq. \eqref{eqn:complicated_integral_draft} &= \int_0^{\infty}4u\exp\left( - c_1 \left( \frac{u}{2L} - c_2 \right)^2 \right)du\\ &= \int_0^{4L c_2}4u\exp\left( - c_1 \left( \frac{u}{2L} - c_2 \right)^2 \right)du\\ &\quad + \int_{4L c_2}^{\infty} 4u\exp\left( - c_1 \left( \frac{u}{2L} - c_2 \right)^2 \right)du\\ &\leq \int_0^{4L c_2} 4u du + \int_{4L c_2}^{\infty} 4u \exp\left( - \frac{c_1}{16L^2} u^2 \right) du\\ &\leq \int_0^{4L c_2} 4u du + \int_0^{\infty} 4u \exp\left( - \frac{c_1}{16L^2} u^2 \right) du\\ &= 32L^2 c_2^2 + \frac{32L^2}{c_1}. \end{align*} The first inequality follows from the facts that $\exp\left( - c_1 \left( \frac{u}{2L} - c_2 \right)^2 \right) \leq 1$ for all $u \geq 0$ and that $\frac{u}{2L} - c_2 \geq \frac{u}{4L}$ for all $u \geq 4L c_2$; the second inequality holds because the integrand $4u \exp\left( - \frac{c_1}{16L^2} u^2 \right)$ is nonnegative on $[0, 4L c_2]$. Plugging this into Eq. \eqref{eqn:intermediate_MSE_tilde_draft}, we can obtain the following upper bound \begin{align*} MSE\left( \tilde{\varphi} \right) &\leq \frac{16L^2}{n} + \frac{1152L^2}{n^2} + 32L^2 c_2^2 + \frac{32L^2}{c_1} + \psi(m,n,p,s) M^2. \end{align*} Since $M$ is a constant independent of $m$ and $n$, we can observe that $MSE\left( \tilde{\varphi} \right) \to 0$ as $m, n \to \infty$, as long as $p = \Omega\left(\max\left\{\frac{\log n}{m}, \frac{\log m}{n}\right\} \right)$. \end{proof} \subsubsection{Proof of Theorem \ref{thm:MSE_noisy_unknown}: Noise Distribution is Unknown} \DG{***TODO} \section{Proof of Theorem \ref{thm:simple_known}} \label{sec:full_proof_noisy_known} In this section, we shall establish Theorem \ref{thm:simple_known}, which bounds the Mean-Squared-Error for an estimator in a noisy setting with a known noise distribution. We start by describing the estimation algorithm, followed by the analysis that leads to the desired bound. \subsection{Algorithm Description}\label{sec:alg_noisy} The generic algorithm remains the same as that described in Section \ref{sec:alg_generic}.
However, the details of step 1 (estimating $\fcol{j}$, ~$j \in [n]$) and step 2 (estimating $F^{(i)}$, ~$i \in [m]$) of the algorithm change due to the presence of noise. We shall explicitly use the knowledge of the noise distribution in step 2. \paragraph{1. $\hat{q}_{\marg}(j)$: Estimate of $\fcol{j}$, ~$j \in [n]$} Unlike in the noiseless case, we cannot simply use the empirical quantile along a given row $i$, $\hat{q}_i(j)$, as a proxy, since the noise in the data can non-trivially corrupt the estimation. Instead, we need to overcome the effect of noise by ``averaging'' it out. To that end, we shall use empirical quantile estimation with respect to the ``column average'' value rather than simply with respect to a given row. Formally, we define this below. Let $g_{\marg}(y) \equiv \int_0^1 g(x,y) dx$. Then $g_{\marg}(\cdot)$ is increasing since $g(x, \cdot)$ is. Given observations $Z \in \Reals^{m \times n}$, define \begin{align}\label{eqn:Z_marg} Z_{\marg}(j) & = \begin{cases} \frac{ \sum_{i=1}^m M(i,j) Z(i,j)}{\sum_{i=1}^m M(i,j)}, & \text{if } \cB^j \neq \emptyset\\ \frac{1}{2}, & \text{if } \cB^j = \emptyset. \end{cases} \end{align} Then, we estimate the column feature of $j \in [n]$ as \begin{align}\label{eqn:estimate_marg} \hat{q}_{\marg}(j) & = \frac{1}{n}\sum_{j' = 1}^n H \left( Z_{\marg}(j) - Z_{\marg}(j') \right), \end{align} where $H$ is the Heaviside step function, cf. \eqref{eqn:Heaviside}. \paragraph{2. $\tilde{F}^{(i)}$: Estimate of $F^{(i)} = g^{-1}_{x=\frow{i}}$, ~$i \in [m]$} In the noiseless setting, we simply used the empirical CDF as the estimate of $F^{(i)}$ by using observations along row $i$ in matrix $Z$. Since there is noise added in each entry, such an estimator provides an estimate that is corrupted by the additive noise.
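As a concrete illustration of step 1, the column-average quantile estimate of Eqs. \eqref{eqn:Z_marg} and \eqref{eqn:estimate_marg} can be sketched in a few lines. The snippet below is a minimal numerical illustration, not the implementation used in this paper; it assumes the half-maximum convention $H(0) = 1/2$ for the Heaviside function of Eq. \eqref{eqn:Heaviside}.

```python
import numpy as np

def heaviside(x):
    # Step function; the half-maximum convention H(0) = 1/2 is an assumption here.
    return np.where(x > 0, 1.0, np.where(x < 0, 0.0, 0.5))

def z_marg(Z, M):
    # Column average over observed entries (mask M); a column with no
    # observations defaults to 1/2, mirroring the definition of Z_marg(j).
    counts = M.sum(axis=0)
    sums = (M * Z).sum(axis=0)
    out = np.full(Z.shape[1], 0.5)
    seen = counts > 0
    out[seen] = sums[seen] / counts[seen]
    return out

def q_marg(Z, M):
    # Empirical quantile of each column average among all column averages.
    zm = z_marg(Z, M)
    return heaviside(zm[:, None] - zm[None, :]).mean(axis=1)
```

With distinct column averages, $\hat{q}_{\marg}(j)$ is simply the rank of column $j$ minus $1/2$, divided by $n$; averaging over the rows is what suppresses the entrywise noise.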
Effectively, each entry in row $i$ can be viewed as the sum of two independent random variables: the first is $g(\frow{i}, \fcol{j})$, with randomness induced by the column parameter $\fcol{j}$, which is sampled uniformly from $[0,1]$; the second is the additive noise. Therefore, the empirical CDF of the observations gives a good estimate of the distribution of the sum of these two random variables. However, our interest is in recovering the distribution of the first random variable, and we know the distribution of the second. \medskip \noindent {\em Some Background.} Put another way, we wish to recover the distribution of a random variable $X$, but we observe samples of $Z = X + N$ instead of $X$, and we know the distribution of $N$. Due to independence, we know that $\phi_Z(t) = \phi_X(t) \phi_N(t)$ for all $t \in \Reals$, where $\phi_Z, \phi_X, \phi_N$ denote the characteristic functions of the random variables $Z$, $X$ and $N$, respectively. Since we know the noise distribution, equivalently $\phi_N(\cdot)$, if we can estimate $\phi_Z(\cdot)$ from observations, say $\hat{\phi}_Z(\cdot)$, then we can ``de-convolve'' it to obtain an estimate of $\phi_X(\cdot)$ as \begin{align*} \hat{\phi}_X(t) & = \frac{\hat{\phi}_Z(t)}{\phi_N(t)}, ~~t \in \Reals. \end{align*} Now, to produce the estimate $\hat{\phi}_Z(\cdot)$, the first step is a non-parametric estimator of the distribution of $Z$. Kernel smoothing is a well-studied non-parametric approach which estimates the density (which exists in our setting) through interpolation. Precisely, given a kernel $K: \Reals \to \Reals_{\geq 0}$ and a bandwidth parameter $h > 0$, the density of $Z$ is estimated as \begin{align}\label{eq:kernel.0} \hat{f}_Z(z) & = \frac{1}{hn} \sum_{i=1}^n K\left( \frac{z - Z_i}{h} \right), ~~ z \in \Reals.
\end{align} { Denote by $\cF: L^1(\Reals) \to C_b(\Reals)$ the Fourier transform operator, which maps the space of absolutely integrable functions $L^1(\Reals)$ to the space of continuous bounded functions. Recall that $\cF$ maps $f \in L^1(\Reals)$ to $\cF\{f\} \in C_b(\Reals)$ where for all $t \in \Reals$, \[ \cF\Big\{f \Big\}(t) = \int_{-\infty}^\infty \exp( \img\, t s) f(s) ds. \] We use notation $\img \equiv \sqrt{-1}$. } Similarly, for any absolutely integrable function $g \in L^1(\Reals)$, we define the operator $\cF^{-1}: L^1(\Reals) \to C_b(\Reals)$ by: for all $s \in \Reals$, \[ \cF^{-1}\Big\{ g \Big\}(s) = \frac{1}{2\pi}\int_{-\infty}^\infty \exp(- \img\, t s) g(t) dt. \] The Fourier inversion theorem ensures that $\cF^{-1} \cF f = f$ if $f$ satisfies certain conditions. For example, if the function is absolutely integrable and piecewise continuous (which is the case in our model), then $\cF^{-1} \left( \cF f \right) (s) = \frac{1}{2}\left( f(s_-) + f(s_+) \right)$. Applying the Fourier operator to \eqref{eq:kernel.0} and using the linearity of $\cF$, we obtain \begin{align*} \hat{\phi}_Z(t) & = \cF \left\{ \hat{f}_Z \right\}(t) = \frac{1}{hn} \sum_{i=1}^n \cF \left\{ K \left( \frac{\cdot - Z_i}{h} \right) \right\}.
\end{align*} Now, applying the inverse Fourier operator, $\cF^{-1}$, to $\hat{\phi}_Z/\phi_N$ we obtain \begin{align} \hat{f}_X &= \cF^{-1} \left\{ \frac{\hat{\phi}_Z}{\phi_N} \right\} \nonumber \\ &= \frac{1}{hn} \sum_{i=1}^n \cF^{-1} \left\{ \frac{\cF \left\{ K \left( \frac{\cdot- Z_i}{h}\right) \right\}}{\phi_N} \right\} \nonumber \\ &= \frac{1}{hn} \sum_{i=1}^n \cF^{-1} \left\{ \frac{ h \exp( \img\, Z_i \, \cdot \,) \phi_K(h\, \cdot\,)}{\phi_N} \right\}, \label{eq:kernel.1} \end{align} where we used the following properties of the Fourier operator: \begin{align*} \cF\left\{ f(\cdot - a) \right\}(t) & = \exp(\img\, a \, t) \cF\{ f\}(t) \\ \cF\left\{ f( b \, \cdot \, ) \right\}(t) & = \frac{1}{|b|} \cF\left\{ f(\cdot) \right\}\left( \frac{t}{b} \right). \end{align*} Applying similar properties to the inverse Fourier operator, $\cF^{-1}$, we obtain \begin{align}\label{eq:kernel.2} \cF^{-1} \left\{ \frac{ h \exp( \img\, Z_i \, \cdot \,) \phi_K(h\, \cdot\,)}{\phi_N(\, \cdot \,) } \right\} (x) & = \cF^{-1} \left\{ \frac{ \phi_K(\, \cdot \,) }{\phi_N(\, \cdot \, h^{-1})} \right\} \left( \frac{x - Z_i}{h} \right). \end{align} Define the function $L$ as \begin{align}\label{eqn:kernel_known} L & \equiv \cF^{-1} \left\{ \frac{ \phi_K(\, \cdot \,) }{\phi_N(\, \cdot \, h^{-1})} \right\}, \quad \text{i.e.,} \quad L(z) = \frac{1}{2\pi} \int \exp(- \img\, t z ) \frac{\phi_K(t)}{\phi_N\left(\frac{t}{h}\right)} dt, ~~z \in \Reals. \end{align} From \eqref{eq:kernel.1} and \eqref{eq:kernel.2}, and the definition of $L$, we obtain \begin{align}\label{eq:kernel.4} \hat{f}_X(x) &= \frac{1}{hn} \sum_{i=1}^n L\Big(\frac{x - Z_i}{h}\Big). \end{align} Indeed, this is known as the deconvolution kernel density estimator in the literature. We shall adopt prior results \citep{Carroll1988, Fan1991, Delaigle2008} on its consistency to establish our results. Appendix \ref{appx:deconvolution} provides a summary of them. \medskip \noindent {\em Summary of Estimator.} Recall $\cB_i = \{ j \in [n]: M(i,j) = 1 \}$.
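To make Eq. \eqref{eq:kernel.4} concrete before summarizing the estimator, the sketch below evaluates $\hat{f}_X$ by direct quadrature of Eq. \eqref{eqn:kernel_known} for Gaussian noise. The choice $\phi_K(t) = (1 - t^2)^3$ on $[-1, 1]$ and all numerical parameters are illustrative assumptions, not prescribed by the analysis.

```python
import numpy as np

def deconv_density(x, Z, h, sigma):
    # Deconvolution kernel density estimate of f_X from samples Z_i = X_i + N_i,
    # assuming Gaussian noise with phi_N(t) = exp(-sigma^2 t^2 / 2).
    # phi_K(t) = (1 - t^2)^3 on [-1, 1] is an illustrative, compactly
    # supported kernel transform.
    t = np.linspace(-1.0, 1.0, 401)
    dt = t[1] - t[0]
    weight = (1.0 - t ** 2) ** 3 * np.exp(0.5 * (sigma * t / h) ** 2)
    u = (np.asarray(x)[:, None] - np.asarray(Z)[None, :]) / h
    # L(z) = (1/2pi) Int exp(-i t z) phi_K(t) / phi_N(t/h) dt, which is
    # real by symmetry, so only the cosine part is integrated.
    Lz = (np.cos(u[:, :, None] * t) * weight).sum(axis=2) * dt / (2.0 * np.pi)
    return Lz.sum(axis=1) / (h * len(Z))
```

Since each summand integrates to $\phi_K(0)/\phi_N(0) = 1$, the estimated density integrates to one up to grid truncation, which gives a quick correctness check.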
Let $\phi_N$ be the Fourier transform of the noise density, which is known. Let $K$ be a symmetric kernel with $\phi_K$ being its Fourier transform. We define $\tilde{F}^{(i)}$, the estimate of $F^{(i)}$, as follows: for any choice of constants $D_1$, $D_2$ such that $D_1 \leq D_{min} \leq D_{max} \leq D_2$, \begin{equation}\label{eqn:ECDF_known_noise} \tilde{F}^{(i)}(z) = \begin{cases} \int_{D_1}^{z } \tilde{f}^{(i)}(w) dw, & \text{if } z < D_2,\\ 1, & \text{if } z \geq D_2, \end{cases} \end{equation} where, following \eqref{eq:kernel.4}, we define \begin{align} \tilde{f}^{(i)}(z) &= \frac{1}{h |\cB_i|} \sum_{j\in \cB_i} L \left( \frac{z- Z(i,j)}{h} \right). \label{eqn:known_density} \end{align} The kernel bandwidth parameter is $h = \left(4\gamma \right)^{\frac{1}{\beta}}\left( \log |\cB_i| \right)^{-\frac{1}{\beta}}$, where $\beta$ and $\gamma$ are smoothness parameters for the noise $N$ (see Eq. \eqref{eqn:model_supersmooth}). \begin{remark}[Constraints on kernel $K$]\label{rem:kernel} We choose kernel $K$ to satisfy the following conditions: \begin{itemize} \item[1.] It is symmetric, i.e., $K(x) = K(-x)$ for all $x \in \Reals$. \item[2.] $\sup_{t \in \Reals} \left| \phi_K(t) \right| < \infty$. \item[3.] Support of $\phi_K$ is assumed to be within $[-1, 1]$. For $K \in L^1(\Reals)$, $\cF\{K\}$ is uniformly continuous, so $K_{max}=\max_{t \in [-1,1]} \left| \phi_K(t) \right| < \infty$ exists. \end{itemize} \end{remark} \paragraph{3. $\tilde{A}(i,j)$: Estimate of $A(i,j)$, ~$i \in [m], j \in [n]$} For each $i \in [m]$, let $\tilde{g}^{(i)} = \left(\tilde{F}^{(i)}\right)^{-1}$ denote the quantile function (right pseudo-inverse) associated with $\tilde{F}^{(i)}$. Plugging Eq. \eqref{eqn:estimate_marg} into it leads to the estimate of the matrix entry: \begin{align}\label{eqn:estimate_noisy_known} \tilde{A}(i,j) & = \tilde{g}^{(i)}\left( \hat{q}_{\marg}(j) \right).
\end{align} \subsection{Algorithm Analysis}\label{sec:analysis_noisy_known} Similar to Section \ref{sec:analysis_noiseless}, we shall establish the proof of Theorem \ref{thm:simple_known} by establishing the concentration of the quantile estimate $\hat{q}_{\marg}(j)$ around $\fcol{j}$ for $j \in [n]$ in Lemma \ref{lem:noisy_quantile} and the concentration of the CDF estimator $\tilde{F}^{(i)}$ around $F^{(i)}$ for $i \in [m]$ in Lemma \ref{lem:noisy_known_cdf}, to set up the key results needed to conclude the desired Mean-Squared-Error bound on the eventual estimator. \subsubsection{Concentration of $\hat{q}_{\marg}(j)$ around $\fcol{j}$, ~$j \in [n]$} The quantile estimator $\hat{q}_{\marg}(j)$, as defined in \eqref{eqn:estimate_marg}, is shown to be concentrated around $\fcol{j}$ under the assumption on the noise as stated in Section \ref{sec:noise}. We define the function $\Qf: \Reals_+ \to \Reals_+$ as \begin{equation}\label{eqn:qstar} \Qf\left(x\right) = 2\sqrt{\pi} \left( \frac{1}{\sqrt{C_1 x}} + \frac{1}{\sqrt{C_2 x}} + \frac{1}{\sqrt{mp C_1 e^{-C_1}}} + \frac{1}{\sqrt{mp C_2 e^{-C_2}}} \right), \end{equation} where $C_1 = \frac{l^2}{2(D_{max} - D_{min})^2}$ and $C_2 = \frac{l^2}{8\sigma^2}$ are model-dependent constants. \begin{lemma}\label{lem:noisy_quantile} For any $t \geq 4 \Qf(\frac{mp}{2}) = \Theta\Big(\frac{1}{\sqrt{mp}}\Big)$, \begin{align*} \Prob{ \left| \hat{q}_{\marg}(j) - \fcol{j} \right| > t } &\leq \exp\left( - \frac{nt^2}{2} \right) + \exp\left( -\frac{n( \frac{t}{2} - \Qf\left(\frac{mp}{2}\right))}{3} \right)\\ &\quad + \exp\left( - \frac{mp}{8} \right). \end{align*} \end{lemma} In the main text, we defined $t_q^* = \Qf\left(\frac{mp}{2}\right)$ for simplicity. The proof can be found in Appendix \ref{sec:quantile_noisy}. \subsubsection{Concentration of $\tilde{F}^{(i)}$ around $F^{(i)}$, ~$i \in [m]$} Here we shall establish that $\tilde{F}^{(i)}$ converges uniformly to $F^{(i)}$ in the large sample limit.
Specifically, we obtain the following Lemma that provides an exponentially decaying probabilistic tail bound for this uniform convergence. Before stating the lemma, we recall that $C_3 = C(l)$ (see Lemma \ref{lem:mean_difference_tilde}) is an absolute constant which depends only on the parameter $l$, and define a new constant $C_4 = \frac{ B K_{max} \left( D_{2} - D_{1} \right) }{\pi \left( 4\gamma \right)^{\frac{1}{\beta}} }$ which also depends only on the model parameters. Let $C = C_3 + C_4$ denote the sum of those two constants. \begin{lemma}\label{lem:noisy_known_cdf} For any $i \in [m]$, and for any $t >C \left( \log \left| \cB_i \right| \right)^{-1/\beta}$, \begin{align*} &\Prob{ \sup_{z \in [D_1, D_2]} \left| \tilde{F}^{(i)} (z) - F^{(i)}(z) \right| > t}\\ & \qquad \leq 2 \left| \cB_i \right|^{\frac{1}{4}} \left( \log \left| \cB_i \right| \right)^{\frac{2}{\beta}} \exp\left( \frac{- \left| \cB_i \right|^{1/2} }{2 C_4^2\left( \log \left| \cB_i \right| \right)^{\frac{2}{\beta}}} \left( t -C \left( \log \left| \cB_i \right| \right)^{-1/\beta} \right)^2 \right). \end{align*} \end{lemma} We state a useful consequence of the above result. To that end, for any $i \in [m]$, define \begin{align} \Erow &\equiv \Big\{ |\cB_i| \geq \frac{np}{2}\Big\}, \text{~and~} \Erowp \equiv \Big\{ |\cB_i| \leq 2np\Big\}. \label{eqn:sufficient_overlap} \end{align} We define another constant for the sake of brevity: \[ c_{n,p} \equiv 2(2np)^{\frac{1}{4}} \left( \log \left(2np\right) \right)^{\frac{2}{\beta}}. \] \begin{corollary}\label{coro:noisy_CDF_uniform} For any $i \in [m]$, and any $t >C \left( \log \frac{np}{2} \right)^{-1/\beta}$, \begin{align*} &\Prob{ \left. \sup_{z \in [D_1, D_2]} \left| \tilde{F}^{(i)} (z) - F^{(i)}(z) \right| > t \right| \Erow, \Erowp}\\ & \qquad \leq c_{n,p} \exp\left( \frac{- \left( \frac{np}{2} \right)^{1/2} }{2C_4^2 \left( \log \left( 2np \right) \right)^{\frac{2}{\beta}}} \left( t -C \left( \log \frac{np}{2} \right)^{-1/\beta} \right)^2 \right).
\end{align*} \end{corollary} \subsection{Completing the Proof of Theorem \ref{thm:simple_known}} In this section, we complete the proof of Theorem \ref{thm:simple_known} by using Lemma \ref{lem:noisy_quantile} and Corollary \ref{coro:noisy_CDF_uniform}. The proof follows a similar structure to that of Theorem \ref{thm:simple_noiseless}. First, we establish a tail bound on $|\tilde{A}(i,j) - A(i,j)|$ and then integrate it to obtain a bound on the Mean-Squared-Error (MSE). The details differ due to the extra care required to handle the noisy setting. \subsubsection{Tail Bound on $|\tilde{A}(i,j) - A(i,j)|$} For a given choice of parameters $t > 0$ and $L, \beta, \Qf, m, n$ and $p$ as defined before along with a universal constant $C$, define conditions \begin{align}\label{eqn:technical_conditions} E_1 & \equiv\Big\{ t \leq 8L \Qf\left(\frac{mp}{2} \right) \Big\} ~~ \text{and} ~~ E_2 \equiv \Big\{ t \leq 4LC \left( \log \frac{np}{2} \right)^{-1/\beta} \Big\}. \end{align} \begin{theorem}\label{thm:tail_noisy_known} For each $(i,j) \in [m] \times [n]$, for any $t \geq 0$, \begin{align*} &\Prob{\left| \tilde{A}(i, j) - A(i,j)\right| > t}\\ &\qquad \leq \Ind{ E_1 } + \Ind {E_2} + \exp\left( -\frac{n( \frac{t}{4L} - \Qf\left(\frac{mp}{2}\right))}{3} \right) \Ind{E_1^c} \\ &\qquad\quad + c_{n,p}\exp\left( \frac{-\left( \frac{np}{2} \right)^{1/2} }{2C_4^2\left( \log \left( 2np \right) \right)^{\frac{2}{\beta}}} \left( \frac{t}{2L} -C \left( \log \frac{np}{2} \right)^{-1/\beta} \right)^2 \right) \Ind{E_2^c} \\ &\qquad\quad + \exp\left( -\frac{n t^2}{8L^2} \right) + \exp\left( - \frac{mp}{8} \right) + 2 \exp\left(-\frac{np}{8} \right). \end{align*} \end{theorem} Note that the terms in the last line, which are independent of $t$, decay to $0$ as $n \to \infty$ at the exponential rate of $np$ as long as the sampling probability is sufficiently large, i.e., $p = \omega \left(\frac{1}{n}\right)$.
\begin{proof} Let $\theta^* \equiv F^{(i)}\left( \tilde{A}(i,j) \right) = F^{(i)}\left( \tilde{g}^{(i)}\left(\hat{q}_{\marg}(j) \right) \right) $. Since $\tilde{F}^{(i)}$ is continuous, $\left|\theta^* - \hat{q}_{\marg}(j)\right| \leq \left\| \tilde{F}^{(i)} - F^{(i)} \right\|_{\infty}$. By the same line of argument as in the proof of Theorem \ref{thm:tail_noiseless}, since $\tilde{A}(i,j) = \tilde{g}^{(i)}\left(\hat{q}_{\marg}(j) \right)=g \left( \frow{i}, \theta^* \right)$, and $g$ is $(l, L)$-biLipschitz, \begin{align*} \left|\tilde{A}(i,j) - A(i,j)\right| &= \left|g \left( \frow{i}, \fcol{j} \right) - g \left( \frow{i}, \theta^* \right)\right| \\ &\leq L \left| \fcol{j} - \theta^* \right|\\ &\leq L \left( \left| \fcol{j} - \hat{q}_{\marg}(j) \right| + \Big| \hat{q}_{\marg}(j) - \theta^* \Big| \right)\\ &\leq L \left( \left| \fcol{j} - \hat{q}_{\marg}(j) \right| + \Big\| \tilde{F}^{(i)} - F^{(i)} \Big\|_{\infty}\right). \end{align*} Again, if both $\left| \fcol{j} - \hat{q}_{\marg}(j) \right| \leq \frac{t}{2L}$ and $\left\| \tilde{F}^{(i)} - F^{(i)} \right\|_{\infty} \leq \frac{t}{2L}$ are satisfied, then $\left| \tilde{A}(i,j) - A(i,j)\right| \leq t$. Applying the union bound to the contrapositive yields the following upper bound. We let $E_{(i)} := \Erow \cap \Erowp$ in this proof. Then it follows that \begin{align} &\Prob{\left| \tilde{A}(i, j) - A(i,j)\right| > t} \label{eqn:prob_aggr}\\ &\qquad\leq \Prob{ \left| \hat{q}_{\marg}(j) - \fcol{j} \right| > \frac{t}{2L} } + \Prob{\sup_{z \in \Reals} \left| \tilde{F}^{(i)}(z) - F^{(i)}(z) \right| > \frac{t}{2L} } \nonumber\\ &\qquad\leq \Prob{ \left| \hat{q}_{\marg}(j) - \fcol{j} \right| > \frac{t}{2L} } \nonumber\\ &\qquad\quad + \Prob{ \left. \sup_{z \in \Reals} \left| \tilde{F}^{(i)}(z) - F^{(i)}(z) \right| > \frac{t}{2L} \right| E_{(i)}} + \Prob{E_{(i)}^c}.
\nonumber \end{align} Since any probability is trivially bounded above by $1$, it follows from Lemma \ref{lem:noisy_quantile} that \begin{align*} &\Prob{ \left| \hat{q}_{\marg}(j) - \fcol{j} \right| > \frac{t}{2L} }\\ &\qquad \leq \Ind{t \leq 8L \Qf\left(\frac{mp}{2}\right) }\\ &\qquad\quad+ \Ind{t \geq 8L \Qf\left(\frac{mp}{2}\right) }\\ &\qquad\qquad\quad\times\left[ \exp\left( -\frac{n t^2}{8L^2} \right) + \exp\left( -\frac{n( \frac{t}{4L} - \Qf\left(\frac{mp}{2}\right))}{3} \right) + \exp\left( - \frac{mp}{8} \right)\right]. \end{align*} In a similar manner, we have \begin{align*} &\Prob{ \left. \sup_{z \in \Reals} \left| \tilde{F}^{(i)}(z) - F^{(i)}(z) \right| > \frac{t}{2L} \right| E_{(i)}}\\ &\qquad \leq \Ind{t \leq 4LC \left( \log \frac{np}{2} \right)^{-1/\beta} } \\ &\qquad\quad + \Ind{t \geq 4LC \left( \log \frac{np}{2} \right)^{-1/\beta} } \\ &\qquad\qquad\quad\times c_{n,p}\exp\left( \frac{-\left( \frac{np}{2} \right)^{1/2} }{2C_4^2 \left( \log \left( 2np \right) \right)^{\frac{2}{\beta}}} \left( \frac{t}{2L} -C \left( \log \frac{np}{2} \right)^{-1/\beta} \right)^2 \right). \end{align*} Note that $t \geq 4LC \left( \log \frac{np}{2} \right)^{-1/\beta}$ implies that $\frac{t}{2L} \geq C \left( \log \frac{np}{2} \right)^{-1/\beta}$. We used an upper bound on $\Prob{E_{(i)}^c}$ obtained from the binomial Chernoff bound: \begin{align*} \Prob{E_{(i)}^c} &= \Prob{ \left| \cB_i \right| < \frac{np}{2} \text{ or } \left| \cB_i \right| > 2np }\\ &\leq \Prob{\left| \cB_i \right| < \frac{np}{2}} + \Prob{\left| \cB_i \right| > 2np }\\ &\leq \exp\left(-\frac{np}{8} \right) + \exp\left(-\frac{np}{3} \right)\\ &\leq 2 \exp\left(-\frac{np}{8} \right). \end{align*} Substituting these three upper bounds back into Eq.
\eqref{eqn:prob_aggr}, we can conclude that \begin{align*} &\Prob{\left| \tilde{A}(i, j) - A(i,j)\right| > t}\\ &\qquad \leq \Ind{t \leq 8L \Qf\left(\frac{mp}{2}\right) } + \Ind{t \leq 4LC \left( \log \frac{np}{2} \right)^{-1/\beta} }\\ &\qquad\quad + \exp\left( -\frac{n( \frac{t}{4L} - \Qf\left(\frac{mp}{2}\right))}{3} \right) \Ind{t \geq 8L \Qf\left(\frac{mp}{2}\right) }\\ &\qquad\quad + \Ind{t \geq 4LC \left( \log \frac{np}{2} \right)^{-1/\beta} } \\ &\qquad\qquad\quad \times c_{n,p} \exp\left( \frac{-\left( \frac{np}{2} \right)^{1/2} }{2C_4^2 \left( \log \left( 2np \right) \right)^{\frac{2}{\beta}}} \left( \frac{t}{2L} -C \left( \log \frac{np}{2} \right)^{-1/\beta} \right)^2 \right)\\ &\qquad\quad + \exp\left( -\frac{n t^2}{8L^2} \right) + \exp\left( - \frac{mp}{8} \right) + 2 \exp\left(-\frac{np}{8} \right). \end{align*} \end{proof} \subsubsection{Mean Squared Error} Let $\tilde{\varphi}$ denote the estimator which maps $Z$ to $\tilde{A}$. By the same line of argument as in Eq. \eqref{eqn:integration}, the mean squared error of the estimator $\tilde{\varphi}$ is given by \begin{align} MSE\left( \tilde{\varphi} \right) &= \int_0^{\infty} 2u\Prob{\left| \tilde{A}(i,j) - A(i,j) \right| > u } du. \label{eqn:integration2} \end{align} Also, from the model assumption and the construction of the estimators, the estimation error is bounded above: \[ \left| \tilde{A}(i,j) - A(i,j) \right| \leq D_2 - D_1. \] Let $D = D_2 - D_1$ denote the upper bound. Note that $D$ is a constant independent of $m, n$.
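The tail-integration identity behind Eq. \eqref{eqn:integration2}, namely $\Exp{W^2} = \int_0^{\infty} 2u \Prob{W > u} du$ for nonnegative $W$, admits a quick numerical sanity check. The snippet below is a standalone illustration in which a half-normal variable stands in for the absolute estimation error; nothing in it depends on the estimator.

```python
import numpy as np

# Empirical check of E[W^2] = int_0^inf 2u P(W > u) du on simulated data.
rng = np.random.default_rng(1)
W = np.abs(rng.normal(0.0, 1.0, 200_000))   # stand-in for |A_tilde - A|
u = np.linspace(0.0, 8.0, 4001)             # truncating at 8 loses almost nothing
Ws = np.sort(W)
tail = 1.0 - np.searchsorted(Ws, u, side="right") / len(W)   # P(W > u)
mse_via_tail = np.sum(2.0 * u * tail) * (u[1] - u[0])
mse_direct = np.mean(W ** 2)                # both are close to E[W^2] = 1
```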
\begin{theorem}[Main theorem 2 -- full version of Theorem \ref{thm:simple_known}; known noise]\label{thm:MSE_noisy_known} The mean squared error of the deconvolution kernel estimator $\tilde{\varphi}$ is bounded above as follows: \begin{align*} &MSE\left( \tilde{\varphi} \right) \\ &\leq 16L^2C^2 \left( \log \frac{np}{2} \right)^{-2/\beta} + 128\sqrt[4]{8}L^2 C_4^2 \frac{\left( \log \left( 2np \right) \right)^{\frac{4}{\beta}}}{(np)^{\frac{1}{4}}} + 64L^2\Qf\left(\frac{mp}{2}\right)^2\\ &\quad + \frac{8L^2}{n} + \frac{288L^2}{n^2} + \frac{192L^2}{n}\Qf\left(\frac{mp}{2}\right) + D^2 \left[ \exp\left( - \frac{mp}{8}\right) + 2 \exp\left( - \frac{np}{8}\right) \right]. \end{align*} \end{theorem} First of all, we note that $MSE\left( \tilde{\varphi} \right) \to 0$ as $m, n \to \infty$ as long as the sample complexity satisfies $p = \omega \Big(\max \big\{ \frac{1}{m}, \frac{1}{n}\big\}\Big)$. Recall from Eq. \eqref{eqn:qstar} that $\Qf\left( \frac{mp}{2}\right) = \Theta \left( \frac{1}{\sqrt{mp}}\right)$. We can observe that the term $ 16L^2C^2 \left( \log \frac{np}{2}\right)^{-\frac{2}{\beta}} $ dominates the MSE, while the other terms decay faster unless the matrix is highly imbalanced so that $mp = O\left( \log np \right)$. This MSE bound achieves the asymptotically optimal rate of convergence as long as $mp = \omega(\log np)$. \begin{proof}[Proof of Theorem \ref{thm:MSE_noisy_known}] In order to achieve an upper bound on the MSE for the kernel density estimator with known noise, $\tilde{\varphi}$, we integrate the tail probability bound from Theorem \ref{thm:tail_noisy_known}. We first recall from Eqs. \eqref{eqn:integral} and \eqref{eqn:Gamma2} that \begin{align*} \int_0^{\infty} u e^{-a u^2} du =\frac{1}{2a}, \quad\text{and}\quad \int_0^{\infty} u e^{-au} du = \frac{1}{a^2}.
\end{align*} Now, the mean squared error can be written in the following form: \begin{align} MSE\left( \tilde{\varphi} \right) &= \int_0^{D} 2u \Prob{\left| \tilde{A}(i, j) - A(i,j)\right| > u} du \nonumber\\ &\leq \int_0^{D} 2u \left[ \exp\left( - \frac{mp}{8}\right) + 2 \exp\left( - \frac{np}{8}\right) \right] du \nonumber\\ &\quad + \int_0^{8L\Qf\left(\frac{mp}{2}\right)} 2u~ du + \int_0^{4LC \left( \log \frac{np}{2} \right)^{-1/\beta}} 2u~ du \nonumber\\ &\quad + \int_0^{D} 2u\exp\left( -\frac{n u^2}{8L^2} \right) du \label{eqn:term1inMSEtilde}\\ &\quad + \int_{8L\Qf\left(\frac{mp}{2}\right)}^{D} 2u\exp\left( -\frac{n( \frac{u}{4L} - \Qf\left(\frac{mp}{2}\right))}{3} \right)du \label{eqn:intermediate_MSE_tilde}\\ &\quad + \int_{4LC \left( \log \frac{np}{2} \right)^{-1/\beta}}^D 4c_{n,p}u \nonumber\\ &\quad\qquad \times\exp\left( \frac{-\left( \frac{np}{2} \right)^{1/2} }{2C_4^2 \left( \log \left( 2np \right) \right)^{\frac{2}{\beta}}} \left( \frac{u}{2L} -C \left( \log \frac{np}{2} \right)^{-1/\beta} \right)^2 \right) du \label{eqn:complicated_integral}. \end{align} Recall that $\Qf: \Reals_+ \to \Reals_+$ is the monotone decreasing function defined in front of Lemma \ref{lem:noisy_quantile}: $\Qf\left(x\right) = 2\sqrt{\pi} \left( \frac{1}{\sqrt{C_1 x}} + \frac{1}{\sqrt{C_2 x}} + \frac{1}{\sqrt{mp C_1 e^{-C_1}}} + \frac{1}{\sqrt{mp C_2 e^{-C_2}}} \right)$, where $C_1 = \frac{l^2}{2(D_{max} - D_{min})^2}$ and $C_2 = \frac{l^2}{8\sigma^2}$ are some constants which depend only on model parameters. $C = C_3 + C_4$ is the sum of two model dependent constants, where $C_3 = C(l)$ (see Lemma \ref{lem:mean_difference_tilde}) and $C_4 = \frac{ B K_{max} \left( D_{2} - D_{1} \right) }{\pi \left( 4\gamma \right)^{\frac{1}{\beta}} }$. We also recall $c_{n,p} = 2(2np)^{\frac{1}{4}} \left( \log \left(2np\right) \right)^{\frac{2}{\beta}}$. First of all, Eq. \eqref{eqn:term1inMSEtilde} is bounded above by \begin{align*} Eq. 
\eqref{eqn:term1inMSEtilde} &\leq \int_0^{\infty} 2u\exp\left( -\frac{n u^2}{8L^2} \right) du = \frac{8L^2}{n}. \end{align*} Next, we can achieve the following upper bound on Eq. \eqref{eqn:intermediate_MSE_tilde}: \begin{align*} Eq. \eqref{eqn:intermediate_MSE_tilde} &= \int_{8L\Qf\left(\frac{mp}{2}\right)}^{D} 2u\exp\left( -\frac{n\left( u - 4L\Qf\left(\frac{mp}{2}\right)\right)}{12L} \right)du\\ &\leq \int_{0}^{D} 2\left(u' + 8L\Qf\left(\frac{mp}{2}\right) \right) \exp\left( -\frac{n\left( u' + 4L\Qf\left(\frac{mp}{2}\right)\right)}{12L} \right)du'\\ &\leq \int_{0}^{D} 2\left(u' + 8L\Qf\left(\frac{mp}{2}\right) \right) \exp\left( -\frac{n u'}{12L} \right)du' \qquad\because \Qf\left(\frac{mp}{2}\right) \geq 0 \\ &\leq \int_{0}^{\infty} 2\left(u' + 8L\Qf\left(\frac{mp}{2}\right) \right) \exp\left( -\frac{n u'}{12L} \right)du'\\ &= \frac{288L^2}{n^2} + \frac{192L^2}{n}\Qf\left(\frac{mp}{2}\right). \end{align*} The first inequality substitutes $u' = u - 8L\Qf\left(\frac{mp}{2}\right)$ and enlarges the upper limit of integration from $D - 8L\Qf\left(\frac{mp}{2}\right)$ to $D$, which is valid since the integrand is nonnegative. Lastly, we compute an upper bound of the term Eq. \eqref{eqn:complicated_integral}. For brevity's sake, we let $c_1 = \frac{ 1 }{2C_4^2 \left( \log \left( 2np \right) \right)^{\frac{2}{\beta}}}\left( \frac{np}{2} \right)^{\frac{1}{2}}$, and $c_2 = C \left( \log \frac{np}{2} \right)^{-1/\beta}$: \begin{align*} Eq. \eqref{eqn:complicated_integral} &= \int_{4L c_2}^{D}4 c_{n,p}u\exp\left( - c_1 \left( \frac{u}{2L} - c_2 \right)^2 \right)du\\ &\leq \int_{4L c_2}^{D}4 c_{n,p}u\exp\left( - c_1 \left( \frac{u}{4L} \right)^2 \right)du \qquad\because \frac{u}{2L} - c_2 \geq \frac{u}{4L}, \forall u \geq 4Lc_2\\ &\leq \int_{0}^{\infty}4 c_{n,p}u\exp\left( - \frac{c_1}{16L^2} u^2 \right)du \qquad\because u \exp\left( - \frac{c_1}{16L^2} u^2 \right) \geq 0, \forall u \geq 0\\ &= \frac{32 c_{n,p} L^2}{c_1}. \end{align*} Substituting these upper bounds for Eqs.
\eqref{eqn:term1inMSEtilde}, \eqref{eqn:intermediate_MSE_tilde} and \eqref{eqn:complicated_integral}, we can obtain the following upper bound \begin{align*} &MSE\left( \tilde{\varphi} \right)\\ &\leq D^2 \left[ \exp\left( - \frac{mp}{8}\right) + 2 \exp\left( - \frac{np}{8}\right) \right] + \left[ 8L\Qf\left(\frac{mp}{2}\right) \right]^2 + \left[ 4LC \left( \log \frac{np}{2} \right)^{-1/\beta} \right]^2\\ &\quad + \frac{8L^2}{n} + \frac{288L^2}{n^2} + 8L\Qf\left(\frac{mp}{2}\right) \sqrt{\frac{3L\pi}{n}} + 64\sqrt[4]{8}L^2 C_4^2 \frac{\left( \log \left( 2np \right) \right)^{\frac{4}{\beta}}}{(np)^{\frac{1}{4}}}. \end{align*} Rearranging the terms in the increasing order of convergence rates concludes the proof. \end{proof} \section{Proof of Theorem \ref{thm:simple_unknown}}\label{sec:full_proof_noisy_unknown} In the previous section, we proposed an estimation procedure assuming the noise distribution is known. Here, we discuss a consistent estimation procedure for a similar setting, with the only difference that the noise distribution is {\em unknown}. Specifically, we shall establish Theorem \ref{thm:simple_unknown}. The structure of this section is the same as that of the preceding sections. \subsection{Algorithm Description}\label{sec:algorithm_unknown_hat} In the absence of knowledge of the noise distribution, the CDF estimation algorithm presented in the previous section is no longer valid because the noise characteristic function $\phi_N$ in Eq. \eqref{eqn:kernel_known} is not available. To overcome the challenge of unknown noise distribution, we estimate the noise characteristic function first and then estimate the CDF using kernel deconvolution in a similar manner, but with an additional ridge parameter to avoid division by zero. It is important to recall that knowledge of the noise distribution was not used for the column feature estimation in Section \ref{sec:alg_noisy}. Hence, it still remains valid.
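To see why the ridge parameter matters, here is a toy numerical sketch (purely illustrative; the function name and sample sizes are our own, not from the paper): the regularized reciprocal $1/(\hat{\phi}_{N,i} + \rho)$ used inside the deconvolution kernel stays bounded by $1/\rho$ even if the estimated characteristic function vanishes, and since $\rho = |\cB_i|^{-7/24} \to 0$, the regularization becomes negligible wherever $\hat{\phi}_{N,i}$ is bounded away from zero.

```python
# Toy sketch of the ridge guard in the deconvolution kernel (illustrative only).
def reg_recip(phi_hat_val, n_obs):
    """Regularized reciprocal 1/(phi_hat + rho) with ridge rho = n_obs**(-7/24)."""
    rho = n_obs ** (-7 / 24)
    return 1.0 / (phi_hat_val + rho)

# Even when the estimated characteristic function is exactly zero, the factor
# stays finite: it is capped at 1/rho = n_obs**(7/24).
guarded = reg_recip(0.0, 10_000)

# As the sample size grows, rho -> 0, so the regularized reciprocal approaches
# the unregularized 1/phi_hat wherever phi_hat is bounded away from zero.
almost_two = reg_recip(0.5, 10 ** 12)   # close to 1/0.5 = 2
```

The choice of the exponent $-7/24$ matches the ridge parameter stated later in this section; any sequence vanishing at a suitable polynomial rate would play the same qualitative role.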
The generic algorithm remains the same as that described in Section \ref{sec:alg_generic}. Step 1 (estimating $\fcol{j}$, ~$j \in [n]$) remains the same as in Section \ref{sec:alg_noisy}, but Step 2 (estimating $F^{(i)}$, ~$i \in [m]$) of the algorithm requires an additional procedure of estimating the noise density because $\phi_N$ is unknown. \paragraph{1. $\hat{q}_{\marg}(j)$: Estimate of $\fcol{j}$, ~$j \in [n]$} The same as in Section \ref{sec:alg_noisy}: see Eqs. \eqref{eqn:Z_marg} and \eqref{eqn:estimate_marg}. \paragraph{2. $\hat{F}^{(i)}$: Estimate of $F^{(i)} = g^{-1}_{x=\frow{i}}$, ~$i \in [m]$} We estimate the distribution over each row by essentially the same procedure as in Section \ref{sec:alg_noisy}. Recall that the characteristic function of the additive noise, $\phi_N$, is unknown and has to be estimated from data, which we describe next. \subparagraph{2-1. $\hat{\phi}_N(t)$: Estimate for $\phi_N(t)$} Since the noise distribution is unknown, we need an auxiliary procedure to estimate the noise density. Here we explain an algorithm to estimate the noise characteristic function $\hat{\phi}_N(t)$. \medskip \noindent {\em Some Background.} Before presenting the noise estimation procedure, we provide intuition behind it. Suppose that we can repeatedly observe the same instance $X_i$ of the target random variable up to independent additive noise, i.e., $Z_{ij} = X_i + N_{ij}$ with $N_{ij}$ independent. Although we don't know the value of $X_i$, we can see that the difference in the observed data entries is equal to the difference between two independent noise instances: $Z_{i1} - Z_{i2} = \left( X_i + N_{i 1} \right) - \left( X_i + N_{i2}\right) = N_{i1} - N_{i2}$. Assuming symmetry in the noise distribution, i.e., $N \equiv -N$ in distribution, $N_{i1} - N_{i2}$ follows the same distribution as the sum of two independent copies of the noise: $N_{i1} - N_{i2} \equiv N_{i1} + N_{i2}$. Therefore, $\phi_{N_{i1} - N_{i2}}(t) = \phi_N(t)^2$.
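As a quick numerical sanity check of this identity (a hypothetical simulation; the Gaussian noise and all variable names are illustrative choices, not part of the paper's setup), the sample analogue of $\phi_{N_{i1} - N_{i2}}(t)$ computed from noisy pairs recovers $\phi_N(t)$ even though the underlying $X_i$ are never observed:

```python
import math
import random

# Hypothetical simulation: repeated measurements Z = X + N with Gaussian N.
random.seed(1)
n, sigma, t = 20000, 0.7, 1.3

diffs = []
for _ in range(n):
    x = random.random()               # unknown signal; cancels in the difference
    z1 = x + random.gauss(0.0, sigma)
    z2 = x + random.gauss(0.0, sigma)
    diffs.append(z1 - z2)             # equals N_1 - N_2: noise only

# phi_{N1 - N2}(t) = phi_N(t)^2, so take the square root of its estimate.
phi_hat = abs(sum(math.cos(t * d) for d in diffs) / n) ** 0.5
phi_true = math.exp(-0.5 * (sigma * t) ** 2)  # Gaussian characteristic function
```

With $2 \times 10^4$ pairs the estimate typically agrees with $\phi_N(t) = e^{-\sigma^2 t^2/2}$ to about two decimal places.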
By symmetry of $N$, we know that $\phi_N(t)$, the Fourier transform of the noise density, is real-valued. In fact, we know $\phi_N(t)$ is not only real-valued but also positive, from the model assumption of supersmooth noise (see Eq. \eqref{eqn:model_supersmooth}). This implies $\phi_{N_1 - N_2} (t) = \phi_N(t)^2$ is also positive and real-valued. Hence, \begin{align*} \phi_{N_1 - N_2} (t) &= \Exp{e^{\img t (N_1 - N_2)}}\\ &= \Exp{\frac{e^{\img t (N_1 - N_2)} + e^{-\img t (N_1 - N_2)}}{2}}\\ &= \Exp{ \cos t (N_1 - N_2)}. \end{align*} Therefore, we can estimate $\phi_N(t)$ by taking the square root of (the absolute value of) the estimate $\hat{\phi}_{N_1 - N_2}(t)$, which is computed as the sample-analog estimator with $n$ independent copies of the noise difference $\{ N_{i1} - N_{i2}\}_{i=1}^n$. Specifically, \[ \hat{\phi}_N(t) = \hat{\phi}_{N_1 - N_2}(t)^{\frac{1}{2}} = \left| \frac{1}{n} \sum_{i=1}^n \cos \left[ t\left(N_{i1} - N_{i2}\right) \right] \right|^{\frac{1}{2}}. \] However, the repeated measurement assumption may not be realistic, because we may not be allowed to measure the same entry multiple times. Therefore, we imitate the setup of repeated measurements by considering two columns $j_1, j_2 \in [n]$ with similar column features $\fcol{j_1} \approx \fcol{j_2}$ so that \begin{align*} Z(i,j_1) - Z(i,j_2) &= \left[ A(i, j_1) + N(i,j_1) \right] - \left[ A(i,j_2) + N(i,j_2) \right]\\ &= \underbrace{ \left[ A(i, j_1) - A(i,j_2) \right] }_{\approx 0,~ \because \fcol{j_1} \approx \fcol{j_2} } + \left[ N(i,j_1) - N(i,j_2) \right]\\ & \approx N(i,j_1) - N(i,j_2). \end{align*} \medskip \noindent {\em Summary of the Noise Density Estimation Procedure.} \begin{enumerate} \item Construct $\cT := \big\{ (i, j_1, j_2) \in [m] \times [n]^2: M(i, j_1) = M(i, j_2) = 1 \text{ and } \hat{q}_{\marg}(j_1) \approx \hat{q}_{\marg}(j_2) \big\}$ as described in Algorithm \ref{alg:setT}. \item For each $i \in [m]$, define $\cT_i$ as $\cT_i := \Big\{ (i',j_1, j_2) \in \cT: i' \neq i\Big\}$.
\item For each $i \in [m]$, estimate the noise characteristic function $\phi_N$ with the triples in $\cT_i$ as \begin{equation}\label{eqn:chN_est} \hat{\phi}_{N, i}(t) = \left| \frac{1}{\left| \cT_i \right|} \sum_{ \left(i', j_1, j_2 \right) \in \cT_i} \cos \Big[ t \left( Z(i', j_1) - Z(i', j_2) \right) \Big] \right|^{1/2}. \end{equation} \end{enumerate} Roughly speaking, $\cT$ is the set of index triples to mimic the repeated measurements. For row $i$, we use $\cT_i$, which is a subset of $\cT$ tailored to exclude the data from row $i$. This refinement of $\cT$ to $\cT_i$ for each row $i$ is done for convenience in the analysis. \begin{algorithm} \SetAlgoLined \KwResult{Return the set of triples $\cT$ for noise density estimation } $J \gets \left\{ j \in [n]: |\cB^j| \geq \frac{mp}{2}\right\}$\; $I \gets \left\{ i \in [m]: |\cB_i \cap J| \geq \frac{|J|p}{2}\right\}$\; $\cT \gets \emptyset$ \; Sort $j \in [n]$ in the increasing order of $\hat{q}_{\marg}(j)$, i.e., find a permutation $\pi$ such that $\hat{q}_{\marg}\left( j \right) \leq \hat{q}_{\marg}\left( j' \right)$ if $\pi(j) < \pi(j')$\; \For{$i \in I$}{ Renumber $j \in \cB_i \cap J$ with $j' \in \left[\left| \cB_i \cap J \right| \right]$ in the increasing order of $\hat{q}_{\marg}\left( j \right)$\; (let $\sigma_i: \cB_i \cap J \subseteq [n] \to \left[\left| \cB_i \cap J \right| \right]$; this map can be induced from $\pi$)\\ $j' \gets 0$\; \While{$j' \leq \left| \cB_i \cap J \right| - 1 $}{ \eIf{$\hat{q}_{\marg}\left( \sigma_i^{-1} (j' + 1)\right) - \hat{q}_{\marg}\left( \sigma_i^{-1} (j')\right) \leq \frac{1}{\sqrt{\left| \cB_i \cap J \right|}}$}{ $\cT \gets \cT \cup \left\{ (i, \sigma_i^{-1}\left(j'\right), \sigma_i^{-1}(j'+1)) \right\}$\; $j' \gets j' + 2$\; }{ $j' \gets j' + 1$\; } } } \caption{Construction of the set $\cT$ for noise density estimation.} \label{alg:setT} \end{algorithm} \subparagraph{2-2. Computing $\hat{F}^{(i)}$} If we blindly replace $\phi_N$ with $\hat{\phi}_{N,i}$ in Eq.
\eqref{eqn:kernel_known}, it might happen that $\hat{\phi}_{N,i}\left(\frac{t}{h}\right) = 0$ while $\phi_K(t) \neq 0$ for some $t$. To avoid the division-by-zero problem, we introduce a ridge parameter $\rho$ in the denominator of the deconvolution kernel. We choose $\rho$ so that it vanishes fast enough as the number of samples increases, which allows a consistent CDF estimator even when the noise distribution is unknown. \medskip \noindent {\em Summary of Estimator.} Recall that $\cB_i$ is the set of column indices $j$ for which $Z(i,j)$ is observed; $\cB_i = \{ j \in [n]: M(i,j) = 1 \}$ (see Eq. \eqref{eqn:set_support}). We define the kernel smoothed CDF estimator with unknown noise density as follows: for any choice of constants $D_1$, $D_2$ such that $D_1 \leq D_{min}$ and $D_2 \geq D_{max}$, \begin{equation}\label{eqn:ECDF_unknown_noise} \hat{F}^{(i)}(z) = \begin{cases} \int_{D_1}^{z } \hat{f}^{(i)}(w) dw, & \text{if } z < D_2,\\ 1, & \text{if } z \geq D_2, \end{cases} \end{equation} where \begin{align} \hat{f}^{(i)}(z) &= \frac{1}{h |\cB_i|} \sum_{j\in \cB_i} \hat{L} \left( \frac{z- Z(i,j)}{h} \right) \text{ and } \label{eqn:unknown_density}\\ \hat{L}(z) &= \frac{1}{2\pi} \int e^{-\img tz} \frac{\phi_K(t)}{\hat{\phi}_{N,i}\left(\frac{t}{h}\right) + \rho}dt. \label{eqn:kernel_estimated} \end{align} The kernel bandwidth parameter is $h = \left(4\gamma\right)^{\frac{1}{\beta}} \left( \log \left| \cB_i \right| \right)^{-\frac{1}{\beta}}$, where $\beta$ and $\gamma$ are smoothness parameters for the noise (see Eq. \eqref{eqn:model_supersmooth}), even though the exact noise density is unknown. In this paper, we choose the ridge parameter $\rho = |\cB_i|^{-7/24}$. \paragraph{3. $\hat{A}(i,j)$: Estimate of $A(i,j)$, ~$i \in [m], j \in [n]$} For each $i \in [m]$, let $\hat{g}^{(i)} = \left(\hat{F}^{(i)}\right)^{-1}$ denote the quantile function (right pseudo-inverse) associated with $\hat{F}^{(i)}$. Plugging Eq.
\eqref{eqn:estimate_marg} into it leads to the estimate of the matrix entry: \begin{equation}\label{eqn:estimate_noisy_unknown} \hat{A}(i,j) = \hat{g}^{(i)}\left( \hat{q}_{\marg}(j) \right). \end{equation} \subsection{Algorithm Analysis} The analysis is done in parallel to those in Sections \ref{sec:analysis_noiseless} and \ref{sec:analysis_noisy_known}. Since the quantile estimator is the same as before, we can reuse Lemma \ref{lem:noisy_quantile} to show that the quantile estimates for all $j \in [n]$ concentrate around the true values (the column features in our model) with high probability. It suffices to show that the regularized deconvolution kernel ECDF consistently estimates the true CDF even when the distribution of the additive noise is unknown. Lemma \ref{lem:noisy_unknown_cdf} ensures the deconvolution kernel ECDF $\hat{F}^{(i)}$ uniformly converges to $F^{(i)}$ with high probability. \subsubsection{Concentration of $\hat{q}_{\marg}(j)$ around $\fcol{j}$, ~$j \in [n]$} The quantile estimator $\hat{q}_{\marg}(j)$, as defined in Eq. \eqref{eqn:estimate_marg}, is shown to be concentrated around $\fcol{j}$ under the assumption on the noise as stated in Section \ref{sec:noise}. See Lemma \ref{lem:noisy_quantile} for the detailed statement. \subsubsection{Concentration of $\hat{F}^{(i)}$ around $F^{(i)}$, ~$i \in [m]$}\label{sec:conc_CDF_noisy_unknown} Here we shall establish that $\hat{F}^{(i)}$ converges uniformly to $F^{(i)}$ in the large-sample limit. Specifically, we obtain a lemma (Lemma \ref{lem:noisy_unknown_cdf}) that provides an exponentially decaying probabilistic tail bound for this uniform convergence (see Lemma \ref{lem:noisy_known_cdf} for comparison with the known-noise case).
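As a small self-contained numerical illustration of uniform CDF convergence (using the plain empirical CDF of a Uniform(0,1) sample rather than the deconvolution estimator, which would require the full pipeline), the sup-norm error shrinks as the sample size grows:

```python
import random

def sup_ecdf_error(n, seed):
    """sup_z |F_n(z) - F(z)| for a Uniform(0,1) sample of size n."""
    random.seed(seed)
    xs = sorted(random.random() for _ in range(n))
    # For Uniform(0,1), the supremum is attained at the sample points, where
    # the ECDF jumps from k/n to (k+1)/n while the true CDF equals x_(k+1).
    return max(
        max(abs((k + 1) / n - x), abs(k / n - x)) for k, x in enumerate(xs)
    )

err_small = sup_ecdf_error(100, 0)
err_large = sup_ecdf_error(10_000, 0)
```

The deconvolution estimator converges more slowly (logarithmically under supersmooth noise, as the lemma below quantifies), but the qualitative picture of uniform convergence is the same.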
\medskip \noindent {\em Notation.} We recall the absolute constants $C_1 \equiv \frac{l^2}{2(D_{max} - D_{min})^2}$, $C_2 \equiv \frac{l^2}{8\sigma^2}$ and define \begin{align*} c_{\Delta A} \equiv 8\sqrt{\pi}\left( \frac{\sqrt{e^{C_1}} + \sqrt{2}}{\sqrt{C_1}} + \frac{\sqrt{e^{C_2}} + \sqrt{2}}{\sqrt{C_2}} \right). \end{align*} Define a monotone increasing function $s_{\phi}: \Ints_+ \to \Reals_+$ with $c_{\Delta A}$ as (see item 6 in Appendix \ref{sec:conditioning} for reasons behind this definition) \begin{equation}\label{eqn:sphi} s_{\phi} (x) = \frac{8 \sigma (\log x)^{\frac{1}{\beta}} }{(4\gamma)^{\frac{1}{\beta}}} \frac{ \sqrt{ \log (4mnp)} }{(mnp)^{\frac{1}{4}} } + \frac{2 (\log x)^{\frac{1}{\beta}}}{(4\gamma)^{\frac{1}{\beta}}} \left[ \frac{c_{\Delta A}}{\sqrt{mp}} + \frac{2L\sqrt{2}}{\sqrt{np}}(1 + \sqrt[4]{np}) \right]. \end{equation} Note that $\sigma, \beta, \gamma$ are model parameters for the noise, and $L, l$ are Lipschitz constants for the class of latent functions. The absolute constant $C_3 = C_3(l)$ (see Lemma \ref{lem:mean_difference_tilde}) depends only on $l$. The bandwidth parameter $h$ is chosen as $h = (4 \gamma)^{\frac{1}{\beta}} (\log |\cB_i| )^{-\frac{1}{\beta}}$ and the ridge parameter $\rho = |\cB_i|^{-\frac{7}{24}}$. $K_{max}=\max_{t \in [-1,1]} \left| \phi_K(t) \right| < \infty$ is the maximum modulus of the kernel used. \medskip \noindent {\em Error thresholds.} Our objective in this section is to obtain a probabilistic tail bound on the uniform convergence of $\hat{F}^{(i)}$ to $F^{(i)}$. However, we cannot expect convergence up to arbitrary precision; there exists a fundamental limit. We define threshold values for the error in CDF estimation for convenience in presenting the results.
For $i \in [m]$, we let \begin{align} \toi &\equiv C_3 \left( \log \left| \cB_i \right| \right)^{-1/\beta} + \frac{2K_{max}(D_2 - D_1)}{ \pi h } \left( s_{\phi}\big(|\cB_i| \big) + \rho\right), \quad\text{and} \label{eqn:t0}\\ \Toi &\equiv \toi + \frac{4K_{max} (D_2 - D_1) }{\pi \left( 4\gamma \right)^{\frac{1}{\beta}} } \left| \cB_i \right|^{-\frac{5}{24}} \left( \log \left| \cB_i \right| \right)^{\frac{1}{\beta}}. \label{eqn:T0} \end{align} Note that these are not constants but functions which depend on $|\cB_i|$. We also remark that $C_3 \left( \log \left| \cB_i \right| \right)^{-1/\beta}$ is the essential limit for the convergence, while the other slack terms are introduced for the convenience of analysis. Recall we defined the following conditioning events (see Eq. \eqref{eqn:sufficient_overlap}) to make the probabilistic tail bound more amenable for the analysis: for any $i \in [m]$, \begin{align*} \Erow &\equiv \Big\{ |\cB_i| \geq \frac{np}{2}\Big\},\quad \text{~and~}\quad \Erowp \equiv \Big\{ |\cB_i| \leq 2np\Big\}. \end{align*} We define $\tos$ (resp. $\Tos$) as the supremum of $\toi$ (resp. $\Toi$) under $\Erow \cap \Erowp$: \begin{align} \tos &\equiv C_3 \left( \log \left( \frac{np}{2} \right) \right)^{-1/\beta} + \frac{2K_{max}(D_2 - D_1)}{ \pi h } \Big( s_{\phi}\big( 2np \big) + \rho \Big), \quad\text{and} \label{eqn:t0s}\\ \Tos &\equiv \tos + \frac{4K_{max} (D_2 - D_1) }{\pi \left( 4\gamma \right)^{\frac{1}{\beta}} } \left( \frac{np}{2} \right)^{-\frac{5}{24}} \left( \log (2np) \right)^{\frac{1}{\beta}}. 
\label{eqn:T0s} \end{align} \medskip \noindent {\em Lemma statements.} We define a function $\tilde{\Psi}_{m,n,p}: \Ints_+ \to \Reals_+$ as \begin{align} \tilde{\Psi}_{m,n,p}\left( x \right) &= \exp\left( - \frac{n}{16} \right) + \exp \left( - \frac{m}{16} \right) + \exp \left( - \frac{mnp}{3} \right) \nonumber\\ &\quad + n \exp\bigg( - n^{\frac{1}{2}} \bigg) + n \exp\bigg( -\frac{1}{3\sqrt{2}} n^{\frac{3}{4}} \bigg) + \frac{128}{mnp} \nonumber\\ &\quad + \exp \left(- \frac{\sigma^4 (\log x)^{\frac{4}{\beta}} }{(4\gamma)^{\frac{4}{\beta}}} \log^2 (4mnp) + \log (4mnp) \right) \nonumber\\ &\quad + \exp \Bigg( - \frac{(\log x)^{\frac{2}{\beta}}}{256 (4\gamma)^{\frac{2}{\beta}}} \left[ c_{\Delta A}\sqrt{n} + 2L\sqrt{2m} \right]^2 \nonumber\\ &\qquad\qquad\quad +\frac{1}{2} \Big( \log{mnp} + \log \log (4mnp) \Big) + \log \frac{16\sigma}{c_{\Delta A}+ 2L\sqrt{2}} \Bigg). \label{eqn:Remainder.tilde} \end{align} In the following lemma, we will let $\tilde{\Psi}_{m,n,p}\left( |\cB_i|\right)$ denote the remainder term which does not depend on the error level $t$. For completeness, we note that the remainder term is the sum of upper bounds in Eq. \eqref{eqn:EJc} - \eqref{eqn:Ephic}, which vanishes as $mp, np \to \infty$. Recall that $C_4 = \frac{ B K_{max} \left( D_{2} - D_{1} \right) }{\pi \left( 4\gamma \right)^{\frac{1}{\beta}} }$, where $B \geq 1$. \begin{lemma}\label{lem:noisy_unknown_cdf} For any $i \in [m]$, and for any $t \geq \Toi$, \begin{align*} &\Prob{ \sup_{z \in [D_1, D_2]} \left| \hat{F}^{(i)} (z) - F^{(i)}(z) \right| > t}\\ &\qquad \leq | \cB_i |^{\frac{1}{6}} \exp\left( \frac{-\left| \cB_i \right|^{5/12} }{8 C_4^2 \left( \log \left| \cB_i \right| \right)^{\frac{2}{\beta}} }(t - \toi)^2 \right) + \tilde{\Psi}_{m,n,p}\left( |\cB_i|\right). \end{align*} \end{lemma} We state a useful consequence of the above result with conditioning events $\Erow, \Erowp$. 
Note that $\tilde{\Psi}_{m,n,p}\left( \frac{np}{2} \right)$ is an upper bound on $\tilde{\Psi}_{m,n,p}\left( |\cB_i|\right)$ under $\Erow \cap \Erowp$. \begin{corollary}\label{coro:unknown_CDF_uniform} For any $i \in [m]$, and any $t \geq \Tos$, \begin{align*} &\Prob{ \left. \sup_{z \in [D_1, D_2]} \left| \hat{F}^{(i)} (z) - F^{(i)}(z) \right| > t \right| \Erow, \Erowp}\\ &\qquad \leq (2np)^{\frac{1}{6}}\exp\left( \frac{- \left( \frac{np}{2} \right)^{5/12} }{8 C_4^2 \left( \log (2np) \right)^{\frac{2}{\beta}} }(t - \tos)^2 \right) + \tilde{\Psi}_{m,n,p}\left( \frac{np}{2} \right). \end{align*} \end{corollary} \subsection{Proof of Theorem \ref{thm:simple_unknown}} In this section, we complete the proof of Theorem \ref{thm:simple_unknown}. The proof follows a similar structure to that of Theorem \ref{thm:simple_known}. First, we establish a tail bound on $|\hat{A}(i,j) - A(i,j)|$ and then integrate it to obtain a bound on the mean squared error (MSE). The main difference is that we use Lemma \ref{lem:noisy_unknown_cdf} (Corollary \ref{coro:unknown_CDF_uniform}) in place of Lemma \ref{lem:noisy_known_cdf} due to the lack of knowledge of the noise distribution $\phi_N(t)$. \subsubsection{Tail Bound on $|\hat{A}(i,j) - A(i,j)|$} For a given choice of parameters $t > 0$ and $L, \Qf, m, n, p$ and $\Tos$ as defined before, we define conditions in the same manner as in Eq. \eqref{eqn:technical_conditions} (we newly define $E_3$ instead of $E_2$ there): \begin{align}\label{eqn:technical_conditions_unknown} E_1 & =\Big\{ t \leq 8L \Qf\left(\frac{mp}{2} \right) \Big\} \qquad \text{and} \qquad E_3 = \Big\{ t \leq 2L \Tos \Big\}.
\end{align} \begin{theorem}\label{thm:tail_noisy_unknown} For each $(i,j) \in [m] \times [n]$, for any $t \geq 0$, \begin{align*} &\Prob{\left| \hat{A}(i, j) - A(i,j)\right| > t}\\ &\qquad \leq \Ind{E_1 } + \exp\left( -\frac{n( \frac{t}{4L} - \Qf\left(\frac{mp}{2}\right))}{3} \right) \Ind{E_1^c }\\ &\qquad\quad + \Ind{E_3} + (2np)^{\frac{1}{6}} \exp\left( \frac{- \left( \frac{np}{2} \right)^{5/12} }{8 C_4^2 \left( \log (2np) \right)^{\frac{2}{\beta}} }(t - \tos)^2 \right) \Ind{E_3^c }\\ &\qquad\quad + \exp\left( -\frac{n t^2}{8L^2} \right) + \exp\left( - \frac{mp}{8} \right) + 2 \exp\left(-\frac{np}{8} \right) + \tilde{\Psi}_{m,n,p}\left( \frac{np}{2} \right), \end{align*} where $\tos, \Tos$ and $\tilde{\Psi}_{m,n,p}\left( \frac{np}{2} \right)$ are as defined previously. \end{theorem} Note that the terms in the last line, which are independent of $t$, decay to $0$ as $mp, np \to \infty$. \begin{proof} The proof follows the same logic as in the proof of Theorem \ref{thm:tail_noisy_known}, except that we use the upper bound from Corollary \ref{coro:unknown_CDF_uniform} in lieu of Corollary \ref{coro:noisy_CDF_uniform}. Let $\theta^* \equiv F^{(i)}\left( \hat{A}(i,j) \right) = F^{(i)}\left( \hat{g}^{(i)}\left(\hat{q}_{\marg}(j) \right) \right) $. Since $\hat{F}^{(i)}$ is continuous, $\left|\theta^* - \hat{q}_{\marg}(j)\right| \leq \left\| \hat{F}^{(i)} - F^{(i)} \right\|_{\infty}$.
By the same line of argument as in the proof of Theorem \ref{thm:tail_noisy_known}, since $\hat{A}(i,j) = \hat{g}^{(i)}\left(\hat{q}_{\marg}(j) \right)=g \left( \frow{i}, \theta^* \right)$, and $g$ is $(l, L)$-biLipschitz, \begin{align*} \left|\hat{A}(i,j) - A(i,j)\right| &= \left|g \left( \frow{i}, \fcol{j} \right) - g \left( \frow{i}, \theta^* \right)\right| \leq L \left| \fcol{j} - \theta^* \right|\\ &\leq L \left( \left| \fcol{j} - \hat{q}_{\marg}(j) \right| + \Big| \hat{q}_{\marg}(j) - \theta^* \Big| \right)\\ &\leq L \left( \left| \fcol{j} - \hat{q}_{\marg}(j) \right| + \Big\| \hat{F}^{(i)} - F^{(i)} \Big\|_{\infty}\right). \end{align*} If $\left| \fcol{j} - \hat{q}_{\marg}(j) \right| \leq \frac{t}{2L}$ and $\left\| \hat{F}^{(i)} - F^{(i)} \right\|_{\infty} \leq \frac{t}{2L}$, then $\left| \hat{A}(i,j) - A(i,j)\right| \leq t$. We can achieve the following upper bound by applying the union bound to the contrapositive. We let $E_{(i)} := \Erow \cap \Erowp$ in this proof. Then it follows that \begin{align*} &\Prob{\left| \hat{A}(i, j) - A(i,j)\right| > t} \nonumber\\ &\qquad\leq \Prob{ \left| \hat{q}_{\marg}(j) - \theta_{col}^{(j)} \right| > \frac{t}{2L} } + \Prob{\sup_{z \in \Reals} \left| \hat{F}^{(i)}(z) - F^{(i)}(z) \right| > \frac{t}{2L} } \nonumber\\ &\qquad\leq \Prob{ \left| \hat{q}_{\marg}(j) - \theta_{col}^{(j)} \right| > \frac{t}{2L} }\\ &\qquad\quad + \Prob{ \left. \sup_{z \in \Reals} \left| \hat{F}^{(i)}(z) - F^{(i)}(z) \right| > \frac{t}{2L} \right| E_{(i)}} + \Prob{E_{(i)}^c}.
\end{align*} Because we have a trivial upper bound $1$ on probability, it follows from Lemma \ref{lem:noisy_quantile} that \begin{align*} &\Prob{ \left| \hat{q}_{\marg}(j) - \theta_{col}^{(j)} \right| > \frac{t}{2L} }\\ &\qquad \leq \Ind{t \leq 8L \Qf\left(\frac{mp}{2}\right) }\\ &\qquad\quad+ \Ind{t \geq 8L \Qf\left(\frac{mp}{2}\right) }\\ &\qquad\qquad\quad \times \left[ \exp\left( -\frac{n t^2}{8L^2} \right) + \exp\left( -\frac{n( \frac{t}{4L} - \Qf\left(\frac{mp}{2}\right))}{3} \right) + \exp\left( - \frac{mp}{8} \right)\right]. \end{align*} In a similar manner, we have \begin{align*} &\Prob{ \left. \sup_{z \in \Reals} \left| \hat{F}^{(i)}(z) - F^{(i)}(z) \right| > \frac{t}{2L} \right| E_{(i)}}\\ &\qquad \leq \Ind{t \leq 2L \Tos } \\ &\qquad\quad + \Ind{t \geq 2L \Tos } (2np)^{\frac{1}{6}} \exp\left( \frac{- \left( \frac{np}{2} \right)^{5/12} }{8 C_4^2 \left( \log (2np) \right)^{\frac{2}{\beta}} }(t - \tos)^2 \right)\\ &\qquad\quad + \Ind{t \geq 2L \Tos } \tilde{\Psi}_{m,n,p}\left( \frac{np}{2} \right). \end{align*} Note that $t \geq 2L \Tos$ implies that $\frac{t}{2L} \geq \tos$. We used an upper bound on $\Prob{E_{(i)}^c}$ obtained from the binomial Chernoff bound: \begin{align*} \Prob{E_{(i)}^c} &= \Prob{ \left| \cB_i \right| < \frac{np}{2} \text{ or } \left| \cB_i \right| > 2np }\\ &\leq \Prob{\left| \cB_i \right| < \frac{np}{2}} + \Prob{\left| \cB_i \right| > 2np }\\ &\leq \exp\left(-\frac{np}{8} \right) + \exp\left(-\frac{np}{3} \right)\\ &\leq 2 \exp\left(-\frac{np}{8} \right). \end{align*} Substituting these three upper bounds back to Eq. 
\eqref{eqn:prob_aggr}, we can conclude that \begin{align*} &\Prob{\left| \hat{A}(i, j) - A(i,j)\right| > t}\\ &\qquad \leq \Ind{t \leq 8L \Qf\left(\frac{mp}{2}\right) } + \Ind{t \leq 2L \Tos }\\ &\qquad\quad + \exp\left( -\frac{n( \frac{t}{4L} - \Qf\left(\frac{mp}{2}\right))}{3} \right) \Ind{t \geq 8L \Qf\left(\frac{mp}{2}\right) }\\ &\qquad\quad + \Ind{t \geq 2L \Tos } (2np)^{\frac{1}{6}} \exp\left( \frac{- \left( \frac{np}{2} \right)^{5/12} }{8 C_4^2 \left( \log (2np) \right)^{\frac{2}{\beta}} }(t - \tos)^2 \right)\\ &\qquad\quad + \exp\left( -\frac{n t^2}{8L^2} \right) + \exp\left( - \frac{mp}{8} \right) + 2 \exp\left(-\frac{np}{8} \right) + \tilde{\Psi}_{m,n,p}\left( \frac{np}{2} \right). \end{align*} \end{proof} \subsubsection{Mean Squared Error} Let $\hat{\varphi}$ denote the estimator which maps $Z$ to $\hat{A}$. By the same line of arguments as in Eq. \eqref{eqn:integration}, the mean squared error of the estimator $\hat{\varphi}$ is given as \begin{align} MSE\left( \hat{\varphi} \right) &= \int_0^{\infty} 2u\Prob{\left| \hat{A}(i,j) - A(i,j) \right| > u } du. \label{eqn:integration2} \end{align} Also, from the model assumption and the construction of the estimators, the estimation error is bounded above: \[ \left| \hat{A}(i,j) - A(i,j) \right| \leq D_2 - D_1. \] Let $D = D_2 - D_1$ denote the upper bound. Note that $D$ is a constant independent of $m, n$. For brevity, we introduce some shorthand notation. We let \begin{align*} c_3 \equiv \frac{ \left( \frac{np}{2} \right)^{5/12} }{8 C_4^2 \left( \log (2np) \right)^{\frac{2}{\beta}} }. \end{align*} We define $\Psi(m,n,p)$ to capture all constant terms in the probabilistic bound of Theorem \ref{thm:tail_noisy_unknown}.
That is to say, \begin{align} \Psi(m,n,p) &\equiv \exp\left( - \frac{mp}{8} \right) + 2 \exp\left(-\frac{np}{8} \right) + \tilde{\Psi}_{m,n,p}\left( \frac{np}{2} \right) \nonumber\\ &= \exp\left( - \frac{mp}{8} \right) + 2 \exp\left(-\frac{np}{8} \right) \nonumber\\ &\quad + \exp\left( - \frac{n}{16} \right) + \exp \left( - \frac{m}{16} \right) + \exp \left( - \frac{mnp}{3} \right) \nonumber\\ &\quad + n \exp\bigg( - n^{\frac{1}{2}} \bigg) + n \exp\bigg( -\frac{1}{3\sqrt{2}} n^{\frac{3}{4}} \bigg) + \frac{128}{mnp} \nonumber\\ &\quad + \exp \left(- \frac{\sigma^4 (\log \frac{np}{2})^{\frac{4}{\beta}} }{(4\gamma)^{\frac{4}{\beta}}} \log^2 (4mnp) + \log (4mnp) \right) \nonumber\\ &\quad + \exp \Bigg( - \frac{(\log \frac{np}{2})^{\frac{2}{\beta}}}{256 (4\gamma)^{\frac{2}{\beta}}} \left[ c_{\Delta A}\sqrt{n} + 2L\sqrt{2m} \right]^2 \nonumber\\ &\qquad\qquad\quad +\frac{1}{2} \Big( \log{mnp} + \log \log (4mnp) \Big) + \log \frac{16\sigma}{c_{\Delta A}+ 2L\sqrt{2}} \Bigg). \label{eqn:Remainder} \end{align} \begin{theorem}[Main theorem 3 -- full version of Theorem \ref{thm:simple_unknown}; unknown noise]\label{thm:MSE_noisy_unknown} The mean squared error of the deconvolution kernel estimator $\hat{\varphi}$ is bounded above as follows: \begin{align*} MSE\left( \hat{\varphi} \right) &\leq 4L^2{\Tos}^2 + 64L^2\Qf\left(\frac{mp}{2}\right)^2 \\ &\quad + 8L\Qf\left(\frac{mp}{2}\right) \sqrt{\frac{3L\pi}{n}} + 4L^2 (2np)^{\frac{1}{6}} \left[ \frac{1}{ c_3} + \tos \sqrt{\frac{\pi}{c_3}} \right]\\ &\quad + \frac{8L^2}{n} + \frac{288L^2}{n^2} + D^2 \Psi(m,n,p). \end{align*} The upper bound diminishes to $0$ as $mp, np \to \infty$ at the rate of $(\log np )^{-\frac{2}{\beta}}$. \end{theorem} We remark that $4L^2{\Tos}^2$ is the asymptotically dominant term, which scales as $O \left( (\log np )^{-\frac{2}{\beta}} \right)$ (see Eq. \eqref{eqn:T0s} for the definition of $\Tos$). All the other terms decay at least at a polynomial rate.
For example, $\Qf\left(\frac{mp}{2}\right) = O \left(\frac{1}{\sqrt{mp}} \right)$ (see Eq. \eqref{eqn:qstar}). To see the polynomial convergence of $4L^2 (2np)^{\frac{1}{6}} \left[ \frac{1}{ c_3} + \tos \sqrt{\frac{\pi}{c_3}} \right]$, recall from Eqs. \eqref{eqn:sphi} and \eqref{eqn:t0s} that \begin{align*} &\tos \equiv C_3 \left( \log \left( \frac{np}{2} \right) \right)^{-1/\beta} + \frac{2K_{max}(D_2 - D_1)}{ \pi h } \Big( s_{\phi}\big( 2np \big) + \rho \Big), \quad\text{where}\\ &s_{\phi} (2np) = \frac{8 \sigma (\log (2np))^{\frac{1}{\beta}} }{(4\gamma)^{\frac{1}{\beta}}} \frac{ \sqrt{ \log (4mnp)} }{(mnp)^{\frac{1}{4}} }\\ &\qquad\qquad\quad + \frac{2 (\log (2np))^{\frac{1}{\beta}}}{(4\gamma)^{\frac{1}{\beta}}} \left[ \frac{c_{\Delta A}}{\sqrt{mp}} + \frac{2L\sqrt{2}}{\sqrt{np}}(1 + \sqrt[4]{np}) \right]. \end{align*} We can see that $\left[ \frac{1}{ c_3} + \tos \sqrt{\frac{\pi}{c_3}} \right] = O\left( (np)^{-\frac{5}{24}} \right)$ because $\tos \sqrt{\frac{\pi}{c_3}}$ dominates asymptotically. \begin{proof}[Proof of Theorem \ref{thm:MSE_noisy_unknown}] In order to achieve an upper bound on the MSE for the kernel density estimator with unknown noise, $\hat{\varphi}$, we integrate the tail probability bound from Theorem \ref{thm:tail_noisy_unknown}. First of all, we recall from Eqs. \eqref{eqn:integral} and \eqref{eqn:Gamma2} that \begin{align*} \int_0^{\infty} u e^{-a u^2} du =\frac{1}{2a}, \quad\text{and}\quad \int_0^{\infty} u e^{-au} du = \frac{1}{a^2}. \end{align*} Also, we know that \begin{equation}\label{eqn:half_normal} \int_0^{\infty} e^{-au^2} du = \frac{1}{2}\sqrt{\frac{\pi}{a}}.
\end{equation} Now, the mean squared error can be written in the following form: \begin{align} MSE\left( \hat{\varphi} \right) &= \int_0^{D} 2u \Prob{\left| \hat{A}(i, j) - A(i,j)\right| > u} du \nonumber\\ &\leq \int_0^{D} 2u \Psi(m,n,p) du + \int_0^{8L\Qf\left(\frac{mp}{2}\right)} 2u du + \int_0^{2L \Tos} 2u du \nonumber\\ &\quad + \int_0^{D} 2u\exp\left( -\frac{n u^2}{8L^2} \right) du \label{eqn:term1inMSEhat}\\ &\quad + \int_{8L\Qf\left(\frac{mp}{2}\right)}^{D} 2u\exp\left( -\frac{n( \frac{u}{4L} - \Qf\left(\frac{mp}{2}\right))}{3} \right)du \label{eqn:intermediate_MSE_hat}\\ &\quad + \int_{2L \Tos}^D 2u (2np)^{\frac{1}{6}} \exp\left( \frac{- \left( \frac{np}{2} \right)^{5/12} }{8 C_4^2 \left( \log (2np) \right)^{\frac{2}{\beta}} }\left(\frac{u}{2L} - \tos \right)^2 \right) du \label{eqn:complicated_integral.hat}. \end{align} We can reuse some calculations from the proof of Theorem \ref{thm:MSE_noisy_known}. Note that the term in Eq. \eqref{eqn:term1inMSEhat} is the same as that in Eq. \eqref{eqn:term1inMSEtilde}, and Eq. \eqref{eqn:intermediate_MSE_hat} is the same as Eq. \eqref{eqn:intermediate_MSE_tilde}. Therefore, \begin{align*} Eq. \eqref{eqn:term1inMSEhat} &\leq \int_0^{\infty} 2u\exp\left( -\frac{n u^2}{8L^2} \right) du = \frac{8L^2}{n},\\ Eq. \eqref{eqn:intermediate_MSE_hat} &\leq \frac{288L^2}{n^2} + 8L\Qf\left(\frac{mp}{2}\right) \sqrt{\frac{3L\pi}{n}}. \end{align*} It remains to compute an upper bound on the term in Eq. \eqref{eqn:complicated_integral.hat}. \begin{align*} Eq.
\eqref{eqn:complicated_integral.hat} &= 2 (2np)^{\frac{1}{6}} \int_{2L \Tos}^{D} u \exp\left(- c_3 \left(\frac{u}{2L} - \tos \right)^2 \right) du\\ &= 2 (2np)^{\frac{1}{6}} \int_{\Tos - \tos}^{\frac{D}{2L} - \tos} (2L)^2(v+ \tos) \exp\left(- c_3 v^2 \right) dv\\ &\leq 8L^2 (2np)^{\frac{1}{6}} \left[ \int_{0}^{\infty} v \exp\left(- c_3 v^2 \right) dv + \tos \int_{0}^{\infty} \exp\left(- c_3 v^2 \right) dv \right] \\ &= 4L^2 (2np)^{\frac{1}{6}} \left[ \frac{1}{ c_3} + \tos \sqrt{\frac{\pi}{c_3}} \right]. \end{align*} The second line follows by substituting $v = \frac{u}{2L} - \tos$, and the third line follows from the fact that $\Tos - \tos \geq 0$. Plugging these upper bounds back into Eqs. \eqref{eqn:term1inMSEhat}, \eqref{eqn:intermediate_MSE_hat} and \eqref{eqn:complicated_integral.hat}, we can obtain the following upper bound \begin{align*} MSE\left( \hat{\varphi} \right) &\leq D^2 \Psi(m,n,p) + \left[ 8L\Qf\left(\frac{mp}{2}\right) \right]^2 + \Big[ 2L\Tos \Big]^2 + \frac{8L^2}{n} + \frac{288L^2}{n^2}\\ &\quad + 8L\Qf\left(\frac{mp}{2}\right) \sqrt{\frac{3L\pi}{n}} + 4L^2 (2np)^{\frac{1}{6}} \left[ \frac{1}{ c_3} + \tos \sqrt{\frac{\pi}{c_3}} \right]. \end{align*} Rearranging the terms in the increasing order of convergence rates concludes the proof. \end{proof} \section{Theoretical Result} Assuming row-wise monotonicity, we interpret the inverse of the latent function, $F^{(i)} = g^{-1}\left( \frow{i}, \cdot \right)$, as a distribution function. In Section \ref{sec:alg_adapt}, we introduced three estimators for $F^{(i)}$ under different noise scenarios: 1) $\breve{F}^{(i)}$ (see Eq. \eqref{eqn:ECDF_noiseless}) when there is no noise; 2) $\tilde{F}^{(i)}$ (see Eq. \eqref{eqn:ECDF_known_noise}) when the distribution of the additive noise is known; and 3) $\hat{F}^{(i)}$ (see Eq. \eqref{eqn:ECDF_unknown_noise}) when the distribution of the additive noise is not known but has to be estimated.
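The quantile-matching idea behind all three estimators can be sketched in the noiseless, fully observed case as follows (a toy simulation; the choice $g(u, v) = u + v$ and all variable names are illustrative, not part of the paper's setup): column features are estimated by average within-row ranks, the row-wise quantile function is taken as the empirical quantile of the row, and plugging one into the other reconstructs the matrix.

```python
import random

# Toy noiseless, fully observed instance of quantile matching (illustrative).
random.seed(0)
m, n = 30, 40
theta_row = [random.random() for _ in range(m)]
theta_col = [random.random() for _ in range(n)]
A = [[theta_row[i] + theta_col[j] for j in range(n)] for i in range(m)]

# Column feature estimate: average empirical-CDF position within each row.
q_hat = [
    sum(sum(v <= A[i][j] for v in A[i]) / n for i in range(m)) / m
    for j in range(n)
]

# Row-wise quantile function: right pseudo-inverse of the row's empirical CDF.
def g_hat(i, q):
    ordered = sorted(A[i])
    k = min(n - 1, max(0, int(round(q * n)) - 1))
    return ordered[k]

# Quantile matching: plug the column quantile into the row quantile function.
A_hat = [[g_hat(i, q_hat[j]) for j in range(n)] for i in range(m)]
max_err = max(abs(A_hat[i][j] - A[i][j]) for i in range(m) for j in range(n))
```

In this idealized setting the within-row rank of column $j$ is the same in every row, so the reconstruction is exact; noise and missing entries are what make the deconvolution and concentration arguments of the preceding sections necessary.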
With these estimated distributions, our matrix estimator can be defined following the generic procedure described in Section \ref{sec:algorithm_generic}: \begin{align*} \breve{A}(i,j) &= \breve{g}^{(i)}\left(\hat{q}(j) \right),\\ \tilde{A}(i,j) &= \tilde{g}^{(i)}\left(\hat{q}_{\marg}(j) \right),\\ \hat{A}(i,j) &= \hat{g}^{(i)}\left(\hat{q}_{\marg}(j) \right), \end{align*} for all $(i,j) \in [m] \times [n]$. In this section, we present our main results on the consistency of these three estimators. \subsection{When there is no noise} \begin{theorem}[Main theorem 1; noiseless MSE]\label{thm:MSE_noiseless} The noiseless estimator $\breve{\varphi}$ is consistent as long as $p = \Omega\left(\max\left\{\frac{\log n}{m}, \frac{\log m}{n}\right\} \right)$. Specifically, the mean squared error of $\breve{\varphi}$ is bounded above as follows: \begin{align*} MSE\left( \breve{\varphi} \right) & \leq\frac{36L^2}{mnp^2} + \frac{18L^2}{np}\\ & + \left( D_{max} - D_{min} \right)^2 \left[m \exp\left( -\frac{np}{8} \right) + n \exp\left( -\frac{mp}{8} \right) \right]. \end{align*} \end{theorem} For example, when $p = 16 \max \left\{ \frac{\log n}{m}, \frac{\log m}{n}\right\}$, the mean squared error $MSE\left( \breve{\varphi} \right) \leq \frac{9L^2}{64 \log m \log n} + \frac{9 L^2}{8 \log m} + \left( D_{max} - D_{min} \right)^2 \left( \frac{1}{m} + \frac{1}{n} \right) \to 0$ as $m, n \to \infty$. \subsection{When the distribution of additive noise is known} We suppose that $M = O(n)$.
For brevity of the theorem statement, we define the following constants \begin{align*} c &= \min \left\{\frac{l^2}{16\left( D_{max} - D_{min} \right)^2}, \frac{l^2}{64\sigma^2} \right\},\\ c_1 &= \frac{ \pi^2 \left(4\gamma \right)^{\frac{2}{\beta}} }{8B^2 K^2_{max}\left( \log \left( 2np \right) \right)^{\frac{2}{\beta}}}\left( \frac{np}{2} \right)^{1/6},\\ c_2 &= C \left( \log \frac{np}{2} \right)^{-1/\beta}, \end{align*} and a function \begin{align*} &\psi(m,n,p,s)\\ &=n \exp\left( - \frac{ns}{3} \right) + 4n \exp\left(-c (mp)s^{2}\right) + 2m \exp\left( -\frac{np}{8} \right) + n \exp\left( -\frac{mp}{8} \right). \end{align*} Here, $C$ is a constant which controls the bias of $\tilde{F}^{(i)}$ as described in Lemma \ref{lem:mean_difference_tilde}. \begin{theorem}[Main theorem 2; MSE with known noise]\label{thm:MSE_noisy_known} When the distribution of additive noise is known, the estimator $\tilde{\varphi}$ is consistent as long as $p = \Omega\left(\max\left\{\frac{\log n}{m}, \frac{\log m}{n}\right\} \right)$. Specifically, the mean squared error of $\tilde{\varphi}$ is bounded above as follows: \begin{align*} MSE\left( \tilde{\varphi} \right) &\leq \frac{16L^2}{n} + \frac{1152L^2}{n^2} + \frac{8L^2}{c_1} + 8L^2 c_2^2 + \psi(m,n,p,s) M^2. \end{align*} \end{theorem} \subsection{When the distribution of additive noise also has to be estimated} When the noise distribution is unknown, it must first be estimated from data, for example by deconvolution techniques in the spirit of [Delaigle et al., 2008]. \begin{theorem}[Main theorem 3; noise is also estimated] a \end{theorem}
\section{Introduction} Painting, like Vincent van Gogh's ``The Starry Night'', has attracted people for many years. It is one of the most popular art forms for creative expression of the conceptual intention of the practitioner. Since the 1990s, computer scientists have studied artistic work in order to understand art from the viewpoint of the computer, or to turn a camera photo into an artistic image automatically. One early attempt is non-photorealistic rendering (NPR) \cite{NPR}, an area of computer graphics which focuses on enabling artistic styles such as oil painting and drawing for digital images. However, NPR is usually limited to specific styles and is hard to generalize to produce styled images for arbitrary artistic styles. A significant advance was made by Gatys \etal in 2015 \cite{GatysNeuralStyle}, called neural style transfer, which separates the representations of image content and style learned by a deep CNN and then recombines the content of one image with the style of another to obtain styled images. This neural style transfer process produces striking stylized images whose appearance resembles a given real artistic work, such as Vincent van Gogh's ``The Starry Night''. The success of style transfer indicates that artistic styles are computable and can be migrated from one image to another; thus, one can apparently learn to draw like certain artists without being trained for years. Following the pioneering work of Gatys \etal, many efforts have been made to improve or extend the neural style transfer algorithm. \cite{Yin} considered the semantic content and introduced a semantic style transfer network. \cite{MRF} combined a discriminatively trained CNN with classical Markov Random Field (MRF) based texture synthesis for better mesostructure preservation in synthesized images. Semantic annotations were introduced by \cite{Champ} to achieve semantic transfer.
To improve efficiency, \cite{Johnson} as well as \cite{Ulyanov} introduced fast neural style transfer methods, which train a feed-forward network once per style to process a large set of images. With the help of adversarial training, results were further improved in \cite{Adver}. For a systematic review of neural style transfer, please refer to \cite{review}. The success of recent progress on style transfer relies on the separable representations learned by deep CNNs, in which layers of convolutional filters automatically learn low-level or abstract representations in a feature space more expressive than the raw pixel space. However, it is still challenging to use CNN representations for style transfer due to their uncontrollable behavior as a black box, and thus it remains difficult to select an appropriate composition of styles (e.g., textures, colors, strokes) from images due to the risk of incorporating unpredictable or incorrect patterns. In this paper, we take a further step on the separated representations of image content and style. We aim at a computational understanding of artistic styles, and decompose them into basis elements that are easy to select and combine for enhanced and controllable style transfer. Specifically, we propose two types of decomposition methods: spectrum-based methods, featuring the Fast Fourier Transform (FFT) and the Discrete Cosine Transform (DCT), and latent variable models such as Principal Component Analysis (PCA) and Independent Component Analysis (ICA). We then suggest methods for combining styles by intervention and mixing. The computational decomposition of styles can be embedded as a module into state-of-the-art neural style transfer algorithms. Experiments demonstrate the effectiveness of style decomposition in style transfer.
We also demonstrate that controlling the style bases enables us to transfer the style of Chinese landscape paintings very well and to transfer the sketch style for a task similar to picture-to-sketch \cite{2017sketch,2018sketch}. \section{Related Work} Style transfer generates a styled image with semantic content similar to the content image and style similar to the style image. Conventional style transfer is realized by patch-based texture synthesis methods \cite{texture,fastTexture}, where style is approximated as texture. Given a texture image, patch-based texture synthesis methods can automatically generate new images with the same texture. However, arbitrary style images are quite different from texture images \cite{fastTexture}: patches taken from different regions of an arbitrary style image are usually distinct, while patches from different regions of a texture image are always similar, which limits the ability of patch-based texture synthesis methods in style transfer. Moreover, control of the transferred texture by varying the patch size (shown in Figure 2 of \cite{texture}) is limited due to the duplicated texture patterns in the texture image. The neural style transfer algorithm proposed by Gatys \etal \cite{GatysNeuralStyle} is a milestone in style transfer, building on their earlier research \cite{gatysTexture}, which pioneered the use of CNNs pre-trained on ImageNet \cite{imagenet}. Rather than using previous texture synthesis methods, which operate directly on the pixels of raw images, \cite{gatysTexture} uses the feature map of the image, which proves to preserve better semantic information of the image. Content similarity is measured by comparing the feature maps, while style similarity is measured by comparing the Gram matrices of the feature maps. The algorithm proposed in \cite{GatysNeuralStyle} starts with a noise image and converges to the styled image by iterative optimization.
The loss function $\mathcal{L}$ is composed of the content loss $\mathcal{L}_{content}$ and the style loss $\mathcal{L}_{style}$. $\mathcal{L}_{content}$ is measured by the squared error between the feature maps of a chosen layer $l$, while $\mathcal{L}_{style}$ is measured by the squared error between the Gram matrices $G_l$ of the feature maps from several layers. Denote by $h_l, w_l, c_l$ the height, width and channel number of the feature map $\mathcal{F}$ in layer $l$, and by $e_l$ the weight of layer $l$ contributing to the style loss $\mathcal{L}_{style}$. $\mathcal{F}_l^{pred}$, $\mathcal{F}_l^{content}$ and $\mathcal{F}_l^{style}$ denote the feature maps of the styled image, content image and style image respectively, where $\mathcal{F}_l$ is treated as 2-dimensional data ($\mathcal{F}_l \in \mathcal{R}^{(h_l w_l) \times c_l}$). \begin{align} \mathcal{L} = \alpha \mathcal{L}_{content} + \beta \mathcal{L}_{style} \label{totalLoss} \end{align} \begin{align} \mathcal{L}_{content} = \frac{1}{2}(\mathcal{F}_l^{pred} - \mathcal{F}_l^{content}) ^ 2 \end{align} \begin{align} G_l = \mathcal{F}_l^T \times \mathcal{F}_l, G_l \in \mathcal{R}^{c_l \times c_l} \end{align} \begin{align} \mathcal{L}_{style} = \sum_{l} e_l \frac{1}{4h_l^2w_l^2c_l^2}(G_l^{pred} - G_l^{style}) ^ 2 \end{align} \begin{figure*}[htb] \centering \includegraphics[width=\linewidth]{method/method3.pdf} \caption{An overview of our method is indicated by red, where the dotted red rectangle represents the latent space spanned by the style bases, $f$ denotes computational decomposition of the style feature map $F_s$, $g$ denotes mixing or intervention within the latent space. The red part works as a computational module embedded in Gatys’ or other neural style transfer algorithms.} \label{method} \end{figure*} Gatys \etal then proposed methods for spatial control, color control and scale control of neural style transfer in \cite{ControlFactor}.
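For concreteness, the content and style losses defined above can be sketched in NumPy, with each feature map reshaped to $(h_l w_l) \times c_l$ as in the text (an illustrative sketch; in practice the feature maps come from a pre-trained VGG network):

```python
import numpy as np

def gram(F):
    """Gram matrix G = F^T F of a feature map F with shape (h*w, c)."""
    return F.T @ F

def content_loss(F_pred, F_content):
    # squared error between feature maps of the chosen layer
    return 0.5 * np.sum((F_pred - F_content) ** 2)

def style_loss(layers_pred, layers_style, weights):
    """Weighted sum over layers of ||G_pred - G_style||^2 / (4 h^2 w^2 c^2)."""
    loss = 0.0
    for F_p, F_s, e_l in zip(layers_pred, layers_style, weights):
        hw, c = F_p.shape  # hw = h_l * w_l
        loss += e_l * np.sum((gram(F_p) - gram(F_s)) ** 2) / (4 * hw**2 * c**2)
    return loss
```

The total loss is then the weighted sum $\alpha \mathcal{L}_{content} + \beta \mathcal{L}_{style}$, minimized over the styled image by gradient descent.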
Spatial control of neural style transfer can transfer the style of specific regions of the style image via guided feature maps. Given binary spatial guidance channels for both the content and style images, one approach to generate the guided feature map is to multiply the guidance channel with the feature map element-wise, while another is to concatenate the guidance channel with the feature map. Spatial control serves as an effective method for semantic control in neural style transfer \cite{Champ,stable}. Color control is realized with the help of the YUV color space: a styled image is first generated using \cite{GatysNeuralStyle}, and the luminance channel of the content image is then replaced with that of the styled image, which manages to preserve the color of the content image \cite{YUV}. Besides, \cite{ControlFactor} referred to histogram matching methods \cite{histogram}, which serve as a second approach to preserve the color of the content image. Although both approaches are feasible for color control of style transfer, one cannot control the degree to which the color of the style image is transferred, which makes the color control binary-like (1: the styled image using \cite{GatysNeuralStyle}, with all color of the style image transferred; 0: the styled image using color control in \cite{ControlFactor}, with no color of the style image transferred). Moreover, \cite{ControlFactor} proposed a feasible method of mixing the detailed texture of one style image $I_s^{1}$ (like stroke) with the coarse texture of another style image $I_s^{2}$ (like color) as scale control, which first generates a new style image $I_s^{new}$ with $I_s^{1},I_s^{2}$ as content image and style image respectively, using Gram matrices from lower layers in the CNN. It can be noticed that the scale levels of the style depend on the different layers in the CNN, which represent different levels of abstraction.
However, since the number of layers in a CNN is finite (for VGG-19, at most 19 layers), the scale of the style can only be controlled in finitely many degrees. The limited control over neural style transfer offered by the pre-processing and post-processing methods in \cite{ControlFactor} derives from the lack of a computational analysis of artistic style, which is the foundation of continuous control for neural style transfer. Inspired by the observation from spatial control in \cite{ControlFactor} that operations on the feature map affect the transferred style, we implement different approaches to analyze the feature map and achieve computational decomposition of the style by projecting the feature map into a latent space spanned by style bases, like color and stroke. Since every point in the well-organized latent space can be decoded back to a style, the control of style bases can be continuous. Meanwhile, our work facilitates the mixing of, or intervention on, style bases from more than one style, so that compound or new styles can be generated, which enhances the diversity of styled images. \section{Methods}\label{sec:methods} Thanks to the powerful representations learned by deep CNNs, the separation of content and style enables style transfer from an artistic painting to a natural image \cite{GatysNeuralStyle}. However, it is still challenging to use CNN representations for style transfer due to their uncontrollable behavior as a black box, and thus it remains difficult to select an appropriate composition of styles (e.g., textures, colors, strokes) from images due to the risk of incorporating unpredictable or incorrect patterns. In the following, we propose to decompose the feature map of the style image into style bases in a latent space, in which it becomes easy to mix or intervene on style bases of different styles to generate compound or new styles, which are then projected back to the feature map space.
Such a decomposition process enables us to continuously control the composition of the style bases and to enhance the diversity of the synthesized styled image. Please refer to Figure \ref{method} for an overview of our method, which can be implemented as a computational module in Gatys' or other neural style transfer algorithms. Given the content image $I_{content}$ and style image $I_{style}$, we decompose the style by a function $f$ that maps the feature map $\mathcal{F}_s$ of the style image to $\mathcal{H}_s$ in the latent space, which is spanned by the style bases $S_i$. We can mix or intervene on the style bases via a function $g$ operating on the style bases to generate the desired style coded by $\hat{\mathcal{H}}_s$. Using the inverse function $f^{-1}$, $\hat{\mathcal{H}}_s$ is projected back to the feature map space to get $\hat{\mathcal{F}}_s$, which replaces the original $\mathcal{F}_s$ for style transfer. Our method can serve as an embedded module for state-of-the-art neural style transfer algorithms, as shown in red in Figure \ref{method}. It can be noted that the module can be regarded as a general transformation from the original style feature map $\mathcal{F}_s$ to a new style feature map $\hat{\mathcal{F}}_s$. If we let $\hat{\mathcal{F}}_s = \mathcal{F}_s$, our method degenerates to traditional neural style transfer \cite{GatysNeuralStyle}. Next, we introduce two types of decomposition function $f$ and also suggest some control functions $g$. Since the transformation is only applied to the feature map of the style image, we simply write $\mathcal{F}$ for the feature map $\mathcal{F}_s$ of the style image and $\mathcal{H}$ for $\mathcal{H}_s$ in the rest of the paper. We denote by $h$ and $w$ the height and width of each channel of the feature map.
\begin{table*}[htbp] \centering \begin{tabular}{|c|c|c|} \hline Method & Decomposition function $f$ & Projection back \\ \hline 2-d FFT & $\mathcal{H}(u,v) = \frac{1}{hw} \sum^{h-1}_{x=0} \sum^{w-1}_{y=0} \mathcal{F}(x,y)e^{-2(\frac{ux}{h} + \frac{vy}{w})\pi i}, \mathcal{F} \in \mathcal{R}^{h \times w \times c}$ & inverse 2-d FFT\\ \hline 2-d DCT & $\mathcal{H}(u,v) = c(u)c(v) \sum^{h-1}_{x=0} \sum^{w-1}_{y=0} \mathcal{F}(x,y)\cos\left[\frac{(x+0.5)\pi}{h}u\right]\cos\left[\frac{(y+0.5)\pi}{w}v\right]$ & inverse 2-d DCT\\ & $\mathcal{F} \in \mathcal{R}^{h \times w \times c}, c(u) = \sqrt{\frac{1}{N}},u = 0$ and $c(u) = \sqrt{\frac{2}{N}},u \neq 0$ & \\ \hline PCA & $\mathcal{F} = UDV^T, U = [v_1,\dots,v_{hw}], \mathcal{H} = U \times \mathcal{F}, \mathcal{F} \in \mathcal{R}^{(hw) \times c}, U \in \mathcal{R}^{(hw) \times (hw)}$ & $\hat{\mathcal{F}} = U^T \times \hat{\mathcal{H}}$\\ \hline ICA & $ [S, A] = fastICA(\mathcal{F}), \mathcal{H} = S, \mathcal{F} \in \mathcal{R}^{(hw) \times c}, S \in \mathcal{R}^{c \times (hw)}, A \in \mathcal{R}^{c \times c}$ & $\hat{\mathcal{F}} = (A \times \hat{\mathcal{H}})^T$\\ \hline \end{tabular} \centering \caption{The mathematical details of $f$ and $f^{-1}$.} \label{math} \end{table*} \subsection{Decomposition by spectrum transforms} We adopt the 2-dimensional Fast Fourier Transform (FFT) and the 2-dimensional Discrete Cosine Transform (DCT) as decomposition functions, with details given in Table \ref{math}. Both methods are applied at the channel level of $\mathcal{F}$, where each channel is treated as 2-dimensional data like a gray image. Through the 2-d FFT or 2-d DCT, the style feature map is decomposed into frequencies in the spectrum space, where the style is coded by frequencies that form the style bases. We will see that some style bases, such as stroke and color, actually correspond to different levels of frequencies.
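A minimal sketch of the channel-wise 2-d FFT decomposition $f$ and its inverse $f^{-1}$ (illustrative; NumPy places the $\frac{1}{hw}$ normalization on the inverse rather than the forward transform, but the round trip is the same):

```python
import numpy as np

def decompose_fft(F):
    """Channel-wise 2-d FFT of a feature map F with shape (h, w, c)."""
    # axes=(0, 1) transforms each channel's h-by-w grid independently
    return np.fft.fft2(F, axes=(0, 1))

def recompose_fft(H):
    """Inverse channel-wise 2-d FFT back to the feature map space."""
    return np.fft.ifft2(H, axes=(0, 1)).real

F = np.random.rand(8, 8, 4)      # stand-in for a VGG feature map
H = decompose_fft(F)
dc = H[0, 0, :]                  # DC components, one per channel ("color" basis)
F_back = recompose_fft(H)        # round trip recovers F
```

The DCT variant is analogous, with the cosine transform applied per channel.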
With the help of decomposition, similar styles are quantified to be close to each other, forming clusters in the spectrum space, and it is easy to combine existing styles to generate compound or new styles $\hat{\mathcal{H}}$ by appropriately varying the style codes. $\hat{\mathcal{H}}$ is then projected back to the feature map space via the inverse 2-d FFT or 2-d DCT shown in Table \ref{math}. \subsection{Decomposition by latent variable models} We consider another type of decomposition by latent variable models, such as Principal Component Analysis (PCA) and Independent Component Analysis (ICA), which decompose the input signal into uncorrelated or independent components. Details are given in Table \ref{math}, where each channel of the feature map $\mathcal{F}$ is vectorized as one input vector. \begin{itemize} \item \textbf{Principal Component Analysis (PCA)}: We implement PCA from the perspective of matrix factorization. The eigenvectors are computed via Singular Value Decomposition (SVD). The style is then coded as a linear combination of orthogonal eigenvectors, which can be regarded as style bases. By varying the combination of eigenvectors, compound or new styles are generated and then projected back to the feature map space via the inverse of the matrix of eigenvectors. \item \textbf{Independent Component Analysis (ICA)}: We implement ICA via the fastICA algorithm \cite{fastICA}, so that the style feature map is decomposed into statistically independent components, which can be regarded as style bases. Similar to PCA, we can control the combination of independent components to obtain compound or new styles, and then project them back to the feature map space. \end{itemize} \subsection{Control function $g$} The control function $g$ in Figure \ref{method} defines style operations in the latent space spanned by the decomposed style bases.
Compared with operating directly on the feature map space, such operations within the latent space bring several advantages. First, after decomposition, the style bases have minimal redundancy or are independent of each other, so operations on them are easier to control. Second, the latent space can be made a low-dimensional manifold robust to noise, by focusing on several key frequencies of the spectrum or on principal components in terms of maximum data variation. Third, continuous operations, such as linear mixing, intervention and interpolation, become possible, so the diversity of the output style is enhanced and even new styles can be sampled from the latent space. Fourth, multiple styles can be better mixed and transferred simultaneously. Let $S_{i}^{(n)}, i \in \mathbb{Z}$ denote the $i$-th style basis of the $n$-th style image, and write $S_{I}^{(n)}$ for $\{S_{i}^{(n)} \mid i \in I\}, I \subset \mathbb{Z}$. \begin{itemize} \item \textbf{Single style basis}: Project the latent space onto one style basis $S_j$, i.e., $S_i = 0$ if $i \neq j$. \item \textbf{Intervention}: Reduce or amplify the effect of one style basis $S_j$ by multiplying it by a factor $I$ while keeping the other style bases unchanged, i.e., $S_i = I \cdot S_i$ if $i = j$. \item \textbf{Mixing}: Combine the style bases of $n$ styles, i.e., $S = \{S_I^{(1)}, S_J^{(2)}, \dots, S_K^{(n)}\}$. \end{itemize} \section{Experiments} We demonstrate the performance of our method using the fast neural style transfer algorithm \cite{fast-algorithm,Johnson,Ulyanov}. We take the feature map `relu4\underline{ }1' from the pre-trained VGG-19 model \cite{VGG} as input to our style decomposition method, because we tried every activation layer in VGG-19 and found `relu4\underline{ }1' the most suitable for style transfer.
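The three control operations for $g$ listed above can be sketched as element-wise operations in the latent space, with a boolean mask selecting the entries belonging to a style basis (an illustrative sketch; in the spectrum space, for instance, the mask could select the DC components):

```python
import numpy as np

def single_basis(H, mask):
    """Project onto one style basis: zero out everything outside the mask."""
    return np.where(mask, H, 0)

def intervene(H, mask, I):
    """Scale the selected basis by the intervention factor I; keep the rest."""
    return np.where(mask, I * H, H)

def mix(H1, H2, mask):
    """Take the masked basis from style 1 and the complementary part from style 2."""
    return np.where(mask, H1, H2)
```

The resulting latent code is then mapped back through $f^{-1}$ to obtain the new style feature map.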
\subsection{Inferiority of the feature map and necessity of the latent space} Here, we demonstrate that the style control function $g$ should not be applied directly on the feature map space, because the feature map space is possibly formed by a complicated mixture of style bases. To check whether the basis of the feature map $\mathcal{F}$ can form the style bases, we first experiment on the channels of $\mathcal{F}$, then on the pixels of $\mathcal{F}$. \begin{figure}[htb] \centering \subfigure[]{ \begin{minipage}[t]{\linewidth} \centering \includegraphics[width=\linewidth]{inferior/inferior2.pdf} \end{minipage}% }% \centering \caption{(a)\ding{172} content image (Stata Center); \ding{173} style image (``The Great Wave off Kanagawa'' by Katsushika Hokusai); \ding{174} styled image by traditional neural style transfer \cite{GatysNeuralStyle}; \ding{175}\ding{176}\ding{177} are the results of implementing the control function directly on the feature map $\mathcal{F}$. Specifically, we amplify some pixels of $\mathcal{F}$, which generates \ding{175}\ding{176}, and preserve a subset of channels of $\mathcal{F}$, which generates \ding{177}. } \label{inferior} \end{figure} \subsubsection{Channels of $\mathcal{F}$} Assume the style is encoded in the space $\mathcal{H} = \{S_{1},S_{2},\dots,S_{n}\}$ spanned by the style bases $S_{i}$. The superiority of the space $\mathcal{H}$ should manifest in that the intuitive similarity of style bases $S_{i}$ conforms to the clustering of $S_{i}$ in $\mathcal{H}$ under Euclidean distance. Based on this assumption, we generate the subset $C$ of channels of $\mathcal{F}$ that could possibly represent the color basis with a semi-supervised method using the style images in Figure \ref{manifold}(a-c). It can be noticed that Chinese paintings and pen sketches (Figure \ref{manifold}(a,c)) share the same color style, while oil paintings (Figure \ref{manifold}(b)) have an exclusive one.
We iteratively find the largest channel set $C_{max}$ (384 channels included) whose K-means \cite{kmeans} clustering result conforms to the following \textbf{clustering standard for the color basis:} \begin{itemize} \item No cluster contains both an \textbf{oil painting} and a \textbf{Chinese painting or pen sketch}. \item One cluster contains only one or two points, since K-means is not adaptive to the cluster number and the cluster number is set to $3$. \end{itemize} However, if we only use $C_{max}$ to transfer style, the styled image (Figure \ref{inferior}(a)\ding{177}) is not well stylized and does not indicate any color style of the style image (Figure \ref{inferior}\ding{173}), which probably indicates that the channels of $\mathcal{F}$ are not suitable to form independent style bases. In comparison, not only does the K-means clustering result of the color basis in the spectrum space (defined in Section \ref{sec:decompose}) conform to the above clustering standard, but the styled image using the single color basis (Figure \ref{decompose}(c)) also works well and meets our intuitive standard for color, which indicates that with the help of proper decomposition functions, the superiority of the latent space $\mathcal{H}$ is attainable. \subsubsection{Pixels of $\mathcal{F}$} We apply an intervention $I=2.0$ to a certain region of each channel of $\mathcal{F}$ to see whether any intuitive style basis is amplified. The styled images are shown in Figure \ref{inferior}(a)\ding{175}\ding{176}. The rectangles in the style image (Figure \ref{inferior}(a)\ding{173}) are the corresponding intervened regions.
Compared to the styled image using \cite{GatysNeuralStyle} (Figure \ref{inferior}(a)\ding{174}), when the region of small waves in the style image is intervened (green rectangle in the style image), the effect of small blue circles in the styled image is amplified (green rectangle in the styled image), while when the region of large waves in the style image is intervened (red rectangle in the style image), the effect of long blue curves in the styled image is amplified (red rectangle in the styled image). Actually, implementing the control function $g$ on the pixels of the channels of $\mathcal{F}$ is quite similar to the methods proposed for spatial control of neural style transfer \cite{ControlFactor}, which control style transfer via a spatially guided feature map defined by a binary or real-valued mask on a region of the feature map; yet this fails to computationally decompose the style bases. \subsection{Transfer by a single style basis} \label{sec:decompose} To check whether $\mathcal{H}$ consists of style bases, we transfer style with only a single style basis preserved. We conduct the experiment on $\mathcal{H}$ generated by the different decomposition functions, including FFT, DCT, PCA and ICA, with details given in Section \ref{sec:methods} and results shown in Figure \ref{decompose}. In the spectrum space generated by FFT, we preserve either only the DC component or only the remaining frequency components, with results shown in Figure \ref{decompose}(c)(d). Figure \ref{decompose}(c) preserves the color of the style while Figure \ref{decompose}(d) preserves the wave-like stroke, which indicates that FFT is feasible for style decomposition. The result of DCT is quite similar to that of FFT, with the DC component representing color and the rest representing stroke.
Besides the visual evaluation shown in Figure \ref{decompose}(c)(d), we analyze the spectrum space by projecting the style bases into a low-dimensional space, which analytically demonstrates the effectiveness and robustness of the spectrum-based methods. Given the spectrum space of $\mathcal{F}$, we vectorize the DC component and the remaining frequency components as a color vector $v_{color}$ and a stroke vector $v_{stroke}$ ($v_{color} \in \mathcal{R}^{1 \times c}$, $v_{stroke} \in \mathcal{R}^{1 \times ((hw - 1)c)}$). Via Isomap \cite{Isomap}, we project $v_{color}$ and $v_{stroke}$ to $u_{color}, u_{stroke} \in \mathcal{R}$, which form the X-axis and Y-axis of the 2-dimensional plane for visualization, where every style is encoded as a point. We experiment on 3 artistic styles, including Chinese painting, oil painting and pen sketch, each containing 10 pictures, as shown in Figure \ref{manifold}(a-c). Chinese paintings and pen sketches share a similar color style which is sharply distinguished from that of oil paintings, while the strokes of the three artistic styles are quite different from each other. Thus, as shown in Figure \ref{manifold}(d), Chinese paintings and pen sketches are close to each other and both stay away from oil paintings along the X-axis, which represents color, while the three styles are separable along the Y-axis, which represents stroke; this completely agrees with our analysis of the three artistic styles. When we apply the method to a large set of style images (Figure \ref{manifold}(e)), the X-axis represents a linear transition from dull-colored to rich-colored. However, we fail to observe any notable linear transition along the Y-axis from the 2-dimensional visualization, probably because it is hard to describe the style of stroke (boldness, length, curvature, etc.) using only one dimension. Yet clustering the original spectrum by K-means still conforms to the true labels. In summary, the spectrum-based methods do work at large scale.
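The vectorization used for this visualization can be sketched as follows (illustrative; $u_{color}$ and $u_{stroke}$ are then obtained by running Isomap separately on the two collections of vectors):

```python
import numpy as np

def spectrum_vectors(H):
    """Split a spectrum H of shape (h, w, c) into a color vector
    (the per-channel DC components) and a stroke vector (all other
    frequencies), both flattened."""
    h, w, c = H.shape
    v_color = H[0, 0, :].ravel()                    # shape (c,)
    v_stroke = H.reshape(h * w, c)[1:, :].ravel()   # shape ((h*w - 1) * c,)
    return v_color, v_stroke
```

Stacking one such vector per style image gives the point clouds that Isomap embeds into the 1-dimensional color and stroke coordinates.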
\begin{figure}[tb] \centering \subfigure[]{ \begin{minipage}[t]{0.25\linewidth} \centering \includegraphics[width=\linewidth]{decompose/wave-stata-square.pdf} \end{minipage}% }% \subfigure[]{ \begin{minipage}[t]{0.25\linewidth} \centering \includegraphics[width=\linewidth]{decompose/wave-stata-origin.pdf} \end{minipage}% }% \subfigure[]{ \begin{minipage}[t]{0.25\linewidth} \centering \includegraphics[width=\linewidth]{decompose/fft-dc.pdf} \end{minipage}% }% \subfigure[]{ \begin{minipage}[t]{0.25\linewidth} \centering \includegraphics[width=\linewidth]{decompose/fft-anti-dc-inter2.pdf} \end{minipage}% }% \vfill \subfigure[]{ \begin{minipage}[t]{0.25\linewidth} \centering \includegraphics[width=\linewidth]{decompose/pca-most.pdf} \end{minipage}% }% \subfigure[]{ \begin{minipage}[t]{0.25\linewidth} \centering \includegraphics[width=\linewidth]{decompose/pca-except-inter2.pdf} \end{minipage}% }% \subfigure[]{ \begin{minipage}[t]{0.25\linewidth} \centering \includegraphics[width=\linewidth]{decompose/ica-low-square.pdf} \end{minipage}% }% \subfigure[]{ \begin{minipage}[t]{0.25\linewidth} \centering \includegraphics[width=\linewidth]{decompose/ica-low-8-high-504-outline-inter2.pdf} \end{minipage}% }% \centering \caption{(a) the original content and style images; (b) styled image by traditional neural style transfer \cite{GatysNeuralStyle}; (c-h) results of preserving one style basis by different methods: (c-d) FFT; (e-f) PCA; (g-h) ICA, where (c,e,g) aim to transfer the color of the style image and (d,f,h) aim to transfer the stroke of the style image.} \label{decompose} \end{figure} Unlike the spectrum-based methods, the bases of the latent space obtained via PCA or ICA are uncorrelated components or statistically independent signals, respectively.
Via PCA, the most principal component (Figure \ref{decompose}(e)) fails to separate color and stroke well, while the remaining components (Figure \ref{decompose}(f)) fail to represent any style basis, which indicates that PCA is not a suitable method for style decomposition. The results of ICA (Figure \ref{decompose}(g)(h)) are as good as those of FFT but show significant differences. The color basis and stroke basis are formed by the following method. We sum up each column of the mixing matrix $A$, where $A_{i,j}$ denotes the contribution of the $j$-th independent signal to the $i$-th channel in $\mathcal{F}$, to get $A^{sum} \in \mathcal{R}^{c}$, where $A^{sum}_{j}$ denotes the overall contribution of the $j$-th independent signal to $\mathcal{F}$. We sort $A^{sum}$ in ascending order to get $arg \in \mathcal{R}^{c}$, where $arg_{j}$ denotes the index of the signal ranked $j$-th in $A^{sum}$. The stroke basis is formed by $S_{arg_{i}}, i \in [0,n-1] \cup [c-n,c-1]$, while the color basis is formed by the remaining independent signals. Using ICA, the color basis (Figure \ref{decompose}(g)) is murkier than Figure \ref{decompose}(c), while the stroke basis (Figure \ref{decompose}(h)) retains the profile of the curves with less stroke color preserved compared to Figure \ref{decompose}(d), which indicates that both ICA and the spectrum-based methods work for style decomposition but generate different results.
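The selection of stroke and color signals from the ICA mixing matrix $A$ described above can be sketched as (illustrative; in the experiments $A$ comes from running fastICA on the style feature map):

```python
import numpy as np

def split_ica_bases(A, n):
    """Rank independent signals by total contribution (column sums of A);
    the n smallest and n largest contributors form the stroke basis,
    the remaining signals form the color basis. Returns index arrays."""
    A_sum = A.sum(axis=0)          # overall contribution of each signal
    order = np.argsort(A_sum)      # indices sorted in ascending order
    c = A.shape[1]
    stroke = np.concatenate([order[:n], order[c - n:]])
    color = order[n:c - n]
    return stroke, color
```

The returned index arrays select which rows of the source matrix $S$ to keep when reconstructing the stroke-only or color-only feature map.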
\begin{figure}[tb] \centering \subfigure[]{ \begin{minipage}[t]{0.174\linewidth} \centering \includegraphics[width=\linewidth]{clustering/china.pdf} \end{minipage}% }% \subfigure[]{ \begin{minipage}[t]{0.174\linewidth} \centering \includegraphics[width=\linewidth]{clustering/oil.pdf} \end{minipage}% }% \subfigure[]{ \begin{minipage}[t]{0.174\linewidth} \centering \includegraphics[width=\linewidth]{clustering/pen.pdf} \end{minipage}% }% \subfigure[]{ \begin{minipage}[t]{0.48\linewidth} \centering \includegraphics[width=\linewidth]{clustering/spectrum-Isomap2.pdf} \end{minipage}% }% \vfill \subfigure[]{ \begin{minipage}[t]{\linewidth} \centering \includegraphics[width=\linewidth]{clustering/demo-large3.pdf} \end{minipage}% }% \centering \caption{(a) Chinese paintings; (b) Oil paintings (by Leonid Afremov); (c) Pen sketches; (d) low-dimensional projections of the spectrum of style(a-c) via Isomap; (e) low-dimensional projections of the spectrum of large scale of style images via Isomap \cite{Isomap}. 
The size of each image shown above does not carry any information; it is set only to prevent overlap between the images.} \label{manifold} \end{figure} \begin{figure*}[htb] \centering \subfigure[]{ \begin{minipage}[t]{0.286\linewidth} \centering \includegraphics[width=\linewidth]{intervention/wave-outline.pdf} \end{minipage}% }% \subfigure[]{ \begin{minipage}[t]{0.357\linewidth} \centering \includegraphics[width=\linewidth]{intervention/spectrum-intervention-wave.pdf} \end{minipage}% }% \subfigure[]{ \begin{minipage}[t]{0.357\linewidth} \centering \includegraphics[width=\linewidth]{intervention/ica-intervention-wave.pdf} \end{minipage}% }% \vfill \subfigure[]{ \begin{minipage}[t]{0.286\linewidth} \centering \includegraphics[width=\linewidth]{intervention/laMuse-outline.pdf} \end{minipage}% }% \subfigure[]{ \begin{minipage}[t]{0.357\linewidth} \centering \includegraphics[width=\linewidth]{intervention/spectrum-intervention-laMuse.pdf} \end{minipage}% }% \subfigure[]{ \begin{minipage}[t]{0.357\linewidth} \centering \includegraphics[width=\linewidth]{intervention/ica-intervention-laMuse.pdf} \end{minipage}% }% \centering \caption{(a) the stroke of the style image `wave'; (d) the stroke of the style image `aMuse' (``A Muse'' by Pablo Picasso); (b,c,e,f) the results of applying different interventions $I$ to the stroke basis using different methods: (b,e) the spectrum-based method; (c,f) ICA.} \label{spectrumIntervention} \end{figure*} \subsection{Transfer by intervention}\label{sec:intervention} We apply interventions to the stroke basis via the control function $g$ to demonstrate the controllable, diversified styles and to distinguish the difference in the stroke basis between the spectrum-based methods and ICA. We experimented on various styles and demonstrate two of them here (due to space limitations), `wave' and `aMuse', to indicate the robustness of our experiment.
The strokes of `wave' (Figure \ref{spectrumIntervention}(a)) are curves in light and dark blue, while the strokes of `aMuse' (Figure \ref{spectrumIntervention}(d)) are black bold lines and coarse powder-like dots in green, blue, yellow, etc. We further divide the concept of stroke into stroke color and stroke profile (the shape character, such as a curve, bold line or coarse powder-like dot) to better illustrate the difference between the two methods. Using spectrum based methods, intervention impacts both stroke color and stroke profile. As the intervention increases, we see more exaggerated curves with darker blue in Figure \ref{spectrumIntervention}(b) and more black bold lines as well as a greener and yellower oil-painting-like sky in Figure \ref{spectrumIntervention}(e). Using ICA, intervention impacts only the stroke profile; compared to spectrum based methods, there is little difference in color between results under different interventions. As the intervention increases, only the curvature of the curves is amplified (Figure \ref{spectrumIntervention}(c)), and only the margins of the profiles and the graininess of the color become more obvious (Figure \ref{spectrumIntervention}(f)). However different the two methods are, the stroke effect of a style can be reduced or amplified under our control, which greatly enhances the diversity of the styled images. \subsection{Transfer by mixing} The current style mixing method, interpolation, cannot mix the style bases of different styles: the styles are mixed integrally no matter how the interpolation weights are modified (Figure \ref{mixing}(g-i)), which limits the diversity of style mixing. Based on the success of spectrum based methods and ICA in style decomposition, we experiment with mixing the stroke of `wave' with the color of `aMuse' to check whether such a newly compounded artistic style can be transferred to the styled image.
Specifically, for ICA, we not only need to replace the color basis of `wave' with that of `aMuse' but also have to replace the rows of the mixing matrix $A$ corresponding to the exchanged signals. Both spectrum based methods (Figure \ref{mixing}(d-f)) and ICA (Figure \ref{mixing}(j-l)) work well in mixing style bases of different styles, and the difference conforms to the conclusion given in Section \ref{sec:intervention}. Moreover, we can also give interventions to the style basis when mixing, which further enhances the diversity of style mixing. \begin{figure*}[ht] \centering \subfigure[content and style]{ \begin{minipage}[t]{0.165\linewidth} \centering \includegraphics[width=\linewidth]{mix-main/stata-wave-la_muse.pdf} \end{minipage}% }% \subfigure[wave style]{ \begin{minipage}[t]{0.165\linewidth} \centering \includegraphics[width=\linewidth]{mix-main/wave-stata-origin.pdf} \end{minipage}% }% \subfigure[aMuse style]{ \begin{minipage}[t]{0.165\linewidth} \centering \includegraphics[width=\linewidth]{mix-main/la_muse-stata-origin.pdf} \end{minipage}% }% \subfigure[I = 1.0]{ \begin{minipage}[t]{0.165\linewidth} \centering \includegraphics[width=\linewidth]{mix-main/fft-mix-wave-intervention-1x0-la_muse-intervention2-1x0-mode-0.pdf} \end{minipage}% }% \subfigure[I = 1.5]{ \begin{minipage}[t]{0.165\linewidth} \centering \includegraphics[width=\linewidth]{mix-main/fft-mix-wave-intervention-1x5-la_muse-intervention2-1x0-mode-0.pdf} \end{minipage}% }% \subfigure[I = 2.0]{ \begin{minipage}[t]{0.165\linewidth} \centering \includegraphics[width=\linewidth]{mix-main/fft-mix-wave-intervention-2x0-la_muse-intervention2-1x0-mode-0.pdf} \end{minipage}% }% \vfill \subfigure[I1 = 0.3, I2 = 0.7]{ \begin{minipage}[t]{0.165\linewidth} \centering \includegraphics[width=\linewidth]{mix-main/straight-mix-wave-0x3-la_muse-0x7.pdf} \end{minipage}% }% \subfigure[I1 = 0.5, I2 = 0.5]{ \begin{minipage}[t]{0.165\linewidth} \centering
\includegraphics[width=\linewidth]{mix-main/straight-mix-wave-0x5-la_muse-0x5.pdf} \end{minipage}% }% \subfigure[I1 = 0.7, I2 = 0.3]{ \begin{minipage}[t]{0.165\linewidth} \centering \includegraphics[width=\linewidth]{mix-main/straight-mix-wave-0x7-la_muse-0x3.pdf} \end{minipage}% }% \subfigure[I = 1.0]{ \begin{minipage}[t]{0.165\linewidth} \centering \includegraphics[width=\linewidth]{mix-main/ica-mix-wave-1x0-la_muse-1x0-mode-0.pdf} \end{minipage}% }% \subfigure[I = 2.0]{ \begin{minipage}[t]{0.165\linewidth} \centering \includegraphics[width=\linewidth]{mix-main/ica-mix-wave-2x0-la_muse-1x0-mode-0.pdf} \end{minipage}% }% \subfigure[I = 3.0]{ \begin{minipage}[t]{0.165\linewidth} \centering \includegraphics[width=\linewidth]{mix-main/ica-mix-wave-3x0-la_muse-1x0-mode-0.pdf} \end{minipage}% }% \centering \caption{(a) the content image and two style images; (b,c) styled images of the single styles using traditional methods \cite{GatysNeuralStyle}; (g-i) interpolation mixing, where I1 and I2 are the weights of `wave' and `aMuse' in interpolation; (d-f,j-l) results of mixing the color of `aMuse' and the stroke of `wave', where I is the intervention to the stroke of `wave'. Specifically, (d-f) use FFT; (j-l) use ICA.} \label{mixing} \end{figure*} \begin{figure}[t] \centering \subfigure[]{ \begin{minipage}[t]{0.25\linewidth} \centering \includegraphics[width=\linewidth]{sketch/stata-cartoon.pdf} \end{minipage}% }% \subfigure[]{ \begin{minipage}[t]{0.25\linewidth} \centering \includegraphics[width=\linewidth]{sketch/cartoon-1x0-p.pdf} \end{minipage}% }% \subfigure[]{ \begin{minipage}[t]{0.25\linewidth} \centering \includegraphics[width=\linewidth]{sketch/cartoon-1x5-p.pdf} \end{minipage}% }% \subfigure[]{ \begin{minipage}[t]{0.25\linewidth} \centering \includegraphics[width=\linewidth]{sketch/cartoon-2x0-p.pdf} \end{minipage}% }% \centering \caption{Picture-to-sketch using style transfer and binarization. (a) content image and style image; (b-d) styled images.
From (b) to (d), the number of strokes increases as more details of the content image are restored.} \label{sketch} \end{figure} \begin{figure}[t] \centering \subfigure[]{ \begin{minipage}[t]{0.25\linewidth} \centering \includegraphics[width=\linewidth]{china/mountain-china.pdf} \end{minipage}% }% \subfigure[]{ \begin{minipage}[t]{0.25\linewidth} \centering \includegraphics[width=\linewidth]{china/china-0x5.pdf} \end{minipage}% }% \subfigure[]{ \begin{minipage}[t]{0.25\linewidth} \centering \includegraphics[width=\linewidth]{china/china-1x5.pdf} \end{minipage}% }% \subfigure[]{ \begin{minipage}[t]{0.25\linewidth} \centering \includegraphics[width=\linewidth]{china/china-1x8.pdf} \end{minipage}% }% \centering \caption{Neural style transfer of Chinese painting with controlled strokes. (a) content image and style image (by Zaixin Miao); (b-d) styled images. From (b) to (d), the strokes become more detailed, which gradually turns the freehand style into the fine-brush style.} \label{chinese} \end{figure} \subsection{Sketch style transfer} The picture-to-sketch problem challenges how a computer can understand and represent the concept of objects both abstractly and semantically. State-of-the-art methods \cite{2017sketch,2018sketch} use variants of generative adversarial networks (GANs) via both supervised and unsupervised methods. One obstacle mentioned by \cite{2018sketch} is that using supervised learning alone may result in instability due to noise in the dataset, caused by varying sketch styles for the same data sample. The controllable neural style transfer we propose tackles this obstacle because inconsistent styles are no longer a burden but can in turn increase the style diversity of the output images. Moreover, as shown in Figure \ref{sketch}, our method can control the abstraction level by automatically preserving major semantic details while discarding minor ones.
In addition, our method does not require a vector sketch dataset, but as a tradeoff, it cannot generate sketches stroke by stroke like \cite{2017sketch,2018sketch}. \subsection{Chinese painting style transfer} Chinese painting is a distinctive artistic style which does not have the plentiful colors of Western painting, but mostly represents the artistic conception by strokes. Taking advantage of the effective control over strokes via our methods, the Chinese-painting styled image can be either misty (Figure \ref{chinese}(b)), which is called freehand-brush, or a meticulous representation (Figure \ref{chinese}(d)), which is called fine-brush. \section{Conclusions} Artistic styles are made of basic elements, each with distinct characteristics and functionality. Developing such a style decomposition method facilitates quantitative control of the styles in one or more images to be transferred to another natural image, while still keeping the basic content of the natural image. In this paper, we proposed a novel computational decomposition method and demonstrated its strengths via extensive experiments. To the best of our knowledge, it is the first such study, and it could serve as a computational module embedded in neural style transfer algorithms. We implemented the decomposition function by spectrum transforms or latent variable models, which enables us to computationally and continuously control the styles by linear mixing or intervention. Experiments showed that our method enhances the flexibility of style mixing and the diversity of stylization. Moreover, our method can be applied to picture-to-sketch problems by transferring the sketch style, and it captures the key features of the Chinese painting style and facilitates its stylization. {\small \bibliographystyle{ieee}
\section{Introduction} Singularities of map germs have long been studied, especially up to the equivalence under coordinate changes in both source and target (${\mathcal A}$-equivalence). According to \cite{gaff}, ``classification'' for map germs with ${\mathcal A}$-equivalence means finding lists of germs, and showing that all germs satisfying certain conditions are equivalent to a germ on the list. Classification is well understood, with many good references in the literature. ``Recognition'' means finding criteria which describe which germ on the list a given germ is equivalent to (see \cite{gaff}). The classification and recognition problems for map germs from the plane into the plane up to ${\mathcal A}$-equivalence were studied by J. H. Rieger \cite{rie}. He classified map germs $(\boldsymbol{R}^2,0)\to(\boldsymbol{R}^2,0)$ with corank one and ${\mathcal A}_e$-codimension $\leq 6$. Table \ref{tab:rie} shows the list of the ${\mathcal A}_e$-codimension $\leq3$ local singularities obtained in \cite{rie}. Some of these singularities also have common names: $4_{2,+}$ ({\it lips\/}), $4_{2,-}$ ({\it beaks\/}), $5$ ({\it swallowtail\/}). These singularities are depicted in Figure \ref{fig:lipetc}. Rieger also discussed the recognition of these map germs after normalizing the coordinate system as $(u,v)\mapsto(u,f_2(u,v))$. However, for applications, recognition criteria that do not use such a normalization are not only more convenient but also indispensable in some cases.
\begin{table}[htbp] \begin{center} \begin{tabular}{llc} \hline Name&Normal form&${\mathcal A}_e$-codimension\\ \hline Immersion&$(u,v)$&0\\ Fold &$(u,v^2)$&0\\ Cusp &$(u,v^3+uv)$&0\\ $4_{k,\pm}$ &$(u,v^3\pm u^kv),\ k=2,3$&$k-1$\\ $5$ &$(u,uv+v^4)$&1\\ $6_\pm$ &$(u,uv+v^5\pm v^7)$&2\\ $11_5$ &$(u,uv^2+v^4+v^5)$&2\\ \hline \end{tabular} \caption{Classification of $(\boldsymbol{R}^2,0)\to(\boldsymbol{R}^2,0)$} \label{tab:rie} \end{center} \end{table} \begin{figure}[ht] \centering \includegraphics[width=0.2\linewidth]{lip.eps}\hspace{10mm} \includegraphics[width=0.2\linewidth]{beaks.eps}\hspace{10mm} \includegraphics[width=0.2\linewidth]{sw.eps} \caption{Lips, beaks and swallowtail} \label{fig:lipetc} \end{figure} In this paper, we give criteria for the lips, the beaks and the swallowtail of a map germ $(\boldsymbol{R}^2,0)\to(\boldsymbol{R}^2,0)$ without using normalizations (Theorem \ref{thm:main}). Since the criteria use only the Taylor coefficients of the germ, Theorem \ref{thm:main} can be applied directly to recognize the lips, the beaks and the swallowtail on explicitly parameterized maps. Using the criteria, we study singularities of geometric solutions of a single conservation law with respect to the time variable, and show that the singularities that appear first in time are generically the lips (Section 3). In the case of wave front surfaces in 3-space, criteria for the cuspidal edge and the swallowtail were given by M. Kokubu et al.\ \cite{krsuy}. By using them, we studied local and global behaviors of flat fronts in hyperbolic 3-space. Using them, K. Saji et al.\ \cite{front} introduced the singular curvature on the cuspidal edge and investigated its properties. Criteria for other singularities of fronts and their applications were given in \cite{fsuy,horo,suy3}. Recently, several applications of these criteria were considered in various situations \cite{ishi-machi,horo,circular,kruy,sch}.
Throughout this paper, we work in the $C^\infty$-category. \section{Preliminaries and statements of criteria} Let $U\subset\boldsymbol{R}^2$ be an open set and $f:(U,p)\to(\boldsymbol{R}^2,0)$ a map germ. We call $q\in U$ a {\it singular point\/} of $f$ if $\operatorname{rank}(df)_q\leq1$. We denote by $S(f)\subset U$ the set of singular points of $f$. Two map germs $f_i:(\boldsymbol{R}^2,0)\to(\boldsymbol{R}^2,0)$ $(i=1,2)$ are {\it ${\mathcal A}$-equivalent\/} if there exist diffeomorphism map germs $\Phi_i:(\boldsymbol{R}^2,0)\to(\boldsymbol{R}^2,0)$ $(i=1,2)$ such that $f_1\circ\Phi_1=\Phi_2\circ f_2$ holds. For a positive integer $k$, a map germ $f:(U,p)\to(\boldsymbol{R}^2,0)$ is $k$-{\it determined\/} if any $g:(U,p)\to(\boldsymbol{R}^2,0)$ whose $k$-jet $j^kg(p)$ is equal to $j^kf(p)$ is ${\mathcal A}$-equivalent to $f$. The following fact is well-known. \begin{fact}$($\cite[Lemma 3.2.2 and 3.1.3]{rie}$)$ \label{fact:det} The lips and the beaks\/ $(x,y)\mapsto(x,y^3\pm x^2y)$ are three-determined. The swallowtail\/ $(x,y)\mapsto(x,xy+y^4)$ is four-determined. \end{fact} Let $f:(U,p)\to(\boldsymbol{R}^2,0)$ be a map germ. A singular point $q$ is of {\it corank one\/} if $\operatorname{rank} (df)_q=1$. If $p$ is a corank one singular point of $f$, then there exist a neighborhood $V$ of $p$ and a nowhere-vanishing vector field $\eta\in {\mathfrak X}(V)$ such that $df_q(\eta)=0$ holds for any $q\in S(f)\cap V$. We call $\eta$ the {\it null vector field}. We define a function which plays a crucial role in our criteria. Let $(u_1,u_2)$ be coordinates of $U$. Define the {\it discriminant function\/} $\lambda$ {\it of} $f$ by \begin{equation} \label{eq:lambda}\lambda(u_1,u_2)= \det\left(\frac{\partial f}{\partial u_1}, \frac{\partial f}{\partial u_2}\right)(u_1,u_2). \end{equation} Then $S(f)=\lambda^{-1}(0)$ holds.
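As a quick symbolic check of the definition (a SymPy sketch of ours, not part of the original text), one can compute $\lambda$ and a null vector field for the cusp normal form $(u,v)\mapsto(u,v^3+uv)$ from Table \ref{tab:rie}:

```python
import sympy as sp

u, v = sp.symbols("u v")

# Cusp normal form f(u, v) = (u, v^3 + u*v) from Table 1.
f = sp.Matrix([u, v**3 + u*v])
J = f.jacobian([u, v])

# Discriminant function lambda = det(f_{u_1}, f_{u_2}) as in (1);
# the singular set is S(f) = lambda^{-1}(0).
lam = J.det()                      # 3*v**2 + u

# eta = (0, 1) is a null vector field: df(eta) vanishes on S(f).
eta = sp.Matrix([0, 1])
print(lam, (J * eta).T)
```

Here $df(\eta)=(0,\lambda)^{T}$, so it vanishes exactly on $S(f)=\{3v^2+u=0\}$, as the definition requires.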
We call $p\in S(f)$ a {\it non-degenerate singular point\/} if $d\lambda(p)\ne0$ and a {\it degenerate singular point\/} if $d\lambda(p)=0$. Note that a non-degenerate singular point is of corank one. The terms ``discriminant function'', ``null vector field'' and ``non-degeneracy'' are defined in \cite{krsuy} in order to state criteria for fronts in the $3$-space. Our definitions of these three notions are analogous. These notions also play a key role in identifying singularities in our case. This seems to be related to the correspondence between singularities of a front and those of its projection to the limiting tangent plane. This correspondence is discussed in \cite{suy3}. We review the criteria for the fold and the cusp, due to Whitney \cite{whit} (see also \cite{suy3}). \begin{fact}$($\cite[Proposition 2.1]{whit}$)$ \label{fact:whit} For a map germ\/ $f:(U,p)\to(\boldsymbol{R}^2,0)$, $f$ at\/ $p$ is\/ ${\mathcal A}$-equivalent to the fold if and only if\/ $\eta\lambda(p)\ne0$. Furthermore, $f$ at\/ $p$ is\/ ${\mathcal A}$-equivalent to the cusp if and only if\/ $p$ is non-degenerate, $\eta\lambda(p)=0$ and\/ $\eta\eta\lambda(p)\ne0$. \end{fact} Here, $\eta\lambda$ means the directional derivative $D_\eta\lambda$. The main result of this paper is the following. \begin{theorem} \label{thm:main} For a map germ\/ $f:(U,p)\to(\boldsymbol{R}^2,0)$, the following hold. \begin{itemize} \item[(1)] $f$ is\/ ${\mathcal A}$-equivalent to the lips if and only if\/ $p$ is of corank one, $d\lambda(p)=0$ and\/ $\lambda$ has a Morse type critical point of index\/ $0$ or\/ $2$ at\/ $p$, namely, $\det\operatorname{Hess}\lambda(p)>0$. \item[(2)]$f$ is\/ ${\mathcal A}$-equivalent to the beaks if and only if\/ $p$ is of corank one, $d\lambda(p)=0$, $\lambda$ has a Morse type critical point of index\/ $1$ at\/ $p$ $($i.e., $\det\operatorname{Hess}\lambda(p)<0)$ and\/ $\eta\eta\lambda(p)\ne0$.
\item[(3)]$f$ is\/ ${\mathcal A}$-equivalent to the swallowtail if and only if\/ $d\lambda(p)\ne0$, $\eta\lambda(p)=\eta\eta\lambda(p)=0$ and\/ $\eta\eta\eta\lambda(p)\ne0$. \end{itemize} \end{theorem} Here, for a function $\lambda:(U,u_1,u_2)\to\boldsymbol{R}$, $\operatorname{Hess} \lambda$ is the matrix defined by $\operatorname{Hess}\lambda=(\partial^2\lambda/\partial u_i\,\partial u_j)_{i,j=1,2}$. Remark that in Theorem \ref{thm:main} (1), $\eta\eta\lambda(p)\ne0$ is automatically satisfied because of the symmetry of $\operatorname{Hess}\lambda$ and the inequality $\det\operatorname{Hess}\lambda(p)>0$. \begin{example} Let us put $$ \begin{array}{l} f_{\text{l}}(u,v)=(u,v^3+u^2v),\quad f_{\text{b}}(u,v)=(u,v^3-u^2v)\\ \hspace{50mm}\text{and}\quad f_{\text{s}}(u,v)=(u,v^4+uv). \end{array} $$ Since these are nothing but the defining formulas for the lips, the beaks and the swallowtail, these maps satisfy the conditions in Theorem\/ $\ref{thm:main}$. The discriminant functions for these maps are $$\lambda_{\text{l}}=3v^2+u^2,\quad \lambda_{\text{b}}=3v^2-u^2\quad\text{and}\quad \lambda_{\text{s}}=4v^3+u, $$ respectively. Thus\/ $\lambda_{\text{l}}$ and\/ $\lambda_{\text{b}}$ have a Morse type critical point at the origin. Furthermore, the null vector field can be chosen as\/ $\eta=(0,1)$ for all maps. It holds that\/ $\eta\eta\lambda_{\text{b}}\ne0$, and that\/ $d\lambda_{\text{s}}\ne0$, $\eta\lambda_{\text{s}}=\eta\eta\lambda_{\text{s}}=0$ and\/ $\eta\eta\eta\lambda_{\text{s}}\ne0$ at the origin. Thus we see that each of the conditions in Theorem\/ $\ref{thm:main}$ is satisfied for each map. These observations together with the following Lemma\/ $\ref{lem:indeplamb}$ confirm the only if part of Theorem\/ $\ref{thm:main}$. \end{example} \begin{example} Let\/ $\gamma:I\to\boldsymbol{R}^2$ be a plane curve with\/ $\gamma'(t)\ne0$ for any $t\in I$. The {\it tangential ruling map\/ $R_\gamma$ of\/ $\gamma$} is the map\/ $R_\gamma:(t,u)\mapsto \gamma(t)+u\gamma'(t)$.
The discriminant function and the null vector field of\/ $R_\gamma$ are\/ $\lambda=u\kappa$ and\/ $\eta=(-1,1)$, respectively, where\/ $\kappa$ is the curvature of\/ $\gamma$. Thus we have $$\operatorname{Hess}\lambda(t,0)=\pmt{0&\kappa'\\ \kappa'&0}\quad\text{and}\quad \eta\eta\lambda(t,0)=-2\kappa'. $$ Using Theorem\/ $\ref{thm:main}$, $R_\gamma$ at\/ $(t_0,0)$ is\/ ${\mathcal A}$-equivalent to the beaks if and only if\/ $\kappa(t_0)=0$ and\/ $\kappa'(t_0)\ne0$ hold\/ $($see Figure\/ $\ref{fig:rulingex})$. \begin{figure}[ht] \centering \includegraphics[width=0.25\linewidth]{rulingex.eps}\\ \caption{The beaks on the tangential ruling map of $(t,t^3)$ at $(t,u)=(0,0)$.} \label{fig:rulingex} \end{figure} \end{example} To prove Theorem \ref{thm:main}, we need the following lemma. \begin{lemma} \label{lem:indeplamb} For a map germ $f:(U,p)\to(\boldsymbol{R}^2,0)$, the conditions in Theorem\/ $\ref{thm:main}$ are independent of the choice of coordinates of the source and target. To be precise, the rank of\/ $(df)_p$, the non-degeneracy of\/ $p$, and the sign of\/ $\det\operatorname{Hess}\lambda(p)$, are independent of the choice of both coordinates on the source and target. Suppose further that $p$ is non-degenerate, let $\lambda(u_1,u_2)$ and\/ $\tilde\lambda(v_1,v_2)$ be discriminant functions of\/ $f$ with respect to two coordinate systems, and let\/ $\eta$ and\/ $\tilde\eta$ be corresponding null vector fields of\/ $f$. Then the following hold: \begin{itemize} \item $\eta\lambda(p)=0$ if and only if\/ $\tilde\eta\tilde\lambda(p)=0$. \item If\/ $\eta\lambda(p)=\tilde\eta\tilde\lambda(p)=0$, then\/ $\eta\eta\lambda(p)\ne0$ if and only if\/ $\tilde\eta\tilde\eta\tilde\lambda(p)\ne0$. \item If\/ $\eta\lambda(p)=\eta\eta\lambda(p)= \tilde\eta\tilde\lambda(p)= \tilde\eta\tilde\eta\tilde\lambda(p)=0$, then\/ $\eta\eta\eta\lambda(p)\ne0$ if and only if\/ $\tilde\eta\tilde\eta\tilde\eta\tilde\lambda(p)\ne0$.
\end{itemize} \end{lemma} \begin{proof} Needless to say, $\operatorname{rank}(df)_p$ is independent of the choice of the coordinate systems. If we change the coordinates, then the function $\lambda$ is multiplied by a non-zero function. Since the vanishing of $d\lambda$ and the sign of $\det\operatorname{Hess}\lambda$ do not change under this multiplication, the first part of the lemma is proved. We now prove the second part. We can write $\tilde\eta=a_1\xi+a_2\eta$, where $a_1,a_2$ are functions near $p$ with $a_1=0$ on $S(f)$ and $\xi$ is a vector field transverse to $\eta$ at $p$; moreover, $\tilde\lambda$ is $\lambda$ multiplied by a non-zero function. Under this setting, since $\{\lambda=0\}=\{a_1=0\}$ holds, one can prove that the non-degeneracy yields the desired equivalences. \end{proof} Now we prove Theorem \ref{thm:main}; the method of proof is due to Rieger \cite{rie}. \begin{proof}[Proof of $(1)$ and $(2)$.] Since $p$ is of corank one, $f$ can be represented as $$f(u,v)=(u,vf_2(u,v)),\quad p=(0,0)$$ by Lemma \ref{lem:indeplamb}. Since $\lambda(p)=0$ and $d\lambda(p)=0$, we have $f_2=(f_2)_u=(f_2)_v=0$ at $p$, where $(f_2)_u=\partial f_2/\partial u$ and $(f_2)_v=\partial f_2/\partial v$. Therefore, $f$ can be written as $$ \big(u,v(au^2+2buv+cv^2)+\operatorname{(higher\ order\ term)}\big), \ a,b,c\in\boldsymbol{R}. $$ Here, the ``higher order term'' consists of the terms whose degrees are greater than $3$. Since $\det\operatorname{Hess}\lambda(p)\ne0$, at least one of $a$, $b$ and $c$ does not vanish. Moreover, since $\eta=(0,1)$ and $\eta\eta\lambda(p)\ne0$, it holds that $c\ne0$. Now, by the coordinate change $$ U=u,\quad V=v+\frac{2b}{3c}u, $$ $f$ can be written as $$ \big(u,v(\alpha u^2+\beta v^2) + \gamma u^3+\operatorname{(higher\ order\ term)}\big),\ \alpha,\beta,\gamma\in\boldsymbol{R}. $$ Here, the ``higher order term'' consists of the terms whose degrees are greater than $3$.
We remark that the sign of $\alpha\beta$ coincides with the sign of $\det\operatorname{Hess}\lambda(p)$. Hence, by a scaling change and a coordinate change on the target, $f$ can be written as \begin{equation} \label{eq:liptotyu} \big(u,v(u^2\pm v^2)+\operatorname{(higher\ order\ term)}\big). \end{equation} Since the map germ $(u,v(u^2\pm v^2))$ is three-determined, the map germ (\ref{eq:liptotyu}) is ${\mathcal A}$-equivalent to the lips$(+)$ or the beaks$(-)$. \end{proof} \begin{proof}[Proof of $(3)$.] Since $f$ is of corank one, $f$ can be written as $f(u,v)=(u,vh(u,v))$. Then the null vector field is $(0,1)$. Write $$ \begin{array}{rcl} vh(u,v)&=& a_{11}uv+a_{02}v^2+a_{21}u^2v+a_{12}uv^2+a_{03}v^3 +a_{31}u^3v\\ &&\hspace{5mm}+a_{22}u^2v^2+a_{13}uv^3+a_{04}v^4 +\operatorname{(higher\ order\ term)}. \end{array} $$ Here, the ``higher order term'' consists of the terms whose degrees are greater than $4$. The non-degeneracy of $p$ yields that $a_{11}\ne0$ or $a_{02}\ne0$. If $a_{02}\ne0$, by Fact \ref{fact:whit}, $f$ is ${\mathcal A}$-equivalent to the fold. Moreover, if $a_{02}=0$ and $a_{03}\ne0$ then by Fact \ref{fact:whit}, $f$ is ${\mathcal A}$-equivalent to the cusp. Hence we can assume $a_{02}=a_{03}=0$, and in particular $a_{11}\ne0$. Since $\eta\eta\eta\lambda(p)\ne0$, we have $a_{04}\ne0$. By the coordinate change $$ \begin{array}{l} \tilde u=u,\\ \tilde v=a_{11}v+a_{21}uv+a_{12}v^2+a_{31}u^2v+a_{22}uv^2+a_{13}v^3, \end{array} $$ $f$ is written as $$ f(\tilde u,\tilde v) = \big(\tilde u,\tilde u\tilde v+\tilde v^4 + \operatorname{(higher\ order\ term)}\big). $$ Since $(\tilde u,\tilde u\tilde v+\tilde v^4)$ is four-determined, it is ${\mathcal A}$-equivalent to $(u,uv+v^4)$.
\end{proof} \section{Singularities of characteristic surfaces of a single conservation law} \label{sec:cons} In this section, we consider the following Cauchy problem of a single conservation law: \begin{equation}\tag{C} \label{singcons} \left\{ \begin{array}{l} \dfrac{\partial y}{\partial t}(t,\vect{x}) + \sum_{i=1,2}\dfrac{d f_i}{d y}\big(y(t,\vect{x})\big)\dfrac{\partial y}{\partial x_i}(t,\vect{x})=0,\\ y(0,\vect{x})=\phi(\vect{x}),\quad \vect{x}=(x_1,x_2), \end{array} \right. \end{equation} where $f_1,f_2$ and $\phi$ are functions. We consider the characteristic surfaces of \eqref{singcons} following the framework of \cite{ikcons}. Let $\pi:PT^*(\boldsymbol{R}\times\boldsymbol{R}^2\times\boldsymbol{R})\to\boldsymbol{R}\times\boldsymbol{R}^2\times\boldsymbol{R}$ be the projective cotangent bundle. Identify $PT^*(\boldsymbol{R}\times\boldsymbol{R}^2\times\boldsymbol{R}) =(\boldsymbol{R}\times\boldsymbol{R}^2\times\boldsymbol{R})\times P(\boldsymbol{R}\times\boldsymbol{R}^2\times\boldsymbol{R})$ and denote the local coordinates of this space by $(t,\vect{x},y,[\tau:\vect{\xi}:\eta])$. We consider the canonical contact form $$ \alpha=[\tau dt+\xi_1dx_1+\xi_2dx_2+\eta dy]. $$ Then the equation \eqref{singcons} is written in the following form: $$ \begin{array}{l} E(1,f_1',f_2',0)= \Big\{\big(t,\vect{x},y,[\tau:\vect{\xi}:\eta]\big)\in PT^*(\boldsymbol{R}\times\boldsymbol{R}^2\times\boldsymbol{R}) \,\Big\vert\,\\ \hspace{75mm}\tau+\displaystyle\sum_{i=1,2}f_i'(y)\xi_i=0\Big\}, \end{array} $$ where $f_i'=df_i/dy$. If (\ref{singcons}) has a classical solution $y$, then the non-zero normal vector $\nu=(y_t,y_{x_1},y_{x_2},-1)$ of the smooth hypersurface $(t,\vect{x},y(t,\vect{x}))\subset\boldsymbol{R}\times\boldsymbol{R}^2\times\boldsymbol{R}$ exists, where $y_{x_1}=\partial y/\partial x_1$, for example.
Hence we have a Legendrian immersion $\tilde{y}:\boldsymbol{R}\times\boldsymbol{R}^2\to PT^*(\boldsymbol{R}\times\boldsymbol{R}^2\times\boldsymbol{R})$: $$ \begin{array}{l} \tilde{y}:(t,\vect{x})\mapsto (t,\vect{x},y(t,\vect{x}),[\nu])\\ \hspace{30mm} \in E(1,f_{1}',f_{2}',0) \subset PT^*(\boldsymbol{R}\times\boldsymbol{R}^2\times\boldsymbol{R}). \end{array} $$ According to this, we define a {\it geometric solution\/} of (\ref{singcons}) as a Legendrian immersion $L:(U;u_1,u_2)\to E(1,f_{1}',f_{2}',0)\subset PT^*(\boldsymbol{R}\times\boldsymbol{R}^2\times\boldsymbol{R})$ of a domain $U\subset\boldsymbol{R}^2$ such that $\pi\circ L$ is an embedding. We apply the method of characteristics. The characteristic equation associated with \eqref{singcons} through $(0,\vect{x}_0)$ is $$ \begin{array}{ll} \dfrac{dx_i}{dt}(t)=\dfrac{df_i}{dy} \Big(y\big(t,\vect{x}(t)\big)\Big),&\vect{x}(0)=\vect{x}_{0}\\ \dfrac{dy}{dt}(t,\vect{x}(t))=0,&y(0,\vect{x}(0))=\phi(\vect{x}_0). \end{array} $$ The solution of the characteristic equation can be expressed as \begin{equation} \label{sol-chara} \begin{array}{l} x_i(\u,t)=u_i+t\dfrac{df_i}{dy}\big(\phi(\u)\big),\\[3mm] \hspace{20mm} y\big(0,\vect{x}(\u,0)\big)=y(0,\u)=\phi(\u),\quad \u=(u_1,u_2)\in U. \end{array} \end{equation} If the map \begin{equation} \label{eq:charasurf} g_t:\u\mapsto \big(x_1(\u,t),x_2(\u,t)\big) \end{equation} is non-singular, $y=\phi\big((g_t)^{-1}(x_1,x_2)\big)$ is the classical solution of (\ref{singcons}) (see \cite[Section 5]{ikcons}). Remark that $g_t$ is non-singular for $t=0$. Thus, in order to investigate the singularity of \eqref{singcons}, we study the singularities of the family of maps $g_t$. The discriminant function of $g_t$ is $$ \det \pmt{1+tc_{11}&tc_{12}\\ tc_{21}&1+tc_{22}}, \qquad c_{ij} = \dfrac{d^2f_i}{dy^{2}}(\phi(\u)) \dfrac{\partial\phi}{\partial u_{j}}(\u). $$ Needless to say, this matrix is never equal to the zero-matrix.
This implies that $(t,\u)$ is a singular point of (\ref{sol-chara}) if and only if $-t^{-1}$ is an eigenvalue of the matrix $C=(c_{ij})_{i,j=1,2}$. The eigenvalue equation for an eigenvalue $\mu$ of $C$ can be computed as $$ \begin{array}{rcl} \displaystyle 0&=&\det\left(C-\mu \pmt{1&0\\0&1}\right)\\[4mm] &=& \det \pmt{ \dfrac{d^2f_1}{dy^{2}}\big(\phi(\u)\big) \dfrac{\partial\phi}{\partial u_1}(\u)-\mu& \dfrac{d^2f_1}{dy^{2}}\big(\phi(\u)\big) \dfrac{\partial\phi}{\partial u_2}(\u)\\[4mm] \dfrac{d^2f_2}{dy^{2}}\big(\phi(\u)\big) \dfrac{\partial\phi}{\partial u_1}(\u)& \dfrac{d^2f_2}{dy^{2}}\big(\phi(\u)\big) \dfrac{\partial\phi}{\partial u_2}(\u)-\mu} \\[11mm] &=& \mu\left( \mu-\dfrac{d^2f_1}{dy^{2}}\big(\phi(\u)\big) \dfrac{\partial\phi}{\partial u_1}(\u)- \dfrac{d^2f_2}{dy^{2}}\big(\phi(\u)\big) \dfrac{\partial\phi}{\partial u_2}(\u)\right)\\[4mm] &=& \mu(\mu-\operatorname{trace} C). \end{array} $$ Hence $(t,\u)$ is a singular point of (\ref{sol-chara}) if and only if $$ t=-1/\operatorname{trace}{C}. $$ We call $C$ the {\it shape operator\/} of (\ref{singcons}). Now we consider the first singular point of (\ref{eq:charasurf}) with respect to $t$ from the initial time $t=0$. At a minimum point of $t(\u)=-1/\operatorname{trace}{C}$, if $\det\operatorname{Hess}{t(\u)}>0$ holds, then by Theorem \ref{thm:main}, the singular point at $\u$ is ${\mathcal A}$-equivalent to the lips. Izumiya and Kossioris \cite{ikcons} have developed an unfolding theory and classified the generic singularities of multi-valued solutions in general dimensions. According to it, the first singular point of (\ref{eq:charasurf}) is generically the lips, although they did not give a condition for a singular point to be equivalent to the lips. Using our criterion for the lips, we detect the singular point and write down an explicit condition for the singular point to be equivalent to the lips.
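The eigenvalue computation above can be verified symbolically: writing $c_{ij}=a_ib_j$ with $a_i=f_i''(\phi)$ and $b_j=\partial\phi/\partial u_j$, the shape operator $C$ is a rank-one outer product. A SymPy sketch of this check (the symbol names are ours):

```python
import sympy as sp

a1, a2, b1, b2, mu, t = sp.symbols("a1 a2 b1 b2 mu t")

# c_ij = f_i''(phi) * phi_{u_j} is an outer product, so rank C <= 1.
C = sp.Matrix([[a1*b1, a1*b2], [a2*b1, a2*b2]])

# Eigenvalue equation: det(C - mu*I) = mu*(mu - trace C).
p = (C - mu * sp.eye(2)).det()

# Discriminant of g_t: det(I + t*C) = 1 + t*trace C, so g_t is
# singular exactly when t = -1/trace C (provided trace C != 0).
d = (sp.eye(2) + t * C).det()
print(sp.factor(p), sp.expand(d))
```

Both identities confirm that $-t^{-1}$ must equal the unique non-zero eigenvalue $\operatorname{trace}C$ of $C$.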
As a corollary, we give a simple proof that the first singular point of (\ref{eq:charasurf}) is generically the lips. Since the single conservation law (\ref{singcons}) is determined by functions $(f_1,f_2)$ and the initial value $\phi$, we may regard the space of single conservation laws as the space $$ \{(f_1,f_2,\phi)\}= C^\infty(\boldsymbol{R},\boldsymbol{R})^2\times C^\infty(\boldsymbol{R}^2,\boldsymbol{R}) $$ with the Whitney $C^\infty$-topology. \begin{theorem} There exists a residual subset\/ ${\mathcal O}\subset C^\infty(\boldsymbol{R},\boldsymbol{R})^2\times C^\infty(\boldsymbol{R}^2,\boldsymbol{R})$ such that for any\/ $(f_1,f_2,\phi)\in{\mathcal O}$, the map germ\/ $(\ref{eq:charasurf})$ defined by\/ $(f_1,f_2,\phi)$ at the first singular point with respect to\/ $t>0$ is\/ ${\mathcal A}$-equivalent to the lips. \end{theorem} Here, a subset is {\it residual\/} if it is a countable intersection of open and dense subsets. \begin{proof} Since, for a non-vanishing function $w$, the critical points of $w$ and of $w^{-1}$ coincide and the Hessians there agree up to multiplication by a non-zero factor, we may calculate these quantities for $1/t=-\operatorname{trace}C$ instead of $t$.
By a direct calculation, we have $$ \begin{array}{l} \Xi_1(\u):=(1/t)_{u_1}= f_1^{(3)}\, (\phi_{1})^2 + f_2^{(3)}\, \phi_{1}\,\phi_{2} + f_1''\,\phi_{11}+ f_2''\, \phi_{12},\\ \Xi_2(\u):= (1/t)_{u_2}= f_1^{(3)}\, \phi_{1}\,\phi_{2}+f_2^{(3)}\, (\phi_{2})^2 + f_1''\,\phi_{12} + f_2''\, \phi_{22} \end{array} $$ and $$ \begin{array}{l} \Xi_3(\u):=\det\operatorname{Hess}{(1/t)}=\\ f_1^{(4)}\Bigg[ f_1^{(3)}\,(\phi_{1})^2\Big( (\phi_{1})^2\,\phi_{22} -2\,\phi_{1}\,\phi_{2}\,\phi_{12} +(\phi_{2})^2\,\phi_{11} \Big)\\ \hspace{14mm} +f_1''\,\phi_{1}\Big((\phi_{1})^2\,\phi_{122} -2\,\phi_{1}\,\phi_{2}\,\phi_{112} +(\phi_{2})^2\,\phi_{111}\Big)\\\hspace{16mm} +f_2^{(3)}\,\phi_{1}\,\phi_{2}\, \Big((\phi_{1})^2\,\phi_{22} - 2\,\phi_{1}\,\phi_{2}\, \phi_{12} + (\phi_{2})^2\,\phi_{11} \Big)\\\hspace{22mm} +f_2''\,\phi_{1}\, \Big((\phi_{1})^2\, \phi_{222} - 2\,\phi_{1}\,\phi_{2}\, \phi_{122} + (\phi_{2})^2\,\phi_{112} \Big) \Bigg] \\ +f_2^{(4)}\Bigg[ f_1^{(3)}\phi_{1}\phi_{2}\Big( (\phi_{1})^2\,\phi_{22} -2\,\phi_{1}\,\phi_{2}\,\phi_{12} +(\phi_{2})^2\,\phi_{11} \Big)\\ \hspace{14mm} +f_1''\phi_{2}\Big( (\phi_{1})^2\,\phi_{122} -2\,\phi_{1}\,\phi_{2}\,\phi_{112} +(\phi_{2})^2\,\phi_{111} \Big) \\\hspace{16mm} +f_2^{(3)}\,(\phi_{2})^2\,\Big( (\phi_{1})^2\,\phi_{22} -2\,\phi_{1}\,\phi_{2}\, \phi_{12} + (\phi_{2})^2\,\phi_{11} \Big)\\\hspace{22mm} +f_2''\,\phi_{2}\,\Big( (\phi_{1})^2\,\phi_{222} -2\,\phi_{1}\,\phi_{2}\, \phi_{122} + (\phi_{2})^2\,\phi_{112} \Big) \Bigg] \\ +(f_1^{(3)})^2\Big[ \phi_{1}\phi_{11}(3\phi_{1}\phi_{22} +2\phi_{2}\phi_{12})-4(\phi_{1})^2(\phi_{12})^2 -(\phi_{2})^2(\phi_{11})^2\Big] \\[2mm] + (f_2^{(3)})^2\Big[ \phi_{2}\phi_{22} (2\phi_{1}\phi_{12} + 3\phi_{2}\phi_{11}) -(\phi_{1})^2(\phi_{22})^2 -4(\phi_{2})^2(\phi_{12})^2 \Big] \\[2mm] + f_1^{(3)}f_2^{(3)} \Big[ -2\,(\phi_{1})^2\,\phi_{12}\,\phi_{22} -4\,\phi_{1}\,\phi_{2}\,(\phi_{12})^2\\ \hspace{35mm} +8\,\phi_{1}\,\phi_{2}\,\phi_{11}\,\phi_{22} -2\,(\phi_{2})^2\,\phi_{11}\,\phi_{12} \Big] \\ +f_1^{(3)} 
\Bigg[ f_1''\Big( 3\,\phi_{1}\,\phi_{11}\,\phi_{122} -4\,\phi_{1}\,\phi_{12}\,\phi_{112} \\\hspace{30mm} -2\,\phi_{2}\,\phi_{11}\,\phi_{112} +\phi_{1}\,\phi_{22}\,\phi_{111} +2\,\phi_{2}\,\phi_{12}\,\phi_{111} \Big) \\\hspace{12mm} +f_2''\Big( -4\phi_{1}\,\phi_{12}\,\phi_{122} +3\,\phi_{1}\,\phi_{11}\,\phi_{222} \\\hspace{30mm} -2\,\phi_{2}\,\phi_{11}\,\phi_{122} +\phi_{1}\,\phi_{22}\,\phi_{112} +2\,\phi_{2}\,\phi_{12}\,\phi_{112} \Big) \Bigg] \end{array} $$ $$ \begin{array}{l} +f_2^{(3)}\, \Bigg[ f_1''\Big( 2\phi_{1}\,\phi_{12}\,\phi_{122} +\phi_{2}\,\phi_{11}\,\phi_{122} \\\hspace{30mm} -2\,\phi_{1}\,\phi_{22}\,\phi_{112} -4\,\phi_{2}\,\phi_{12}\,\phi_{112} +3\,\phi_{2}\,\phi_{22}\,\phi_{111} \Big)\\\hspace{12mm} +f_2'' \Big( 2\phi_{1}\,\phi_{12}\,\phi_{222} -2\,\phi_{1}\,\phi_{22}\,\phi_{122} \\\hspace{30mm} -4\,\phi_{2}\,\phi_{12}\,\phi_{122} +\phi_{2}\,\phi_{11}\,\phi_{222} +3\,\phi_{2}\,\phi_{22}\,\phi_{112} \Big) \Bigg]\\ + (f_1'')^2\, \Big(\phi_{111}\,\phi_{122}-(\phi_{112})^2\Big) +(f_2'')^2\, \Big(\phi_{112}\,\phi_{222} -(\phi_{122})^2\Big)\\ \hspace{60mm}+ f_1''f_2''\, \Big( \phi_{111}\,\phi_{222}-\phi_{112}\,\phi_{122}\Big), \end{array} $$ where for the sake of simplicity, we set $$ \begin{array}{l} f_{\ell}':=\dfrac{d f_{\ell}}{dy},\ f_{\ell}'':=\dfrac{d^2 f_{\ell}}{dy^2},\ f_{\ell}^{(m)}:=\dfrac{d^m f_{\ell}}{dy^m},\ (\ell=1,2,\ m=3,4) \\ \phi_{i}:=\dfrac{\partial \phi}{\partial u_i},\ \phi_{ij}:=\dfrac{\partial^2 \phi}{\partial u_iu_j},\ \text{and}\ \phi_{ijk}:= \dfrac{\partial^3 \phi}{\partial u_iu_ju_k},\ (i,j,k=1,2). 
\end{array} $$ Next we consider a map $$ \begin{array}{l} j^4(f_1,f_2,\phi):(y,\u)\mapsto \big(j^4f_1(y),j^4f_2(y),j^4\phi(\u) \big)\\ \hspace{50mm}\in J^4(\boldsymbol{R},\boldsymbol{R})^2\times J^4(\boldsymbol{R}^2,\boldsymbol{R}) \end{array} $$ and four subsets of the jet space $J^4(\boldsymbol{R},\boldsymbol{R})^2\times J^4(\boldsymbol{R}^2,\boldsymbol{R})$ as follows: $$ \begin{array}{l} \hat{\Xi}_0:=\big\{j^4(f_1,f_2,\phi)(y,\u) \,\vert\,y-\phi(\u)=0\big\}\\ \hat{\Xi}_1:=\big\{j^4(f_1,f_2,\phi)(y,\u) \,\vert\,\Xi_1(\u)=0\big\}\\ \hat{\Xi}_2:=\big\{j^4(f_1,f_2,\phi)(y,\u) \,\vert\,\Xi_2(\u)=0\big\}\\ \hat{\Xi}_3:=\big\{j^4(f_1,f_2,\phi)(y,\u) \,\vert\,\Xi_3(\u)=0\big\}. \end{array} $$ Since the coordinates of $J^4(\boldsymbol{R},\boldsymbol{R})^2\times J^4(\boldsymbol{R}^2,\boldsymbol{R})$ are given by the source coordinates together with the values of the derivatives of the functions, $\hat\Xi_0$, $\hat\Xi_1$, $\hat\Xi_2$ and $\hat\Xi_3$ are algebraic subsets with respect to these coordinates. Comparing the coefficients of $\phi_{11}$ and $\phi_{22}$ in $\Xi_1$ and $\Xi_2$, we see that $\Xi_1$ and $\Xi_2$ do not have a common factor. Moreover, $f_1''\phi_{1}(\phi_{2})^2$ is the coefficient of $\phi_{111}f_1^{(4)}$ in $\Xi_3$, but this term does not appear in either $\Xi_1$ or $\Xi_2$. Hence $S:=\cap_{i=0}^3\hat{\Xi}_i$ is a closed algebraic subset of codimension $4$ in $J^4(\boldsymbol{R},\boldsymbol{R})^2\times J^4(\boldsymbol{R}^2,\boldsymbol{R})$, and it admits a standard stratification. Applying the Thom jet transversality theorem to $j^4(f_1,f_2,\phi)$ and $S$, there exists a residual subset ${\mathcal O}\subset C^\infty(\boldsymbol{R},\boldsymbol{R})^2\times C^\infty(\boldsymbol{R}^2, \boldsymbol{R})$ such that for any $(f_1,f_2,\phi)\in {\mathcal O}$, the map $j^4(f_1,f_2,\phi)$ is transverse to $S$. 
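The displayed expansions of $\Xi_1$ and $\Xi_2$ can be double-checked by computer algebra. The sketch below is ours, not part of the proof; it assumes that, up to an additive constant, $1/t = f_1''(\phi)\,\phi_{1} + f_2''(\phi)\,\phi_{2}$ (an assumption chosen to be consistent with the stated derivatives) and verifies the expansions symbolically with SymPy:

```python
import sympy as sp

u1, u2, y = sp.symbols('u1 u2 y')
f1, f2, phi = sp.Function('f1'), sp.Function('f2'), sp.Function('phi')
p = phi(u1, u2)

def fd(f, n):
    # n-th derivative of f, evaluated at y = phi(u1, u2)
    return sp.diff(f(y), y, n).subs(y, p)

# Candidate for 1/t (up to an additive constant) -- our assumption,
# chosen so that its partial derivatives reproduce Xi_1 and Xi_2
inv_t = fd(f1, 2) * p.diff(u1) + fd(f2, 2) * p.diff(u2)

Xi1 = sp.diff(inv_t, u1)
Xi2 = sp.diff(inv_t, u2)

# The expansions displayed in the text
Xi1_paper = (fd(f1, 3) * p.diff(u1)**2
             + fd(f2, 3) * p.diff(u1) * p.diff(u2)
             + fd(f1, 2) * p.diff(u1, 2)
             + fd(f2, 2) * p.diff(u1, u2))
Xi2_paper = (fd(f1, 3) * p.diff(u1) * p.diff(u2)
             + fd(f2, 3) * p.diff(u2)**2
             + fd(f1, 2) * p.diff(u1, u2)
             + fd(f2, 2) * p.diff(u2, 2))

assert sp.simplify(Xi1 - Xi1_paper) == 0
assert sp.simplify(Xi2 - Xi2_paper) == 0
```

The same approach extends in principle to $\Xi_3=\det\operatorname{Hess}(1/t)$, although the resulting expansion is far larger.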
Since the codimension of $S$ is $4$, which exceeds the dimension of the source, transversality means that the image of $j^4(f_1,f_2,\phi)$ does not intersect $S$. Moreover, if $g_{t_0}$ has a beaks singularity at $\u_0$, then $g_{t_0-\varepsilon}$ has a singularity near $\u_0$ for any sufficiently small $\varepsilon>0$, so the beaks never appears at the minimal value of $t$ for which $g_t$ is singular. Thus any $(f_1,f_2,\phi)\in {\mathcal O}$ satisfies the desired condition. \end{proof} \smallskip The author would like to thank Professors Goro Akagi, Shyuichi Izumiya and Farid Tari for fruitful discussions and helpful advice.
\section{Introduction} \input{sec01-intro} \section{Background and Related Work} \input{sec02-rw} \section{Four Spatial Awareness Tools} \label{section:tooldesc} \input{sec03-fourtools} \section{User Study} \input{sec04-userstudy} \section{RQ1 Results: Aspects of Spatial Awareness Important to Visually Impaired Players} \label{sec:affresults} \input{sec05-RQ1results} \section{RQ2 Results: Comparison of Spatial Awareness Tools for Virtual Worlds} \label{sec:toolresults} \input{sec06-RQ2results} \section{Discussion: Takeaways from RQ1 and RQ2 Together } \label{sec:discussion_section} \input{sec07-discussion} \section{Future Work} \label{sec:disc} \input{sec08-futurework} \section{Limitations} \input{sec09-limitations} \section{Conclusion} \input{sec10-conclusion} \begin{acks} We would like to thank Michael Malcolm and Sebasti\'{a}n Mercado for their assistance during our pilot tests. We would also like to extend our sincere gratitude toward our study participants for their participation and to the anonymous reviewers for their helpful feedback. Mason Hayes, Hannah Huddleston, and Matthew Donnelly were funded by National Science Foundation Grants 2051053 and 2051060. The opinions, findings, conclusions, and/or recommendations expressed are those of the authors and do not necessarily reflect the views of the National Science Foundation. \end{acks} \bibliographystyle{ACM-Reference-Format} \subsection{What do we mean by ``spatial awareness''?} \label{sec:affdesc} Spatial awareness, as used in this work, refers to a user's awareness of their surrounding environment and of their own state within the environment~\cite{Klippel2010, Yang2011}. Past literature within physical world contexts has shown spatial awareness to be multifaceted. Thus, in this work, we investigate RQ1 and RQ2 with respect to \textit{six} distinct aspects of spatial awareness we identified through prior work. 
Specifically, we surveyed existing research on cognitive map formation and spatial awareness for VIPs within the physical world, looking for explicit information on which aspects of spatial awareness are most important to VIPs. We chose to investigate the following six types of spatial awareness since they were mentioned as important across a breadth of prior research~\cite{RowellUngar2005, GiudiceLegge2008, Giudice2020, Kacorri2016, Hill1993, Yatani2012, Klatzky1998, Epstein2017}:\\ \begin{adjustwidth}{0.5cm}{} \noindent \textbf{Types 1 \& 2:} \textbf{\textit{Scale} and \textit{shape} of the area.} Prior work --- mainly in tactile maps~\cite{RowellUngar2005, HolmesArditi1998} and echolocation~\cite{Andrade2021} --- has found area shape to be important to VIPs in obtaining a general impression of the area, which can be especially crucial when exploring and trying to learn about the environment.\\ \noindent \textbf{Type 3:} \textbf{\textit{Position \& orientation}.} Researchers have found that understanding where one is within a mental map of the area (for example, one's Cartesian coordinates or heading direction in degrees) --- that is, within an \textit{allocentric}~\cite{Klatzky1998}, map-like mental representation of the environment --- is vital to continuously updating one's current state within the environment and thus to moving through it effectively~\cite{Klatzky1998, Epstein2017, Giudice2020}. 
Yet, prior work in physical world contexts~\cite{GiudiceLegge2008} has shown that obtaining this understanding is especially demanding for VIPs.\\ \noindent \textbf{Types 4 \& 5:} \textbf{\textit{Presence} and \textit{arrangement} of items.} Researchers have emphasized that providing VIPs with the information necessary to perceive the locations of objects can allow them to infer spatial relationships between objects and can lead to increased spatial awareness and more accurate cognitive maps~\cite{GiudiceLegge2008, Hill1993}.\\ \noindent \textbf{Type 6:} \textbf{Areas \textit{adjacent} to the player's current area.} Prior work with physical world tactile maps~\cite{RowellUngar2005} and mobile-based spatial tactile feedback for communicating geographical information~\cite{Yatani2012} have underscored the importance of understanding the global structure of the world --- general overviews of an area and spatial relationships between multiple areas --- for VIPs, which can help them plan out routes and backtrack as needed.\\ \end{adjustwidth} Although prior work has determined these six aspects of spatial awareness to be important to VIPs in the physical world, video games are very different from the physical world. Within the physical world, practicality and physical safety are extremely important factors~\cite{Banovic2013}, while in video games, agency and pleasure (fun) are very important, and VIPs' in-game ``safety'' may not always be a major concern. It is possible that, due to these differences, VIPs may find certain aspects of spatial awareness more or less important within games when compared to the physical world. We, thus, use RQ1 to explore these preferences. \subsection{How do games supplement spatial awareness?} \label{sec:rw-sub2} Games made for VIPs often use ambient signals to provide \textit{implicit} spatial awareness to players. 
These ambient signals usually take the form of environmental audio cues that communicate information about the player’s immediate environment. For example, hearing running water may indicate that there is a waterfall or stream near the player. When environmental sounds reverberate, the player may realize that they are inside a cave or tunnel, and the extent of the reverberation can indicate the size of the cave or tunnel. The use of 3D sound can additionally communicate the direction of a sound source relative to the player. Although ambient signals may be sufficient for simple environments, they can become less useful to players as environments become more complex, as is typical for many mainstream 3D games. Ambient cues can become overwhelming when there are too many items in the environment, and they may also be vague, giving players little information about what the sounds they are hearing actually represent. As a result, using ambient signals alone as a means to facilitate spatial awareness for players limits the complexity of games that accessible game designers are able to make. Accessible game designers, thus, face a tradeoff between designing environments that are interesting and designing games that are still accessible and playable by VIPs~\cite{Andrade2018, SmithNayar2018}. Given the limitations of implicit forms of spatial awareness, accessible game designers often turn to creating tools that \textit{explicitly} communicate spatial awareness information to players. These spatial awareness tools (or SATs) --- which include (but are not limited to) tactile maps, radar systems, and grid systems --- supplement implicit spatial awareness cues by clarifying environmental information and affording players greater control over what information they hear and when they hear it. Table~\ref{tab:sat-table} shows an overview of SATs from prior work. 
Below we review some explicit approaches for facilitating spatial awareness in games and in the physical world. \input{sec02-TABLE-explicitSATs} \subsubsection{Facilitating spatial awareness for VIPs within video games.} \label{sec:rw-sat-virtual} \hfill\newline Tools that explicitly communicate spatial awareness information to VIPs are not commonplace within mainstream video games. Most examples, instead, come from ``audio games'' (audio-based games created for VIPs), which generally provide players with spatial awareness by presenting environments in the form of lists and grids that players can query. This technique is employed by many well-known audio games, including \textit{Terraformers}~\cite{PinInteractive2003a, Westin2004}, \textit{A Hero’s Call}~\cite{OutOfSightGames2017}, and \textit{ShadowRine}~\cite{Matsuo2016}. These representations may communicate the presence and arrangement (Types 4~\&~5 from Section \ref{sec:affdesc}) of items and points-of-interest and are sometimes further supplemented by additional tools such as radars and compasses. Several examples of SATs have come from the research community as well. A notable example is NavStick~\cite{Nair2021pp, NairSmith2020}, which repurposes a game controller's right thumbstick to allow VIPs to ``look around'' their in-game surroundings via line-of-sight. A directional scanning system like NavStick could allow VIPs to determine the presence and spatial arrangement of objects around them (Types 4~\&~5) as well as their relative position and orientation within the game world (Type~3). A notable exception to the lack of SATs in mainstream games is \textit{The Last Of Us Part 2}, a 3D action-adventure game released in 2020~\cite{NaughtyDog2020}, which introduced an ``enhanced listen mode'' for VIPs. The enhanced listen mode provides spatial awareness to players by placing 3D audio beacons at the locations of nearby enemies and other points-of-interest at the press of a button. 
The beacons may give players a sense of the spatial arrangement of items in the area (Type~5) as well as a sense of the surrounding area's scale (Type~1). \subsubsection{Facilitating spatial awareness for VIPs in the physical world.} \label{sec:rw-sat-physical} \hfill\newline Some audio-based tools within the physical world have features that explicitly provide VIPs with spatial awareness information and can thus inform the design of SATs for game worlds. NavCog3~\cite{Sato2017}, a turn-by-turn indoor navigation system for VIPs, for example, emits notifications about nearby landmarks and points-of-interest to promote awareness in the user of their presence (Type~4). Similarly, Microsoft Soundscape~\cite{MicrosoftResearch2018}, an audio-based wayfinding system that can be used by VIPs, uses 3D sound to communicate the presence and relative direction (i.e., arrangement, Type~5) of nearby landmarks. The spatial awareness that these systems provide is, however, limited. For example, they do not provide any information about the area's shape and size (Types 1~\&~2). Tactile-based systems provide spatial awareness by providing overviews of areas~\cite{RowellUngar2005, HolmesArditi1998}, which may include the scale and shape of an area (Types 1~\&~2), the presence and arrangement of landmarks and other points-of-interest (Types 4~\&~5), and even what areas may be adjacent to a given area (Type~6). These not only include physical tactile maps but also mobile-based tactile systems, such as Timbremap~\cite{Su2010} and SmartTactMaps~\cite{Gotzelmann2015}, which can allow VIPs to survey the area they are in using a commodity smartphone. Echolocation, which has been explored for both physical~\cite{Kish2009, ThalerGoodale2016, Norman2021} and virtual~\cite{Andrade2018, Andrade2020} environments, is another technique that VIPs may use to gain spatial awareness within environments. 
Using the acoustic properties of the environment can allow individuals to learn about the structure of the area they are in, including the scale and shape of the area (Types 1~\&~2), as well as the presence and arrangement of nearby objects (Types 4~\&~5)~\cite{ThalerGoodale2016, Kish2009}. \subsection{Smartphone Map} \label{sec:toolmap} The smartphone map interface, shown in the upper-left corner of Figure \ref{fig:tools}, uses a smartphone-based touchscreen map that works in tandem with the game. The player can use their finger to survey the map. As the player moves through the level, the map will automatically pan and rotate in real-time, allowing users to explicitly keep track of their own position and orientation, respectively. The smartphone map interface represents prior work in tactile-based maps to support spatial awareness. Tactile maps in the physical world have been shown to support spatial awareness in VIPs by providing general overviews of spaces and landmarks~\cite{HolmesArditi1998, RowellUngar2005}. Our work with video games necessitates a digital solution; as such, the smartphone map interface we implemented also derives from prior work in touchscreen-based accessible graphics, particularly in presenting floor plans and other maps to VIPs~\cite{Su2010, Gotzelmann2015, Goncu2011, Goncu2015, Giudice2020Maps}. When a player places their finger on the screen, they will begin surveying at their position, regardless of where on the screen they are touching. As they move their finger, they will survey the map relative to their position, with the app announcing anything that the player touches. The app will announce all items in the world (as well as the player's position) using sound effects and/or text-to-speech. The app only reacts to touches within the current room that the player is in --- if the player drags their finger outside the room, a continuous warning tone will play. 
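A rough sketch of the relative-to-player surveying just described follows; the names, the pixel-to-world scale, and the rectangular room model are our own illustrative assumptions, not the study's Unity implementation:

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    x: float
    y: float

def survey(touch_dx, touch_dy, player_pos, room_bounds, items,
           scale=0.05, radius=0.5):
    """Map a finger displacement (in pixels) from the initial touch point to
    a world-space probe anchored at the player's position.  Returns the name
    of a touched item, 'warning' if the probe leaves the current room, or
    None if nothing is under the finger."""
    px, py = player_pos
    # The initial touch registers at the player's position regardless of
    # where on the screen the player touched; movement is then relative.
    wx, wy = px + touch_dx * scale, py + touch_dy * scale
    xmin, ymin, xmax, ymax = room_bounds
    if not (xmin <= wx <= xmax and ymin <= wy <= ymax):
        return 'warning'  # would play the continuous warning tone
    for it in items:
        if (it.x - wx) ** 2 + (it.y - wy) ** 2 <= radius ** 2:
            return it.name  # would be announced via sound effect / TTS
    return None
```

For instance, a touch with no displacement probes the player's own position, while dragging far enough to leave the current room's bounds returns the warning condition.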
In the first version of this tool, players started surveying at the portion of the map where their finger touched the screen; however, our visually impaired pilot participants ended up spending large amounts of time searching for their current position, which frustrated them. As a result, our second and final version registers a player’s initial touch at their current position. \subsection{Whole-Room Shockwave} \label{sec:toolshockwave} The whole-room shockwave, depicted in the upper-right corner of Figure \ref{fig:tools}, uses an acoustic shockwave that the player triggers to communicate information about their surroundings. When the shockwave hits anything in the room, an announcement and/or sound effect emanates from that object via 3D sound. The shockwave corresponds to real-world physics in that closer objects will emanate their sounds back to the player before objects that are further away. If the player moves while the shockwave is active, the rate of expansion will match the player's speed. The whole-room shockwave originated from our explorations in echolocation, which has been shown to promote spatial awareness in VIPs by communicating physical properties of the room and nearby objects~\cite{Kish2009, Norman2021, ThalerGoodale2016}. Our initial echolocation prototype had players press a button on their game controller to emit a click sound originating from the player’s position, similar to how some VIPs use echolocation within the physical world~\cite{Kish2009, Kolarik2014}. Our echolocation prototype was similar to virtual echolocation techniques used in prior work~\cite{Andrade2018} that used Steam Audio’s built-in head-related transfer function~\cite{ValveSoftware2019} to generate sound reflections based on the physical structure of each room. In our pilot tests, however, we found that echolocation by itself was not equivalent to the other tools. 
While echolocation communicates the raw layout of an area, the other tools can communicate raw layout \textit{in addition to} specific object information through sound effects and text-to-speech. Furthermore, our visually impaired pilot participants were not at all experienced in echolocation and did not know how to decode and interpret the sound echoes in our game environment; they could only interpret broad qualities of the area such as how large it was. Although users could learn to use echolocation, prior work has indicated that it may take weeks for users to learn how to use click-based echolocation effectively~\cite{Norman2021}. We, thus, made modifications to the initial \textit{echolocation} design and created the \textit{whole-room shockwave} --- a refined and more comprehensible version of echolocation. In its first iteration, the shockwave announced \textit{every} item that it hit, which proved to be auditorily overwhelming. Furthermore, both visually impaired participants found the shockwave to be too fast. As a result, our second and final version halved the speed of the shockwave and implemented a filtering mechanism. Players can press the right button on the D-pad to cycle through four filtering options --- all objects, mission-critical points-of-interest, non-mission-critical (decorative) objects, and walls only. Only items within the selected category will emit sounds during a shockwave. \subsection{Directional Scanner} The directional scanner, illustrated in the lower-left corner of Figure \ref{fig:tools}, allows players to survey in any direction using the right thumbstick. Players use the tool by tilting the thumbstick in any direction. This triggers an announcement naming the first object that lies in that direction via line-of-sight with respect to the player’s current position and orientation. The announcement is made via 3D sound from the point of the object in space. 
If the first object in a direction being pointed at is \textit{not} an object of interest (i.e., a wall or other generic obstruction), the scanner will emit a 440 Hz sine tone from the direction of the obstruction. This tool represents prior work that has sought to replicate the act of ``looking around'' (or \textit{directionally scanning} an area) to promote spatial awareness for VIPs. We take particular inspiration from NavStick~\cite{Nair2021pp, NairSmith2020}, which introduced the concept of directional scanning within game worlds and showed how VIPs enjoyed the ability to survey their game environments directly by ``looking around.’’ Some prior work with directional scanning also exists in the physical world. Talking Points 3~\cite{Yang2011} is one such example: It features a ``Directional Finder'' that provides a list of landmarks that lie in the general direction that a VIP points their mobile device. Our implementation of the directional scanner was derived from NavStick, and we did not implement any major changes to it as a result of our pilot tests. \subsection{Simple Audio Menu} The simple audio menu, shown in the lower-right corner of Figure \ref{fig:tools}, represents the idea of using a list to promote spatial awareness --- in particular, by allowing VIPs to learn about the contents of the area they are currently in. Many audio games made for VIPs, such as \textit{Terraformers}~\cite{Westin2004, PinInteractive2003a} and \textit{A Hero's Call}~\cite{OutOfSightGames2017}, use list- and grid-based representations to present the world to VIPs. The simple audio menu we implemented exposes an audio-based list of points-of-interest (POIs). Players use the tool by pressing the left bumper button to open a list of POIs within the room they are currently in. As the player scrolls through the list using the D-pad, they will hear each item’s associated sound effect and text-to-speech announcement. 
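A minimal sketch of this list behavior (class and method names are our own; this is not the study's implementation):

```python
class SimpleAudioMenu:
    """Sketch of a list-based spatial awareness tool: points-of-interest in
    the current room, kept in a stable alphabetical order and scrolled with
    the D-pad."""

    def __init__(self, points_of_interest):
        # A stable alphabetical ordering does not change as the player
        # moves, unlike proximity- or direction-based orderings.
        self.items = sorted(points_of_interest, key=str.lower)
        self.index = 0

    def scroll(self, direction):
        """direction is +1 (D-pad down) or -1 (D-pad up); the selection
        wraps around.  Returns the newly selected item, whose sound effect
        and text-to-speech announcement would then be played."""
        if not self.items:
            return None
        self.index = (self.index + direction) % len(self.items)
        return self.items[self.index]
```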
The simple audio menu is modeled after list interfaces used in some audio games as well as prior research~\cite{Westin2004, PinInteractive2003a, Nair2021pp} in that it employs an alphabetical ordering of items. Previous research has suggested that, for a linear menu, a stable alphabetical ordering is less confusing than a proximity-based or direction-based ordering, both of which can change as the player moves~\cite{Nair2021pp}. Similar to the directional scanner, we did not implement any major changes to the simple audio menu as a result of our pilots. \subsection{Game: \textit{Dungeon Escape}} \textit{Dungeon Escape} is a 3D third-person adventure game set in a fantasy world, in which the player must escape small dungeons by finding objects that allow them to clear obstacles. We chose to create \textit{Dungeon Escape} to address RQ1 and RQ2 because the game requires players to use the tools they are given to search for and understand where objects are located and how the rooms in each level are laid out in order to succeed --- thus testing how well they are able to gain spatial awareness using those tools. We created \textit{Dungeon Escape} using the Unity game engine~\cite{UnityTechnologies2020}, and we designed the dungeon’s layout using the Dungeon Architect Unity asset~\cite{DungeonArchitect}. Figures~\ref{fig:tools} and~\ref{fig:zoomsession} show views from \textit{Dungeon Escape}. \textit{Dungeon Escape} consists of four levels (small dungeons), which allowed us to study the four SATs within separate dungeon layouts. Figure~\ref{fig:leveloverhead} shows overhead views of all four main levels and the trial level. In each main level, the player must reach a goal area by gaining passage through an obstacle: either a locked door, a cracked wooden door, a spider web, or a dog blocking the exit. To do so, the player must find a relevant object in another room: a key, an axe, a burning torch, or a bone, respectively. 
Each level consists of several rooms scattered with decorative objects such as crates and barrels. \begin{figure}[] \centering \includegraphics[width=0.47\textwidth]{figures/level_top_down_figures.png} \caption{Overhead views of \textit{Dungeon Escape}'s trial level and four main levels. All levels have a common set of points-of-interest: a start point, a key, an obstacle that the key affords passage through, and a goal checkpoint. Each level, however, possesses a unique layout, allowing us to evaluate the four spatial awareness tools within a variety of layouts.} \Description{Bird’s eye views of five different room layouts. The four main levels each consist of 5 to 7 rooms with two larger “halls.” The trial level only consists of 3 rooms.} \label{fig:leveloverhead} \end{figure} We generated the four level layouts by deriving them from a single Dungeon Architect ``grid flow.'' This grid flow defined basic parameters from which Dungeon Architect would generate levels. Each level consisted of: \begin{itemize} \item A ``start room'' within which the player first spawns. \item An ``obstacle room'' containing the obstacle. \item A ``key room'' containing the object that clears the obstacle. \item A ``main hall'' connecting the start, key, and obstacle rooms. \item A ``final hall'' containing the goal checkpoint. \end{itemize} We then fed random seed values into this grid flow to generate the final layouts. This allowed us to have unique level layouts while keeping them equivalent in terms of difficulty and structure. The trial level followed a similar conceptual structure but was much smaller, consisting of a start room, a combined key-and-obstacle room, and a final hall with the checkpoint. Players move the main character with the left thumbstick. Tilting it forward and backward will move the character forward and backward. Tilting it left and right will rotate the character left and right. 
This control scheme reflects controls found in mainstream 3D games such as \textit{Tomb Raider}~\cite{TombRaider}, \textit{Resident Evil} 1-5~\cite{ResidentEvil1, ResidentEvil2, ResidentEvil3, ResidentEvil4, ResidentEvil5}, \textit{Metroid Prime} 1-3~\cite{MetroidPrime1, MetroidPrime2, MetroidPrime3}, \textit{Heavy Rain}~\cite{HeavyRain}, and \textit{Silent Hill}~\cite{SilentHill}, which use a fixed over-the-shoulder camera and use left/right on the left thumbstick to rotate the character. The right thumbstick is used by the directional scanner condition; thus, to eliminate a confound, we removed right thumbstick controls from all other conditions. Players can press the bottom face button to pick up an object or to use an object to remove the relevant obstacle. Players hear a scraping sound if they physically hit an obstruction; the sound will be situated in the direction of contact. Keys, obstacles, and checkpoints play a relevant sound once the player is within two meters of the object. Players will hear the name of the room (for example, ``Start Room'' or ``Key Room'') announced on entry, and they can also press the right face button to hear the room name on-demand. We integrated these sounds to allow VIPs to be informed of these events --- i.e., hitting a wall or entering a room --- when they occur. Sighted players can perceive these events solely via sight, but VIPs require notifications via other means. We also implemented a ``rotation indicator'' utility that helps players understand how much they are rotating when they turn left or right using the left thumbstick. As the player rotates, a click sound will be played at $15\degree$ increments via 3D sound \textit{only} in the direction of the player’s objective (i.e., the obstacle that must be cleared). The rotation indicator mimics snap rotation controls found in many games created for VIPs~\cite{Westin2004, PinInteractive2003a, Kaldobsky2011}, which allow players to snap to pre-defined angle increments. 
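The $15\degree$ click logic can be sketched as follows (a simplification of our own; the in-game version additionally spatializes each click toward the objective via 3D sound):

```python
import math

def rotation_clicks(prev_heading, new_heading, step=15.0):
    """Number of step-degree boundaries crossed as the heading (in degrees)
    changes from prev_heading to new_heading; each crossing would trigger
    one click sound."""
    return abs(math.floor(new_heading / step) - math.floor(prev_heading / step))
```

For example, rotating from $14\degree$ to $16\degree$ crosses one boundary and yields one click, while rotating within a single $15\degree$ band yields none.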
In order to bring \textit{Dungeon Escape}'s controls closer to the free movement of mainstream 3D games, we gave players full analog control via the left thumbstick but maintained the feedback afforded by snap rotation via the rotation indicator. The rotation indicator was available across all four tools and pointed in the direction of the objective regardless of any intervening obstacles. Similar utilities have also been implemented in prior work that has investigated navigation by VIPs within virtual environments~\cite{Andrade2018, Andrade2021, Nair2021pp}. Additionally, players could place looping audio beacons on objects of interest so that they could lock onto and keep targets within their “field of view.” Once placed, these beacons emit a looping sound, which players can use to orient themselves and move towards the target. With NavStick, players point at a target with the right stick and press the right bumper button to place the beacon. With the simple audio menu, players scroll to a target and press the left bumper. With the smartphone map, players tap on the upper one-fifth of the screen to place a beacon at the last announced target. There was no mechanism for beacon placement with the whole-room shockwave. We added the beacons exclusively for guidance purposes to speed up the process of walking toward a target --- players still need to use an SAT to find objects and other targets before they can place a beacon at that object/target. \subsection{Participants} We recruited nine participants for this study. In our pre-study questionnaire, eight described themselves as being completely blind and one (P1) described themselves as having light perception only. All participants were male and have had their vision impairments from birth. Six participants were 18--25 years old; two (P5 \& P9) were 26--35 years old; and one (P3) was 36--45 years old. 
In addition to having visual impairments, two participants (P3 \& P6) reported having slight hearing loss in one of their ears. We recruited participants through posts on the AudioGames.net Forum, an online discussion board that centers around audio-based games and is frequented by VIPs. Six of our participants reported themselves as being very experienced with video and other electronic games (4+ on a 5-point Likert scale), while the other three (P2, P6, \& P8) reported themselves as being moderately experienced with games (3 on a 5-point Likert scale). \begin{figure}[] \centering \includegraphics[width=0.47\textwidth]{figures/ZoomCallFigure.png} \caption{Remote study session with a participant and two facilitators. The participant is currently sharing their screen. Within the game, a blue key is situated to the participant's right. The participant will need to collect that key to progress through the level. \textit{(Faces obscured to protect anonymity.)}} \Description{Screen capture from a remote study session. A view of Dungeon Escape as a participant plays through it is alongside three obscured faces.} \label{fig:zoomsession} \end{figure} \subsection{Technical Setup \& COVID-19 Challenges} We performed this study remotely due to the COVID-19 pandemic and the difficulties that VIPs may face in travelling to our institution. We sent each participant an executable of our game for them to download to their computer before their study appointment. The game included all of the SATs except for the smartphone map. We distributed that tool as both iOS and Android apps using the Google Firebase App Distribution service~\cite{FirebaseAD}. 
We designed both \textit{Dungeon Escape} and the smartphone map to connect with a cloud backend, which allowed both components to synchronize with each other, and allowed us to remotely observe and control the runtime state of participants’ games using a custom-built control panel. We held the study appointments over Zoom and asked participants to share their computer audio (and, optionally, video) with us. Although there was no way for us to see the smartphone map during the study, most participants’ microphones picked up the sound from the app. Figure \ref{fig:zoomsession} shows a study session in progress. The study and our data collection efforts were approved by the Columbia University Institutional Review Board (IRB). \subsection{Procedure} \label{sec:procedure} To address RQ1, we began the session by administering a two-part pre-study questionnaire. The first part requested demographic information alongside information about participants' existing experience with video games and physical world navigation. The second part directly asked participants about how important they find each of the six types of spatial awareness --- that we identified in Section \ref{sec:affdesc} --- within a video game context. For each type, responses were given on a 5-point unipolar Likert scale where 1 indicated that the type of spatial awareness was not-at-all important and 5 indicated that it was extremely important. Afterwards, we placed participants in a room within the game where we introduced basic movement and interaction controls. For each tool, we first placed participants into the trial level. We explained how to use the tool and afterwards allowed participants to traverse the trial level at their own leisure. The trial level was the same across all tools. After the trial level, we placed participants into one of the four main levels. 
Although all participants played the four levels in the same order, we counterbalanced the order of the tools themselves via a Latin square design to reduce any variations caused by order effects. In order to address RQ2, we administered a two-part post-level questionnaire; we did this after participants traversed a level with a tool. In the first part, we asked participants to elaborate on their impressions of the tool, what they think is missing, and in what game situations they might use the tool. In the second part, we gauged how well participants thought the tool satisfied each of the six types of spatial awareness. Responses were given on five-point scales, where 1 indicated that the tool facilitated that type not-at-all well and 5 indicated that it facilitated that type extremely well. Participants were encouraged to elaborate on all questions. After completing all four levels, we administered a two-part post-study questionnaire. In the first part, we asked participants to consider a scenario where they were able to play the levels in \textit{Dungeon Escape} using more than one tool at once; we did this in order to determine if using multiple tools at once could have improved participants' spatial awareness in any way. As part of this section, participants were asked to provide two combinations of two tools each that they would have liked to use if they were given the chance to do so. (We should note that \textit{Dungeon Escape} is capable of activating two tools at once; however, in our initial pilot tests, including additional game levels to test these combinations made study sessions well exceed our limit of two hours.) In the second part, we asked participants how likely they were to recommend each individual tool to a friend or colleague, assuming they had the same visual impairments as the participant. 
Responses were given on a 10-point net promoter score scale~\cite{Reichheld2003}, where 1 indicated they were very unlikely to recommend it and 10 was very likely. \subsection{Data Collection \& Analysis} We administered all questionnaires by having the facilitator read out each question and input the participant's response into an internal Google Form. For all choice- and rating-based questions, we asked for participants' open-ended opinions via the questionnaire itself by explicitly following up on their responses. The facilitator was also encouraged to follow up on any other points they found interesting throughout the session --- though they were not allowed to disturb the participant while a game level was in progress. We have included the questionnaires as part of our supplementary material. We recorded all sessions with participants' permission for transcription purposes. We also obtained raw data of participants' actions within the game by capturing in-game logs. To analyze sessions, we followed an inductive coding process that involved five members of the research team. Individual coders went through session transcripts and coded quotes and other events. Then, all five coders iterated on the codes together until there was unanimous agreement that they could not iterate further. \subsection{Rank 1: Position and Orientation [Type 3]} Participants found position and orientation to be the most important aspect of spatial awareness within a video game context. Six participants explicitly affirmed this aspect of spatial awareness as the most important because it was crucial to determining their current state within the game world: \begin{quote} \textit{“You have an idea of how fast you’re turning and in what direction. 
I would say it’s the most important thing.”} --- \textbf{P3} \end{quote} Another participant who echoed this sentiment, P9, recounted extensive experience with shooter audio games, such as \textit{Swamp}~\cite{Kaldobsky2011}, that require players to move through a complex environment. P9 affirmed position \& orientation awareness --- and thus, awareness of their current state --- as extremely important to helping them plan out future actions, which is a crucial aspect of shooter-type games: \begin{quote} \textit{“You have to know where you are at and where you are oriented to in order to know where to go and what to do next.”} --- \textbf{P9} \end{quote} These opinions reflect work within the physical world that has found position and orientation to be important to VIPs~\cite{GiudiceLegge2008, Giudice2020}. They also establish that SATs for VIPs within video games must satisfy a high bar in terms of communicating position and orientation information. In Section \ref{sec:toolresults}, we determine if any of the four tools we implement for this study satisfy this high bar. \subsection{Rank 2 (three-way tie): Presence, Arrangement, and Adjacent Areas [Types 4, 5, and 6]} After position and orientation, participants found the next most important aspects of spatial awareness to be the presence and arrangement of items within the space (Types 4 \& 5) and information about areas adjacent to their current area (Type 6). These aspects are all important to participants in certain contexts but, unlike position and orientation, not in all situations. As P3 and P9 implied in their quotes in the previous subsection, position and orientation awareness grants players a sense of their \textit{current state} within the world. Prior work in the physical world has found ascertaining this knowledge to be cognitively demanding for VIPs as they move through an environment~\cite{GiudiceLegge2008, Lewis2015}.
This increased cognitive load can interfere with VIPs' ability to understand \textit{other} aspects of spatial awareness, making position and orientation awareness essential. Seven participants found presence to be very important for spatial awareness because if an SAT did well at communicating presence, then they could be confident that they would not miss finding anything within the game: \begin{quote} \textit{“If you hear [a familiar object], you know you are relatively in the right place and you can search the area specifically.”} --- \textbf{P7} \end{quote} Five participants explained that the importance of knowing the arrangement of items is heavily context-dependent. For example, when faced with objectives that involve finding a specific item, participants believed that knowing the arrangement of items was very helpful because it would help them figure out where to go first. However, participants noted that having knowledge about items' arrangement may be detrimental in less restrictive, exploration-oriented tasks since that knowledge may reveal too much information and rob players of the enjoyment of discovering items for themselves: \begin{quote} \textit{“[Knowing arrangement] depends on what the task is [at hand].
It's especially [important] if it involves triggering certain things in certain orders, finding an item then finding a person, or facing off against a challenger then finding an NPC.”} --- \textbf{P7} \end{quote} Six participants felt that having an SAT communicate which regions are adjacent to the one they are currently in would be beneficial as it would make navigation through the game world easier: \begin{quote} \textit{“To be honest, I’ll say [having an SAT communicate adjacencies is] extremely important because it makes it much easier for the player to move from one area to another without moving through the whole map.”} --- \textbf{P5} \end{quote} However, three others feared that having this information presented outright may make exploration and discovery less fun. One such participant was P4 who was a fan of games that required a high level of strategy. P4 asserted that the game should preserve a level of challenge and instead convey connections to other navigable places using plot and contextual cues such as dialogue or readable signs in the world itself: \begin{quote} \textit{“If it's one of those strategy games where you have to discover it on your own, it’s not important. Let’s say it’s a hidden area, [...] it should stay hidden.”} --- \textbf{P4} \end{quote} The other two participants, P2 and P6, echoed similar sentiments. This finding is quite surprising: These three participants thought that an SAT did not necessarily need to communicate adjacencies \textit{despite} us identifying this as a basic aspect of spatial awareness. This points to the importance that participants place in their \textit{experience} within the game over the actual \textit{information} they receive, implying that VIPs may be willing to sacrifice receiving some pieces of information for the sake of a more interesting gaming experience. 
\subsection{Rank 5 (two-way tie): Scale and Shape [Types 1 \& 2]} Participants generally found scale and shape information about an area to be the least important aspects of spatial awareness. Their opinions revolved around the sentiment that, unlike the other types of spatial awareness, scale and shape information may be outright unnecessary much of the time. Seven participants stated that having a sense of the room's scale was not important to them and that SATs should focus on conveying information about the presence and location of nearby objects instead: \begin{quote} \textit{“I feel when you are navigating in games you don't really need to know how big the area is as long as you know where the objects in that area are.”} --- \textbf{P1} \end{quote} Seven participants thought that an SAT should only convey information about the surrounding area’s shape when absolutely necessary and that communicating shape information is too much for an SAT to do, possibly resulting in information overload. However, these participants also thought that knowing shape information in some situations may make navigation more efficient --- for example, in a situation where the room does not have a circular or rectangular shape: \begin{quote} \textit{“If the room is an odd shape --- every time I play a game, I assume the room is like a square, but that’s not always the case, sometimes rooms may have [many] sides, parts that jut out --- so I believe it’s a consideration.”} --- \textbf{P3} \end{quote} In a situation where a room is not rectangular, knowing the room’s shape could help players plan out their movements more carefully. Otherwise, players may resort to hugging walls to traverse the room and search for doors, which may become frustrating.
\subsection{Since participants could trace out the contours of a room with their finger, the smartphone map communicated the shape of an area better than other tools.} \label{sec:spmapshape} Figure \ref{fig:heatmap} shows that participants’ ratings on how the smartphone map communicated the shape (Type~2) of the room were generally the highest out of all four tools. Five participants resonated with the following sentiment: \begin{quote} \textit{``[The smartphone map] gives you a general idea of where openings and spaces are. It [also] gives you an idea of how far each edge is from your center point which tells you, ‘OK, [the wall] angles a bit.’''} --- \textbf{P6} \end{quote} One participant even used this ability to their advantage. For example, in the irregularly-shaped final room of Level 3, P8 used the smartphone interface to trace out the walls of the room and determine that they needed to turn a corner to reach the goal checkpoint: \begin{quote} \textit{“Having that memory of ‘Oh, I know a little bit more about the shape of the room than I had previously with the other tools.’ --- that really helped me get a better sense of exactly where I needed to go in terms of [knowing that] I have to round a corner instead of trying to run directly forward for the target.”} --- \textbf{P8} \end{quote} \subsection{The whole-room shockwave allowed participants to quickly ascertain a general overview of an area, especially with respect to its scale.} Figure~\ref{fig:heatmap} shows that the whole-room shockwave tied with the directional scanner as the best tool at communicating an area's scale (Type~1). Participants found that the distance-based volume attenuation afforded by \textit{Dungeon Escape}'s 3D sound system and the delayed timing of objects’ sounds during a shockwave helped them approximate how far away objects were and, thus, how big the room was.
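This delay-plus-attenuation mechanic can be sketched in a few lines: each object's sound plays later and quieter the farther it sits from the player. Everything below (the constants, function name, and example room layout) is our own illustrative assumption, not \textit{Dungeon Escape}'s actual implementation.

```python
import math

# Sketch of a shockwave scheduler: each object's sound is delayed in
# proportion to its distance from the player (as if an expanding wave
# reached it) and attenuated with distance, so nearer objects sound
# sooner and louder. All names and constants are illustrative
# assumptions, not Dungeon Escape's actual implementation.

WAVE_SPEED = 5.0      # assumed wavefront speed (world units per second)
REFERENCE_DIST = 1.0  # distance at or below which gain is 1.0

def schedule_shockwave(player_pos, objects):
    """Return (name, delay_sec, gain) per object, ordered by delay."""
    events = []
    for name, pos in objects.items():
        d = math.dist(player_pos, pos)
        delay = d / WAVE_SPEED                          # farther -> later
        gain = REFERENCE_DIST / max(d, REFERENCE_DIST)  # farther -> quieter
        events.append((name, delay, gain))
    return sorted(events, key=lambda e: e[1])

events = schedule_shockwave(
    (0.0, 0.0),
    {"barrel": (2.0, 0.0), "door": (8.0, 6.0), "key": (0.5, 0.5)},
)
for name, delay, gain in events:
    print(f"{name}: plays after {delay:.2f}s at gain {gain:.2f}")
```

In this sketch, a player standing at the origin hears the nearby key almost immediately at full volume and the distant door two seconds later at a tenth of the volume, which is the kind of distance cue participants describe.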
P8 was one such participant; they described themselves as ``not-at-all experienced'' with echolocation techniques in the pre-study questionnaire. (Recall from Section \ref{sec:toolshockwave} that we derived the whole-room shockwave from our explorations in echolocation.) Yet, they relayed the following positive sentiment which was shared by many other participants despite their inexperience with echolocation: \begin{quote} \textit{“Even though doors and objects were further away from me, I was still able to know that they are still in fact there. [The shockwave] helped me quickly gauge `OK, cool. I know I'm in a corridor [...] and I know there is a door in the far end, and so this helps me determine on a higher level how big the room might be.'”} --- \textbf{P8} \end{quote} The quick nature of the shockwave particularly advantaged participants within areas with many items. One such area within \textit{Dungeon Escape} was the irregularly shaped room mentioned in Section \ref{sec:spmapshape}, which contained obstacles in the form of barrels and crates. Participants who used the whole-room shockwave hit the checkpoint in that room much faster ($M =$ 19 sec., $SD =$ 2.8 sec.) than those who used the directional scanner ($M =$ 51 sec., $SD =$ 17.7 sec.), simple audio menu ($M =$ 123 sec., $SD =$ 5.0 sec.), and smartphone map ($M =$ 135 sec., $SD =$ 15.5 sec.). The shockwave provided participants with an almost-instant overview of what obstacles were in the room. However, those who used the other tools spent a much longer time searching for these very obstacles --- scrolling through each item (using the menu), trying to point at them (using the directional scanner), or trying to find them on a map (using the smartphone). \subsection{Participants made extensive use of the whole-room shockwave's filters.} All nine participants made use of the whole-room shockwave's filtering mechanism.
Six of them explicitly mentioned that this ability was extremely important to them and that they highly valued it even in applications outside of games. One participant invoked the customizability of screen readers as an example: \begin{quote} \textit{“Everybody has different needs and wants so I really believe in allowing information to be filtered in such a way where you just get the information you need as you need it like in the shockwave. [...] Screen readers have settings like this for a reason.”} --- \textbf{P3} \end{quote} \subsection{The physical use of the right stick in the directional scanner meant that participants could obtain a clear idea of how items were arranged around themselves.} Our findings with the directional scanner provided insights into how physically moving a joystick to survey an environment might provide players with an enhanced sense of its layout. Five participants explicitly mentioned that moving the joystick to ``look around'' allowed them to understand how objects were arranged: \begin{quote} \textit{“Because of where I had to put the stick to see stuff around me, it really helped. It was easier to tell what was behind me, what was in front of me, or what was in any other direction because I knew where my stick position was.”} --- \textbf{P6} \end{quote} P6 went on to say that surveying with the joystick “felt natural” and compared the directional scanner to a camera which they could use to ``look'' for objects. This sentiment is further reflected in Figure~\ref{fig:heatmap}, which shows that the directional scanner scored the highest out of all four tools in terms of communicating the arrangement of items (Type~5). \begin{figure*} \centering \includegraphics[width=0.83\textwidth]{figures/AllToolPaths.png} \vspace{-4mm} \caption{Illustration of paths taken by participants within the Level 2 key room.
The subfigures depict four different participants traversing the same room with different tools; the key is located in the lower left corner, and participants enter through the door at the bottom. Paths are divided into segments --- the end of a segment represents a point where the participant paused to survey using the tool.} \Description{Four overhead views of the same room. Each view has a series of connected lines that represent a participant's path through the room while using the corresponding tool. The paths for the directional scanner, shockwave, and smartphone map cover greater areas of the room than that of the simple audio menu.} \label{fig:pathplots} \end{figure*} \subsection{The simple audio menu’s straightforward presentation of items meant that participants received information about the presence of items extremely well.} Participants found that the simple audio menu clearly communicated the presence of every item in the room. As seen in Figure \ref{fig:heatmap}, the simple audio menu received an average/median rating of 5.0 with a standard deviation of zero on communicating the presence of items (Type~4) --- every participant gave the menu the maximum possible score on this aspect of spatial awareness. Participants unanimously agreed that they were able to obtain a clear idea of what was in the room because of its straightforward presentation: \begin{quote} \textit{“You know everything that’s there because it’s in a menu. There’s nothing hidden. It doesn't matter if you’re far away from it. If you’re in the same room as it, it's on that list. That’s something I really like.”} --- \textbf{P3} \end{quote} Yet, some participants complained that the simple audio menu provided \textit{too much} information. Within games, surprise and exploration --- that is, the ability to “discover” aspects of the game world --- are core elements for making a game fun for players~\cite{LeBlanc2008}.
Knowing the presence of objects so easily can remove this aspect of discovery from the game. Indeed, five participants felt that the simple audio menu did not promote exploration and made the game less enjoyable: \begin{quote} \textit{“I thought that I was using a shortcut. [...] I like that it’s faster, but it takes something out of the game experience.”} --- \textbf{P2} \end{quote} Within \textit{Dungeon Escape} itself, participants tended to avoid deviating from the task at hand while using the simple audio menu, going directly to POIs they needed to go to. Figure \ref{fig:pathplots} plots paths taken by participants within the key room in Level 2. Note how P3 went straight to the key when using the simple audio menu in Level 2, while participants who used other tools in the same level roamed around the room in an effort to survey their surroundings more thoroughly. This behavior is also visible in the raw time data we collected within the room: Those who used the menu collected the key much faster ($M =$ 17 sec., $SD =$ 4.2 sec.) than those who used the shockwave ($M =$ 84 sec., $SD =$ 6.5 sec.), smartphone ($M =$ 93 sec., $SD =$ 17.5 sec.), and directional scanner ($M =$ 105 sec., $SD =$ 5 sec.). Although players completed the levels with the simple audio menu (and often did so quickly), it remains an open question whether players' increased focus on objectives and lack of exploration is a net positive for the game experience or not. \subsection{No tool excelled at communicating position and orientation.} \label{sec:posorienttools} As we found in Section~\ref{sec:affresults}, participants rated position and orientation (Type~3) as the most important aspect of spatial awareness to them. However, post-level ratings indicate that participants perceived all four tools to be mediocre at facilitating position and orientation information. 
As Figure~\ref{fig:heatmap} shows, the average score that each SAT received in terms of affording position and orientation information was a low-to-mid three (``moderately well''). Our results indicate that these four tools may not meet the high bar required for such an important aspect of spatial awareness. \subsection{Participants disliked having to juggle multiple pieces of hardware when using the smartphone map.} Five participants mentioned that they found the smartphone map cumbersome to use. Participants often found themselves needing to physically switch between their controller and their smartphone when they wanted to explore the map. Furthermore, at least six participants used noise-cancelling headphones during their sessions and had to adjust \textit{them} as well to hear the smartphone's audio output. These experiences annoyed some participants: \begin{quote} \textit{``I have mixed emotions [about the smartphone map] because I have to do one thing on one device and then move with the other device [...] That made things a bit confusing and annoying.''} --- \textbf{P2} \end{quote} \subsection{The simple audio menu did not communicate scale and shape well.} As Figure \ref{fig:heatmap} shows, participants thought that the simple audio menu communicated the scale (Type~1) and shape (Type~2) of areas quite poorly with average ratings of around 2 out of 5. The simple audio menu did not explicitly communicate boundaries or other characteristics of the room itself. As such, many participants could not definitively determine the structure (scale and shape) of the surrounding area using the menu: \begin{quote} \textit{``I could probably use [the simple audio menu’s] 3D sounds to assume that, say, a bunch of items were against a wall if they’re coming from the same general side relative to me [...]
but that’s an educated guess.''} --- \textbf{P6} \end{quote} \subsection{Participants found the whole-room shockwave to be overwhelming, which negatively affected their spatial awareness.} Five participants felt that despite communicating scale relatively well, the whole-room shockwave provided too much information, which negatively affected their sense of spatial awareness. We were surprised by this finding since we designed the shockwave to emit slowly for better intelligibility, and every participant used the filters to make the shockwave easier to understand. Yet, despite these improvements, participants still felt that the shockwave was too information-dense, making it difficult for them to ascertain information about their environment. Our conversations with them yielded insights into how VIPs view similar echolocation-inspired tools within other games. P3, who described themselves as having played ``everything'' when asked about their gaming experience during the pre-study, was especially vocal: \begin{quote} \textit{“Everybody thinks you can just send out a sonar ping and get information about an environment. [...] Echolocation is very overwhelming, especially in a game. Trying to hone in on an item that is far away and being masked by another item is ludicrous.”} --- \textbf{P3} \end{quote} \subsection{Participants preferred combinations of tools that excelled across multiple spatial awareness aspects.} Figure \ref{fig:heatmap} gives us an interesting perspective on how combinations of tools can best facilitate multiple aspects of spatial awareness jointly. As stated in Section \ref{sec:procedure}, we asked participants to state their two most preferred combinations of tools. The (directional scanner + simple audio menu) combination was among the most popular, with five participants selecting it.
This combination ``wins'' in four out of the six aspects of spatial awareness: position \& orientation, presence of items, arrangement of items, and communicating adjacent areas. Furthermore, this combination is ``tied'' with the whole-room shockwave at being the best at conveying area scale. Five participants also selected the (directional scanner + whole-room shockwave) combination, which ``wins'' at \textit{three} of the six aspects of spatial awareness. We discuss these selections further in Section \ref{sec:discussion_section}. \subsection{Position and orientation is the most important type of spatial awareness for VIPs, yet is \textit{not} well-served by current tools.} \label{sec:posorientdisc} Our results indicate that communicating position and orientation well is a crucial challenge that must be addressed when designing future SATs. As we reported in Section \ref{sec:posorienttools}, participants rated position and orientation as the most important aspect of spatial awareness to them. Yet, they also felt that all four tools were mediocre at facilitating position and orientation information. Surprisingly, this includes the smartphone map, which was the tool that most explicitly communicated position and orientation information, as we described in Section~\ref{sec:toolmap}. This indicates that communicating position and orientation information to VIPs is harder than researchers assume and that a major opportunity for future research is to develop better indicators for VIPs' position and orientation. Previous research has shown that VIPs rely heavily on landmarks and other environmental features to determine their position and orientation, which, in turn, allows them to navigate through environments~\cite{YuGanz2012, Fallah2012}. These landmarks include walls and other boundaries dictating the area's scale and shape (Types 1 and 2) as well as the layout of items within the space (Type 5).
From our findings, however, we see that VIPs do not find it suitable to merely infer their position and orientation from these other cues and would rather benefit from having it communicated more explicitly. \vspace{2mm} \begin{center} \setlength{\fboxsep}{0.8em} \fbox{\begin{minipage}{0.4\textwidth} \textbf{Design Implication \#1}: VIPs will benefit greatly from a purpose-built tool for communicating position and orientation in real time. \end{minipage}} \end{center} \vspace{1mm} \subsection{The four most important aspects of spatial awareness are covered by two tools.} \label{sec:twotoolsdisc} If we consider the four most important aspects of spatial awareness from our RQ1 findings --- position and orientation (Type~3), item presence (Type~4), item arrangement (Type~5), and adjacent areas (Type~6) --- we can see that the combination of the directional scanner with the simple audio menu ``wins'' at communicating all four of these types. We do not consider area scale (Type~1) and area shape (Type~2) because participants found them to be the least important; however, we can also see that the directional scanner is tied for ``winning'' scale as well. From a theoretical standpoint, this implies that VIPs would most gravitate toward this combination, and indeed, we saw precisely this during our study. The fact that participants were excited about the (directional scanner + simple audio menu) combination makes sense: they gravitated toward the pairing that facilitates the greatest number of spatial awareness aspects well. \vspace{2mm} \begin{center} \setlength{\fboxsep}{0.8em} \fbox{\begin{minipage}{0.4\textwidth} \textbf{Design Implication \#2}: Of today's SATs, the combination of the directional scanner and simple audio menu gives VIPs the greatest spatial awareness.
\end{minipage}} \end{center} \vspace{1mm} \begin{figure}[] \centering \includegraphics[width=0.47\textwidth]{figures/nps_box.png} \caption{Box plot of net promoter score responses for all four tools. Red lines indicate median. The whole-room shockwave received some of the lowest scores out of all four tools.} \Description{Box plot of Net Promoter Score responses with respect to all four tools. The simple audio menu received a median score of 9; the directional scanner 8; the smartphone map 7; and the whole-room shockwave 8. The shockwave had the largest range of all four tools.} \label{fig:npsbox} \end{figure} \subsection{VIPs highly value the ability to customize SATs.} In addition to the (directional scanner + simple audio menu) combination, five participants also picked the (directional scanner + whole-room shockwave) combination as one of their favorite combinations. Unlike the former combination, however, the latter combination only ``wins'' at three of the six types of spatial awareness (i.e., the two tools cover winning values for three columns in Figure~\ref{fig:heatmap}). Additionally, Figure \ref{fig:npsbox} shows that the shockwave had some of the lowest net promoter scores out of all of the tools, and participants even complained about the tool being overwhelming. This finding implies that there exists a consideration that VIPs may find \textit{even more} important than raw spatial awareness. One possible explanation lies in the fact that the whole-room shockwave was the only tool that participants could change the behavior of --- in this case, selecting the type of information they wanted to hear. Participants' enthusiasm for customizable tools --- especially evident in their comparisons with other tools such as screen readers --- shows that SATs should implement similar capabilities, allowing VIPs to take control of what they hear.
\vspace{2mm} \begin{center} \setlength{\fboxsep}{0.8em} \fbox{\begin{minipage}{0.4\textwidth} \textbf{Design Implication \#3}: SATs should embrace customizability, allowing VIPs to customize and filter the information communicated. \end{minipage}} \end{center} \vspace{1mm} \subsection{Toward \textit{optimally} communicating each spatial awareness type.} Our findings showed that communicating some aspect of spatial awareness well is not simply about doing so to the maximum extent possible. For example, when it comes to conveying the presence of items (Type~4), the simple audio menu facilitated it perfectly (receiving perfect Likert scores), but many participants disliked how it listed every item within the room they were currently in. They thought that the menu communicated too much information --- enough to affect how much fun they had playing the game. These findings indicate that --- particularly within video games --- communicating a specific type of spatial awareness optimally does not necessarily mean communicating it at the maximum possible level. Future work should address what ``optimal'' really means in terms of communicating each type of spatial awareness. For example, in the case of item presence information (Type~4): What is the proper level of item presence that should be communicated to the player, and what factors --- such as game objectives --- may influence the level of item presence a tool should communicate? Similar questions can be extended to the other types as well. \subsection{Toward purpose-built hardware.} \label{sec:hardwarerw} Participants revealed that they disliked juggling multiple pieces of hardware while using the smartphone map. Future touch-based SATs should reduce the number of devices required. One possibility involves using touchpads found on game controllers such as the DualShock 4~\cite{Dualshock4}, DualSense~\cite{Dualsense}, and Steam Controller~\cite{SteamController}.
Hybrid touchscreen controller devices, such as the Nintendo Switch~\cite{NintendoSwitch} and the Steam Deck~\cite{SteamDeck}, are also promising alternatives. \subsection{Applications for physical world navigation.} In Section \ref{sec:affresults}, we addressed RQ1 by reporting participants' preferences for the six types of spatial awareness within a video game context. Future work could explore VIPs' preferences within the \textit{physical} world and see how they differ from their preferences within video games. We found that some of our results resemble prior work in the physical world: Participants generally agreed that position \& orientation was extremely important --- in our study, they collectively saw it as the \textit{most} important type of spatial awareness. In a similar vein, much physical world work for both visually impaired and sighted people has found position \& orientation awareness to be very important~\cite{Klatzky1998, Epstein2017, Giudice2020, GiudiceLegge2008, Kacorri2016}. Participants also generally found knowledge of item presence, item arrangement, and adjacent areas to be relatively important as well --- reflecting prior work that has echoed the importance of inter-object and inter-area relationships in promoting spatial awareness~\cite{GiudiceLegge2008, Hill1993, RowellUngar2005, Yatani2012}. We were surprised, however, when we found that participants did not find scale and shape awareness to be very important within video games. This differs from much prior work from physical world contexts --- especially in the realm of tactile maps and echolocation --- that has found general overviews of spaces, including information such as scale and shape, to be crucial for spatial awareness~\cite{RowellUngar2005, HolmesArditi1998}. An interesting direction for future work may involve repeating the study presented in this paper, but in the physical world, to enable a direct comparison. 
The physical world presents its own challenges and circumstances. SATs' accuracy within physical environments and VIPs' physical safety considerations~\cite{Banovic2013}, for example, could influence how important VIPs find the various types of spatial awareness and even how they wayfind and explore using the tools. A direct comparison can help the community establish a hierarchy among the types of spatial information that we know are important to VIPs. It can also help the community establish formal principles for prioritizing the display of different types of information during physical world navigation.
\section{Introduction} The landscape of mobile communication service requirements is rapidly changing with the proliferation of digitization technologies. Consequently, in the future, more emphasis needs to be placed on location specific services in different vertical sectors. Hospitals, shopping malls, smart cities, industries and universities are identified as some of the common locations that benefit heavily from these location specific services. Location specific requirements stipulate high demands on reliability, high data rates, low latency, privacy and security. The key focus of the future 5G wireless systems is to serve such case specific requirements along with the provisioning of the traditional mobile broadband services~\cite{euroadmap}. These case specific and localized requirements are expanding beyond the current capabilities of the traditional Mobile Network Operators (MNOs), whose services are often designed to serve the masses. To cater to the location specific future communication requirements, the need for establishing local 5G networks is evident. To speed up local service delivery with 5G networks, the present mobile communication market needs to be opened to local 5G networks deployed by different stakeholders, as recently proposed in the micro operator (uO) concept~\cite{matinmikko2017micro}. Unlike the traditional MNOs with wide area coverage, uOs are local operators who intend to offer case specific and location specific services through locally deployed 5G networks~\cite{matinmikko2017micro1}. Therefore, the system architecture for a uO should be carefully designed in such a way that it enables efficient and reliable local service delivery. Since many services have very stringent requirements that can only be met through cellular network technology, a uO is itself a 5G service provider, and the uO system architecture must contain the network functions defined by the 3rd Generation Partnership Project~(3GPP).
Since uOs are specialized to provide tailored services, the system architecture and its specific deployment may also depend on the use case. Given the novelty of the uO concept, the uO system architecture has not yet been defined in a comprehensive manner. In this regard, we propose a descriptive architecture for the emerging 5G uO, which provides user specific and location specific services in a spatially confined environment. The architecture is discussed for a smart factory environment that supports Industry 4.0 standards, with Augmented Reality (AR) as a typical use case~\cite{automation}. The architecture comprises the 5G network functions and operational units which entail the core and access networks to cater to the communication of the AR use case. In order to realize the conceptual design and present simulation results, we compare two network deployment models for a factory, one served by a local uO and the other served by a traditional MNO. Based on the simulation results, we discuss the benefits of having uOs in 5G for specialized user requirements, rather than continuing with the traditional MNO driven approach. The remainder of the paper is organized as follows: Section~\ref{sec:RelatedWork} describes the related work on uOs, the generic 5G architecture and Industry 4.0. Section~\ref{sec:usecaseindustry} describes the use case for which the uO architecture is defined, and Section~\ref{sec:experimentalsetup} presents the experimental setup and its key parameters. Section~\ref{sec:discussion} compares and discusses the simulation results for a typical MNO setting and the proposed uO architecture. Finally, Section~\ref{sec:conclusions} concludes the paper with future research directions.
\section{Background and Related Work} \label{sec:RelatedWork} The expected key characteristics of the future 5G wireless systems are identified as extremely high data rates, ultra reliability and low latency, and massive communication between devices~\cite{shafi20175g}. Moreover, three specific areas of 5G services are distinguished: enhanced mobile broadband (eMBB), ultra reliable and low latency communication (URLLC) and massive machine type communication (mMTC)~\cite{itudoc}. Based on the communication needs of different verticals, future 5G operators must possess the capability of providing case specific services in addition to the present generic communication services. Local 5G networks are gaining increasing attention in regulation, research and industry. The concept of the uO was proposed to expand the mobile communication market by allowing new stakeholders to deploy local 5G networks that complement the conventional MNOs. The uOs are expected to provide tailored 5G services and fulfill case specific and versatile local wireless communication needs with extremely low latency~\cite{matinmikko2017micro}. A 5G uO can operate a closed network to serve its own customers, an open network to offer its services to other MNOs' customers, or a mix of both. Key regulatory elements and the techno-economic aspects related to uOs are discussed in~\cite{matinmikko2017micro1}. Business model options for local 5G uOs and the different network deployment options are discussed in~\cite{ahokangas2016future}. The network architecture of a 5G uO should also comprise the network functions of the generic 5G architecture~\cite{matinmikko2017micro}. 3GPP has already released the specifications for the 5G system architecture~\cite{architecture}. Instead of the network elements defined in the Evolved Packet Core (EPC) of 4G systems, Software Defined Networking~(SDN) and Network Function Virtualization~(NFV) are used to create Network Functions~(NF) in the 5G system architecture.
Network functions can be implemented on dedicated hardware, as software instances on dedicated hardware, or as virtualized functions instantiated on an appropriate platform such as a cloud. The concept of network functions gives operators flexibility over the functionality of the underlying physical infrastructure of the 5G network. The 3GPP specifications represent the 5G architecture in two ways: \begin{itemize} \item \textbf{Service based representation:} Shows how NFs within the control plane enable other authorized NFs to access their services, as in Figure \ref{fig_service}. \item \textbf{Reference point representation:} Shows the point-to-point interaction existing between two NFs, as depicted in Figure \ref{fig_point}. \end{itemize} \begin{figure}[ht] \centering \includegraphics[width=0.45\textwidth]{svc123.jpg} \caption{Service based representation of 5G system architecture~\cite{architecture}} \label{fig_service} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.45\textwidth]{ref.jpg} \caption{Reference point representation of 5G system architecture~\cite{architecture}} \label{fig_point} \end{figure} Different locations belonging to different vertical sectors (i.e., hospitals, campuses, factories, etc.) have their own communication requirements, hence the network architecture also depends on these requirements. In this paper, we consider the Industry 4.0 or smart factory environment to develop the uO architecture. Industry 4.0 refers to the advancement of present industries into the next generation~\cite{wollschlaeger2017future}. It aims to interconnect the devices inside factories and make them smart by adding more intelligence into the devices, ultimately resulting in improved adaptability, resource efficiency, and improved supply and demand processes between factories~\cite{varghese2014wireless}. Machine-to-Machine (M2M) communication plays a critical role in Industry 4.0 and is also a key focus in 5G systems.
Wireless Sensor Networks (WSN) in current industries are evolving towards industrial wireless networks because of the low latency, high mobility and high capacity requirements of future industries~\cite{li2017review}. 3GPP has released a study report focusing on typical use cases in Industry 4.0, such as motion control, mobile robots, augmented reality and massive wireless sensor networks~\cite{automation}. \section{Use Case and Proposed Architecture} \label{sec:usecaseindustry} Augmented Reality (AR) can be considered an application that will be heavily used in future industrial environments~\cite{automation}. In our study, we consider an AR use case in which the factory workers are supported by AR devices. They identify production flaws, obtain step by step guidance to carry out pre-defined tasks, and obtain support from their supervisors via those devices. In this context, AR devices should be highly energy efficient and lightweight. This requires the AR devices to carry out minimal processing, with the more intensive tasks carried out by a separate image processing server located inside the factory. The typical communications of an Industry 4.0 AR network are depicted in Figure \ref{fig_ar}. \begin{figure}[ht] \centering \includegraphics[width=0.47\textwidth]{ar.jpg} \caption{AR system model with offloaded processing~\cite{automation}} \label{fig_ar} \end{figure} The AR device captures images and transmits them to the image processing server. The server then processes the images and sends the augmentations back to the AR device for display. A 5G system supporting this communication should be able to provide an end-to-end latency of less than 10 ms for one way communication with 99.9\% frame delivery success~\cite{automation}. A local 5G network deployed by a uO covering the factory could be used to address the needs of the AR use case.
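The latency and reliability targets above can be combined into a quick feasibility check. The sketch below is illustrative only: the per-hop delay components and the 30 fps frame rate are assumed values, not figures from the 3GPP study; only the 10 ms one-way budget and the 99.9\% per-frame delivery success come from the text.

```python
# Illustrative check of the AR budget: one-way end-to-end latency below
# 10 ms and 99.9% frame delivery. The component delays and the frame
# rate below are assumed values for the sketch, not values from the study.

def one_way_latency_ms(access_ms, backhaul_ms, core_ms):
    """Sum of the uplink path components from AR device to server."""
    return access_ms + backhaul_ms + core_ms

def stream_success_prob(per_frame_success, frames):
    """Probability that every frame in a burst is delivered."""
    return per_frame_success ** frames

latency = one_way_latency_ms(access_ms=0.5, backhaul_ms=0.025, core_ms=1.0)
print(f"one-way latency: {latency:.3f} ms, within budget: {latency < 10.0}")

# With 99.9% per-frame delivery, a 30-frame burst (one second at an
# assumed 30 fps) still succeeds end to end about 97% of the time.
print(f"30-frame success: {stream_success_prob(0.999, 30):.3f}")
```

The compounding of the per-frame success probability over a burst shows why the 99.9\% target is demanding: even a small per-frame loss rate accumulates quickly over a continuous stream.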
Because the uO operates a local 5G network, it should comprise architectural components inherited from the generic 5G system architecture. The uO concept provides flexibility over the selection of architectural components and the location where the core network is hosted. Given the low latency requirement, the desirable implementation is to host the uO core network within the factory premises itself, though this is not mandatory. In our study, we define the architectural components needed in the core network to cater to the AR use case. Generally, the AR use case requires the 5G system to facilitate the following three steps of communication: \begin{itemize} \item Registering the AR devices into the network \item Establishing a data session between the AR device and the image processing server \item Data transfer between the AR device and the image processing server \end{itemize} The architectural components needed for completing the above steps can be identified from the message transfer between the elements of the 5G system, including the AR device, the Next Generation NodeB~(gNB) and the core network functions. We define the registration procedure for the AR device based on the 3GPP specifications~\cite{procedures}. Figure \ref{fig_regis} illustrates the message sequence between the entities in the architecture. \begin{figure}[ht] \centering \includegraphics[width=0.47\textwidth]{regprocess.jpg} \caption{Message sequence chart for AR device registration procedure } \label{fig_regis} \end{figure} The AR device initiates the registration process by sending a registration request to the gNB. The gNB forwards the request to the Access and Mobility Management Function (AMF). After that, the AMF and the AR device exchange identity request and response messages. In the next step, the AMF contacts the Authentication Server Function~(AUSF) for the device authentication. The AUSF facilitates the authentication after contacting the Unified Data Management~(UDM) and retrieving the authentication data.
Once the authentication data is received from the UDM, the AUSF sends the authentication response to the AMF. Identity request/response messages are exchanged between the AMF and the AR device again. After the identity verification, the AMF works with the Policy Control Function~(PCF) on the policy association for the AR device. Once the policy association is successful, the AMF sends an update to the Session Management Function~(SMF) informing it of the session context. The AMF also sends the registration accept message to the AR device, and the device then sends a registration complete message to the AMF, concluding the registration process. \begin{figure}[ht] \centering \includegraphics[width=0.47\textwidth]{pduprocess.jpg} \caption{Message sequence chart for session establishment between AR device and server } \label{fig_session} \end{figure} After completing the registration process, the AR device has to establish a data session with the image processing server to enable continuous data transfer. We define the Protocol Data Unit (PDU) session establishment procedure between the AR device and the image processing server based on the 3GPP specifications~\cite{procedures}. Figure \ref{fig_session} illustrates the message sequence required for the PDU session establishment process. Here, the AR device initiates the process by sending a PDU session establishment request to the AMF via the gNB. The AMF then sends a request for a new session creation to the SMF. In the next step, the SMF registers with the UDM, which subsequently stores the data related to the session. After that, the SMF sends the response to the AMF. Then, the PDU session authentication/authorization takes place by exchanging messages between the AR device, gNB, AMF, SMF, UPF and server. Once this step is completed, the SMF works with the PCF on the policy association for the session. Then the UPF and the SMF exchange the session establishment/modification request and the respective response. A message transfer from the SMF to the AMF lets the AMF know which access towards the AR device to use.
The AMF then sends the PDU session ID information to the gNB so that the gNB can work with the AR device on the gNB specific resource setup. After that, the gNB sends the acknowledgement of the PDU session request to the AMF. Based on that, the AMF sends a PDU session update request to the SMF, and the SMF then requests the UPF for session modification. Once the SMF receives the response from the UPF, it finally sends the response for the PDU session update to the AMF, completing the PDU session establishment process. After successful completion of the above steps, the AR device can send a continuous data stream to the server and retrieve the augmentations sent by the server. The entities involved in this data transfer process are the AR device, gNB, UPF and the server. Based on the above steps, the 5G network functions needed to cater to the AR use case can be identified to derive the uO architecture. Network slicing proposes a way to create logical networks on a common infrastructure to enable different types of communication services~\cite{alliance2016description, zhang2017network}. Assuming the uO requires multiple network slices to cater to a particular use case (e.g., the AR use case), the uO has to create the network slices before any actual communication happens over a selected slice. 3GPP introduces three network slice management functions for creating and managing network slices~\cite{slicing}. \begin{itemize} \item \textbf{Communication Service Management Function (CSMF):} Responsible for translating communication service related requirements into network slice related requirements. \item \textbf{Network Slice Management Function (NSMF):} Derives network slice subnet related requirements from network slice related requirements and is responsible for the management and orchestration of Network Slice Instances (NSI). \item \textbf{Network Slice Subnet Management Function (NSSMF):} Responsible for the management and orchestration of Network Slice Subnet Instances (NSSI).
\end{itemize} For the uO to create and manage multiple network slices, these three slice management functions must also be present in the architecture. When multiple slices are available to facilitate a communication, the best fitting slice must be selected before the communication begins. This is done by the Network Slice Selection Function (NSSF), an obligatory element in any uO architecture that supports multiple network slices. Figure \ref{fig_arch2} represents the proposed uO architecture for the AR use case. For the sake of clarity, all the network functions of the original 5G architecture are illustrated in Figure \ref{fig_arch2}, although the NEF, NRF and AF are not necessary for this AR use case. \begin{figure}[ht] \centering \includegraphics[width=0.47\textwidth]{aruoarch3.jpg} \caption{Proposed uO architecture for AR use case} \label{fig_arch2} \end{figure} \section{Experimental Setup} \label{sec:experimentalsetup} We consider two deployment models in our simulations of the AR use case communications. In the first model, the factory is served by a local 5G network deployed by the uO. The factory owns the AR devices and the processing server. It is assumed that the server is located in a different cell site within the factory premises. Communication is facilitated by a 5G network deployed inside the factory premises, and the core network of the uO is also located inside the factory. This setup is depicted in Figure \ref{fig_uo_sim}. \begin{figure}[ht] \centering \includegraphics[width=0.47\textwidth]{aruosim.jpg} \caption{Deployment model for uO serving the factory } \label{fig_uo_sim} \end{figure} In the second model, we assume that the entire factory is covered by a 5G network deployed by an MNO. The AR devices and the processing server are owned by the factory, and the server is located within the factory. We consider that the MNO simultaneously serves a total of \(N\) such factories with AR use cases.
Each factory has a similar network setup and similar requirements, as seen in Figure \ref{fig_fac}. The core network of the MNO is located outside the factory. Figure \ref{fig_mno_sim} illustrates the MNO based model serving the AR use case of a given factory. \begin{figure}[ht] \centering \includegraphics[width=0.47\textwidth]{factorymodel.jpg} \caption{MNO's service for \(N\) factories having AR use case} \label{fig_fac} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.47\textwidth]{armnosim.jpg} \caption{Deployment model for MNO serving the factory } \label{fig_mno_sim} \end{figure} Without loss of generality, we have assumed that the uO's processing power is 1/\(N\) of the processing power of an MNO. This reflects the assumption that the MNO's resources are equally divided among the \(N\) factories. For both the uO and MNO based models, we simulate two of the above procedures, namely the registration process and the data transfer between the AR device and the image processing server. For the simulations, we use the Omnet++ discrete event simulator~\cite{omnet} with the INET framework installed~\cite{inet}. The first step, the AR device registration process, involves one AR device connected to a small cell base station, which in turn connects to the 5G core. In this case, we model all the information flows from the AR device to the Access Network~(AN), the communication between the AN and the 5G core and back, and the information flows between the network functions that are needed for the registration process. The data transfer process is the next step considered in our experiments. The AR data stream travels through the UPF of the MNO/uO core via the 5G AN and is then routed back to the image processing server inside the factory. The server then performs the processing, generates the augmentations and initiates the data transfer to the AR device via the UPF. This is modeled as a continuous data stream between the AR device and the image processing server.
Table \ref{table:gen_par} outlines the general simulation parameters, whereas the variable parameters are explained for each experiment. The latency of the AN is based on the 3GPP study on next generation access technologies~\cite{access}, and we assume that both the MNO and uO access networks serving the AR use case have similar properties. We take the backhaul to be a fiber connection, with latency parameters selected based on a study of 5G backhaul challenges~\cite{jaber20165g}. The image processing server delay is based on the 3GPP study on communication for automation~\cite{automation}. In each experiment, we measure the end-to-end~(E2E) latency of the communications. \begin{table}[ht] \begin{center} \caption{ General simulation parameters} \label{table:gen_par} \begin{tabular}{|p{5.5cm}|c|} \hline \textbf{Parameter} & \textbf{Value}\\ \hline Latency between AR device and AN & 0.5 ms~\cite{access}\\ Latency between AN and core network&~~0.05 ms per km~\cite{jaber20165g}\\ Image processing server delay & 30 ms~\cite{automation}\\ Distance to uO core network & 500 m\\ Number of factories served by MNO (\(N\)) & 10\\ \hline \end{tabular} \end{center} \end{table} \section{Results Analysis and Discussion} \label{sec:discussion} We run several experiments to study the performance of the AR use case under the uO and MNO network deployment models. \subsection{AR Device Registration}\label{ARR} We first simulate the registration process and observe the E2E latency with respect to the distance to the MNO core network. We use the parameters shown in Table \ref{table:gen_par} and vary the distance from 500 m to 500 km in 50 km intervals. The results of the experiment are shown in Figure \ref{fig_lat_dis}. The E2E latency of the 5G uO is also illustrated in Figure \ref{fig_lat_dis}, assuming the uO core network is located at 500 m. For the uO setup, the NF processing delay is taken as 1 ms, which is 10 times the NF processing delay of the MNO.
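As a sanity check on this experiment, the linear latency behaviour can be sketched with a back-of-the-envelope model that sums the access, backhaul and NF processing delays from Table \ref{table:gen_par}. The message-pass counts below are illustrative assumptions; the exact counts follow from the sequence chart in Figure \ref{fig_regis} and are not stated explicitly in the text.

```python
# Back-of-the-envelope sweep of registration E2E latency vs. core distance.
# The 0.5 ms access delay and 0.05 ms/km backhaul delay come from Table 1;
# the message-pass counts k1, k2, k3 are illustrative assumptions.

T_ACCESS_MS = 0.5          # AR device <-> access network delay, per pass
BACKHAUL_MS_PER_KM = 0.05  # fibre backhaul delay, per pass

def registration_latency_ms(distance_km, t_nf_ms, k1=6, k2=6, k3=14):
    """E2E latency = k1*T_access + k2*T_backhaul + k3*T_NF (assumed counts)."""
    return (k1 * T_ACCESS_MS
            + k2 * BACKHAUL_MS_PER_KM * distance_km
            + k3 * t_nf_ms)

# uO: core at 0.5 km and T_NF = 1 ms; MNO: T_NF = 0.1 ms, distance swept.
uo_latency = registration_latency_ms(0.5, t_nf_ms=1.0)
for d_km in (0.5, 50, 250, 500):
    mno_latency = registration_latency_ms(d_km, t_nf_ms=0.1)
    print(f"MNO core at {d_km:6.1f} km: {mno_latency:7.2f} ms "
          f"(uO baseline {uo_latency:.2f} ms)")
```

With these assumed counts, the latency grows linearly with distance and the MNO overtakes the uO baseline once the backhaul term dominates, reproducing the qualitative behaviour of the experiment; matching the exact per-kilometre slope would require the precise pass counts of the sequence chart.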
\begin{figure}[ht] \centering \includegraphics[width=0.47\textwidth]{latencyregdis.jpg} \caption{E2E latency of AR device registration process with respect to distance } \label{fig_lat_dis} \end{figure} The E2E latency exhibits a linear increase with the distance to the core network, of approximately 0.32 ms per km. The registration process involves multiple round trips between the AR device and the core network, which causes the increase in latency. Even with a 10 times higher NF processing time, the 5G uO can still support a very low latency compared to an MNO whose core network is at a large distance. If the AR use case is to be served by an MNO, the core network should be in close proximity to the factory premises to achieve an E2E latency as low as the uO provides; it should be closer than 18.21 km, as seen in Figure \ref{fig_lat_dis}. This is not a practical implementation because the MNO serves 10 factories, which are usually in diverse geographical areas. Next, we observe the E2E latency with respect to the NF processing delay. We keep the core network distance of the uO at 500 m and that of the MNO at 250 km. The E2E latency obtained by varying the NF processing delay from 1 $\mu$s to 1 ms is shown in Figure \ref{fig_lat_proc}. \begin{figure}[ht] \centering \includegraphics[width=0.47\textwidth]{latencyregproc.jpg} \caption{E2E latency of AR device registration process with respect to NF processing delay } \label{fig_lat_proc} \end{figure} The E2E latency varies linearly with the NF processing delay for both the uO and the MNO, increasing by approximately 8.63 $\mu$s for every 1 $\mu$s increase in NF processing delay. The uO always performs better than the MNO for a given NF processing delay, because the propagation delay due to the core network distance dominates the NF processing delay. In an actual implementation, the uO 5G network is set up to serve only the AR use case, since the uO provides a tailored service to specific locations.
However, the MNO needs to handle 10 factories and different traffic types, such as mobile broadband traffic from users. Therefore, there is a high probability of a higher NF processing delay in the MNO case than in the uO case. Following the message transfer procedure of the registration process, the E2E latency of the registration process can be expressed as \begin{equation} L_{reg} = k_{1} \cdot T_{access} + k_{2} \cdot T_{backhaul} + k_{3} \cdot T_{N\!F} \end{equation} \noindent where \( L_{reg} \) is the E2E latency of the registration process, \( T_{access} \) is the delay from the AR device to the AN, \( T_{backhaul} \) is the delay from the AN to the core network, and \( T_{N\!F} \) is the NF processing delay. \( k_{1} \) and \( k_{2} \) are the numbers of times the registration messages pass through the access channel and the backhaul, respectively, and \( k_{3} \) is the number of times a message is processed at a network function. Since \( T_{backhaul} \) depends on the distance from the AN to the core network, \begin{equation} T_{backhaul} = k_{4} \cdot D_{backhaul} \end{equation} \noindent where \( D_{backhaul} \) is the distance from the AN to the core network and \( k_{4} \) is a constant related to the communication over the fiber channel. Therefore, for a given \( L_{reg} \), the distance to the MNO core network can be calculated as \begin{equation} D_{backhaul} = \frac{L_{reg} - k_{1} \cdot T_{access} - k_{3} \cdot T_{N\!F} }{k_{2} \cdot k_{4}} \end{equation} Moreover, the NF processing delay varies based on two main factors, namely the operator resources and the network traffic load: it is proportional to the network load and inversely proportional to the operator resources. \begin{equation} \text{NF processing delay} \propto \frac{\text{network load}}{\text{operator resources}} \end{equation} Therefore, we consider the following four cases of different resource levels at the MNO and identify the maximum distance to the core network at which the MNO obtains the same E2E latency as the uO supports.
\begin{itemize} \item Case 1 : MNO resources = uO resources \item Case 2 : MNO resources = 10 x uO resources \item Case 3 : MNO resources = 100 x uO resources \item Case 4 : MNO resources = 1000 x uO resources \end{itemize} Here also, we assume the MNO is serving 10 factories while the uO serves a single factory. Table \ref{table:dis_reg1} shows, for the above four cases, the maximum distance from the factory at which the MNO core network can be placed while achieving performance similar to the uO. In this first experiment, the NF processing delay of the uO is set to 1 ms; for the MNO it differs in each case because of the different available resource levels. When the resource levels of the MNO and the uO are equal, the MNO cannot match the E2E latency supported by the uO, because the MNO serves ten factories and its resources are shared among them. When the MNO resources are 10 times the uO resources, the MNO can support the same E2E latency at a distance of 500 m, because this case is equivalent to a uO serving one factory. At higher resource levels, the MNO can achieve the same E2E latency as the uO with the core network located at a certain, but not too large, distance from the factory premises. This is due to the backhaul delay, which is more prominent than the NF processing delay. When the resources of the MNO are increased from 10x to 100x of the uO, the MNO can establish the core at 18.21 km. When the resources are increased from 100x to 1000x, the distance grows only from 18.21 km to 20.52 km, a smaller gain than that obtained by increasing the resource level from 10x to 100x. This means that the distance advantage the MNO gains diminishes even as the core network resources increase. \begin{table}[ht] \begin{center} \caption{Distance to MNO core when uO \( T_{N\!F} \) = 1 ms } \label{table:dis_reg1} \begin{tabular}{ |c|c|c| } \hline \textbf{Case} & \textbf{NF Proc.
Delay of MNO} & \textbf{D\textsubscript{\textit{backhaul}}}\\ \hline Case 1 & 10 ms & --\\ Case 2 & 1 ms & 500 m \\ Case 3 & 0.1 ms & 18.21 km\\ Case 4 & 0.01 ms & 20.52 km\\ \hline \end{tabular} \end{center} \end{table} We consider two more scenarios with lower uO operator resources. The NF processing delay of the uO is taken as 10 ms and 100 ms, because \( T_{N\!F} \) is inversely proportional to resource availability. Table \ref{table:dis_reg2} shows the maximum distance from the factory to the MNO core for a uO \( T_{N\!F} \) of 10 ms, and Table \ref{table:dis_reg3} shows it for a uO \( T_{N\!F} \) of 100 ms. \begin{table}[ht] \begin{center} \caption{Distance to MNO core when uO \( T_{N\!F} \) = 10 ms } \label{table:dis_reg2} \begin{tabular}{ |c|c|c| } \hline \textbf{Case} & \textbf{NF Proc. Delay of MNO} & \textbf{D\textsubscript{\textit{backhaul}}}\\ \hline Case 1 & 100 ms & --\\ Case 2 & 10 ms & 500 m \\ Case 3 & 1 ms & 231.92 km\\ Case 4 & 0.1 ms & 255.07 km\\ \hline \end{tabular} \end{center} \end{table} \begin{table}[ht] \begin{center} \caption{Distance to MNO core when uO \( T_{N\!F} \) = 100 ms } \label{table:dis_reg3} \begin{tabular}{ |c|c|c| } \hline \textbf{Case} & \textbf{NF Proc. Delay of MNO} & \textbf{D\textsubscript{\textit{backhaul}}}\\ \hline Case 1 & 1000 ms & --\\ Case 2 & 100 ms & 500 m \\ Case 3 & 10 ms & 2314.78 km\\ Case 4 & 1 ms & 2546.21 km\\ \hline \end{tabular} \end{center} \end{table} When the uO performance is low, the MNO can establish the core network at larger distances and still provide the same latency as the uO. However, this is unrealistic, as uOs provide case specific services, meaning that the uO performance is tailored for the specific service. Moreover, the processing delay in 5G networks will be in the $\mu$s rather than the ms range in order to support low latency applications.
Hence, we can consider the uO \( T_{N\!F} \) = 10 ms and uO \( T_{N\!F} \) = 100 ms scenarios as unrealistic. \subsection{E2E Data Transfer} Here, we simulate the data transfer process from the AR device to the image processing server and back. According to~\cite{automation}, this communication should be completed within 50 ms in order to avoid cyber-sickness. Therefore, we identify the distance to the core network that satisfies the latency requirement in an MNO served factory. We consider the NF processing delay of the uO to be 1 ms and that of the MNO to be 0.1 ms. The results of the experiment are illustrated in Figure \ref{fig_lat_data}. The threshold latency of 50 ms and the uO performance with the core network at 500 m are also depicted in the same figure for comparison. The results show that if the MNO is to meet the E2E latency requirement stated in~\cite{automation}, its core network should be located within approximately 92 km of the factory. This requirement is difficult to satisfy when the ten factories are located in diverse geographical areas. Further, the uO achieves far better performance than the MNO even though the NF processing delay of the uO is 10 times higher than that of the MNO, making the uO the favorable implementation option. \begin{figure}[ht] \centering \includegraphics[width=0.47\textwidth]{latencydata.jpg} \caption{E2E latency of data transfer between AR device and server } \label{fig_lat_data} \end{figure} Similar to the registration process, we can express the E2E latency of the data transfer process as \begin{equation} L_{dat} = k_{1} \cdot T_{access} + k_{2} \cdot T_{backhaul} + k_{3} \cdot T_{N\!F} + T_{server} \end{equation} where \( L_{dat} \) is the E2E latency of the data transfer process and \( T_{server} \) is the aggregated delay at the image processing server.
For a given \( L_{dat} \), we can calculate the distance to the MNO core network as \begin{equation} D_{backhaul} = \frac{L_{dat} - k_{1}\:.T_{access} - k_{3}\:.T_{N\!F} - T_{server} }{k_{2}\:.k_{4}} \end{equation} As explained in Section \ref{ARR}, we consider the four different resource levels at the MNO and observe how far the MNO can move its core network away from the factory premises while still providing the same E2E latency as the uO. Results of this experiment are outlined in Table \ref{table:dis_dat}. As in the registration process, if the uO and the MNO have the same resources, the MNO cannot match the E2E latency supported by the uO. When the MNO resources are 10x those of the uO, the MNO provides a similar E2E latency with a core network distance of 500 m. With higher resource levels, the MNO gains only a slight advantage in core network distance: 9.5 km by increasing the resources from 10x to 100x, and a further 1 km by increasing its resource level from 100x to 1000x of the uO, thereby making the MNO a non-favorable choice. \begin{table}[ht] \begin{center} \caption{Distance to MNO core when uO \( T_{N\!F} \) = 1 ms } \label{table:dis_dat} \begin{tabular}{ |c|c|c| } \hline \textbf{Case} & \textbf{NF Proc. Delay of MNO} & \textbf{D\textsubscript{\textit{backhaul}}}\\ \hline Case 1 & 10 ms & --\\ Case 2 & 1 ms & 500 m \\ Case 3 & 0.1 ms & 9.5 km\\ Case 4 & 0.01 ms & 10.4 km\\ \hline \end{tabular} \end{center} \end{table} As a second scenario, we take a uO with \( T_{N\!F} \) = 10 ms and obtain the E2E latency measurement for the data transfer process; the result is 52.1 ms. Since the E2E latency required by the AR use case is 50 ms, this scenario is not realistic, and we do not further analyze the MNO core network location for it.
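The latency-budget inversion above can be sketched in code. The following is a minimal Python illustration; the coefficient values $k_1$--$k_4$ and the component delays are assumptions chosen for the example, not the paper's measured parameters ($k_4$ is taken as roughly 5 $\mu$s of propagation delay per km, close to the speed of light in fibre):

```python
# Hedged sketch of the latency-budget inversion. The coefficients k1..k4 and
# the component delays below are illustrative assumptions, not the paper's
# measured values; k4 converts backhaul distance into propagation delay
# (about 5 us per km, roughly the speed of light in fibre).
def max_backhaul_distance(l_dat, t_access, t_nf, t_server, k1, k2, k3, k4):
    """Invert L_dat = k1*T_access + k2*T_backhaul + k3*T_NF + T_server,
    with T_backhaul = k4 * D_backhaul (all delays in seconds, D in km)."""
    return (l_dat - k1 * t_access - k3 * t_nf - t_server) / (k2 * k4)

# 50 ms end-to-end budget for the AR use case, with assumed traversal counts.
d_km = max_backhaul_distance(l_dat=50e-3, t_access=2e-3, t_nf=0.1e-3,
                             t_server=10e-3, k1=2, k2=2, k3=10, k4=5e-6)
```

Plugging the resulting distance back into the latency expression recovers the 50 ms budget, which serves as a quick consistency check on the inversion.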
\section{Conclusions} \label{sec:conclusions} The novel concept of the micro-operator~(uO) enables a versatile set of stakeholders to operate 5G networks within their premises with guaranteed quality and reliability, complementing traditional Mobile Network Operator (MNO) offerings. In this paper, we analyzed the feasibility and performance advantage of using a uO instead of an MNO in a smart factory environment supporting Industry 4.0 standards. We proposed a novel uO architecture based on 5G standards and customized for the factory environment. The architecture was discussed in terms of the network functions and operational units that constitute the core and access networks in a smart factory. To evaluate the conceptual design, we conducted several experiments for an Augmented Reality~(AR) use case that will be heavily used in future factories. The experiments revealed that a local 5G network established by the uO within the factory premises can deliver lower end-to-end latency for the AR use case than an MNO-provided 5G network whose core network is located outside the factory premises. The end-to-end latency increases significantly with the distance between the core network and the factory. The MNO can reduce the end-to-end latency by establishing the core network in close proximity (less than 92 km) to the factory or even inside it; however, such deployments are unrealistic because an MNO usually serves multiple geographically distributed factories. Alternatively, the MNO can reduce the end-to-end latency by increasing the computational resources at the core network, thereby reducing the processing time of the core network functions. However, the advantage the MNO can gain by adding resources is comparatively small, because the dominant contribution to the end-to-end latency is the propagation delay over the distance to the core network.
In a 5G uO-served factory, the AR data stream stays within the factory premises because the core network resides in a confined environment. This ensures more secure communication between the AR devices and the server. In future work, we will consider further Industry 4.0 use cases, such as mobile robots and alarm management using sensor networks, to study the impact of the uO architecture. \addtolength{\textheight}{-12cm} \section*{Acknowledgement} This work is supported by Business Finland in the uO5G project and by the Academy of Finland in the 6Genesis Flagship (grant no. 318927). \bibliographystyle{IEEEtran}
\section{Introduction} Classical antennas are devices that transform radio waves propagating in free space into waves in a guiding device and vice versa \cite{balan}. The radiative properties of such antennas are characterized by the angular distribution of the radiated field and intensity (field and power radiation patterns). In the case of classical radiation, the power radiation pattern formed by interference effects is equal to the squared modulus of the field radiation pattern divided by the doubled characteristic impedance of the medium \cite{balan}. However, this simple picture does not hold for non-classical states of emitted radiation. For example, the field radiation pattern can vanish, whereas the power radiation pattern has a finite non-zero value. Generally, higher-order correlation functions of the non-classical field cannot be expressed through lower-order ones. Recent progress in nanofabrication has opened the way for the design and implementation of nanoantennas operating in the terahertz, infrared, and visible spectral ranges \cite{biag,novot,alu,greffer}. In spite of the quantum origin of charge-carrier transport inside the antennas, the emitted field was commonly treated as classical. However, the use of the quantum properties of light and the generalization of the concept of antennas to the quantum case \cite{boag2013,mokhl,slep2010,slep2012,slep2016} open far richer possibilities for controlling and shaping the emitted field (\textit{e.g.} directive light squeezing via antenna emission \cite{slep2016}). Note that light squeezing can be achieved not only by arranging emitters, but also by engineering the initial state of the antenna. It is well known that an entangled state of emitters can lead to entanglement of the emitted photons (\textit{i.e.} the state of the field can be mapped into the emitters' state and vice versa). This effect was suggested as a basis for a quantum memory device capable of storing entangled states of light \cite{gisin2011,zuk2012}.
Furthermore, entanglement of emitters in antennas can lead to intensity distributions otherwise impossible to reach with the factorized initial states of the antenna emitters \cite{agarwal2011}, to sub-Rayleigh imaging and superresolution \cite{boto,rozema}, as well as to superbunching \cite{agarwal2015b}. Until now, quantum features in the field emitted by an antenna were mostly considered for some well-known initial states, independently of the actual antenna geometry (\textit{e.g.}, symmetric Dicke states were usually considered \cite{agarwal2011}). On the other hand, so-called ``timed'' Dicke states bear information about the location of emitters \cite{scully2006} and provide a special quantum mechanism that introduces a non-reciprocity of the antenna \cite{boag2013}. Here, we introduce a method to design an initial state in a quantum antenna in order to shape the emitted field correlation functions. The non-classicality of the antenna's radiation is revealed through a measurement of the higher-order correlation functions. Note that such a measurement constitutes a convenient imaging tool \cite{cassano} that makes it possible to reach superresolution \cite{oppel,classen,supertwin}. The approach introduced here is similar to the one usually implemented in quantum state tomography. In the same spirit, one can optimize the directivity of the correlation functions of the radiation produced by a quantum antenna. For example, by optimizing the second-order correlation function of the two-particle entangled state of an equispaced linear antenna array, we can produce photon pairs that are strongly correlated in momentum. Interestingly, we find that both co-directional and contra-directional correlations are possible for the same spatial antenna design, but with different initial states. The same approach is also valid for multi-particle antenna states and higher-order correlation functions.
In particular, we show that some initial states lead to a strong suppression of the radiation in the far-field zone, reproducing the classical effect of a ``non-radiative source'' \cite{balan,wolf}. Additionally, we show that in some cases, the quantum correlations of the antenna field can be captured with a semiclassical model of the emitter-field interaction. The outline of the paper is as follows. In the second Section, the antenna model is introduced. In the third Section, we describe the procedure for designing the antenna state with the required correlation functions. In the fourth Section, the long-time field state is considered to provide guidelines for field shaping. The fifth Section discusses the example of co- and contra-directional twin-photon propagation, and the sixth Section considers the suppression of the far-field radiation. Finally, the seventh Section discusses an application of a semiclassical approach to the description of the field emitted by a quantum antenna. \section{Antenna model} As a model system for a quantum antenna, we consider a chain of $N$ identical non-interacting two-level emitters with the same dipole moments ${\vec d}$ positioned along the same axis at points ${\vec R}_j$ (see Fig.~\ref{fig1}).
Omitting the time-dependence factor, which is common for all emitters, the positive-frequency field operator part that gives non-zero contribution to the normally ordered correlation functions and describes the spatial field distribution at the point ${\vec r}$ in the far field zone, reads: \begin{eqnarray} \vec{E}({\vec r})\propto A({\vec r})=\sum\limits_{j=1}^N\frac{{\vec n}\times[{\vec n}\times {\vec d}]}{|{\vec r}|} \exp\{i\omega(|{\vec R}_j-{\vec r}|/c)\}\sigma^-_j, \label{arop} \end{eqnarray} where $\sigma^-_j=|-_j\rangle\langle +_j|$ is the lowering operator for the $j$-th two-level system (TLS) with upper(lower) levels described by the vectors $|\pm_j\rangle$ , and ${\vec n}$ is the unit vector from an emitter to the observation point; $\omega$ is the TLS transition frequency. For what follows, we label the right-hand side in Eq. (\ref{arop}) as the array factor operator $A({\vec r})$. Generically, the design of an antenna consists in finding the positions ${\vec R}_j$ of individual TLS elements, and in defining the initial density matrix of the antenna $\rho$ in a way to achieve the required values of simultaneous correlation function of the order $n$ in some sets $\{l\}$ of directions $\{{\vec r}_{k,l}\}$, $k=1\ldots n$: \begin{equation} G^{(n)}({\vec r}_{1,l}\ldots{\vec r}_{n,l})=\langle\left[\prod\limits_{k=1}^nA({\vec r}_{k,l})\right]^{\dagger}\prod\limits_{k=1}^nA({\vec r}_{k,l})\rangle. \label{g} \end{equation} Thus, the index $l$ in Eq. (\ref{g}) labels different spatial arrangements of the $n$ detectors. These functions can be measured by placing photon detectors in given directions and by recording the coincident counts (for example, the scheme for measuring $G^{(2)}$ is depicted in Fig.~\ref{fig1}). Note that for a conventional classical antenna, the radiation pattern and all correlation functions are entirely defined by the average field amplitude $\langle E({\vec r})\rangle$. However, in the quantum case the situation is different. 
For example, for all TLSs being either in the excited or ground state, $\langle E({\vec r})\rangle=0$ for an arbitrary ${\vec r}$, whereas one can have $G^{(n)}\neq 0$. \begin{figure}[htb] \includegraphics[width=0.75\linewidth]{Fig1new3.pdf} \caption{Schematics of an equispaced linear array antenna. The second-order correlations can be detected by measuring simultaneous counts at detectors $D_1$ and $D_2$. The red arrows represent the TLS dipole moments of the antenna, which are identical for all TLSs.} \label{fig1} \end{figure} \begin{figure}[htb] \includegraphics[width=0.75\linewidth]{OptimizationLimits20n.pdf} \caption{An example of accessible regions of target probabilities for shaping $G^{(2)}$ by the antenna of $N=20$ equidistant TLSs depicted in Fig.~\ref{fig1} for the angles $\cos\theta_1 = 0$, $\cos\theta_2 = 0.05$ and two-excitation pure states of the form given in Eq. (\ref{psi1}). The dashed curve delimits the region of available probabilities $p(\theta_1,\theta_2)$ versus $p(\theta_1,\theta_1)$; the solid line delimits the region of available $p(\theta_2,\theta_2)$ versus $p(\theta_1,\theta_1)$. The targeted probabilities $p_l$ are normalized by $p_l^{high}$.} \label{fig1b} \end{figure} \section{State estimation for antennas} The problem of antenna state design can be formulated as a state estimation problem in the following way. We specify a finite number of discrete sets of spatial observation points for which we will perform the antenna design $\{\vec r_{k,l}\}$, and re-write Eq.(\ref{g}) as: \begin{equation} p_l=Tr\{\Pi_{l}\rho\}, \quad \Pi_{l}\propto \left[\prod\limits_{k=1}^nA({\vec r_{k,l}})\right]^{\dagger}\prod\limits_{k=1}^nA({\vec r}_{k,l}), \label{prob} \end{equation} where the operators $\Pi_{l}$ are semi-positive definite and can be considered as elements of a POVM (positive operator valued measure), while $p_l$ can be considered as the set of targeted probabilities.
Generally, $\Pi_{l}$ can be singular and might not form a complete set required for an unambiguous representation of the antenna state. The visibility operator that comprises all possible arrangements of the detectors is $C_V=\sum\limits_{\forall l}\Pi_{l}$ \cite{mog2006}. This operator defines the subspace of states accessible for measurements. The operator $C_V$ can also be singular and differ from the unity operator. Thus, an exact solution for the density matrix might not exist for some subsets of targeted probabilities. Therefore, here we consider the problem of shaping the correlation function in the following way: we look for the density matrix (estimator) maximizing the probabilities $p_s$ of some subset $\{s\}$, while simultaneously minimizing the other probabilities $p_m$, $m\in \{l\}\diagdown\{s\}$. Assuming that \begin{equation} 0\leq p_l^{low}\leq p_l\leq p_l^{high}, \label{cond} \end{equation} $p_l^{low(high)}$ being the lower(upper) limit of the targeted probabilities, our design problem can be formulated as a minimization of some distance between the target and estimated sets, $D(p_{target},p_{l})$, where the set of the targeted values is $\{p_m^{low},p_s^{high}\}$. In analogy with the directivity problem for a classical antenna \cite{balan}, maximizing the directivity of the quantum antenna can be formulated in the following way: we look for a conditional minimum of $Tr\{C_V\rho\}$ under the conditions (\ref{cond}) and for $\rho\geq 0$. Note that defining an available target range is a semi-definite programming problem of finding $\min(\max)\{p_l\}$ for $\rho\geq 0$. As an example, let us take an antenna in the pure two-excitation state: $\rho=|\psi\rangle\langle\psi|$, where \begin{equation} |\psi\rangle=\sum\limits_{j=2}^N\sum\limits_{m=1}^{j-1}c_{jm}|+_j,+_m\rangle, \label{psi1} \end{equation} with the summation performed over all distinct pairs of indices $(j,m)\in [1,N]$.
The vectors $|+_j,+_m\rangle$ describe the state with the $j$th and $m$th TLSs excited and all other TLSs in the lower state. An example of available targeted regions for the state (\ref{psi1}) in an array with $N=20$ is shown in Fig. \ref{fig1b}. One can see that limitations on possible choices of targeted probabilities can be quite severe. \section{Field-state considerations} To give an intuitive picture of the connection between the state of the antenna and the field correlation functions, let us consider the field of the antenna in the momentum space. Initially, let us assume that the initial antenna state is a product state with the first $M$ emitters fully excited and the remaining $N-M$ emitters in the ground state. For time intervals much longer than the inverse decay rate $\gamma^{-1}$ of the excited state, the field disentangles from the emitters and can be written as \cite{scullybook}: \begin{eqnarray} |\Psi\rangle \propto \int\prod\limits_{j=1}^M\Bigl[ d^3{\vec k}_ja_j^{\dagger}({\vec k}_j)V({\vec k}_j)\Bigr]\Phi(\{{\vec k}_j\},\{{\vec R}_j\}) |vac\rangle, \label{psi5} \end{eqnarray} where the function \[ V({\vec k}_j)=\frac{\sqrt{w(\vec{k_j})}{\vec d}{\vec e}(\vec{k_j})}{w(\vec{k}_j)-\omega+i\gamma/2} \] does not depend on the positions of the emitters. The function \begin{equation} \Phi(\{{\vec k}_j\},\{{\vec R}_j\})=\exp{\{-i\sum\limits_{j=1}^M\vec{k}_j\vec{R}_j\}}, \label{psi2} \end{equation} describes the relative phase shifts introduced by the locations of the TLSs and the detectors. Here $a_j^{\dagger}$ is the creation operator for the mode with momentum $\vec{k}_j$, frequency $w(\vec{k}_j)$ and polarization vector ${\vec e}(\vec{k_j})$; $|vac\rangle$ is the vector of the field vacuum and $\omega$ is the TLS transition frequency. Eqs. (\ref{psi5},\ref{psi2}) give a hint for understanding the mechanism of $G^{(2)}$ shaping. Let us take again, for example, the simple two-excitation pure state (\ref{psi1}) with $c_{jm}=\delta_{m,N+1-j}/\sqrt N$.
For such an initial state of the antenna, the wave function of the emitted field state is of the form (\ref{psi5}) with: \begin{equation} \begin{gathered} \Phi(\{{\vec k}_j\},\{{\vec R}_j\})=\frac{1}{\sqrt N}\sum\limits_{m=1}^N\exp{\{-im(\vec{k}_1-\vec{k}_2)\vec{\Delta}\}} \\ {}\times\exp{\{-i(\vec{k}_1+\vec{k}_2)\vec{R}_0+i(\vec{k}_1-\vec{k}_2)\vec{\Delta}(N+1)/2\}} \\ {} = \exp\{-i(\vec k_1 + \vec k_2) \vec R_0\} \frac{\sin\{N(\vec k_1 - \vec k_2) \vec \Delta / 2\}}{\sqrt N \sin\{(\vec k_1 - \vec k_2) \vec \Delta / 2\}}, \end{gathered} \label{psi3} \end{equation} where the vector $\vec{\Delta}=\vec{R}_{m+1}-\vec{R}_{m}$ does not depend on $m$, and $\vec{R}_0$ is the vector describing the position of the antenna middle point. The function $|\Phi|$ in Eq. (\ref{psi3}) for $N \gg 1$ has a sharp peak at $(\vec{k}_1-\vec{k}_2)\vec{\Delta}=0$ and tends towards the delta-function $\delta((\vec{k}_1-\vec{k}_2)\vec{\Delta})$ when $N\rightarrow\infty$. This function is not factorable with respect to the momenta $\vec{k}_j$; thus, the state (\ref{psi5}) is entangled in momentum. Hence, one should expect a sharp maximum in the second-order correlation function $G^{(2)}$, corresponding to co-directionally emitted photons. Similarly, Eq. (\ref{psi2}) points to the possibility of emitting multi-photon momentum-entangled states and of shaping higher-order correlation functions even using the simplest linear array antenna of Fig. \ref{fig1}. Indeed, a superposition of at least two different sets of initially excited antenna TLSs leads to a non-factorability of the function $\Phi(\{{\vec k}_j\},\{{\vec R}_j\})$ and thus to momentum entanglement of the wave-function (\ref{psi5}). Let us demonstrate this effect with the example of a three-photon state with the initial antenna state $|\psi\rangle = \frac{1}{\sqrt{N-2}}\sum\limits_{j=1}^{N-2}|+_{j},+_{j+1},+_{j+2}\rangle$.
We obtain \begin{equation} \begin{gathered} \Phi_l(\{{\vec k}_j\},\{{\vec R}_j\})=2 \exp\{-i(\vec k_1 + \vec k_2 + \vec k_3) \vec R_0\} \\ {} \times \frac{\sin\{(N-2)(\vec k_1 + \vec k_2+\vec k_3) \vec \Delta / 2\}}{\sqrt {N-2}\,\sin\{(\vec k_1 + \vec k_2 + \vec k_3) \vec \Delta / 2\}}\\ {} \times \Bigl(\cos\{\frac{(\vec k_1 - \vec k_2) \vec \Delta}{2}\}+ \cos\{\frac{(\vec k_1 - \vec k_3) \vec \Delta}{2}\} \\ {} + \cos\{\frac{(\vec k_2 - \vec k_3) \vec \Delta}{2}\} \Bigr) \end{gathered} \label{psi4} \end{equation} with $|\Phi|$ not factorable and approaching the delta-function $\delta((\vec{k}_1+\vec{k}_2+\vec{k}_3)\vec{\Delta})$ for $N\rightarrow\infty$. The correlation function $G^{(3)}$ is sharply peaked for angles satisfying the condition $(\vec{k}_1+\vec{k}_2+\vec{k}_3)\vec{\Delta}=0$. \section{Example I: Directional two-photon emission} \label{examples} To demonstrate the feasibility of our antenna design approach, we apply it to the simple case of twin-photon generation by the linear antenna in Fig.~\ref{fig1} with initial states (\ref{psi1}). We aim to find the coefficients $c_{jm}$ that provide a desired spatial pattern of the second-order correlation function $G^{(2)}$. Taking into account considerations from the previous section, we consider the optimization of $G^{(2)}$ for twin-photon emission from a finite-length linear array antenna. \paragraph{Co-directional two-photon emission.} From Eqs.
(\ref{arop},\ref{g}) the second-order correlation function in the plane perpendicular to the orientation of the dipoles is given by \begin{equation} \label{g2} \begin{gathered} G^{(2)}(\theta_1,\theta_2)\propto p(\theta_1, \theta_2) =Tr\{\Pi(\theta_1,\theta_2)\rho\}, \\ \Pi(\theta_1,\theta_2)=\sum\limits_{j,m,n,q}\exp\{ik\Delta(j-n)\cos(\theta_1)\}\times\\ \exp\{ik\Delta(m-q)\cos(\theta_2)\}\sigma_{j}^+\sigma_{m}^+\sigma_{n}^{-}\sigma_{q}^{-}, \end{gathered} \end{equation} where $k$ is the wave-number and $\Delta$ is the distance between the dipoles; $\theta_{1,2}$ are the angles in the direction of the detectors. Optimization of the antenna directivity for this case can be formulated as a quadratic programming problem of minimizing the average visibility operator $\langle\psi|C_V|\psi\rangle$ subject to the conditions (\ref{cond}). Let us aim, for example, to obtain the co-directional correlation of emitted photons, \textit{i.e.} a $G^{(2)}$ pattern sharply peaked for $\theta_1=\theta_2$. Fig. \ref{fig2}(a,b) shows the results of such optimization for $k \Delta = 2$. The optimization was done by minimizing the weighted sum of the average visibility operator $\langle\psi|C_V|\psi\rangle$ and the quadratic distance between the actual values of $p(\theta_i, \theta_i; c_{jm})$ and the targeted value $p_0$ for 100 discrete angles $\theta_i$ in the range $[0, \pi]$. For the TLS number $N$ varying from 2 to 10, the problem was solved for the general case of complex coefficients $c_{jm} \in \mathbb{C}$. However, the imaginary parts of the optimal solution turned out to be very small in comparison with the real parts of $c_{jm}$. Therefore, for antennas with a larger number of TLSs (we checked up to $N = 20$), these coefficients were assumed to be real: $c_{jm} \in \mathbb{R}$. Indeed, in Fig. \ref{fig2}(a) one can see that $G^{(2)}$ is sharply peaked around equal observation angles. The initial antenna state producing such correlations is shown in Fig. \ref{fig2}(b).
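The optimization just described can be sketched numerically. The following minimal Python illustration uses a small $N$, a coarse angular grid, and arbitrary values of the weight $w$ and target $p_0$ (all illustrative assumptions), with a generic quasi-Newton optimizer standing in for whatever solver was actually used:

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch: minimize a weighted sum of the average visibility and the
# squared distance of p(theta_i, theta_i) from a target p0, over real
# symmetric coefficients c_jm. N, the grid, w and p0 are illustrative choices.
N, k_delta, p0, w = 6, 2.0, 4.0, 0.05
th = np.linspace(0.0, np.pi, 41)
j = np.arange(1, N + 1)
ph = np.exp(-1j * k_delta * np.outer(j, np.cos(th)))   # e^{-i k Delta j cos(theta)}
iu = np.triu_indices(N, k=1)                            # independent pairs j < m

def pattern(x):
    c = np.zeros((N, N))
    c[iu] = x
    c = c + c.T                                         # c_jm = c_mj
    c = c / np.linalg.norm(c)                           # enforce state normalization
    phi = ph.T @ c @ ph                                 # Phi(theta_a, theta_b)
    return np.abs(phi) ** 2

def loss(x):
    p = pattern(x)
    # total visibility over all detector arrangements + distance to the target
    return w * p.sum() + np.sum((np.diag(p) - p0) ** 2)

rng = np.random.default_rng(0)
x0 = rng.standard_normal(iu[0].size)
res = minimize(loss, x0, method='L-BFGS-B')             # numerical gradient
```

The normalization inside `pattern` keeps the search on physical (unit-norm) states; the optimizer then only improves the objective relative to the random starting point.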
Remarkably, in accordance with the field-state considerations of the previous section, we have obtained that $c_{j,N+1-j}\approx1/\sqrt N$, while all other coefficients are much smaller. The initial state corresponds to excited pairs of the dipoles located symmetrically on the opposite sides of the antenna (\textit{e.g.}, the first and the last one, the second and the $(N-1)$-th, etc.). Fig. \ref{fig2}(c,d) shows the optimization results for a larger distance between the dipoles, \textit{i.e.}, $k \Delta = 10$. The radiation pattern is still sharply peaked around $\theta_1 = \theta_2$, but in contrast to the previous example, each emission direction of one photon is correlated with several possible emission directions of the second photon. To elucidate the origin of this pattern, we rewrite Eq. (\ref{g2}) as: \begin{equation} \begin{gathered} p(\theta_1, \theta_2) = \langle \psi | \Pi(\theta_1, \theta_2)|\psi\rangle = |\Phi(\theta_1,\theta_2)|^2 \\ {} \equiv \left|\sum_{j,m=1}^N c_{jm} \exp\{-ik\Delta(j \cos\theta_1 + m \cos\theta_2)\} \right|^2. \end{gathered} \label{g2pure} \end{equation} Indeed, it can be seen that $\Phi(\theta_1,\theta_2)$ is a periodic function of $\cos \theta_1$ and $\cos \theta_2$, that is $\Phi(\theta_1, \theta_2) = \Phi(\theta_1, \theta_2')$ if $k \Delta (\cos \theta_2 - \cos \theta_2') = 2 \pi n$ with integer $n$. \begin{figure}[htb] \begin{tabular}{cc} (a) & (b) \\ \includegraphics[width=0.45\linewidth]{OptimizationDiagonal2_signal.pdf} & \includegraphics[width=0.45\linewidth]{OptimizationDiagonal2_state.pdf} \\ (c) & (d) \\ \includegraphics[width=0.45\linewidth]{OptimizationDiagonal2_signal10.pdf} & \includegraphics[width=0.45\linewidth]{OptimizationDiagonal2_state10.pdf} \\ \end{tabular} \caption{(a), (c) The normalized $G^{(2)}(\theta_1,\theta_2)$ (\textit{i.e.} $p(\theta_1,\theta_2)$ of Eq. (\ref{g2})) obtained as the result of optimization for the generation of two co-directional photons; (b), (d) states in Eq.
(\ref{psi1}) obtained as the results of optimization. For all panels, $N=20$; for panels (a,b) $k \Delta = 2$, for panels (c,d) $k \Delta = 10$. Dashed lines divide the plot in panel (c) into equivalent regions due to the periodicity of the signal for $k \Delta > \pi$.} \label{fig2} \end{figure} \paragraph{Contra-directional two-photon emission.} By tailoring the initial quantum antenna state, one can also achieve contra-directional correlations between emitted photons. The optimization results for the radiation pattern with $G^{(2)}$ sharply peaked around $\theta_2 = \pi - \theta_1$ are shown in Fig.~\ref{fig3}(a), attesting to strong contra-directional correlations. The optimal initial state of the antenna (Fig.~\ref{fig3}(b)) shows a peculiar structure of the matrix $c_{jm}$ describing the state (\ref{psi1}). This matrix is composed of sets of the coefficients with equal amplitudes on each sub-diagonal, \textit{i.e.} coefficients $c_{j,j\pm l} = c_l$ do not depend on the index $j$. Once again, this feature can also be explained using field-state considerations on the basis of Eqs.(\ref{psi5},\ref{psi2}) in the following way. Let us consider the contribution from just one sub-diagonal of the matrix $c_{j,j+l}$ with index shift $l$ (\textit{i.e.} we assume $c_l = 1/ \sqrt{N-l}$ and $c_{l'}=0$ for all other sub-diagonals with $l'\ne l$). The wave function of the emitted field state is described by Eq.(\ref{psi5}) with:
\end{gathered} \label{psi3b} \end{equation} For $N\rightarrow \infty$ and finite $l$ the function $|\Phi_l|$ asymptotically tends toward the delta-function $\delta ((\vec k_1 + \vec k_2)\vec \Delta)$, which corresponds to an entangled two-photon state with strong contra-directional correlations. However, in contrast to the previous example in sub-Section V.a, the absolute value of the wave function varies along the line $\vec k_2 \vec \Delta = - \vec k_1 \vec \Delta$ ($\theta_2 = \pi - \theta_1$) as $\cos (l k \Delta \cos \theta_1)$. In order to obtain a $G^{(2)}$ pattern with even contra-directional correlations as shown in Fig.~\ref{fig3}(a), one needs to combine several sub-diagonal sets $c_{j,j\pm l} = c_l$ with different sub-diagonal numbers $l$. Fig.~\ref{fig3}(c) shows the result of such a combination for three sets with $l=1$, 2, 3 and relative amplitudes $c_1 : c_2 : c_3 = 1 : {-0.7} : 0.4$. The state shows strong contra-directional correlations across the full range of angles (see the main diagonal in the pattern of Fig.~\ref{fig3}(c)), but it still gives a less even and less sharply directed pattern of $G^{(2)}$ than the state found by numerical optimization in Fig.~\ref{fig3}(b). \paragraph{Maximal directivity of emission.} As a particular example one can consider a state with the maximal directivity of two-photon emission in the direction perpendicular to the linear array antenna ($\theta_1 = \theta_2 = \pi / 2$). The radiation pattern for the numerically optimized state is shown in Fig.~\ref{fig3}(d). As one would expect, the optimal state is close to the symmetric two-excitation Dicke state with $c_{jm} = const$ for all indices $j$ and $m$.
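The co- and contra-directional patterns discussed in this section can be reproduced directly from Eq. (\ref{g2pure}). The following is a minimal Python check (the grid sizes are illustrative, and the contra-directional state uses a single symmetrized sub-diagonal with $l=1$):

```python
import numpy as np

# Hedged numerical check of Eq. (g2pure): the anti-diagonal state
# c_{j,N+1-j} = 1/sqrt(N) peaks along theta1 = theta2, while a symmetrized
# nearest-neighbour sub-diagonal state peaks along theta2 = pi - theta1.
N, k_delta = 20, 2.0
j = np.arange(1, N + 1)
th = np.linspace(0.0, np.pi, 201)            # th[100] = pi/2

def g2_map(c):
    # Phi(th1, th2) = sum_{j,m} c_jm exp{-i k Delta (j cos th1 + m cos th2)}
    ph = np.exp(-1j * k_delta * np.outer(j, np.cos(th)))
    phi = ph.T @ c @ ph
    return np.abs(phi) ** 2                  # p(th1, th2) of Eq. (g2pure)

# co-directional: excited pairs (j, N+1-j), as found by the optimization
c_co = np.zeros((N, N))
c_co[np.arange(N), N - 1 - np.arange(N)] = 1.0 / np.sqrt(N)
p_co = g2_map(c_co)                          # equals N exactly on theta1 = theta2

# contra-directional: one symmetrized sub-diagonal, c_{j,j+1} = c_{j+1,j}
c_sub = np.zeros((N, N))
c_sub[np.arange(N - 1), np.arange(1, N)] = 1.0
c_sub = (c_sub + c_sub.T) / np.linalg.norm(c_sub + c_sub.T)
p_sub = g2_map(c_sub)                        # global maximum at theta1 = theta2 = pi/2
```

For the anti-diagonal state the pattern takes its maximal value $N$ everywhere on the line $\theta_1=\theta_2$, while for the single sub-diagonal state the global maximum sits at $\theta_1=\theta_2=\pi/2$, which lies on the contra-directional line $\theta_2=\pi-\theta_1$.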
\begin{figure}[htb] \begin{tabular}{cc} (a) & (b) \\ \includegraphics[width=0.45\linewidth]{OptimizationDiagonal_signal.pdf} & \includegraphics[width=0.45\linewidth]{OptimizationDiagonal_state.pdf} \\ (c) & (d) \\ \includegraphics[width=0.45\linewidth]{OptimizationDiagonal_signal_analyt.pdf} & \includegraphics[width=0.45\linewidth]{OptimizationPoint_signal.pdf} \\ \end{tabular} \caption{(a), (c) The normalized $G^{(2)}(\theta_1,\theta_2)$ obtained as the result of numerical and analytical optimization for the generation of two contra-directional photons, respectively; (b) the state in Eq. (\ref{psi1}) obtained as the result of numerical directivity optimization. (d) The normalized $G^{(2)}(\theta_1,\theta_2)$ obtained as the result of optimization for the generation of two photons emitted in a direction perpendicular to the antenna. For all panels, $N=20$; $k \Delta = 2$.} \label{fig3} \end{figure} \section{Example II: ``Dark'' states and antenna design} \paragraph{Finding the ``dark state''.} Localization of the emitted field inside a finite volume is something that one would really expect for such exquisitely designed objects as 3D photonic crystals and meta-material structures \cite{gaponenko,shalaev}. With classical antennas, one can specifically design such distributions of currents so as to obtain the same effect, that is to create a ``non-radiative source'' \cite{balan,wolf}. Counterintuitively, this effect can also be achieved in a simple regular linear antenna array by choosing the appropriate initial quantum state of the antenna. Just by minimizing $G^{(2)}(\theta_1,\theta_2)$ for all angles $\theta_1$ and $\theta_2$, one obtains the initial state of the antenna leading to a strong field suppression in the far-field zone. Indeed, let us minimize the average visibility operator $\langle\psi|C_V|\psi\rangle$ for the state (\ref{psi1}) without imposing any additional requirements to obtain bright spots or lines in the $G^{(2)}$ patterns.
For $k\Delta < \pi$, such optimization can be successfully performed (Fig.~\ref{fig4}(a)), yielding the maximum $G^{(2)}$ value in Eq. (\ref{g2}) of $3\cdot 10^{-7}$ for $N=20$ and $k\Delta = 2$. Eq.~(\ref{psi3b}) gives a hint on how to design such a state: first, choose broad distributions of the matrix coefficients describing the state (\ref{psi1}) along each sub-diagonal, \textit{i.e.} take $c_{j,j\pm l} \propto f(j - (N+1)/2)$ along sub-diagonals to suppress emission outside the region with $\theta_2 \approx \pi - \theta_1$. For example, a Gaussian distribution of the matrix coefficients $f(m)=e^{-m^2/\sigma^2}$ leads to $k\Delta|\cos\theta_1 + \cos \theta_2| \lesssim 2 / \sigma$. Then, one should suppress the emission along the diagonal using an appropriate combination of $\cos\{l k \Delta (\cos \theta_1 - \cos \theta_2) / 2\}$ (see Eq.~(\ref{psi3b})). Here, one can find an approximate analytical expression surprisingly close to the optimal state numerically found in Fig.~\ref{fig4}(a), \textit{i.e.}: \begin{equation} c_{jm}\propto (-1)^l l^2 \exp\{-(l^2+q^2) / (4 \sigma^2)\}, \label{dark} \end{equation} where $l = |j - m|$, $q = (j + m) - (N + 1)$, and $\sigma \approx 3.2$. It is worth mentioning that for $k\Delta > \pi$ one cannot design such a ``dark" state. By introducing dimensionless variables $x_j = k\Delta \cos \theta_j$, $j = 1,2$, $x_j \in [-k\Delta, k\Delta] \supset [-\pi, \pi]$, one can easily see that the following lower bound holds: \begin{equation} \begin{gathered} \int\limits_{-k\Delta}^{k\Delta} dx_1 \int\limits_{-k\Delta}^{k\Delta} dx_2 p(\theta_1, \theta_2) \ge \int\limits_{-\pi}^\pi dx_1 \int\limits_{-\pi}^\pi dx_2 p(\theta_1, \theta_2) \\ {} = 2 \pi^2 \sum_{j=2}^{N} \sum_{m=1}^{j-1} |c_{jm}|^2 = 2 \pi^2, \end{gathered} \label{norm} \end{equation} where the function $p$ is defined by Eq.~(\ref{g2}) and the normalization of the state (\ref{psi1}) is taken into account. 
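The analytic approximation of Eq. (\ref{dark}) can be verified numerically. A minimal Python sketch follows (the grid resolution and the comparison threshold are illustrative choices, not values from the paper):

```python
import numpy as np

# Hedged check of the analytic "dark" state, Eq. (dark), with N = 20,
# k*Delta = 2, sigma = 3.2 as in the text. The pattern is evaluated both over
# the physically accessible range x = k*Delta*cos(theta) in [-2, 2] and over
# the full period x in [-pi, pi] of the lattice sum.
N, k_delta, sigma = 20, 2.0, 3.2
j = np.arange(1, N + 1)
J, M = np.meshgrid(j, j, indexing='ij')
l, q = np.abs(J - M), (J + M) - (N + 1)
c = (-1.0) ** l * l ** 2 * np.exp(-(l ** 2 + q ** 2) / (4 * sigma ** 2))
c = c / np.linalg.norm(c)                    # l = 0 terms vanish automatically

def g2_map(x):
    # |Phi|^2 with Phi = sum_{j,m} c_jm exp{-i (j x1 + m x2)}
    ph = np.exp(-1j * np.outer(j, x))
    return np.abs(ph.T @ c @ ph) ** 2

p_vis = g2_map(k_delta * np.cos(np.linspace(0.0, np.pi, 201)))
p_full = g2_map(np.linspace(-np.pi, np.pi, 201))
```

The check compares the pattern over the physically accessible range $|x|\le k\Delta$ with the full $2\pi$ period of the lattice sum: for this state, virtually all of the radiated weight sits outside the accessible range, which is the numerical signature of the far-field suppression discussed above.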
Fig.~\ref{fig4}(b) shows the radiation pattern for $k \Delta = 10$ while the initial state is depicted in Fig.~\ref{fig4}(a). One can see in Fig.~\ref{fig4}(b) that it is possible to suppress emission in some regions (blue squares with dashed border in Fig.~\ref{fig4}(b)), but not over the whole range of angles. Note that just one square would represent the total radiation pattern for $k \Delta = 2$. \begin{figure}[htb] \begin{tabular}{cc} (a) & (b) \\ \includegraphics[width=0.45\linewidth]{OptimizationDark_state.pdf} & \includegraphics[width=0.45\linewidth]{OptimizationDark_signal10.pdf} \end{tabular} \caption{(a) The ``dark'' state (\ref{psi1}) obtained as the result of the optimization for $N = 20$, $k\Delta = 2$. (b) The pattern of $\log_{10} G^{(2)}(\theta_1,\theta_2)$ for the state (a), calculated for $k \Delta = 10$. Dashed lines define ``dark'' regions.} \label{fig4} \end{figure} Remarkably, by unconditionally maximizing the mean value of the visibility operator $\langle\psi|C_V|\psi\rangle$ for the state (\ref{psi1}), we can achieve an opposite effect and obtain a nearly homogeneous far-field distribution. \paragraph{Simultaneous state and antenna design.} Interestingly, the field-state considerations also point to the possibility of engineering the initial state and the antenna geometry for a complete 3D suppression of the far field. Indeed, let us take two perpendicular linear antennas with the same dipole moments orthogonal to the antenna plane and randomly located TLSs. We choose the initial state as a superposition of randomly chosen pairs of TLSs from the first and second antennas. Then, the phase factor reads \begin{equation} \Phi(\{{\vec k}_j\},\{{\vec R}_j\})\propto\sum_{\forall j,m}\exp{\{-i(\vec{k}_1\vec{R}_j+\vec{k}_2\vec{R}_m)\}}, \label{psi7} \end{equation} where $j$ and $m$ are the indices of the TLSs from the two antenna arms. For a sufficiently large number of TLSs in the antenna, the phase factor in Eq.
(\ref{psi7}) tends toward zero for all directions $\vec{k}_{1,2}$ except for directions parallel to $\vec{d}$. In this way, the emitted field in the far-field zone is suppressed. However, one should notice that for states predicting a field localization, the antenna approximation of non-interacting emitters might fail. The localized photons might be re-absorbed and re-emitted by the antenna (in Section VII below we outline one possible approach to account for such interactions between emitters). Also, by simultaneously changing the shape and the initial state of the antenna, one can get a high directivity of the correlation function without using the Dicke state as the initial antenna state. Eq.~(\ref{psi3}) hints toward a simple way to obtain co- or contra-directional correlations of emitted photons that are localized in a narrow region in the vicinity of $\pm\pi/2$. More specifically, instead of one regular antenna array shown in Fig.~\ref{fig1}(a), let us consider an antenna composed of two regular linear arrays located on the same axis and each comprising $N$ TLSs. We choose the pitch of TLSs in one sub-antenna to be $u$ times larger than in the other sub-antenna. We consider the initial antenna state (\ref{psi1}) with excited TLS pairs composed of one counterpart from the first sub-array (\textit{e.g.} with the larger pitch) and another counterpart from the second sub-array (\textit{e.g.} with the smaller pitch). In this case the first index of the matrix element $c_{jm}$ enumerates TLSs from the sub-array with the larger pitch while the second index enumerates TLSs from the sub-array with the shorter pitch. As in Section V.a, we then additionally impose a specific symmetry between indices of TLSs located symmetrically on the opposite sides of the antenna arms.
More specifically, we define the initial state by non-vanishing coefficients $c_{j,N+1-j}=1/\sqrt N$, where the index $j$ spans the large-period sub-array (the index $N+1-j$ then spans the short-period sub-array). For our compound antenna composed of two sub-arrays with different pitches, Eqs.~(\ref{psi2},\ref{psi3}) suggest a sharp localization of the emitted photons for $(\vec{k}_1-\vec{k}_2/u)\vec{\Delta}=0$, which can be satisfied for $u\gg1$ only if both $\vec{k}_{1,2}$ are nearly orthogonal to $\vec{\Delta}$. Thus, we can see that a simultaneous design of the antenna geometry and the initial states opens considerably richer possibilities for the optimization of the correlation functions compared to the state design for a pre-defined antenna. However, such a design is, generally, a complicated nonlinear optimization problem. \begin{figure}[htb] \includegraphics[width=0.5\linewidth]{figquantum.pdf} \\ \caption{The normalized quantum correlation function $G^{(2)}(\theta_1,\theta_2)$ for the initial state with $N=3$ giving contra-directional twin-photon correlations. The dipole-dipole distance is $k\Delta=4.5$. } \label{fig5a} \end{figure} \begin{figure}[htb] \begin{tabular}{cc} (a) & (b) \\ \includegraphics[width=0.51\linewidth]{figmirror.pdf} & \includegraphics[width=0.5\linewidth]{figantimirror.pdf} \end{tabular} \caption{The normalized semiclassical correlation functions $G^{(2)}(\theta_1,\theta_2)$ for the initial state with $N=3$ giving contra-directional twin-photon correlations. The semiclassical solution is given by Eq.~(\ref{gp}) and Eqs.~(17) in the Appendix. Panel (a) displays the post-semiclassical solution for correlated, ``mirrored'' noise sources. Panel (b) displays the semiclassical solution for uncorrelated noise sources. The patterns are taken at a specific time and are averaged over 100 realizations (see Appendix). Other parameters are as for Fig.~\ref{fig5a}.
} \label{fig5b} \end{figure} \section{Semiclassical approach} As we have shown, quantum interferences are essential for shaping the correlation functions. Here we show that it is still possible to use a semiclassical approach for modelling the emitter dynamics and, after a minor modification, to reproduce non-classical features of the spatial correlation functions $G^{(n)}$ from Eq.~(\ref{g}) (we have termed this recipe ``the post-semiclassical approximation''). Such a recipe can be developed in spite of the fact that the semiclassical approach is, generally, unable to capture the mechanism of spontaneous emission and effects stemming from it. For example, the creation of entanglement between TLSs decaying into the same radiative reservoir \cite{ficek} can hardly be captured by an approach assuming an absence of quantum correlations between TLSs. Nevertheless, field correlation effects can still be successfully captured in some cases. The best known example is superradiance. The onset of cooperative effects and phase correlations leading to the formation of the superradiant field pulse can be quite accurately described by the semiclassical approach \cite{boiko}. Importantly, the initial state of interacting semiclassical TLSs does not need to be correlated. The correlations self-establish at the initial stage of cooperative emission \cite{benedict}. The key observation enabling us to develop a post-semiclassical approximation is given by Eqs.~(\ref{arop},\ref{g}). They show that by modeling the TLS correlation functions with enough accuracy, one would get an accurate description of the emitted field in the far-field zone. At first glance, modelling correlation functions of non-interacting quantum emitters with correlation functions of interacting semiclassical emitters (notice that interaction is intrinsic for semiclassical models) is hardly possible.
However, it was shown that interacting TLSs can have spatial correlation functions coinciding with the spatial correlation functions of non-interacting TLSs \cite{ficek0}. The task is considerably simplified by requiring closeness of only the spatial correlation patterns for some specific time-intervals. The semiclassical approach in its simplest form assumes a factorization of the correlation functions (\ref{g}) up to the first-order averages, for example, $\langle\sigma^-_j(t_1)\sigma^-_k(t_2)\rangle\approx\langle\sigma^-_j(t_1)\rangle\langle\sigma^-_k(t_2)\rangle$, where the time-dependence of the averages, $\langle\sigma_j^{\pm}\rangle$, is derived from the semiclassical Maxwell-Bloch equations \cite{benedict,boiko} (see also the Appendix). Firstly, let us consider semiclassically the antenna with uncorrelated identical initial states of each TLS. Assuming interacting TLSs, for a sufficiently long antenna ($N\gg1$) one can, \textit{e.g.}, replace the sum $\sum \Pi^{jm}_{nq}(\theta_1,\theta_2)\langle\sigma_{j}^+\sigma_{m}^+\sigma_{n}^{-}\sigma_{q}^{-}\rangle$ in Eqs.~(\ref{g},\ref{prob}) with $\sum \Pi^{jm}_{nq}(\theta_1,\theta_2)\langle\sigma_{j}^+\rangle\langle\sigma_{m}^+\rangle\langle\sigma_{n}^{-}\rangle\langle\sigma_{q}^{-}\rangle$, and finally obtain $\sum \Pi^{jm}_{nq}(\theta_1,\theta_2)|\langle\sigma\rangle|^4$ for the approximation of the correlation function within a relative accuracy of the order of $N^{-2}$, which is essentially a classical radiation pattern \cite{benedict}. However, one can extend the semiclassical approach of an interacting TLS antenna to the consideration of a non-interacting quantum TLS antenna by accounting for commutation relations of TLS operators and correlations between initial state components (this is the backbone of the recipe for ``the post-semiclassical approximation'').
Let us illustrate this concept with our example of the two-excitation initial state giving contra-directional correlations, $c_{jm}=\delta_{m,j+1}/\sqrt{N}$ for the state (\ref{psi1}). For the quantum antenna of non-interacting TLSs, the simultaneous second-order correlation function in the far-field zone for the two-excitation state (\ref{psi1}) reads as \cite{scullybook}: $G^{(2)}(\theta_1,\theta_2;t)=|R(\theta_1,\theta_2,t)|^2$, with \begin{eqnarray} \label{gp} R(\theta_1,\theta_2,t)=\langle vac|\langle -|E(\theta_1,t)E(\theta_2,t)|\psi\rangle|vac\rangle\propto \\ \nonumber \sum\limits_{j=1}^{N-1}\sum\limits_{l=1,2}\exp\{i\phi_j(\theta_l)+i\phi_{j+1}(\theta_{3-l})\}\\ \nonumber \langle -|\sigma_j^-(t)\sigma_{j+1}^-(t)|\psi\rangle, \end{eqnarray} where $E(\theta_1,t)$ is the field operator, $\phi_j(\theta_l)=jk\Delta\cos\theta_l$ and $\langle -|$ is the bra-vector denoting the ground state of all TLSs. We aim to estimate Eq.~(\ref{gp}) semiclassically. Our recipe for this case would be to consider the antenna with interacting TLSs semiclassically for different uncorrelated initial states with a pair of neighbouring TLSs in the excited state and the others in the ground state, such as, \textit{e.g.}, $|+\rangle_1|+\rangle_2\prod\limits_{j=3}^N|-\rangle_j$. Then, we would sum the results for all the initial states with phase factors given by Eq.~(\ref{gp}), replacing $\langle -|\sigma_j^-(t)\sigma_{j+1}^-(t)|\psi\rangle$ with $\langle\sigma_j^-(t)\rangle\langle\sigma_{j+1}^-(t)\rangle$. Note that the radiation in the semiclassical approach is assumed to be initiated by a random polarization noise source. Such an approach can lead to spatial patterns of the semiclassical correlation functions which are quite close to the quantum ones even for a small number of TLSs in the antenna.
This holds under the condition of a specifically correlated noise for the different initial states (the different states with $c_{jm} \neq 0$ in the superposition state (\ref{psi1})), with the aim to reproduce the phase relationships between the parts of the initial superposition state. The details of the semiclassical approach are described in the Appendix. Fig.~\ref{fig5a} shows an example of a quantum pattern of $G^{(2)}(\theta_1,\theta_2)$ for $N=3$ and the initial state $|\psi\rangle\propto|+\rangle_1|+\rangle_2|-\rangle_3+|-\rangle_1|+\rangle_2|+\rangle_3$ (one can easily show that $G^{(2)}(\theta_1,\theta_2)\propto 2+2\cos\{k\Delta(\cos\theta_1-\cos\theta_2)\}$). Fig.~\ref{fig5b} shows examples of post-semiclassical patterns of $G^{(2)}(\theta_1,\theta_2)$ obtained with the same initial state $|\psi\rangle$ at a specific time (see Appendix). Patterns are averaged over 100 realizations with correlated (a) and uncorrelated (b) noise patterns. As shown in Fig.~\ref{fig5a} and Fig.~\ref{fig5b}(a), for the ``mirrored'' realization of polarization noise for different initial states, the $G^{(2)}(\theta_1,\theta_2)$ patterns for the quantum and post-semiclassical cases are identical. The noise is ``mirrored'' when the $j$th TLS in the antenna array for the first initial state and the $N-j$th TLS of the array for the second initial state sense the same noise; see also Fig.~\ref{pulses}(b) in the Appendix. As shown in Fig.~\ref{fig5b}(b), uncorrelated noise sources lead to a semiclassical $G^{(2)}(\theta_1,\theta_2)$ pattern with a conserved position of the maxima compared to the full quantum case, but with distortions inducing a symmetry breaking. So, we have demonstrated that it is indeed possible to use semiclassical antenna models for designing the higher-order correlation functions of the emitted field. The semiclassical approach can potentially serve as a handy modeling tool, since the number of equations to solve scales linearly with the number of TLSs.
\section{Conclusions} We have shown that it is possible to shape the second and higher order correlation functions of the field emitted by a quantum antenna in the far-field zone by designing its initial state. We have proposed an optimization method using constrained linear and non-linear programming. We have demonstrated the feasibility of the method for designing states with two initial excitations. We have found states leading to highly co- or contra-directional emission of photon pairs for the same antenna, or even producing the effect of non-radiating sources by suppressing the field in the far-field zone. We have also shown that a quantum antenna can produce multi-photon momentum-entangled states. Despite the general quantum character of the state expected to produce desired spatial patterns of the correlation functions, we have also demonstrated that one can still use an appropriately modified semiclassical approach for this purpose. We believe that our method for producing patterned higher-order correlation functions of the emitted field can be of importance for imaging and high-precision sensing, as well as for designing an emitter-field interface for quantum information processing \cite{boto,rozema,supertwin,feber,chec}. A. M., D. M., I. K., G. B. and D. B. acknowledge support from the EU project Horizon-2020 SUPERTWIN id.686731, the National Academy of Sciences of Belarus program ``Convergence'', and the BRRFI project F17Y-004. G. Ya. S. acknowledges support from the project FP7-612285 CANTOR. G. B. and D. B. acknowledge Philippe Renevey for advice on coding.
\section{\label{sec:Intro} Introduction} Relativistic heavy ion collisions are investigated both theoretically and experimentally to understand the properties of nuclear matter at extreme conditions. In heavy ion collisions, there is a possibility for the nuclear matter to undergo a phase transition to quark matter. The nature of the phase transition is still not well established. At low baryon chemical potential and high temperature, nuclear matter is expected to smoothly cross over~\cite{Aoki:2006we} to a quark gluon plasma (QGP) phase, whereas at high baryon chemical potential and low temperature the system is expected to have a first order phase transition~\cite{Asakawa:1989bq, Ejiri:2008xt, Bowman:2008kc}. Therefore, the first-order phase transition at high baryon chemical potential and low temperature should end at a critical end-point (CEP) as one moves towards the high temperature and low baryon chemical potential region in the phase diagram of strongly interacting matter~\cite{Halasz:1998qr, Fodor:2004nz, Gavai:2004sd, Stephanov:2004wx}. The main goal of heavy ion collision experiments is to map the quantum chromodynamics (QCD) phase diagram in terms of temperature and baryon chemical potential. One of the main objectives of the beam energy scan (BES) program of RHIC is to investigate the location of the CEP. In the near future, the CBM experiment at FAIR will also be involved in such an investigation, along with other studies of strongly interacting matter at high baryon chemical potentials and low temperatures. The event-by-event fluctuations of conserved charges like baryon number, strangeness, and electric charge are sensitive indicators of the transition from hadronic matter to QGP. Moreover, the existence of the CEP can be signalled by divergent fluctuations.
Therefore, a non-monotonic variation of observables related to the cumulants of the distributions of the above mentioned conserved charges with the centre of mass energy ($\sqrt{s_{NN}}$) is believed to be a good signature of a phase transition and a CEP \cite{Asakawa:2000wh, Adamczyk:2013dal}. However, this non-monotonic behaviour is a necessary but not sufficient condition for the CEP. For example, the singularities associated with a first or second order transition in the infinite volume limit may become finite peaks due to finite volume effects. Moreover, due to the finite size of the system in heavy ion collisions, non-monotonic behaviour may be indicative of a pseudo-critical region which is shifted from the actual critical region \cite{Lacey:2014wqa, Ladrem:2004dw, Palhares:2009tf}. It may be expected that with the variation of centrality, keeping $\sqrt{s_{NN}}$ fixed, behaviour similar to that found for the variation of centre of mass energy would be observed. However, the signatures of a phase transition or CEP are detectable only if they survive during the evolution of the system. Several experimental results on conserved charge fluctuations (or cumulants) from the BES program have recently been reported at various energies and centralities \cite{Adamczyk:2013dal, Adamczyk:2014fia, Adare:2015aqk}. However, these data do not show non-monotonic behaviour as a function of $\sqrt{s_{NN}}$. On the other hand, a new analysis of net-proton moments has been reported by the STAR collaboration \cite{Luo:2015ewa}, where the upper $p_T$ coverage for protons and anti-protons has been extended up to 2 GeV using the time of flight (ToF) detector. In this analysis a non-monotonic behaviour of the higher order cumulant ratio ($\kappa\sigma^2$) at lower $\sqrt{s_{NN}}$ has been reported, indicating a probable CEP-like behaviour. Finite system size may also cause this non-monotonic behaviour.
In principle such effects may be estimated from the ratios of cumulants, as discussed in \cite{Bhattacharyya:2015zka} using the hadron resonance gas (HRG) model for illustration. It has been shown that though for net-proton and net-kaon the cumulant ratios are almost volume independent, the cumulant ratios of net-charge are highly sensitive to the system volume. This is mainly due to the contribution of pions, which are extremely light on the hadronic scale. Fluctuations, which are related to the thermodynamic susceptibilities via the fluctuation-dissipation theorem \cite{Kubo}, can be studied using LQCD or models. However, since cumulants are volume dependent, ratios of cumulants are constructed to cancel the volume term; these ratios are related to ratios of susceptibilities of different orders. Therefore, it is possible to extract chemical freeze-out parameters like temperature and chemical potential by comparing experimentally measured ratios of cumulants with ratios of susceptibilities calculated in LQCD or in a model \cite{Borsanyi:2014ewa, Alba:2014eba}. Thus the ratios of cumulants of conserved charges provide important information about the chemical freeze-out parameters, which is useful for locating the CEP in the phase diagram. However, at finite chemical potential, LQCD faces the well-known sign problem. As a result, the region of very high chemical potential in the phase diagram cannot be studied in LQCD at present. Moreover, it is not possible to employ experimental acceptance cuts in LQCD calculations.
On the other hand, the hadron resonance gas (HRG) model \cite{Hagedorn:1980kb, Rischke:1991ke, Cleymans:1992jz, BraunMunzinger:1994xr, Cleymans:1996cd, Yen:1997rv, BraunMunzinger:1999qy, Cleymans:1999st, BraunMunzinger:2001ip, BraunMunzinger:2003zd, Karsch:2003zq, Tawfik:2004sw, Becattini:2005xt, Andronic:2005yp, Andronic:2008gu,Begun:2012rf, Andronic:2012ut, Tiwari:2011km, Fu:2013gga, Tawfik:2013eua, Garg:2013ata, Bhattacharyya:2013oya, Bhattacharyya:2015zka,Kadam:2015xsa, Kadam:2015fza, Kadam:2015dda, Albright:2014gva, Albright:2015uua, Begun:2016cva} provides a simpler framework for the study of strongly interacting matter in the non-perturbative domain. The HRG model is based on the assumption of thermal equilibrium of a system composed of free hadrons and resonances. One may estimate the corresponding chemical freeze-out parameters by fitting the experimental data of various hadronic observables with the HRG model \cite{Cleymans:2005xv,Xu:2001zj,Becattini:2005xt,Andronic:2005yp, Andronic:2009jd, Karsch:2010ck,Chatterjee:2015fua}. Also, the susceptibilities of conserved charges calculated in LQCD have been well reproduced by the HRG model \cite{Karsch:2003zq, Tawfik:2004sw, Andronic:2012ut, Bhattacharyya:2013oya} for temperatures up to 150 MeV. Moreover, the region of large chemical potential in the phase diagram, which can be accessed by low energy heavy ion collisions, can be studied by this model. Since one can incorporate proper experimental acceptances in this model, it can be used to estimate chemical freeze-out parameters by fitting experimental data on the ratios of cumulants of conserved charges. It should be noted, however, that the final parameters are still model dependent. Here we would like to emphasise the salient feature of our present study. If the system becomes thermalised well ahead of freeze-out, then all the observables would carry the signature of thermalisation.
In such a scenario the observed hadrons should have a clear thermodynamic equilibrium distribution. Therefore a thermal model like the HRG would show a very good agreement with the data to all orders. Any deviation from this scenario may point towards a more complex system, and our attempt here is to find such discrepancies in the parametrisation of the HRG model from various experimental data and to gain some insight into the system. The paper is organised as follows. The ideal and excluded volume hadron resonance gas models are introduced in Sec. \ref{sec:HRG}. In Sec. \ref{sec:Fluctuations} we briefly discuss fluctuations of conserved charges and several relevant experimental observables. In Sec. \ref{sec:Results} we discuss the results of this paper. Finally, we summarise our results in Sec. \ref{sec:Discussion}. \section{\label{sec:HRG} Hadron Resonance Gas model} In the HRG model, the thermal fireball consists of all the hadrons and resonances given in the particle data book \cite{Agashe:2014kda}. There are several variants of the HRG model in the literature. Different versions of this model and some of the recent works using these models may be found in Refs. \cite{Hagedorn:1980kb, Rischke:1991ke, Cleymans:1992jz, BraunMunzinger:1994xr, Cleymans:1996cd, Yen:1997rv, BraunMunzinger:1999qy, Cleymans:1999st, BraunMunzinger:2001ip, BraunMunzinger:2003zd, Karsch:2003zq, Tawfik:2004sw, Becattini:2005xt, Andronic:2005yp, Andronic:2008gu,Begun:2012rf, Andronic:2012ut, Tiwari:2011km, Fu:2013gga, Tawfik:2013eua, Garg:2013ata, Bhattacharyya:2013oya, Bhattacharyya:2015zka, Kadam:2015xsa, Kadam:2015fza, Kadam:2015dda, Albright:2014gva, Albright:2015uua, Begun:2016cva}.
The HRG model is not only successful in describing the hadron yields in central heavy ion collisions from AGS up to RHIC energies~\cite{BraunMunzinger:1994xr, Cleymans:1996cd, BraunMunzinger:1999qy, Cleymans:1999st, BraunMunzinger:2001ip, Becattini:2005xt, Andronic:2005yp, Andronic:2008gu} but also in describing the bulk properties of hadronic matter in thermal and chemical equilibrium \cite{Karsch:2003zq, Tawfik:2004sw, Andronic:2012ut}. The logarithm of the grand canonical partition function of a hadron resonance gas can be written as \cite{Andronic:2012ut}, \begin{equation} \ln Z^{id}=\sum_i \ln Z_i^{id}, \end{equation} where the sum is over all the hadrons and the superscript $id$ refers to the ideal, {\it i.e.}, non-interacting HRG. For particle species $i$, \begin{equation} \ln Z_i^{id}=\pm \frac{Vg_i}{(2\pi)^3}\int d^3p \ln[1\pm\exp(-(E_i-\mu_i)/T)], \end{equation} where $V$ is the volume of the system, $g_i$ is the degeneracy factor, $T$ is the temperature, $E_i$ is the single particle energy, $m_i$ is the mass and $\mu_i=B_i\mu_B+S_i\mu_S+Q_i\mu_Q$ is the chemical potential. In the last expression, $B_i, S_i, Q_i$ are respectively the baryon number, strangeness and charge of the particle, and the $\mu$'s are the corresponding chemical potentials. The upper and lower signs correspond to baryons and mesons, respectively. We assume that the hadronic matter is in thermal and chemical equilibrium; therefore we have ignored non-equilibrium phenomena like decays of particles along with minimum bias jets and hadronisation. We have ignored the effect of parton fragmentation into hadrons, which produces very significant correlations at lower energies and in peripheral collisions at energies as high as 200 GeV \cite{Trainor:2010zv,Trainor:2012jv}. In addition, at lower collision energies stopping becomes important, which has not been considered here. The partition function is the basic quantity from which one can calculate various thermodynamic quantities of the thermal system.
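As a minimal numerical illustration (a sketch of ours, not part of the analysis code), the momentum integral in $\ln Z_i^{id}$ can be evaluated directly. The function below returns the single-species pressure $P_i = T \ln Z_i^{id}/V$ in natural units (GeV$^4$); its massless, $\mu_i=0$ limits can be checked against the standard values $g\pi^2T^4/90$ for bosons and $\tfrac{7}{8}\,g\pi^2T^4/90$ for fermions.

```python
import numpy as np

T = 0.150  # temperature in GeV

def pressure(m, g, mu=0.0, boson=True, pmax=5.0, n=4000):
    """P_i = (T/V) ln Z_i^{id} = -/+ g T/(2 pi^2) * int p^2 ln(1 -/+ e^{-(E-mu)/T}) dp,
    lower sign for mesons (bosons), upper sign for baryons (fermions)."""
    p = np.linspace(1e-6, pmax, n)
    E = np.sqrt(p ** 2 + m ** 2)
    s = -1.0 if boson else 1.0
    f = p ** 2 * np.log(1.0 + s * np.exp(-(E - mu) / T))
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(p))   # trapezoid rule
    return s * g * T / (2.0 * np.pi ** 2) * integral

P_pion = pressure(0.138, 3.0, boson=True)              # pi+, pi-, pi0
P_proton = pressure(0.938, 2.0, mu=0.025, boson=False)
print(P_pion, P_proton)
```

Differentiating $\ln Z_i^{id}$ with respect to $\mu_i$ in the same way yields the number density $n_i$ used below.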
The number density $n_i$ of the $i$th particle is defined as, \begin{equation} n_i =\frac{T}{V} \left(\frac{\partial \ln Z_i} {\partial\mu_i}\right)_{V,T} =\frac{g_i}{{(2\pi)}^3} \int\frac{d^3p} {\exp[(E_i-\mu_i)/T]\pm1}. \end{equation} In the case of heavy ion collision experiments, the parameters $T$ and $\mu$'s of the HRG model correspond to those at chemical freeze-out, which depend on the initial conditions of the collision. The chemical potentials $\mu_B, \mu_S$ and $\mu_Q$ are not independent, but related (on average) to each other as well as to $T$ via the relations \cite{Alba:2014eba}, \begin{equation} \label{eq:ns} \sum_i n_i (T, \mu_B, \mu_S, \mu_Q) S_i=0, \end{equation} and \begin{equation} \label{eq:nbq} \sum_i n_i (T, \mu_B, \mu_S, \mu_Q) Q_i= r \sum_i n_i (T, \mu_B, \mu_S, \mu_Q) B_i, \end{equation} where $r$ is the ratio of net-charge to net-baryon number of the colliding nuclei. For Au + Au collisions $r = N_p /(N_p + N_n)=0.4$, where $N_p$ and $N_n$ are respectively the proton and neutron numbers of the colliding nuclei. Eq.~\ref{eq:ns} reflects the fact that initially there is no net-strangeness in the colliding nuclei. In terms of transverse momentum $(p_T)$ and pseudo-rapidity ($\eta$), the volume element $d^3p$ and the single particle energy $E_i$ can be written as $d^3p=2\pi\, p_T^2 \cosh\eta \, dp_T \, d\eta$ and $E_i=\sqrt{(p_T \cosh\eta)^2+m_i^2}$, respectively. Instead of pseudo-rapidity, one can use rapidity ($y$) as well. In that case $d^3p$ and $E_i$ can be written as $d^3p=2\pi\, p_T m_{Ti} \cosh y \, dp_T \, dy$ and $E_i=m_{Ti} \cosh y$, respectively, where $m_{Ti}=\sqrt{p_T^2+m_i^2}$. The experimental acceptances can be incorporated by considering the appropriate integration ranges, either in $p_T$ and $\eta$ or in $p_T$ and $y$. \subsection{Excluded volume corrections} In the ideal HRG model, point-like particles are considered.
Although attractive interactions between hadrons are incorporated through the presence of resonances, repulsive interactions are ignored in this framework. This simple model has only a few parameters. Despite its simplicity, this model successfully describes the bulk properties of the system created in heavy ion collisions. Repulsive interactions are nevertheless needed, especially at very high temperature and/or large baryon chemical potential, where the ideal gas assumption becomes inadequate, to capture the basic qualitative features of strong interactions. In the EVHRG model \cite{Hagedorn:1980kb, Rischke:1991ke, Cleymans:1992jz, Yen:1997rv, Begun:2012rf, Andronic:2012ut, Fu:2013gga, Tawfik:2013eua, Bhattacharyya:2013oya, Kadam:2015xsa, Kadam:2015fza, Kadam:2015dda, Albright:2014gva, Albright:2015uua}, the hadronic phase is modelled by a gas of interacting hadrons, where the geometrical sizes of the hadrons are explicitly incorporated as an excluded volume correction, to approximate the short-range repulsive hadron-hadron interaction. \section{\label{sec:Fluctuations} Fluctuations of conserved charges} Derivatives of $\ln Z$ with respect to the corresponding chemical potentials define susceptibilities, which experimentally become accessible through event-by-event analysis of fluctuations of conserved quantities such as net-baryon number, net-charge and net-strangeness. The $n$th-order susceptibility is defined as, \begin{equation}\label{eq:chi} \chi^n_q=\frac{1}{V T^3}\frac{\partial^n {(\ln Z)}}{\partial {(\frac{\mu_q}{T})}^n} \end{equation} where $\mu_q$ is the chemical potential for the conserved charge $q$. Experimentally, net-charges $N_q$ ($= N_q^+-N_q^-$) are measured in a finite acceptance on an event-by-event basis.
The mean ($M_q$), variance ($\sigma_q^2$), skewness ($S_q$) and kurtosis ($\kappa_q$) of the net-charge distribution are related to the different orders of susceptibilities by the following relations: \begin{equation} M_q=\left\langle N_q \right \rangle = VT^3\chi_q^1, \end{equation} \begin{equation} \sigma_q^2=\left\langle(\delta{ N_q})^2\right\rangle=VT^3\chi_q^2, \end{equation} \begin{equation} S_q=\frac{\left\langle(\delta{ N_q})^3\right\rangle}{\sigma_q^3}=\frac{VT^3\chi_q^3}{(VT^3\chi_q^2)^{3/2}}, \end{equation} \begin{equation} \kappa_q=\frac{\left\langle(\delta{ N_q})^4\right\rangle}{\sigma_q^4}-3=\frac{VT^3\chi_q^4}{(VT^3\chi_q^2)^2}, \end{equation} where $\delta{ N_q} = N_q - \left\langle N_q \right\rangle$. The mean, variance, skewness and kurtosis are, respectively, measures of the most probable value, width, asymmetry and peakedness of the distribution. From the above equations, volume-independent ratios can be defined by the following relations: \begin{subequations} \label{allequations} \begin{eqnarray} &\sigma_q^2/M_q=C_2/C_1=\chi_q^2/\chi_q^1,\label{equationa} \\ &S_q \sigma_q = C_3/C_2 = \chi_q^3/\chi_q^2,\label{equationb} \\ &\kappa_q \sigma_q^2 = C_4/C_2 = \chi_q^4/\chi_q^2,\label{equationc} \end{eqnarray} \end{subequations} where $C_n$ is the $n$th-order cumulant of the charge distribution. The STAR collaboration has reported results for the above-mentioned observables of net-proton and net-charge at different energies ranging from $7.7$ GeV to $200$ GeV and at various centralities \cite{Adamczyk:2013dal, Adamczyk:2014fia}. The PHENIX collaboration has also reported results for similar observables for net-charge \cite{Adare:2015aqk}. Non-monotonic variations of these ratios with beam energy ($\sqrt{s_{NN}}$) and also with centrality at a fixed $\sqrt{s_{NN}}$ are believed to be good signatures of a phase transition and a CEP.
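For orientation, these cumulant ratios can be illustrated with the ideal (Boltzmann) HRG baseline, where the event-by-event net charge $N_q = N_q^+ - N_q^-$ of independently Poisson-distributed $N_q^{\pm}$ follows a Skellam distribution with $C_n = \langle N_q^+\rangle + (-1)^n \langle N_q^-\rangle$. The short sketch below (illustrative multiplicities of ours, not fitted to any data) reconstructs the ratios of Eqs.~(\ref{allequations}) from sampled events:

```python
import numpy as np

# Illustrative positive/negative mean multiplicities (not fitted to data)
lam_p, lam_m = 8.0, 5.0
rng = np.random.default_rng(1)
# Skellam-distributed net charge: difference of two independent Poisson variables
Nq = rng.poisson(lam_p, 10 ** 6) - rng.poisson(lam_m, 10 ** 6)

M = Nq.mean()
d = Nq - M
C2 = (d ** 2).mean()
C3 = (d ** 3).mean()
C4 = (d ** 4).mean() - 3.0 * (d ** 2).mean() ** 2

s2_over_M = C2 / M        # sigma^2/M     -> (l+ + l-)/(l+ - l-) = 13/3
S_sigma = C3 / C2         # S*sigma       -> (l+ - l-)/(l+ + l-) = 3/13
kappa_sigma2 = C4 / C2    # kappa*sigma^2 -> 1 for a Skellam distribution
print(s2_over_M, S_sigma, kappa_sigma2)
```

Deviations of measured ratios from such Poisson/Skellam baselines are precisely what the analyses below try to quantify.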
These observables have also been studied in different models \cite{Bhattacharyya:2013oya, Garg:2013ata, Alba:2014eba, Bhattacharyya:2015zka, Albright:2015uua, Karsch:2015zna, Ichihara:2015kba, Xu:2016jaz, Xu:2016qzd} and also in LQCD \cite{Gupta:2011wh, Karsch:2011gg, Karsch:2015nqx, Bazavov:2015zja}. Recently $S\sigma$ and $\kappa \sigma^2$ for charged pions have been studied using a non-equilibrium HRG model \cite{Begun:2016cva}. \begin{figure*}[!htb] \centering \includegraphics[width=\textwidth]{T_muB_7.7_200GeV_hrg_evhrg.eps} \caption{(Color online) Centrality (in terms of $N_{part}$) dependence of chemical freeze-out temperatures and baryon chemical potentials for Au + Au collisions at $\sqrt{s_{NN}}=~200,~62.4,~39,~27, ~19.6,~11.5$ and $7.7$ GeV. $\sqrt{s_{NN}}$ varies column wise. Four sets of chemical freeze-out parameters (CFO) have been plotted. For specifications of the different sets of parameters see Table \ref{tableLabel1}.} \label{T_mu_Npart} \end{figure*} \section{\label{sec:Results} Results} In this paper, we have studied fluctuations of net-proton and net-charge using the HRG model as well as its interacting version, {\it i.e.}, the EVHRG model. In Refs. \cite{Fu:2013gga, Bhattacharyya:2013oya}, it was shown that the ratios of higher order cumulants are affected by the excluded volume corrections. Further, it was shown that the experimental data on $\sigma^2/M$ for net-proton as well as for net-charge in central Au + Au collisions \cite{Bhattacharyya:2013oya} can be described quite well using this model. Not only that, $S\sigma$ and $\kappa\sigma^2$ can also be described within experimental error for $\sqrt{s_{NN}} \ge 27$ GeV. Therefore, it is very important to consider the EVHRG model for the study of fluctuations of conserved charges. On the other hand, the ratios of cumulants depend on acceptance cuts as well \cite{Garg:2013ata, Bhattacharyya:2013oya,Karsch:2015zna}. Therefore, in this work we have used the HRG/EVHRG model with proper experimental acceptances.
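To make the role of acceptances concrete, here is a small sketch (ours, with illustrative freeze-out values rather than fitted ones) that evaluates the proton number density of Sec.~\ref{sec:HRG} in a restricted $(p_T, y)$ window, using $d^3p=2\pi\, p_T m_{T} \cosh y\, dp_T\, dy$ and $E=m_T\cosh y$:

```python
import numpy as np

T, muB = 0.160, 0.025   # GeV; illustrative freeze-out values, not fitted
m, g = 0.938, 2.0       # proton mass and degeneracy

def density(pt_lo, pt_hi, y_max, mu, npts=400):
    """n = g/(2 pi)^3 * int d^3p / (exp((E - mu)/T) + 1) over a (pT, y) window,
    with d^3p = 2 pi pT mT cosh(y) dpT dy and E = mT cosh(y)."""
    pT = np.linspace(pt_lo, pt_hi, npts)
    y = np.linspace(-y_max, y_max, npts)
    PT, Y = np.meshgrid(pT, y, indexing="ij")
    mT = np.sqrt(PT ** 2 + m ** 2)
    E = mT * np.cosh(Y)
    # Fermi-Dirac occupation (exponent clipped to avoid overflow at large |y|)
    f = 1.0 / (np.exp(np.minimum((E - mu) / T, 700.0)) + 1.0)
    integrand = PT * mT * np.cosh(Y) * f
    dpT, dy = pT[1] - pT[0], y[1] - y[0]
    return g / (4.0 * np.pi ** 2) * integrand.sum() * dpT * dy

n_cut = density(0.4, 0.8, 0.5, muB)      # STAR net-proton window
n_full = density(1e-4, 5.0, 6.0, muB)    # effectively full phase space
print(n_cut / n_full)                    # fraction of protons inside the cuts
```

The same windowed integrals, evaluated for every species and every order of susceptibility, are what enter the model-to-data comparison.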
For our present study, we constrained the chemical freeze-out temperature and chemical potentials using some of the measured net-charge and net-proton cumulants and then predicted the others in order to test the model. In all our calculations, we have incorporated all the hadrons listed in the particle data book up to a mass of 3 GeV \cite{Agashe:2014kda}. \subsection{\label{sec:CFO} Centrality dependence of chemical freeze-out parameters} \begin{table} \centering \begin{tabular}{|c|c|c|} \hline Sets of parameters &Experimental data used&Model used\\ \hline CFO1 &$(\sigma^2/M)_{np},(\sigma^2/M)_{nc}$&HRG \\ CFO2 &$(\sigma^2/M)_{np},(\sigma^2/M)_{nc}$ &EVHRG \\ CFO3 &$(\sigma^2/M)_{nc}$, $(S\sigma)_{np}, (S\sigma)_{nc}$ &HRG \\ CFO4 &$(\sigma^2/M)_{nc}$, $(S\sigma)_{np}, (S\sigma)_{nc}$ &EVHRG \\ \hline \end{tabular} \caption{Sets of chemical freeze-out parameters ($T$, $\mu$'s). Subscripts ``np'' and ``nc'' correspond to net-proton and net-charge respectively.} \label{tableLabel1} \end{table} The thermal fireball created in a heavy ion collision expands and cools. After some time, inelastic collisions among the particles stop and hence the particle yields (or particle ratios) get fixed. This stage is called chemical freeze-out. From the experimental information about particle yields or particle ratios, the chemical freeze-out temperature and baryon chemical potential can be estimated \cite{Cleymans:2005xv,Xu:2001zj,Becattini:2005xt,Andronic:2005yp, Andronic:2009jd, Karsch:2010ck,Chatterjee:2015fua}. Chemical freeze-out parameters are reported to be independent of centrality \cite{Kaneta:2004zr, Cleymans:2004pp}. However, we wanted to revisit the centrality dependence of chemical freeze-out parameters through the study of higher order cumulants in Au + Au collisions.
Therefore, in this paper, we estimate chemical freeze-out temperatures and chemical potentials within the HRG model, at different energies as well as at different centralities, using the experimentally measured ratios of cumulants of net-proton and net-charge in Au + Au collisions by the STAR collaboration at RHIC \cite{Adamczyk:2013dal, Adamczyk:2014fia}. Net-proton fluctuations were experimentally measured at mid-rapidity ($|y|<0.5$) and within transverse momentum $0.4 < p_T < 0.8$ GeV. On the other hand, net-charge fluctuations were measured in the pseudo-rapidity range $|\eta|<0.5$ and within the transverse momentum range $0.2 < p_T < 2.0$ GeV (removing net-proton of $p_T<0.4$ GeV) \cite{Adamczyk:2014fia}. The same acceptances have been used in the HRG / EVHRG model in the present study. We have considered a hard-core radius of $0.3$ fm for all hadrons whenever the EVHRG model is used. Four sets of chemical freeze-out parameters, listed in table \ref{tableLabel1}, have been used in order to describe $\sigma^2/M$, $S\sigma$ and $\kappa\sigma^2$ of net-proton and net-charge. Here we would like to discuss the modus operandi for the estimation of the parameter sets listed in table \ref{tableLabel1}. We have three experimental cumulant ratios $\sigma^2/M$, $S\sigma$ and $\kappa\sigma^2$ for net-charge and net-proton. It should be noted that $\sigma^2/M$ has smaller experimental errors compared to $S\sigma$ and $\kappa\sigma^2$. Moreover, the experimental errors are smaller for net-proton fluctuations compared to net-charge data.
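The three ratios used throughout are built from the first four cumulants of the event-by-event net-number distribution via the identities $\sigma^2/M = C_2/C_1$, $S\sigma = C_3/C_2$ and $\kappa\sigma^2 = C_4/C_2$. The sketch below (purely illustrative, not part of the analysis) estimates them from a synthetic Skellam sample, which is the ideal-gas HRG baseline:

```python
import numpy as np

def cumulant_ratios(net_N):
    """sigma^2/M, S*sigma and kappa*sigma^2 from event-by-event net
    numbers, using the cumulant identities C2/C1, C3/C2 and C4/C2."""
    m = np.mean(net_N)
    d = net_N - m
    c2 = np.mean(d**2)                # variance (second cumulant)
    c3 = np.mean(d**3)                # third cumulant
    c4 = np.mean(d**4) - 3.0 * c2**2  # fourth cumulant
    return c2 / m, c3 / c2, c4 / c2

# Skellam baseline (difference of two Poissons, the ideal-HRG limit):
# C1 = C3 = l1 - l2 and C2 = C4 = l1 + l2, so with l1 = 20, l2 = 10
# one expects sigma^2/M = 3, S*sigma = 1/3 and kappa*sigma^2 = 1.
rng = np.random.default_rng(1)
n, l1, l2 = 2_000_000, 20.0, 10.0
sample = rng.poisson(l1, n) - rng.poisson(l2, n)
s2_over_M, S_sigma, kappa_s2 = cumulant_ratios(sample)
```

Deviations of the measured ratios from this Skellam baseline are precisely what the fits in the following subsections exploit.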
In order to evaluate the chemical freeze-out parameters from these observables at a particular $\sqrt{s_{NN}}$ and centrality, we use a $\chi^2$ minimisation technique where $\chi^2$ is defined as, \begin{equation}\label{chi2_min} \chi^2=\frac{1}{N} \sum_{i=1}^{N} \frac{(R_i^{expt}-R_i^{model})^2}{\sigma_i^2}, \end{equation} where $N$ is the number of observables and $R_i^{model}$ is the $i$-th observable, with $R_i^{expt}$ and $\sigma_i$ being its experimental value and error respectively (the statistical error has been used here). Error bars in the evaluated freeze-out parameters correspond to $\chi^2=\chi^2_{min}+1$. We have taken care of the conservation laws Eqs. \ref{eq:ns} and \ref{eq:nbq} in the evaluation of the chemical freeze-out parameters. First we obtained freeze-out parameters using only the lower order cumulant ratios $\sigma^2/M$ of net-charge and net-proton. For this we have the two sets CFO1 (HRG) and CFO2 (EVHRG). Then we wanted to check whether the freeze-out parameters estimated including the higher order cumulant $S\sigma$ for net-charge and net-proton agree with the above set. We found that the extremely high precision of the experimental data for $\sigma^2/M$ of net-proton completely biases the $\chi^2$ minimisation to agree with the earlier set. So finally we used $\sigma^2/M$ and $S\sigma$ of net-charge and $S\sigma$ of net-proton to extract the second set of parameters, CFO3 (HRG) and CFO4 (EVHRG). Note that if equilibration is complete then any combination of observables should reproduce a mutually agreeable set of fitting parameters. \begin{figure*}[!htb] \centering \includegraphics[width=\textwidth]{compare_muB_T_4.eps} \caption{Chemical freeze-out parameters in the ($T, \mu_B$) plane of the QCD phase diagram. We plot chemical freeze-out parameters for the most central ($0- 5 \%$) as well as for the most peripheral ($70- 80 \%$) collisions for $7.7$ GeV $\le \sqrt{s_{NN}} \le 200$ GeV.
We compare our results of CFO1 and CFO2 with the chemical freeze-out parameters for $0- 5 \%$ centralities given in Ref.~\cite{Alba:2014eba} (Alba et al.) for $11.5$ GeV $\le \sqrt{s_{NN}} \le 200$ GeV. The chemical freeze-out parameters given in Ref.~\cite{Chatterjee:2015fua} (Chatterjee et al.) have also been plotted.} \label{T_muB} \end{figure*} Figure \ref{T_mu_Npart} shows the centrality dependence of the chemical freeze-out $T, \mu_B$ at different $\sqrt{s_{NN}}$ from $7.7$ GeV up to $200$ GeV. Four sets of chemical freeze-out parameters (CFO1 - CFO4) have been plotted. The average number of participants ($N_{part}$) is maximum for the most central ($0 - 5 \%$) collisions and minimum for the most peripheral ($70 - 80 \%$) collisions. In this figure $\sqrt{s_{NN}}$ decreases column wise. The leftmost column of Fig. \ref{T_mu_Npart} corresponds to the highest beam energy, i.e.\ $\sqrt{s_{NN}}= 200$ GeV, whereas the rightmost column corresponds to $\sqrt{s_{NN}}= 7.7$ GeV. One would expect that with higher $\sqrt{s_{NN}}$ the particle production would be higher and give rise to a high freeze-out temperature. On the other hand, for low $\sqrt{s_{NN}}$ particle production would be lower and the collision participants would also contribute actively to the system properties (due to baryon stopping). Thus the observed temperature may be low but the baryon chemical potential may be large. Note that for complete equilibration at freeze-out, all evolutionary history of the system will be erased. This will be reflected in the agreement of thermodynamic parameters fitted from all possible experimental observables. On the other hand, for incomplete equilibration certain discrepancies among the thermodynamic parameters fitted from different observables may arise.
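The $\chi^2$ minimisation of Eq.~\ref{chi2_min} over the $(T, \mu_B)$ plane can be sketched as below. The two-observable model function here is a purely illustrative stand-in for the full HRG / EVHRG predictions, and all numerical values are hypothetical; the actual analysis also imposes the conservation laws and extracts parameter errors from the $\chi^2=\chi^2_{min}+1$ contour.

```python
import numpy as np
from itertools import product

def model_ratios(T, muB):
    # Toy stand-in for the HRG / EVHRG predictions (illustrative only):
    # two observables with different (T, muB) dependence, so that the
    # minimisation can constrain both parameters. T and muB in GeV.
    return np.array([muB / T, np.exp(-0.938 / T) * np.cosh(muB / T)])

def chi2(T, muB, R_expt, sigma):
    # Eq. (chi2_min): mean squared deviation weighted by the errors
    R_model = model_ratios(T, muB)
    return np.mean(((R_expt - R_model) / sigma) ** 2)

# Pseudo-data generated at a known point (T, muB) = (0.160, 0.050) GeV
R_expt = model_ratios(0.160, 0.050)
sigma = 0.02 * np.abs(R_expt)

# Brute-force scan of the (T, muB) grid; the minimum recovers the
# input point, and error bars would follow the chi2_min + 1 contour.
Ts = np.linspace(0.100, 0.200, 101)
mus = np.linspace(0.001, 0.100, 100)
best = min((chi2(T, mu, R_expt, sigma), T, mu)
           for T, mu in product(Ts, mus))
```

In practice a gradient-based minimiser would replace the grid scan, but the grid makes the $\chi^2$ landscape and the degeneracy structure easy to inspect.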
On top of that, the presence of jets, hadronic decays, as well as interactions among the hadrons beyond those considered through the excluded volume effects, may also cause the system to deviate from the HRG picture used to model it. It can be seen from Fig. \ref{T_mu_Npart} that the chemical freeze-out temperatures of CFO1 and CFO3 (or CFO2 and CFO4) are almost the same for $\sqrt{s_{NN}}\ge 27$ GeV. There are significant separations between these two sets of parameters for $\sqrt{s_{NN}}< 27$ GeV, and these separations increase towards central collisions for a fixed $\sqrt{s_{NN}}$. Moreover, these separations increase with the decrease of $\sqrt{s_{NN}}$. Overall the spread in temperature for the whole range of $\sqrt{s_{NN}}$ and centrality is within 140-180 MeV. On the other hand, the magnitudes of the chemical freeze-out baryon chemical potentials ($\mu_B$) increase by about two orders of magnitude with the decrease of $\sqrt{s_{NN}}$ as well as with the increase in centrality. The occurrence of high net-baryon density is expected when the participant nucleons are deposited in the collision region. More or less similar behaviour of $\mu_B$ is reported in Refs. \cite{Kumar:2012np,Das:2014oca,Yu:2014epa} where chemical freeze-out parameters are extracted by analysing particle yields measured experimentally \cite{Kumar:2012np,Das:2014oca} or generated by an event generator \cite{Yu:2014epa}. The separation of the parameters obtained from CFO1 and CFO3 (or CFO2 and CFO4) is also observed here, but in the opposite direction. The lower order cumulants thus seem to equilibrate with a lower temperature and higher density than the higher order cumulants. The conclusion that one can draw from this observation is that the system formed in the heavy ion collision has not completely equilibrated if we consider only the HRG model to describe it. However it is possible that the HRG picture, if suitably modified, may lead to the scenario as found in Fig.~\ref{T_mu_Npart}.
Here the question is whether there are any possibilities such that one can find a multicomponent system with different equilibrium parameters that can systematically explain the observed discrepancy for the fitted parameters. At this point it is tempting to propose a possible scenario that may give rise to such an agreement of thermodynamic parameters for higher $\sqrt{s_{NN}}$ and deviations found for lower $\sqrt{s_{NN}}$. We first assume that in the region of lower values of $\sqrt{s_{NN}}$ the effects coming from the jets are quite small, and hence are not responsible for this deviation. Now if the system has thermalised near or above the phase transition region and then evolved down to the hadronic phase then one can qualitatively describe the situation as follows. For a cross-over region the system undergoes rapid changes from partonic to hadronic phase, but all the components can still follow a given equilibrium condition at all times. This is expected to happen for large $\sqrt{s_{NN}}$. However near the critical point, correlation length $\xi$ would tend to infinity and there would be a large enhancement in the fluctuations. In a realistic situation, as in heavy ion collisions, dynamical variables are functions of time. As the system moves towards the critical region, relaxation time increases and at some point the system may expand too fast to maintain thermodynamic equilibrium. So the correlation length gets constrained due to this critical slowing down \cite{Ma_1976} and becomes frozen at some time. But the system expansion continues further. This situation may lead to the difference in the information carried by the different order of cumulants. 
More specifically, the second, third and fourth order cumulants of multiplicities are related to the correlation length by the relations $\left\langle(\delta{N})^2\right\rangle \sim \xi^2$, $\left\langle(\delta{N})^3\right\rangle \sim \xi^{4.5}$ and $\left\langle(\delta{N})^4\right\rangle \sim \xi^7$ respectively \cite{Stephanov:2008qz}. This, in turn, implies that for higher order cumulants the relaxation time to the equilibrium values may be considerably larger compared to that of lower order cumulants. So in the final spectrum, higher order cumulants are expected to carry the information of the system farther from equilibrium compared to lower order cumulants. For example, compared to lower moments, the temperature evaluated using the higher moments may be larger, since a system away from equilibrium is hotter. This is what we observe for lower $\sqrt{s_{NN}}$: the temperatures of CFO3 / CFO4, where third order fluctuations are involved, are larger compared to those of CFO1 / CFO2 for $\sqrt{s_{NN}}< 27$ GeV, the corresponding chemical potential being smaller than that of CFO1 / CFO2. Incidentally, this is the range of temperature and baryon chemical potential close to which the critical end point is expected to lie. Availability of higher moment data with much better statistics is essential for further constraining this picture. We however emphasise that this is only a plausibility argument for the effects of a CEP modifying the simple HRG parameters differently for different cumulants. A systematic study of various other dynamical effects would be required to ascertain how far this picture is valid \cite{Luo:2014tga,Netrakanti:2014mta}. Another important caveat is that the contributions due to purely statistical fluctuations in the cumulants reported by the STAR experiment are not subtracted from the variances, and estimates of $S\sigma$ and $\kappa\sigma^2$ rely on models. Therefore, the sensitivity of the reported cumulants to dynamical effects is ambiguous.
In fact it is even difficult to ascertain whether the statistical fluctuations in the data may overwhelm the critical fluctuations or not. Figure \ref{T_muB} shows the chemical freeze-out parameters for $0-5\%$ and $70-80 \%$ centralities in the ($T, \mu_B$) plane. In this figure, we also compare our chemical freeze-out parameters with previous works \cite{Alba:2014eba, Chatterjee:2015fua}. In \cite{Alba:2014eba}, chemical freeze-out parameters for most central collisions were estimated using the experimental data of $\sigma^2/M$ of net-proton and net-charge. In their model they considered the effects of resonance decays, experimental acceptances and randomisation of the isospin of nucleons in the hadronic phase. They excluded chemical freeze-out parameters for $\sqrt{s_{NN}} = 7.7$ GeV. It can be seen that our chemical freeze-out parameters of CFO1 / CFO2 for $0 - 5 \%$ centrality are very close to those of \cite{Alba:2014eba} for $\sqrt{s_{NN}} = 19.6 - 200$ GeV. Moreover, the agreement is slightly better for CFO2. An interesting point is that, with the decrease of $\sqrt{s_{NN}}$ from $\sqrt{s_{NN}} = 200$ GeV, the chemical freeze-out $T$ increases up to $\sqrt{s_{NN}} = 39$ GeV, then decreases at $\sqrt{s_{NN}} = 27$ GeV, becomes almost flat up to $\sqrt{s_{NN}} = 19.6$ GeV, and then decreases again. In contrast, the chemical freeze-out $\mu_B$ increases with the decrease of $\sqrt{s_{NN}}$ over the whole range of $\sqrt{s_{NN}}$. This behaviour of the chemical freeze-out $T$ contradicts what has been reported in Refs. \cite{Cleymans:2005xv,Xu:2001zj,Becattini:2005xt,Andronic:2005yp, Andronic:2009jd,Karsch:2010ck,Tiwari:2011km,Chatterjee:2015fua}, where chemical freeze-out parameters have been extracted from particle multiplicities. We plot the chemical freeze-out $T$ and $\mu_B$ of Ref. \cite{Chatterjee:2015fua} for comparison. Refs.
\cite{Cleymans:2005xv,Xu:2001zj,Becattini:2005xt,Andronic:2005yp, Andronic:2009jd,Karsch:2010ck,Tiwari:2011km,Chatterjee:2015fua} showed that the chemical freeze-out $T$ rapidly increases with the increase of $\sqrt{s_{NN}}$ in the SIS-AGS-SPS energy range and then saturates at top RHIC energy. However, the behaviour of the chemical freeze-out $\mu_B$ reported in these references was similar. In Fig. \ref{T_muB} we also show the chemical freeze-out $T, \mu_B$ of CFO3 / CFO4. For both CFO3 and CFO4, the chemical freeze-out $T$ increases with the decrease of $\sqrt{s_{NN}}$ for $11.5$ GeV $\le \sqrt{s_{NN}} \le 200$ GeV, and the temperatures are larger compared to those of CFO1 / CFO2. Although the fast rise of $\mu_B$ for higher moments found in HRG seems to have slowed down for EVHRG, as seen in the figure for CFO4, all the chemical freeze-out parameters lie within a certain band in the ($T, \mu_B$) plane. Recently, the possibility of a larger chemical freeze-out temperature has also been indicated at LHC energy \cite{Vovchenko:2015cbk}. \begin{figure*}[!htb] \centering \includegraphics[width=\textwidth]{fitting_muB_T_4.eps} \caption{(Color online) Variation of $\mu_B/T$ with $N_{part}$ (black points). Blue solid points correspond to $\mu_B/T$ according to Eq. \ref{fitting_eq}.} \label{fitting_muB_T} \end{figure*} \begin{figure*}[!htb] \centering \includegraphics[width=\textwidth]{scaling_mub_T_4.eps} \caption{Scaling behaviour of $\mu_B/T$ with centrality. On the horizontal axis, $N_{part}$ is normalised to its value in the most central collision, and similarly on the vertical axis $\mu_B/T$ is normalised to that of the most central collision. Therefore, on the horizontal axis, the maximum value, which is equal to $1$, corresponds to the most central collision ($0-5 \%$) and the minimum value corresponds to the most peripheral collision ($70-80 \%$).} \label{muB_T_Npart} \end{figure*} \subsection{Scaling behaviour of \texorpdfstring{$\mu_B/T$}{mB/T}} \label{sec:Scaling} Figure
\ref{fitting_muB_T} shows the variation of $\mu_B/T$ with $N_{part}$. $\mu_B/T$ increases with the increase of $N_{part}$ at all energies for all four parameter sets. Moreover, $\mu_B/T$ increases with the decrease of $\sqrt{s_{NN}}$. It can also be seen that $\mu_B/T$ for CFO1 / CFO2 are larger compared to CFO3 / CFO4, and the differences between the parameters of CFO1 and CFO3 (or CFO2 and CFO4), as shown in Fig. \ref{T_muB}, increase when the value of $\mu_B/T$ is close to or greater than unity. In order to separate the effects of $N_{part}$ and $\sqrt{s_{NN}}$, $\mu_B/T$ can be expressed by the relation, \begin{equation}\label{fitting_eq} \mu_B/T(\sqrt{s_{NN}},N_{part})= p(0) (N_{part})^{1/p(1)} \sqrt{s_{NN}}^{p(2)}, \end{equation} where $p(0), p(1)$ and $p(2)$ are three parameters. In this equation, the first part depends only on $N_{part}$ while the second part depends only on $\sqrt{s_{NN}}$. For the fitting, we have simultaneously used $\mu_B/T$ of $\sqrt{s_{NN}}= 19.6$ GeV to $62.4$ GeV for CFO1 / CFO2 and $\sqrt{s_{NN}}= 27$ GeV to $200$ GeV for CFO3 / CFO4, for which $\chi^2$ per degree of freedom (ndf) is minimum. All the fitting parameters are listed in table \ref{tableLabel2}. The quality of the fit is quite good, as can be seen from the figure. The fitted parameters are then used to estimate $\mu_B/T$ for the remaining energies. It can be seen that, for the sets CFO1 / CFO2, the $\mu_B/T$ from Eq. \ref{fitting_eq} slightly underestimates the extracted $\mu_B/T$ for $\sqrt{s_{NN}}= 11.5$ GeV and $200$ GeV. On the other hand, for CFO3 / CFO4, $\mu_B/T$ evaluated using Eq. \ref{fitting_eq} slightly underestimates the extracted $\mu_B/T$ for peripheral collisions at $\sqrt{s_{NN}}= 11.5$ GeV and slightly overestimates it towards central collisions.
\begin{table*} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline CFO &Using $\frac{\mu_B}{T}$ of&$p(0)$&$p(1)$&$p(2)$&$\frac{\chi^2}{ndf}$\\ & $\sqrt{s_{NN}} (GeV)$ &&&&\\ \hline CFO1& 19.6-62.4 &$9.59 \pm 1.33$ & $7.30 \pm 1.60$ & $-0.95 \pm 0.02$ &0.13\\ CFO2& 19.6-62.4 & $8.95 \pm 0.28 $& $7.56 \pm 0.40$ & $-0.92 \pm 0.01$ & 0.17\\ CFO3& 27-200 &$4.9 \pm 0.14$ &$7.28 \pm 0.34$ & $-0.79 \pm 0.01$ & 0.53\\ CFO4& 27-200 &$5.23 \pm 0.44$ &$7.29 \pm 0.97$ & $-0.81 \pm 0.02$ & 0.11\\ \hline \end{tabular} \caption{Parameters of the fitting function $\mu_B/T=p(0) (N_{part})^{1/p(1)} \sqrt{s_{NN}}^{p(2)}$. Since $\frac{\mu_B}{T}$ is dimensionless, the dimension of $p(0)$ equals GeV$^{-p(2)}$.} \label{tableLabel2} \end{table*} In Fig. \ref{muB_T_Npart}, we have explored the scaling behaviour of $(\mu_B/T)/(\mu_B/T)_{central}$ with $N_{part}/(N_{part})_{central}$. Quantities on both axes have been normalised to the corresponding values in the most central collisions. As a result, the maximum value on the horizontal axis (equal to unity) corresponds to the most central collision ($0-5 \%$) and its value decreases towards the most peripheral collisions ($70-80 \%$). It can be seen that $\frac{\mu_B}{T}/ (\frac{\mu_B}{T})_{central}$ increases with the increase in $N_{part}/(N_{part})_{central}$ for all $\sqrt{s_{NN}}$ from $200$ GeV down to $11.5$ GeV. For the most peripheral collisions, $\frac{\mu_B}{T}/ (\frac{\mu_B}{T})_{central}$ lies within $65- 85 \%$ of the central-collision value. $\frac{\mu_B}{T}/ (\frac{\mu_B}{T})_{central}$ for all $\sqrt{s_{NN}}$ seems to scale well for the parameter sets CFO1 / CFO2. On the other hand, the scaling is found to be violated at lower $\sqrt{s_{NN}}$ for CFO3 / CFO4. The violation is large for $\sqrt{s_{NN}}=11.5$ GeV and small for $\sqrt{s_{NN}}=19.6$ GeV.
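To illustrate how Eq.~\ref{fitting_eq} can be fitted, and why the normalisation by the central-collision value produces a scaling collapse, the sketch below generates synthetic $\mu_B/T$ points from the CFO1 parameters of table \ref{tableLabel2} and recovers them with a least-squares fit. The $N_{part}$ grid and the $2\%$ scatter are hypothetical placeholders, not the STAR centrality table or the actual fit uncertainties.

```python
import numpy as np
from scipy.optimize import curve_fit

def mu_over_T(X, p0, p1, p2):
    """Eq. (fitting_eq): mu_B/T = p(0) * Npart^(1/p(1)) * sqrt(sNN)^p(2)."""
    Npart, sNN = X
    return p0 * Npart ** (1.0 / p1) * sNN ** p2

# Hypothetical Npart grid and the three energies used for the CFO1 fit
Npart = np.tile([14., 26., 45., 75., 115., 165., 235., 300., 350.], 3)
sNN = np.repeat([19.6, 39.0, 62.4], 9)

# Synthetic 'extracted' points from the CFO1 parameters of Table 2,
# with 2% Gaussian scatter standing in for the fit uncertainties
rng = np.random.default_rng(42)
truth = (9.59, 7.30, -0.95)
y = mu_over_T((Npart, sNN), *truth) * (1 + 0.02 * rng.standard_normal(27))

popt, _ = curve_fit(mu_over_T, (Npart, sNN), y, p0=(5.0, 5.0, -1.0))

# Scaling collapse: the sNN^p(2) factor cancels in the ratio to the
# most central value, so the normalised curves are energy independent.
grid = np.array([14., 75., 235., 350.])
curves = [mu_over_T((grid, s), *popt) / mu_over_T((grid[-1], s), *popt)
          for s in (19.6, 62.4, 200.0)]
```

The exact cancellation of the energy-dependent factor in the last step is the content of the statement that, by construction, $(\mu_B/T)/(\mu_B/T)_{central}$ is independent of $\sqrt{s_{NN}}$; any residual spread in the real data reflects the imperfection of the factorised fit.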
This violation of scaling may again be interpreted as due to possible critical behaviour at lower $\sqrt{s_{NN}}$, apart from being caused by other dynamical effects, as already discussed earlier. It can be noted that here the separations are observed towards lower values of the horizontal axis because we have normalised both axes to the corresponding values in the most central collisions. However, normalisation of the axes to the corresponding values from the most peripheral collisions would have resulted in separations towards higher values of the horizontal axis. In the above discussion $\sqrt{s_{NN}}= 7.7$ GeV has been excluded due to large error bars. The presence of scaling is a direct consequence of the fact that one can separate the dependence of $\mu_B/T$ on $\sqrt{s_{NN}}$ and $N_{part}$ as given by Eq. \ref{fitting_eq}. By construction, $(\mu_B/T)/(\mu_B/T)_{central}$ becomes independent of $\sqrt{s_{NN}}$. However, since the fitting of $\mu_B/T$ is not perfect (Fig. \ref{fitting_muB_T}) for all $\sqrt{s_{NN}}$, the scaling shown in Fig. \ref{muB_T_Npart} is also not exact. \subsection{\label{sec:Comparison} Comparison with experimental data} \begin{figure*}[!htb] \centering \includegraphics[width=\textwidth] {moments_products_7.7_200_np_nc_hrg_evhrg_dof2.eps} \caption{(Color online) Centrality dependence of ratios of cumulants of net-proton and net-charge. ``np'' and ``nc'' correspond to net-proton and net-charge respectively. Experimental data of fluctuations measured in Au + Au collisions by the STAR collaboration are taken from \cite{Adamczyk:2013dal, Adamczyk:2014fia}. Blue and black points have been used for net-proton and net-charge respectively.} \label{moments_products_np_nc_2} \end{figure*} \begin{figure*}[!thb] \centering \includegraphics[width=\textwidth] {moments_products_7.7_200_np_nc_hrg_evhrg_dof3.eps} \caption{(Color online) Same as Fig.
\ref{moments_products_np_nc_2} but for parameter sets CFO3 and CFO4.} \label{moments_products_np_nc_3} \end{figure*} Here we have used the extracted freeze-out parameters CFO1 and CFO2 to calculate the $\sigma^2/M$, $S\sigma$ and $\kappa\sigma^2$ of net-proton and net-charge using HRG and EVHRG model respectively. Note that for these two sets, chemical freeze-out parameters were estimated from experimental data of $\sigma^2/M$ only. We have compared our results with experimental data of fluctuations measured in Au + Au collisions by STAR collaboration \cite{Adamczyk:2013dal, Adamczyk:2014fia}. As mentioned earlier the experimental acceptances have been incorporated in our model calculation as well. In the top row of Fig. \ref{moments_products_np_nc_2}, we have shown centrality dependence of $\sigma^2/M$ of net-proton and net-charge at different beam energies. In this figure $\sqrt{s_{NN}}$ varies column wise. Blue and black points have been used for net-proton and net-charge respectively in this figure. For both net-proton and net-charge $\sigma^2/M$ decreases with increase of $N_{part}$ for all $\sqrt{s_{NN}}$. It can be seen that the ratios of lowest order susceptibilities ($i.e.$ $\sigma^2/M$) of net-proton and net-charge can be reproduced quite well using the CFO1 and CFO2 parameters in our model. We now evaluate the higher order susceptibility ratios using these two parameter sets. The middle row of Fig. \ref{moments_products_np_nc_2} shows centrality dependence of $S\sigma$ of net-proton and net-charge. The quantity $S\sigma$ for both net-proton and net-charge increases with increasing $N_{part}$ for $\sqrt{s_{NN}} \ge 11.5$ GeV. Experimental data of $S\sigma$ of net-proton also shows a similar trend. The experimental data of $S\sigma$ of net-proton matches within the error bar for $\sqrt{s_{NN}} > 27$ GeV. However, its value is overestimated at lower energies ($\sqrt{s_{NN}}\le 27$ GeV). 
On the other hand, $S\sigma$ of net-charge calculated in the HRG / EVHRG model is close to or within the error bars of the experimental data for all centralities at $\sqrt{s_{NN}} \ge 27$ GeV, and for $\sqrt{s_{NN}} < 27$ GeV the most central data match within error bars. In general $S\sigma$ of net-charge shows a monotonic behaviour and differs considerably from the experimental data. In the bottom row of Fig. \ref{moments_products_np_nc_2} we have shown the centrality dependence of $\kappa\sigma^2$ of net-proton and net-charge. The $\kappa\sigma^2$ for net-proton calculated in our model using CFO1 / CFO2 is almost independent of centrality for $\sqrt{s_{NN}} \ge 19.6$ GeV, and below that energy it decreases slightly with the increase of $N_{part}$. For all $\sqrt{s_{NN}}$, $\kappa\sigma^2$ of net-proton calculated in the HRG / EVHRG model is within the error bars or very close to the experimental data. The experimental data of $\kappa\sigma^2$ for net-charge match within the error bars with the HRG / EVHRG model results as we go towards central collisions for $\sqrt{s_{NN}} \ge 11.5$ GeV. Therefore we see that the HRG predictions of higher order cumulants, calculated from the freeze-out thermodynamic parameters estimated from the lower order cumulants, do not match the experimental data in general. This implies that the equilibration at freeze-out is not quite comprehensive vis-a-vis the HRG model. We can further check what happens when we consider one more higher order cumulant, as discussed below. Fig. \ref{moments_products_np_nc_3} corresponds to a similar plot as Fig. \ref{moments_products_np_nc_2}, but for the parameter sets CFO3 and CFO4. While $\sigma^2/M$ for both net-proton and net-charge shows consistency with the experimental data, as shown in the top row, the use of CFO3 / CFO4 in the HRG / EVHRG model shows a clear improvement in the agreement with the experimental data of $S\sigma$ for net-proton at all $\sqrt{s_{NN}}$, as shown in the middle row.
On the other hand, there is almost no change in the results for $S\sigma$ of net-charge. Once again we find that the prediction for $\kappa\sigma^2$ for net-proton agrees well with the experimental data whereas that for net-charge does not. This further reaffirms that the matter formed in heavy ion collisions does not conform to a system of completely equilibrated hadron gas. \section{\label{sec:Discussion} Discussion and Conclusion} We have extracted the chemical freeze-out parameters by fitting the experimental data of cumulants of net-proton and net-charge measured by the STAR collaboration, using both the HRG and EVHRG models. We have incorporated the proper experimental acceptances in our calculation. However, dynamical effects such as particle decays, minimum-bias jets, and baryon stopping are not considered in the present study. The experimental data of $\sigma^2/M$ of both net-proton and net-charge have been used to estimate the chemical freeze-out parameters CFO1 / CFO2. On the other hand, the parameters CFO3 / CFO4 have been estimated using the experimental data of $\sigma^2/M$ of net-charge and $S\sigma$ of both net-proton and net-charge. For CFO1 and CFO3, the HRG model has been used, whereas for CFO2 and CFO4, EVHRG has been used. The chemical freeze-out parameters evaluated using lower order cumulants (CFO1 / CFO2) start deviating from those obtained using higher order cumulants (CFO3 / CFO4) around $\sqrt{s_{NN}} = 19.6$ GeV as one goes from $\sqrt{s_{NN}} = 200$ GeV towards lower energies. Among other possibilities, transition of the system close to the critical region may contribute to the requirement of multiple parametrisations in HRG for various orders of cumulants. At lower energies one needs to take baryon stopping into account as well. In this low-energy region, HRG and EVHRG also start deviating from each other, due to the effect of the repulsive interaction in EVHRG.
We observe that the effects of centrality and beam energy in $\mu_B/T (\sqrt{s_{NN}}, N_{part})$ can be separated. This separation leads to a scaling of $(\mu_B/T)/(\mu_B/T)_{central}$ with $N_{part}/(N_{part})_{central}$. Though the scaling is very good for CFO1 / CFO2, a deviation is observed for CFO3 / CFO4, especially in the region $\sqrt{s_{NN}} \le 19.6$ GeV. The study of such scaling behaviour will be useful in the search for the CEP, which is the main goal of the ongoing STAR experiment and the future CBM experiment. Experimental data of the lowest order susceptibility ratio ($i.e.$ $\sigma^2/M$) of net-proton and net-charge can be reproduced quite well using CFO1 and CFO2 in the HRG / EVHRG model. The experimental data of $S\sigma$ of net-proton match within the error bars for $\sqrt{s_{NN}} > 27$ GeV for these two sets of parameters. However, they are overestimated at lower beam energies ($\sqrt{s_{NN}}\le 27$ GeV). On the other hand, $S\sigma$ of net-charge calculated in the HRG / EVHRG model using CFO1 / CFO2 is close to or within the error bars for $\sqrt{s_{NN}} \ge 27$ GeV, and within the error bars for the more central data at lower $\sqrt{s_{NN}}$. For all $\sqrt{s_{NN}}$, $\kappa\sigma^2$ of net-proton calculated in the HRG / EVHRG model using CFO1 / CFO2 is within the error bars or very close to the experimental data. The experimental data of $\kappa\sigma^2$ for net-charge match within the error bars with the HRG / EVHRG model results calculated using CFO1 / CFO2 as we move towards central collisions for $\sqrt{s_{NN}} \ge 11.5$ GeV, but the model underestimates the data for peripheral collisions. This points to an incomplete equilibrium distribution of the particles observed in the data. On the other hand, the experimental data of $S\sigma$ of net-proton can be described well at all $\sqrt{s_{NN}}$ in the HRG / EVHRG model using CFO3 / CFO4. In addition, both parameter sets give a satisfactory description of $\sigma^2/M$ of net-proton and net-charge.
However, the $\kappa\sigma^2$ for both net-proton and net-charge calculated in the HRG / EVHRG model using CFO3 / CFO4 are similar to those calculated using CFO1 / CFO2. Here again we find signatures of incomplete equilibration of the system formed in heavy ion collision experiments. Thus we conclude that the freeze-out parameters which can describe the lower order cumulant ratios very well at all energies and centralities cannot describe the higher order cumulant ratios equally well. It is difficult to pin-point all the reasons for such disagreement unless all the dynamical effects are accounted for. Looking at the systematic deviation of the thermodynamic parameters, we could only present a plausibility argument for the system passing near a critical region. Precise experimental data of $\kappa\sigma^2$, along with a few more $\sqrt{s_{NN}}$ points around $19.6$ GeV, will be extremely useful for further investigation in this direction. \section*{Acknowledgements} This work is funded by CSIR, UGC, and DST of the Government of India. We acknowledge the STAR collaboration for the experimental data. SS thanks Sabita Das for providing the chemical freeze-out parameters of Ref. \cite{Chatterjee:2015fua}. We would like to thank Bedangadas Mohanty for useful discussions.
\section*{} \markboth{Contents}{} \vspace*{-1cm} \tableofcontents \markboth{Contents}{} \vfill \noindent Photo credits:\\ \noindent Cover: NASA\\ \noindent Backpage: \begin{itemize} \item Collection of early drawings of Saturn by various observers, from Huyghens' \textit{Systema Saturnium} (1659). \item Guido Bonatti, from his \textit{De Astronomia Tractatus X Universum quod ad iudiciariam rationem nativitatum} (Basel, 1550) \end{itemize} \clearpage \section*{Foreword} \label{sec:foreword} \addcontentsline{toc}{section}{\nameref{sec:foreword}} \markboth{Foreword}{} The present notes constitute a somewhat corrected and mildly extended --- through the addition of an appendix --- version of a set of lecture notes devoted to planetary ring dynamics and initially published in the \textit{Goutelas} series of volumes \citep{L92}. They were designed to form an overall but rather complete pedagogical introduction to their subject matter, i.e.\ to cover the material needed for a detailed understanding of the theoretical results published by Borderies, Goldreich and Tremaine --- but also Shu and coworkers to a smaller extent --- in a variety of research papers in the late 70s and in the 80s. However, some sections are more detailed than others. These notes have never been widely distributed, and are almost impossible to find now. But although they are more than 20 years old, I believe or at least hope they might still be useful to a number of people in the field. The upcoming book (\textit{Planetary Ring Systems}), edited by Matt Tiscareno and Carl Murray and to be published later this year by Cambridge University Press, has been an incentive to post them on the Astrophysics Preprint Archive. In particular, these notes are occasionally referenced in my own chapter of this book (\textit{Theory of Narrow Rings and Sharp Edges}). If you wish to refer to these notes, please quote the initial publication above along with their \textit{arXiv} number for accessibility.
Needless to say, these notes ignore recent advances in the understanding of ring macro- and microphysics, such as the existence of propellers, the more recent focus on viscous overstabilities, the existence and potential r\^ole of self-gravitational wakes, the now large albeit indirect body of evidence against the DEBs model of ring particles which was fashionable back in the 1980s, and of course the wealth of new observational constraints and theoretical challenges brought to light by the \textit{Cassini} mission. A number of these shortcomings are addressed in my chapter of the new ``ring book''. The interested reader will find in there both a complete although less detailed exposition than the present one of the streamline formalism and a thorough discussion of all the physics that has not found its way in the present notes, most notably the physics of narrow rings and sharp edges and associated modes, as indicated by the chapter's title. The literature review will be complete up to the time of publication. If permitted by the copyright agreement, this chapter will also be posted on \textit{arXiv} as part 2 of these notes. The two main topics covered in detail in the present notes and that have not been revisited in my ring book chapter are the ring microphysics theory and the theory of linear and nonlinear density waves. \bigskip Last point: please feel free to send me any feedback on these notes, if only to point out the unavoidable remaining mistakes. \vskip 1truecm \hfill \textit{Pierre-Yves Longaretti, June 1, 2016} \clearpage \newpage\null\thispagestyle{empty}\newpage \mainmatter \setcounter{figure}{0} \section{Introduction} \label{sec:intro} Ring systems are now known to exist around the four giant planets of the Solar System. However, they differ widely from one another. Jupiter's ring is very tenuous, and its constituting particles are permanently destroyed and created. 
Saturn's rings constitute the most extensive system: from the inside to the outside, one successively finds the D, C, B rings, the Cassini division, the A, F, G and E rings; rings D, F, G and E are rather tenuous, whereas the others are much more massive and more opaque. Uranus is surrounded by nine main rings of high optical depth, separated by dust bands. Finally, Neptune has a very peculiar system of very low optical depth rings, with some azimuthal structures known as ``ring arcs". Rings are made up of particles of various sizes and compositions, and this property allows us to divide them roughly into two broad classes: the ``major" rings contain mostly ``big" particles (typically meter-sized); the ``ethereal" rings are made of much smaller particles (typically microns). Saturn's main rings (A to C) and the nine main rings of Uranus ($\epsilon$, $\alpha$, $\beta$, $\gamma$, $\delta$, $\eta$, 4, 5, 6) belong to the first category. The other rings of Saturn and Uranus, as well as the rings of Jupiter and Neptune, belong to the second. The two classes of rings thus defined exhibit very different dynamical behaviors, because light particles are subject to significant electromagnetic forces, which is not the case for meter-sized particles. In this lecture, we will restrict ourselves to the dynamics of the major ring systems. Of course, the distinction between the two types of rings is not absolute: ring particles have a broad size distribution, and micron-sized particles can also be found in the major rings; for example, they play an essential r\^ole in the well-known ``spokes" phenomenon, mainly seen in Saturn's B ring. However, they dominate neither the mass nor the optical depth of these rings, and can therefore be ignored in discussions of their global dynamics. The aim of these notes is to develop from first principles a general approach to ring dynamical phenomena, usually referred to as the ``streamline formalism".
This formalism was introduced by \cite{BGT82} and used by these authors in their subsequent papers (e.g. \citealt{BGT83a,BGT83b,BGT85} and references therein). It is a fluid approach to ring dynamics, based on the first three moments of the Boltzmann equation, and making use of perturbation techniques developed in celestial mechanics; in this sense, this is a hybrid (but powerful) method. Quite a number of fundamental theoretical results were obtained in this framework, but its main assumptions, limitations, methods and domains of application are not easily extracted from the existing literature. These notes are designed to remedy these shortcomings, by providing the reader with all the building blocks of the formalism. Therefore, the writing is designed to be as self-contained as possible, and the amount of material covered is rather large. The reference list is purposely biased and limited, in the spirit of an introduction rather than a review. It is useful for the reader to have some practical knowledge of celestial mechanics, and some notions of fluid dynamics and physical kinetics. Some background knowledge on rings is not strictly necessary, but certainly helpful. In the next section, basic notions and orders of magnitude are presented. Section 3 introduces the Boltzmann equation and discusses the fundamental features of its application to ring systems. It is argued in these two sections that ring systems can be described with the Boltzmann moment equations, up to second order; that the velocity field and the corresponding variations of the ring density are mainly imposed by the planet; and that the evolution of these two quantities takes place on a time-scale much longer than the orbital time-scale, whereas the pressure tensor (measuring the random motions) reaches steady-state on a time-scale comparable to the orbital time-scale.
Therefore, ring dynamical problems can be solved in several steps: first, the general form of the ring density and velocity field must be found. Then the ring pressure tensor corresponding to this form of the velocity field and ring density can be computed, using a (semi)-Lagrangian description. Finally, the long time-scale evolution of the velocity field and ring density can be found, by using standard perturbation techniques inspired by Celestial Mechanics, as all the forces involved in the problem (the ring self-gravity, internal stress and satellite perturbations) are small compared to the gravitational attraction of the planet. This program is detailed in sections 4 through 7. The basic concepts of ring kinematics and dynamics are elaborated in section 4. In particular, a simple and general parametrization of ring shapes, motions (streamlines) and densities is introduced, which constitutes the root of the streamline formalism. Solutions for the pressure tensor are given in Section 5, for both dilute and dense systems, and some important general features of its behavior in rings are outlined. Section 6 presents the basic perturbation equations, borrowed from Celestial Mechanics, and provides a general discussion of ring mass, energy and angular momentum budgets. Some applications are described in Section 7, namely the now standard (but questioned) self-gravity model of narrow elliptic rings and the theory of density waves at Lindblad resonances. Section 8 presents a critical assessment of the scope and limitations of the formalism, as well as a quick overview of important issues in ring dynamics, and concludes these notes. \section{Basic concepts and orders of magnitude} \label{sec:basic} The dynamics of the major rings, as we have defined them, is dominated by the gravitational field of their parent planet.
The motion of individual ring particles is perturbed from an ellipse by interparticle collisions, the gravitational field of the other ring particles -- the disk self-gravity -- and the planet's satellites. Major rings all share a number of striking characteristics: they are flat and confined to the equatorial plane of their planet, they exhibit mostly axisymmetric features and the existing non-axisymmetric features show simple and regular patterns, they have complicated radial structures\dots In this section, we wish to develop some kind of heuristic understanding of the physical processes which generate these remarkable features. The reason for the axisymmetry is easily understood. Any non-axisymmetric feature is quickly erased by the Keplerian shear, supplemented by a little diffusion. The existing non-axisymmetric features -- like density-waves, ring arcs, or the global eccentricity of some Uranian rings and some Saturn ringlets -- must have a dynamical origin (which is the reason for their regularity). \subsection{Angular momentum axis} Rings form thin circular disks because collisions dissipate energy while they conserve angular momentum. For an oblate planet, only the component of angular momentum parallel to the planet rotation axis is conserved, and the rings lie in the equatorial plane. A simple (although somewhat unrealistic) Celestial Mechanics argument can be made to illustrate this point. Let us consider a situation in which the ring particles are initially confined to a plane different from the equatorial plane of the planet, on non-interacting, inclined circular orbits, as in Figure~1. \begin{figure}[h] \centering \includegraphics[width=0.7\linewidth]{./Figures/Fig1.pdf} \caption{\small{Sketch of an hypothetical ring initially inclined with respect to the equatorial plane of an oblate planet.}} \label{fig:Fig1} \end{figure} In this situation the total angular momentum of the ring is not aligned with the axis of symmetry of the planet.
Due to the planet-induced differential precession of the lines of nodes, the system quickly falls into the stable situation represented on Figure~2. The total angular momentum of the disk has changed, and is now aligned with the symmetry axis of the planet. Because the system is isolated, some angular momentum has been transferred to the planet, but the orientation of its spin axis has basically not been affected, because it is much more massive than the rings. It is interesting to give an order of magnitude estimate of the time-scale of this process, by considering the time required for two particle orbits of semimajor axes $a$, separated by the typical ring width $d$, to lose the correlation of their lines of nodes\footnote{In narrow rings, this process can be prevented by the action of the ring self-gravity. However, the ring viscous stress associated with the Keplerian shear probably leads to the damping of the ring inclinations, as it does for the ring eccentricities, at least if no viscous overstability occurs.}. This differential precession time-scale $t_{pr}$ is then given by: \begin{equation} t_{pr}\sim\left[J_2 \Omega\left(\frac{d}{a}\right)\right]^{-1},\label{prec} \end{equation} where $\Omega$ is the angular frequency (mean motion) and $J_2$ the oblateness of the planet. It is interesting to have in mind typical orders of magnitude for the various quantities entering this formula: $J_2$ is of order $10^{-2}$ to $10^{-3}$, $d\sim a$ (in Saturn's rings), $a \sim$ $10^5$ km, and particle rotation periods around the planet are typically of the order of a few hours. For Saturn's rings, e.g., one obtains $t_{pr} \sim 10$ to $100$ days; this figure reaches $\sim 10^4$ to $10^5$ years for a 1~km wide ringlet.
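To make these figures concrete, here is the arithmetic spelled out (the input values are merely the rough orders of magnitude quoted above, not precise data): with $J_2\sim 10^{-2}$, an orbital period of $\sim 10$~h (i.e.\ $\Omega\sim 2\times 10^{-4}$~s$^{-1}$) and $d\sim a$, Eq.~\eqref{prec} gives
\[
t_{pr}\sim\left(10^{-2}\times 2\times 10^{-4}\,{\rm s^{-1}}\right)^{-1}\sim 5\times 10^{5}\ {\rm s},
\]
i.e.\ about a week, while $J_2\sim 10^{-3}$ leads to $\sim 60$ days; these two values bracket the range quoted above for Saturn's rings. The much longer time-scale quoted for a narrow ringlet follows from the $(d/a)^{-1}$ scaling of Eq.~\eqref{prec}.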
\begin{figure}[h] \centering \includegraphics[width=0.7\linewidth]{./Figures/Fig2.pdf} \caption{Situation reached from the starting point of Figure~1 under the action of differential precession.} \label{fig:Fig2} \end{figure} \subsection{Collisional quasi-equilibrium} Assuming that an hypothetical ring has reached the state just described, there is no net vertical motion: the vertical velocity averaged over a large number of ring particles is zero. The thickness of the ring measures the velocity dispersion of the ring particles\footnote{The velocity dispersion is in principle anisotropic. This anisotropy is ignored here.}. In reality, collisions affect this velocity dispersion and control the ring thickness, in various ways: \begin{enumerate} \item Direct collisions are inelastic. Consider two particles on a collision orbit with relative velocity $v_r$. During the collision, the relative velocity is reduced and one has \begin{equation} v_r'\sim \epsilon v_r,\label{vbounce} \end{equation} where $v_r'$ is the post-collision relative velocity, and $\epsilon$ is the so-called ``{\it coefficient of restitution}\footnote{In fact one should define normal and tangential coefficients of restitution. The tangential coefficient is often ignored in the specialized literature, although it provides an important source of coupling between the translation and the spin degrees of freedom. However, such distinctions are not important for our order of magnitude estimates.}"; $\epsilon$ is a function of the relative velocity before encounter. Its functional form depends strongly on the bulk and surface properties of the colliding materials, and is unfortunately not very well constrained for planetary ring particles; however it is usually a decreasing function of $v_r$, so that the energy lost is larger for larger relative impact velocities, as one would intuitively expect. 
The important point here is of course that the velocity dispersion of the particles (which is the source of their relative collision velocities) is damped due to the inelasticity of the collisions. \item On the other hand collisions are as usual a source of random motions: they scatter the particles. Furthermore in a sheared medium like planetary rings, collisions can transfer energy from the mean Keplerian motion to the random motions, thus increasing the velocity dispersion. This process appears macroscopically as the viscosity of the rings in the hydrodynamical approximation. \item As orbital energy is transferred into random motions, some of the ring material must fall on the planet. However, as the total angular momentum of the ring is conserved, some other fraction of the material must go away from the planet. In fact, collisions do actually transfer the angular momentum of the orbital motion from the inside to the outside, while most of the mass is transferred inwards, and the ring spreads. Expressions for the rate of spreading will be given in Eq.~\eqref{spread}, and for the rate of angular momentum transfer in sections 5 and 6. \end{enumerate} The net result of all these processes is that the velocity dispersion attains an equilibrium when the rate of transfer of energy from the orbital motion equals the rate of dissipation of energy during collisions. The equilibrium can be obtained in particular through the dependence of the coefficients of restitution on the relative collision velocities (but see also section 2.2.4). In the process, rings spread. Thus, very sharp or narrow features must be confined by some sort of dynamical agent. This is why the discovery of the complex radial structure of Saturn's and Uranus' rings was so surprising. We are going to quantify these processes somewhat, but some important notions must be introduced first.
\subsubsection{Optical depth, collision frequency, and ring thickness} Under the simplistic assumption that the ring particles all have the same radius $r$, the optical depth $\tau$ is related to the particle number density $n$, the ring thickness $H$ and the particle cross section $S\sim \pi r^2$ by: \begin{equation} \tau\sim n H S.\label{tau} \end{equation} Note that the particle number density $n$ is related to the ring surface density $\sigma$ by $nH\sim\sigma/m$ where $m$ is the mass of the particles. One can also obtain an expression for the collision frequency as follows. The number of collisions $N_c$ undergone by a given ring particle during a short time $\delta t$ is $N_c\sim nSc\delta t$ where $c$ is the velocity dispersion of the ring particles. Thus, the collision frequency $\omega_c$ (which is the inverse of the time $\delta t$ during which, on average, a particle experiences only one collision), is given by: \begin{equation} \omega_c\sim n S c.\label{collfreq} \end{equation} On the other hand, Figure~2 above shows that for small inclinations, the thickness $H$ of the ring is of order $a i$ where $i$ is the typical inclination of ring particles at distance $a$; furthermore, the vertical velocity of a ring particle on such an inclined orbit is typically of the order of the orbital velocity $\Omega a$ times the inclination $i$. Therefore, as the vertical velocity is of the order of the random velocity, one obtains: \begin{equation} c\sim \Omega a i\sim \Omega H.\label{veldisp} \end{equation} By combining Eqs.~\eqref{tau}, \eqref{collfreq} and \eqref{veldisp}, one obtains the following relation between the collision frequency and the optical depth\footnote{A more quantitative estimate is given in Section 5.
It is also discussed there that the ring vertical self-gravity increases substantially the effective vertical oscillation frequency, leading to significantly higher collision frequencies in main ring systems.}: \begin{equation} \omega_c\sim \Omega\tau.\label{collfreq2} \end{equation} It is again useful to have in mind some orders of magnitude for the various quantities we have just introduced. The velocity dispersion is best estimated from the analysis of the damping of density and bending waves in Saturn's rings\footnote{In the hydrodynamic approximation, the damping is controlled by the ring viscosity, which can be related to the velocity dispersion; see below.}; this yields $c\sim$ a few mm/s. This corresponds to a ring thickness of a few tens of meters at most, which is $10^7$ times smaller than the typical radius of the rings; rings are {\it extremely} flat, and, correlatively, extremely cold\footnote{The ring Reynolds number is enormous, but rings cannot be turbulent, because their scale of granularity is comparable to the vertical scale; this is a major difference with other astrophysical disks.}. As the major rings have optical depths which are typically of order unity, the collision frequency is comparable with the orbital frequency (the orbital period is about a few hours). \subsubsection{Ring viscosity} The concept of viscosity is strictly speaking dependent on the existence of a local stress-strain relation. It turns out that in rings (and more generally for all fluids which cannot be described in the hydrodynamic limit), no such relation exists in general, but one can be found in the special (but important) case of axisymmetric flows. However, the concept of viscosity is convenient to discuss dissipation phenomena, and a heuristic derivation of the form of the viscous coefficient, due to \cite{GT78a}, is provided here.
Microscopically, the kinematic viscosity coefficient can be expressed in terms of the collision frequency and the particles' mean free path $l$ as [see, e.g., \citet{R65}]: \begin{equation} \nu \sim \omega_c l^2.\label{vis} \end{equation} If the optical depth $\tau$ is larger than unity, there are several collisions per rotation period. The particles follow more or less rectilinear trajectories between two collisions, and the mean free path is simply given by $l\sim c/\omega_c\sim c/\Omega\tau$. On the other hand, if the optical depth is smaller than unity, collisions occur only once in several rotation periods, and the particle paths are curved between two collisions. Note also that in this case, the mean free path and the transport coefficients should become anisotropic and one should define viscosity coefficients for all directions. However, such complications are again ignored here. Then, according to what we just said, the mean free path is of the order of the radial excursion of a particle on its elliptic motion, i.e., the ring thickness: $l\sim H\sim c/\Omega$ [see Eq.~\eqref{veldisp}]. Both regimes can be included in the same simple interpolation formula, which reads: \begin{equation} l\sim \frac{c}{\Omega(1+\tau^2)^{1/2}}.\label{freepath} \end{equation} Replacing this expression in Eq.~\eqref{vis} and making use of Eq.~\eqref{collfreq2} yields: \begin{equation} \nu\sim \frac{c^2\tau}{\Omega(1+\tau^2)}.\label{vis2} \end{equation} Note that the same expression holds for the transverse viscosity of a plasma in a magnetic field, provided that the mean motion $\Omega$ is replaced by the plasma gyrofrequency. \subsubsection{Velocity dispersion quasi-equilibrium} We can now derive an approximate equation of evolution of the ring velocity dispersion. The internal energy of the ring (kinetic energy of random motions) per unit mass is $c^2$.
The rate of viscous transfer of energy per unit mass from the orbital motion to the random motions is $\sim\nu(ad\Omega/da)^2\sim\nu \Omega^2$ (for a simple justification, see \citealt{LL87}, p.~50, Eq.~(16.3), with the Keplerian circular velocity field, which has zero divergence). The rate of change of $c^2$ due to the energy loss during collisions is comparable to the change of the squares of the relative velocities of the colliding particles, i.e., to $v_r^2-v_r'^2\sim (1-\epsilon^2)v_r^2\sim (1-\epsilon^2)c^2$ [see Eq.~\eqref{vbounce}]. The typical collision time is of order $1/\omega_c$, so that the rate of damping of $c^2$ is of order $\omega_c(1-\epsilon^2)c^2$. Putting these two results together and making use of Eq.~\eqref{collfreq2} leads to the desired equation of evolution of the velocity dispersion: \begin{equation} \frac{dc^2}{dt} \sim -\alpha \Omega\tau c^2(1-\epsilon^2)+\beta\frac{\Omega c^2\tau}{1+\tau^2},\label{veldispeq} \end{equation} where coefficients of order unity, $\alpha$ and $\beta$, have been introduced. This equation of evolution has two important consequences: \begin{enumerate} \item There is only one time scale involved: the orbital time-scale. Thus, the velocity dispersion reaches equilibrium on a time-scale comparable to the particle orbital period, i.e. a few hours. This is exceedingly short. When the velocity dispersion equilibrium is reached, the ring equilibrium thickness is also reached: thus the flattening of the rings takes place in a few hours. This is much shorter than the spreading time-scale $t_{sp}$, which is basically the time for a particle to random-walk across the ring: \begin{equation} t_{sp}\sim (R/l)^2\omega_c^{-1},\label{spread} \end{equation} where $R$ is the ring radial extent. 
\item At equilibrium ($dc^2/dt=0$), the equation of evolution yields a relation between the coefficient of restitution $\epsilon$ and the optical depth $\tau$: \begin{equation} \epsilon^2\sim 1-\frac{\gamma}{1+\tau^2},\label{coefrest} \end{equation} where $\gamma$ is another coefficient of order unity ($\gamma\sim 0.5$; see section 5). Note that $\epsilon$ is an increasing function of $\tau$, the minimum of which is obtained for $\tau=0$; this minimum is rather high (0.6 or 0.7) and is even higher when coupling with the spin degrees of freedom is considered (\citealt{S84,AT86}). This relation is known as the $\epsilon-\tau$ relation, and was derived by \cite{GT78a}, both in the heuristic manner presented here and in a more formal way by solving Boltzmann's equation. At first glance, this relation does not seem to involve the ring velocity dispersion. Remember however that the coefficient of restitution is a function of the particles' velocity: $\epsilon=\epsilon(c)$. This determines in principle the magnitude of the velocity dispersion once the optical depth and the functional form of $\epsilon$ are known (which, let us recall, is not the case). Notice finally that the equilibrium is stable if $\epsilon$ is a decreasing function of the impact velocity; this is the case for all known materials. This result can be understood as follows. The rate of transfer of energy from the orbital energy into random motions is proportional to $c^2$; the rate of dissipation of random energy is proportional to $(1-\epsilon^2)c^2$. If $c^2$ is, e.g., increased from its equilibrium value, the collisions become more inelastic, $\epsilon$ is decreased, and the rate of loss of random motions wins over the injection from the shear: the disk is returned to equilibrium. A similar reasoning holds if $c^2$ is decreased rather than increased. \end{enumerate} \subsubsection{Limitations and extensions of the previous model} The preceding analysis ignores a number of complications.
We are not going to present them in as much detail, but just outline the problems and their solutions when they are known, and refer the reader to the specialized literature. First, we have ignored the possibility of having perturbed flows, i.e. flows for which the mean velocity is not circular. Such flows occur for example in the vicinity of resonances with the planet satellites. This type of flow will be fully discussed from section 4 onwards. We have implicitly assumed that the particles have the same size. It appears that the equilibrium velocity dispersion is roughly independent of the particle size, provided that the coefficient of restitution $\epsilon$ does not depend too strongly on $v$ (see e.g. \citealt{SL84}). Note also that the upper bounds to the ring thickness derived from the Voyager data argue strongly in favor of an equilibrium velocity dispersion independent of the particle size (equipartition would lead to a ring thickness of the order of several tens of kilometers, whereas the Voyager data imply one or two hundred meters at most). The single particle size model however poses one difficulty: the effective particle size, optical depth, collision frequency... are not unambiguously defined (this issue will be addressed in section 5). Note also that in most models, the particles are assumed spherical. Gravitational binary encounters have been neglected. They also act as an effective source of particle scattering. A heuristic discussion of the effect of these encounters can be found in \cite{CDBH79} (see also \citealt{SS85}). The main effect is on small particles, which are scattered out of the ring plane by large particles, with a velocity dispersion a few times larger than the velocity dispersion of the large particles. Note that because of the thin disk geometry, the main contribution is not due to the encounters with large impact parameters, contrary to the standard situation in galactic dynamics.
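As an aside, the importance of gravitational stirring by the largest particles can be gauged with a rough figure (the internal density adopted here, typical of porous icy material, is an assumption): the surface escape velocity of a spherical particle of radius $r$ and internal density $\rho_p\sim 900$~kg\,m$^{-3}$ is
\[
v_{esc}=\left({8\pi G\rho_p\over 3}\right)^{1/2} r\sim 0.7\ {\rm mm\,s^{-1}}\,\left({r\over 1\ {\rm m}}\right),
\]
so that for meter-sized and larger particles, $v_{esc}$ becomes comparable to or larger than the velocity dispersion of a few mm/s, and gravitational deflections during close encounters can no longer be neglected.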
The ring particles are usually assumed indestructible. This is a crude assumption, especially as accretion and erosion processes are likely to be very important in planetary ring dynamics. The problem of the collisional evolution of the particle size distribution has been addressed by a few authors (see \citealt{WCDG84,L89}), but no self-consistent model taking into account both the evolution of the size distribution and the velocity dispersion is available. The coupling between the velocity dispersion and the spin degrees of freedom has been neglected. A heuristic discussion of these effects can be found in \citet{S84}. Basically, a rough equipartition of energy is established between the velocity dispersion and the energy of rotation. Finally, one can wonder what happens if the ring material does not allow the coefficient of restitution $\epsilon$ to be larger than the minimum value required for the equilibrium to exist. This is quite likely to be the case if the particles are regolith-covered, or are the loose aggregates called {\it Dynamical Ephemeral Bodies}, as has been suggested in the past few years (\citealt{WCDG84,L89}). The velocity dispersion then decreases, and the ring becomes more compact until a different regime is reached, in which the particles are no longer at mutual distances much larger than their sizes. In this case, the collisional processes are dramatically altered. For example, the viscous transfer of momentum occurs not because the particles carry the momentum over the mean free path, but because it is carried across the particle itself; this gives rise to a minimum viscosity \begin{equation} \nu \sim \Omega d^2,\label{numin} \end{equation} where $d$ is the typical particle size \citep{B77,S84,AT86}. The properties of particle flows in such conditions have been investigated by \cite{BGT85}.
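For orientation, the rough figures used throughout this section give a feeling for how small this minimum viscosity is: with $d\sim 1$~m and $\Omega\sim 2\times 10^{-4}$~s$^{-1}$, Eq.~\eqref{numin} yields
\[
\nu\sim \Omega d^2\sim 2\times 10^{-4}\ {\rm m^2\,s^{-1}},
\]
i.e.\ a few cm$^2$/s, whereas the dilute estimate of Eq.~\eqref{vis2} with $c\sim 5$~mm/s and $\tau\sim 1$ gives $\nu\sim 6\times 10^{-2}$~m$^2$\,s$^{-1}$. These numbers are only indicative, but they show that the compact regime transports momentum far less efficiently.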
The reader will notice that none of these complications alters the basic physical phenomena described earlier: the equilibrium velocity dispersion is established on a time-scale comparable to the orbital time-scale; in the process, energy is permanently drawn from the orbital motion into random motions, and then dissipated as heat; as orbital energy is dissipated while angular momentum is conserved, rings spread. The only important issue is to know whether the ring particle properties are compatible with a ``thick" equilibrium in which the viscosity is given by Eq.~\eqref{vis2}, or lead to a ``thin" equilibrium for which it is reduced to its minimum value given by Eq.~\eqref{numin}; this last possibility seems most likely in high optical depth ring systems. \subsection{Angular momentum transport and the origin of the rings' radial structures} Angular momentum transport is a key concept for the understanding of gross and fine features of planetary ring dynamics. There are two major causes of angular momentum transport in planetary rings: the ring viscous stress and the gravitational perturbations, either due to the ring itself or to some external agents, e.g. satellites. To the lowest order of approximation, as we have just argued, the ring viscous stress transports the angular momentum from the inside to the outside\footnote{This is true only of unperturbed flows. In perturbed flows, the viscous flow of angular momentum can be reversed; see section 5.}. However, some other interesting phenomena can occur. The viscous luminosity of angular momentum (the rate of transfer of angular momentum through a given radius of the ring due to the viscous stress) is proportional to $\nu\sigma$ (cf.\ section 5), so that the viscous torque on a ringlet is proportional to $d\nu\sigma/dr$.
The reader can check that if $\partial \nu\sigma/\partial \sigma <0$, fluctuations in the ring density are amplified, because the viscous torques tend to push material from regions of low optical depths to regions of high optical depths. This is the case for example for viscosities of the form of Eq.~\eqref{vis2}, if one uses $\sigma\propto\tau$. Such a phenomenon is at the origin of the viscous and thermal instabilities which have been invoked to explain the complex radial structure of planetary rings (for a review, see \citealt{SLB84}). However, it appears that such a possibility does not occur in dense rings (see e.g. \citealt{AT86,WT88}). On the other hand, the gravitational interaction between rings and satellites often results in the creation of density waves, or at least in the creation of some kind of wake. This generates a net angular momentum exchange between the disk and the satellite, the angular momentum flowing again from the inside to the outside (\citealt{GT80}; see also sections 6 and 7): if the satellite lies inside the ring, it gives angular momentum to the ring; this is the case for example with the inner ``shepherd" satellite of the $\epsilon$ ring; the reverse is true in the opposite case. Thus a satellite and a ring repel one another. This phenomenon was invoked as a possible confining process to explain the outer edges of Saturn's A and B rings, and also to explain the confinement of narrow rings \citep{GT79a,BGT82, BGT89}. One also sees that a small satellite situated in a ring ``repels" the ring material around it, by virtue of these angular momentum exchanges. The existence of a collection of kilometer-sized particles randomly distributed in the rings was therefore envisioned as a possible cause of their radial structure. Let us stress that these mechanisms are not the only possible source of structure in the rings\footnote{Density and bending waves are also well-identified sources of structure in Saturn's A and B rings.}.
In the analysis of dense rings performed by \cite{AT86}, these authors argued that no viscous instability of the type described here actually occurred, but instead that some type of ``phase transition" between zero shear (solid) and high shear (fluid) regimes was possible, and suggested that this could also account for the ring complex radial structure. However, angular momentum exchanges (and energy exchanges) are of fundamental importance, because they regulate the long term dynamics of the rings. \section{The Boltzmann equation and its moments} Rings are composed of countless particles. This suggests various types of approach to study their dynamics. For example, one can treat them as a collection of independent test particles. This yields some crude estimates of some basic dynamical properties, but as a given particle experiences many collisions during its life-time, it cannot give a realistic description of ring evolution. The statistical character of the effects of collisions on ring particle dynamics is in fact appropriately described by the Boltzmann equation, which applies to the evolution of the particle distribution function $f({\bf r, v}, m, t)$. By definition, $fd^3{\bf r}d^3{\bf v}dm$ is the number of particles of mass $m$ at position ${\bf r}$, with velocity ${\bf v}$ in the elementary seven-dimensional volume of phase space $d^3{\bf r}d^3{\bf v}dm$. One should in principle consider that the distribution function depends on the particle spin as well. This would however unnecessarily complicate the analysis without affecting the basic principles that we wish to present. Therefore, the coupling with the spin degrees of freedom is ignored from now on. For the same reason, the parameters describing the particle shapes and surfaces (which are important for the collisional dynamics) are also ignored.
The Boltzmann equation reads: \begin{plain} $${\partial f\over\partial t}+{\bf v}\cdot{\partial f\over\partial {\bf r}} -{\bf\nabla}\phi\cdot{\partial f\over\partial {\bf v}}=\left({\partial f\over \partial t}\right)_c,\eqno(3.1)$$ \end{plain} where the right-hand side represents the effect of the particle collisions on the evolution of the distribution function $f$. The form of this collision term does not need to be further specified yet. The potential $\phi$ includes all the dynamical agents acting on the particles except the collisional forces: the planet potential $\phi_p$, the potential arising from the disk self-gravity $\phi_d$, and the satellite perturbations $\phi_s$. The Boltzmann equation without collisions expresses the fact that the six-dimensional flow of particles of a given mass in phase space is incompressible (the flow is confined to constant mass hypersurfaces). It has the form of a continuity equation. The collision term acts as a local source and sink term in phase space. \subsection{The moments of the Boltzmann equation} The Boltzmann equation, involving six-dimensional derivatives, is in general rather difficult to solve. This is why one usually prefers to work with its moments. It is necessary to define first various local (in physical space) average quantities: the mean density of particles $N$, the mean velocity ${\bf u}$, the pressure tensor\footnote{The pressure tensor is the opposite of the stress tensor, and the two expressions may indifferently be used.} $p_{ij}$. These quantities are defined as follows: \begin{plain} $$N=\int fd^3{\bf v},\eqno(3.2)$$ $${\bf u}={1\over N}\int {\bf v} fd^3{\bf v},\eqno(3.3)$$ $$p_{ij}=\int (v_i-u_i)(v_j-u_j)fd^3{\bf v},\eqno(3.4)$$ \end{plain} \noindent where the subscripts $i,j$ refer to a cartesian inertial reference frame. The integrals are performed over all velocity space. One sees that these mean quantities are indeed the velocity moments of the distribution function.
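As a simple consistency check on these definitions (an illustration only: ring distribution functions need be neither Maxwellian nor isotropic), take, for particles of a given mass,
\[
f=N\left(2\pi c^2\right)^{-3/2}\exp\left[-{({\bf v}-{\bf u})^2\over 2c^2}\right].
\]
Eqs.~(3.2) and (3.3) then return $N$ and ${\bf u}$ by construction, while Eq.~(3.4) gives $p_{ij}=Nc^2\delta_{ij}$, an isotropic pressure; the parameter $c$ thus plays the r\^ole of a one-dimensional velocity dispersion.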
Note that these quantities are defined per unit particle mass, except for the mean velocity; for example, to retrieve the usual number density of particles, one needs to integrate $N$ over all masses. Also, $p_{ij}\sim Nc^2$ where $c$ is the root-mean-square one-dimensional velocity dispersion, defined by $3Nc^2\equiv p_{xx}+p_{yy}+p_{zz}$. By multiplying the Boltzmann equation by $1, v_i, v_iv_j$ and integrating over the velocity space, one obtains respectively, after some manipulations: \begin{plain} $${\partial N\over\partial t}+{\partial\over\partial x_i}(Nu_i)= \left(\partial N\over\partial t\right)_c,\eqno(3.5)$$ $$N\left({\partial u_i\over\partial t}+ u_j{\partial u_i\over\partial x_j} \right) =-N{\partial \phi\over\partial x_i} -{\partial p_{ij}\over \partial x_j} + \left(\partial N u_i\over\partial t\right)_c,\eqno(3.6)$$ $${\partial p_{ij}\over\partial t}+p_{ik}{\partial u_j\over\partial x_k} +p_{jk}{\partial u_i\over\partial x_k} +{\partial\over\partial x_k} (p_{ij}u_k)+{\partial q_{ijk}\over\partial x_k}=\left(\partial p_{ij}\over \partial t\right)_c,\eqno(3.7)$$ \end{plain} \noindent where $q_{ijk}$ is the tensor of the third order velocity moments of the distribution function. Note that the equation of evolution of the moments of any order depends on the moments of the next order. Thus, one needs some external information in order to close the hierarchy of moment equations. Fortunately, the special characteristics of ring systems provide us with quite a number of simplifying assumptions which make the problem tractable. \subsection{The moment equations in ring systems} Rings constitute cold media: the mean velocity ${\bf u}$, of the order of the orbital velocity (a few km/s), is several orders of magnitude larger than the velocity dispersion $c$ (a few mm/s).
Thus, the term involving the third order moments $q_{ijk}\sim N c^3$ can be neglected in comparison with the other terms of Eq.~(3.7), except maybe the $z$-derivative terms, because of the small vertical scale height of the rings. From a hydrodynamical point of view, this approximation is equivalent to discarding the heat conduction terms. It is justified because the equilibrium of the internal heat of the ring particle fluid (i.e., its velocity dispersion) is controlled by the input from the shear, and the loss due to the inelasticity of the collisions, as argued in the previous section, and not by heat conduction phenomena, which occur on a much longer time-scale, except possibly in the vertical direction where, on the contrary, they tend to make the disk isothermal\footnote{Horizontal gradients can also lead to a non-negligible heat conduction term when the flow is sufficiently perturbed from the axial symmetry, but this phenomenon is neglected throughout these lectures.}. We have argued that the velocity dispersion is more or less size independent. Furthermore, all the forces which can generate the mean velocity are gravitational, and therefore insensitive to the particle mass. Thus, in first approximation, both {\bf u} and $p_{ij}$ do not depend on the particle mass, and one can integrate Eqs.~(3.5) through (3.7) on the mass parameter. Notice also that $\int m (\partial N/\partial t)_c dm = 0$, because the collisions conserve the total local mass (the size distribution evolves by creation, destruction, accretion and erosion of particles, but in all these processes, the total mass is locally conserved). Therefore, multiplying Eq.~(3.5) by $m$ and integrating over mass yields: \begin{plain} $${\partial \rho\over\partial t}+{\partial\over\partial x_i}(\rho u_i)=0, \eqno(3.8)$$ \end{plain} \noindent where $\rho\equiv\int mNdm$ is the local mass density. This equation has the standard form of a continuity equation.
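The continuity equation can be illustrated with a toy one-dimensional analogue (a hypothetical example added here; the flow and density profiles are assumptions chosen for simplicity): for the expanding flow $u(x,t)=x/(1+t)$ with density $\rho(x,t)=\rho_0/(1+t)$, the residual $\partial\rho/\partial t+\partial(\rho u)/\partial x$ vanishes identically, as a finite-difference check confirms.

```python
# Toy finite-difference check (hypothetical example, not from the lecture)
# of the continuity equation (3.8) in a 1-D analogue: for the expanding
# flow u(x,t) = x/(1+t) with density rho(x,t) = rho0/(1+t), the residual
# d(rho)/dt + d(rho*u)/dx should vanish at every (x, t).
rho0 = 5.0

def rho(x, t):
    return rho0 / (1.0 + t)

def u(x, t):
    return x / (1.0 + t)

def residual(x, t, h=1e-6):
    # centred finite differences for d(rho)/dt and for the flux divergence
    drho_dt = (rho(x, t + h) - rho(x, t - h)) / (2.0 * h)
    dflux_dx = (rho(x + h, t) * u(x + h, t)
                - rho(x - h, t) * u(x - h, t)) / (2.0 * h)
    return drho_dt + dflux_dx

r = residual(x=2.0, t=0.5)
```

The residual is zero to finite-difference accuracy at any test point, since mass carried outward by the flow is exactly compensated by the density decrease.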
Let us also introduce here, for future use, various mass averaged quantities. The mean mass of the distribution is defined by $M=\int mNdm/\int Ndm$; in first approximation, $M$ is independent of the location in the rings. With this definition, $\rho=Mn$, where $n=\int Ndm$ is the local particle density. One can also define a mass weighted pressure tensor ${\mathrm p}_{ij}\equiv\int mp_{ij}dm=Mp_{ij}$ and a mass weighted third order moment ${\mathrm q}_{ijk}\equiv\int mq_{ijk}dm =Mq_{ijk}$. Collisions are locally momentum conserving if the particle size is smaller than the mean free path (even if coupling with the spin degrees of freedom is included). If the particle size is of the same order as or larger than the mean free path, there is a non-local collisional contribution to momentum transport, across the particle size, because the particles behave as an almost incompressible medium in comparison with the rings. However, one can show that this contribution can be expressed as the divergence of a second rank tensor (see Shukhman 1984). Therefore, it can be simply taken into account by a suitable redefinition of $p_{ij}$, and one can assume, without loss of generality, that $(\partial N u_i/\partial t)_c=0$. Multiplying Eq. (3.6) by $m$ and integrating over mass yields, after division by $\rho$: \begin{plain} $${\partial u_i\over\partial t}+ u_j{\partial u_i\over\partial x_j} =-{\partial \phi\over\partial x_i} -{1\over\rho}{\partial{\mathrm p}_{ij}\over \partial x_j},\eqno(3.9)$$ \end{plain} \noindent which has the standard form of an equation of fluid motion. The physical meaning of the pressure tensor term in Eq. (3.9) is well-known from Fluid Mechanics. Let us ignore the term due to $\phi$, and compute the rate of change of momentum per unit volume, $\partial\rho{\bf u}/\partial t$, from the momentum equation and the continuity equation.
Integrating over some arbitrary fixed volume in space, one obtains $\partial/\partial t\int\!\!\!\int\!\!\!\int\rho u_i\ dV= -\int\!\!\int({\mathrm p}_{ij}+\rho u_i u_j)dS_j$, where $d{\bf S}$ is an elementary surface vector. The right-hand side of this expression is the flux of momentum through the surface bounding the volume considered. The second term is the momentum that the fluid crossing the surface during the motion carries with it (the so-called advective term); the first term characterizes the fact that the fluid outside the volume considered exerts a force equal to $-({\bf n}.{\bf e}_i){\mathrm p}_{ij}{\bf e}_j$ on the fluid in the volume considered per unit area of the bounding surface, where ${\bf n}$ is the outside normal to the surface, and $\{ {\bf e}_i, i=x,y,z\}$ are the unit vectors of a Cartesian system of coordinates. The existence of this term is natural, because ${\mathrm p}_{ij}$ is a measure of the correlation of the random velocities in the ($i,j$) directions, and is therefore similar to the $\rho u_i u_j$ term. Note also that this contribution is a consequence of the existence of random velocities, and not of collisions, although random velocities are often the result of collisions. Let us now complete our mass integrations by noting that Eq.~(3.7) is unchanged by the mass weighting process, except for the third order moment term, which reduces to $\partial {\mathrm q}_{ijz}/\partial z$, and, possibly, for the collisional term. For completeness, let us mention the existence of a few processes which conserve neither mass nor momentum locally. For example, micron-sized particles are permanently accreted onto large particles and formed from them. As these micron-sized particles are submitted to electromagnetic forces as well as gravitational ones, they evolve differently from the large ones. Such effects could introduce mass and momentum source and sink terms, and require a multi-fluid description.
However, we have already mentioned that the fraction of the ring mass contained in these particles is very small, so that this contribution can be safely neglected. Also, ballistic transport processes can result in mass and momentum exchanges across large distances in the rings (for a review, see \citealt{D84}). These processes are neglected in this lecture. Finally, let us point out that the mass integration just performed is not completely innocuous. Very big particles are quite underpopulated with respect to smaller ones in rings, so that their mean separation can be very large. If one considers dynamical phenomena having characteristic scales smaller than the mean separation of the bigger particles, the mass integration is not very meaningful. This problem is specific to ring systems and cannot be found in ordinary fluids, because the individual particles, usually molecules, all have roughly the same size, much smaller than any scale of interest for the macroscopic motions. In ring systems, the size distribution spans several orders of magnitude. There is also another major difference between ordinary fluids and rings. In ordinary fluids -- i.e., in the so-called hydrodynamical approximation -- the divergence of the pressure tensor in Eq. (3.9) is reduced to the sum of a pressure term and of a viscous term, and the pressure tensor evolution, Eq. (3.7), is reduced to the evolution of its diagonal trace, which represents the internal energy of the fluid. The third order moments of the form $q_{iij}$ are expressed as heat flux terms. The system of equations is finally closed with the provision of an equation of state (see also section 5). No such manipulation is possible in rings, because the pressure tensor is not symmetric enough.
This is due to the fact that the particle paths are curved between collisions, whereas in ordinary fluids, the collision time is always much shorter than the dynamical time (or, equivalently, the distance travelled between two collisions is smaller than the system's characteristic scales). It is necessary for future use to recast the equations just obtained in Lagrangian form. Eqs. (3.8), (3.9) and (3.7) constitute a Eulerian description of ring systems: the various moments, $n$, ${\bf u}$, $p_{ij}$, are functions of position and of time, and describe the state of the rings at a given place and at a given moment. In the Lagrangian description, the attention is focused on a {\it fluid} particle, i.e., a collection of ring particles. This collection must be large enough so that its size is much larger than the typical particle size and particle mean separation, but much smaller than the typical length-scale of the phenomena under consideration. Also, the concept of fluid particle can be meaningful only if the typical time for a ring particle to random-walk out of the fluid particle is much larger than the dynamical time-scale. One sees again that the existence of a broad particle size distribution can in certain circumstances invalidate these assumptions. In order to have a complete description of the fluid, one wants to know at each time the position, density, etc., of all its fluid particles. Let us call ${\bf r}$ the position of a fluid particle, and ${\bf r}_0$ its position at some initial time $t_0$. The position of the fluid particle depends necessarily on time.
Also, for a given flow, the fluid particle paths do not cross at any given time, so that the path is completely specified if the initial position is known; thus the fluid particle position depends on its initial position as well: \begin{plain} $${\bf r}={\bf r}({\bf r}_0,t).\eqno(3.10)$$ \end{plain} With these definitions, the velocity of the flow at a given time and location is equal to the velocity of the fluid particle located at the same place at the same time, and tangent to the fluid particle path: \begin{plain} $$\left(\partial{\bf r}\over\partial t\right)_{{\bf r}_0}={\bf u}({\bf r},t)={\bf u}({\bf r}({\bf r}_0,t), t).\eqno(3.11)$$ \end{plain} By the same token, any fluid quantity $X$ (scalar, vector or tensor) can be considered as a function of either ${\bf r}$ and $t$ or of the fluid particle to which it belongs, i.e. as a function of ${\bf r}_0$ and $t$ through Eq. (3.10). This enables us to introduce the concept of substantial derivative, noted $D/Dt$: \begin{plain} $${DX\over Dt}\equiv \left(\partial X({\bf r}_0,t) \over \partial t\right)_{{\bf r}_0} ={\partial X({\bf r},t)\over\partial t}+{\bf u}.{\partial X({\bf r},t) \over\partial {\bf r}}.\eqno(3.12)$$ \end{plain} This substantial derivative, commonly used in Fluid Dynamics, expresses the change with time of a given quantity $X$ along the flow, i.e. as it is carried by a fluid particle along its path. Note that we have implicitly assumed that all quantities are defined in a ``smooth'' way at the initial time $t_0$; for example, the initial velocity is assumed to vary smoothly from one fluid particle to the next (it is a continuous function of ${\bf r}_0$). We are now in a position to recast Eqs.~(3.8), (3.9) and (3.7) in Lagrangian form.
They read: \begin{plain} $${D\rho\over D t}+\rho{\partial u_i\over\partial x_i}=0,\eqno(3.13)$$ $${D^2 r_i\over Dt^2}={D u_i\over Dt}=-{\partial \phi\over\partial x_i}-{1\over\rho}{\partial {\mathrm p}_{ij}\over\partial x_j},\eqno(3.14)$$ $${D {\mathrm p}_{ij}\over Dt}+{\mathrm p}_{ik}{\partial u_j\over\partial x_k} +{\mathrm p}_{jk}{\partial u_i\over\partial x_k} +{\mathrm p}_{ij}{\partial u_k\over\partial x_k} +{\partial{\mathrm q}_{ijz}\over\partial z} =\left(\partial {\mathrm p}_{ij}\over \partial t\right)_c.\eqno(3.15)$$ \end{plain} \noindent where again all derivatives with respect to $x_i$ are evaluated at ${\bf x}={\bf r}$ (this will be implicitly assumed in the remainder of these notes). Further progress can be made by taking advantage of the extreme thinness of ring systems, and of the dominance of the planet attraction over the potential of the disk self-gravity, the perturbations of the satellite\footnote{This is true even near a resonance with a satellite.}, and the disk pressure tensor, except for the determination of the ring vertical structure. Thus the horizontal component of the velocity, which is mainly driven by the planet, is nearly independent of the vertical coordinate in the ring plane. A somewhat different result holds for the vertical component, for the following reason. Let us consider for example a ring in circular motion in the equatorial plane of its parent planet but with non zero thickness. Typically, $H/r\sim 10^{-7}$. Thus, the variation of the vertical component of the planet force across the ring thickness is very small, comparable to the vertical component of the pressure tensor force, and even smaller than the ring self-gravity; the pressure tensor can then prevent the crossing of the fluid particle paths that the planet and the ring self-gravity would tend to impose in the vertical direction. 
This physical constraint is often expressed through a condition of vertical hydrostatic equilibrium both in ring dynamics and in accretion disk theory. In this case, the vertical velocity is nearly independent of the vertical coordinate; it is essentially equal to zero, except for example when inclined satellites excite coherent vertical motions of the ring plane (this is the case for bending waves). Alternatively, when the ring particles are in a ``thin'' equilibrium (in the sense defined in section 2), i.e. when the ring particles have mean separations comparable to their radii, the hydrostatic condition has to be replaced with the constraint of incompressibility of the three dimensional flow (see \citealt{BGT85} and section 5). In any case, because of the remarkable thinness of the rings, it is useful to vertically integrate the preceding equations, and forget about the precise vertical structure. In this operation, all fluid particles with different $z$ are replaced by an equivalent fluid particle whose vertical coordinate is equal to the mean plane height. Let us call ${\bf R}=(R,\Theta,\mathcal{Z})$ the position of this equivalent fluid particle; $\mathcal{Z}=\int z\rho dz/\sigma$, and the vertically integrated equation of motion Eq. (3.14) reduces to the equation of motion of this equivalent particle. For simplicity, we will assume here that the condition of vertical hydrostatic equilibrium holds\footnote{One could think at first glance that this assumption is not very good for perturbed flows, because the horizontal and vertical orbit perturbations seem to have the same period. However, this is not true: in the vertical direction, the self-gravity of the ring is much larger than the restoring force of the planet, which is not the case in the horizontal direction.
Therefore, the vertical effective epicyclic frequency is about ten times larger than the horizontal one, and the vertical structure adjusts on a time-scale much shorter than the horizontal perturbation time-scale (see section 5).}; similar results are obtained in the incompressible case (see section 5). The integration of the continuity equation and of the equations of fluid motion is greatly simplified by the fact that the velocity is nearly independent of the vertical coordinate in comparison with the density $\rho$. Therefore, in cylindrical coordinates ($r,\theta,z$), the vertically integrated continuity equation reads: \begin{plain} $${D\sigma\over Dt}+ \sigma{\partial u_r\over\partial r}+{\sigma u_r\over r}+ {\sigma\over r}{\partial u_\theta\over\partial\theta}=0,\eqno(3.16)$$ \end{plain} \noindent where $\sigma\equiv\int \rho dz$ is the surface density of the ring. Multiplying Eq. (3.14) by $\rho$, integrating over $z$ and dividing the resulting equations by $\sigma$ yields: \begin{plain} $${D^2 R\over Dt^2}-R\left(D\Theta\over Dt\right)^2=-{\partial\phi_0\over \partial r}-{1\over\sigma}\left[{1\over r}{\partial(rP_{rr})\over\partial r}+{1\over r}{\partial P_{r\theta}\over\partial\theta}- {P_{\theta\theta}\over r}\right],\eqno(3.17)$$ $${1\over R}{D\over Dt}\left(R^2{D\Theta\over Dt}\right)= -{1\over r}{\partial\phi_0\over\partial\theta}-{1\over\sigma} \left[{1\over r^2}{\partial(r^2 P_{r\theta})\over\partial r} + {1\over r}{\partial P_{\theta\theta}\over\partial\theta}\right],\eqno(3.18)$$ $${D^2 \mathcal{Z}\over Dt^2}=-\left(\partial\phi\over\partial z\right)_{z=\mathcal{Z}} -{1\over\sigma}\left[{1\over r}{\partial (r P_{rz})\over\partial r}+ {1\over r}{\partial P_{\theta z}\over\partial\theta}\right],\eqno(3.19)$$ \end{plain} \noindent where $P_{ij}\equiv \int {\mathrm p}_{ij}dz$ are the vertically integrated components of the pressure tensor, and where $\phi_0=\int\rho\phi dz/\sigma$ is the vertically averaged potential (function of $\mathcal{Z}$).
Due to the small thickness of the ring, $\phi_0\simeq\phi$. In Eq. (3.19) we have used the constraint of vertical hydrostatic equilibrium with respect to the mean plane, which reads $\partial{\mathrm p}_{zz}/\partial z=-\rho[\partial\phi/\partial z - (\partial \phi/\partial z)_{z=\mathcal{Z}}]$, and $D\mathcal{Z}/Dt=\int u_z\rho dz/\sigma$, which follows from Eqs. (3.13) and (3.16) (for details, see \citealt{SS85}). Finally, Eq. (3.15) yields: \begin{plain} $${D P_{ij}\over Dt}+P_{ik}{\partial u_j\over\partial x_k} +P_{jk}{\partial u_i\over\partial x_k} +P_{ij}{\partial u_k\over\partial x_k} =\left(\partial P_{ij}\over \partial t\right)_c.\eqno(3.20)$$ \end{plain} \noindent where all terms involving $z$ derivatives have been removed by the vertical integration. In Eqs. (3.17) through (3.20), all quantities depending on the spatial coordinates are evaluated at ${\bf R}$. Note that when there are no vertical motions ($D^2 \mathcal{Z}/Dt^2 =0$), the rings are in principle symmetric with respect to the equatorial plane, so that $P_{rz}=0$ and $P_{\theta z}=0$, and Eq.~(3.19) is trivially satisfied. It is useful to recast Eq.~(3.20) in cylindrical coordinates. There are several ways to perform the change of variables. One can make it directly on Eq.~(3.20); or directly on the Boltzmann equation itself, Eq.~(3.1), and compute afterwards the moment equations; or use tensor calculus, by replacing the partial derivatives in Cartesian coordinates by covariant derivatives in cylindrical coordinates and computing the Christoffel symbols. The latter is probably the fastest.
In any case, we give the resulting equations for $P_{rr}, P_{r\theta}, P_{\theta\theta}$ and $P_{zz}$ (we will see in section 5 that the remaining equations are not needed for our purposes, but the interested reader can find them in the appendix B of \cite{SS85}): \begin{plain} $${D P_{rr}\over Dt}+P_{rr}\left(3{\partial u_r\over\partial r}+{u_r\over r}+{1\over r}{\partial u_{\theta}\over\partial\theta}\right)+2P_{r\theta}\left({1\over r}{\partial u_r\over\partial\theta}-{2\over r}u_\theta\right)$$ $$\hspace{5truecm}= \left(\partial P_{rr}\over\partial t\right)_c,\eqno(3.21)$$ $${D P_{\theta\theta}\over Dt}+P_{\theta\theta} \left({\partial u_r\over\partial r}+{3u_r\over r}+{3\over r}{\partial u_{\theta}\over\partial\theta}\right) +{2\over r}P_{r\theta} {\partial\over \partial r}(r u_\theta)$$ $$\hspace{5truecm}= \left(\partial P_{\theta\theta}\over\partial t\right)_c,\eqno(3.22)$$ $${D P_{r\theta}\over Dt}+2P_{r\theta}\left({\partial u_r\over\partial r}+{u_r\over r}+{1\over r}{\partial u_{\theta}\over\partial\theta}\right)+ {1\over r}P_{rr}{\partial\over\partial r}(ru_\theta)+ P_{\theta\theta}\left({1\over r}{\partial u_r\over\partial\theta}-{2\over r}u_\theta\right)$$ $$\hspace{5truecm} = \left(\partial P_{r\theta}\over\partial t\right)_c,\eqno(3.23)$$ $${D P_{zz}\over Dt}+P_{zz}\left({\partial u_r\over\partial r}+{u_r\over r}+{1\over r}{\partial u_{\theta}\over\partial\theta}\right) +2P_{zr}{\partial\over\partial r}\left(D\mathcal{Z}\over Dt\right)+ 2P_{\theta z}{1\over r}{\partial\over\partial\theta}\left(D\mathcal{Z}\over Dt\right)$$ $$\hspace{5truecm}= \left(\partial P_{zz}\over\partial t\right)_c,\eqno(3.24)$$ \end{plain} Let us conclude this section with a final fundamental comment. We have already pointed out that all forces are small compared to the planet's, except in the vertical direction, but the question of the vertical structure has just been evicted by the vertical integration. 
This means also that the dynamical time-scale imposed by the planet is much shorter than the dynamical evolution time-scale due to the disk self-gravity, the satellite perturbations, or the pressure tensor. Furthermore, we have also argued in section 2 that the pressure tensor reaches steady-state on a time scale comparable to the orbital time scale. This means that {\it the pressure tensor components reach steady-state for steady-state values of the mean velocity ${\bf u}$ and surface density $\sigma$ mainly imposed by the planet}. Therefore, the theory of ring dynamics can be developed according to the following scheme: \begin{enumerate} \item The motion is first solved when the planet force is the only one acting on the rings. \item The $P_{ij}$ are then found by solving Eq.~(3.21) with steady-state ${\bf u}$ and $\sigma$ appropriately chosen. \item The other forces (self-gravity, satellite perturbations and pressure tensor) are treated as perturbations. When they drive a slow evolution of the surface density and velocity field of the rings, the pressure tensor evolution is of course ``enslaved'' to this evolution. \end{enumerate} This program forms the basis of the streamline formalism and is described in the next sections. \section{Ring kinematics: streamlines} Our primary objective is to find the general solution of Eqs.~(3.17) and (3.18) when the potential is reduced to the planetary potential, and when the viscous stress terms are neglected. Thus, the problem at hand is analogous to the motion of a test particle around a planet, except that it applies to a fluid particle rather than to an individual particle. We refer to this problem as the ``fluid test particle motion'' in the remainder of these notes. For convenience, we drop the capital letter notation for the position of the equivalent fluid particle.
Furthermore, for simplicity, we will restrict ourselves to motions confined to the equatorial plane of the planet, i.e., $z=0$ and Eq.~(3.19) does not need to be considered. This restriction eliminates the dynamics of bending waves and inclined rings from the analysis. However, bending waves do not differ very much from density waves, and inclined rings are similar in their dynamical properties to eccentric rings, so that this restriction is not essential while simplifying the exposition of the method. Note that the total derivative ($d/dt$) and the substantial derivative ($D/Dt$) have very similar meanings: the total time derivative of Classical Mechanics (resp. the substantial derivative) refers to the motion of a given point (resp. fluid) particle under given initial conditions. It is therefore customary to write Eqs.~(3.17) through (3.19) with usual total derivatives instead of substantial ones, and reserve the substantial derivative symbol to express the time variation along the fluid particle paths of quantities expressed in usual Eulerian coordinates. We will follow this custom in the remainder of these notes, but the reader should not forget that we are solving a fluid problem and not a point particle one. \subsection{Fluid test particle motion} In the conditions just outlined, the equation of motion of the fluid test particle reads: \begin{plain} $${d^2{\bf r}\over dt^2}=-\nabla\phi_p.\eqno(4.1)$$ \end{plain} \noindent where ${\bf r}$ is restricted to the equatorial plane of the planet. This equation is formally identical to the equation of motion of a point particle in the planet potential; the only difference is that ${\bf r}$ is a function of the initial position as well as of time. This equation can therefore be solved with standard techniques. It is usual in Celestial Mechanics to use the elliptic solution of the two-body problem and treat the deviations of the planet from spherical symmetry as perturbations. 
This procedure raises however quite a number of subtle technical issues which will be briefly described in section 4.2. We will therefore depart from this well-established custom; however, the solution used here is closely related to the elliptic solution, and everyone already used to Celestial Mechanics techniques will feel at ease with it. Note that the equation of motion Eq.~(4.1) does not depend on the surface density, so that it can be solved independently of the continuity equation. The solution we will use derives from the epicyclic theory, which was initially developed for galactic dynamics. A fundamental feature of ring problems is that although the density contrast can be strongly nonlinear, the deviations from circular trajectories are always very small (see in particular section 4.4). The analysis therefore relies on the existence of purely circular solutions (the planet is axisymmetric), and looks for general solutions in the form of small deviations around one of these circular solutions, in successive approximations. 
Up to second order in deviation from circularity, the radius $r$ and true longitude $\theta$ of a fluid test particle on an equatorial orbit read \citep{LB91}: \begin{plain} $$r=r_0\left[1+{3\eta^2\over 2\kappa^2}\epsilon^2 - \epsilon\cos\xi-{\eta^2\over 2\kappa^2}\epsilon^2\cos 2\xi\right],\eqno(4.2)$$ $$\theta=\gamma+{2\Omega\over\kappa}\epsilon\sin\xi+ {\Omega\over 2\kappa} \left( {3\over 2}+{\eta^2\over\kappa^2}\right)\epsilon^2 \sin 2\xi,\eqno(4.3)$$ \end{plain} \noindent with \begin{plain} $$\xi=\int_0^t\kappa dt+\delta=\kappa t +\delta,\eqno(4.4)$$ $$\gamma=\theta_0+\int_0^t\Omega\left[1+\left({3\over 2}-{3\eta^2\over\kappa^2} \right)\epsilon^2\right]dt =\theta_0$$ $$\hspace{2truecm}+\Omega\left[1+\left({3\over 2}-{3\eta^2\over\kappa^2} \right)\epsilon^2\right]t,\eqno(4.5)$$ $$\Omega^2\equiv {1\over r_0}{d\phi_p\over dr}(r_0),\eqno(4.6)$$ $$\kappa^2\equiv \left[{3\over r}{d\phi_p\over dr}+ {d^2\phi_p\over dr^2}\right]_{r=r_0},\eqno(4.7)$$ $$\eta^2\equiv \left[{2\over r}{d\phi_p\over dr}- {r\over 6}{d^3\phi_p\over dr^3}\right]_{r=r_0},\eqno(4.8)$$ \end{plain} \noindent where $r_0, \epsilon, \theta_0$, and $\delta$ are the constants of integration of the problem, and are functions of the initial position of the fluid particle ${\bf r}_0$ (do not get confused between $r_0$ and ${\bf r}_0$ !!): $r_0$ is the radius of the circular motion around which the solution is expanded (analogous to an average radius); $\epsilon$ is the relative departure from circularity (analogous to an eccentricity); in ring problems, $\epsilon$ is always a small quantity; $\theta_0$ and $\delta$ are initial phases; $\Omega$ and $\kappa$ are the usual rotation and epicyclic frequencies, i.e. 
the frequencies of the motion around the planet and of the radial oscillations, respectively; $\eta$ is an auxiliary quantity homogeneous to a frequency; $\phi_p$ is the potential of the planet, including the $J_k$ terms (the coefficients of the expansion of the planetary potential in spherical harmonics\footnote{In some cases, it is useful to include the axisymmetric contributions of the satellites and of the rings in the definition of $\phi_p$.}). In the limit of a spherical planet, $\Omega^2=\kappa^2=\eta^2=GM_p/r_0^3$, where $M_p$ is the mass of the planet, so that in general, the three frequencies differ only by terms of order $J_2$ and smaller. General expressions for these frequencies in terms of the $J_k$s can be found in \cite{BL87}. Expressions in terms of $J_2$ are given below. These expressions constitute the epicyclic solution as it is generally derived; however, the close analogy with the elliptic solution is much more apparent if one makes use of a somewhat different set of epicyclic elements\footnote{The usefulness of the change of variable from $r_0$ to $a_e$ was pointed out by Phil Nicholson (1990, private communication).}: $\{a_e, \epsilon, \varpi_e, M_e\equiv\xi\}$, where $a_e$ and $\varpi_e$ are defined by: \begin{plain} $$a_e\equiv {r_0\over{1-\epsilon^2}},\eqno(4.9)$$ $$\varpi_e\equiv\gamma-\xi=\gamma- M_e.\eqno(4.10)$$ \end{plain} The change of notation from $\xi$ to $M_e$ is adopted for mnemonic reasons which will soon be made obvious.
Keeping terms up to second order in $\epsilon$, the preceding formul\ae\ become: \begin{plain} $$r=a_e\left[1+\left({3\eta_a^2\over 2\kappa_a^2}-1\right)\epsilon^2 - \epsilon\cos M_e -{\eta_a^2\over 2\kappa_a^2}\epsilon^2\cos 2 M_e\right],\eqno(4.11)$$ $$\theta=\varpi_e+ M_e+{2\Omega_a\over\kappa_a}\epsilon\sin M_e+ {\Omega_a\over 2\kappa_a} \left( {3\over 2}+{\eta_a^2\over\kappa_a^2}\right)\epsilon^2 \sin 2 M_e,\eqno(4.12)$$ \end{plain} \noindent where now the frequencies are evaluated at $a_e$, so that\footnote{Note that the terms of order $\epsilon^2\kappa_a t$ are not explicitly given in Eq.~(4.13). Such terms cannot be self-consistently computed from a second order epicyclic theory, because a third order expansion produces a nonlinear frequency correction of the same magnitude to $ M_e$. This correction actually kills the contribution of order $\epsilon^2$ to $\kappa_a$, so that the remaining contribution is of order $J_2\epsilon^2$. Note also that in this case, $\Omega_a$ and $\kappa_a$ differ by a term of order $J_2\epsilon^2$. 
The knowledge of this contribution is important for some ring problems, but is not needed in these notes.}: \begin{plain} $$ M_e=\int_0^t\kappa_a dt+\delta+O(J_2\epsilon^2),\eqno(4.13)$$ $$\gamma=\theta_0+\int_0^t \Omega_a\left[1+\left({7\over 2}- {3\eta_a^2\over\kappa_a^2}-{\kappa_a^2\over\Omega_a^2} \right)\epsilon^2\right]dt,\eqno(4.14)$$ $$\Omega_a^2\equiv {1\over a_e}{d\phi_p\over dr}(a_e),\eqno(4.15)$$ $$\kappa_a^2\equiv \left[{3\over r}{d\phi_p\over dr}+ {d^2\phi_p\over dr^2}\right]_{r=a_e},\eqno(4.16)$$ $$\eta_a^2\equiv \left[{2\over r}{d\phi_p\over dr}- {r\over 6}{d^3\phi_p\over dr^3}\right]_{r=a_e}.\eqno(4.17)$$ \end{plain} For definiteness, let us give expressions for $\phi_p$, $\Omega_a$ and $\kappa_a$ in terms of $J_2$ ($\eta_a$ is not needed): \begin{eqnarray} \phi_p(a_e) & = &\frac{G M_p}{a_e} \times \left[-1+\frac{1}{2}\left(\frac{R_p}{a_e}\right)^2 J_2 \right],\nonumber\\ \Omega_a(a_e) & = & \quad n_a\ \times \left[\ \ 1+\frac{3}{4}\left(\frac{R_p}{a_e}\right)^2 J_2\right],\nonumber\\ \kappa_a(a_e) & = & \quad n_a\ \times \left[\ \ 1-\frac{3}{4}\left(\frac{R_p}{a_e}\right)^2 J_2\right],\nonumber \end{eqnarray} \noindent where $M_p$ and $R_p$ are the planet mass and radius. For future use, we have also introduced an effective elliptic mean motion defined by $n_a=(GM_p/a_e^3)^{1/2}$. The analogy is apparent if one expands the elliptic solution to second order in eccentricity: \begin{plain} $$r=a\left[1+{1\over 2}e^2 -e\cos M- {1\over 2}e^2\cos 2M\right],\eqno(4.18)$$ $$\theta=\varpi+M+2e\sin M + {5\over 4}e^2\sin 2M,\eqno(4.19)$$ \end{plain} \noindent with standard notations for the elliptic elements.
If one makes the following formal identifications: \begin{plain} $$a\rightarrow a_e,$$ $$e\rightarrow \epsilon,$$ $$M\rightarrow M_e,$$ $$\varpi\rightarrow\varpi_e,$$ \end{plain} \noindent one sees that the two types of expansions are formally nearly identical; the only difference comes from the ratios of frequencies in the epicyclic solution, which differ from unity by terms of order $J_2$. In the epicyclic solution, the precession due to the deviations of the planet from sphericity is of course already included: $\dot\varpi_e = \Omega_a -\kappa_a \sim J_2 \Omega_a$. For convenience, we will refer to the elements ($a_e, \epsilon, M_e, \varpi_e$) as the epicyclic semi-major axis, eccentricity, mean anomaly and periapse angle respectively. These elements are also, of course, functions of the initial position of the fluid particle. They represent the average of the epicyclic elements of the individual ring particles composing the fluid particle under consideration. The individual particle epicyclic eccentricities are made up of two contributions: this mean part, and a random contribution which is connected to the velocity second moments (the pressure tensor) of the fluid particle. To conclude this subsection, let us write down the perturbation equations of the epicyclic elements. Such equations are necessary as we have decided to treat all forces but the planet's as perturbations. Due to the formal analogy between the point particle equations of motion and the Lagrangian equations we have derived, the perturbation equations can be obtained with standard variation of the constants techniques.
They read \citep{LB91}: \begin{plain} $${da_e\over dt}={2\over\kappa_a}\left[R\epsilon\sin M_e+{\Omega_a\over \kappa_a}S\left(1+\epsilon\cos M_e\right)\right]+O(\epsilon^2),\eqno(4.20)$$ $${d\epsilon\over dt}={1\over \kappa_a a_e}\left[R\sin M_e + 2{\Omega_a\over\kappa_a}S\cos M_e\right]+O(\epsilon),\eqno(4.21)$$ $${d\varpi_e\over dt}=\Omega_a - \kappa_a +{1\over\kappa_a a_e\epsilon}\left[-R\cos M_e +2{\Omega_a\over\kappa_a}S\sin M_e\right]+O(\epsilon^0),\eqno(4.22)$$ $$\eqalignno{{d M_e\over dt}=\kappa_a+&{1\over \kappa_a a_e\epsilon} \biggl[R\biggl.\left(\cos M_e -3{\eta_a^2\over\kappa_a^2}\epsilon+{\eta_a^2\over\kappa_a^2} \epsilon\cos 2 M_e\right)-\cr &{\Omega_a\over\kappa_a}S\left(2\sin M_e+ \left({1\over 2}+{\eta_a^2\over\kappa_a^2}\right)\epsilon\sin 2 M_e\right)\biggl.\biggr]+O(\epsilon),&(4.23)\cr}$$ \end{plain} The $\Omega_a-\kappa_a$ term in Eq.~(4.22) represents the effect of the planet oblateness on the precession of the apses. For comparison, let us write down the equations of perturbation of the elliptic motion (see, e.g., \citealt{MD99}), to first order in eccentricity: \begin{plain} $${da\over dt}={2\over n}\left[Re\sin M + S\left(1+e\cos M\right)\right]+O(e^2),\eqno(4.24)$$ $${de\over dt}={1\over na}\left[R\sin M + 2S\cos M\right]+O(e),\eqno(4.25)$$ $${d\varpi\over dt}={1\over nae}\left[-R\cos M + 2S\sin M\right]+O(e^0),\eqno(4.26)$$ $$\eqalignno{{dM\over dt}=n+{1\over nae}\biggl[ \biggr. &R\left(\cos M -3e +e\cos 2M\right)\cr &-S\left(2\sin M +{3\over 2}e\sin 2M\right)\biggl. \biggr]+O(e),&(4.27)\cr}$$ \end{plain} \noindent with standard notations. In both sets of equations, $R$ and $S$ are the radial and tangential components of the perturbing acceleration, as usual. Notice the similar roles played by $n$ and $\kappa_a$. 
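The leading behavior of these perturbation equations is easy to verify numerically in the point-mass limit: Eq.~(4.20) [or its elliptic counterpart (4.24)] predicts that a circular orbit subject to a small constant tangential acceleration $S$ drifts at $da_e/dt\simeq 2S/n$. The sketch below integrates the planar equations of motion directly (with a hand-rolled RK4 step; units with $GM_p=1$ and an initial circular orbit at $a=1$, so $n=1$, are assumptions of the sketch) and compares the measured drift with this prediction.

```python
import math

GM = 1.0
S  = 1e-6                      # small tangential perturbing acceleration

def deriv(s):
    x, y, vx, vy = s
    r3 = (x * x + y * y) ** 1.5
    v = math.hypot(vx, vy)
    ax = -GM * x / r3 + S * vx / v    # planet attraction + thrust along velocity
    ay = -GM * y / r3 + S * vy / v
    return (vx, vy, ax, ay)

def rk4_step(s, h):
    k1 = deriv(s)
    k2 = deriv(tuple(si + 0.5 * h * ki for si, ki in zip(s, k1)))
    k3 = deriv(tuple(si + 0.5 * h * ki for si, ki in zip(s, k2)))
    k4 = deriv(tuple(si + h * ki for si, ki in zip(s, k3)))
    return tuple(si + h / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def sma(s):                    # semimajor axis from the specific energy
    x, y, vx, vy = s
    E = 0.5 * (vx * vx + vy * vy) - GM / math.hypot(x, y)
    return -GM / (2 * E)

state = (1.0, 0.0, 0.0, 1.0)   # circular orbit, a = 1, n = 1
h, steps = 1e-3, 50000
a0 = sma(state)
for _ in range(steps):
    state = rk4_step(state, h)

drift_num  = (sma(state) - a0) / (steps * h)
drift_pred = 2 * S             # 2S/n with n = 1
```

The measured drift agrees with $2S/n$ to within a few percent over several orbits, as expected since $\epsilon$ remains small.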
One sees here again that the perturbation equations for elliptic and epicyclic variables are formally nearly identical, except for the various ratios of frequencies in the epicyclic equations, which differ from unity by terms of order $J_2\ (\ll 1)$. In what follows these relations are used to first order in eccentricity at most. As a consequence, the contribution of the perturbations to $d\varphi/dt=d(\varpi_e+M_e)/dt$ is negligible compared to their contribution to $d\varpi_e/dt$, and only the first three equations are needed in practice. Because $R,S$ are small (compared to the planet's acceleration) and $\epsilon\ll 1$, one can replace all frequency ratios with $1$ in these relations; we also replace everywhere $\kappa_a$ and $\Omega_a$ by the effective mean motion $n_a$ except in $\Omega_a - \kappa_a$, to the same level of precision. In this limit, these relations become formally identical to their elliptic counterpart, except for the notable fact that $a_e$ and $\epsilon$ differ from the osculating elliptic ones by terms of order $J_2$. This simplification is usually made from now on. The comparison of the expressions of the specific energy and angular momentum in terms of the elliptic and epicyclic elements is also useful. In elliptic variables, the specific energy and angular momentum are related to the semimajor axis and the eccentricity by the well-known formul\ae: \begin{plain} $$E=-{G M_p\over 2 a},\eqno(4.28)$$ $$H=\sqrt{G M_p a(1-e^2)}= na^2\left(1-{e^2\over 2}\right)+O(e^4) ,\eqno(4.29)$$ \end{plain} \noindent whereas in epicyclic variables, we have: \begin{plain} $$E=\phi_p(a_e)+{\Omega_a^2a_e^2\over 2}+O(\epsilon^4),\eqno(4.30)$$ $$H=\Omega_a a_e^2\left[1-{1\over 2}\left(\kappa_a\over\Omega_a\right)^2 \epsilon^2\right]+O(\epsilon^4).\eqno(4.31)$$ \end{plain} It is easily seen that Eqs.~(4.28) and (4.29) are recovered in the case of a spherical planet. Note that $r_0$ (resp. $a_e$) is the radius of the circular orbit having the same angular momentum (resp.
the same energy to order $\epsilon^4$) as the non-circular epicyclic orbit of Eqs.~(4.2) and (4.3) [resp.~(4.11) and (4.12)]. \subsection{Epicyclic versus elliptic elements} Although elliptic elements are much more widely known and almost exclusively used in Celestial Mechanics in general and in the ``ring community'' in particular, epicyclic elements are much more adapted to the observational and theoretical descriptions of planetary rings for the three following reasons: \begin{enumerate} \item First, the fluid particle trajectories in a circular ring are described by the simple equation $r=\mathrm{constant}=a_e$ in the streamline formalism (see section 4.3). However, it is well-known that although the ring fluid particles follow a circular trajectory, their elliptic osculating eccentricity $e_0$ is nonzero; $e_0=(3/2) J_2 (R/r)^2$ to leading order in the gravitational coefficients of the planet potential. Furthermore, the eccentricities which are typically considered in some ring problems, e.g. {\it the mean eccentricities} which are involved in the description of density waves in Saturn's rings, {\it can be orders of magnitude smaller than the osculating eccentricities}, which are then of order $e_0$ \citep{LB86}. Similarly, the osculating semi-major axis $a_0\simeq r[1+(3/2)J_2(R/r)^2]$ is substantially different in absolute value from its ``mean'' value $r$, although not in relative value. Second, for non-circular motions, the osculating elements exhibit short-period variations due to the harmonic coefficients of the planet, whereas the elements used in the streamline formalism, as well as in data fits, are supposed to be time-independent, or, at the very least, to vary only on much longer time scales than the orbital period. This time variation cannot be ignored: it is at the origin of the sometimes very large discrepancy between the osculating and observed elements.
\item Epicyclic elements are ``more constant'' than elliptic elements: the non-sphericity of the planet is readily taken into account in the epicyclic formul\ae\ whereas, as already pointed out, it leads to short-period variations of the elliptic elements. Furthermore, the non-sphericity of the planet is the most important source of short-period variations. The main effects of the shepherd satellites, and of the ring self-gravity and pressure tensor (which perturb both elliptic and epicyclic elements) occur on much longer time-scales (their short-period contributions are negligible). The argument developed in this paragraph can be rephrased and summarized in a different way: {\it the mean (short-period averaged) and the osculating elliptic elements of a ring particle can be substantially different, whereas its mean (short-period averaged) and osculating epicyclic elements are always identical or nearly identical.} \item No approximation with respect to the harmonic coefficients of the planet is involved, which is not the case for elliptic elements. On the other hand, elliptic formul\ae\ are valid to all orders in eccentricity, but in practice, expansions in eccentricity (and inclination) are always required, and the exactness of the elliptic formul\ae\ turns out to be no great advantage, because epicyclic formul\ae\ can easily be obtained to the order needed for a good description of the data. \end{enumerate} Up to now the equations which have been used both in data fits and in dynamical analyses are the equations of the elliptic motion, although they were applied to elements which were assumed to be constant at least on the short time-scale, and known to be quite different from the elliptic osculating elements in some circumstances. It is therefore legitimate to wonder why this procedure was valid, especially in light of the comments above.
However, we have shown [Eqs.~(4.11) to (4.27)] that the elliptic elements are formally nearly identical to a suitably defined set of epicyclic elements. This argument combined with the three points exposed above shows that (i) the elements obtained from the observations are indeed the epicyclic elements $a_e, \epsilon, M_e, \varpi_e$ and (ii) the application of elliptic formul\ae\ and equations to these epicyclic elements generally constitutes an acceptable approximation. \subsection{Ring streamlines and kinematics} In fluid dynamics, the streamlines are the lines of the velocity field of the fluid. In the streamline formalism, the word is sometimes used in a different way. Most of the time, it designates the actual streamlines of the flow, at least in some suitably defined rotating frame. For example, this is the case for density waves and eccentric rings, but the $m=0$ mode of the $\gamma$ ring of Uranus is a notable exception. In all cases, the streamlines provide a description of ring shapes: more precisely, they designate the curve that an infinitely thin ring would define in space. Therefore, to conform with past usage, we will use the word ``streamline'' to designate both ring velocity field lines and ring shapes, keeping in mind that when the two concepts do not overlap, the word refers to the latter, in opposition to the more common usage. It is customary when treating a fluid dynamics problem to look for solutions with a specific space and time dependence. For example, in the analysis of fluid stability in the linear approximation, one often looks for oscillating solutions with phases of the form $kx -\omega t$ for a one-dimensional problem, $x$ being the space variable.
The same approach is assumed in ring problems, with two notable subtleties attached: \begin{enumerate} \item The celestial mechanics perturbation technique adopted in the streamline formalism requires defining not only the streamline shapes, but also the defining quantities of each individual particle in a given streamline. As the unperturbed background is not at rest but is constituted by circular motions around the planet, this makes the \textit{a priori} specification of the shape of the motion somewhat more involved than usual. \item One looks for \textit{nonlinear} solutions in general, but with a specific form of nonlinearity that makes them analytically tractable to a larger extent. In disk systems in general, one can identify two types of nonlinearity: large radial extension, and large density variations. The two are not necessarily coupled, and it turns out that, in rings, deviations from circularity are always quite small, but the associated density contrast can be quite large. This will be discussed in section \ref{sec:surf}. \end{enumerate} Let us start with the trivial case: a circular ring, in which the fluid particles are in circular motion\footnote{We have argued in section 2 that, because interparticle collisions are dissipative, orbital energy is permanently lost, so that no fluid particle can be exactly in circular motion, but let us ignore this complication for the time being, as we have not yet included the effect of the viscous stress in our analysis. In any case, this is a small effect, occurring on the longest of all the time-scales of interest in ring dynamics.}. The fluid particles' positions reduce to: \begin{plain} $$ r = a_e,\eqno(4.32)$$ $$ \theta = \varphi \equiv \varpi_e + M_e.\eqno(4.33)$$ \end{plain} \noindent where $\varphi$ is the epicyclic mean longitude. One sees also that at any given time, to any given fluid particle with initial position ${\bf r}_0$ corresponds a unique set ($a_e, \varphi$).
This suggests that these quantities can be chosen as (semi-)Lagrangian labels instead of the initial position ${\bf r}_0$ if needed. This is also obviously true for the more general eccentric solution, and in the rest of these notes, it is considered when needed that the epicyclic elements $\epsilon$, $\varpi_e$ and $ M_e$ are functions of $a_e$ and $\varphi$ considered as Lagrangian labels (naturally, $\varpi_e$ and $ M_e$ are functions of time as well in the fluid test particle solution). Of course, $a_e$ and $\varphi$ are independent Lagrangian variables: the change of variable from ${\bf r}_0$ to ($a_e, \varphi$) is nowhere singular at any given time. Note that in non steady-state flows, the change of variable from ${\bf r}_0$ to ($a_e, \varphi$) may be time dependent. \subsubsection{Eccentric rings} Let us now consider some more general cases. Usually, in Eulerian fluid dynamics, one is interested in special solutions of the Navier-Stokes equations. The situation is similar here, because it is observationally found that rings can be well described by some special form of the general solution we have just written down. For example, the shape of the elliptic Uranian rings is known from the analysis of stellar occultation data (for a review, see \citealt{EN84}). Their mean eccentricity is typically of the order of $10^{-3}$ to $10^{-4}$; they also present a difference of eccentricity between the inner and outer edges, of the order of $10^{-4}$ to $10^{-5}$ \citep{FEL86,Fetal88}. The analysis of the data strongly suggests that the ring fluid particles having the same (epicyclic) semimajor axis also have the same (epicyclic) eccentricity and the same (epicyclic) periapse angle. This means that the ring shape, which in this case coincides with the ring fluid particle streamlines, is parametrized as follows, to first order in eccentricity [combine Eqs.
(4.11) and (4.12)], as in Figure~3 below: \begin{plain} $$r(a_e,\phi)=a_e\left[1-\epsilon(a_e)\cos\left(\theta-\varpi_e(a_e)\right)\right]. \eqno(4.34)$$ \end{plain} \noindent In this type of solution, the epicyclic elements $\epsilon$ and $\varpi_e$ do not depend on the Lagrangian coordinate $\varphi$. Note that if such a dependence existed, it would generate an increased shear, and therefore, in usual situations, be quickly erased by viscous forces, unless it were maintained by some dynamical agent. This is also true for eccentric rings, which generally tend to become circular under the action of the ring viscous stress. Thus, the eccentricities of elliptic rings generally need to be maintained by some external agents (for example the shepherd satellites), unless the viscous stress has such an unusual form that viscous instabilities can take place and generate them (see section 7.1). Note also that the dependence of the precession rate on the fluid particle semimajor axis implies that the inner edge streamline of elliptic rings tends to precess faster than the outer one under the action of the planet. In the absence of other forces, the alignment of the inner and outer apses would be very quickly destroyed, streamlines would cross and the ring would become circular. Therefore, this differential precession must be balanced by some dynamical agent, e.g. the ring self-gravity (\citealt{GT79a,GT79b}; see section 7.1). \subsubsection{Density waves} It is also instructive to consider the case of density waves. Such waves are excited by the satellites near resonance locations, and propagate away from the resonance. They are sustained by the self-gravity of the disk, arise from the coherent radial excursions of the ring fluid particles, and look stationary in a particular rotating frame. The eccentricities involved are typically of order $10^{-3}$ to $10^{-5}$.
The form of the fluid particle paths has been known for a long time in galactic dynamics from the work of Lindblad on spiral galaxies. The usual parametrization is most conveniently understood from the following elementary analysis, reproduced from \cite{GT82}. Let us consider the forced linear response of a test particle in circular orbit around an oblate planet, perturbed by a satellite. Expressing the radius and longitude of the test particle as $r=r_0+r_1$ with $r_1\ll r_0$ and $\theta=\theta_0+\Omega t +\theta_1$, where $r_0$ is the radius of the circular orbit and $\Omega$ is the orbital frequency, given by Eq.~(4.6), the linearized equations of motion for $r_1$ and $\theta_1$ read: \begin{plain} $${d^2 r_1\over dt^2}+r_0\left(d\Omega^2\over dr\right)_{r_0}r_1- 2r_0\Omega_0{d\theta_1\over dt}=- \left(\partial\phi_s\over\partial r\right)_{{\bf r}_0},\eqno(4.35)$$ $$r_0^2{d^2\theta_1\over dt^2}+2r_0\Omega_0{dr_1\over dt}=- \left(\partial\phi_s\over\partial\theta\right)_{{\bf r}_0},\eqno(4.36)$$ \end{plain} \noindent where $\phi_s$ is the satellite potential. The satellite is supposed to orbit in the equatorial plane of the planet. Let us call $a_s$ its (epicyclic) semimajor axis, $e_s$ its eccentricity, and ${\varpi_s}$ its periapse angle; $\kappa_s$ is the epicyclic frequency evaluated at $a_s$. At any given time, the satellite potential is of course periodic in azimuth. Furthermore, in a frame rotating at $\dot\varpi_s$, the satellite orbit is closed of period $2\pi/\kappa_s$. 
Thus, the satellite potential can be expanded in a double Fourier series, one in time, and one in azimuth; this yields\footnote{The phase of the satellite has been taken equal to 0 at $t=0$ (the origin of time $t=0$ is chosen when the satellite is at periapse); also, corotation resonances with $a=a_s$ are excluded from the expansion of Eq.~(4.37): $r<a_s$ is assumed.}: \begin{plain} $$\phi_s(r,\theta,t)=\sum_{m=0}^\infty\sum_{k=-\infty}^\infty \Phi_{mk}\left({r/a_s}\right)\cos\left[m(\theta-\dot\varpi_s t)- (m+k)\kappa_s t\right].\eqno(4.37)$$ \end{plain} The Fourier coefficients $\Phi_{mk}$ are expressed in terms of Laplace coefficients (\citealt{MD99}; a particularly synthetic and convenient derivation can be found in the appendix A of \citealt{Sh84}): \begin{plain} $$b_{1/2}^m (\alpha)={2\over\pi}\int_0^{\pi}{\cos mu\ du\over(1-2\alpha \cos u +\alpha^2)^{1/2}}.\eqno(4.38)$$ \end{plain} The coefficients $\Phi_{mk}$ are of order $e_s^{|k|}$, so that only the Fourier coefficients with small $k$ are important in practice, because usually $e_s\ll 1$. Let us define $\alpha=r/a_s$, and call $M_s$ the mass of the satellite. The terms of order $|k|\le 1$ read: \begin{plain} $$\Phi_{m0}=-{GM_s\over a_s}{b_{1/2}^m(\alpha)-\delta_{m1}\alpha\over 1+\delta_{m0}},\eqno(4.39)$$ $$\Phi_{m,\pm1}=-{GM_s e_s\over a_s}{\left[{1\over 2}\left(1\pm 2m +\alpha{d\over d\alpha}\right)b_{1/2}^m(\alpha)-\alpha\delta_{m1}(1\pm 1)\right]\over 1+\delta_{m0}},\eqno(4.40)$$ \end{plain} \noindent where $\delta_{ij}$ is the Kronecker delta symbol (the contribution of the indirect term is included). 
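The Laplace coefficients of Eq.~(4.38) are straightforward to evaluate numerically; the sketch below uses a simple midpoint quadrature (the function name and step count are our own choices for this sketch, not part of the notes).

```python
import math

def laplace_b(s, m, alpha, N=20000):
    """Laplace coefficient b_s^m(alpha) of Eq. (4.38), midpoint quadrature."""
    h = math.pi / N
    total = 0.0
    for i in range(N):
        u = (i + 0.5) * h
        total += math.cos(m * u) / (1.0 - 2.0 * alpha * math.cos(u)
                                    + alpha * alpha) ** s
    return (2.0 / math.pi) * total * h
```

As a sanity check, $b_{1/2}^0(0)=2$ exactly, and for small $\alpha$ one has the leading-order behavior $b_{1/2}^m(\alpha)\propto\alpha^m$, e.g. $b_{1/2}^1(\alpha)\simeq\alpha(1+3\alpha^2/8)$.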
For a single Fourier component, the solution of the linearized equations of motion reads: \begin{plain} $$r_1=\left\{ {\cos[m(\Omega-\Omega_p)t+m\theta_0]\over{m^2(\Omega -\Omega_p)^2- \kappa^2}}\left({d\Phi_{mk}\over dr}+{2m\Omega\over m(\Omega-\Omega_p)r} \Phi_{mk}\right)\right\} _{r_0},\eqno(4.41)$$ \end{plain} \begin{plain} $$\eqalignno{\theta_1=&-\left\{ {\sin [m(\Omega-\Omega_p)t+m\theta_0]\over m^2(\Omega-\Omega_p)^2-\kappa^2}\left({2\Omega\over m(\Omega-\Omega_p)r} {d\Phi_{mk}\over dr}\right.\right.+\cr & \left.\left.\left[{4\Omega^2-\kappa^2\over m^2(\Omega-\Omega_p)^2}+1\right]{m\Phi_{mk}\over r^2}\right)\right\}_{r_0},&(4.42)\cr}$$ \end{plain} \noindent where $\Omega_p = \Omega_s + (k/m)\kappa_s$ is the so-called pattern speed: it is the angular speed of the frame in which the $(m,k)$ potential component is stationary. The linear response is singular either when $\Omega_p=\Omega_0$ (corotation resonance) or when $\kappa_0 =\pm m(\Omega_0-\Omega_p)$ (Lindblad resonance). The corotation resonances that lie within a ring arise because the satellite orbit is eccentric, and have $|k|\ge 1$; the $k=0$ corotation resonance (the strongest) occurs at the satellite radius. The inner Lindblad resonance occurs inside the corotation resonance, and corresponds to the positive sign; the other one is the outer Lindblad resonance, and lies outside. If the multipole moments of the planet potential are neglected, the Lindblad (resp. corotation) resonance condition reduces to $\Omega_0/\Omega_s=(m+k)/ (m\mp 1)$ (resp. $\Omega_0/\Omega_s=(m+k)/m$). This ratio is often used to label a resonance. For example, the $k=0$, $m=2$ inner Lindblad resonance is called a $2:1$ resonance (the outer edge of Saturn's B ring corresponds to such a resonance with the satellite Mimas). The resonance condition implicitly defines the resonance radius $r_R$, which is the only radius for which the condition is satisfied.
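As an illustration of this labelling, the following sketch locates the Mimas $2:1$ inner Lindblad resonance ($m=2$, $k=0$) with the multipole moments of the planet neglected, so that $\Omega\propto a^{-3/2}$ and the condition $\Omega_0/\Omega_s=(m+k)/(m-1)$ gives $r_R=a_s[(m-1)/(m+k)]^{2/3}$; the value adopted for Mimas' semimajor axis is an approximate, assumed number.

```python
# Kepler estimate of the Mimas 2:1 inner Lindblad resonance radius.
a_s = 1.85539e8          # m, semimajor axis of Mimas (approximate)
m, k = 2, 0
r_R = a_s * ((m - 1) / (m + k)) ** (2.0 / 3.0)   # resonance radius, m
```

This lands close to the observed outer edge of the B ring ($\simeq 1.176\times 10^8$ m); much of the remaining offset comes from the neglected oblateness corrections to $\Omega$ and $\kappa$.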
Near a resonance, the radial perturbation $r_1$ can be expressed as a function of $\theta$ and of the distance to the resonance $\Delta r=r_0-r_R$. For a corotation resonance, we have: \begin{plain} $$r_1\simeq {A^c_{mk}\over \Delta r}\cos m(\theta-\Omega_p t),\eqno(4.43)$$ \end{plain} \noindent with \begin{plain} $$A^c_{mk}=\left({4\Phi_{mk}\over 3\Omega^2}\right)_{r_R},\eqno(4.44)$$ \end{plain} \noindent whereas for a Lindblad resonance, one obtains: \begin{plain} $$r_1\simeq {A^L_{mk}\over \Delta r}\cos m(\theta-\Omega_p t),\eqno(4.45)$$ \end{plain} \begin{plain} $$A^L_{mk}= \left[{1\over 3\Omega^2(1\mp m)}\left(r{d\Phi_{mk}\over dr} \pm 2m\Phi_{mk}\right)\right]_{r_R}.\eqno(4.46)$$ \end{plain} Note that this solution describes the circulating orbits at corotation resonances, and the librating orbits at Lindblad resonances. Density waves at corotation resonances will not be discussed in this lecture, and from now on, only Lindblad resonances are considered. Notice also that the amplitudes $A_{mk}^L/\Delta r\sim r^2 M_s/[M_p(r-r_R)]$, and that in a region of width $\Delta r\sim r(M_s/M_p)^{1/2}$, these test particle orbits intersect. Therefore, collective effects are expected to be important in this region. Note finally that the relative sign of $r_1$ and $\theta_1$ is preserved at the outer Lindblad resonance (OLR) with respect to the inner one (ILR) upon the substitution of $m(\Omega - \Omega_p)=\pm \kappa$ (preserving the usual direction of epicyclic motions at both resonances). Although this test particle solution is somewhat unrealistic\footnote{In any case, this solution breaks down too close to the resonance where the condition $r_1\ll r_0$ is no longer satisfied.}, it indicates that ring streamlines can be chosen, to first order in eccentricity, as sinusoidal functions of the basic angle $m(\theta-\Omega_p t)$ when one considers a density wave driven by the $(m,k)$ component of the satellite potential.
This choice reflects the fact that the ring fluid particles behave like a forced oscillator, responding with the time and angular dependence of the forcing. Notice also that all particles with the same semimajor axis have the same eccentricity and periapse angle in this simple test particle solution. Similarly, density waves are special solutions for which the eccentricities of the fluid particles are functions of the semimajor axis only. For density waves at Lindblad resonances, we are therefore motivated to assume that the streamlines are parametrized by (see Figure~3): \begin{plain} $$r(a_e,\phi)=a_e\{1-\epsilon\cos [m(\theta -\Omega_p t) +m\Delta]\},\eqno(4.47)$$ \end{plain} \noindent where $\epsilon\ll 1$ everywhere in the wave region\footnote{Collective effects prevent the divergence seen in the test particle solution at the resonance (see section 7). Notice also that, although $\epsilon\ll 1$, the density contrast can be highly nonlinear (see section 4.4) so that Eq.~(4.47) describes both linear and nonlinear density waves.}. Finding solutions of this type will demonstrate {\it a posteriori} that our assumption is correct\footnote{A formal justification of Eq.~(4.47) is also provided from first principles by \cite{Sh84} in an \textit{ab initio} analysis of nonlinear density waves.}. Combining Eqs.~(4.11) and (4.12) necessarily yields $r=a_e[1-\epsilon\cos (\varphi -\varpi_e)]$ to lowest order in eccentricity. This is compatible with Eq.~(4.47) only if $m(\varphi-\Omega_p t)+m\Delta = \varphi -\varpi_e$, i.e., if \begin{plain} $$m(\Omega_a-\Omega_p)=\Omega_a-\dot\varpi_e,\eqno(4.48)$$ $$\varpi_0=\varphi_0(1-m)-m\Delta,\eqno(4.49)$$ \end{plain} \noindent where $\varpi_0$ and $\varphi_0$ are the periapse angle and mean longitude of the fluid particle at $t=0$.
Note that in Eq.~(4.48) $\dot\varpi_e$ includes the precession rates due to the perturbations; the contribution of the perturbations to $d\varphi/dt$ is negligible compared to their contribution to $\dot\varpi_e$, to leading order in $\epsilon$ [see Eqs. (4.12), (4.22) and (4.23)]. The second relation expresses the condition that the initial periapse angles and initial azimuths of particles having the same semi-major axis must satisfy in order for these particles to belong both to the streamline Eq. (4.47) and to their epicyclic orbit around the planet (it generalizes the equivalent condition for elliptic rings, which is that all fluid particles with the same semimajor axis have the same periapse angle). Eq.~(4.48) expresses the Lindblad resonance condition, and is required if the ring fluid particles are to belong to the streamline and to their natural epicyclic orbits at all times, and not only the initial one. These requirements follow from the fact that all forces are small in comparison with the planet attraction, so that the streamlines of the flow cannot differ very much from the natural fluid particle orbits. The phase angle $\Delta$ has been added for the following reason. Note that the trajectories of the fluid particles having epicyclic semimajor axes equal to the resonance radius and eccentricity $\epsilon$, which are elliptic in an inertial reference frame, appear as the $m$-lobe shape of Eq.~(4.47) when viewed in a frame rotating at $\Omega_p$. This is a purely geometric and kinematic effect; Eq. (4.49) then states that the trajectories of all such particles will be identical in the rotating frame.
However, for fluid test particles, this is true only at the resonance, and density waves cannot exist, as the precession rate $\dot\varpi_e$ is imposed by the planet only: for example, just outside the resonance [i.e., for fluid test particles of semi-major axis $a>a_R$, where $a_R$ is the semimajor axis at resonance, defined by $\kappa_{a_R}=m(\Omega_{a_R}-\Omega_p)$], $m(\Omega_a-\Omega_p) \not=\kappa_a$, and the streamlines appear to have an angular velocity $\kappa_a-m(\Omega_a-\Omega_p)$ with respect to the streamline at the resonance, quickly leading to streamline crossing and to the destruction of any density wave pattern, such as the one shown on Figure~3. Thus, fluid test particles cannot support free density waves. But the ring self-gravity can produce a contribution $\dot\varpi_{sg}$ to the precession rate which actually cancels this secular drift, so that the resonance condition is satisfied throughout the wave region. As the ring gravity is nevertheless a very small force, it can only reach the right magnitude when the phase shift between adjacent streamlines is large enough, i.e., when the WKBJ, or tight-winding condition \begin{plain} $$ma_e\left|{d\Delta\over da_e}\right|\gg 1,\eqno(4.50)$$ \end{plain} \noindent is satisfied. This is why density waves are so tightly-wound in rings\footnote{Waves in rings are forced density waves. However, the coupling of the wave with the forcing potential occurs at the resonance on a small fraction of the wave zone, and the wave propagates essentially as a free wave on most of its radial extent.}. In spiral galaxies, as the self-gravity of the disk dominates the gravity from the central bulge, the spiral arms appear much more open. We will return to the discussion of density waves and justify these assertions later on in section 7, after having introduced the dynamical tools of the streamline formalism.
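An order-of-magnitude estimate makes the need for such a balancing agent concrete. Since $\dot\varpi\propto a^{-7/2}$ in a $J_2$-dominated field, the apses of the inner and outer edges of a narrow ring of width $\delta a$ drift apart at a rate $\simeq(7/2)\dot\varpi\,\delta a/a$. The sketch below evaluates the time needed for a misalignment of $\pi$; the numerical values are assumed, approximate figures loosely inspired by the Uranus $\epsilon$ ring.

```python
import math

# Assumed, approximate Uranus / epsilon-ring parameters:
GM_p = 5.794e15      # m^3 s^-2, G M_p
R_p  = 2.5559e7      # m, reference radius for J2
J2   = 3.34e-3       # quadrupole coefficient
a    = 5.1149e7      # m, ring semimajor axis
da   = 6.0e4         # m, ring radial width

n = math.sqrt(GM_p / a**3)
pomega_dot = 1.5 * J2 * (R_p / a)**2 * n       # apsidal precession, rad/s
# pomega_dot ~ a^{-7/2}, hence a spread across the ring of:
d_pomega_dot = 3.5 * pomega_dot / a * da
t_misalign = math.pi / d_pomega_dot            # time to shift apses by pi

deg_per_day = math.degrees(pomega_dot) * 86400.0
years = t_misalign / 3.156e7
```

The precession rate is of order a degree per day, and the apses would be misaligned within roughly a century, vastly shorter than the age of the ring; hence the differential precession must be balanced, e.g. by self-gravity.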
\subsubsection{Eccentric modes} As a last example, let us consider the case of the $\gamma$ and $\delta$ rings of Uranus, whose streamlines are not described by the simple elliptic shape Eq.~(4.34). Actually, the $\delta$ ring is well-fitted by Eq.~(4.47), with $m=2$, and with $\Delta$ almost constant across the ring. This last characteristic allows us to introduce the notion of {\it mode}: a mode is a global sinusoidal oscillation of a ring, with streamlines given by Eq.~(4.47) where $\epsilon$ and $\Delta$ depend on $a_e$, and where $\Delta$, as we just said, is more or less constant across the ring; a mode is characterized by its number of lobes $m$ and its pattern speed $\Omega_p$. Elliptic rings enter this definition, with $m=1$ and $m\Delta-m\Omega_p t=-\varpi_e$; note that in this case, Eq.~(4.49) implies as required that the periapse angle depends only on semimajor axis, and not on $\theta_0$. Note also that spiral density waves are excluded from this definition (but not standing waves). In these notes only single mode motions are considered. The generalisation to multimode motions is briefly discussed in \cite{L89b}.
\begin{figure} \centering \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{./Figures/DW.jpg} \caption{$m$=2 density wave} \label{densitywave} \end{subfigure} \hfill \begin{subfigure}[b]{0.21\textwidth} \includegraphics[width=\textwidth]{./Figures/Ring.jpg} \caption{$m$=1 ring} \label{m1ring} \end{subfigure} \hfill \begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=\textwidth]{./Figures/Ring2.jpg} \caption{$m$=2 ring} \label{m2ring} \end{subfigure} \caption{\small{Examples of eccentric fluid motion in rings.}}\label{fig:ecc} \end{figure} Because all forces are much smaller than the planet's, and because the radial extent of the rings in which these modes are found is very small, the contributions of the perturbations to the frequencies and precession rates are much smaller than the planet's, and the pattern speed of the modes obeys the condition: \begin{plain} $$m(\Omega_a-\Omega_p)-(\Omega_a-\dot\varpi_{plan})\ll \Omega_a-\dot\varpi_{plan}.\eqno(4.51)$$ \end{plain} Note that for density waves, the precession rate is not limited in theory, because the winding can, in principle, be as high as needed. However, the winding is nevertheless meaningful only as long as the wavelength is larger than the typical particle size, and in any case, the wave is damped before this limit is reached, so that in practice Eq.~(4.51) applies to density waves as well. The streamlines of an $m=2$ density wave, and of $m=1,2$ ringlets are displayed on Figure~3 for comparison. Let us now consider the case of the $m=0$ mode of the $\gamma$ ring. To understand the basic kinematic properties of this mode, let us consider an infinitely thin ringlet whose fluid particles orbit on eccentric trajectories, with the same semimajor axis $a_e$ and the same eccentricity $\epsilon$. 
The periapse angles are evenly distributed, and the phases of the ring particles on their orbits are initially all identical, so that at any given time, all the fluid particles are at the same radial distance from the planet. This situation is schematically depicted on Figure~4 where the positions of the fluid particles are represented by dots; the particles belong both to their epicyclic orbit and to the circular ring; particles never collide. Note that this situation is substantially different from the case of the other modes. In Figure~3 for example, ring fluid particles are present all along any given eccentric orbit (the fluid particle orbits and streamlines are identical in the rotating frame). Here, the orbits are essentially empty and the fluid particles are confined to a special point along the orbit. At any given time, the ring appears circular. Its radius oscillates sinusoidally at the orbital frequency: $r=a_e(1- \epsilon\cos\zeta)$ where $d\zeta/dt=\kappa_a$. As the number of lobes $m$ of the other modes refers to their azimuthal structure (it is the azimuthal wavenumber), this type of motion actually corresponds to the case $m=0$ in Eulerian analyses, i.e., a purely radial motion. \begin{figure} \centering \includegraphics[width=0.4\linewidth]{./Figures/M0.jpg} \caption{\small{Sketch of an infinitely narrow $m=0$ mode. All fluid particles belong to the same circle oscillating radially, while occupying a given but identical azimuthal location along their individual orbits.
This mode is kinematically different from $m\neq 0$ modes, for which orbits and streamlines are identical in the rotating frame.}} \label{fig:M0} \end{figure} Turning back to the case of a real ring (not infinitely thin), one sees that the ring streamlines can be chosen as: \begin{plain} $$r=a_e\left[1-\epsilon(a_e)\cos(\Omega_p t + \Delta(a_e))\right].\eqno(4.52)$$ \end{plain} Here again, because all perturbing forces are weak, Eq.~(4.52) is compatible with $r=a_e[1-\epsilon\cos(\theta-\varpi_e)]$ only if $\Omega_p - (\Omega_a-\dot\varpi_{plan}) \ll \Omega_a - \dot\varpi_{plan}$. However, we shall see that the perturbing forces, although very weak, are nevertheless essential, because they can counteract the action of the Keplerian and precession shear, and therefore allow the mode to exist. \subsubsection{Summary} Let us summarize and complete the results obtained so far on ring kinematics. The results are discussed assuming $m\neq 0$, but can be transposed to the $m=0$ case by replacing $-m\Omega_p$ by $\Omega_p$ and $m\Delta$ by $\Delta$ in the following equations [compare Eqs.~(4.47) and (4.52)]. Unperturbed ring fluid particles travel on epicyclic orbits, which read\footnote{From now on, we keep terms up to first order in eccentricity only, as second order terms are generally not accessible from the data.}: \begin{plain} $$r=a_e[1-\epsilon\cos M_e],\eqno(4.53)$$ $$\theta=\varpi_e+ M_e+2{\Omega_a\over\kappa_a}\epsilon\sin M_e,\eqno(4.54)$$ \end{plain} \noindent where ($a_e, \epsilon,\varpi_e, M_e$) are the ``constants" of the motion, and are functions of the fluid particle initial position ($\varpi_e$ and $ M_e$ depend on time as well). They can equivalently be considered as functions of $a_e$ and $\theta_0$ (the fluid particle initial phase), or of $a_e$ and $\varphi\equiv M_e +\varpi_e$; $ M_e$ is given by Eq.~(4.13), $\varpi_e=\varpi_0+\int_0^t(\Omega_a-\kappa_a)dt$, and $\Omega_a$ and $\kappa_a$ are defined in Eqs.~(4.15) and (4.16). 
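As a toy numerical illustration of the epicyclic orbits (4.53)-(4.54), specialized to the $m=0$ mode discussed above, the sketch below (all numbers hypothetical, $\Omega_a/\kappa_a$ set to 1) places particles with evenly distributed periapse angles and a common orbital phase, and checks that at any instant they all sit on a single circle whose radius oscillates in time.

```python
import math

# Hypothetical toy values: particles share a_e, eps and the orbital
# phase M_e; their periapse angles varpi_e are evenly distributed.
a_e, eps, N = 1.0, 0.05, 12
Omega_a = kappa_a = 1.0          # near-Keplerian simplification

def particle(i, M_e):
    """Position (r, theta) of particle i at common orbital phase M_e."""
    varpi_e = 2.0 * math.pi * i / N
    r = a_e * (1.0 - eps * math.cos(M_e))                        # Eq. (4.53)
    theta = varpi_e + M_e + 2.0 * (Omega_a / kappa_a) * eps * math.sin(M_e)
    return r, theta                                              # Eq. (4.54)

for M_e in (0.0, 0.7, 2.1):
    radii = [particle(i, M_e)[0] for i in range(N)]
    # the ring stays circular: every particle has the same radius
    assert max(radii) == min(radii)
```

The radius runs from $a_e(1-\epsilon)$ at $M_e=0$ to $a_e(1+\epsilon)$ at $M_e=\pi$, while the azimuths stay evenly spread, as in Figure~\ref{fig:M0}.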
This fluid test particle solution is perturbed by the ring self-gravity, the ring viscous stress, and the satellites. These perturbations induce a time dependence of the ``constants" of the motion which is expressed by Eqs.~(4.20) through (4.23). Differentiating Eqs.~(4.53) and (4.54) with respect to time yields the velocity field of the unperturbed motion: \begin{plain} $$u_r={dr\over dt}=a_e\epsilon\kappa_a\sin M_e,\eqno(4.55)$$ $$u_\theta=r{d\theta\over dt}=a_e\Omega_a[1+\epsilon\cos M_e].\eqno(4.56)$$ \end{plain} As usual, Eqs.~(4.53) through (4.56) apply to the perturbed fluid particle motion as well: they give its osculating position and velocity. On the other hand, the collective motion of ring fluid particles is \textit{assumed} to constitute an $m$-lobe mode in a frame rotating with an angular velocity noted $\Omega_p$: \begin{plain} $$r(a_e,\varphi,t)=a_e\{1-\epsilon(a_e,t)\cos(m(\varphi-\Omega_p t)+ m\Delta(a_e,t))\}.\eqno(4.57)$$ \end{plain} \noindent In these relations, ($a_e,\varphi$) represent the circular motion the fluid particle would have in the absence of perturbation, and are used as (semi-)Lagrangian labels. This choice is clearly motivated by observations, and the dynamical equations self-consistently specify the conditions of existence of such motions. Quite often, the $m$-lobe shape is stationary and $\epsilon$ and $\Delta$ are independent of $t$; a time dependence can occur, e.g., due to viscous overstabilities or during the relaxation phase to a stationary state, in which case it is transient. 
Eqs.~(4.57) and (4.53) can be satisfied simultaneously only if \begin{plain} $$M_e=m(\varphi-\Omega_p t)+m\Delta,\eqno(4.58)$$ \end{plain} \noindent i.e., if the following relations are satisfied \begin{plain} $${dm\Delta \over dt}=-m(\Omega_a-\Omega_p)+\Omega_a-\dot\varpi_e,\eqno(4.59)$$ $$\varpi_0=\varphi_0(1-m)-m\Delta_0,\eqno(4.60)$$ \end{plain} \noindent where $\varpi_0$ and $\varphi_0$ are the periapse angle and azimuth of the fluid particle at $t=0$, and $\Delta_0$ the phase at the same time [remember that the contribution of the perturbations to $d\varphi/dt$ is negligible; see the discussion after Eq.~(4.49)]. Note that for stationary patterns, equation (4.59) implies that \begin{plain} $$\eqalign{\dot\varpi_{pert.} & = -m(\Omega_a-\Omega_p)+(\Omega_a-\dot\varpi_{plan.})\cr & \simeq\left[\frac{3}{2}(m-1)+{21\over 4}\left(1+{m-1\over 2}\right)\left({R_p\over a_R}\right)^2 J_2\right] \times \left({G M_P\over a_R^3}\right)^{1/2}\left({a-a_R\over a_R}\right)}\eqno(4.61)$$ \end{plain} \noindent where $a_R$ is the resonance radius, and where $\dot\varpi_{pert.}$ and $\dot\varpi_{plan.}$ are the contributions of the perturbing forces and of the planet to the precession rate, respectively. We have up to now encountered two basic time-scales: \begin{enumerate} \item The short or orbital time-scale. \item The intermediate or ``synodic" time-scale arising from the secular drift of test particle streamlines with respect to one another in the vicinity of the ``resonance" radius $a_R$ implicitly defined by the relation $m(\Omega-\Omega_p)=\kappa$. This time scale is of order $[\Omega_a\delta a/a]^{-1}$ (or $[J_2\Omega_a\delta a/a]^{-1}$ if $m=1$) where $\delta a$ is the width of the perturbed region; also, as argued in the paragraph around Eq.~(4.51), $\delta a/a \ll 1$. \end{enumerate} One sees that the perturbing accelerations must produce a secular variation of the line of the apses on the intermediate time-scale. 
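The stationarity condition (4.59), with the unperturbed test-particle precession rate $\dot\varpi_e\simeq\Omega_a-\kappa_a$, fixes the pattern speed at $\Omega_p=\Omega_a-\kappa_a/m$, i.e. the resonance relation $m(\Omega_a-\Omega_p)=\kappa_a$ quoted above. A minimal numerical sketch, using the standard first-order-in-$J_2$ expressions for $\Omega_a$ and $\kappa_a$ and purely hypothetical parameter values (not Saturn's):

```python
import math

# Toy parameters (hypothetical): GM, semimajor axis, planet radius, J2.
GM, a, Rp, J2 = 1.0, 1.0, 0.4, 0.01
n2 = GM / a**3
# Standard first-order-in-J2 frequencies for an oblate planet.
Omega_a = math.sqrt(n2 * (1.0 + 1.5 * J2 * (Rp / a) ** 2))
kappa_a = math.sqrt(n2 * (1.0 - 1.5 * J2 * (Rp / a) ** 2))

def pattern_speed(m):
    """Pattern speed of a stationary m-lobe mode: m(Omega - Omega_p) = kappa."""
    return Omega_a - kappa_a / m

# m = 1: Omega_p is the apsidal precession rate Omega_a - kappa_a,
# which is of order J2 * Omega_a -- an elliptic ring precesses slowly.
assert abs(pattern_speed(1) / Omega_a) < 3 * J2
# m >= 2: Omega_p is a finite fraction of Omega_a.
assert pattern_speed(2) > 0.4 * Omega_a
```

This makes concrete why the $m=1$ "synodic" time-scale carries the extra factor $J_2$ mentioned in the enumeration above.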
Because the perturbing forces are weak, $\dot\varpi_{pert}\ll\Omega_a-\dot\varpi_{plan.}$ (or equivalently $a_e-a_R\ll a_R$), and the motion is mainly imposed by the planet. The small required contribution $\dot\varpi_{pert}$ to the precession rate is in general provided by the ring self-gravity. Note also that for an elliptic ring ($m=1$), the required contribution of the perturbations to the precession rate is down by a factor $J_2$. Therefore the $m=1$ mode is easier to maintain than other modes, and elliptic rings are more common; this is a natural result, because ellipses are the natural form of oscillations of the fluid particles around the planet. In spiral galaxies, as the central bulge does not dominate the gravity, the most common mode of oscillation corresponds to $m=2$, as can be expected for objects with a flat rotation curve. Thus, two-armed spiral galaxies tend to be more common. Let us conclude this section with a final comment. Generally, unperturbed rings appear circular, and are described by $r=a_e$; the motion of ring fluid particles reduces to $r=a_e, \theta=\varphi$. When the ring is perturbed, it is useful to express the perturbed position ($r,\theta$) of a fluid particle in terms of the unperturbed position ($a_e,\varphi$) it would have in the absence of perturbation. This is easily performed from Eqs.~(4.53), (4.54) and (4.58), and yields: \begin{plain} $$r=a_e[1-\epsilon\cos(m(\varphi-\Omega_p t)+ m\Delta)],\eqno(4.62)$$ $$\theta =\varphi+2\left(\Omega_a\over\kappa_a\right) \epsilon\sin[m(\varphi-\Omega_p t) +m\Delta].\eqno(4.63)$$ \end{plain} Note that these equations define at any given time an Eulerian change of variables from ($r,\theta$) to ($a_e, \varphi$). This property will sometimes be used in the remainder of these notes. 
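Equations (4.62)-(4.63) can be checked by following one fluid particle (fixed Lagrangian labels $a_e$, $\varphi_0$, with $d\varphi/dt=\Omega_a$) and differentiating its position numerically; the finite-difference velocities match the analytic first-order field derived in the next paragraph. All numbers below are arbitrary toy values with $\Omega_a=\kappa_a=1$.

```python
import math

# Hypothetical toy values; eps is kept small so that the first-order
# velocity field is accurate to the tolerances used below.
a_e, eps, m, Delta = 1.0, 1e-4, 2, 0.3
Omega_a = kappa_a = 1.0
Omega_p = 0.2

def position(t, phi0):
    phi = phi0 + Omega_a * t                  # unperturbed azimuth label
    arg = m * (phi - Omega_p * t) + m * Delta
    r = a_e * (1.0 - eps * math.cos(arg))                          # Eq. (4.62)
    theta = phi + 2.0 * (Omega_a / kappa_a) * eps * math.sin(arg)  # Eq. (4.63)
    return r, theta

def velocity(t, phi0):
    """First-order analytic velocity field of the m-lobe pattern."""
    phi = phi0 + Omega_a * t
    arg = m * (phi - Omega_p * t) + m * Delta
    u_r = m * a_e * eps * (Omega_a - Omega_p) * math.sin(arg)
    u_t = a_e * Omega_a * (1.0 + eps * (2.0 * m * (Omega_a - Omega_p)
                                        / kappa_a - 1.0) * math.cos(arg))
    return u_r, u_t

t, phi0, h = 0.8, 0.5, 1e-6
(r1, th1), (r2, th2) = position(t - h, phi0), position(t + h, phi0)
r, _ = position(t, phi0)
u_r, u_t = velocity(t, phi0)
assert abs((r2 - r1) / (2 * h) - u_r) < 1e-9          # u_r = dr/dt
assert abs(r * (th2 - th1) / (2 * h) - u_t) < 1e-6    # u_theta = r dtheta/dt
```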
Note also that differentiation of Eqs.~(4.62) and (4.63) yields the following expressions for the velocity field: \begin{plain} $$u_r= ma_e\epsilon(\Omega_a-\Omega_p)\sin[m(\varphi-\Omega_p t)+m\Delta],\eqno(4.64)$$ $$u_\theta=a_e\Omega_a\left[1+\epsilon\left(2{m(\Omega_a-\Omega_p)\over \kappa_a}-1\right) \cos(m(\varphi-\Omega_p t)+m\Delta)\right].\eqno(4.65)$$ \end{plain} The ``kinematic" set, Eqs. (4.62), (4.63), (4.64) and (4.65) differs from the ``osculating" set, Eqs. (4.53), (4.54), (4.55) and (4.56) by terms of order $\epsilon \delta a_e/a_e\ (\ll\epsilon)$, where $\delta a_e$ is the distance to the resonance radius, implying that the osculating elements differ from the (correct) kinematic ones by terms of the same order\footnote{Note that the epicyclic frequency appearing in these equations is defined by Eq.~(4.7) and does not include the effect of the perturbations on the periapse angle.}. This difference is accounted for by the short period terms of the osculating elements. However, such terms are usually negligible, and the difference between the two types of elements does not have to be specified. In the derivation of these equations, as well as in the remainder of these notes, we have assumed that $a_e$, $\epsilon$, and $\Delta$ are time independent. However, most results, if not all, are still valid if these quantities are time-dependent, provided that they vary on time-scales much longer than the orbital period, as is the case in section 7.1. \subsection{Ring surface density}\label{sec:surf} Having solved the equation of motion, we wish now to consider the equation of mass conservation [Eq.~(3.16)]. In fact, we are not going to solve directly this differential equation, but instead present a general and well-known Lagrangian solution directly from the constraint of mass conservation between the perturbed (elliptic) and unperturbed (circular) states of a ring. 
The following argument should be more properly developed in integral form, but we will just present it in differential form, as this does not affect the result. Let us consider an elementary mass element $\delta M$, defined as the mass between the two streamlines of semimajor axis $a_e$ and $a_e+da_e$, and the two unperturbed azimuths $\varphi$ and $\varphi+d\varphi$: \begin{plain} $$\delta M = \sigma_0(a_e) a_e da_e d\varphi,\eqno(4.66)$$ \end{plain} \noindent where, by definition, $\sigma_0$ is the surface density in the unperturbed state. It is a function of $a_e$ only, as any nonaxisymmetric feature should be quickly erased by the Keplerian shear and by diffusion\footnote{Ring arcs are not discussed in these notes.}. Let us now perturb this flow, which becomes elliptic. We assume, as argued in the previous sections, that the fluid particle positions are given most generally by Eqs.~(4.62) and (4.63) so that the ring streamlines form $m$-lobe shapes in a frame rotating at $\Omega_p$. The mass element is now given by: \begin{plain} $$\delta M = \sigma(r,\theta) r dr d\theta,\eqno(4.67)$$ \end{plain} \noindent where $\sigma$ is the surface density of the fluid particle in ($r, \theta$). Let us introduce the Jacobian $J$ of the change of variable from the unperturbed flow to the perturbed flow: \begin{plain} $$J=\left|{r\over a_e} {\partial(r,\theta)\over\partial(a_e,\varphi)}\right|, \eqno(4.68)$$ \end{plain} \noindent so that $r\,dr\,d\theta=J a_e\,da_e\,d\varphi$. We can express the perturbed surface density in terms of the unperturbed one and of this Jacobian, using the fact that the fluid element of mass is the same in the two states: \begin{plain} $$\sigma={\sigma_0\over J}.\eqno(4.69)$$ \end{plain} In order to evaluate the Jacobian, we need to evaluate partial derivatives of the (actual) perturbed position with respect to the (fictitious) unperturbed position. Let us start with $\partial r/\partial a_e$. 
As $\epsilon$ and $m\Delta$ are functions of $a_e$ only, one obtains: \begin{plain} $${\partial r\over\partial a_e}=1-q\cos[m(\varphi-\Omega_p t)+m\Delta + \gamma],\eqno(4.70)$$ \end{plain} \noindent where $q$ and $\gamma$ are defined by: \begin{plain} $$q\cos\gamma = {d (a_e\epsilon)\over d a_e},\eqno(4.71)$$ $$q\sin\gamma = ma_e\epsilon{d\Delta\over da_e}.\eqno(4.72)$$ \end{plain} We have just introduced another fundamental parameter: $q$. In general, although $\epsilon \ll 1$, $q$ can be of order unity. Streamlines cross if $|q|>1$. This follows from the fact that the distance between two streamlines, to lowest order in eccentricity, is given by $\Delta r = {\partial r/\partial a_e}\Delta a_e$ where $\Delta r$ and $\Delta a_e$ are the differences in radii and semimajor axes of the two streamlines; this shows also that the distance between two adjacent streamlines varies with azimuth, and so does the width of an elliptic ring, as observed for all the elliptic Uranian rings. Usually, the pressure tensor (if nothing else) will diverge as $q\rightarrow 1$ or before, preventing streamline crossing. \begin{figure}[h] \centering \includegraphics[width=0.5\linewidth]{./Figures/sig.jpg} \caption{\small{Schematic representation of the perturbed ring surface density as a function of azimuth for $q = 0.9$.}} \label{fig:sig} \end{figure} It is easy to check that $\partial\theta/\partial a_e \sim O(q)$, $\partial r/\partial\varphi\sim O(\epsilon)$, and $\partial\theta/\partial\varphi\sim 1+ O(\epsilon)$, so that to lowest order in $\epsilon$, the Jacobian reads: \begin{plain} $$J={\partial r\over\partial a_e}=1-q\cos[m(\varphi-\Omega_p t) + m\Delta +\gamma].\eqno(4.73)$$ \end{plain} \noindent The corresponding variation of the surface density with azimuth is schematically represented on Figure~\ref{fig:sig}. One sees that the surface density azimuthal variations are characterized by narrow peaks and broad troughs. 
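The peak-trough structure of Figure~\ref{fig:sig} follows directly from $\sigma=\sigma_0/J$ with $J=1-q\cos(\hbox{phase})$; a minimal numerical sketch (with $\sigma_0=1$ and $q=0.9$ as arbitrary illustrative values):

```python
import math

# sigma_0 and q are arbitrary illustrative values; q = 0.9 matches the
# schematic case of the figure above.
sigma0, q = 1.0, 0.9

def surface_density(phase):
    J = 1.0 - q * math.cos(phase)   # Jacobian, Eq. (4.73)
    return sigma0 / J               # Eq. (4.69)

samples = [surface_density(2 * math.pi * k / 720) for k in range(720)]
# narrow peak where J = 1 - q (a factor 10 enhancement here), and a
# broad shallow trough where J = 1 + q:
assert abs(max(samples) - sigma0 / (1 - q)) < 1e-9
assert abs(min(samples) - sigma0 / (1 + q)) < 1e-9
# for |q| >= 1 the Jacobian would vanish somewhere and sigma diverge:
# this is the streamline-crossing limit discussed above.
```

The azimuthal mean, $\sigma_0/\sqrt{1-q^2}$, exceeds $\sigma_0$: the narrow peaks dominate the average, which is why high density contrasts coexist with a very small eccentricity.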
An elliptic ring is narrower and denser in the vicinity of its periapse than at its apoapse. This trough-peak behavior is also seen in the optical depth profiles of density waves in Saturn's rings. It is an unavoidable consequence of the basic kinematics discussed in the previous subsections. A major kinematic difference between elliptic rings and density waves is that in elliptic rings, values of $q\gg\epsilon$ arise because of the gradient of eccentricity across the ring, while the phase $\Delta$ remains constant or nearly constant. In density waves, the situation is exactly reversed: the eccentricity is nearly constant in the wave region, while the phase $\Delta$ varies very quickly with increasing semimajor axis [see the tight-winding condition Eq.~(4.50)]. For elliptic rings, $\gamma\simeq 0$, whereas for density waves, $\gamma\simeq\pi/2$. The surface density can reach very high contrasts, although the eccentricity remains very small: this type of nonlinearity is not connected to large deviations from the circular motion, but rather to the differences of deviations between neighboring streamlines. Thus we have the possibility of developing nonlinear theories while still linearizing the motion with respect to $\epsilon$ (but of course not with respect to $q$), which is always a small parameter in ring problems. We have solved the continuity equation, in the sense that we have found the surface density of any ring fluid particle, in terms of three unknown functions: $\sigma_0, q, \gamma$. The reader can check that the {\it nonlinear} Eq.~(3.16) is satisfied for any choice of these three functions, to lowest order in eccentricity. It is impossible to go further on purely kinematical grounds: the magnitude and form of these functions (except maybe $\sigma_0$) are determined by the perturbations. \section{Ring pressure tensor} Papers on the solution of the Boltzmann second-order moment equations in unperturbed flows are rather numerous. 
Some of them have already been quoted in section 2. But there are only three studies to date giving solutions for the second-order moments in perturbed flows. \cite{BGT83b} solve the Boltzmann second-order moment equations with a collision term of the Boltzmann form, modified to take the inelasticity of the collisions into account, assuming identical indestructible spherical particles characterized by a normal coefficient of restitution (there is no coupling with the spin degrees of freedom). This work is an extension of the paper by \cite{GT78a}. \cite{SS85} and \cite{SDLYC85} use instead a Krook collision term, also modified to account for the inelasticity of the collisions. These first two analyses are very similar in spirit and scope, but are mostly restricted to dilute systems (i.e. systems in which the particle size is much smaller than the particle mean separations). On the other hand, \cite{BGT85} present a heuristic analysis of the pressure tensor behavior in dense systems (where the particle size is much bigger than the interparticle distances), which is not based on the Boltzmann second-order moment equations, but on a hydrodynamical approach. This analysis is expected to apply to high optical depth rings. Taken together, these papers give an interesting insight into the pressure tensor behavior under two opposite sets of physical conditions, which more or less span the conditions relevant to ring systems. Both dilute and dense systems will be discussed here. \subsection{Dilute systems} The approach based on the Krook model has a number of advantages: first, as the Krook model is simpler than the Boltzmann collision term, the analysis is somewhat simplified. Also, various generalizations like the inclusion of gravitational encounters are more easily performed. Furthermore, the Krook model can be looked at as a model equation for more complicated collisional operators when its parameters are appropriately chosen. 
Therefore, only the Krook model will be described here. The material of this section is taken from \cite{SS85}, and from \cite{SDLYC85}, unless otherwise specified. In this model the collision term of the Boltzmann equation is expressed as \begin{plain} $$\left(\partial f\over\partial t\right)_c=\nu_c(f_I-f),\eqno(5.1)$$ \end{plain} \noindent where $\nu_c$ is the mean collision frequency of the system under consideration, and $f_I$ is a Maxwellian distribution with the same local number density $\rho$ and mean velocity ${\bf u}$ as $f$: \begin{plain} $$f_I={\rho/M\over(2\pi c_I^2)^{3/2}}\exp \left(-{({\bf v}-{\bf u})^2\over 2c_I^2}\right),\eqno(5.2)$$ \end{plain} \noindent where $M$ is the mean mass of the distribution [see the discussion after Eq.~(3.12)]. To allow for the effects of inelastic collisions, we let $c_I$ differ from $c$. Assuming that a collision reduces the magnitude of the component of the relative velocity along the lines of the centers of the two colliding particles by a factor\footnote{An index {\textit r} has been added to the coefficient of restitution to prevent confusion with the epicyclic eccentricity.} $\epsilon_r < 1$, while it preserves the two components of the tangential velocity, allows us to require \begin{plain} $$3c_I^2=(2+\epsilon_r^2)c^2,\eqno(5.3)$$ \end{plain} \noindent where the coefficient of restitution $\epsilon_r$ has been averaged over all encounters, and can be regarded as a function of $c$. The Krook collision term has the following physical meaning. As we follow the (individual) particles in orbit around the planet, per unit volume of phase space, inelastic collisions remove particles at a rate $\nu_c f$ and restore them with a Maxwellian distribution at a rate $\nu_c f_I$, i.e. isotropically around the mean velocity ${\bf u}$. This model does not examine in any detail the microphysics of the collisions. 
It just expresses the fact that, independently of the mechanics of individual collisions, collisional processes always tend to make the distribution isotropic, and more specifically Maxwellian, on a time-scale comparable to the collisional time-scale. \subsubsection{Collision frequency, effective particle size, and effective optical depth} The collision frequency can be estimated with the following arguments, mostly reproduced from appendix A of \cite{SS85}. Let us first compute the collision frequency for a collection of particles of identical masses $m$ and sizes $R$, assuming that the distribution function is a Maxwellian of velocity dispersion $c$: \begin{plain} $$f={\rho/m\over(2\pi c^2)^{3/2}}\exp\left(-{{\bf w}^2\over 2c^2}\right),\eqno(5.4)$$ \end{plain} \noindent where ${\bf w}={\bf v}-{\bf u}$ is the velocity with respect to the local mean velocity ${\bf u}$, and the vertical distribution of the mass density $\rho$ is assumed isothermal: \begin{plain} $$\rho={\sigma\mu_0\over(2\pi c^2)^{1/2}}\exp\left(-{\mu_0^2 z^2\over 2c^2}\right),\eqno(5.5)$$ \end{plain} \noindent with $\sigma$ equal to the surface mass density. This distribution of density expresses the vertical hydrostatic equilibrium of the ring, i.e. the equilibrium between the planet and disk self-gravity forces on one side, and the pressure on the other. Therefore, the vertical epicyclic frequency $\mu_0$, defined by $\mu_0^2\equiv (\partial^2 \phi/\partial z^2)_{z=0}$, entering Eq.~(5.5) must include the contribution of the disk self-gravity, which can be much larger than the restoring force of the planet in the vertical direction, as will be seen in section 5.2. 
Computing the rate at which a particle with random velocity ${\bf w}_1$ is hit by particles of all velocities ${\bf w}_2$, and averaging over the distribution of ${\bf w}_1$ yields the local averaged collision frequency as: \begin{plain} $$\langle \nu_c\rangle=4\pi R^2{\rho\over m} \int\int{d^3w_1d^3w_2\over (2\pi c^2)^3} |{\bf w}_1-{\bf w}_2|\exp\left(-{{\bf w}_1^2+{\bf w}_2^2\over2c^2}\right).\eqno(5.6)$$ \end{plain} The integrals are most easily computed by performing the change of variable from $({\bf w}_1,{\bf w}_2)$ to $\{{\bf W}={\bf w}_1-{\bf w}_2, {\bf U}=({\bf w}_1+{\bf w}_2)/2\}$, which yields \begin{plain} $$\langle\nu_c\rangle={16\pi^{1/2}\rho c R^2\over m}.\eqno(5.7)$$ \end{plain} We can now compute the vertically averaged collision frequency: \begin{plain} $$\langle\langle\nu_c\rangle\rangle\equiv{1\over\sigma}\int\langle\nu_c \rangle\rho dz = {8\over\pi}\mu_0\tau,\eqno(5.8)$$ \end{plain} \noindent where we have introduced the normal optical depth $\tau=\sigma \pi R^2/m$. We can now generalize this expression to a general distribution of particle sizes. Let $N(R)dR$ be the number of particles per unit disk area having radii between $R$ and $R+dR$. The surface density is related to the particle size distribution by \begin{plain} $$\sigma=\int_{R_1}^{R_2}mN(R) dR,\eqno(5.9)$$ \end{plain} \noindent with $m={4\over 3}\pi \rho_p R^3$, for particles of bulk density $\rho_p$. The two quantities $R_1$ and $R_2$ are the two cutoff radii of the distribution. Typically in Saturn's rings, the distribution $N(R)\propto R^{-3}$, and $R_1\sim 1$ cm and $R_2\sim 5$ m. 
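The identical-particle result (5.8) can be checked end-to-end by direct quadrature, without using any of the intermediate closed forms: the mean relative speed of two Maxwellian particles and the vertical density integral are computed numerically and assembled into the mass-weighted vertical average. All parameter values below are arbitrary; units are chosen so that $c=\mu_0=1$.

```python
import math

# Arbitrary toy values (units c = mu0 = 1).
c, mu0 = 1.0, 1.0
sigma, R, mpart = 3.0, 0.01, 0.002
tau = sigma * math.pi * R**2 / mpart          # normal optical depth

def quad(f, a, b, n=20000):
    """Simple midpoint rule."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# |w1 - w2| is Maxwellian with per-axis variance 2 c^2; its mean speed:
s2 = 2.0 * c * c
mean_W = quad(lambda W: W * 4 * math.pi * W**2
              * (2 * math.pi * s2) ** -1.5
              * math.exp(-W**2 / (2 * s2)), 0.0, 12.0 * c)

# Isothermal vertical profile of Eq. (5.5) and its squared integral.
rho = lambda z: sigma * mu0 / math.sqrt(2 * math.pi * c * c) \
                * math.exp(-mu0**2 * z**2 / (2 * c * c))
int_rho2 = quad(lambda z: rho(z) ** 2, -12 * c / mu0, 12 * c / mu0)

# <<nu_c>> = (1/sigma) * Int nu_c(z) rho dz, with the local rate
# nu_c(z) = (rho/m) * 4 pi R^2 * <|w1 - w2|>.
nu_avg = 4 * math.pi * R**2 / (mpart * sigma) * mean_W * int_rho2
target = (8.0 / math.pi) * mu0 * tau          # Eq. (5.8)
assert abs(nu_avg - target) / target < 1e-4
```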
Following \cite{SS85}, we wish to define an effective binary collision frequency which possesses the following properties: (1) is symmetric with respect to the interchange of the members of the colliding pair, (2) is proportional to the geometric cross section $\pi(R+R')^2$, (3) is weighted by the reduced mass $mm'/(m+m')$ of the colliding pair, and (4) is equivalent to the preceding expression for a $\delta$-function distribution of particle sizes. All these requirements seem natural, but might nevertheless need some comments. We have pointed out that the mean velocity and the pressure tensor are independent of the particle size which allows us to write the mass-dependent distribution function $f(m,{\bf r}, {\bf v})$ as $n(m)f_0({\bf r}, {\bf v})$. Also, Eq.~(5.1) should more properly be written $(\partial f(m)/\partial t)_c=\int dm' \alpha(m,m') \nu_c(m,m') n(m) ({f_0}_I-f_0)$, where $\nu_c(m,m')$ is the frequency of collision of a particle of mass $m$ with particles of mass $m'$\footnote {Note that at this point, $\nu_c$ is \textit{not} symmetric with respect to the interchange of $m$ and $m'$: if there are many more particles of mass $m'$ than of mass $m$, any particle of mass $m$ will collide much more often with a particle of mass $m'$ than the reverse.}. This would indicate first that the collisional change for particles of one mass depends on the collisions with particles of all other masses, and second that not all pairs of masses have the same efficiency in relaxing the distribution of a given mass to a Maxwellian, as indicated by the factor $\alpha$. Indeed, in a collision between two particles of mass $m$ and $m'$, the conservation of momentum requires that the change of momentum of each particle is $\mu O(c)$ where $\mu$ is the reduced mass of the colliding pair. Therefore, the change of velocity of the particle of mass $m$ is $\mu/m O(c)$, and one can take $\alpha\propto\mu/m$. 
Now, remember that we have multiplied the Boltzmann moment equations by the mass and integrated over mass, which shows that the integrated collision term is proportional to $\int dm\ m \int dm' n(m)\mu(m,m')\nu_c(m,m') \alpha(m,m')$, and that the collision frequency must indeed be weighted by the reduced mass of the colliding pairs. Furthermore, Eq.~(5.6) can be generalized to give the collision frequency of a particle of mass $m$ with particles of mass $m'$, resulting in the change of the factor $4\pi R^2 \rho/m$ into $\pi(R+R')^2 n(m')$. This series of arguments would give the effective collision frequency exactly, if it weren't for the efficiency factor $\alpha$ which is only determined to within a multiplicative constant. We can therefore only conclude that the effective collision frequency $\nu_c\propto\int\int dmdm'n(m)n(m')\pi(R+R')^2mm'/(m+m')$. The coefficient of proportionality is constrained by the last requirement. Finally, after getting rid of the vertical dependence of the $n(m)n(m')$ factor by a vertical integration, we can write down the desired collision frequency by inspection as \begin{plain} $$\nu_c={4\mu_0\over\pi\sigma}\int\int \pi(R+R')^2\left(mm'\over m+m'\right) N(R)N(R')dRdR',\eqno(5.10)$$ \end{plain} \noindent where we have dropped the double-average notation. For definiteness, let us take $N(R)=C R^{-3}$. If we assume that $R_2\gg R_1$, then $C=3\sigma/4\pi\rho_pR_2$ from Eq.~(5.9), and the double integral can be computed: \begin{plain} $$\nu_c={6\mu_0\sigma\over{\sqrt 3}\rho_p R_2}.\eqno(5.11)$$ \end{plain} We can put this result under a form similar to (5.8) by defining an effective optical depth and an effective particle size. 
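The closed form (5.11) can be checked by brute-force evaluation of the double integral (5.10) for the power-law distribution $N(R)=CR^{-3}$. The sketch below uses a plain midpoint rule and hypothetical parameter values; the small residual discrepancy comes from the finite lower cutoff $R_1$ and the quadrature step.

```python
import math

# Hypothetical values; units mu0 = rho_p = sigma = 1, radii with R2 >> R1.
mu0, rho_p, sigma = 1.0, 1.0, 1.0
R1, R2 = 0.01, 5.0
C = 3 * sigma / (4 * math.pi * rho_p * R2)       # from Eq. (5.9), R1 -> 0

def nu_c(n=800):
    """Midpoint-rule evaluation of the double integral in Eq. (5.10)."""
    h = (R2 - R1) / n
    Rs = [R1 + (i + 0.5) * h for i in range(n)]
    ms = [(4.0 / 3.0) * math.pi * rho_p * R**3 for R in Rs]
    Ns = [C * R**-3 for R in Rs]
    total = 0.0
    for i, R in enumerate(Rs):
        for j, Rp in enumerate(Rs):
            total += (math.pi * (R + Rp) ** 2
                      * ms[i] * ms[j] / (ms[i] + ms[j]) * Ns[i] * Ns[j])
    return 4 * mu0 / (math.pi * sigma) * total * h * h

closed = 6 * mu0 * sigma / (math.sqrt(3) * rho_p * R2)   # Eq. (5.11)
assert abs(nu_c() - closed) / closed < 0.03
```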
We obtain \begin{plain} $$\nu_c={8\over\pi}\mu_0\tau_e,\eqno(5.12)$$ \end{plain} \noindent with \begin{plain} $$\tau_e={\sigma\pi R_e^2\over m_e}={3\sigma\over 4\rho_p R_e},\eqno(5.13)$$ $$R_e={{\sqrt 3}R_2\over \pi}.\eqno(5.14)$$ \end{plain} Note that this effective optical depth differs from the actual optical depth obtained for $N(R)\propto R^{-3}$. In fact, \begin{plain} $$\tau=\int\pi R^2 N(R)dR={3\sigma\over 4\rho_p R_2}\ln(R_2/R_1),\eqno(5.15)$$ \end{plain} \noindent which is larger than $\tau_e$ by a factor ${\sqrt 3}/\pi\ln(R_2/R_1)$; this factor amounts to 3 or 4 if $R_1\sim 1$ cm and $R_2\sim 500$ cm; $\tau_e$ is the quantity which should enter Eqs. (2.9), (2.10) and (2.12), and particular attention should be paid to this point when comparing theoretical results with observations. Let us conclude this section with a few general comments. First, note that with the provision of Eqs.~(5.3) and (5.12), the Krook model is completely specified: we have expressed its two parameters $c_I$ and $\nu_c$ in terms of the other variables of the problem. The reader should however notice that instead of looking at the Krook model as a physical model in its own right, one could as well consider Eq.~(5.1) as a model equation for more complicated collision terms by keeping some freedom in the choice of $c_I$ and $\nu_c$. For example, \cite{SS85} show that the Boltzmann collision term for smooth identical spheres can be reproduced by adopting $c_I^2=[(1+\epsilon_r)/(3-\epsilon_r)]c^2$ and $\nu_c=(4\mu_0/3\pi)(3-\epsilon_r)(1+\epsilon_r)\tau_e$ instead of Eqs.~(5.3) and (5.12). Finally, let us note that corrective factors can be added to Eq.~(5.12) to account for the effects of the anisotropy of the distribution and of the possible close-packing of the ring particles\footnote{If the particles are close-packed, i.e. 
if their radii are not small in comparison with their mean distances, the particles have to travel much less than their relative distance before a collision takes place, resulting in an increase of the collision frequency. This effect will be further discussed in section 5.2.}. However, \cite{SDLYC85} show that the anisotropy correction does not produce important modifications. The effect of the close-packing correction has not been analyzed in the literature. We will therefore not include it here. We merely note that Eq.~(5.12) applies only to dilute systems, i.e. systems in which the filling factor is much smaller than unity. \subsubsection{Quasi-equilibrium of perturbed and unperturbed ring systems} We can pursue the program outlined at the end of section 3: we can now look for the steady-state solution of the pressure tensor equations with the velocity field of Eqs.~(4.64) and (4.65), and the ring surface density of Eqs.~(4.69) and (4.73). Let us first recast Eqs.~(3.21) through (3.24) in an appropriate form. First, notice that to lowest order in $\epsilon$, $\partial/\partial r= (1/J)\partial/\partial a_e$, and $d/dt\equiv \partial/\partial t+ \Omega\partial/\partial\theta +u_r\partial/\partial r= \Omega\partial/ \partial M'$ in steady-state, where $M'= M_e+\gamma=m(\varphi-\Omega_pt)+m\Delta +\gamma$ [Eq.~(4.58) has been used]. Notice that the steady-state condition $d/dt=\Omega\partial/\partial M'$ can be obtained either in an inertial frame or in a frame rotating with angular speed $\Omega_p$. In such a rotating frame, the pressure tensor is time-independent because ${\bf u}$ and $\sigma$ are time-independent. The steady-state condition expresses the fact that the pressure tensor depends on time only through $M'$. Any other time dependence disappears on a time-scale $\sim \Omega^{-1}$, in accordance with the heuristic argument developed in section 2. 
Then, keeping only the leading terms in $\epsilon$, one obtains: \begin{plain} $${dP_{rr}\over dM'}+{3q\over J}\sin M' P_{rr}-4P_{r\theta}= {1\over\Omega}\left(\partial P_{rr}\over\partial t\right)_c,\eqno(5.16)$$ $${dP_{r\theta}\over dM'}+{P_{rr}\over 2J}+{2q\over J}\sin M' P_{r\theta}- 2P_{\theta\theta}={1\over\Omega}\left(\partial P_{r\theta}\over\partial t \right)_c,\eqno(5.17)$$ $${dP_{\theta\theta}\over dM'}+{P_{r\theta}\over J}+{q\over J}\sin M' P_{\theta\theta}={1\over\Omega}\left(\partial P_{\theta\theta}\over\partial t\right)_c,\eqno(5.18)$$ $${dP_{zz}\over dM'}+{q\over J}\sin M'P_{zz}={1\over\Omega}\left(\partial P_{zz}\over\partial t\right)_c.\eqno(5.19)$$ \end{plain} These equations show that the pressure tensor components depend on azimuth only through the angle $M'$. From Eq. (5.1), one obtains the right-hand sides of the preceding equations as \begin{plain} $${1\over\Omega}\left(\partial P_{ij}\over\partial t\right)_c= {\nu_c\over\Omega}\left(\sigma c_I^2\delta_{ij}-P_{ij}\right),\eqno(5.20)$$ \end{plain} \noindent where $\nu_c$ is defined by Eqs.~(5.12) through (5.14), and $c_I$ by Eq.~(5.3). In performing the vertical integration, we have used the fact that $c$ is independent of $z$ (see the discussion at the beginning of section 3.2). Note also that by definition of the velocity dispersion \begin{plain} $$3\sigma c^2=P_{rr}+P_{\theta\theta}+P_{zz}.\eqno(5.21)$$ \end{plain} If the relation $\epsilon_r(c)$ were known, Eqs (5.16) through (5.21) supplemented by Eqs. (5.3) and (5.12) through (5.14) would yield a closed set of equations for $P_{rr}, P_{r\theta},P_{\theta\theta}$, and $P_{zz}$ (notice that these quantities uncouple from the other pressure tensor components). 
However, as this relation is not very well constrained, one usually prefers to keep the absolute scale of the pressure tensor as a free parameter, and solve rather for the relative magnitude of the pressure tensor components and for the relation $\epsilon_r(\tau_e)$ that the equilibrium requires. In doing so, the dependence of $\nu_c$ on azimuth is usually taken into account, but $\epsilon_r$ is usually taken to be constant with azimuth, as well as $c^2$ and $c_I^2$ ($\nu_c$ depends on azimuth because it is proportional to $\sigma$). In order to illustrate these points, let us first look at the solution for unperturbed flows. In this case, the velocity field and the surface density are purely axisymmetric, so that $q=0$, and the pressure tensor is independent of azimuth, as well as the collision frequency. Therefore, Eqs.~(5.16) through (5.19) reduce to \begin{plain} $$P_{rr}=\sigma_0 c_I^2\left[1+{6\Omega^2\over\nu_c^2+4\Omega^2}\right], \eqno(5.22)$$ $$P_{r\theta}=\sigma_0 c_I^2{\nu_c\over 2\Omega} {3\Omega^2\over \nu_c^2+4\Omega^2},\eqno(5.23)$$ $$P_{\theta\theta}=\sigma_0 c_I^2\left[1-{3\Omega^2\over 2(\nu_c^2+ 4\Omega^2)}\right],\eqno(5.24)$$ $$P_{zz}=\sigma_0 c_I^2.\eqno(5.25)$$ \end{plain} In these equations, the pressure tensor is scaled to $\sigma_0 c_I^2$, which we take as a free parameter. Combining Eqs.~(5.3) and (5.21) with the ``solution" we have just written down yields after some algebraic manipulations the required relation between the coefficient of restitution and the effective optical depth \begin{plain} $$\epsilon_r^2=1-{9/11\over 1+(128/11\pi^2)\tau_e^2}.\eqno(5.26)$$ \end{plain} The reader will notice that this relation is in good agreement with Eq. (2.12), which we had derived with heuristic arguments. Again, equating this $\epsilon_r(\tau_e)$ relation to some $\epsilon_r(c)$ relation would give $c(\tau_e)$, and therefore would determine the magnitude of the pressure tensor in Eqs.~(5.22) through (5.25). 
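The algebra leading to Eq.~(5.26) is easily verified numerically: inserting the closed forms (5.22)-(5.25) into the definitions (5.21) and (5.3), with $\nu_c=(8/\pi)\mu_0\tau_e$ and taking $\mu_0=\Omega$, reproduces the closure relation exactly. Units below: $\Omega=1$ and $\sigma_0 c_I^2=1$.

```python
import math

def eps_r_squared_from_solution(tau_e):
    """eps_r^2 recovered from Eqs. (5.22)-(5.25) via (5.21) and (5.3)."""
    nu = (8.0 / math.pi) * tau_e     # Eq. (5.12), with mu0 = Omega = 1
    x = 1.0 / (nu**2 + 4.0)          # Omega^2 / (nu_c^2 + 4 Omega^2)
    P_rr = 1.0 + 6.0 * x             # Eq. (5.22), in units sigma0 c_I^2 = 1
    P_tt = 1.0 - 1.5 * x             # Eq. (5.24)
    P_zz = 1.0                       # Eq. (5.25)
    c2_over_cI2 = (P_rr + P_tt + P_zz) / 3.0     # from Eq. (5.21)
    return 3.0 / c2_over_cI2 - 2.0               # from Eq. (5.3)

def eps_r_squared_closed(tau_e):
    """Closure relation, Eq. (5.26)."""
    return 1.0 - (9.0 / 11.0) / (1.0 + 128.0 / (11.0 * math.pi**2) * tau_e**2)

for tau_e in (0.05, 0.2, 1.0, 3.0):
    assert abs(eps_r_squared_from_solution(tau_e)
               - eps_r_squared_closed(tau_e)) < 1e-12
```

As with the heuristic Eq.~(2.12), $\epsilon_r$ increases with $\tau_e$: denser rings require less dissipation per collision to balance the shear input.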
Furthermore, summing Eqs.~(5.16), (5.18) and (5.19) yields the equation of equilibrium of the velocity dispersion, which reads \begin{plain} $$4\Omega {P_{r\theta}\over\sigma_0}=\nu_c(1-\epsilon_r^2)c^2.\eqno(5.27)$$ \end{plain} \noindent In the hydrodynamical approximation (see section 5.2, or \cite{LL87}, chapter 2), $P_{r\theta}/\sigma_0\sim \nu \Omega$ for the Keplerian velocity field; also, Eq.~(5.12) shows that $\nu_c\sim \Omega\tau_e$, so that Eq.~(5.27) is in agreement with Eq.~(2.10). We have already pointed out in section 2 that this equation expresses the equilibrium between the excitation due to the input from the shear of the mean motion, and the damping due to the inelasticity of the collisions. This completes our discussion of unperturbed flows. Concerning perturbed flows, no analytic solution of Eqs.~(5.16) through (5.19) is available\footnote{Analytic expressions do exist in the limit of small $q$, but won't be given here. The interested reader is referred to the paper by \cite{SDLYC85} for their derivation.}. The set has been solved numerically, either by integrating the differential equations from $M'=0$ to $2\pi$, subject to the constraint that the functions are periodic in $M'$ \citep{BGT83b}, or the pressure tensor components are expanded in Fourier series, and the various coefficients computed \citep{SDLYC85}. The pressure tensor components are computed as functions of $q$, $\tau_{e0}\equiv J\tau_e=3\sigma_0/4\rho_pR_e$ and $M'$ (the weak dependence of $\Omega$ and $\mu_0$ on $a_e$ is ignored). The preceding $\epsilon_r(\tau_e)$ of unperturbed flows generalizes to an $\epsilon_r(\tau_{e0},q)$ relation, as is depicted on Figure~\ref{fig:Shu} (adapted from \citealt{SDLYC85}): the curves show the variation of $\epsilon_r$ with $q$, parametrized by the values of $\tau_{e0}$. 
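The $q=0$ member of this family of curves is just the relation (5.26), which is easily evaluated; the following sketch merely restates the formula numerically:

```python
import math

def eps_r(tau_e):
    """Equilibrium coefficient of restitution as a function of the
    effective optical depth, Eq. (5.26)."""
    eps2 = 1.0 - (9.0 / 11.0) / (1.0 + (128.0 / (11.0 * math.pi**2)) * tau_e**2)
    return math.sqrt(eps2)

# tau_e -> 0: eps_r^2 -> 2/11, i.e. rather inelastic collisions are required.
print(round(eps_r(0.0), 3))   # 0.426
# eps_r grows monotonically with tau_e and tends to 1 at large optical depth.
vals = [eps_r(0.1 * k) for k in range(1, 50)]
print(all(a < b for a, b in zip(vals, vals[1:])) and eps_r(100.0) > 0.999)  # True
```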
These curves exhibit a very important feature: if the unperturbed optical depth $\tau_{e0}$ is smaller than a critical value (here of order 0.2 or 0.3, but its exact value is sensitive to the details of the collisional model), the coefficient of restitution {\it must} become zero for a finite value of $q=q_m$, here of order 0.7. On the other hand, if $\tau_{e0}$ is larger than this critical value, $q$ is not limited\footnote{Except by $q<1$. This constraint arises because as $q\rightarrow 1$, the streamlines tend to cross, and therefore the pressure tensor, if nothing else, tends to diverge to prevent this crossing.}, and $\epsilon_r\rightarrow 1$ as $q\rightarrow 1$. This result has the following physical interpretation. Let us consider a ring perturbed by some outside agent (a satellite near a resonance, for example), and let us increase slowly the strength of this perturbation so that the rate of perturbation of the flow, measured by $q$, tends to increase. To dissipate the energy input of the perturbation, $\epsilon_r$ must adjust according to the requirements of Figure~\ref{fig:Shu}. For small mean effective optical depths, the collision frequency is also small, and $\epsilon_r$ goes to zero to make maximum use of the rare collisional events. As a consequence the velocity dispersion increases without bound (because the coefficient of restitution is a decreasing function of the impact velocity), and the viscous damping increases [Eq.~(2.9)], preventing further growth of the rate of perturbation of the flow $q$. On the other hand, in regions of high $\tau_{e0}$, the collision frequency is always high enough to keep the dispersion velocity low. Therefore, in high optical depth regions, for a given strength of the outside perturbation, the viscous damping is much smaller than it would be in small optical depth regions. 
\cite{SDLYC85} mainly attribute to this effect the difference in damping length-scales of density waves in Saturn's A (low optical depth) and B (high optical depth) rings. \begin{figure}[h] \centering \includegraphics[width=0.7\linewidth]{./Figures/Shu.jpg} \caption{\small{The relation between the particle coefficient of restitution and the rate of perturbation of the ring for several values of the effective optical depth; the three different types of lines (solid, dashed, dotted) correspond to three different forms of the Krook model (from \citealt{SDLYC85}).}} \label{fig:Shu} \end{figure} Let us now consider the behavior of the pressure tensor components themselves. Actually, we will see in section 6 that the pressure tensor does not affect the mean flow directly, but only through three quantities, $t_1$, $t_2$ and $a_{r\theta}$, defined by\footnote{The present definitions differ by a factor $\sigma_0$ from the equivalent definitions of \cite{BGT86}.}: \begin{plain} $$t_1=s_{rr}+2c_{r\theta},\eqno(5.28)$$ $$t_2=2s_{r\theta}-c_{rr},\eqno(5.29)$$ $$c_{ij}+is_{ij}=\langle\exp(iM')P_{ij}(M')\rangle, \eqno(5.30)$$ $$a_{ij}=\langle P_{ij}(M')\rangle,\eqno(5.31)$$ \end{plain} \noindent where the bracket notation stands for the azimuthal average, so that, for any quantity $X$, \begin{plain} $$\langle X\rangle\equiv {1\over 2\pi}\int_0^{2\pi}XdM'.\eqno(5.32)$$ \end{plain} Note that $\langle\sigma\rangle=\sigma_0/(1-q^2)^{1/2}$. The quantities just defined depend only on $q$ and $\tau_{e0}$, or equivalently on $q$ and $\langle \tau_e\rangle=\tau_{e0}/(1-q^2)^{1/2}$. In dilute systems, the pressure tensor is of the order of $\sigma c^2$, so that it is useful to define \begin{plain} $$v^2={\langle\sigma c^2\rangle\over\langle\sigma\rangle},\eqno(5.33)$$ \end{plain} \noindent and, following \cite{BGT83a}, to introduce dimensionless quantities of order unity $Q_{ij}$ such that \begin{plain} $$P_{ij}=\langle\sigma\rangle v^2 Q_{ij}(q,\langle\tau_e\rangle,M'). 
\eqno(5.34)$$ \end{plain} Due to the definitions of $v^2$ and $c^2$, the variables $Q_{ij}$ must satisfy the normalisation constraint $\langle Q_{rr}+Q_{\theta\theta}+Q_{zz} \rangle=3$. The value of $v^2$ is essentially a free parameter in the absence of constraint on the form of the $\epsilon_r(c)$ relation, although one knows from various observations that $v^2\sim c^2\sim$ a few mm/s (see section 2). Similarly, one can define ${\cal C}_{ij}$, ${\cal S}_{ij}$, ${\cal T}_i$, and ${\cal A}_{ij}$ such that \begin{plain} $$\langle\sigma\rangle v^2{\cal C}_{ij}=c_{ij},\eqno(5.35)$$ $$\langle\sigma\rangle v^2{\cal S}_{ij}=s_{ij},\eqno(5.36)$$ $$\langle\sigma\rangle v^2{\cal A}_{ij}=a_{ij},\eqno(5.37)$$ $$\langle\sigma\rangle v^2{\cal T}_i=t_i.\eqno(5.38)$$ \end{plain} The behavior of ${\cal C}_{ij}$ and ${\cal S}_{ij}$ as functions of $q$ and for $\langle \tau_e\rangle=0.5$ is represented on Figure~\ref{fig:BGT}, taken from\footnote{These curves were computed with a collision term of the Boltzmann form instead of the Krook form, but this does not affect much the results, at least for the purpose of these notes.} \cite{BGT83a}. \begin{figure}[th] \centering \includegraphics[width=0.7\linewidth]{./Figures/BGT.jpg} \caption{\small{The behavior of the viscous coefficients defined in the text as a function of the perturbation rate of the ring q, for an effective optical depth of 0.5 (from \citealt{BGT83a}).}} \label{fig:BGT} \end{figure} The behavior for other values of $\langle \tau_e\rangle$ is quite similar. It is only important to remark that for dilute systems, both $t_1$ and $t_2$ are negative. The discussion of the effect of these coefficients on the mean motion is deferred to sections 6 and 7. The discussion of the behavior of $a_{r\theta}$ is deferred to section 5.3. 
\subsection{Dense systems} Another important consequence of the relation displayed on Figure~\ref{fig:Shu} is that for moderately opaque rings (let's say, $\tau \gtrsim 1$), the equilibrium can be obtained only for rather elastic materials ($\epsilon_r$ is always close to unity). Conversely, if the ring particle material is not elastic enough (which is quite likely, because in any case ring particles are most probably covered with some more or less fluffy layer of regolith), the equilibrium described in Figure~\ref{fig:Shu} cannot be sustained, implying that the filling factor cannot be small and that the ring must collapse to a close-packed configuration (see the heuristic discussion of section 2.2.4). This argument ignores various complicating physical effects: first, gravitational encounters tend to increase the effective value of $\epsilon_r$, because they are completely elastic; second, the coupling with spin degrees of freedom tends to decrease this effective value, since energy is lost by tangential friction; finally, irregular particle surfaces also tend to reduce $\epsilon_r$. However, these effects could most certainly not suppress the possibility of reaching a close-packed configuration, which is likely to be relevant for Saturn's B ring as well as for the major Uranian rings, which have mean optical depths as high as 3 or 4. In this section, we will therefore model the rings as a collection of particles with typical size $\sim d$. The typical separation distance of ring particle surfaces will be $\sim s$, with $s\ll d$. Considering that the particles have random velocity $\sim c$, the collision frequency is $\omega_c\sim c/s$, i.e. larger than the collision frequency that the dilute approximation would yield by a factor $d/s$. In such physical conditions, two important conclusions can be drawn. 
First, the collision frequency being greatly enhanced, the system can most certainly be described in the framework of the hydrodynamical approximation\footnote{The hydrodynamical approximation can be derived from the Boltzmann equation, see, e.g., \citealt{SS85}. However, it can also be derived directly from first physical principles, with no implicit or explicit reference to any specific form of the collision term, and therefore its range of application does not exactly overlap that of the Boltzmann moment equations.}. But as ring particles do not resemble the molecules of a fluid in many respects, some attention must be paid to the computation of the pressure and the viscosity of the system. Second, as $d\gg s$, the medium is nearly incompressible, and one can assume that the divergence of the three-dimensional flow is zero. The computation of the pressure tensor in these physical conditions is the subject of this subsection. Unless otherwise specified, the material is extracted from the paper by \cite{BGT85}. \subsubsection{Macroscopic equations} The macroscopic equations we will use are the continuity and the Navier-Stokes equation, i.e. Eqs.~(3.8) and (3.9), with the pressure tensor reducing to \begin{plain} $$p_{ij}=p\delta_{ij}-2\eta u_{ij},\eqno(5.39)$$ \end{plain} \noindent where $p$ is the pressure, $\eta$ the coefficient of dynamic viscosity and $u_{ij}$ is the strain tensor: \begin{plain} $$2 u_{ij}={\partial u_i\over\partial x_j}+{\partial u_j\over\partial x_i}.\eqno(5.40)$$ \end{plain} The tensor $\sigma_{ij}=2\eta u_{ij}$ is known as the (viscous-)stress tensor and $p_{ij}$ as the internal stress tensor in hydrodynamics. The validity of the hydrodynamical approximation relies on the existence of an isotropic pressure force, and on a linear stress-strain relation. These conditions are usually satisfied when the collision time is much smaller than the dynamical times involved in the problem. 
As the medium under consideration is incompressible, $\rho$ is constant (in space as well as in time), and the continuity equation reduces to \begin{plain} $$\nabla . {\bf u}=0.\eqno(5.41)$$ \end{plain} From our discussion of section 3, we know that the pressure forces play essentially no r\^ole in determining the horizontal motion, so that the incompressibility condition Eq.~(5.41) can be combined with Eqs.~(4.64) and (4.65) to yield the vertical velocity: \begin{plain} $$u_z=-z{\Omega q\over J}\sin M'.\eqno(5.42)$$ \end{plain} In this equation, we have used $m(\Omega-\Omega_p)\simeq\kappa\simeq\Omega$. The surface and volumetric ring density are related by \begin{plain} $$\sigma=2\rho h,\eqno(5.43)$$ \end{plain} \noindent where $h$ is the ring thickness. From $\sigma=\sigma_0/J$, one has $h=h_0/J$, where $h_0$ is the unperturbed ring thickness. One can plug the velocity field in the vertical component of the momentum equation, with $\phi=\phi_{plan.}+\phi_{sg}$, which then reads: \begin{plain} $$z{\Omega^2 q\over J^2}\left(q\sin^2 M' +q - \cos M'\right)= -F_2\Omega^2 z-{1\over\rho}{\partial p\over\partial z}-2{\partial \eta\over\partial z}{\Omega q\over J}\sin M',\eqno(5.44)$$ \end{plain} \noindent where one has approximated the planet by a central mass (neglected the gravitational harmonic coefficients), kept the first order term in the $z$ expansion of the resulting force, and used Gauss's theorem to compute the contribution of the self-gravity force (the horizontal variation of thickness occurs on a scale much larger than the ring thickness). 
Thus, $F_2$ is defined by \begin{plain} $$F_2=1+{4\pi G\rho\over\Omega^2}.\eqno(5.45)$$ \end{plain} \noindent Note that $F_2$ can be of order ten or larger in planetary rings (for ice particles, and a ring filling factor $\sim 0.5$), so that the vertical component of the ring self-gravity is much larger than the planet force in this direction, and the effect of the self-gravity on the vertical structure cannot be ignored. The vertical component of the momentum equation shows that $p$ and $\eta$ have to be quadratic in $z$. Furthermore, they must vanish at the top and bottom of the rings, so that we can write: \begin{plain} $$p=p_0\left(1-{z^2\over h^2}\right),\eqno(5.46)$$ $$\eta=\eta_0\left(1-{z^2\over h^2}\right),\eqno(5.47)$$ \end{plain} \noindent and recast Eq. (5.44) as \begin{plain} $${\Omega^2 q\over J^2}\left(q \sin^2 M' +q -\cos M'\right)+F_2\Omega^2= {2p_0\over\rho h^2}+{4\eta_0\over\rho h^2}{\Omega q\over J}\sin M'.\eqno(5.48)$$ \end{plain} For certain combinations of parameters, this equation might require $p_0$ to be negative, at least in some locations in the ring. In these conditions, the ring material ``splashes'' in the vertical direction, so that our assumption of incompressibility fails. We will assume here that such conditions do not occur (formal requirements can be derived; see \citealt{BGT85}). Finally, the rate of transfer of macroscopic energy into random motions per unit volume is (see section 5.3) \begin{plain} $$\left(\partial{\cal E}\over\partial t\right)_{trans}=2\eta W^2,\eqno(5.49)$$ \end{plain} \noindent where $W$ is the shear, i.e., \begin{plain} $$\eqalignno{W^2&=u_{ij}u_{ij}\cr &={\Omega^2\over 8J^2}\left(16q^2-15+24J\right).&(5.50)\cr}$$ \end{plain} This completes the required set of macroscopic equations. 
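To make the order-of-magnitude claim for $F_2$ concrete, here is a back-of-the-envelope evaluation of Eq.~(5.45); the input numbers (solid ice density, a filling factor of 0.5, and a B-ring orbital radius around Saturn) are illustrative assumptions of this sketch, not values quoted in the text:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
GM_saturn = 3.793e16   # assumed GM of Saturn, m^3 s^-2
a = 1.1e8              # assumed B-ring orbital radius, m
rho = 0.5 * 900.0      # mid-plane density: ice at filling factor 0.5, kg/m^3

Omega = math.sqrt(GM_saturn / a**3)            # orbital frequency
F2 = 1.0 + 4.0 * math.pi * G * rho / Omega**2  # Eq. (5.45)
print(F2 > 10.0)  # the self-gravity term dominates the planetary one: True
```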
These quantities will be computed keeping in mind the simple physical model described in the introduction of section 5.2, following the derivation by \cite{H83}. To compute the pressure, any ring particle is imagined to vibrate with average velocity $c$ in a cell of typical size $d$, and to exert some pressure $p$ on the surrounding particles. In a collision, the typical momentum transfer is of the order of $mc$. The pressure being the momentum transfer per unit time and unit surface, one obtains therefore \begin{plain} $$p\sim\omega_c {mc\over d^2}=g_1{\rho d c^2\over s},\eqno(5.51)$$ \end{plain} \noindent where $g_1$ is a dimensionless constant of order unity, and $\rho\sim m/d^3$. The coefficient of dynamic viscosity is computed in the following way. Assume that the ring particle fluid is submitted to some shear flow, in which for definiteness the velocity $u$ is assumed to lie in the $x$ direction and the gradient of $u$ in the $y$ direction. Two adjacent ``layers'' of ring particles will on the average move with velocities differing by an amount $\Delta u\sim (du/dy)\ d$. When collisions occur between particles belonging to the two layers, the average transfer of $x$-momentum in the $y$ direction is $m\Delta u$. Therefore, the shear stress, being the rate of transfer of momentum per unit time and across a unit surface [see the discussion after Eq. (3.9)] is \begin{plain} $$\sigma\sim \omega_c{m\Delta u\over d^2}.\eqno(5.52)$$ \end{plain} As by definition $\sigma=\eta du/dy$, one finally obtains \begin{plain} $$\eta=g_2{\rho d^2 c\over s},\eqno(5.53)$$ \end{plain} \noindent where $g_2$ is another factor of order unity. 
Finally, the rate of dissipation of kinetic energy of random motions per unit volume is (see the discussion at the beginning of section 2.2.3) \begin{plain} $$-\left(\partial{\cal U}\over\partial t\right)_c\sim (1-\epsilon_r^2)\omega_c\rho c^2 =g_3{\rho c^3\over s},\eqno(5.54)$$ \end{plain} \noindent where $g_3$ is the last dimensionless constant of order unity of the problem. We have now gathered all the pieces of the puzzle, and we can turn our attention to the computation of the pressure tensor components. \subsubsection{The pressure tensor components} Our first task is to find two relations between $p_0$ and $\eta_0$, in order to obtain expressions for these two quantities. One such relation is obviously Eq.~(5.48). The other one is obtained in the following way. First, equating the rate of transfer of energy from the orbital motion to random motions, Eq.~(5.49) with the rate of dissipation of energy in collisions, Eq.~(5.54), yields the following constraint (which will be further discussed in section 5.3): \begin{plain} $$2\eta W^2=g_3{\rho c^3\over s}.\eqno(5.55)$$ \end{plain} \noindent Combining this relation with the expression of $p^2/\eta$, computed from Eqs.~(5.51) and (5.53) gives us the required relation between $p_0$ and $\eta_0$, which reads \begin{plain} $$\eta_0= F_1{p_0\over W},\eqno(5.56)$$ \end{plain} \noindent where $F_1=(g_2 g_3/2g_1^2)^{1/2}$ is a dimensionless factor of order unity. We can finally compute $p_0$ and $\eta_0$ from this relation and Eq.~(5.48). This yields \begin{plain} $$p_0=\rho h_0^2\Omega^2{A_1 A_2\over 2 J^2 A_3},\eqno(5.57)$$ $$\eta_0=\rho h_0^2\Omega\ 2^{1/2}F_1{A_1\over J A_3},\eqno(5.58)$$ \end{plain} where we have defined \begin{plain} $$A_1=F_2+{q\over J^2}\left(q\sin^2 M' +q -\cos M'\right),\eqno(5.59)$$ $$A_2=(16 q^2 -15 + 24J)^{1/2},\eqno(5.60)$$ $$A_3=A_2+2^{5/2}F_1 q\sin M'.\eqno(5.61)$$ \end{plain} Note that in order of magnitude $W^2\sim \Omega^2$ so that Eqs. (5.55) and (5.53) imply $c\sim \Omega d$. 
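The closure algebra leading to Eq.~(5.56) can be verified numerically: the energy balance (5.55) fixes $c$, and the scalings (5.51) and (5.53) then yield $\eta=F_1p/W$ identically. The parameter values below are arbitrary illustrative choices (only $s\ll d$ matters for the model):

```python
import math

g1, g2, g3 = 0.8, 1.2, 0.9               # order-unity constants (arbitrary)
rho, d, s, W = 450.0, 1.0, 0.01, 2.0e-4  # arbitrary values, with s << d

# Energy balance 2*eta*W^2 = g3*rho*c^3/s with eta = g2*rho*d^2*c/s
# gives c = d*W*sqrt(2*g2/g3).
c = d * W * math.sqrt(2.0 * g2 / g3)

p = g1 * rho * d * c**2 / s     # Eq. (5.51)
eta = g2 * rho * d**2 * c / s   # Eq. (5.53)
F1 = math.sqrt(g2 * g3 / (2.0 * g1**2))

print(abs(eta - F1 * p / W) < 1e-12 * eta)  # Eq. (5.56) holds: True
```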
Furthermore, Eqs. (5.58) and (5.53) imply $d^3\sim h_0^2 s$, so that the condition $s\ll d$ requires $h_0^2\gg d^2$, i.e., the rings must be many particle thick. We are now in position to compute the vertically integrated pressure tensor components $P_{ij}$. First, the vertical integration of Eq. (5.39) yields \begin{plain} $$P_{ij}={4\over 3}h(p_0\delta_{ij}-2\eta_0 u_{ij}),\eqno(5.62)$$ \end{plain} \noindent except for $P_{rz}$ and $P_{\theta z}$ which vanish. Introducing \begin{plain} $$A_4=A_2-2^{5/2}F_1q\sin M',\eqno(5.63)$$ $$A_5={2 A_1\over 3J^3},\eqno(5.64)$$ \end{plain} \noindent one finally obtains the nonvanishing pressure tensor components to lowest order in $h/a$ as \begin{plain} $$P_{rr}=\rho h_0^3\Omega^2{A_4 A_5\over A_3},\eqno(5.65)$$ $$P_{r\theta}=-\rho h_0^3\Omega^2{2^{1/2}F_1 A_5(1-4J)\over A_3},\eqno(5.66)$$ $$P_{\theta\theta}={4\over 3}hp_0,\eqno(5.67)$$ $$P_{zz}=\rho h_0^3\Omega^2 A_5,\eqno(5.68)$$ \end{plain} \begin{figure}[th] \centering \includegraphics[width=0.7\linewidth]{./Figures/BGT85.jpg} \caption{\small{The behavior of the azimuthally averaged viscous coefficients as a function of the rate of perturbation of the ring streamlines q (from \citealt{BGT85}).}} \label{fig:BGT85} \end{figure} Notice that for dense systems, the natural scaling of the vertically integrated pressure tensor components is no longer $\langle\sigma\rangle v^2$, but $\rho h_0^3\Omega^2$. 
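A sketch of the sign of $p_0$ from Eqs.~(5.57) and (5.59) through (5.61), taking $F_1=F_2=1$ and, as an assumption of this snippet, the streamline Jacobian $J=1-q\cos M'$ of the formalism of section 4; all positive dimensional prefactors are dropped:

```python
import math

def p0_min(q, F1=1.0, F2=1.0, n=720):
    """Minimum over azimuth of A1*A2/(2*J^2*A3), proportional to p0."""
    vals = []
    for k in range(n):
        M = 2.0 * math.pi * k / n
        J = 1.0 - q * math.cos(M)                       # assumed Jacobian
        A1 = F2 + (q / J**2) * (q * math.sin(M)**2 + q - math.cos(M))
        A2 = math.sqrt(16.0 * q**2 - 15.0 + 24.0 * J)   # Eq. (5.60)
        A3 = A2 + 2.0**2.5 * F1 * q * math.sin(M)       # Eq. (5.61)
        vals.append(A1 * A2 / (2.0 * J**2 * A3))
    return min(vals)

print(p0_min(0.3) > 0.0)   # p0 > 0 at every azimuth: True
print(p0_min(0.52) > 0.0)  # beyond q ~ 0.5, p0 < 0 somewhere: False
```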
Therefore, the three fundamental coefficients $t_1, t_2, a_{r\theta}$ defined in Eqs.~(5.28) through (5.31) are most conveniently written in dimensionless form as \begin{plain} $$t_i=\rho h_0^3\Omega^2 f{\cal T}_i,\eqno(5.69)$$ $$a_{r\theta}=\rho h_0^3\Omega^2 f{\cal A}_{r\theta},\eqno(5.70)$$ \end{plain} \noindent where, consistently with Eqs.~(5.35) through (5.38), \begin{plain} $$f={\sigma_0 v^2\over (1-q^2)^{1/2}\rho h_0^3\Omega^2}.\eqno(5.71)$$ \end{plain} The behavior of the three dimensionless quantities $f{\cal T}_1, f{\cal T}_2$ and $f{\cal A}_{r\theta}$ for $F_1=F_2=1$ is displayed on Figure~\ref{fig:BGT85}, taken from \cite{BGT85}. The graph is represented for values of $q<0.5$, because otherwise $p_0$ becomes negative for some values of $M'$. Notice that ${\cal T}_2$ is negative as in the dilute approximation, but that ${\cal T}_1$ is positive for $q$ smaller than some critical value (although this cannot be seen on the graph, due to the poor resolution), in opposition to the dilute case. The implications of these results for the dynamics will be discussed in section 5.3 and sections 6 and 7. \subsection{Energy dissipation, and viscous flux of angular momentum} Energy and angular momentum budgets are important for the long term dynamics of the rings. We will here have a look into two fundamental features of energy and angular momentum exchanges, deferring a more complete discussion to section 6. \subsubsection{Energy dissipation in planetary rings} The purpose of this section is to compute the rate of dissipation of energy due to the inelasticity of the collisions, and to show that this energy is drawn from the orbital motion. 
The orbital energy per unit mass is by definition \begin{plain} $$E=\left({1\over 2}{\bf u}^2+\phi_p\right).\eqno(5.72)$$ \end{plain} From the equation of continuity (3.13) and the equation of motion (3.14), one obtains the equation of evolution of $E$ as \begin{plain} $$\rho{DE\over Dt}=-{\partial{\rm p}_{ij}u_i\over\partial x_j} +{\rm p}_{ij}{\partial u_i\over\partial x_j}.\eqno(5.73)$$ \end{plain} In this equation, we have ignored the work of the ring self-gravity and of the satellite perturbations, as they have no effect on the argument. The two terms on the right-hand side are the rate of work of the internal stress of the ring during the motion. On the other hand, the internal kinetic energy per unit ring mass is \begin{plain} $$U={1\over 2\rho}{\rm p}_{ii}.\eqno(5.74)$$ \end{plain} From Eq.~(3.15), one can derive the equation of evolution of $U$ \begin{plain} $$\rho{DU\over Dt}=-{\rm p}_{ij}{\partial u_i\over\partial x_j}+ {1\over 2}\left(\partial{\rm p}_{ii}\over\partial t\right)_c,\eqno(5.75)$$ \end{plain} \noindent where, consistently with the approximations made earlier, we have neglected the heat flux terms. The first term on the right-hand side is the contribution of the internal stress (in quasi-static transformations, it results in the well-known $pdV$ work of thermodynamics), and the last term is the rate of loss of energy due to collisions (if collisions are elastic, this term equals zero, by conservation of kinetic energy). This last term is obviously negative. Comparing Eq. (5.73) and (5.75) shows that the last term of the right-hand side of Eq. (5.73) represents the rate of transfer of macroscopic energy into random motions. For dense systems, we can put Eq. (5.75) in a different form. From the definitions of the stress and strain tensors Eqs. 
(5.39) and (5.40), one obtains \begin{plain} $$\rho{DU\over Dt}= 2\eta(u_{ij})^2+{1\over 2} \left(\partial{\rm p}_{ii}\over \partial t\right)_c,\eqno(5.76)$$ \end{plain} \noindent Eq.~(5.55) is identical to Eq. (5.76) except for $DU/Dt$, which has been set equal to zero, because some of the (crude) approximations which we have made earlier are not compatible with this term (in particular, its vertical dependence), and because it represents no essential piece of physics, as it averages to zero [see also the discussion after Eq.~(5.79)]. Now, remember that the streamlines of the flow form closed curves in a frame rotating at $\Omega_p$ (see section 4) so that we can actually integrate Eqs.~(5.75) and (5.73) over an arbitrary volume bounded by streamlines\footnote{This volume rotates with angular speed $\Omega_p$ in an inertial frame, but this has no effect on the argument.}. Notice that these volume integrals are in fact integrals over a given mass element of the ring. Note also that the steady state condition implies $DU/Dt=-(\partial U/\partial\theta)\Omega/m$, so that $DU/Dt$ does not contribute to the integral. Thus, one obtains: \begin{plain} $$\int dV\ \rho{DE\over Dt}=\int dV\ {\rm p}_{ij}{\partial u_i\over\partial x_j}-\int dV{\partial {\rm p}_{ij}u_i\over\partial x_j} ,\eqno(5.77)$$ $$\int dV\ {\rm p}_{ij}{\partial u_i\over\partial x_j}-\int dV\ {1\over 2} \left(\partial{\rm p}_{ii}\over\partial t\right)_c=0.\eqno(5.78)$$ \end{plain} These two equations demonstrate the advertised result, i.e. that the average energy lost per unit mass during collisions is drawn from the energy of the orbital motion. Therefore, on the average, the ring material tends to fall on the planet. 
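The splitting of the rate of work appearing on the right-hand side of Eq.~(5.73) is simply the product rule applied to the divergence of the internal stress in the momentum equation: \begin{plain} $$u_i{\partial{\rm p}_{ij}\over\partial x_j}={\partial({\rm p}_{ij}u_i)\over \partial x_j}-{\rm p}_{ij}{\partial u_i\over\partial x_j}.$$ \end{plain} \noindent The divergence term reduces to a surface term and represents the work done by the internal stress across the boundaries of a volume, while the second term exchanges energy between the mean flow and the random motions, as Eqs.~(5.77) and (5.78) make explicit.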
We can compute the vertically integrated average rate of loss of energy of the ring material bounded by two streamlines of radii $a_1$ and $a_2$, by using Eqs.~(5.16) through (5.19), which yield \begin{plain} $$\eqalignno{\int rdr\int d\theta\ & {1\over 2}\left[{\partial\over\partial t} (P_{rr}+P_{\theta\theta}+P_{zz})\right]_c \cr & ={1\over 2} \int_0^{2\pi}dM'\int_{a_1}^{a_2}da\ Ja\left[\Omega{\partial\over\partial M'}(P_{rr}+P_{\theta\theta}+P_{zz})\right.\cr &\left.\qquad +{\Omega q\sin M'\over J}(3P_{rr}+P_{\theta\theta}+P_{zz})+ \Omega P_{r\theta}\left({1\over J}-4\right)\right]\cr &=\pi\int_{a_1}^{a_2}da\ a\Omega (2qt_1-3a_{r\theta}).&(5.79)\cr}$$ \end{plain} This result is also valid for dense systems, although Eqs.~(5.16) through (5.19) were derived under the assumption that $u_z=0$. This is most easily seen from Eq. (5.78) by noting that in the dense system approximation, $P_{rz}=0$ and $P_{\theta z}=0$, so that the only possible difference in the first term of this equation between the dilute and the dense approximations comes from the contribution of ${\rm p}_{zz}\partial u_z/\partial z$, which vanishes upon vertical integration [see, e.g., Eq.~(B.12c) of \cite{SS85}]. Note that as a consequence, Eq.~(5.55) is compatible with Eq. (5.79). We will show in section 6 that the rate of viscous loss of orbital energy is also given by Eq.~(5.79), as can be expected from the previous argument. This equation has an interesting consequence, which will be used in section 5.4: it implies that \begin{plain} $$2qt_1<3a_{r\theta},\eqno(5.80)$$ \end{plain} \noindent because the collision terms are negative (the dissipation of energy in collisions reduces the internal kinetic energy) and because the choice of the boundaries $a_1$ and $a_2$ is arbitrary. \subsubsection{Viscous flux of angular momentum} We have argued in section 2 that the ring internal stress results in angular momentum transport. 
Let us call $F_H^{vis}$ the vertically integrated rate of transport of angular momentum across a unit length of streamline (in short, the viscous flux of angular momentum). The components of the pressure tensor being the components of the internal force of the ring fluid per unit surface [see the discussion after Eq.~(3.9)], the vertically integrated torque exerted per unit length of streamline due to the material inside it is $aP_{r\theta}$, to lowest order in eccentricity. Therefore, \begin{plain} $$F_H^{vis}=aP_{r\theta}.\eqno(5.81)$$ \end{plain} Defining the viscous angular momentum luminosity, $L_H^{vis}$, as the integral of the flux around the streamline, one has \begin{plain} $$L_H^{vis}=2\pi a^2 a_{r\theta}.\eqno(5.82)$$ \end{plain} It is instructive to derive general expressions for fluids with constant kinematic viscosity $\nu=\eta/\rho$. From Eqs.~(5.39) and (5.40), one obtains \begin{plain} $$F_H^{vis}=2\nu\sigma_0\Omega a\left({1\over J}-{1\over 4J^2}\right),\eqno(5.83)$$ $$L_H^{vis}=\pi\nu\sigma_0\Omega a^2{3-4q^2\over(1-q^2)^{3/2}}.\eqno(5.84)$$ \end{plain} In the limit of axisymmetric flows ($q=0$), the viscous luminosity of angular momentum reduces to \begin{plain} $$L_H^{vis}=3\pi\sigma\nu\Omega a^2.\eqno(5.85)$$ \end{plain} This result, derived by \cite{LP74} in the context of the theory of accretion disks, was used in section 2.3 in the discussion of axisymmetric viscous instabilities. Eq.~(5.85) shows that angular momentum flows outwards in axisymmetric flows. Eqs.~(5.83) and (5.84) show that two values of $q$ are particularly significant. First, for $q<q_1=3/4$, $F_H^{vis}>0$ for all azimuthal locations, whereas for $q>q_1$, $F_H^{vis}<0$ for some interval of azimuth. This inward flux of angular momentum arises because the angular velocity increases outwards in this longitude interval. 
For $q>q_2={\sqrt 3}/2$, the luminosity itself becomes negative: the perturbation of the flow is high enough to make the angular momentum flow inwards, in opposition to the unperturbed case. One can wonder if this feature is a consequence of our assumption of constant kinematic viscosity. This is not the case. \cite{BGT83b} in their analysis of dilute systems show that the angular momentum flux and luminosity, which are both positive for axisymmetric flows, both change sign in sufficiently perturbed regions. For example, the corresponding value of $q_2$ is represented as a function of $\langle \tau_e\rangle$ on Figure~\ref{fig:BGT83}. \begin{figure}[th] \centering \includegraphics[width=0.5\linewidth]{./Figures/BGT83.jpg} \caption{\small{The value of q for which the angular momentum luminosity reverses direction as a function of the azimuthally averaged effective optical depth (from \citealt{BGT83b}).}} \label{fig:BGT83} \end{figure} The same feature is also true of dense systems, although this cannot be seen on Figure~\ref{fig:BGT85}. For example, for the particular choice\footnote{These values are more appropriate for dense rings than the one chosen in Figure~\ref{fig:BGT85}. The value of $F_1$ is suggested by the analysis of dense systems by \cite{AT86} (N.\ Borderies, private communication) and the value of $F_2$ is the smallest one found in Saturn's rings.} of $F_1=0.55$ and $F_2=7$, \cite{BGT86} quote $q_2=0.79$. In conclusion, it appears that the reversal of the angular momentum flux and luminosity for sufficiently perturbed flows is a general feature, the existence of which does not depend on the details of the microphysics controlling the pressure tensor. This is fortunate, as angular momentum luminosity reversal plays a central r\^ole in the ring confinement by satellites (the so-called ``shepherding mechanism''; see \citealt{BGT84,BGT89}). 
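Both critical values can be checked directly from Eqs.~(5.83) and (5.84) in the constant-$\nu$ case (a sketch with the streamline Jacobian $J=1-q\cos M'$ assumed as in section 4, and all positive dimensional prefactors dropped):

```python
import math

def flux_min(q, n=720):
    """Minimum over azimuth of F_H^vis / (nu*sigma_0*Omega*a), Eq. (5.83)."""
    vals = []
    for k in range(n):
        J = 1.0 - q * math.cos(2.0 * math.pi * k / n)  # assumed Jacobian
        vals.append(2.0 * (1.0 / J - 1.0 / (4.0 * J**2)))
    return min(vals)

def luminosity(q):
    """L_H^vis / (pi*nu*sigma_0*Omega*a^2), Eq. (5.84)."""
    return (3.0 - 4.0 * q**2) / (1.0 - q**2) ** 1.5

print(flux_min(0.70) > 0.0)  # q < 3/4: outward flux at every azimuth: True
print(flux_min(0.80) > 0.0)  # q > 3/4: inward flux over some interval: False
print(luminosity(0.85) > 0.0 > luminosity(0.88))  # reversal at sqrt(3)/2: True
```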
\subsection{Summary and parametrization of the pressure tensor} This whole section was mainly devoted to the derivation of the pressure tensor under various sets of physical conditions which are likely to be relevant to planetary rings. The pressure tensor influences the mean motion only through the three coefficients $t_1$, $t_2$ and $a_{r\theta}$ defined in Eqs.~(5.28) through (5.31). The analyses presented in this section show that there exists a number of general features characterizing these three quantities, as pointed out by \cite{BGT86}. \begin{enumerate} \item Streamlines cross at $q=1$, so that the pressure tensor is likely to diverge as $q\rightarrow 1$, or even for smaller values of $q$, as shown on Figure~\ref{fig:Shu} and in the related discussion. \item $t_1$ and $t_2$ vanish as $q\rightarrow 0$, because the flow, and therefore the pressure tensor components, become axisymmetric in this limit. Hence, it is reasonable to assume that $t_{1,2}\propto q$ for small $q$, as this dependence is characteristic of both the dilute and dense models. Notice that $t_1$ is negative in the dilute approximation, whereas it is positive for small values of $q$ in the dense model; $t_2$ is negative in both models. \item We have shown that the energy dissipation due to inelastic collisions implies $2qt_1<3a_{r\theta}$. This is a general result, which does not depend on the choice of the collisional model [see Eq.~(5.80)]. \item We have just seen that $a_{r\theta}$ is in general positive for small $q$, so that angular momentum flows outwards. However, as $q\rightarrow 1$, the direction of the angular momentum flow is reversed, so that there is some value $q=q_a(\sigma_0)$ for which $a_{r\theta}=0$ and the transition between the two regimes occurs. For dense systems, $q_2$ is independent of $\sigma_0$. 
\end{enumerate} From these general considerations, \cite{BGT86} were motivated to devise simple empirical formul\ae\ for the three coefficients $a_{r\theta}, t_1, t_2$, of the form \begin{plain} $$a_{r\theta}=B_a\sigma_0^b{q_a-q\over(q_c-q)^c},\eqno(5.86)$$ $$t_1=B_1\sigma_0^b q{q_1-q\over(q_c-q)^c},\eqno(5.87)$$ $$t_2=B_2\sigma_0^b q{q_2-q\over(q_c-q)^c}.\eqno(5.88)$$ \end{plain} In these equations, $B_a, B_1$ and $B_2$ are positive quantities, $0<q_a<1$, $q_1<q_a$ is either positive or negative, and $q_2$ is negative. In the model for dense systems, $b=3$, $q_c=1$ and the rapid divergence of the viscous coefficients is well represented by $c\simeq 3$. In principle, $q_a$, $q_1$, $q_2$ and $q_c$ are functions of $\sigma_0$, except for dense systems. Orders of magnitude for $B_a$, $B_1$ and $B_2$ can be obtained from the comparison of these relations with Eqs.~(5.37) and (5.38) on one hand, and (5.69) and (5.70) on the other. \section{Ring dynamics: perturbation equations and conserved quantities} We have already listed the perturbations acting on a ring: the ring self-gravity, the gravitational action of satellites, the disk pressure tensor. For the satellite perturbations, we will restrict ourselves to one Fourier component in the expansion of Eq.~(4.37), in order to investigate in some detail the dynamics of a density wave near an isolated Lindblad resonance. A complete discussion of disk-satellite interactions (e.g. of the effect of such interactions on the eccentricity evolution of narrow elliptic rings, or of the shepherding of a ring by a satellite), would take us too far afield and is outside the scope of these notes. A number of derivations are more easily performed when the rings are represented as a discrete collection of streamlines. This point of view is adopted in what follows. The continuum description will be recovered by taking the appropriate limits when needed. Conversely, one can go from the continuum limit to the discrete form of the equations. 
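The empirical forms (5.86)--(5.88) above are straightforward to evaluate; the following minimal sketch uses arbitrary illustrative values for $B_a$, $B_1$, $B_2$, $q_a$, $q_1$ and $q_2$ (not fitted values), together with the dense-system choices $b=c=3$, $q_c=1$, and displays the sign change of $a_{r\theta}$ at $q=q_a$:

```python
# Empirical parametrization (5.86)-(5.88) of the transport coefficients.
# All numerical constants below are illustrative placeholders.
def transport_coefficients(q, sigma0, Ba=1.0, B1=1.0, B2=1.0,
                           qa=0.8, q1=0.2, q2=-0.5, qc=1.0, b=3, c=3):
    denom = (qc - q) ** c
    a_rt = Ba * sigma0 ** b * (qa - q) / denom      # Eq. (5.86)
    t1 = B1 * sigma0 ** b * q * (q1 - q) / denom    # Eq. (5.87)
    t2 = B2 * sigma0 ** b * q * (q2 - q) / denom    # Eq. (5.88)
    return a_rt, t1, t2

# the angular-momentum flux a_{r theta} is positive below q = qa
# (outward flow) and negative above it (reversed flow); all three
# coefficients diverge as q -> qc = 1
a_lo, t1_lo, t2_lo = transport_coefficients(0.5, 1.0)   # q < qa
a_hi, _, _ = transport_coefficients(0.9, 1.0)           # qa < q < qc
assert a_lo > 0 and a_hi < 0 and t2_lo < 0
```

With $B_2>0$ and $q_2<0$, the coefficient $t_2$ is negative for all $0<q<q_c$, as required by the general features listed above.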
It is well known in Celestial Mechanics that perturbing accelerations generate both periodic and secular (or long period) terms. The periodic effects usually have a small amplitude in ring dynamics and can safely be neglected. Also, the long term dynamics is easier to capture in simulations once the short-term effects are averaged out. We will therefore average the perturbation equations in what follows. There are some added subtleties in this procedure for the fluid equations with respect to test particles: \begin{enumerate} \item The perturbation equations are not used as is commonly done in Celestial Mechanics, by inserting an unperturbed solution in the right-hand side and deducing the magnitude of the perturbation. Instead, the perturbed motion is also inserted in the right-hand side, so that these equations provide self-consistency conditions for the existence of the assumed form of the perturbed motion. Because of this, $M_e=m(\varphi-\Omega_p t) + m\Delta$ is in fact always enforced. This implies that $M_e$ is not an independent phase, but a quantity depending on $a,\varphi$ and $t$. \item The equations are phase-averaged. Any form of averaging can be freely chosen --- this is a matter of convenience, not a theoretical requirement --- and it turns out that this choice is more adequate than the usual time-averaging procedure, in particular because $M_e=M_e(a,\varphi,t)$. In the process, short-periodic terms of order $\epsilon g_{pert.}/g_{plan.}$ are averaged out ($g_{pert.}$ and $g_{plan.}$ stand for the accelerations due to the perturbing forces and the planet's, respectively). This is required for the self-consistency of the assumed form of perturbed motions and this self-consistency is ensured by the smallness of the short periodic term. In this sense, the perturbation equations are indeed handled perturbatively, albeit again in a slightly unusual way.
\end{enumerate} In what follows, the subscript $e$ of $a_e$, $M_e$ and $\varpi_e$ is dropped for convenience. However, it should be kept in mind that the elements involved in the discussion are epicyclic elements and not elliptic ones. For definiteness, streamlines and associated quantities are labeled with their index $i$; $1\le i \le N_s$, where $N_s$ is the total number of streamlines. The streamline width is noted $\delta a$ when needed. Streamline semi-major axes are defined as the mid-position in the streamlines, and all quantities are evaluated there, except the stress tensor which should be evaluated at the streamline boundary; however this would introduce a staggered double-mesh and this refinement is avoided in these notes. \subsection{Disk self-gravity}\label{sec:sg} Let us consider two streamlines (indexed as streamline 1 and streamline 2) with $m$ lobes and orbital elements $a_1,\epsilon_1, \Delta_1$ and $a_2,\epsilon_2, \Delta_2$ respectively. We suppose $a_1<a_2$ for definiteness. We are going to compute the gravitational perturbation of streamline 2 on streamline 1. In fact it is sufficient to compute the gravitational perturbation on a fluid particle of streamline 1, because this perturbation is identical for all fluid particles, once averaged over the orbital time-scale. For $\Delta a_{ij}\equiv a_i-a_j$ small enough (that is, much smaller than the curvature radius of the streamlines, i.e. $\Delta a_{ij}\ll a$) one can locally identify the streamline with a straight line and find the perturbing acceleration with the help of Gauss's theorem. Let us call $\lambda_2$ the linear mass density of streamline 2.
The gravitational acceleration ${\bf g}_{sg}$ on a fluid particle is given by \citep{BGT83c} \begin{plain} $${\bf g}_{sg}={2G\lambda_2\over \Delta_c}{\bf u},\eqno(6.1)$$ \end{plain} \noindent where ${\bf u}$ is the unit vector perpendicular to streamline 2 and directed outwards (from streamline 1 to streamline 2), and $\Delta_c$ the distance of the fluid particle to streamline 2 along ${\bf u}$. Let us introduce the angle $\beta$ between the radial direction and ${\bf u}$, so that $\Delta_c=|r_1-r_2|\cos\beta$. The radial and tangential components of the perturbing acceleration read: \begin{plain} $$R_{sg}=-{2G\lambda_2\over\Delta r_{12}},\eqno(6.2)$$ $$S_{sg}=-{2G\lambda_2\over\Delta r_{12}\cos\beta}\sin\beta,\eqno(6.3)$$ \end{plain} \noindent where $\Delta r_{ij}\equiv r_i-r_j$. Elementary geometric considerations lead to the following expressions for $\cos\beta$ and $\sin\beta$: \begin{plain} $$\cos\beta={r_2d\theta/d\varphi\over \sqrt{(r_2d\theta/d\varphi)^2+\left(dr_2/d\varphi\right)^2}}=1+ \mathcal{O}(\epsilon_2^2),\eqno(6.4)$$ $$\sin\beta=-{dr_2/d\varphi\over\sqrt{(r_2d\theta/d\varphi)^2+\left(dr_2/d\varphi\right)^2}}= -{1\over a_2}{dr_2\over d\varphi}+\mathcal{O}(\epsilon_2^2).\eqno(6.5)$$ \end{plain} The last equalities are evaluated from Eq.~(4.57). Finally, the components of the perturbing acceleration at position ($r,\theta$) and at time $t$ read: \begin{plain} $$R_{sg}=-{GM_2\over\pi a_2(r-r_2)},\eqno(6.7)$$ $$S_{sg}={GM_2m \epsilon_2\sin(m(\theta-\Omega_p t)+m\Delta_2)\over \pi a_2(r-r_2)},\eqno(6.8)$$ \end{plain} \noindent where the linear mass density has been expressed in terms of the total mass of the streamline\footnote{One might wonder if the possible dependence of the linear mass density on azimuth contributes to these results, to the order in $\epsilon$ aimed at. However, this is not the case.
In any case, such contributions would not affect the results of Eq.~(6.12) to (6.14).} $M_2=2\pi a_2\lambda_2$ and where $r_2$ is a function of $\theta$ through Eq.~(4.57); we have used the fact that $\varphi=\theta$ to lowest order in $\epsilon$. The streamline mass is connected to the ring surface density by $M_2=2\pi a_2\delta a\sigma_0$ where $\delta a$ is the distance between two neighboring unperturbed streamlines. Eq.~(6.7) corrects an unimportant error in the corresponding equation of \cite{BGT85}. A simple trigonometric manipulation shows that $\Delta r_{12}= J_{12}\Delta a_{12}$ where \begin{plain} $$J_{12}=1-q_{12}\cos[m(\theta-\Omega_p t)+m\Delta_1+\gamma_{12}],\eqno(6.9)$$ \end{plain} \noindent and \begin{plain} $$q_{12}{\rm exp}(i\gamma_{12})={a_1\epsilon_1 - a_2\epsilon_2{\rm exp} [im(\Delta_2-\Delta_1)]\over\Delta a_{12}}.\eqno(6.10)$$ \end{plain} \noindent Note that $q_{12}=q_{21}$, but $\gamma_{12}\neq\gamma_{21}$. As $\Delta a_{12}\rightarrow 0$, $J_{12}\rightarrow J$, $q_{12}\rightarrow q$ and $\gamma_{12}\rightarrow \gamma$. With the same definitions, one also has: \begin{plain} $$\eqalignno{&a_2\epsilon_2\sin[m(\theta-\Omega_p t)+m\Delta_2]=a_1\epsilon_1\sin[m(\theta-\Omega_p t)+m\Delta_1]-\cr & \kern3truecm \Delta a_{12}q_{12}\sin[m(\theta-\Omega_pt)+m\Delta_1+\gamma_{12}] &(6.11)\cr}$$ \end{plain} \noindent We can now compute the perturbations of $a_1,\epsilon_1$ and $\varpi_1$ by inserting Eqs.~(6.6) through (6.11) into Eqs.~(4.20) through (4.22), and averaging over $\varphi_1$, with the additional constraint that $M_1=m(\varphi-\Omega_p t) +m\Delta_1$ and $\theta=\varphi$. Furthermore, we take $a_1=a_2=a$ except in the difference $\Delta a_{12}$ (remember $\Delta a_{12}\ll a$ in ring problems), and keep the results to lowest nonvanishing order in $\epsilon$. 
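The identity $\Delta r_{12}=J_{12}\Delta a_{12}$, with $q_{12}$ and $\gamma_{12}$ defined by Eq.~(6.10), holds exactly for streamlines of the epicyclic form $r=a[1-\epsilon\cos(m(\theta-\Omega_p t)+m\Delta)]$; a minimal numerical check (the element values below are arbitrary):

```python
import cmath
import math

m = 2
a1, eps1, D1 = 1.00, 3e-3, 0.10   # streamline 1: a, epsilon, Delta (arbitrary)
a2, eps2, D2 = 1.01, 2e-3, 0.25   # streamline 2

da12 = a1 - a2
z = (a1 * eps1 - a2 * eps2 * cmath.exp(1j * m * (D2 - D1))) / da12  # Eq. (6.10)
q12, g12 = abs(z), cmath.phase(z)

for th in (0.0, 0.7, 2.1, 4.0):   # th stands for m(theta - Omega_p t)
    r1 = a1 * (1 - eps1 * math.cos(th + m * D1))
    r2 = a2 * (1 - eps2 * math.cos(th + m * D2))
    J12 = 1 - q12 * math.cos(th + m * D1 + g12)       # Eq. (6.9)
    assert math.isclose(r1 - r2, J12 * da12, rel_tol=1e-9)
```

The identity is exact rather than perturbative, because Eq.~(6.10) is simply the complex form of the trigonometric manipulation quoted in the text.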
Finally, we take $\Omega_a=\kappa_a= (GM_p/a^3)^{1/2}$ in the perturbation equation (this leads to negligible fractional errors of order $J_2$), except in the difference \footnote{These approximations are made in the computations of all perturbations in the rest of the paper.} $\Omega_a-\kappa_a$. The resulting averaged equations read: \begin{plain} $$ \left(da_1\over dt\right)_{sg}=-{2(m-1)n_a\over\pi}{M_2\over M_p} {a\over\Delta a_{12}}a\epsilon_1 H(q_{12}^2)q_{12}\sin\gamma_{12},\eqno(6.12)$$ $$\left(d\epsilon_1\over dt\right)_{sg}={n_a\over\pi}{M_2\over M_p} {a\over\Delta a_{12}}H(q_{12}^2) q_{12}\sin\gamma_{12},\eqno(6.13)$$ $$\left(d\varpi_1\over dt\right)_{sg}={n_a\over\pi}{M_2\over M_p} {a\over\epsilon_1\Delta a_{12}}H(q_{12}^2) q_{12}\cos\gamma_{12},\eqno(6.14)$$ \end{plain} \noindent where $H(q^2)$ is defined by (the last equality is taken from \citealt{GR80}): \begin{plain} $$H(q^2)\equiv {1\over 2\pi q}\int_{-\pi}^{\pi}{\cos u\over 1-q\cos u}du = {1-(1-q^2)^{1/2}\over q^2(1-q^2)^{1/2}}.\eqno(6.15)$$ \end{plain} These equations are of course also valid in the case $m=0$. The only restriction comes from the constraint $\Delta a_{12}\ll a$, so that they cannot be used to compute the axisymmetric contribution of the self-gravity of wide rings (like Saturn's rings). However, this contribution can be included, if needed, in the computation of $\Omega_a$ and $\kappa_a$, so that this limitation is not essential. \subsection{Satellite perturbations} We are now going to compute the averaged perturbation generated by one Fourier component of the satellite potential Eq.~(4.37). 
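As an aside, the closed form (6.15) for $H(q^2)$ is easily verified by direct quadrature of the defining integral:

```python
import math

def H_closed(q):
    # closed form of Eq. (6.15)
    s = math.sqrt(1 - q * q)
    return (1 - s) / (q * q * s)

def H_quad(q, n=20000):
    # midpoint quadrature of (1/(2 pi q)) * Int_{-pi}^{pi} cos(u)/(1 - q cos(u)) du
    h = 2 * math.pi / n
    total = sum(math.cos(-math.pi + (k + 0.5) * h)
                / (1 - q * math.cos(-math.pi + (k + 0.5) * h))
                for k in range(n))
    return total * h / (2 * math.pi * q)

for q in (0.1, 0.5, 0.9):
    assert math.isclose(H_closed(q), H_quad(q), rel_tol=1e-6)
```

In the axisymmetric limit $q\rightarrow 0$ one has $H\rightarrow 1/2$, while $H$ diverges as $q\rightarrow 1$, consistently with the streamline-crossing discussion of the previous section.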
The perturbing potential of the satellite at location $r,\theta$ and time $t$ reduces to: \begin{plain} $$\phi_s=\Phi_{mk}(r/a_s)\cos m(\theta-\Omega_p t).\eqno(6.16)$$ \end{plain} The derivatives of this potential give the components of the perturbing acceleration, which can be evaluated along a ring streamline of elements $a,\epsilon,\Delta$, and expanded to first order in $\epsilon$ with the help of Eqs.~(4.62) and (4.63). After these manipulations, one obtains: \begin{plain} $$\eqalignno{ R_s = &-{d\Phi_{mk}\over da}(\cos M\cos m\Delta +\sin M\sin m\Delta),&(6.17)\cr S_s=&m{\Phi_{mk}\over a}(\sin M\cos m\Delta -\cos M\sin m\Delta) \cr &- m\left({d\Phi_{mk}\over da}-{\Phi_{mk}\over a}\right)\epsilon( \cos M\sin M\cos m\Delta -\cos^2 M\sin m\Delta)\cr &+ 2m^2{\Phi_{mk}\over a}\epsilon(\sin M\cos M\cos m\Delta+\sin^2 M\sin m\Delta),&(6.18)\cr}$$ \end{plain} \noindent where the relation $M=m(\varphi-\Omega_p t)+m\Delta$ has been used, and where $\Phi_{mk}$ has been evaluated at $a$. Inserting these results into the perturbation equations and averaging over $\varphi$ yields: \begin{plain} $$\left(da\over dt\right)_s=n_a a(m-1)\epsilon{a\Psi_{mk}\over G M_p}\sin m\Delta,\eqno(6.19)$$ $$\left(d\epsilon\over dt\right)_s=-n_a {a\Psi_{mk}\over 2 G M_p}\sin m\Delta,\eqno(6.20)$$ $$\left(d\varpi \over dt\right)_s={n_a\over \epsilon} {a\Psi_{mk}\over 2 G M_p}\cos m\Delta,\eqno(6.21)$$ \end{plain} \noindent where one has defined \begin{plain} $$\Psi_{mk}\equiv a{d\Phi_{mk}\over da} + 2m\Phi_{mk}.\eqno(6.22)$$ \end{plain} The question of the effect of multiple resonant terms is complex and not addressed in these notes. The interested reader is referred to \cite{BGT83a} and \cite{GT81}. \subsection{Pressure tensor}\label{sec:vis} The radial and tangential perturbing accelerations due to the ring pressure tensor are given in Eqs.~(3.17) and (3.18). 
However, it is preferable for our purposes to derive the perturbing acceleration produced on a streamline of mass $M$ by the material outside it. The pressure tensor captures the effect of collisions of one streamline with its neighbors. The vertically integrated acceleration produced on any streamline in the continuum limit reads: \begin{plain} $$R_{vis}=-{1\over a\sigma_0}{\partial aP_{rr}\over \partial a} +\mathcal{O}(\epsilon),\eqno(6.23)$$ $$S_{vis}=-{1\over a^2\sigma_0}{\partial a^2 P_{r\theta}\over\partial a} +\mathcal{O}(\epsilon),\eqno(6.24)$$ \end{plain} \noindent The only subtlety here is that the streamline $i$ boundaries lie at $(a_i+a_{i\pm 1})/2$; we have seen in section 5 that the pressure tensor depends on azimuth only through $M'$, which therefore differs from $M'_i$ at these two boundaries. In the continuum limit, a similar problem arises, and the angular average needs an integration by parts before it can be performed: $\langle R \cos M\rangle = \langle (1/\sigma_0)(\partial P_{rr}/\partial a) \cos M \rangle = \langle (1/\sigma_0)[\partial (P_{rr}\cos M)/\partial a + (d(m\Delta)/da) P_{rr}\sin M ] \rangle$. Both the discretized and continuum approaches lead to the same result and the phase-averaged perturbation equations read \begin{plain} $$\left(da_i\over dt\right)_{vis}=-{4\pi \over a n_a M_i}\Delta^{\pm} (a^2 a_{r\theta}),\eqno(6.25)$$ $$\left(d\epsilon_i\over dt\right)_{vis}={2\pi\over n_a M_i}\left[ -\Delta^{\pm} (t_1\cos\gamma+t_2\sin\gamma)+ (t^i_1\sin\gamma_i -t^i_2\cos\gamma_i)\Delta^{\pm} (m\Delta)\right],\eqno(6.26)$$ $$\left(d\varpi_i\over dt\right)_{vis}={2\pi\over n_a\epsilon_i M_i}\left[ \Delta^{\pm} (t_1\sin\gamma-t_2\cos\gamma)+ (t^i_1\cos\gamma_i +t^i_2\sin\gamma_i)\Delta^{\pm} (m\Delta)\right],\eqno(6.27)$$ \end{plain} \noindent where $M_i$ is the mass of streamline $i$, $\Delta^{\pm} X \equiv X^{i,i+1}-X^{i-1,i}$ and $X^{ij}$ is evaluated at the boundary between streamlines $i$ and $j$ [i.e.
in $(a_i+a_j)/2$, $|\Delta a|$ being the inter-streamline distance]; $t_1,t_2$ and $a_{r\theta}$ are the quantities introduced in Eqs.~(5.28) through (5.31). This result differs from \cite{BGT83a}, \cite{BGT85} and \cite{BGT86} in several respects. First, $\sigma_0$ has been included here in the pressure tensor components. The rationale for this difference is that the full pressure tensor divergence defines the acceleration on a fluid particle and is therefore included in the difference $\Delta^{\pm} X$. This correction is unimportant in the WKB limit of \cite{BGT86} and in the nearly incompressible limit of \cite{BGT85}. Also, Eqs.~(6.26) and (6.27) include a contribution --- the $\Delta^{\pm}(m\Delta)$ term --- that is missing from the various BGT papers. This term provides in fact the dominant stress tensor contribution in the tight-winding approximation that is relevant for density waves; BGT were nevertheless able to recover the correct result from the other term by an (incorrect) non-local approximation to $\gamma$. Finally, factors $a,a^2$ have been pulled out of Eqs.~(6.26) and (6.27), which are relevant only for perturbed flows, but not from Eq.~(6.25), which applies to circular flows as well and where this dependence is important. \subsection{Mass, energy, and angular momentum budget of ring systems} The question of energy dissipation and viscous angular momentum transport has been addressed in section 5. Here, we wish to derive more general expressions, in Eulerian variables ($a,\varphi$). Furthermore, as we are mostly interested in radial transport, all the following expressions will be integrated over $\varphi$. For this purpose, we note $\langle X\rangle$ the azimuthal average of any quantity $X$.
We have \begin{plain} $$\langle X\rangle ={1\over 2\pi}\int_{0}^{2\pi}d\varphi\ X ={1\over 2\pi}\int_{0}^{2\pi}dM\ X ={1\over 2\pi}\int_{0}^{2\pi}dM'\ X,\eqno(6.28)$$ \end{plain} \noindent so that this definition is in agreement with the one used in section 5; note also that all the perturbation equations of the previous sections should more properly have been written in this bracket notation. The three quantities we are interested in are the ring unperturbed surface density $\sigma_0$, and the energy ${\cal E}$ and angular momentum ${\cal H}$ per unit unperturbed radial length $a$, \begin{plain} $${\cal E}=2\pi a\sigma_0 E,\eqno(6.29)$$ $${\cal H}=2\pi a\sigma_0 H.\eqno(6.30)$$ \end{plain} \noindent By definition, these are azimuthally averaged quantities. In this section we depart from the semi-Lagrangian approach adopted so far and adopt a semi-Eulerian one instead. As such, we are interested in the time evolution in ($a,\varphi$) coordinates instead of using these quantities as Lagrangian labels. The main reason for this departure is that conserved quantities not only provide constraints on the dynamics, but also lead to a powerful amplitude equation for density waves.
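The equality of the $\varphi$ and $M$ averages in Eq.~(6.28), which holds for any $2\pi$-periodic integrand when $M=m(\varphi-\Omega_p t)+m\Delta$ with integer $m$, can be illustrated numerically (the integrand and phase offset below are arbitrary):

```python
import math

def avg_phi(f, m, c, n=4096):
    # average of f(M(phi)) over phi in [0, 2 pi), with M = m*phi + c
    return sum(f(m * 2 * math.pi * k / n + c) for k in range(n)) / n

def avg_M(f, n=4096):
    # average of f(M) over one period of M
    return sum(f(2 * math.pi * k / n) for k in range(n)) / n

f = lambda M: math.cos(M) / (1 - 0.6 * math.cos(M))  # arbitrary 2 pi-periodic integrand
for m in (1, 2, 5):
    assert math.isclose(avg_phi(f, m, 0.37), avg_M(f), rel_tol=1e-9)
```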
Note that, for any quantity $X=X(a,\varphi,t)$, one can always write \begin{plain} $${dX\over dt}={\partial X\over\partial t}+ {da\over dt}{\partial X\over\partial a}+ {d\varphi\over dt}{\partial X\over\partial\varphi}.\eqno(6.33a)$$ \end{plain} The equation of conservation of mass is obtained from Eq.~(3.16) either by change of variables or by more direct means and reads: \begin{plain} $${\partial\sigma_0\over\partial t}+{1\over a}{\partial\over\partial a}\left(a\sigma_0{da\over dt}\right)+{1\over a}{\partial\over\partial \varphi}\left(a\sigma_0{d\varphi\over dt}\right)=0,\eqno(6.31)$$ \end{plain} \noindent and yields, after azimuthal average, \begin{plain} $${\partial\sigma_0\over\partial t}+{1\over a}{\partial\over\partial a}\left(a\sigma_0\left\langle{da\over dt}\right\rangle\right)=0.\eqno(6.32)$$ \end{plain} Note that the designation of unperturbed surface density for $\sigma_0$ is somewhat improper because it evolves due to the radial drift that the perturbations (in particular the ring internal stress) induce. The equations for the ring energy ${\cal E}$ and angular momentum ${\cal H}$ are most easily derived in integrated form. First, note that for any quantity $X$, using the mass conservation constraint Eq.
(6.31), one has \begin{plain} $$a\sigma_0{dX\over dt}={\partial a\sigma_0 X\over\partial t}+ {\partial\over\partial a}\left(a\sigma_0 X{da\over dt}\right)+ {\partial\over\partial\varphi}\left(a\sigma_0 X{d\varphi\over dt}\right),\eqno(6.33b)$$ \end{plain} \noindent so that after integration between two unperturbed radii\footnote{This integration is very handy to tackle the change of variable ($r,\theta$) to ($a,\varphi$) for the pressure tensor contribution.} $a_1$ and $a_2$ and azimuthal average, one obtains (assuming $X$ independent of $\varphi$ and defined per unit mass, such as $E$ or $H$) \begin{plain} $$\eqalign{\int_{a_1}^{a_2}da{\partial\over\partial t}\left(2\pi a\sigma_0 X\right) = & -\int_{a_1}^{a_2}da{\partial\over\partial a}\left(2\pi a\sigma_0 X\left\langle{da\over dt}\right\rangle\right) \cr & + \int_{a_1}^{a_2}da\ 2\pi a\sigma_0\left\langle{dX\over dt}\right\rangle_{pert}.}\eqno(6.34)$$ \end{plain} We can apply this result to the computation of the equations of evolution of ${\cal E}$ and ${\cal H}$, by noting that\footnote{$(\Omega_a/\kappa_a)^2\epsilon^2\simeq \epsilon^2$ consistently with earlier simplifications, but we keep the correct $J_k$ terms to zeroth order in eccentricity.} \begin{plain} $${dE\over dt}={\Omega^2 a\over 2}{da\over dt},\eqno(6.35)$$ $${dH\over dt}={1\over 2}\Omega a{da\over dt}-\Omega a^2\epsilon{d\epsilon\over dt}=rS.\eqno(6.36)$$ \end{plain} Let us consider first the perturbations induced by the satellite. Note that as the potential Eq. (6.16) is uniformly rotating at angular speed $\Omega_p$, the changes in specific energy and angular momentum are related by Jacobi's constant, and \begin{plain} $$\left\langle dE\over dt\right\rangle_{sat}-\Omega_p\left\langle dH\over dt\right\rangle_{sat}=0.
\eqno(6.37)$$ \end{plain} \noindent From Eqs.~(6.35) and (6.19), one has \begin{plain} $$\int_{a_1}^{a_2} da\ 2\pi a\sigma_0 \left\langle{dE\over dt}\right\rangle_{sat}= \Omega_p\int_{a_1}^{a_2} da\ \pi ma\sigma_0\epsilon\Psi_{mk}\sin m\Delta=\Omega_p\int_{a_1}^{a_2}da\ {\cal T}_s,\eqno(6.38)$$ \end{plain} \noindent where ${\cal T}_s\equiv 2\pi a\sigma_0\langle{dH/dt}\rangle_{sat}=\pi ma\sigma_0 \Psi_{mk}\epsilon\sin m\Delta$ is the torque density due to the satellite. Let us now consider the contribution of the ring self-gravity. The self-gravitational potential is also uniformly rotating with angular speed $\Omega_p$, so that the changes in specific energy and angular momentum are again related by Jacobi's constant. It is useful to introduce the rate of transfer of specific energy (resp. specific angular momentum) $L_E^{sg}$ (resp. $L_H^{sg}$) from all streamlines with radii $a_1<a$ to all streamlines of radii $a_2>a$ (which is the opposite of the rate of transfer from $a_2>a$ to $a_1<a$). By definition, and from Eqs.~(6.12) and (6.13), one has \begin{plain} $$\eqalign{L_E^{sg}=\Omega_p L_H^{sg}=-4\pi Gm\Omega_p\int_0^a da_1\ \sigma_0(a_1) a_1\epsilon_1\int_a^{+\infty}da_2\ & \sigma_0(a_2)a_2\times \cr & {H(q_{12}^2)q_{12}\sin \gamma_{12}\over a_1-a_2}.}\eqno(6.39)$$ \end{plain} \noindent The first equality follows from Eq.~(6.12) by noting that $\Omega_p=(m-1)\Omega/m$\footnote{For $m\neq 1$.} or simply by noting that the self-gravitational potential is stationary in the rotating frame so that Jacobi's integral applies.
With this definition, one easily checks that \begin{plain} $$\int_{a_1}^{a_2}da\ 2\pi a\sigma_0\left\langle{dE\over dt}\right\rangle_{sg}=-L_E^{sg}(a_2)+L_E^{sg}(a_1)=-\int_{a_1}^{a_2}da\ {\partial\over\partial a}L_E^{sg},\eqno(6.40)$$ $$\int_{a_1}^{a_2}da\ 2\pi a\sigma_0\left\langle{dH\over dt}\right\rangle_{sg}=-L_H^{sg}(a_2)+L_H^{sg}(a_1)=-\int_{a_1}^{a_2}da\ {\partial\over\partial a}L_H^{sg},\eqno(6.41)$$ \end{plain} \noindent so that $L_E^{sg}$ and $L_H^{sg}$ are energy and angular momentum luminosities. The preceding expressions are correct only if $|a_1-a|,|a_2-a|\ll a$, a condition which is satisfied in all cases of interest. We can finally compute the contribution of the ring internal stress, which is somewhat more delicate to handle. First, we cannot use the discrete equations of section 6.3 as the equation for $a$ keeps only the leading order term, whereas, as we shall see now, energy dissipation depends on the next-to-leading order terms. Instead, we revert to the continuous versions of the components of the perturbing acceleration as given in Eqs.~(3.17) and (3.18).
Expressing the various derivatives in terms of $\partial/\partial a$ and $\partial/\partial\varphi$, and after some integrations by parts, one obtains to leading orders in $\epsilon$ \begin{plain} $$\int_{a_1}^{a_2}da\ 2\pi a\sigma_0\left\langle{dE\over dt}\right\rangle_{vis}= \int_{a_1}^{a_2}da\ \left[-{\partial L_E^{vis}\over\partial a}+\pi\Omega a(2qt_1-3a_{r\theta})\right],\eqno(6.42)$$ $$\int_{a_1}^{a_2}da\ 2\pi a\sigma_0\left\langle{dH\over dt}\right\rangle_{vis}=- \int_{a_1}^{a_2}da\ {\partial L_H^{vis}\over\partial a},\eqno(6.43)$$ \end{plain} \noindent where $L_E^{vis}$ and $L_H^{vis}$ are defined by \begin{plain} $$\eqalign{ L_E^{vis}=2\pi\Omega a^2[a_{r\theta} & +m\epsilon(2c_{r\theta}-s_{\theta\theta})\cos\gamma+ m\epsilon(2s_{r\theta}+c_{\theta\theta})\sin\gamma \cr & +\epsilon(s_{rr}\cos\gamma -c_{rr}\sin\gamma)],}\eqno(6.44)$$ $$\eqalign{ L_H^{vis}=2\pi a^2[a_{r\theta} & +m\epsilon(2c_{r\theta}-s_{\theta\theta}) \cos\gamma+m\epsilon(2s_{r\theta}+c_{\theta\theta})\sin\gamma \cr & - 2\epsilon(c_{r\theta}\cos\gamma +s_{r\theta}\sin\gamma)].}\eqno(6.45)$$ \end{plain} \noindent The terms of order $\epsilon$ are negligible compared with $a_{r\theta}$ but note that their derivatives may be comparable to $2qt_1-3a_{r\theta}$ in Eq.~(6.42). It is nevertheless legitimate to neglect them, as these two contributions are of a different qualitative nature: the former conserve energy and angular momentum, while the latter dissipate energy. Small energy and angular momentum redistribution terms do not affect the long term evolution, but small dissipation does. These terms are kept, however, as they are needed for part of the concluding discussion of section 7.2.3. When these terms are neglected, $L_H^{vis}$ reduces to the quantity introduced in Eq.~(5.82) and $L_E^{vis}=\Omega L_H^{vis}$.
Note that the second term on the right-hand side of Eq.~(6.42) represents the rate of dissipation of macroscopic energy, and is equal to the rate of dissipation of energy in collisions as computed in Eq.~(5.79). We are now in position to write down the equations of evolution of energy and angular momentum in local form. Introducing \begin{plain} $$L_E^c=2\pi a\sigma_0 E\left\langle{da\over dt}\right\rangle,\eqno(6.46)$$ $$L_H^c=2\pi a\sigma_0 H\left\langle{da\over dt}\right\rangle,\eqno(6.47)$$ \end{plain} \noindent one can finally express them as \begin{plain} $${\partial {\cal E}\over\partial t}+{\partial\over\partial a}\left( L_E^c+L_E^{sg}+L_E^{vis}\right)=\Omega_p{\cal T}_s +\pi\Omega a(2qt_1-3 a_{r\theta}),\eqno(6.48)$$ $${\partial {\cal H}\over\partial t}+{\partial\over\partial a}\left( L_H^c+L_H^{sg}+L_H^{vis}\right)={\cal T}_s.\eqno(6.49)$$ \end{plain} These equations express the fact that the change in ${\cal E}$ and ${\cal H}$ is due to flux terms on the one hand (the advective, self-gravity and viscous fluxes) and to source and sink terms (the satellite and the ring internal stress). Note that Eq.~(6.49) is valid even for more general expressions of the satellite torque than the one derived here. This equation allows us to derive an important feature of the confinement of rings by satellites. A complete discussion of the shepherding process is outside the scope of this lecture, so that we will only briefly recall some essential facts. It is well-known, for example, that the outer edges of the A and B rings of Saturn correspond to resonances with satellites, and it has been argued that the angular momentum exchanges between the satellite and the ring material at the edge are responsible for the survival of the edge against viscous diffusion \citep{BGT82}. The same thing is very likely to be true for all the known narrow rings \citep{GT79a,BGT89}.
In the case of Saturn's F ring and Uranus' $\epsilon$ ring at least, the two ``shepherd'' satellites bracketing each ring have been observed by the {\it Voyager} probes. The process works as follows. As already mentioned in section 2, the gravitational interaction between a ring and a satellite results in an outward flow of angular momentum: from the ring to the satellite if the satellite lies outside the ring (in which case $T_s<0$) and from the satellite to the ring if it lies inside (in which case $T_s>0$). We have seen also in section 5.3.2 that for not too strongly perturbed rings, the viscous stress induces an outward transport of angular momentum, from the inner edge to the outer edge of a narrow ringlet, so that the ringlet tends to spread. The spreading will be halted when the angular momentum fluxes induced by the satellites balance the flux due to the viscous stress. Such an equilibrium is possible, because the satellite torques decrease in absolute value when the distance between the ring and the satellite increases: therefore, if the satellite torque is too small, the ring will spread until the torque reaches the right magnitude; if it is too large, the ring will contract until the satellite torque and the viscous torque have again adjusted. Of course, this equilibrium is not permanent: the inner satellite loses angular momentum, while the other gains some, and they are repelled by the ring. However, as they are usually much more massive than the rings, one can expect that the system will survive in quasi-equilibrium much longer than the ring alone would. Nevertheless, the evolution time-scales computed from these angular momentum exchanges are still in general uncomfortably short. Let us now turn to the implication of Eq.~(6.49) for this scenario. Once the quasi-equilibrium just described is obtained, $\partial{\cal H} /\partial t\simeq 0$ and $L_H^c\simeq 0$.
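This torque balance can be caricatured with a toy model, assuming (for illustration only) a one-sided satellite torque falling off as the inverse cube of the ring--satellite separation $d$, balanced against a fixed outward viscous angular-momentum luminosity; both constants are arbitrary, and the equilibrium separation is found by bisection:

```python
# Toy balance at a shepherded edge: a one-sided satellite torque, assumed
# (for illustration) to scale as T_s(d) = -K/d**3 with separation d,
# against a fixed outward viscous angular-momentum luminosity L_vis.
K, L_vis = 1.0, 8.0        # arbitrary illustrative constants

def imbalance(d):
    # net outward angular-momentum flow at the edge
    return L_vis - K / d ** 3

lo, hi = 1e-3, 10.0        # bracketing separations
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if imbalance(mid) > 0:
        hi = mid           # torque too weak: equilibrium lies closer in
    else:
        lo = mid           # torque too strong: equilibrium lies farther out
d_eq = 0.5 * (lo + hi)     # converges to (K/L_vis)**(1/3) = 0.5
assert abs(imbalance(d_eq)) < 1e-9
```

The monotonic decrease of the torque with distance is what makes the equilibrium stable, exactly as argued in the text.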
Ignoring for simplicity the self-gravity term, the integration of this equation from, say, the inner edge $a_i$ to some radius $a$ inside the ring yields $L_H^{vis}(a)=\int_{a_i}^a {\cal T}_s\,da$ (a similar result holds for the outer edge). Taking the limit $a\rightarrow a_i$, one obtains $L_H^{vis}(a_i)=0$, a natural result considering that the torque has only a finite density (we have assumed that there is no very low optical depth material outside the ring). As unperturbed rings have a positive viscous luminosity, this condition can only be satisfied if the rate of perturbation of the edge is high enough so that the condition of reversal of the angular momentum flux is reached. One sees again that such an equilibrium can be obtained and is stable: if the satellite perturbation on the edge is too small, the ring spreads and the edge moves towards the satellite, whose perturbation increases as it gets closer, until the right value is obtained. Note that angular momentum reversal is a necessary feature of the shepherding process (the same conclusion is reached from a discussion of ring energetics; see \citealt{BGT84}). \section{Applications to ring dynamics} In this section, we wish to apply the apparatus of the previous sections to actual ring dynamical problems: the model of self-gravity for the rigid precession of elliptic rings, and the description of linear and nonlinear density waves excited at Lindblad resonances with external satellites. These applications are provided only as examples of use of the formalism; therefore only the major aspects of these questions will be addressed, in order to keep the emphasis on physical issues. Two of the major applications of the formalism, the discussion of the evolution of ring and satellite eccentricities, and the detailed discussion of the shepherding mechanism are not described here, although they are of great importance for ring-satellite system evolution. The reader is referred to the specialized literature (see, e.g.
\citealt{GT80,GT81}, and \citealt{BGT82,BGT89}) on these topics. \subsection{The self-gravity model for elliptic rings} Most of the Uranian rings (in particular, the $\alpha$, $\beta$, $\gamma$, $\delta$ and $\epsilon$) and some of the Saturnian ringlets (e.g. the Titan and Huygens ringlets), which are all very narrow features -- from a few kilometers to a few tens of kilometers wide for radii of the order of $10^5$ kilometers -- are known to be eccentric, as discussed in sections\footnote{Many of these rings are also inclined, but for simplicity we set the inclination to zero.} IV.3.1 and IV.3.3. These rings share a number of interesting features: their apses are almost aligned (a small apsidal shift has been detected in some rings, in particular the $\alpha$ and $\beta$ rings of Uranus); they also exhibit a positive difference of eccentricity between the outer and inner edge (except, maybe, the $m=0$ mode of the $\gamma$ ring). The basic observational and dynamical properties of the Uranian rings have been reviewed by \cite{EN84}. These structures are subject to various dynamical effects. The planet quadrupole moment tends to destroy the observed apse alignment, but this tendency can be balanced by the self-gravity of the ring (\citealp{GT79a,GT79b}; see below). The internal stress also acts on the apse alignment and on the ring mean eccentricity, and produces some radial spreading of the ring. This spreading can be overcome by the action of satellites (see section 6), which also influence the evolution of the mean eccentricity. In this section, a simplified discussion of the dynamics of narrow ringlets will be presented, mostly taken from the analysis by \cite{BGT83a}. First, the changes in epicyclic semimajor axes will be ignored, under the assumption that an equilibrium between viscous spreading and the action of the satellites has been reached. Second, the ring will be represented with only two streamlines with equal masses $M_1=M_2=M_0/2$.
Small quantities are assumed to be ordered by two small parameters, $p_1$ and $p_2$ such that $p_1\ll p_2\ll 1$. The two streamlines have semimajor axes $a_1$ and $a_2$, with $a_1<a_2$. Consistently with the approximations performed in the previous sections, one defines $a=(a_1+a_2)/2$, and assumes $a_1=a_2=a$ in the perturbation terms, except in the difference $\Delta a_{21}=\delta a$. The difference of eccentricity between the outer and inner edge $\Delta\epsilon_{21}\equiv\delta\epsilon$ is taken to be $O(p_1)$, whereas the mean eccentricity $\epsilon\equiv (\epsilon_1+\epsilon_2)/2$ is $O(p_2)$, so that $\delta\epsilon/\epsilon\ll 1$, as observed. We also define $m\Delta=m(\Delta_1+\Delta_2)/2$ and $\delta(m\Delta)=m(\Delta_2-\Delta_1)$. Furthermore, as $q\sim 1$, $\delta a/a$ is $O(p_1)$, as is $\epsilon\delta(m\Delta)$. Typically, the eccentricities range from a few times $10^{-4}$ to $10^{-2}$, whereas the eccentricity differences $\delta\epsilon$ vary from a few times $10^{-5}$ to a few times $10^{-4}$, so that the approximations $p_1\ll 1$ and $p_2\ll 1$ are very well satisfied. On the other hand, $\delta\epsilon/\epsilon$ ranges from $0.03$ to $0.5$, so that the approximation $p_1/p_2\ll 1$ is cruder but not critical. With these orderings, to leading order in the various small quantities, one has \begin{plain} $$q_{ij}\cos\gamma_{ij}=a{\delta \epsilon\over\delta a},\eqno(7.1)$$ $$q_{ij}\sin\gamma_{ij}=a\epsilon_j{\delta(m\Delta)\over\delta a},\eqno(7.2)$$ $$q^2_{12}=q^2_{21}=q^2=\left(a\delta\epsilon\over\delta a\right)^2+ \left(a\epsilon{\delta(m\Delta)\over\delta a}\right)^2,\eqno(7.3)$$ $$\gamma_{12}-\gamma_{21}=\delta(m\Delta)\sim O(p_1/p_2).\eqno(7.4)$$ \end{plain} We are now in a position to derive the equations of evolution of our simplified system.
First, note that the equation for $m\Delta_i$ is related to the equation of evolution of $\varpi_i$ by the streamline condition $M=m(\varphi-\Omega_p t)+m\Delta$, so that \begin{plain} $${d(m\Delta_i)\over dt}=(1-m)\Omega_i+m\Omega_p-{d\varpi_i\over dt}.\eqno(7.5)$$ \end{plain} Note that Eq.~(7.5) generalizes Eq.~(4.59) to the case where $\Delta$ is time dependent. By definition, $m\Delta_i$ contains only periodic terms; the secular terms are accounted for by $\Omega_p$. From Eqs. (4.21), (4.22), (6.13), (6.14), (6.26) and (6.27) applied to the evolution of the eccentricity and apsidal shift of the two streamlines, one can derive the evolution equations for the four quantities $\epsilon$, $m\Delta$, $\delta\epsilon$, and $\delta(m\Delta)$. They read, to leading order in the various small quantities: \begin{plain} $${d\delta\epsilon\over dt}=(\Omega_{sg}-\lambda_2)\epsilon\delta(m\Delta)+\lambda_1\delta\epsilon ,\eqno(7.6)$$ $${d\delta(m\Delta)\over dt}=\delta\Omega_{plan}-(\Omega_{sg}-\lambda_2){\delta\epsilon\over\epsilon}+ \lambda_1\delta(m\Delta),\eqno(7.7)$$ $${d\epsilon\over dt}=-{\Omega_{sg}-\lambda_2\over 4}\delta\epsilon\delta (m\Delta),\eqno(7.8)$$ $${d(m\Delta)\over dt}=m\Omega_p-\Omega_{plan},\eqno(7.9)$$ \end{plain} \noindent where $\Omega_{sg}$, $\lambda_1$, $\lambda_2$, $\Omega_{plan}$ and $\delta\Omega_{plan}$ are quantities with the dimension of a frequency, defined by \begin{plain} $$\Omega_{sg}={n\over\pi}{M_0\over M_p}\left(a\over\delta a\right)^2 H(q^2),\eqno(7.10)$$ $$\lambda_1={2 t_1\over q n \sigma_0(\delta a)^2},\eqno(7.11)$$ $$\lambda_2=-{2 t_2\over q n \sigma_0(\delta a)^2},\eqno(7.12)$$ $$\Omega_{plan}= (m-1)n+{3\over 2}J_2 n\left(R_p\over a\right)^2\left[1+{m-1\over 2}\right],\eqno(7.13)$$ $$\delta\Omega_{plan}= \left({3\over 2}(m-1)n+{21\over 4}J_2 n\left(R_p\over a\right)^2\left[1+{m-1\over 2}\right]\right){\delta a\over a}.
\eqno(7.14)$$ \end{plain} \noindent In these equations, $R_p$ is the equatorial radius of the planet, $n=(G M_p/a^3)^{1/2}$ and the surface density is related to the ring mass by $2\pi a\delta a\sigma_0=M_0$. $\Omega_{sg}$ is a characteristic frequency imposed by the self-gravity of the ring; for typical values of the ring surface density ($\sigma_0\sim 50$ g/cm$^2$), $\Omega_{sg}^{-1}$ is of the order of a few years to a few tens of years (e.g. $\sim 9$ years for the $\alpha$ ring). $\lambda_1^{-1}$ and $\lambda_2^{-1}$ are characteristic time-scales imposed by the ring internal stress. They are usually much longer than $\Omega_{sg}^{-1}$; e.g. assuming $t_1, t_2\sim\sigma_0 v^2$ with $v\sim 1$ mm/s, $\lambda_1^{-1}\sim 90$ years for the $\alpha$ ring. This reflects the fact that the ring internal stress produces a very weak force, even compared to the ring self-gravity. Finally, $\Omega_{plan}$ and $\delta\Omega_{plan}$ are frequencies imposed by the planet. For simplicity, terms smaller than $J_2$ have been neglected. Note that even the $J_2$ term is completely negligible for $m\neq 1$, in which case $\Omega_{plan}\sim n$ and $\delta\Omega_{plan}\sim n \delta a/a$. For $m=1$ (i.e. purely elliptic modes), the only remaining contribution is that due to $J_2$: $\Omega_{plan}\sim J_2 n$ and $\delta\Omega_{plan}\sim J_2 n \delta a/a$, so that these two quantities are $10^2$ or $10^3$ times smaller for $m=1$ than for $m\neq 1$. It has been implicitly assumed that the right-hand sides of Eqs.~(7.6) through (7.9) are linear in the unknowns. This is not true, as $\Omega_{sg}$, $\lambda_1$ and $\lambda_2$ depend on $q$. However, $q$ is always different enough from unity that the dependence of these three frequencies on $q$ can be ignored, and this assumption is made in the remainder of this subsection. 
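The orderings among these frequencies can be made concrete with a minimal numerical sketch of Eqs.~(7.10) and (7.11). All inputs below ($\delta a$, $\sigma_0$, the velocity dispersion $v$, and the kernel value $H(q^2)$) are assumed, illustrative values, not fitted ones:

```python
# Order-of-magnitude evaluation of Omega_sg (Eq. 7.10) and lambda_1 (Eq. 7.11)
# for an alpha-ring-like ringlet around Uranus.  All parameter values are
# illustrative assumptions.
import math

G, M_p = 6.674e-11, 8.68e25     # SI units; Uranus mass
a, da = 4.47e7, 1.0e4           # ring radius ~ 44 700 km, width ~ 10 km (assumed)
sigma0 = 500.0                  # kg m^-2, i.e. ~ 50 g cm^-2 (assumed)
v = 1.0e-3                      # m/s, assumed velocity dispersion
H_q2, q = 0.5, 1.0              # order-unity kernel value, and q ~ 1

n = math.sqrt(G * M_p / a**3)                  # mean motion
M0 = 2.0 * math.pi * a * da * sigma0           # ring mass
Omega_sg = (n / math.pi) * (M0 / M_p) * (a / da)**2 * H_q2   # Eq. (7.10)
t1 = sigma0 * v**2                             # assumed stress magnitude
lam1 = 2.0 * t1 / (q * n * sigma0 * da**2)     # Eq. (7.11), up to sign

year = 3.156e7
t_sg_years = 1.0 / (Omega_sg * year)
t_vis_years = 1.0 / (lam1 * year)
print(t_sg_years, t_vis_years)
```

With these assumed inputs one finds $\Omega_{sg}^{-1}$ of a few years and $\lambda_1^{-1}$ of a few hundred years; the precise values depend sensitively on the poorly known $\delta a$, $\sigma_0$ and $v$, but the hierarchy $\Omega_{sg}\gg|\lambda_1|$ quoted above is robust.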
Note also that the contribution of the satellites has not been included, as the equations derived in section 6.2 apply to a single isolated resonance, whereas the cumulative effect of all resonances should be considered. We are now in a position to describe some simple consequences of Eqs.~(7.6) through (7.9). Note first that Eq.~(7.9) uncouples from the other three, and implies that $d(m\Delta)/dt$ is a constant. This constant must be equal to zero, in accordance with the interpretation of $\Omega_p$ as the rotation velocity of the pattern defined by the streamlines. Therefore, Eq.~(7.9) is in fact the relation imposing $\Omega_p$: \begin{plain} $$\Omega_p=\Omega_{plan}/m.\eqno(7.15)$$ \end{plain} This shows that a narrow ring described by an $m\neq 1$ mode precesses at a rate $\sim n$, i.e. $\sim J_2^{-1}$ faster than a purely elliptic ($m=1$) ring. This feature is a direct consequence of the dominance of the planet over the motions, Eq.~(4.51). Note also that the time-scale of evolution of $\epsilon$ is $\sim (p_2/p_1)^2$ longer than the time-scale of evolution of $\delta\epsilon$ and $\epsilon\delta(m\Delta)$. Therefore, we can look into the evolution of these last two quantities while assuming that $\epsilon$ is constant in time, to first approximation, which effectively uncouples Eq.~(7.8) from Eqs.~(7.6) and (7.7). These two equations show that a narrow ring has an equilibrium configuration (i.e., $d\delta\epsilon/dt=0$ and $d\delta(m\Delta)/dt=0$) for \begin{plain} $${\delta\epsilon_0\over\epsilon}={(\Omega_{sg}-\lambda_2)\delta\Omega_{plan} \over \lambda_1^2+(\Omega_{sg}-\lambda_2)^2}\simeq{\delta\Omega_{plan}\over \Omega_{sg}},\eqno(7.16)$$ $$\delta(m\Delta)_0=-{\lambda_1\over\Omega_{sg}-\lambda_2}{\delta\epsilon_0 \over\epsilon}\simeq -{\lambda_1\over \Omega_{sg}}{\delta\epsilon_0\over\epsilon},\eqno(7.17)$$ \end{plain} \noindent where $\Omega_{sg}\gg \lambda_1,\lambda_2$ has been used in the second equalities. Furthermore, the general solution of Eqs.
(7.6) and (7.7) is given by \begin{plain} $$\delta(m\Delta)=\delta(m\Delta)_0+A\exp(\lambda_1 t)\cos[(\Omega_{sg}-\lambda_2)t+\varphi_0],\eqno(7.18)$$ $$\delta\epsilon=\delta\epsilon_0-\epsilon A\exp(\lambda_1 t)\sin[(\Omega_{sg}-\lambda_2)t+\varphi_0],\eqno(7.19)$$ \end{plain} \noindent where $A$ is an arbitrary (but small) amplitude. The general solution consists of oscillations around equilibrium, which are damped if the viscous coefficient $\lambda_1<0$, as has usually been assumed in the literature. This is the case in the dilute approximation, but we have seen in chapter V that the opposite is true for dense systems, at least for small values of $q$. Let us assume for the time being that $\lambda_1<0$, so that the system is driven to its equilibrium point described by Eqs.~(7.16) and (7.17). Note that Eqs.~(7.18) and (7.19) show that $t_2$ is a pressure-like coefficient, while $t_1$ has a viscous-like action on the system. The meaning of Eq.~(7.16) is not especially obvious in the compact form in which it is displayed. Let us for definiteness consider the case of an $m=1$ mode. This equation then reduces to \begin{plain} $$\delta\epsilon_0={21\pi\epsilon\over 4}J_2{M_p\over M_0}\left(R_p\over a\right)^2\left(\delta a\over a\right)^3{1\over H(q^2)}.\eqno(7.20)$$ \end{plain} For $m\neq 1$, one obtains \begin{plain} $$\delta\epsilon_0={3\pi(m-1)\epsilon\over 2} {M_p\over M_0}\left(\delta a\over a\right)^3{1\over H(q^2)}.\eqno(7.21)$$ \end{plain} One sees that these equations relate parameters describing the overall shape of the ring, $a$, $\delta a$, $\epsilon$, $\delta\epsilon$, to the mass of the ring $M_0$, and indeed this relation has been used in the literature to estimate the mass of the $\alpha$, $\beta$, $\delta$ and $\epsilon$ rings of Uranus (the data analysis has not yielded values for $\delta\epsilon$ for the other rings, due to problems which are only partly understood).
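As a minimal sketch of this mass determination, Eq.~(7.20) can be inverted for $M_0$ from the shape of an $m=1$ ringlet. The numerical inputs below are rough, $\alpha$-ring-like values and are assumptions for illustration only; $H(q^2)$ is simply set to an order-unity value:

```python
# Inverting the equilibrium relation (7.20) for the ring mass M0 of an m = 1
# ringlet.  All ring-shape inputs are assumed, illustrative values.
import math

M_p, R_p, J2 = 8.68e25, 2.556e7, 3.34e-3   # Uranus: mass (kg), radius (m), J2
a, da = 4.47e7, 7.0e3                      # ring radius and width (m), assumed
eps, deps0 = 7.6e-4, 4.6e-5                # mean eccentricity, delta eps_0 (assumed)
H_q2 = 0.5                                 # assumed kernel value

# Eq. (7.20) solved for M0:
M0 = (21.0 * math.pi * eps / (4.0 * deps0)) * J2 * M_p \
     * (R_p / a)**2 * (da / a)**3 / H_q2
sigma0_cgs = M0 / (2.0 * math.pi * a * da) / 10.0   # implied surface density, g cm^-2
print(M0, sigma0_cgs)
```

With these assumed inputs the implied surface density comes out at the level of $\sim 10$ g/cm$^2$; the reader may compare this with the post-{\it Voyager} discussion of the self-gravity model below.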
Furthermore, as $|\lambda_1|\ll\Omega_{sg}$, $\epsilon\delta(m\Delta)_0\ll\delta\epsilon$, so that $q\simeq a\delta\epsilon/\delta a$, and $\gamma\simeq 0$. Therefore, the width of the ring, which is given by $W=J\delta a$ as a function of azimuth, has the same azimuthal behavior as the ring radius $r=(r_1+r_2)/2$, so that $W\propto r$, as observed. Note also that these two relations predict $\delta\epsilon_0>0$ for $m>0$, as observed. Because of these successes, the self-gravity model was for a long time widely regarded as the correct explanation of the rigid precession of narrow elliptic rings. Concerning the evolution of the eccentricity, note that Eq.~(7.8) implies that $d\epsilon/dt\simeq -\Omega_{sg}\delta\epsilon_0 \delta(m\Delta)_0\sim \lambda_1(\delta\epsilon_0)^2/\epsilon <0$ if $\lambda_1<0$, so that \cite{BGT83a} were motivated to assume that the observed eccentricities were the result of a balance between viscous damping and the excitation by the ring ``shepherd'' satellites. Let us conclude this subsection with a number of comments. First, if $\lambda_1$ is positive, the oscillations of $\delta\epsilon$ and $\delta(m\Delta)$ are amplified instead of damped, and the eccentricity grows, as can be seen from our estimate $d\epsilon/dt\sim \lambda_1 (\delta\epsilon_0)^2/\epsilon$. This behavior is characteristic of what is commonly referred to as viscous overstabilities\footnote{Viscous instabilities are axisymmetric motions; viscous overstabilities involve an oscillatory response.}, and this argument was used by \cite{BGT85} to argue that such instabilities could equally well be responsible for the eccentricities of the narrow rings.
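The damped spiralling towards equilibrium can be verified by direct integration of the linearized pair (7.6)--(7.7). The sketch below uses arbitrary frequency units chosen so that $\Omega_{sg}\gg|\lambda_1|,|\lambda_2|$, as in real rings; all numerical values are assumptions:

```python
# Integrating Eqs. (7.6)-(7.7) and checking relaxation towards the equilibrium
# (7.16)-(7.17) when lambda_1 < 0.  Frequencies in arbitrary units (assumed).
Omega_sg, lam1, lam2 = 1.0, -0.05, 0.01
dOm_plan, eps = 0.2, 1.0
W = Omega_sg - lam2

def rhs(de, dm):
    """Right-hand sides of Eqs. (7.6) and (7.7): de = delta_eps, dm = delta(m Delta)."""
    return (W * eps * dm + lam1 * de,
            dOm_plan - W * de / eps + lam1 * dm)

de, dm, dt = 0.0, 0.0, 0.01        # start from an unperturbed (circular) state
for _ in range(40_000):            # midpoint (RK2) integration up to t = 400
    f1, g1 = rhs(de, dm)
    f2, g2 = rhs(de + 0.5 * dt * f1, dm + 0.5 * dt * g1)
    de, dm = de + dt * f2, dm + dt * g2

de_eq = eps * W * dOm_plan / (lam1**2 + W**2)   # Eq. (7.16)
dm_eq = -lam1 / W * de_eq / eps                 # Eq. (7.17)
print(de, de_eq, dm, dm_eq)
```

Flipping the sign of $\lambda_1$ in this sketch turns the damped spiral into a growing oscillation, i.e. the viscous overstability just mentioned.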
However, the outcome of the evolution of the eccentricity depends critically on the existence of the oscillations of $\delta\epsilon$ and $\delta(m\Delta)$, and on the necessary change of sign of $\lambda_1$ (see section 5), so that, in fact, an initially circular ring cannot reach eccentricities $\epsilon\gg\delta\epsilon$, at least in the framework of the two-streamline model for narrow rings \citep{LR95} (note that both effects were ignored in the analysis by \citealt{PL88}). It remains to be seen if the same conclusion holds for more general models of narrow rings. But the most important problem encountered by the self-gravity model arose after the Voyager II encounter with Uranus, which revealed that the mass of the rings deduced from Eqs.~(7.20) and (7.21) was underestimated by a factor of $\sim 10$. The problem is particularly acute for the $\alpha$ and $\beta$ rings \citep{GP87}. The first piece of evidence comes from the radio data, the analysis of which gives estimates of the ring surface densities. The other piece of evidence is connected to the existence of an unexpectedly extended hydrogen atmosphere around Uranus; this atmosphere is the source of an extra torque acting on the ring, which the shepherd satellites have to balance (in addition to the viscous torque) in order for the narrow rings to survive, but it turns out that this requirement also can only be satisfied for ring surface densities about ten times larger than the ones derived from Eqs.~(7.20) and (7.21) (for more details about this point, see \citealt{GP87}). On the other hand, the analysis of the radio data relies on standard radiative transfer theory, which, applied to the Uranian rings, is known to give inconsistent results, most probably because the mean separation between ring particles is not small compared to the radio wavelengths. Thus, the derivation of the ring surface densities may be in error, although this does not seem likely.
Similarly, the density of hydrogen in the atmosphere of Uranus at the ring location is extrapolated from the density at inner locations, but major errors in the extrapolation procedure seem rather unlikely. On the theoretical side, the dynamical agents which can enforce rigid precession are not very numerous: the precession rates due to the satellites and smooth pressure terms seem far too small, and the possibility put forward by \cite{GT79a} that the rigid precession might alternatively be enforced by shock-like phenomena appears to be inconsistent with the small observed apsidal shifts. In conclusion, the issue of the rigid precession of the narrow rings is still open at the time of writing. \subsection{Density waves at Lindblad resonances with external satellites} The two {\it Voyager} spacecraft have discovered tens of density (and bending) wavetrains in Saturn's rings, mainly in the A and B rings. These wavetrains share a number of striking characteristics. First, they are all associated with resonances with satellites, suggesting that density waves are stable in Saturn's rings (see below). Note that the reverse is not true, i.e. a number of the Lindblad resonances in Saturn's rings are not associated with density waves: some are associated with gaps, while others do not exhibit any peculiar behavior at all. The reason for this disparity is only partly understood (see below). For simplicity, and because of their practical importance, we will only consider density waves excited at inner Lindblad resonances with Saturn's satellites. These waves propagate from the resonance outwards (whereas bending waves propagate inwards). The wavelengths, of the order of a few kilometers or tens of kilometers, are much smaller than the mean radii of the waves (of the order of $10^5$ kilometers).
Because the wavelength is so short compared to the radii, the winding of the density variations schematically displayed on Figure~\ref{fig:ecc} is very high (much higher than represented) so that these waves appear as quasi-circular features on the {\it Voyager} images. Generally, the waves propagate over a few (or a few tens of) cycles before they are damped by the ring internal stress. The radial wavelength varies like the inverse of the distance to the resonance, so that it becomes shorter as the wave propagates outwards. For example, the Mimas 5:3 density wave radial optical depth profile is displayed on Figure~\ref{fig:Mimas53}. The form of the surface density Eqs.~(4.69) and (4.73) is able to reproduce this optical depth profile correctly at constant azimuth and time for an appropriate (and in fact uniquely determined) choice of the functions involved \citep{LB86}. The propagation of this wave is obviously nonlinear, i.e. the density contrast is large compared to the background density. Assuming $\tau\propto\sigma$, Eq.~(4.69) yields $\tau_{p}/\tau_{t}=(1+q)/(1-q)$, where $\tau_p$ and $\tau_t$ are the optical depths at the peaks and troughs of the wave. From the data of Figure~\ref{fig:Mimas53} one sees that $q\lesssim 1$; linear wave propagation would require $q\ll 1$. Note that by definition, the wavenumber $k$ is given by \begin{plain} $$k=m{d\Delta\over da}+{d\gamma\over da}.\eqno(7.22)$$ \end{plain} \noindent The surface density varies radially as $\sigma=\sigma_0/[1-q\cos(\int kda+cst)]$, so that $\int_{peak}^{peak}k da\simeq k\lambda=2\pi$, where $\lambda$ is the wavelength.
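Since $\sigma=\sigma_0/J$ with $J=1-q\cos(\ldots)$, peaks sit at $J=1-q$ and troughs at $J=1+q$, so a measured peak-to-trough optical depth ratio (assuming $\tau\propto\sigma$) gives $q$ directly. A small sketch, with an assumed illustrative contrast:

```python
# Recovering the nonlinearity parameter q from a peak-to-trough optical depth
# ratio r = tau_p/tau_t = (1 + q)/(1 - q), which follows from sigma = sigma0/J
# with J = 1 - q cos(...).
def q_from_contrast(tau_peak_over_trough):
    r = tau_peak_over_trough
    return (r - 1.0) / (r + 1.0)

# An assumed contrast of 3 already gives q = 0.5, far outside the linear
# regime q << 1.
print(q_from_contrast(3.0))
```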
\begin{figure}[th] \centering \includegraphics[width=0.7\linewidth]{./Figures/Mimas53.jpg} \caption{\small{The radial variation of optical depth of the Mimas 5:3 density wave as a function of the distance to resonance (reproduced from \citealt{SDLYC85}).}} \label{fig:Mimas53} \end{figure} The following discussion is self-contained, and no prior knowledge of the dynamics of density waves is required. However, some background on the usual Eulerian density wave linear theory at the introductory level of \cite{Sh84} is certainly helpful. The material of this section is mostly taken from the papers by \cite{SYL85,SDLYC85,BGT85,BGT86}. The analysis of density waves is performed in the framework of the WKBJ approximation (or tight-winding limit), which relies on the fact that the dominant contributions to the radial variations are due to the phase and not to the amplitude of the quantities under consideration. In this approximation, the dominant contribution to $J=\partial r/\partial a$ is due to the derivative of $\Delta$ and not to the derivative of $\epsilon$, so that \begin{plain} $$ma\epsilon{d\Delta\over da}\gg{d(a\epsilon)\over da},\eqno(7.23)$$ \end{plain} \noindent and, as a consequence, \begin{plain} $$\gamma\simeq{\pi\over 2}.\eqno(7.24)$$ \end{plain} \noindent The wavenumber reduces to $k=md\Delta/da$, and is related to $q$ and $\epsilon$ by\footnote{Note that $q$ is positive by definition, but that $k$ can be either positive or negative.} \begin{plain} $$q|\sin\gamma|=q=|k|a\epsilon.\eqno(7.25)$$ \end{plain} Note that for the wave of Figure~\ref{fig:Mimas53}, the wavelength is of the order of 10 km, so that $\epsilon\sim q/ka\sim 1/ka\sim 10^{-5}$ for $a\sim 10^5$ km, and the approximation $\epsilon\ll 1$ is remarkably good. It is also possible to check the validity of the WKBJ approximation from these data. The wave propagates over a radial distance $\Delta a_w\simeq 200$ km, so that $d(a\epsilon)/da\sim -a\epsilon/\Delta a_w$.
Therefore, \begin{plain} $$\left|{d(a\epsilon)/da\over ma\epsilon\, d\Delta/da}\right|\sim {a\epsilon/\Delta a_w\over ka\epsilon}={\lambda\over 2\pi\Delta a_w} \ll 1.\eqno(7.26)$$ \end{plain} \noindent A general condition of validity of the WKBJ approximation will be derived below. Our purpose is to describe how the wave is excited by the satellite and damped by the ring internal stress, and to estimate the exchanges of angular momentum between the wave and the satellite, which can be done in principle once a wave equation has been derived. However, let us first derive the wave dispersion relation, as quite a number of aspects of the wave propagation can be understood directly from it. \subsubsection{Dispersion relation} The dispersion relation is an equation relating the wave temporal frequency, here $\omega=m\Omega_p$, to its spatial frequency $k$. This relation is most directly obtained from the condition of existence of the wave, Eq.~(4.59), which states that the resonance condition must be satisfied throughout the wave region. Eq. (4.59) can be recast as: \begin{plain} $$\left(d\varpi\over dt\right)_{sg}+\left(d\varpi\over dt\right)_{vis}+ \left(d\varpi\over dt\right)_{sat}=\kappa-m(\Omega-\Omega_p).\eqno(7.27)$$ \end{plain} Let us evaluate the various terms on the left-hand side in the tight-winding limit. From Eq.
(6.14), one has \begin{plain} $$\left(d\varpi\over dt\right)_{sg}={2 n_a a^2\over M_p\epsilon} \int_0^{+\infty}da'{\sigma_0(a')H(q_{aa'}^2)q_{aa'}\cos\gamma_{aa'}\over a-a'}.\eqno(7.28)$$ \end{plain} \noindent Furthermore, we expect that the largest contribution to the self-gravity integral comes from regions with $|a-a'|\ll a$, so that we may take $m\Delta(a')=m\Delta+k(a'-a)$; in this approximation Eq.~(6.10) reads \begin{plain} $$q_{aa'}\exp(i\gamma_{aa'})=ika\epsilon\exp(-iu){\sin u\over u},\eqno(7.29)$$ \end{plain} \noindent where we have defined \begin{plain} $$u={k(a-a')\over 2}.\eqno(7.30)$$ \end{plain} Inserting this result in Eq.~(7.28) yields, treating $\sigma_0$ as a constant over the region of contribution of the integrand (with $n_a=(GM_p/a^3)^{1/2}$) \begin{plain} $$\left(d\varpi\over dt\right)_{sg}={\pi G \sigma_0 |k|\over n_a}C(q), \eqno(7.31)$$ \end{plain} \noindent where \begin{plain} $$C(q)={4\over \pi}\int_0^\infty du\ {\sin^2 u\over u^2}H\left(q^2{\sin^2 u\over u^2}\right).\eqno(7.32)$$ \end{plain} \noindent Note that $C(q)\rightarrow 1$ as $q\rightarrow 0$, i.e. for linear waves. In general, $C(q)$ is a factor of order unity. Note also that most of the contribution to this integral comes from regions with $u\lesssim 1$, i.e. with $|a-a'|/a\lesssim 1/(|k|a)\ll 1$, as expected. The contribution of the pressure term is computed from the results of section 6.3; the dominant term in the tight-winding approximation comes from the $\Delta^{\pm}(m\Delta)$ term with $\gamma_i\simeq \pi/2$, i.e.: \begin{plain} $$\left(d\varpi_i\over dt\right)_{vis}={2\pi\over n_a\epsilon_i M_i} \left[ t^i_2\Delta^{\pm} (m\Delta)\right],\eqno(7.33)$$ \end{plain} \noindent where $t_i^{jk}(q,\langle\tau\rangle)$ is evaluated at $q=q_{jk}$ [i.e.\ at $a=(a_j+a_k)/2$].
Taking the limit $\Delta a\rightarrow 0$ and using $q=a\epsilon k=a\epsilon\,d(m\Delta)/da$, one finally obtains \begin{plain} $$\left(d\varpi\over dt\right)_{vis}={k^2 t_2\over n_a \sigma_0 q}.\eqno(7.34)$$ \end{plain} \noindent Finally, let us argue that the contribution of the satellite can be neglected. From Eq.~(6.21), and from the tight-winding condition $|k|a\epsilon=q\sim 1$, one has \begin{plain} $$\left|{{\dot\varpi_{sat}}\over{\dot\varpi_{sg}}}\right|\lesssim {\left|\Psi_{mk}\right|\over 2\pi G\sigma_0 a}.\eqno(7.35)$$ \end{plain} \noindent Typically in Saturn's rings, this quantity is $\sim 0.1$ to $0.5$, so that neglecting the satellite contribution is not quite correct, but not too bad. Collecting these results, the nonlinear dispersion relation reads \begin{plain} $$n_a [\kappa-m(\Omega-\Omega_p)]-\pi G\sigma_0|k|C(q)-{k^2t_2\over \sigma_0 q}=0.\eqno(7.36)$$ \end{plain} \noindent The dispersion relation expresses the fact that the ring self-gravity and internal stress adjust the precession rate of the fluid particles in order to maintain the resonance relation, $\kappa=m(\Omega-\Omega_p)$, throughout the wave region. Note that by allowing negative values of $m$, this dispersion relation is valid for both inner ($m>0$) and outer ($m < 0$) Lindblad resonances. Although $q$ reaches values of order unity, it is always small enough that $C(q)\gtrsim 1$. It is therefore interesting to look into the linear limit of Eq.~(7.36). For simplicity, we will assume that the ring behaves as a Newtonian fluid obeying an isothermal equation of state $p=c_0^2\sigma$ where $c_0$ is the isothermal sound speed (of the order of the velocity dispersion). This approximation for the pressure tensor is rather crude, but we will see shortly that the viscous term can be neglected in the dispersion relation for the problem of interest here, so that the exact form of $t_2$ is not essential.
In this approximation, $t_2$ can be computed from Eqs.~(5.29) and (5.39), and reads \begin{plain} $$t_2=-\sigma_0 c_0^2{1-(1-q^2)^{1/2}\over q(1-q^2)^{1/2}},\eqno(7.37)$$ \end{plain} \noindent which reduces to $t_2=-\sigma_0 c_0^2 q/2$ in the linear limit (note that $2k^2t_2/\Omega\sigma_0 q\sim k^2 c_0^2$, so that the following results are relevant even if the isothermal equation of state does not apply). In this approximation, the dispersion relation reads \begin{plain} $$2 \Omega [\kappa-m(\Omega-\Omega_p)]-2\pi G\sigma_0 |k|+k^2c_0^2=0,\eqno(7.38)$$ \end{plain} \noindent which is the standard density wave dispersion relation, except for the first term on the left-hand side which should read $\kappa^2 -m^2(\Omega-\Omega_p)^2$ (for a simple fluid mechanical derivation, see \citealt{Sh84}). This difference is due to the approximations we have made in the preceding sections, where $\kappa\simeq\Omega\simeq n_a \simeq m(\Omega-\Omega_p)$ was assumed for a nearly Keplerian disk in the vicinity of an inner Lindblad resonance with a satellite, except in terms involving the difference between $\kappa$ and $m(\Omega-\Omega_p)$. It is interesting to rederive Toomre's stability criterion from the exact linear dispersion relation. Stability is ensured if the temporal frequency of the wave $\omega$ contains no imaginary part, i.e. if $(\omega-m\Omega)^2>0$, which implies $\kappa^2-2\pi G\sigma_0|k|+k^2 c_0^2>0$. This expression has a minimum for $|k|=\pi G\sigma_0/c_0^2$, and it is easy to check that this minimum is positive if \begin{plain} $$Q\equiv{c_0\kappa\over\pi G\sigma_0}>1.\eqno(7.39)$$ \end{plain} \noindent A very similar criterion was originally derived by \cite{T64} for the dynamics of spiral galaxies. Toomre's $Q$ parameter expresses the fact that the ring (or stellar) velocity dispersion can stabilize the medium against the spontaneous generation of density waves.
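The sign change of the minimum at $Q=1$ can be checked numerically from the quadratic form $\kappa^2-2\pi G\sigma_0|k|+k^2c_0^2$; the sketch below uses arbitrary, assumed units:

```python
# Numerical check of Toomre's criterion, Eq. (7.39): the minimum over k of
# kappa^2 - 2 pi G sigma0 |k| + k^2 c0^2 changes sign exactly at Q = 1.
import math

def min_over_k(kappa, G, sigma0, c0, kmax=100.0, nk=10000):
    # Brute-force scan of the quadratic over a grid of wavenumbers.
    ks = [kmax * (i + 1) / nk for i in range(nk)]
    return min(kappa**2 - 2.0 * math.pi * G * sigma0 * k + (k * c0)**2
               for k in ks)

G, sigma0, kappa = 1.0, 1.0, 10.0      # arbitrary units, assumed
mins = {}
for Q in (0.9, 1.1):
    c0 = Q * math.pi * G * sigma0 / kappa   # velocity dispersion giving this Q
    mins[Q] = min_over_k(kappa, G, sigma0, c0)
print(mins)   # negative (unstable) for Q < 1, positive (stable) for Q > 1
```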
In rings, $Q\sim 1$ but the exact value is not very well known due to the uncertainty in the magnitude of the velocity dispersion. \cite{Sh84} has pointed out that the ring finite thickness and the existence of a finite size of particles can also have a stabilizing effect, and the associated criteria are closely related to Eq.~(7.39). Indeed, as the ring thickness $H\sim c_0/\kappa$, Eq.~(7.39) can be recast as $H>\pi G\sigma_0/\kappa^2$ (but this argument is {\it at most} dimensional: the ring can have a finite thickness even in the absence of velocity dispersion, due to the finite particle size); the criterion on the particle size is the same (note that the size of the largest particles of the distribution in Saturn's rings is comparable to the ring thickness). Presumably, the rings are stable due to all these effects, although we are still unable to decide on the basis of the available data if the stability criteria are satisfied. An interesting consequence can be derived from the linear dispersion relation, Eq.~(7.38). Solving for $|k|$, one obtains \begin{plain} $$|k|={\pi G\sigma_0\over c_0^2}\pm\left[\left(\pi G\sigma_0\over c_0^2 \right)^2-{2\Omega[\kappa-m(\Omega-\Omega_p)]\over c_0^2}\right]^{1/2}.\eqno(7.40)$$ \end{plain} \noindent The $+$ sign corresponds to the short waves and the $-$ sign to the long ones. Note that the loci of constant surface density (for example the wave crests) correspond to constant values of $m(\theta-\Omega_p t)+ \int kda$, so that the isodensity curves $r(\theta)$ are solutions of the differential equation $k(dr/d\theta)+m=0$. Consider inner Lindblad resonances for definiteness ($m>0$); thus, for $k>0$, $dr/d\theta<0$ and the wave is trailing (it curves in the counter-rotation direction), while the opposite is true for $k<0$ and the wave is leading. A similar conclusion holds at outer resonances ($k<0$ corresponds to trailing waves there).
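The behavior of the two branches of Eq.~(7.40) near the resonance can be illustrated numerically; writing $D=2\Omega[\kappa-m(\Omega-\Omega_p)]$ for the distance-to-resonance term, and using arbitrary, assumed units:

```python
# The two wavenumber branches of Eq. (7.40): the long branch (- sign) vanishes
# at the resonance and grows linearly with D, while the short branch stays of
# order 2 pi G sigma0 / c0^2.  Arbitrary units, assumed values.
import math

def k_branches(D, G, sigma0, c0):
    A = math.pi * G * sigma0 / c0**2
    disc = math.sqrt(A * A - D / c0**2)
    return A - disc, A + disc          # (long wave, short wave)

G, sigma0, c0 = 1.0, 1.0, 0.1
k_long0, k_short0 = k_branches(0.0, G, sigma0, c0)   # at the resonance
k_long1, _ = k_branches(1.0, G, sigma0, c0)          # slightly outside
print(k_long0, k_short0, k_long1)
```

For small $D$ the long branch reduces to $|k|\simeq D/(2\pi G\sigma_0)$, independent of $c_0$: pressure is negligible for long waves near the resonance.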
The short waves have a very high wavenumber at resonance and therefore oscillate quite rapidly, preventing an efficient coupling with the satellite potential, which varies smoothly with radius at the resonance. As a consequence, the satellite is not expected to excite short waves\footnote{A more formal argument can be found in \cite{GT79c}, pp. 861 and 862.}; on the contrary, as long waves have $|k|\rightarrow 0$ at the resonance, they are expected to couple much more efficiently with the satellite. On the basis of these arguments, the short waves are ignored in the remainder of this section. In the region of propagation, $|k|>0$, which implies $\kappa-m(\Omega-\Omega_p)>0$. In other words, denoting by $a_R$ the resonance radius implicitly defined by the resonance relation $\kappa=m(\Omega-\Omega_p)$, the long trailing waves propagate outside the ILR (inner Lindblad resonance) and are evanescent inside, whereas the long trailing waves propagate inside the OLR (outer Lindblad resonance) and are evanescent outside\footnote{Furthermore, long trailing waves propagating from Lindblad resonances are reflected at the corotation radius as short leading waves (\citealt{GT78b}, and \citealt{LL79}). This process is not relevant to planetary rings, because the waves damp well before they reach the corotation radius.}. This is consistent with the directions of propagation derived from the wave group velocity. \cite{T69} and \cite{D72} have shown in the context of the linear density wave theory of spiral galaxies that the group velocity of the waves is given by\footnote{The demonstration of this seemingly natural result would take us too far afield to be repeated here.
The most elegant derivation is based on the use of a phase-averaged Lagrangian density of the wave, as in \cite{D72}, which shows at the same time that the conserved quantity associated with the invariance under rotation is (not surprisingly) the phase averaged angular momentum density, and that this quantity is transported radially at the group velocity of the wave. For a general introduction to the use of averaged Lagrangian densities in wave theory, see \cite{W74}, chapters 11 and 14.} $c_g=\partial\omega/\partial k$, where $\omega= m\Omega_p$. Consequently, long leading waves are not expected to be excited at the Lindblad resonance by the satellite, as they would enter a region of evanescence (see also \citealt{GT79c} and \citealt{Sh84}). Therefore, only long trailing waves are discussed in the remainder of these notes; $|m|\neq 1$ is also assumed for definiteness\footnote{Extending the following results to $|m|=1$ is straightforward but requires maintaining the $\mathcal{O}(J_2)$ difference between $\kappa$ and $\Omega$.}. In the vicinity of the resonance, $\kappa-m(\Omega-\Omega_p)\simeq [3(m-1)\Omega_R/2](a-a_R)/a_R$, so that $(\pi G\sigma_0/c_0^2)^2\gg 2\Omega[\kappa-m(\Omega-\Omega_p)]/c_0^2$ occurs for \begin{plain} $${|a-a_R|\over a_R}\ll{\pi^2 G^2\sigma_0^2\over 3c_0^2|m-1|\Omega_R^2} ={1\over 3Q^2|m-1|}\sim 1,\eqno(7.41)$$ \end{plain} \noindent a condition that is amply satisfied. Therefore, keeping the leading order term in a Taylor expansion of Eq.~(7.40) for long waves yields $2\pi G\sigma_0|k|= 2\Omega[\kappa-m(\Omega-\Omega_p)]$, i.e., the pressure term can be neglected for long waves near a Lindblad resonance.
The same feature is obviously true of the nonlinear dispersion relation, as $C(q)\gtrsim 1$, so that Eq.~(7.36) reduces to ($m\neq 1$) \begin{plain} $$|k| C(q)={3\over 2\pi}(m-1)\left(M_p\over\sigma_0 a_R^2\right) \left(a-a_R\over a_R\right){1\over a_R}.\eqno(7.42)$$ \end{plain} \noindent As the wavelength $\lambda\simeq 2\pi/|k|$, this relation predicts that $\lambda\propto 1/(a-a_R)$, as observed. As the coefficient of proportionality depends on $\sigma_0$, the dispersion relation has been used in the analysis of density wave profiles to estimate the ring surface density. Notice finally that $|k|\rightarrow 0$ as $a\rightarrow a_R$, so that the tight-winding condition $|ka|\gg 1$, which was used in the derivation of the dispersion relation, breaks down too close to the resonance. To estimate quantitatively the region of validity of the tight-winding condition, let us first compute the value $\lambda_1$ of the first wavelength from Eq.~(7.42), and from the constraint $\int_{a_R}^{a_R+\lambda_1}|k|da=2\pi$, taking $C(q)=1$. One obtains \begin{plain} $${\lambda_1^2\over a_R^2}={8\pi^2\over 3|m-1|}\left(\sigma_0 a_R^2\over M_p\right).\eqno(7.43)$$ \end{plain} \noindent It is customary to introduce a small parameter\footnote{Usually, this small parameter is denoted by $\epsilon$, but we have changed notation in order to prevent confusion with the epicyclic eccentricity.} $\delta$, defined by \begin{plain} $$\delta\equiv {2\pi\sigma_0 a_R^2\over 3(m-1) M_p}\sim {M_{ring}\over M_p}.\eqno(7.44)$$ \end{plain} \noindent This parameter is typically $\sim 10^{-8}$. In terms of $\delta$, one has $\lambda_1/a_R=(4\pi|\delta|)^{1/2}\sim 10^{-4}$, which implies $\lambda_1\sim 10$ km, as observed.
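A one-line numerical check of this estimate, using the typical order of magnitude of $\delta$ quoted above (the values of $a_R$ and $\delta$ below are assumed, representative numbers):

```python
# First wavelength of a long trailing wave from lambda_1/a_R = (4 pi |delta|)^(1/2),
# Eqs. (7.43)-(7.44).  Assumed, representative values for Saturn's rings.
import math

a_R = 1.0e5        # km, typical resonance radius
delta = 1.0e-8     # ~ M_ring / M_p, assumed
lam_first = a_R * math.sqrt(4.0 * math.pi * delta)
print(lam_first)   # of order 10 km, consistent with the observed wavelengths
```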
Note that for disks in general, the tight-winding condition, $|k|a\gg 1$, or equivalently, $\lambda_1/a\ll 1$, is valid as long as $(M_{disk}/M_*)^{1/2}\ll 1$, where $M_*$ is the mass of the central object (star, black hole...), a condition which is not very well satisfied in spiral galaxies. Now, from the dispersion relation and from the expression of $\lambda_1$, the constraint $|ka|\gg 1$ reads \begin{plain} $$4\pi \left(a_R\over\lambda_1\right)\left(|a-a_R|\over\lambda_1\right)\gg 1,\eqno(7.45)$$ \end{plain} \noindent or equivalently $|a-a_R|/\lambda_1\gg 10^{-4}$, so that the tight-winding condition is in fact satisfied very close to the resonance, well inside the first wavelength. Neglecting the satellite contribution, on the other hand, is not as good an approximation: we will see in the next subsection that in the linear limit, the satellite contribution is negligible only outside the first wavelength. This shows however that the satellite is not directly responsible for the existence of the wave throughout the propagation region: for most of its radial extent, the wave propagates essentially as a free wave due to the ring self-gravity. \subsubsection{Forced amplitude and wave damping} We are interested in this subsection in the description of stationary density waves, i.e. waves for which $a,\epsilon$ and $\Delta$ do not vary with time. Therefore, the wave is completely described by the knowledge of $\epsilon(a)$ and $\Delta(a)$. Following \cite{SYL85}, we achieve this purpose by deriving an equation for the quantity\footnote{This quantity is in fact the opposite of the complex conjugate of theirs, unscaled.} \begin{plain} $$Z=\epsilon\exp im\Delta.\eqno(7.46)$$ \end{plain} With this definition, $q_{ij}\exp \gamma_{ij}=(a_i Z_i -a_j Z_j)\exp(-im\Delta_i)/(a_i-a_j)$, which suggests that the desired equation can be obtained from the computation of $(\epsilon{\dot\varpi} +i{\dot\epsilon})\exp im\Delta$.
This procedure yields\footnote{The contributions from the stress tensor are given in the tight-winding approximation, for simplicity.} \begin{plain} $${2\over\pi}\int_{-\infty}^{+\infty}dx'\ H(q^2_{xx'}){Z(x)-Z(x')\over (x-x')^2} + {a_R k^2(t_2+it_1)\over\pi G\sigma_0^2 q}Z-{Zx\over\delta}= -{\Psi_{mk}\over2\pi G\sigma_0 a_R},\eqno(7.47)$$ \end{plain} \noindent where $\delta$ is the small parameter introduced in Eq.~(7.44) and where $x\equiv (a-a_R)/a_R$ has been used instead of $a$ as a distance parameter; $|x|$ measures the fractional distance to the resonance. The first term on the left-hand side represents the contribution of the ring self-gravity, the second is due to the ring internal stress, the last reflects the precession rate required by the existence of the wave pattern, and the term on the right-hand side is the satellite forcing (cf Eq.\ 6.22, with both signs of $m$ allowed). This last term is a dimensionless quantity which ranges from $\sim 0.1$ to $\sim 0.5$ for the strongest resonances in Saturn's rings. Remember that it is of order $e_s^{|k|}$, where $k$ is an integer (see section 4.3.2). All smoothly varying quantities have been pulled out of the self-gravity integral. Eq.~(7.47) is the starting point of the analyses of \cite{SYL85} and \cite{SDLYC85}. This equation has no analytic solution in the nonlinear case, but quite a number of its features can be understood in the linear limit, which we are therefore going to discuss next, following the solution developed by \cite{SYL85}. In this limit, $q\rightarrow 0$ and one can take $H(q^2)=1/2$. Furthermore, we will ignore the viscous terms for the time being. Neglecting $t_2$ is equivalent to neglecting the short waves, a simplification which was justified in the previous subsection. Neglecting $t_1$ results in the suppression of the wave damping, but we will return to this question shortly.
With these approximations, Eq.~(7.47) reduces to \begin{plain} $${1\over\pi}\int_{-\infty}^{+\infty}dx'{dZ/dx'\over x-x'}+{Zx\over\delta}=\ff,\eqno(7.48)$$ \end{plain} \noindent where the self-gravity integral has been integrated by parts. The remaining integral can be computed with the residue theorem\footnote{Because the wave complex amplitude $Z$ contains an exponential phase term, there is \textit{no guarantee} that one can find a contour at infinity where the contribution vanishes, so that the resulting differential equation is not always valid (it would actually be remarkable that the self-gravity integral equation can be exactly replaced by a differential equation, even in the linear limit). In fact, for free waves, Eq.~(7.49) is clearly wrong as it implies the absence of evanescence inside the resonance radius; however one can argue that the sign of $k$ should be present as a factor for the first term (see \citealt{Sh84}, pp.\ 540 -- 541, as well as the Appendix). In any case, it is shown in the Appendix that this equation is a rather accurate approximation for forced density waves throughout the disturbed region, and a very precise one as soon as $|x| \gtrsim (2|\delta|)^{1/2}/3$, i.e., nearly everywhere except in a rather limited band around the resonance radius. It is probably worth pointing out that the standard fluid treatment of linear density waves (e.g. \citealt{Sh84}), suffers from a similar mild limitation. 
In this fluid approach, the problem stems from the relation between the surface density and self-gravitational potential; the standard solution based on analytic continuation \citep{Sh70} and used in most linear analyses of density waves performed in the 70s and 80s, is only valid in the tight-winding approximation and does not apply in the evanescent region.}, and one finally obtains \begin{plain} $$i{dZ\over dx}+{Zx\over\delta}=\ff,\eqno(7.49)$$ \end{plain} \noindent which is the usual equation of linear density wave theory [see \citealt{Sh84}, Eq.~(44)]. One can encompass both ILR and OLR solutions by introducing a new scaled radial coordinate $\xi= s x/|\delta|^{1/2}$ where $s=\mathrm{sgn}(\delta)$. From the dispersion relation, we expect that the forced (i.e.\ particular) solution to this equation is a long wave, at least in the far wave region. Therefore, we wish to find the particular solution which is evanescent for $\xi\rightarrow -\infty$. As $\exp(is\xi^2/2)$ is an integrating factor, the desired solution is found to be\footnote{This result also follows from an elementary variation of constants technique.} \begin{plain} $$Z=-i s \ff\exp\left(i s{\xi^2\over 2}\right)\int_{-\infty}^\xi dy\ \exp\left(-i s{y^2\over 2}\right).\eqno(7.50)$$ \end{plain} This solution is depicted in Figure~\ref{fig:torque} (see Appendix~\ref{app:dw}), but it has two interesting asymptotic expansions \citep{GT79c,Sh84} that capture most of its behavior. First, in the limit $\xi\ll -1$ (in the far evanescent region), replacing $\exp(-isy^2/2)$ by $(is/y) d\exp(-isy^2/2)/dy$ and integrating by parts yields \begin{plain} $$Z\simeq{\delta\over x}\ff.\eqno(7.51)$$ \end{plain} \noindent On the other hand, for $\xi\gg 1$ (i.e.
into the propagation region), using $\int_{-\infty}^\xi=\int_{-\infty}^{+\infty}- \int_\xi^{+\infty}$, and performing a similar integration by parts, one obtains \begin{plain} $$Z\simeq\ff (2\pi|\delta|)^{1/2}\exp\left(i{x^2\over 2\delta}-i s {3\pi\over 4}\right)+{\delta\over x}\ff,\eqno(7.52)$$ \end{plain} \noindent where $\int_{-\infty}^{+\infty}dx\exp (-ix^2/2\delta)= (2\pi|\delta|)^{1/2}\exp(-i s\pi/4)$ has been used. The last term represents the non-wavy part of the response, and is negligible. From this last result we can derive the expression of the eccentricity along the wave \begin{plain} $$\epsilon={\left|\Psi_{mk}\right|\over 2\pi G\sigma_0 a_R} (2\pi|\delta|)^{1/2}\sim 10^{-4},\eqno(7.53)$$ \end{plain} \noindent which is seen to be independent of $a$; the wavenumber $k= md\Delta/da$ reads \begin{plain} $$k={x\over a_R\delta}={3(m-1)M_p\over 2\pi\sigma_0 a_R^3}{a-a_R\over a_R},\eqno(7.54)$$ \end{plain} \noindent which, by comparison with Eq.~(7.42), is seen to correspond to a long trailing wave, as expected. Finally, from $q=|k|a\epsilon$, one sees that $q\propto |x|$, so that the wave becomes nonlinear for \begin{plain} $$|x|=x_{nl}\equiv{2\pi G\sigma_0 a_R\over \left|\Psi_{mk}\right|} \left(|\delta|\over 2\pi\right)^{1/2}\sim {\lambda_1\over a_R}.\eqno(7.55)$$ \end{plain} \noindent Therefore, density waves become nonlinear in the first wavelength or so, as observed. It is to be noted that although the linear theory predicts streamline crossing within a wavelength, the nonlinear contributions of the self-gravity integral prevent this from happening. Let us now investigate the effects of the neglected viscous terms, and reintroduce them in Eq.~(7.49).
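The linear solution Eq.~(7.50) and its two asymptotic limits, Eqs.~(7.51) and (7.52), can be checked numerically. The sketch below is an illustration, not part of the original derivation: it works in the scaled variable $\xi$ with $s=1$, $\delta=1$ and unit forcing, so that the predicted far-field amplitude is $(2\pi)^{1/2}$ and the evanescent tail decays as $1/|\xi|$; the integral in Eq.~(7.50) is expressed through the standard Fresnel functions (the use of scipy here is an implementation choice):

```python
import numpy as np
from scipy.special import fresnel

def Z_amplitude(xi):
    """|Z(xi)| for s = 1, delta = 1, unit forcing:
    |int_{-inf}^{xi} exp(-i y^2 / 2) dy|."""
    # Substituting y = sqrt(pi) t gives
    #   int_0^X exp(-i y^2/2) dy = sqrt(pi) [C(X/sqrt(pi)) - i S(X/sqrt(pi))],
    # with S, C the standard Fresnel integrals (scipy returns (S, C)).
    S, C = fresnel(xi / np.sqrt(np.pi))
    tail = np.sqrt(np.pi) * (0.5 - 0.5j)   # int_{-inf}^{0} exp(-i y^2/2) dy
    return abs(tail + np.sqrt(np.pi) * (C - 1j * S))

# Far into the propagation region (xi >> 1): |Z| -> (2 pi)^(1/2), Eq. (7.52)
# Far into the evanescent region (xi << -1): |Z| ~ 1/|xi|, Eq. (7.51)
print(Z_amplitude(8.0), np.sqrt(2.0 * np.pi))
print(Z_amplitude(-8.0), 1.0 / 8.0)
```

Evaluated at $\xi=\pm 8$, the modulus indeed oscillates around $(2\pi)^{1/2}\simeq 2.51$ on the propagation side, and is close to $1/8$ on the evanescent side.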
Using the fact that in the propagation region, $|dZ/dx|=|ika_R Z|\gg |\Psi_{mk}|/2\pi G\sigma_0 a_R$, this equation reads \begin{plain} $$-ka_R Z+{Zx\over\delta}-{a_R k^2\over\pi G\sigma_0^2q}(t_2+it_1)Z=0.\eqno(7.56)$$ \end{plain} \noindent As $t_i\sim \sigma_0 c^2 q$ for small $q$, the ratio of the last to the first term is of order $Q^2(m-1)x\ll 1$, where the velocity dispersion has been expressed in terms of Toomre's $Q$ parameter. The viscous terms are indeed very small and balancing the first two terms gives back the dispersion relation for long trailing waves. However, while the contribution of $t_2$ is small and brings no qualitatively new information, the contribution of $t_1$ is qualitatively different because it is purely imaginary, indicating that $k$ must contain an imaginary part: $k=k_r+ik_i$, where $k_r$ is given by Eq.~(7.54) and \begin{plain} $$k_ia_R=-{a_Rk_r^2 t_1\over\pi G\sigma_0^2 q}.\eqno(7.57)$$ \end{plain} \noindent Due to this new contribution, $Z$ is now given by \begin{plain} $$Z=Z_{nv}\exp\left(-\int k_ia_Rdy\right),\eqno(7.58)$$ \end{plain} \noindent in the far wave zone, where $Z_{nv}$ is the inviscid value of $Z$, Eq.~(7.52). As a consequence, $\epsilon$ now reads \begin{plain} $$\epsilon=\epsilon_{nv}\exp\left(-\int_0^x k_ia_Rdy\right), \eqno(7.59)$$ \end{plain} \noindent where $\epsilon_{nv}$ is the inviscid value of $\epsilon$, Eq.~(7.53). This shows that the wave is damped if $t_1<0$, an assumption which is made in the remainder of these notes\footnote{One sees also that the wave can be viscously unstable if the viscous coefficient is positive. However, one could expect, as for narrow rings, that oscillations of the eccentricity and phase shift would also be unstable, and possibly prevent the growth of the wave. Such oscillations are excluded by assumption in the present analysis. As the question is still open, we will not push it further in these notes.
An analysis of the behavior of nonlinear density waves when the viscous coefficient is positive has been performed by \cite{BGT86}, but these results are incomplete, as the possibility of oscillations of the eccentricity and phase of the wave has been excluded at the onset of their analysis.}. An estimate of the damping length scale $x_{vis}$ can be obtained by setting $\int_0^{|x|} k_ia_Rdy=1$, which yields $Q^2|m-1||x_{vis}|^3/(2|\delta|)=1$, i.e. $|x_{vis}|\simeq |\delta|^{1/3}$: this implies that density waves can propagate over about ten cycles, as observed. In their more quantitative analysis of wave damping, \cite{SDLYC85} have pointed out that the results displayed on Figure~\ref{fig:Shu} play a central r\^ole. Indeed, as $q$ increases with $x$, if the unperturbed optical depth $\tau_0$ is smaller than some critical value, the coefficient of restitution $\epsilon_r\rightarrow 0$, and $c$ diverges (remember that for all known materials $\epsilon_r$ is a decreasing function of $c$): as $t_1\propto c^2$, this implies short damping length scales. On the other hand, if the unperturbed optical depth is larger than this critical value, $\epsilon_r$ depends much less on $q$, and the wave could propagate much farther. These authors have attributed to this effect the observed fact that density waves propagate much farther in the B ring than in the A ring of Saturn. However, they have also argued that in order to reproduce the observed short damping length scale of the A ring density waves, rather elastic materials were needed, which would rule out ``dynamical ephemeral bodies'' (DEBs) as a likely model candidate for ring particles. Unfortunately, DEBs are the most natural outcome of the ring collisional processes \cite{WCDG84,L89}. Either this particle model is incorrect, or other unmodelled processes are at work, such as the scattering of the wave by large particles, as has been suggested at least by \cite{SDLYC85} and by Peter Goldreich (private communication).
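The damping-length estimate can be turned into a number of visible wave cycles. In the sketch below (the values of $Q$, $m$ and $\delta$ are illustrative assumptions), the number of cycles between the resonance and $x_{vis}$ follows from integrating the wavenumber of Eq.~(7.54): $\int_0^{x} k_r a_R\,dy/2\pi = x^2/4\pi\delta$:

```python
import math

# Hypothetical illustrative values (assumptions of this example)
Q     = 2.0      # Toomre parameter
m     = 7        # azimuthal wavenumber
delta = 4e-9     # small parameter of Eq. (7.44)

# Damping scale from Q^2 |m-1| |x_vis|^3 / (2 |delta|) = 1
x_vis = (2.0 * delta / (Q**2 * abs(m - 1))) ** (1.0 / 3.0)

# Number of cycles before damping: int_0^x k_r a_R dy / (2 pi) = x^2 / (4 pi delta),
# using the long trailing wave of Eq. (7.54), k_r a_R = x / delta
n_cycles = x_vis**2 / (4.0 * math.pi * delta)

print(f"x_vis ~ {x_vis:.1e}, n_cycles ~ {n_cycles:.0f}")  # of order ten cycles
```

With these numbers the damping scale is a fraction $\sim 10^{-3}$ of the resonance radius and the wave survives for roughly ten cycles, as stated above.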
At the time of writing, this issue is still unresolved. \subsubsection{Satellite torque and angular momentum transport} The question of angular momentum exchanges between the rings and the satellites has been widely debated for the past ten years, for reasons that will now be explained. First, note that for a steady-state wave propagating in an inviscid medium, Eq.~(6.49) reduces to \begin{plain} $$L_H^{sg}=\int_0^a da{\cal T}_s.\eqno(7.60)$$ \end{plain} \noindent Thus, for undamped density waves, all the angular momentum deposited by the satellite is carried away by the wave\footnote{This result is a direct consequence of the conservation of the wave action; see \cite{D72}.}. It is interesting to compute $L_H^{sg}$ in the tight-winding approximation. From Eqs.~(6.39) and (7.29), and assuming $k>0$, one obtains \begin{plain} $$L_H^{sg}=-4\pi Gm\sigma_0^2 a_R^3\epsilon^2\int_{-\infty}^v dv_1 \int_{v-v_1}^{+\infty}du\ H\left(q^2{\sin^2u\over u^2}\right){\sin 2u\over u^2}.\eqno(7.61)$$ \end{plain} \noindent Interchanging the order of the integrals and performing the integration over $v_1$ yields \begin{plain} $$L_H^{sg}=-\pi^2 Gm\sigma_0^2 a_R^3\epsilon^2 B(q),\eqno(7.62)$$ \end{plain} \noindent where $B(q)$ is defined by \begin{plain} $$B(q)={4\over\pi}\int_0^{+\infty}du\ H\left(q^2{\sin^2 u\over u^2}\right){\sin 2u\over u}.\eqno(7.63)$$ \end{plain} \noindent Note that $B(q)\rightarrow 1$ as $q\rightarrow 0$. Adopting this asymptotic form of the self-gravity angular momentum luminosity is similar to the approximation performed to obtain Eq.~(7.49) from Eq.~(7.48). Notice also that Eq.~(6.39) correctly predicts $L_H^{sg}$ to order $\epsilon^2$, although the basic streamline parametrization on which the whole analysis relies is valid only to order $\epsilon$. Note finally that for $k<0$ (leading wave) the luminosity Eq.~(7.62) changes sign.
Let us now rederive the expression of the torque exerted on a wave excited at an inner Lindblad resonance in the limit of linear undamped waves \citep{GT78c,GT79c}. For definiteness we focus on an inner Lindblad resonance ($s=1$); the torque has the same magnitude but opposite sign at an outer resonance. From Eq.~(6.38), (7.46) and (7.50), one finds \begin{plain} $$\eqalign{T_s=\int_0^{+\infty}da\ {\cal T}_s=- & {m\Psi_{mk}^2 a_R\over 2G}\times \cr & {\rm Im}\left[i\int_{-\infty}^{+\infty}dx\ \exp\left(i{x^2\over 2\delta}\right)\int_{-\infty}^x dy\ \exp\left(-i{y^2\over 2\delta}\right)\right],}\eqno(7.64)$$ \end{plain} \noindent where Im($z$) designates the imaginary part of a complex number $z$. Following \cite{SYL85}, let us call $N$ the complex number whose imaginary part is to be evaluated in Eq. (7.64) and compute the complex conjugate $N^*$. Interchanging the order of the integrals in $N^*$ yields \begin{plain} $$N^*=- i\int_{-\infty}^{+\infty}dx\ \exp\left(i{x^2\over 2\delta}\right)\int_x^{+\infty} dy\ \exp\left(-i{y^2\over 2\delta}\right),\eqno(7.65)$$ \end{plain} \noindent so that \begin{plain} $${\rm Im} N= {N-N^*\over 2i}={1\over 2}\int_{-\infty}^{+\infty}dx \int_{-\infty}^{+\infty} dy\ \exp\left(i{x^2-y^2\over 2\delta}\right)=\pi\delta,\eqno(7.66)$$ \end{plain} \noindent Plugging this result into Eq. (7.64) gives back the standard expression of the linear torque\footnote{Because the linear equation is not exact, as discussed in an earlier footnote, one may at first glance question the validity and relevance of this result. However, it is well-known that the linear torque is independent of the details of the physics of disks; the same linear torque obtains in disks without self-gravity but pressure and/or damping instead (\citealt{MVS87} have produced what is probably the most generic justification of this result). 
As a consequence, the integrated linear torque is correct, but of course the details of the torque density in the near-resonance region are not, and this is where most of the torque is deposited.} \begin{plain} $$T_s=-{\pi^2m\Psi_{mk}^2\sigma_0\over {\cal D}},\eqno(7.66)$$ \end{plain} \noindent where \begin{plain} $${\cal D}\equiv\left(a{d\over da}\left[\kappa^2-m^2(\Omega-\Omega_p)^2\right]\right)_{a_R}={3(m-1)GM_p \over a_R^3}.\eqno(7.67)$$ \end{plain} \noindent The same expression is obtained from Eqs.~(7.53) and (7.62), as expected from Eq.~(7.60). It is also instructive to see how the torque accumulates with radius; to this effect, the cumulative integrand $N(x)$ of the imaginary quantity in Eq.~(7.64) is illustrated on Figure~\ref{fig:torque} (see Appendix~\ref{app:dw}). Actual waves are affected by nonlinear effects and by viscous damping. The effects of the ring viscosity (or more correctly, internal stress) are not expected to be very important, as the damping occurs in the far wave region, whereas a numerical evaluation of $\int_{-\infty}^x{\cal T}_s da$ as a function of $x$ shows that most of the torque is deposited within a few wavelengths of the resonance\footnote{This conclusion is robust although the details of the torque deposition implied by Eq.~(7.49) are not.}. \cite{SDLYC85} argue that nonlinear effects do not reduce the torque much below its linear value, but their conclusion is dependent on the details of their model equation; however, a similar conclusion was reached by \cite{LB86} from a nonlinear kinematic analysis of the Mimas 5:3 density wave\footnote{This analysis also suffers from some limitations, but this conclusion is most probably robust. It is fair to say, though, that the nonlinear torque magnitude is still somewhat uncertain to this date.}. Therefore, Eq.~(7.66) should give a correct estimate of the angular momentum exchange between rings and satellite at Lindblad resonances.
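The central step ${\rm Im}\,N=\pi\delta$ behind the linear torque can be checked by brute force. Since the double integral factorizes, ${\rm Im}\,N={1\over 2}\big|\int e^{ix^2/2\delta}dx\big|^2$; the sketch below (an illustration only) works in units where $\delta=1$ and evaluates the Fresnel-type integral by trapezoidal quadrature, truncated at $|x|=100$, which limits the accuracy to a few percent:

```python
import numpy as np

# Im N = (1/2) |F|^2 with F = int exp(i x^2 / 2) dx  (delta = 1),
# because the two factors of the double integral are complex conjugates.
# The exact answer is F = sqrt(2 pi) exp(i pi/4), hence Im N = pi.
L, n = 100.0, 2_000_001
x = np.linspace(-L, L, n)
f = np.exp(0.5j * x**2)
h = x[1] - x[0]
F = h * (f.sum() - 0.5 * (f[0] + f[-1]))   # trapezoidal rule
ImN = 0.5 * abs(F) ** 2

print(ImN, np.pi)   # close to pi
```

The truncated quadrature reproduces $\pi$ to within the accuracy allowed by the oscillatory tails, confirming the residue computation above.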
As the wave is damped, the angular momentum flux due to the ring self-gravity falls below the value implied by Eq.~(7.60). The variation of $L_H^{sg}$ with $a$ can be related to the viscous coefficient $t_1$ in the following way, slightly adapted from \cite{BGT85}. From Eq.~(6.47), we can eliminate $da/dt$ in Eq. (6.32) in favor of $L_H^c$. This yields \begin{plain} $$2\pi\Omega a^3{\partial \sigma_0\over\partial t}+{\partial L_H^c\over \partial a}-{L_H^c\over 2a}=0.\eqno(7.68)$$ \end{plain} \noindent Furthermore, from $L_E\equiv L_E^c+L_E^{sg}+L_E^{vis}$ (with a similar definition for $L_H^{sg}$), and from $L_E^{sg}=\Omega_pL_H^{sg}$ and Eqs.~(6.44) through (6.47), one obtains \begin{plain} $$L_E=\Omega L_H+(\Omega_p-\Omega)L_H^{sg}-{3\over 2}\Omega L_H^c+ 2\pi a^2\Omega\epsilon[t_1\cos\gamma+t_2\sin\gamma].\eqno(7.69)$$ \end{plain} \noindent Computing $\partial L_H^{sg}/\partial a$ from there, and using Eqs.~(6.48) and (6.49) finally yields \begin{plain} $${\partial\over\partial a}(L_H^{sg}-2\pi a^2\epsilon m[t_2\sin\gamma+t_1\cos\gamma])=-2\pi m a q t_1+{\cal T}_s,\eqno(7.70)$$ \end{plain} \noindent which is the fundamental equation describing wave damping\footnote{From the linear theory of density waves, it is known that the wave amplitude is fixed either from a second order WKB analysis (in the wave propagation region) or from the equation for the wave action, i.e., from the equation of angular luminosity conservation (see, e.g., \citealt{D72}). The same consideration follows in the nonlinear regime, and in the wave propagation region, where the tight-winding condition applies, the satellite contribution to the amplitude in Eq.~(7.47) is negligible; similarly, in the same approximation, the self-gravity contribution to $de/dt$ cancels, and one needs an alternative way to constrain the wave amplitude. 
This is provided by the wave-damping equation, supplemented by the nonlinear dispersion relation and the conservation of the total (viscous and self-gravitational) angular momentum luminosity once the torque is totally deposited, which provide enough constraints to determine the wavenumber $k$, the nonlinearity parameter $q$ and the surface density $\sigma_0$ (see \citealt{BGT86} for details; the only uncertainty in this analysis, as pointed out earlier, is the magnitude of the nonlinear torque, although it should emerge self-consistently if one starts from the exact equations and avoids the tight-winding approximation).}. In the tight-winding limit ($\gamma=\pi/2$), $t_1$ disappears from the left-hand side. The contribution of the remaining $t_2$ term is usually small in planetary rings, but \cite{SDLYC85} have pointed out that it might generate a phenomenon referred to as a ``Q barrier" in the linear theory of density waves in spiral galaxies (see, e.g., \citealt{T69}): as the wave propagates away from the resonance, the increase in macroscopic energy dissipation due to the presence of the wave (with respect to unperturbed regions) results in an increase of the velocity dispersion. If the increase is important enough, the two roots of the dispersion relation (short and long waves) will merge, and the long wave becomes evanescent, until the velocity dispersion has dropped again. Therefore, the long wave could be partly ``reflected" as a short wave on the newly formed evanescent region, and partly transmitted as the observed long wave. In order to avoid this interesting, but rather complex possibility, \cite{SDLYC85} have neglected the contribution of $t_2$, an approximation made at the onset by \cite{BGT86}. Whether Q barriers actually occur in Saturn's rings is still an open question. In any case, Eq.~(7.70) relates the radial variation of the amplitude of the motion, $\partial\epsilon/\partial a$ to the viscous coefficient $t_1$. 
One sees again that the wave is damped if $t_1<0$, and that viscous overstabilities can take place in the opposite case (but see the comment in footnote 43). Let us come back now to the discussion of the effects of the satellite torque on the evolution of the ring. Let us first assume for the sake of the argument that the wave is in steady state. Due to the ring viscosity, and according to Eq.~(6.49), Eq.~(7.60) now reads $L_H^{sg}+L_H^{vis}\simeq T_s+L_{vis}^-$ in the propagation region, where $L_{vis}^-$ is the unperturbed angular momentum flux inside the wave region. Denoting the unperturbed flux outside the wave region by $L_{vis}^+$, the steady state condition implies $L_{vis}^+=L_{vis}^-+ T_s$. First, notice that $L_{vis}^-$ and $L_{vis}^+$ are positive and $T_s$ is negative. If the satellite torque is larger in magnitude than the unperturbed angular momentum flux, $L_H>0$ outside the wave region and $L_H<0$ inside, resulting in a steady loss of angular momentum, an inward mass drift and, eventually, the formation of a gap. \cite{GT78c} have argued that this process was responsible for the formation of the Cassini division by Mimas. Once a gap is open, the torque can be reduced from its full value (the resonance region is not completely occupied with ring material, so that the torque integral is truncated), and another equilibrium or quasi-equilibrium can be reached in which the satellite confines the inner edge of the gap (see section 6). On the other hand, if $L_{vis}^-+T_s>0$, an enhancement of the surface density in the ring region might result, a feature apparent in the observed wave profiles. \cite{BGT86} provide the following explanation for this effect. If a strong enough density wave is launched in a medium of constant background surface density, $q$ will quickly reach values such that $a_{r\theta}<0$ in the wave zone, so that $L_H<0$ and an inward drift again takes place.
As the surface density increases, the angular momentum flux needed to evacuate the torque can be obtained for lower values of $q$, which decreases until it becomes $\lesssim q_2$, the critical value for viscous angular momentum flux reversal (see section 5). Then $L_H$ is positive again, and a quasi steady-state can be maintained. These authors have also used this argument to show that strong waves have their amplitudes limited to $q\lesssim q_2<1$. As satellite torques are negative, satellite orbits expand in time. Calculations based on the cumulative effects of all torques excited in the rings show that the related time-scales of satellite recession are remarkably short, e.g. $\sim 10^7$ years for the shepherd satellites of the F ring \citep{GT82,BGT84}. At the same time, a substantial inward drift of ring material should have taken place \citep{LPC84}, and the analysis of the situation in 1984 was that either the rings were young or that some essential piece of physics was overlooked, e.g. that nonlinear saturation might reduce the torque magnitude below its linear value. This consideration actually motivated the development of a nonlinear theory of density waves; however, nonlinear effects do not seem to significantly reduce the satellite torques. Instead, the idea that the rings are young is now somewhat more popular; \cite{D91}, e.g., has given some support to a recent cometary origin for the rings, which appears to be the most likely late mechanism of formation of ring systems. However, the issue is not yet completely settled. \section{Conclusion} A general formalism for the analysis of the dynamics of major ring systems (in the sense defined in the introduction) has been presented here.
This formalism draws on two complementary approaches, one which deals with the macroscopic ring motions, and one related to the microphysical collisional processes\footnote{In ring dynamics, ``microphysical'' has a rather strange meaning, as the individual ring particles are macroscopic in size.}. It should not be forgotten that the formalism relies on a fluid approach, which is strictly valid only for phenomena whose characteristic length-scale is larger than the particles' typical size and mean separation. Also, the dynamics is treated in the one-fluid approximation. Obviously the collisional behavior is much less understood than the global dynamics, limiting our interpretation of the finest dynamical effects, although some important general features of the ring internal stress have been uncovered in the past few years. Note that it might be difficult to improve systematically on this point, at least in the framework of the Boltzmann collision term, which relies on the assumption of ``molecular chaos'', and neglects velocity and position correlations of the particles. The condition of validity of this assumption has been extensively discussed in Plasma Physics on the basis of the so-called BBGKY hierarchy, and a criterion has been derived, which states that correlations are negligible whenever the mean kinetic energy of individual particles is much larger than their mean energy of mutual interaction. In ring systems, the velocity dispersion is comparable to the mean two-body gravitational potential, so that this criterion is not satisfied, although it is not strongly violated. It seems important however not to disregard heuristic approaches in favor of purely formal developments, as many breakthroughs have already been achieved in this way. Also, it is likely that direct N-body numerical simulations will bring useful information on this front in the future. On the side of the global dynamics, there are still some major unresolved problems.
The most critical is probably the question of the rigid precession of narrow elliptic rings. As argued in section 7.1, the results of the recent encounter of Voyager II with the Uranian system have cast some doubts on the validity of the self-gravity model put forward by \cite{GT79a}. It has also appeared that the standard radiative transfer theory is not appropriate for the analysis of the radio data on the Uranian rings, due to the high density and particle close packing that are prevalent in these systems. The analyses of the damping of density waves on one hand, and of the collisional evolution of the ring particle size distribution on the other, suggest two contradictory models of ring particle structure and collisional properties. This issue might be alleviated if as yet unmodelled damping mechanisms are found, e.g. the scattering of the wave by large particles (see section 7.2.2); alternatively, the DEBs model might be disproved in the end. Also, the absence of perturbation at some resonances is not yet understood; similarly, a detailed criterion for gap opening remains to be established. The Uranian rings are apparently correctly described as elliptic modes. However, the mechanism of selection of the observed modes is not yet understood (why, e.g., does the $\alpha$ ring have an $m=1$ mode and the $\delta$ ring an $m=2$?). Also, the kinematic analysis of the $\gamma$ ring still exhibits unexplained kinematic residuals (see \citealt{Fetal88}). For a long time, no convincing mechanism of confinement of the inner edges of Saturn's rings (such as the A and B rings) had been proposed. However, it has recently been argued that such edges could be maintained by ballistic transport processes (see \citealt{Detal92}). Further studies are needed to confirm or refute this proposal. The question of the confinement of the Neptunian ring arcs is still open, although some interesting progress has been made recently (see \citealt{P91}).
Finally, let us point out that some interesting questions concerning the dynamics of charged particles are still open, such as questions related to the formation, propagation (if any) and disappearance of spokes. All these issues, as well as a number of others, are the object of active ongoing research. \section*{Acknowledgements\markboth{Acknowledgements}{}} \label{sec:acknow} \addcontentsline{toc}{section}{\nameref{sec:acknow}} The material presented here represents mostly the work of a collaboration between N. Borderies, P. Goldreich and S. Tremaine over the past ten years or so. I am indebted to N. Borderies and P. Goldreich for many discussions on all aspects of the content of these notes, and I wish to thank them for these fruitful exchanges. \clearpage \section*{Appendix\markboth{Appendix}{}} \label{sec:appendix} \addcontentsline{toc}{section}{\nameref{sec:appendix}}
\section{Introduction and main result} Let $\Omega$ be a bounded domain in $\mathbb{R}^n$ with $C^{2}$ boundary, and let $D_{1}^{*}$ and $D_{2}^{*}$ be two open sets whose closures belong to $\Omega$, touching only at the origin, with the inner normal vector of $\partial{D}_{1}^{*}$ pointing in the positive $x_{n}$-direction. Translating $D_{1}^{*}$ and $D_{2}^{*}$ by $\frac{\varepsilon}{2}$ along the $x_{n}$-axis, we obtain $$D_{1}^{\varepsilon}:=D_{1}^{*}+(0',\frac{\varepsilon}{2}),\quad\mbox{and}\quad\,D_{2}^{\varepsilon}:=D_{2}^{*}-(0',\frac{\varepsilon}{2}).$$ When there is no confusion, we drop the superscripts $\varepsilon$ and denote $D_{1}:=D_{1}^{\varepsilon}$ and $D_{2}:=D_{2}^{\varepsilon}$. Denoting $\widetilde{\Omega} := \Omega \setminus \overline{(D_1 \cup D_2)}$, we consider the following elliptic equation with Dirichlet boundary data: \begin{equation}\label{equk} \begin{cases} \mathrm{div}\Big(a_{k}(x)\nabla{u}_{k}\Big)=0&\mbox{in}~\Omega,\\ u_{k}=\varphi(x)&\mbox{on}~\partial\Omega, \end{cases} \end{equation} where $\varphi\in{C}^{2}(\partial\Omega)$ is given, and $$a_{k}(x)= \begin{cases} k\in(0,\infty)&\mbox{in}~D_{1}\cup{D}_{2},\\ 1&\mbox{in}~\widetilde{\Omega}. \end{cases} $$ The equation above can be considered as a simple model for electric conduction, where $a_k$ refers to the conductivities, which can be assumed to be 1 in the matrix after normalization, and the solution $u_k$ gives the voltage potential. From an engineering point of view, it is very important to estimate $\nabla u_k$, which represents the electric field, in the narrow region between the inclusions. This problem is analogous to a linear system of elasticity studied by Babu\v{s}ka, Andersson, Smith and Levin \cite{BASL}, who showed numerically that, when the ellipticity constants are bounded away from $0$ and infinity, gradients of solutions remain bounded independently of $\varepsilon$, the distance between the inclusions.
Bonnetier and Vogelius \cite{BV} proved that for a fixed $k$, $|\nabla u_k|$ remains bounded as $\varepsilon$ goes to 0, for circular inclusions $D_1$ and $D_2$ in dimension $n = 2$. This result was extended by Li and Vogelius \cite{LV} to general second order elliptic equations of divergence form with piecewise H\"older coefficients and general shapes of the inclusions $D_1$ and $D_2$ in any dimension. Furthermore, they established a stronger $C^{1,\alpha}$ control of $u_k$, which is independent of $\varepsilon$, in each region. Li and Nirenberg \cite{LN} further extended this $C^{1,\alpha}$ result to general second order elliptic systems of divergence form. When $k$ equals $\infty$ (perfect conductor) or $0$ (insulator), it was shown in \cite{Kel, BudCar, Mar} that the gradient of solutions generally becomes unbounded as $\varepsilon \to 0$. When $k$ goes to $\infty$, $u_k$ converges to the solution of the following perfect conductivity problem: \begin{equation}\label{equinfty} \begin{cases} \Delta{u}=0&\mbox{in}~\widetilde{\Omega},\\ u=C_i \mbox{ (Constants)}&\mbox{on}~\partial{D}_{i},~i=1,2,\\ \int_{\partial{D}_{i}}\frac{\partial{u}}{\partial\nu}=0&i=1,2,\\ u=\varphi(x)&\mbox{on}~\partial\Omega. \end{cases} \end{equation} When $k$ goes to $0$, $u_k$ converges to the solution of the following insulated conductivity problem: \begin{equation}\label{equzero} \left\{ \begin{aligned} -\Delta u &=0 \quad \mbox{in }\widetilde{\Omega},\\ \frac{\partial u}{\partial \nu} &= 0 \quad \mbox{on}~\partial{D}_{i},~i=1,2,\\ u &= \varphi \quad \mbox{on } \partial \Omega. \end{aligned} \right. \end{equation} See, e.g., the Appendix of \cite{BLY1} and \cite{BLY2} for derivations of the above equations. Here $\nu$ denotes the inward unit normal vectors on $\partial D_i$, $i = 1,2$. Ammari et al. proved in \cite{AKLLL} and \cite{AKL}, among other things, the following. Let $D_1^*$ and $D_2^*$ be unit balls in $\mathbb{R}^2$, and let $H$ be a harmonic function in $\mathbb{R}^2$.
They considered the perfect and insulated conductivity problems in $\mathbb{R}^2$: \begin{equation*} \begin{cases} \Delta{u}=0&\mbox{in}~\mathbb{R}^2\setminus\overline{(D_1 \cup D_2)},\\ u=C_i \mbox{ (Constants)}&\mbox{on}~\partial{D}_{i},~i=1,2,\\ \int_{\partial{D}_{i}}\frac{\partial{u}}{\partial\nu}=0&i=1,2,\\ u(x)-H(x) = O(|x|^{-1})&\mbox{as}~|x| \to \infty, \end{cases} \end{equation*} and \begin{equation*} \begin{cases} \Delta{u}=0&\mbox{in}~\mathbb{R}^2\setminus\overline{(D_1 \cup D_2)},\\ \frac{\partial u}{\partial \nu} = 0 &\mbox{on}~\partial{D}_{i},~i=1,2,\\ u(x)-H(x) = O(|x|^{-1})&\mbox{as}~|x| \to \infty. \end{cases} \end{equation*} In both cases, they proved that for some $C$ independent of $\varepsilon$, $$\| \nabla u\|_{L^\infty(B_4)} \le C \varepsilon^{-1/2}.$$ They also showed that the upper bounds are optimal in the sense that for appropriate $H$, $$\| \nabla u\|_{L^\infty(B_4)} \ge \varepsilon^{-1/2}/C.$$ Yun extended in \cite{Y1} and \cite{Y2} the results allowing $D_1^*$ and $D_2^*$ to be any bounded strictly convex smooth domains. The above gradient estimates were localized and extended to higher dimensions by Bao, Li and Yin in \cite{BLY1} and \cite{BLY2}. For the perfect conductor case, they considered problem \eqref{equinfty} and proved in \cite{BLY1} that \begin{equation*} \begin{cases} \| \nabla u \|_{L^\infty(\widetilde{\Omega})} \le C\varepsilon^{-1/2} \|\varphi\|_{C^2(\partial \Omega)} &\mbox{when}~n=2,\\ \| \nabla u \|_{L^\infty(\widetilde{\Omega})} \le C|\varepsilon \ln \varepsilon|^{-1} \|\varphi\|_{C^2(\partial \Omega)} &\mbox{when}~n=3,\\ \| \nabla u \|_{L^\infty(\widetilde{\Omega})} \le C\varepsilon^{-1} \|\varphi\|_{C^2(\partial \Omega)} &\mbox{when}~n\ge 4. \end{cases} \end{equation*} The above bounds were shown to be optimal in the paper. For further works on the perfect conductivity problem and closely related ones, see e.g. \cite{ACKLY,BT1,BT2,DL,KLY1,KLY2,L,LLY,LWX,BLL,BLL2,DZ,KL,CY,ADY,Gor,LimYun} and the references therein. 
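To get a feeling for the perfect conductivity rates recalled above, one may compare them numerically (an illustrative computation, not contained in \cite{BLY1}): taking $\varepsilon = 10^{-4}$, $$\varepsilon^{-1/2}=10^{2},\qquad |\varepsilon\ln\varepsilon|^{-1}=\frac{1}{10^{-4}\cdot 4\ln 10}\approx 1.1\times 10^{3},\qquad \varepsilon^{-1}=10^{4},$$ so the blow-up becomes more severe as the dimension increases, with the rate in dimension $n=3$ lying strictly between those in $n=2$ and $n\ge 4$.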
For the insulated problem \eqref{equzero}, it was proved in \cite{BLY2} that \begin{equation}\label{insulated_upper_bound} \| \nabla u \|_{L^\infty(\widetilde{\Omega})} \le C\varepsilon^{-1/2} \|\varphi\|_{C^2(\partial \Omega)}\quad \mbox{when}~n\ge 2. \end{equation} The upper bound is optimal for $n = 2$ as mentioned earlier, while it was not known whether it is optimal in dimensions $n \ge 3$. Yun \cite{Y3} considered the following insulated problem in $\mathbb{R}^3$ outside two balls: Let $H$ be a harmonic function in $\mathbb{R}^3$, $D_1 = B_1\left(0,0,1+\frac{\varepsilon}{2} \right)$, and $D_2 = B_1\left(0,0,-1-\frac{\varepsilon}{2} \right)$, \begin{equation*} \begin{cases} \Delta{u}=0&\mbox{in}~\mathbb{R}^3\setminus\overline{(D_1 \cup D_2)},\\ \frac{\partial u}{\partial \nu} = 0 &\mbox{on}~\partial{D}_{i},~i=1,2,\\ u(x)-H(x) = O(|x|^{-2})&\mbox{as}~|x| \to \infty. \end{cases} \end{equation*} He proved that for some positive constant $C$ independent of $\varepsilon$, $$\max_{|x_3|\le \varepsilon/2}|\nabla u(0,0,x_3)| \le C \varepsilon^{\frac{\sqrt{2}-2}{2}}.$$ He also showed that this upper bound of $|\nabla u|$ on the $\varepsilon$-segment connecting $D_1$ and $D_2$ is optimal for $H(x) \equiv x_1$. Since $\varepsilon^{\frac{\sqrt{2}-2}{2}} = \varepsilon^{-\frac{1}{2}+\frac{\sqrt{2}-1}{2}}$, this example shows that, at least for two balls in $\mathbb{R}^3$, an estimate of the form \eqref{insulated_upper_bound} with $\varepsilon^{-1/2}$ replaced by $\varepsilon^{-1/2+\beta}$ can only hold for $\beta \le \frac{\sqrt{2}-1}{2}$. In this paper, we focus on the insulated conductivity problem \eqref{equzero} in dimension $n \ge 3$, and improve the upper bound \eqref{insulated_upper_bound} to the rate $\varepsilon^{-1/2 + \beta}$, for some $\beta > 0$. Analogous questions for elliptic systems are still open, and we discuss them in Section 4. We point out that the insulator case for Lam\'{e} systems in dimension $n = 2$ was studied by Lim and Yu \cite{LimYu}. From now on, we assume that $\partial{D}_{1}^{*}$ and $\partial{D}_{2}^{*}$ are $C^2$, and they are relatively convex near the origin.
That is, for some positive constants $R_0, \kappa$, we assume that when $0<|x'|<R_{0}$, $\partial{D}_{1}^{*}$ and $\partial{D}_{2}^{*}$ are respectively the graphs of two $C^{2}$ functions $f$ and $g$ in terms of $x'$, and $$f(x')>g(x'),\quad\mbox{for}~~0<|x'|<R_{0},$$ \begin{equation}\label{fg_0} f(0')=g(0')=0,\quad\nabla_{x'}f(0')=\nabla_{x'}g(0')=0, \end{equation} \begin{equation}\label{fg_1} \nabla^{2}_{x'}(f-g)(x')\geq\kappa I_{n-1},\quad\mbox{for}~~0<|x'|<R_{0}, \end{equation} where $I_{n-1}$ denotes the $(n-1) \times (n-1)$ identity matrix. Let $a(x) \in C^\alpha(\overline{\widetilde{\Omega}})$, for some $\alpha \in (0,1)$, be a symmetric, positive definite matrix function satisfying $$\lambda \le a(x) \le \Lambda, \quad \mbox{for }x \in \widetilde{\Omega},$$ for some positive constants $\lambda, \Lambda$. Let $\nu = (\nu_1, \cdots, \nu_n)$ denote the unit normal vector on $\partial D_1$ and $\partial D_2$, pointing towards the interior of $D_1$ and $D_2$. We consider the following insulated conductivity problem: \begin{equation}\label{main_problem} \left\{ \begin{aligned} -\partial_i (a^{ij} \partial_j u) &=0 \quad \mbox{in }\widetilde{\Omega},\\ a^{ij} \partial_j u \nu_i &= 0 \quad \mbox{on } \partial (D_1 \cup D_2),\\ u &= \varphi \quad \mbox{on } \partial \Omega, \end{aligned} \right. \end{equation} where $\varphi \in C^{2}(\partial \Omega)$ is given. \\ For $0 < \,r\leq\,R_{0}$, we denote \begin{align}\label{domain_def_Omega} \Omega_{x_0,r}:=\left\{(x',x_{n})\in \widetilde{\Omega}~\big|~-\frac{\varepsilon}{2}\right.&\left.+g(x')<x_{n}<\frac{\varepsilon}{2}+f(x'),~|x' - x_0'|<r\right\},\nonumber\\ \Gamma_+ :=& \left\{ x_n = \frac{\varepsilon}{2}+f(x'),~|x'|<R_0\right\},\\ \Gamma_- :=& \left\{ x_n = -\frac{\varepsilon}{2}+g(x'),~|x'|<R_0\right\}. 
\nonumber \end{align} Since the blow-up of the gradient can only occur in the narrow region between $D_1$ and $D_2$, we will focus on the following problem near the origin: \begin{equation}\label{main_problem_narrow} \left\{ \begin{aligned} -\partial_i (a^{ij} \partial_j u) &=0 \quad \mbox{in }\Omega_{0,R_0},\\ a^{ij} \partial_j u \nu_i &= 0 \quad \mbox{on } \Gamma_+ \cup \Gamma_-,\\ \end{aligned} \right. \end{equation} where $\nu = (\nu_1, \cdots, \nu_n)$ denotes the unit normal vector on $\Gamma_+$ and $\Gamma_-$, pointing upward and downward respectively. Here is the main result of the paper.\\ \begin{theorem}\label{main_thm} Let $f,g,a,\alpha$ be as above, and let $u \in H^1(\Omega_{0,R_0})$ be a solution of \eqref{main_problem_narrow} in dimension $n \ge 3$. There exist positive constants $r_0, \beta$ and $C$ depending only on $n$, $\lambda$, $\Lambda$, $R_0$, $\kappa$, $\alpha$, $\|a\|_{C^\alpha(\Omega_{0,R_0})}$, $\|f\|_{C^{2}(\{|x'| \le R_0\})}$ and $\|g\|_{C^{2}(\{|x'| \le R_0\})},$ such that \begin{equation}\label{main_result} |\nabla u (x_0)| \le C \| u\|_{L^\infty(\Omega_{0,R_0})} \left(\varepsilon + |x_0'|^2 \right) ^{-1/2 + \beta}, \end{equation} for all $x_0 \in \Omega_{0 , r_0}$ and $\varepsilon \in (0,1)$.\\ \end{theorem} Let $u \in H^1(\widetilde{\Omega})$ be a weak solution of \eqref{main_problem}. By the maximum principle and the gradient estimates for solutions of elliptic equations, \begin{equation}\label{boundedness_u} \|u\|_{L^\infty(\widetilde{\Omega})} \le \|\varphi\|_{L^\infty(\partial \Omega)}, \end{equation} and $$\| \nabla u\|_{L^\infty(\widetilde{\Omega} \setminus \Omega_{0, r_0} )} \le C\| \varphi\|_{C^{2}(\partial \Omega)}.$$ Therefore, a corollary of Theorem \ref{main_thm} is as follows.\\ \begin{corollary} Let $u \in H^1(\widetilde{\Omega})$ be a weak solution of \eqref{main_problem} in dimension $n \ge 3$.
There exist positive constants $\beta$ and $C$ depending only on $n$, $\lambda$, $\Lambda$, $R_0$, $\kappa$, $\|a\|_{C^\alpha}$, $\|f\|_{C^{2}}$ and $\|g\|_{C^{2}},$ such that \begin{equation}\label{main_result_2} \|\nabla u\|_{L^\infty(\widetilde{\Omega})} \le C \| \varphi\|_{C^{2}(\partial \Omega)} \varepsilon ^{-\frac{1}{2} + \beta}. \end{equation}\\ \end{corollary} \begin{remark} If there are more than two inclusions, estimate \eqref{main_result_2} still holds, with $\varepsilon$ being the minimal distance between the inclusions. \end{remark} The rest of this paper is organized as follows. In Section 2, we prove a lemma which is used in the proof of Theorem \ref{main_thm}. Theorem \ref{main_thm} is proved in Section 3. In Section 4, we give a gradient estimate for a problem for elliptic systems analogous to problem \eqref{main_problem_narrow}.\\ \section{A regularity lemma} In this section, we give a regularity lemma for elliptic systems (elliptic equations when $N = 1$). Let us first describe the domains and operators involved. We define $S$ to be the cylinder $$S = \{ (x',x_n) \in \mathbb{R}^n ~\big|~ |x'| < 1, |x_n| < 1 \},$$ and fix constants $c_m$, with $0 \le m \le l$, such that $$-1 = c_0 < c_1 < \cdots < c_l = 1,$$ and let $m_0$ be the integer such that $$c_{m_0 - 1} \le 0 < c_{m_0}.$$ We divide the domain $S$ into $l$ parts by setting $$\Omega_m = \{x \in S ~\big|~ c_{m-1} < x_n < c_m \}, \quad \mbox{for }1 \le m \le l.$$ For $1 \le \alpha, \beta \le n, 1 \le i,j \le N$, let $A^{\alpha \beta}_{ij}(x)$ be functions such that $$\| A^{\alpha \beta}_{ij} \|_{L^\infty(S)} \le \Lambda,$$ $$\int_S A^{\alpha \beta}_{ij}(x) \partial_\alpha \varphi_i(x) \partial_\beta \varphi_j(x) \ge \lambda \int_S |\nabla \varphi|^2, \quad \forall \varphi \in H_0^1(S; \mathbb{R}^N),$$ for some $\lambda, \Lambda > 0$, and for each $1 \le m \le l$, $A^{\alpha \beta}_{ij}(x) \in C^\mu(\overline{\Omega}_m)$, for some $0 < \mu <1$.
We denote $(A^{\alpha \beta}_{ij}(x))$ by $A(x)$. For $1 \le \alpha \le n, 1 \le i \le N$, let \begin{align*} H(x) &= \{H_i\} \in L^\infty(S),\\ G(x) &= \{G_i^\alpha\} \in C^\mu(\overline{\Omega}_m), \end{align*} for all $m = 1 ,\cdots , l$. Then we have the following interior gradient estimate.\\ \begin{lemma}\label{gradient_lemma} Let $A(x)$, $H(x)$ and $G(x)$ be as above. There exists a positive constant $C$, depending only on $n, \mu, \lambda, \Lambda$ and an upper bound of $\{\|A\|_{C^\mu(\overline{\Omega}_m)}\}_{m = 1}^l$, such that if $u \in H^1(S; \mathbb{R}^N)$ is a weak solution to $$\partial_\alpha (A^{\alpha \beta}_{ij}(x) \partial_\beta u_j) = H_i + \partial_\alpha G^\alpha_i \quad \mbox{in }S,$$ then $$\| u\|_{L^\infty(\frac{1}{2}S)} + \| \nabla u\|_{L^\infty(\frac{1}{2}S)} \le C\left(\|u\|_{L^2(S)} + \|H\|_{L^\infty(S)} + \max_{1 \le m \le l} \|G\|_{C^\mu(\overline{\Omega}_m)} \right).$$\\ \end{lemma} \begin{remark} We point out that the constant $C$ in the lemma is independent of $l$. \end{remark} \begin{proof} The proof of Lemma \ref{gradient_lemma} is a modification of the proof of Proposition 4.1 in \cite{LN}. Even though the constant $C$ in \cite[Proposition 4.1]{LN} depends on $l$, the number of subdomains into which $S$ is divided, this dependence only enters in estimating the quantities $\| A - \bar{A} \|_{Y^{1+\mu, 2}}$, $\| G - \bar{G} \|_{Y^{1+\mu, 2}}$, and $\| H - \bar{H} \|_{Y^{\mu, 2}}$ which will be defined below.
We will show that these quantities are independent of $l$ due to the nature of our domain $S$, and hence the constant $C$ in Lemma \ref{gradient_lemma} is independent of $l$.\\ \end{proof} For $s > 0, 1 < p < \infty$, we define the norm $$\|f\|_{Y^{s,p}}:= \sup_{0 < r \le 1} r^{1-s} \left( {\int\hspace*{-4.3mm}\diagup}_{rS} |f|^p \right)^{1/p}.$$ We define piecewise-constant coefficients $\bar{A}$ associated to $A$ by setting $$\bar{A}(x) := \left\{\begin{aligned} \lim_{x \in \Omega_m, x \to (0', c_{m-1})} &A(x), &&\mbox{if }x \in \Omega_m, m > m_0;\\ &A(0), &&\mbox{if }x \in \Omega_{m_0};\\ \lim_{x \in \Omega_m, x \to (0', c_m)} &A(x), &&\mbox{if }x \in \Omega_m, m < m_0. \end{aligned} \right.$$ Similarly, we define a piecewise-constant tensor $\bar{G}$ associated to $G$. We also define a constant vector $\bar{H}$ associated to $H$ by $$\bar{H} := {\int\hspace*{-4.3mm}\diagup}_S H.$$\\ \begin{lemma} Let $A, \bar{A}, H, \bar{H}, G, \bar{G}$ be as above. Then there exists a positive constant $C$, depending only on $n$ and $\mu$, such that \begin{align*} \| A - \bar{A} \|_{Y^{1+\mu , 2}} & \le C\max_{1 \le m \le l} \|A\|_{C^\mu(\overline{\Omega}_m)},\\ \| G - \bar{G} \|_{Y^{1+\mu , 2}} & \le C\max_{1 \le m \le l} \|G\|_{C^\mu(\overline{\Omega}_m)},\\ \| H - \bar{H} \|_{Y^{\mu , 2}} & \le C\|H\|_{L^\infty(S)}.\\ \end{align*} \end{lemma} \begin{proof} The last inequality follows immediately from the definition of $Y^{\mu,2}$ and $\bar{H}$: \begin{align*} \| H - \bar{H} \|_{Y^{\mu , 2}} = \sup_{0 < r \le 1} r^{1-\mu} \left( {\int\hspace*{-4.3mm}\diagup}_{rS} |H - \bar{H}|^2 \right)^{1/2} \le C\|H\|_{L^\infty(S)}.
\end{align*} By a direct computation, we have \begin{align*} \left( {\int\hspace*{-4.3mm}\diagup}_{rS} |A - \bar{A}|^2 \right)^{1/2} &\le \left( \frac{1}{|rS|} \sum_{m = 1}^l \int_{rS \cap \Omega_m} |A(x) - \bar{A}(x)|^2 \, dx \right)^{1/2}\\ &\le \left[ \frac{1}{|rS|} \left( \sum_{m = 1}^{m_0 - 1} \|A\|^2_{C^\mu(\overline{\Omega}_m)} \int_{rS \cap \Omega_m} |x - (0', c_m)|^{2\mu} \, dx \right.\right.\\ &+ \|A\|^2_{C^\mu(\overline{\Omega}_{m_0})} \int_{rS \cap \Omega_{m_0}} |x|^{2\mu} \, dx\\ + \sum_{m = m_0 + 1}^{l} &\left. \left.\|A\|^2_{C^\mu(\overline{\Omega}_{m})} \int_{rS \cap \Omega_{m}} |x - (0', c_{m-1})|^{2\mu} \,dx \right) \right]^{1/2} \\ &\le \max_{1 \le m \le l} \|A\|_{C^\mu(\overline{\Omega}_m)} \left( {\int\hspace*{-4.3mm}\diagup}_{rS} |x|^{2\mu} \, dx \right)^{1/2}\\ &\le C\max_{1 \le m \le l} \|A\|_{C^\mu(\overline{\Omega}_m)}r^\mu. \end{align*} This proves the first inequality. The second inequality follows similarly. \end{proof} \section{Proof of Theorem \ref{main_thm}} In this section, we prove Theorem \ref{main_thm}. For a small $r_0$ independent of $\varepsilon$, and any $x_0 \in \Omega_{0, r_0}$, we estimate $|\nabla u(x_0)|$ as follows: First we establish a Harnack inequality in $\Omega_{x_0, r} \setminus \Omega_{x_0, r/2}$, for $r > 0$ in a suitable range. Together with the maximum principle, this gives the oscillation of $u$ in $\Omega_{x_0, \delta}$ a decay $\delta^{2\beta}$, for some positive $\varepsilon$-independent $\beta$, where $$\delta:= (\varepsilon + |x_0'|^2)^{1/2}.$$ Then we perform a suitable change of variables in $\Omega_{x_0, \delta/4}$, and apply Lemma \ref{gradient_lemma} to obtain the desired estimate on $|\nabla u(x_0)|$. We fix a $\gamma \in (0,1)$, and let $r_0 >0$ denote a constant depending only on $n$, $\kappa$, $\gamma$, $R_0$, $\|f\|_{C^{2}}$ and $\|g\|_{C^{2}}$, whose value will be fixed in the proof. We will always consider $0 < \varepsilon \le r_0^2$.
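To illustrate the role of $\delta$, consider the model case where $D_1^*$ and $D_2^*$ are unit balls (a heuristic example; the proof below only uses \eqref{fg_0} and \eqref{fg_1}). Then, near the origin, $$f(x')=1-\sqrt{1-|x'|^{2}}=\frac{|x'|^{2}}{2}+O(|x'|^{4}),\qquad g(x')=-1+\sqrt{1-|x'|^{2}}=-\frac{|x'|^{2}}{2}+O(|x'|^{4}),$$ so \eqref{fg_1} holds with, say, $\kappa=1$ for $R_0$ small, and the vertical gap above $x_0'$ is $$\varepsilon+f(x_0')-g(x_0')=\varepsilon+|x_0'|^{2}+O(|x_0'|^{4})\sim\delta^{2}.$$ In other words, $\delta^{2}$ is comparable to the distance between the two inclusions above the point $x_0'$.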
First, we require $r_0$ small so that for $|x_0'| < r_0$, $$10\delta < \delta^{1- \gamma} < R_0/4.$$\\ \begin{lemma}\label{harnack_inequality} There exists a small $r_0$, depending only on $n, \kappa, \gamma, R_0, \|f\|_{C^{2}}$ and $\|g\|_{C^{2}}$, such that for any $x_0 \in \Omega_{0,r_0}$, $5|x_0'| < r < \delta^{1- \gamma}$, if $u \in H^1(\Omega_{x_0, 2r} \setminus \Omega_{x_0, r/4})$ is a positive solution to the equation $$ \left\{ \begin{aligned} -\partial_i (a^{ij}(x) \partial_j u(x)) &=0 \quad \mbox{in }\Omega_{x_0, 2r} \setminus \Omega_{x_0, r/4},\\ a^{ij}(x) \partial_j u(x) \nu_i(x) &= 0 \quad \mbox{on } (\Gamma_+ \cup \Gamma_-) \cap \overline{\Omega_{x_0, 2r} \setminus \Omega_{x_0, r/4}},\\ \end{aligned} \right. $$ then, \begin{equation}\label{harnack} \sup_{\Omega_{x_0,r} \setminus \Omega_{x_0, r/2}} u \le C \inf_{\Omega_{x_0, r} \setminus \Omega_{x_0, r/2}} u, \end{equation} for some constant $C >0$ depending only on $n, \kappa, \lambda, \Lambda, R_0, \|f\|_{C^{2}}$ and $\|g\|_{C^{2}},$ but independent of $r$ and $u$. \end{lemma} \begin{proof} We only need to prove \eqref{harnack} for $|x_0'| > 0$, since the $|x_0'| = 0$ case follows from the result for $|x_0'|>0$ and then sending $|x_0'|$ to $0$. We denote $$h_r := \varepsilon + f\left(x_0' - \frac{r}{4} \frac{x_0'}{|x_0'|}\right) - g\left(x_0' - \frac{r}{4} \frac{x_0'}{|x_0'|}\right),$$ and perform a change of variables by setting \begin{equation}\label{x_to_y_1} \left\{ \begin{aligned} y' &= x' - x_0' ,\\ y_n &= 2 h_r \left( \frac{x_n - g(x') + \varepsilon/2}{\varepsilon + f(x') - g(x')} - \frac{1}{2} \right), \end{aligned}\right. \quad (x',x_n) \in \Omega_{x_0, 2r} \setminus \Omega_{x_0, r/4}. 
\end{equation} This change of variables maps the domain $\Omega_{x_0, 2r} \setminus \Omega_{x_0, r/4}$ to an annular cylinder of height $2h_r$, denoted by $Q_{2r, h_r} \setminus Q_{r/4, h_r}$, where \begin{equation}\label{Q_s_t} Q_{s,t}:= \{ y = (y',y_n) \in \mathbb{R}^n ~\big|~ |y'| < s, |y_n| < t\}, \end{equation} for $s,t > 0$. We will show that the Jacobian matrix of the change of variables \eqref{x_to_y_1}, denoted by $\partial_x y$, and its inverse matrix $\partial_y x$ satisfy \begin{equation}\label{transformation_lipschitz} |(\partial_x y)^{ij}| \le C, \quad |(\partial_y x)^{ij}| \le C, \quad \mbox{for }y \in Q_{2r, h_r} \setminus Q_{r/4, h_r}, \end{equation} where $C > 0$ depends only on $n, \kappa, R_0, \|f\|_{C^{2}}$ and $\|g\|_{C^{2}}$. Let $v(y) = u(x)$; then $v$ satisfies \begin{equation}\label{equation_v} \left\{ \begin{aligned} -\partial_i(b^{ij}(y) \partial_j v(y)) &=0 \quad \mbox{in } Q_{2r, h_r} \setminus Q_{r/4, h_r},\\ b^{nj}(y) \partial_j v(y) &= 0 \quad \mbox{on } \{y_n = -h_r\} \cup \{y_n = h_r\}, \end{aligned} \right. \end{equation} where the matrix $(b^{ij}(y))$ is given by \begin{equation}\label{b_ij_formula} (b^{ij}(y)) = \frac{(\partial_x y)(a^{ij})(\partial_x y)^t}{\det (\partial_x y)}, \end{equation} and $(\partial_x y)^t$ is the transpose of $\partial_x y$. It is easy to see that \eqref{transformation_lipschitz} implies, using $\lambda \le (a^{ij}) \le \Lambda$, \begin{equation}\label{b_ij_ellpticity} \frac{\lambda}{C} \le (b^{ij}(y)) \le C\Lambda, \quad \mbox{for }y \in Q_{2r, h_r} \setminus Q_{r/4, h_r}, \end{equation} for some constant $C > 0$ depending only on $n, R_0, \kappa, \|f\|_{C^{2}}$ and $\|g\|_{C^{2}}$. In the following and throughout this section, we will write $A \sim B$ if there exists a positive universal constant $C$, which might depend on $n, \lambda, \Lambda, R_0, \kappa, \|f\|_{C^{2}}$, and $\|g\|_{C^{2}},$ but not on $\varepsilon$, such that $C^{-1} B \le A \le C B$.
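As a quick sanity check in the model case $f(x')=\frac{|x'|^{2}}{2}$ and $g(x')=-\frac{|x'|^{2}}{2}$ (an illustration only; the proof does not use it), the height $h_r$ can be computed explicitly: since $x_0'-\frac{r}{4}\frac{x_0'}{|x_0'|}=\left(|x_0'|-\frac{r}{4}\right)\frac{x_0'}{|x_0'|}$, we get $$h_r=\varepsilon+\left|x_0'-\frac{r}{4}\frac{x_0'}{|x_0'|}\right|^{2}=\varepsilon+\left(\frac{r}{4}-|x_0'|\right)^{2},$$ which is comparable to $\varepsilon+r^{2}$ in the admissible range $5|x_0'|<r$, since then $\frac{r}{20}<\frac{r}{4}-|x_0'|\le\frac{r}{4}$.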
From \eqref{x_to_y_1}, one can compute that \begin{align*} (\partial_x y)^{ii} &= 1, \quad \mbox{for } 1 \le i \le n-1,\\ (\partial_x y)^{nn} &= \frac{2h_r}{\varepsilon + f(x_0'+ y') - g(x_0' + y')},\\ (\partial_x y)^{ni} &= - \frac{2h_r \partial_i g(x_0' + y') + 2y_n [\partial_i f(x_0' + y') - \partial_i g(x_0' + y')]}{\varepsilon + f(x_0' + y')- g(x_0' + y')}, \quad \mbox{for } 1 \le i \le n-1,\\ (\partial_x y)^{ij} &= 0, \quad \mbox{for } 1 \le i \le n-1, j \neq i. \end{align*} By \eqref{fg_0} and \eqref{fg_1}, one can see that $$h_r \sim \varepsilon + \left|x_0' - \frac{r}{4} \frac{x_0'}{|x_0'|}\right|^2.$$ Since $|y_n| \le h_r$, by using \eqref{fg_0} and \eqref{fg_1}, we have that, for $1 \le i \le n-1$, \begin{align*} \left|(\partial_x y)^{ni} \right| &\le C\frac{h_r |\partial_i g(x_0' + y')| + h_r [|\partial_i f(x_0' + y')| + |\partial_i g(x_0' + y')|]}{\varepsilon + f(x_0' + y')- g(x_0' + y')}\\ &\le C \frac{h_r}{\varepsilon + f(x_0' + y')- g(x_0' + y')} \left[ |\partial_i f(x_0' + y')| + |\partial_i g(x_0' + y')| \right]\\ &\le C \frac{\varepsilon + \left|x_0' - \frac{r}{4} \frac{x_0'}{|x_0'|}\right|^2}{\varepsilon + |x_0' + y'|^2} |x_0' + y'|. \end{align*} Since $r/4 < |y'| < 2r < 2\delta^{1- \gamma}$ and $|x_0'| < \delta$, we can estimate \begin{align*} \left|(\partial_x y)^{ni} \right| \le C|x_0' + y'| \le C(|x_0'| + |y'|) \le C \delta^{1 - \gamma}. \end{align*} Next, we will show that \begin{equation}\label{partial_x_y_nn} (\partial_x y)^{nn} \sim 1, \quad \mbox{for }y \in Q_{2r, h_r} \setminus Q_{r/4, h_r}.
\end{equation} Indeed, by \eqref{fg_0} and \eqref{fg_1}, we have $$(\partial_x y)^{nn} = \frac{2h_r}{\varepsilon + f(x_0'+ y') - g(x_0' + y')} \sim \frac{\varepsilon + \left|x_0' - \frac{r}{4} \frac{x_0'}{|x_0'|}\right|^2}{\varepsilon + |x_0' + y'|^2}.$$ Since $|y'| > r/4$, it is easy to see $$(\partial_x y)^{nn} \le C \frac{\varepsilon + \left|x_0' - \frac{r}{4} \frac{x_0'}{|x_0'|}\right|^2}{\varepsilon + |x_0' + y'|^2} \le C.$$ On the other hand, since $|y'| < 2r$ and $|x_0'| < r/5$, we have \begin{align*} \varepsilon + \left|x_0' - \frac{r}{4} \frac{x_0'}{|x_0'|}\right|^2 &\ge \varepsilon + \left( \left| \frac{r}{4} \frac{x_0'}{|x_0'|} \right| - |x_0'| \right)^2\ge \varepsilon + \left( \frac{r}{4} - \frac{r}{5} \right)^2 = \varepsilon + \frac{1}{400}r^2, \end{align*} and \begin{align*} \varepsilon + |x_0' + y'|^2 &\le \varepsilon + 2|x_0'|^2 + 2|y'|^2 \le \varepsilon + \frac{2}{25}r^2 + 8r^2 < \varepsilon + 9r^2. \end{align*} Therefore, $$(\partial_x y)^{nn} \ge \frac{1}{C} \frac{\varepsilon + \left|x_0' - \frac{r}{4} \frac{x_0'}{|x_0'|}\right|^2}{\varepsilon + |x_0' + y'|^2} \ge \frac{1}{C} \frac{\varepsilon + r^2/400}{\varepsilon + 9 r^2} \ge \frac{1}{C},$$ and \eqref{partial_x_y_nn} is verified. We have shown $(\partial_x y)^{ii} \sim 1$, for all $i = 1, \cdots, n$, and $|(\partial_x y)^{ij}| \le C \delta^{1-\gamma}$, for $i \neq j$. We further require $r_0$ to be small enough so that off-diagonal entries of $\partial_x y$ are small. Therefore \eqref{transformation_lipschitz} follows. As mentioned earlier, \eqref{b_ij_ellpticity} follows from \eqref{transformation_lipschitz}. 
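Let us note in passing why the bound on $\partial_y x$ in \eqref{transformation_lipschitz} requires no extra work (an elementary remark, recorded here for convenience): since $\partial_x y$ differs from the identity matrix only in its last row, $$\det(\partial_x y)=(\partial_x y)^{nn}\sim 1,$$ and its inverse is explicit, $$(\partial_y x)^{ni}=-\frac{(\partial_x y)^{ni}}{(\partial_x y)^{nn}}~~\mbox{for}~1\le i\le n-1,\qquad (\partial_y x)^{nn}=\frac{1}{(\partial_x y)^{nn}},$$ while the first $n-1$ rows of $\partial_y x$ coincide with those of the identity matrix; hence all entries of $\partial_y x$ are bounded by the estimates already established.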
Now we define, for any integer $l$, $$A_l:= \left\{y \in \mathbb{R}^n ~\big|~ \frac{r}{4} < |y'| < 2r,~ (l-1) h_r < y_n < (l+1) h_r \right\}.$$ Note that $A_0 = Q_{2r, h_r} \setminus Q_{r/4, h_r}.$ For any $l \in \mathbb{Z}$, we define a new function $\tilde{v}$ by $$\tilde{v}(y) := v\left(y', (-1)^l\left(y_n - 2l h_r\right)\right), \quad \forall y \in A_l.$$ We also define the corresponding coefficients, for $k = 1,2, \cdots, n-1$, $$\tilde{b}^{nk}(y)=\tilde{b}^{kn}(y) := (-1)^lb^{nk}\left(y', (-1)^l\left(y_n - 2l h_r\right)\right), \quad \forall y \in A_l,$$ and for other indices, $$\tilde{b}^{ij}(y) := b^{ij}\left(y', (-1)^l\left(y_n - 2l h_r\right)\right), \quad \forall y \in A_l.$$ Therefore, $\tilde{v}(y)$ and $\tilde{b}^{ij}(y)$ are defined in the infinite cylinder shell $Q_{2r, \infty} \setminus Q_{r/4, \infty}$. By \eqref{equation_v}, $\tilde{v} \in H^1(Q_{2r, \infty} \setminus Q_{r/4, \infty})$ satisfies $$-\partial_i (\tilde{b}^{ij}(y) \partial_j \tilde{v}(y)) = 0 \quad \mbox{in }Q_{2r, \infty} \setminus Q_{r/4, \infty}.$$ Note that for any $l \in \mathbb{Z}$ and $y \in A_l$, $\tilde{b}(y) = (\tilde{b}^{ij}(y))$ is orthogonally conjugated to $b\left(y', (-1)^l\left(y_n - 2l h_r\right)\right)$. Hence, by \eqref{b_ij_ellpticity}, we have $$\frac{\lambda}{C} \le \tilde{b}(y) \le C\Lambda, \quad \mbox{for }y \in Q_{2r, \infty} \setminus Q_{r/4, \infty}.$$ We restrict the domain to be $Q_{2r, r} \setminus Q_{r/4, r}$, and make the change of variables $z = y/r$. Setting $\bar{v}(z) = \tilde{v}(y)$ and $\bar{b}^{ij}(z) = \tilde{b}^{ij}(y)$, we have $$-\partial_i (\bar{b}^{ij}(z) \partial_j \bar{v}(z)) = 0 \quad \mbox{in }Q_{2, 1} \setminus Q_{1/4, 1},$$ and $$\frac{\lambda}{C} \le \bar{b}(z) \le C\Lambda, \quad \mbox{for }z \in Q_{2, 1} \setminus Q_{1/4, 1}.$$ Then by the Harnack inequality for uniformly elliptic equations of divergence form, see e.g.
\cite[Theorem 8.20]{GT}, there exists a constant $C$ depending only on $n, \kappa, \lambda, \Lambda, R_0, \|f\|_{C^{2}}$ and $\|g\|_{C^{2}},$ such that $$\sup_{Q_{1,1/2} \setminus Q_{1/2,1/2}} \bar{v} \le C \inf_{Q_{1,1/2} \setminus Q_{1/2,1/2}} \bar{v}.$$ In particular, we have $$\sup_{Q_{1,h_r/r} \setminus Q_{1/2,h_r/r}} \bar{v} \le C \inf_{Q_{1,h_r/r} \setminus Q_{1/2,h_r/r}} \bar{v},$$ which is \eqref{harnack} after reversing the change of variables.\\ \end{proof} \begin{remark} In dimension $n = 2$, Lemma \ref{harnack_inequality} fails, since $Q_{2, 1} \setminus Q_{1/4, 1} \subset \mathbb{R}^{2}$ is the union of two disjoint rectangular domains, and the Harnack inequality cannot be applied on it. In fact, in our proof of Theorem \ref{main_thm}, Lemma \ref{harnack_inequality} is the only ingredient where dimension $n \ge 3$ is used. As mentioned above, the conclusion of Theorem \ref{main_thm} does not hold in dimension $n = 2$.\\ \end{remark} For any domain $A \subset \widetilde{\Omega}$, we denote the oscillation of $u$ in $A$ by $\mbox{osc}_A u := \sup_{A} u - \inf_{A} u$. Using Lemma \ref{harnack_inequality}, we obtain a decay of $\mbox{osc}_{\Omega_{x_0, \delta}}u$ in $\delta$ as follows.\\ \begin{lemma}\label{osc_u_decay_lemma} Let $u$ be a solution of \eqref{main_problem_narrow}. For any $x_0 \in \Omega_{0, r_0}$, where $r_0$ is as in Lemma \ref{harnack_inequality}, there exist positive constants $\sigma$ and $C$, depending only on $n, \lambda, \Lambda, R_0, \kappa, \|f\|_{C^{2}}$ and $\|g\|_{C^{2}},$ such that \begin{equation}\label{osc_u} \mbox{osc}_{\Omega_{x_0, \delta}} u \le C \| u\|_{L^\infty(\Omega_{x_0, \delta^{1 - \gamma}})} \delta^{\gamma \sigma}. \end{equation} \end{lemma} \begin{proof} For simplicity, we drop the $x_0$ subscript and denote $\Omega_r = \Omega_{x_0,r}$ in this proof.
Let $5|x_0'| < r < \delta^{1- \gamma}$ and $u_1 = \sup_{\Omega_{2r}}u - u, u_2 = u - \inf_{\Omega_{2r}} u.$ By Lemma \ref{harnack_inequality}, we have \begin{align*} \sup_{\Omega_r \setminus \Omega_{r/2}} u_1 &\le C_1 \inf_{\Omega_r \setminus \Omega_{r/2}} u_1, \\ \sup_{\Omega_r \setminus \Omega_{r/2}} u_2 &\le C_1 \inf_{\Omega_r \setminus \Omega_{r/2}} u_2, \end{align*} where $C_1 > 1$ is a constant independent of $r$. Since both $u_1$ and $u_2$ satisfy the equation in \eqref{main_problem_narrow}, by the maximum principle, \begin{align*} \sup_{\Omega_r \setminus \Omega_{r/2}} u_i = \sup_{\Omega_r} u_i, \quad \inf_{\Omega_r \setminus \Omega_{r/2}} u_i = \inf_{\Omega_r} u_i, \end{align*} for $i = 1,2$. Therefore, \begin{align*} \sup_{\Omega_r} u_1 &\le C_1 \inf_{\Omega_r} u_1, \\ \sup_{\Omega_r} u_2 &\le C_1 \inf_{\Omega_r} u_2. \end{align*} Adding up the above two inequalities, we have $$\mbox{osc}_{\Omega_r} u \le \left( \frac{C_1 - 1}{C_1 + 1} \right)\mbox{osc}_{\Omega_{2r}} u.$$ Now we take $\sigma > 0$ such that $2^{-\sigma} = \frac{C_1 - 1}{C_1 + 1}$; then \begin{equation}\label{recurrence} \mbox{osc}_{\Omega_r} u \le 2^{-\sigma} \mbox{osc}_{\Omega_{2r}} u. \end{equation} We start with $\rho_0 = \delta^{1- \gamma}/2$, and set $\rho_{i+1} = \rho_i/2$ (we write $\rho_i$ to avoid confusion with the constant $r_0$ fixed earlier). Iterating \eqref{recurrence} $k+1$ times, where $k$ satisfies $5\delta \le \rho_k < 10 \delta$, we obtain $$\mbox{osc}_{\Omega_{\delta}} u \le \mbox{osc}_{\Omega_{\rho_k}} u \le 2^{-(k+1)\sigma} \mbox{osc}_{\Omega_{2\rho_0}} u \le 2^{1-(k+1)\sigma} \|u\|_{L^\infty (\Omega_{\delta^{1-\gamma}})}.$$ Since $10\delta > \rho_{k} = 2^{-k}\rho_0 = 2^{-(k+1)}\delta^{1 - \gamma},$ we have $ 2^{-(k+1)} < 10 \delta^\gamma$, and hence \eqref{osc_u} follows immediately.\\ \end{proof} \begin{proof}[Proof of Theorem \ref{main_thm}] Let $u \in H^1(\Omega_{0,R_0})$ be a solution of \eqref{main_problem_narrow}.
For $x_0 \in \Omega_{0,r_0}$, we have, using Lemma \ref{osc_u_decay_lemma}, \begin{equation}\label{u-u_0} \| u - u_0\|_{L^\infty(\Omega_{x_0, \delta})} \le C \| u\|_{L^\infty(\Omega_{x_0, \delta^{1 - \gamma}})} \delta^{\gamma \sigma}, \end{equation} for some constant $u_0$. We denote $v := u - u_0$; then $v$ satisfies the same equation \eqref{main_problem_narrow}. We work on the domain $\Omega_{x_0, \delta/4}$, and perform a change of variables by setting \begin{equation}\label{x_to_y} \begin{cases} y' = \delta^{-1} (x'- x_0'),\\ y_n = \delta^{-1} x_n. \end{cases} \end{equation} The domain $\Omega_{x_0, \delta/4}$ becomes \begin{align*} \left\{y\in \mathbb{R}^n ~\big|~ |y'| \le \frac{1}{4}, \delta^{-1} \left( -\frac{1}{2}\varepsilon+ g(x_0' + \delta y')\right)< y_n < \delta^{-1} \left( \frac{1}{2}\varepsilon+ f(x_0'+ \delta y')\right) \right\}. \end{align*} We make a change of variables again by \begin{equation}\label{y_to_z} \begin{cases} z' = 4y' ,\\ z_n = 2\delta \left( \frac{\delta y_n - g(x_0' + \delta y') + \varepsilon/2}{\varepsilon + f(x_0' + \delta y') - g(x_0' + \delta y')} - \frac{1}{2} \right). \end{cases} \end{equation} Now the domain in $z$-variables becomes a thin plate $Q_{1, \delta}$, where $Q_{s,t}$ is defined as in \eqref{Q_s_t}. Let $w(z) = v(x)$; then $w$ satisfies \begin{equation}\label{equation_w} \left\{ \begin{aligned} -\partial_i(b^{ij}(z) \partial_j w(z)) &=0 \quad \mbox{in } Q_{1, \delta},\\ b^{nj}(z) \partial_j w(z) &= 0 \quad \mbox{on } \{z_n = -\delta\} \cup \{z_n = \delta\}, \end{aligned} \right. \end{equation} where the matrix $b(z) = (b^{ij}(z))$ is given by \begin{equation}\label{b_ij_formula_2} (b^{ij}(z)) = \frac{(\partial_y z)(a^{ij})(\partial_y z)^t}{\det (\partial_y z)}.
\end{equation} Similar to the proof of Lemma \ref{harnack_inequality}, we will show that the Jacobian matrix of the change of variables \eqref{y_to_z}, denoted by $\partial_y z$, and its inverse matrix $\partial_z y$ satisfy \begin{equation}\label{transformation_lipschitz_2} |(\partial_y z)^{ij}| \le C, \quad |(\partial_z y)^{ij}| \le C, \quad \mbox{for }z \in Q_{1, \delta}, \end{equation} where $C > 0$ depends only on $n, \kappa, R_0, \|f\|_{C^{2}}$ and $\|g\|_{C^{2}}$. This leads to \begin{equation}\label{b_ij_ellpticity_2} \frac{\lambda}{C} \le b(z) \le C\Lambda, \quad \mbox{for }z \in Q_{1, \delta}. \end{equation} From \eqref{y_to_z}, one can compute that \begin{align*} (\partial_y z)^{ii} &= 4, \quad \mbox{for } 1 \le i \le n-1,\\ (\partial_y z)^{nn} &= \frac{2\delta^2}{\varepsilon + f(x_0' + \delta z'/4) - g(x_0' + \delta z'/4)},\\ (\partial_y z)^{ni} &= - \frac{2 \delta^2 \partial_i g(x_0' + \delta z'/4) + 2z_n \delta [\partial_i f(x_0' + \delta z'/4) - \partial_i g(x_0' + \delta z'/4)]}{\varepsilon + f(x_0' + \delta z'/4)- g(x_0' + \delta z'/4)}\\ &~\quad \mbox{for } 1 \le i \le n-1,\\ (\partial_y z)^{ij} &= 0, \quad \mbox{for } 1 \le i \le n-1, j \neq i. \end{align*} First we will show that \begin{equation}\label{partial_y_z_nn} (\partial_y z)^{nn} \sim 1, \quad \mbox{for }z \in Q_{1, \delta}. \end{equation} Since $|z'| < 1$ and $|x_0'| < \delta$, it is easy to see that $$(\partial_y z)^{nn} \sim \frac{\delta^2}{\varepsilon + |x_0' + \delta z'/4|^2} \ge \frac{\delta^2}{\varepsilon + C \delta^2} \ge \frac{1}{C}, \quad \mbox{for }z \in Q_{1, \delta},$$ due to \eqref{fg_0} and \eqref{fg_1}. 
On the other hand, \begin{align*} (\partial_y z)^{nn} &\sim \frac{\delta^2}{\varepsilon + |x_0' + \delta z'/4|^2}\\ &= \frac{\delta^2}{\varepsilon + |x_0'|^2 + (1/4)^2 \delta^2 |z'|^2 + \delta x_0' \cdot z'/2}\\ &\le \frac{\delta^2}{\delta^2 + (1/4)^2|z'|^2\delta^2 - |z'||x_0'|\delta/2}\\ &\le \frac{\delta^2}{(1 + (1/4)^2|z'|^2 - 1/2) \delta^2} \le C, \quad \mbox{for }z \in Q_{1, \delta}. \end{align*} Therefore, \eqref{partial_y_z_nn} is verified. Since $|z_n| < \delta$, $|z'| < 1$ and $|x_0'| < \delta$, by \eqref{fg_0} and \eqref{fg_1}, for $1 \le i \le n-1$, \begin{align*} |(\partial_y z)^{ni}| &\le \frac{2 \delta^2 |\partial_i g(x_0' + \delta z'/4)| + 2 \delta^2 [|\partial_i f(x_0' + \delta z'/4)| + |\partial_i g(x_0' + \delta z'/4)|]}{\varepsilon + f(x_0' + \delta z'/4)- g(x_0' + \delta z'/4)}\\ &\le \frac{C\delta^2}{\varepsilon + f(x_0' + \delta z'/4) - g(x_0' + \delta z'/4)}[|\partial_i f(x_0' + \delta z'/4)| + |\partial_i g(x_0' + \delta z'/4)|]\\ &\le C\frac{\delta^2}{\varepsilon + |x_0' + \delta z'/4|^2} |x_0' + \delta z'/4|\\ &\le C (|x_0'| + \delta|z'|) \le C\delta, \end{align*} where in the last line, we have used the same arguments in showing $(\partial_y z)^{nn} \le C$ earlier. We have shown $(\partial_y z)^{ii} \sim 1$, for all $i = 1, \cdots, n$, and $|(\partial_y z)^{ij}| \le C \delta$, for $i \neq j$. We further require $r_0$ to be small enough so that off-diagonal entries are small. Therefore \eqref{transformation_lipschitz_2} follows. As mentioned earlier, \eqref{b_ij_ellpticity_2} follows from \eqref{transformation_lipschitz_2}. 
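For later reference, since $(\partial_y z)^{in} = 0$ for $1 \le i \le n-1$, the matrix $\partial_y z$ is lower triangular, and its determinant is simply the product of the diagonal entries:
\[
\det(\partial_y z) = 4^{n-1}\, (\partial_y z)^{nn} = \frac{2 \cdot 4^{n-1} \delta^2}{\varepsilon + f(x_0' + \delta z'/4) - g(x_0' + \delta z'/4)} \sim 1,
\]
by \eqref{partial_y_z_nn}. Its reciprocal is the quantity differentiated in the next step.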
Next, we will show \begin{equation}\label{b_ij_holder} \|b \|_{C^\alpha(\overline{Q}_{1,\delta})} \le C, \end{equation} for some $C > 0$ depending only on $n, \kappa, R_0, \|a\|_{C^\alpha}, \|f\|_{C^{2}}$ and $\|g\|_{C^{2}}$, by showing \begin{equation}\label{partial_y_z_lipschitz} |\nabla_z (\partial_y z)^{ij}(z)| \le C, \quad \left| \nabla_z \frac{1}{\det(\partial_y z)} \right| \le C, \quad \mbox{for }z \in Q_{1, \delta}. \end{equation} Then \eqref{b_ij_holder} follows from \eqref{partial_y_z_lipschitz}, \eqref{b_ij_formula_2}, and $\|a\|_{C^\alpha} \le C$. By a straightforward computation, we have, for any $i = 1, \cdots, n-1$, \begin{align*} \left| \partial_{z_i} \frac{1}{\det(\partial_y z)} \right| &= \left| \partial_{z_i} \left( \frac{\varepsilon + f(x_0' + \delta z'/4) - g(x_0' + \delta z'/4)}{2 \cdot 4^{n-1}\delta^2} \right) \right|\\ &= \left| \frac{\delta[\partial_i f(x_0' + \delta z'/4) - \partial_i g(x_0' + \delta z'/4)]}{2 \cdot 4^{n-1}\delta^2} \right|\\ &\le \frac{C}{\delta}[|\partial_i f(x_0' + \delta z'/4)| + |\partial_i g(x_0' + \delta z'/4)|]\\ &\le \frac{C}{\delta}|x_0' + \delta z'/4| \le C, \quad \mbox{for }z \in Q_{1, \delta}, \end{align*} where in the last inequality, \eqref{fg_0} and \eqref{fg_1} have been used. For any $i = 1, \cdots, n-1$, \begin{align*} |\partial_{z_i} (\partial_y z)^{nn}| &= \left| \frac{2\delta^3 [\partial_i f(x_0' + \delta z'/4) - \partial_i g(x_0' + \delta z'/4)]}{(\varepsilon + f(x_0' + \delta z'/4) - g(x_0' + \delta z'/4))^2} \right|\\ &\le \frac{C\delta^3}{(\varepsilon + |x_0' + \delta z'/4|^2)^2}|x_0' + \delta z'/4|\\ &\le \frac{C\delta^3}{\delta^4}(|x_0'| + |\delta z'|) \le C, \quad \mbox{for }z \in Q_{1, \delta}, \end{align*} where in the last line, we have used the same arguments in showing $(\partial_y z)^{nn} \le C$ earlier. 
Similar computations apply to $\partial_{z_i} (\partial_y z)^{ni}$, for $i = 1, \cdots, n-1$, and we have $$|\partial_{z_i} (\partial_y z)^{ni}| \le C, \quad \mbox{for }z \in Q_{1, \delta}.$$ Finally, we compute, for $i = 1, \cdots, n-1$, \begin{align*} |\partial_{z_n} (\partial_y z)^{ni}| &= \left| \frac{ 2 \delta [\partial_i f(x_0' + \delta z'/4) - \partial_i g(x_0' + \delta z'/4)]}{\varepsilon + f(x_0' + \delta z'/4)- g(x_0' + \delta z'/4)} \right|\\ &\le \frac{C\delta|x_0' + \delta z'/4|}{\varepsilon + |x_0' + \delta z'/4|^2} \le C, \quad \mbox{for }z \in Q_{1, \delta}. \end{align*} Therefore, \eqref{partial_y_z_lipschitz} is verified, and hence \eqref{b_ij_holder} follows as mentioned above. Now we define $$S_l:= \left\{z \in \mathbb{R}^n ~\big|~ |z'| < 1,~ (2l-1) \delta < z_n < (2l+1) \delta \right\}$$ for any integer $l$, and $$S: = \left\{z \in \mathbb{R}^n ~\big|~ |z'| < 1,~ |z_n| < 1\right\}.$$ Note that $Q_{1, \delta} = S_0$. As in the proof of Lemma \ref{harnack_inequality}, we define, for any $l \in \mathbb{Z}$, a new function $\tilde{w}$ by setting $$\tilde{w}(z) := w\left(z', (-1)^l\left(z_n - 2l \delta\right)\right), \quad \forall z \in S_l.$$ We also define the corresponding coefficients, for $k = 1,2, \cdots, n-1$, $$\tilde{b}^{nk}(z)=\tilde{b}^{kn}(z) := (-1)^lb^{nk}\left(z', (-1)^l\left(z_n - 2l \delta\right)\right), \quad \forall z \in S_l,$$ and for other indices, $$\tilde{b}^{ij}(z) := b^{ij}\left(z', (-1)^l\left(z_n - 2l \delta\right)\right), \quad \forall z \in S_l.$$ Then $\tilde{w}$ and $\tilde{b}^{ij}$ are defined in the infinite cylinder $Q_{1, \infty}$. By \eqref{equation_w}, $\tilde{w}$ satisfies the equation $$-\partial_i (\tilde{b}^{ij} \partial_j \tilde{w}) = 0, \quad \mbox{in }Q_{1, \infty}.$$ Note that for any $l \in \mathbb{Z}$, $\tilde{b}(z)$ is orthogonally conjugated to $b\left(z', (-1)^l\left(z_n - 2l \delta\right)\right),$ for $z \in S_l$.
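As a sanity check of the even reflection, note that the values of $\tilde{w}$ match at the interface $z_n = (2l+1)\delta$ between consecutive slabs: the formula on $S_l$ gives $w(z', (-1)^l \delta)$, while the formula on $S_{l+1}$ gives
\[
w\left(z', (-1)^{l+1}\big((2l+1)\delta - 2(l+1)\delta\big)\right) = w\left(z', (-1)^{l}\delta\right),
\]
so $\tilde{w}$ is well defined and continuous across the slabs; the conormal condition in \eqref{equation_w} is precisely what makes it a weak solution across each interface.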
Hence, by \eqref{b_ij_ellpticity_2}, we have \begin{equation*} \frac{\lambda}{C} \le \tilde{b}(z) \le C\Lambda, \quad \mbox{for }z \in Q_{1,\infty}, \end{equation*} and, by \eqref{b_ij_holder}, \begin{equation*} \|\tilde{b} \|_{C^\alpha(\overline{S}_{l})} \le C, \quad \forall l \in \mathbb{Z}. \end{equation*} Applying Lemma \ref{gradient_lemma} on $S$ with $N = 1$, we have $$\| \nabla \tilde{w} \|_{L^\infty(\frac{1}{2}S)} \le C \| \tilde{w} \|_{L^2(S)}.$$ Since $S$ is covered by about $\delta^{-1}$ slabs $S_l$, on each of which $\tilde{w}$ has the same $L^2$ norm as $w$ on $Q_{1, \delta}$, it follows that $$\| \nabla w \|_{L^\infty(Q_{1/2, \delta})} \le \frac{C}{\sqrt{\delta}} \| w \|_{L^2(Q_{1, \delta})} \le C\|w\|_{L^\infty(Q_{1, \delta})},$$ for some positive constant $C$, depending only on $n, \alpha, R_0, \kappa, \lambda, \Lambda, \|a\|_{C^\alpha}, \|f\|_{C^{2}}$ and $\|g\|_{C^{2}}$. Note that $\|(\partial_z y)\|_{L^\infty(Q_{1,\delta})} \le C$ by \eqref{transformation_lipschitz_2}, where $C$ depends only on $R_0, \kappa, \|f\|_{C^{2}}$ and $\|g\|_{C^{2}}$ and, in particular, is independent of $\varepsilon$ and $\delta$. Reversing the change of variables \eqref{y_to_z} and \eqref{x_to_y}, we have $$\delta \| \nabla v\|_{L^\infty(\Omega_{x_0, \delta/8})} \le C \|v\|_{L^\infty(\Omega_{x_0, \delta/4})} \le C \| u\|_{L^\infty(\Omega_{x_0, \delta^{1 - \gamma}})} \delta^{\gamma \sigma}$$ by \eqref{u-u_0}. In particular, this implies $$|\nabla u(x_0)| \le C \| u\|_{L^\infty(\Omega_{x_0, \delta^{1 - \gamma}})} \delta^{-1 + \gamma \sigma},$$ which concludes the proof of Theorem \ref{main_thm} after taking $\beta = \gamma \sigma/2$.\\ \end{proof} \section{Gradient estimates of elliptic systems} A natural question is whether the estimate in Theorem \ref{main_thm} can be extended to elliptic systems of divergence form. We tend to believe that the answer to this question is affirmative, and plan to pursue this in a subsequent paper. Following closely the proof of \eqref{insulated_upper_bound} in \cite{BLY2}, we give a preliminary gradient estimate for elliptic systems in this section.
We consider the vector-valued function $u = (u_1, \cdots, u_N)$, and for $1 \le \alpha, \beta \le n, 1 \le i,j \le N$, let $A^{\alpha \beta}_{ij}(x)$ be functions such that $$\| A^{\alpha \beta}_{ij} \|_{L^\infty(\Omega_{0,R_0})} \le \Lambda,$$ $$\int_{\Omega_{0,R_0}} A^{\alpha \beta}_{ij}(x) \partial_\alpha \varphi_i(x) \partial_\beta \varphi_j(x) \ge \lambda \int_{\Omega_{0,R_0}} |\nabla \varphi|^2, \quad \forall \varphi \in H_0^1(\Omega_{0,R_0}; \mathbb{R}^N),$$ for some $\lambda, \Lambda > 0$, where $\Omega_{0,R_0}$ is defined as in \eqref{domain_def_Omega}. We assume $A^{\alpha \beta}_{ij}(x) \in C^\mu(\Omega_{0,R_0})$ for some $\mu \in(0,1)$, and consider the system \begin{equation}\label{system} \left\{ \begin{aligned} -\partial_\alpha \left(A^{\alpha \beta}_{ij}(x) \partial_\beta u_j(x)\right) &=0 \quad \mbox{in }\Omega_{0,R_0},\\ A^{\alpha \beta}_{ij}(x) \partial_\beta u_j(x) \nu_\alpha(x) &= 0 \quad \mbox{on } \Gamma_+ \cup \Gamma_-,\\ \end{aligned} \right. \end{equation} for $i = 1,\cdots, N$, where $\Gamma_+, \Gamma_-$ are defined as in \eqref{domain_def_Omega}, $\nu = (\nu_1, \cdots, \nu_n)$ denotes the unit normal vector on $\Gamma_+$ and $\Gamma_-$, pointing upward and downward respectively. We have the following gradient estimate by essentially following the proof of Theorem 1.2 in \cite{BLY2}.\\ \begin{theorem}\label{system_thm} Let $u\in H^1(\Omega_{0,R_0}; \mathbb{R}^N)$ be a solution to \eqref{system} in dimension $n \ge 2$, with the coefficients $A^{\alpha \beta}_{ij}$ defined as above.
There exist positive constants $r_0$ and $C$ depending only on $n$, $\lambda$, $\Lambda$, $R_0$, $\kappa$, $\mu$, $\|A\|_{C^\mu(\Omega_{0,R_0})}$, $\|f\|_{C^{2}(\{|x'| \le R_0\})}$ and $\|g\|_{C^{2}(\{|x'| \le R_0\})},$ such that \begin{equation}\label{main_result_system} |\nabla u (x_0)| \le C \| u\|_{L^\infty(\Omega_{0,R_0})} \left(\varepsilon + |x_0'|^2 \right) ^{-1/2}, \end{equation} for all $\varepsilon \in (0,1)$, $x_0 \in \Omega_{0 , r_0}$.\\ \end{theorem} \begin{remark} The elliptic systems we have considered include the linear systems of elasticity: $n = N$, and the coefficients $A^{\alpha \beta}_{ij}$ satisfy $$A^{\alpha \beta}_{ij} = A^{\beta \alpha}_{ji} = A^{i \beta}_{\alpha j},$$ and for all $n \times n$ symmetric matrices $\{\xi_{\alpha}^i\}$, $$\lambda |\xi|^2 \le A^{\alpha \beta}_{ij}\xi_{\alpha}^i \xi_{\beta}^j \le \Lambda|\xi|^2.$$\\ \end{remark} \begin{proof}[Proof of Theorem \ref{system_thm}] Let $u\in H^1(\Omega_{0,R_0}; \mathbb{R}^N)$ be a solution to \eqref{system}. We perform the changes of variables \eqref{x_to_y} and \eqref{y_to_z}. For any $1 \le i,j \le N$, we define $$B_{ij}^{\alpha \beta}(z) = \frac{(\partial_y z)(A^{\alpha \beta}_{ij})(\partial_y z)^t}{\det (\partial_y z)},$$ and let $v(z) = u(x)$. Then $v$ satisfies \begin{equation*} \left\{ \begin{aligned} -\partial_\alpha \left(B^{\alpha \beta}_{ij}(z) \partial_\beta v_j(z)\right) &=0 \quad \mbox{in } Q_{1, \delta},\\ B^{n \beta}_{ij}(z) \partial_\beta v_j(z) &= 0 \quad \mbox{on } \{z_n = -\delta\} \cup \{z_n = \delta\}, \end{aligned} \right. \end{equation*} for $i = 1,\cdots, N$, where $Q_{s,t}$ is defined as in \eqref{Q_s_t}. 
As in the proof of Theorem \ref{main_thm}, we can show that $$\| B^{\alpha \beta}_{ij} \|_{L^\infty(Q_{1,\delta})} \le C\Lambda, \quad \| B^{\alpha \beta}_{ij} \|_{C^\mu(\bar{Q}_{1,\delta})} \le C,$$ $$\int_{Q_{1,\delta}} B^{\alpha \beta}_{ij}(z) \partial_\alpha \varphi_i(z) \partial_\beta \varphi_j(z) \ge \frac{\lambda}{C} \int_{Q_{1,\delta}} |\nabla \varphi|^2, \quad \forall \varphi \in H_0^1(Q_{1,\delta}; \mathbb{R}^N),$$ where $C$ is a positive constant that depends only on $n$, $N$, $\mu$, $R_0$, $\kappa$, $\lambda$, $\Lambda$, $\|A\|_{C^\mu}$, $\|f\|_{C^{2}}$ and $\|g\|_{C^{2}}$. Then we argue as in the proof of Theorem \ref{main_thm} to obtain $$|\nabla v(0)| \le C\|v\|_{L^\infty(Q_{1, \delta})},$$ which is \eqref{main_result_system} after reversing the changes of variables \eqref{x_to_y} and \eqref{y_to_z}.\\ \end{proof}
\section{Introduction and main results} This paper deals with the scale-invariant, nonlocal functional inequality \begin{equation}\label{HS} \left(\int_{{\mathbb R}^N} \frac{|u|^q}{|x|^\alpha}\, dx\right)^{\frac{1}{q}}\leq C\left(\int_{{\mathbb R}^N\times{\mathbb R}^N}\frac{|u(x)-u(y)|^p}{|x-y|^{N+ps}}\, dx\, dy\right)^{\frac{1}{p}} \end{equation} for some constant $C>0$. Here, $N> \alpha \geq 0$, $q\geq p\geq 1$, and $s\in \ ]0,1[$ are related through scale invariance. Indeed, in order for $C$ to be finite, one can write \eqref{HS} for $u_\lambda(x):=u(\lambda x)$ and let $\lambda\to 0^+$ or $\lambda\to+\infty$, deducing \begin{equation}\label{scalingrelation} \frac{N-\alpha}{q}=\frac{N-ps}{p}. \end{equation} So, the constant $C$ depends on $N, p, s,\alpha$, namely $C:=C(N, p, s,\alpha)$. Further, since $q\geq p$, from \eqref{scalingrelation} we immediately infer \[ 0\leq \alpha\leq ps<N. \] If $q=p$ and $\alpha=ps<N$ then the classical fractional Hardy inequality is recovered. For $q=p^*$, where \begin{equation}\label{pstar} p^*:=\frac{Np}{N-ps}, \end{equation} and $\alpha=0$ we obtain the fractional Sobolev inequality. The general expression \eqref{HS} is usually called the fractional Hardy-Sobolev inequality; see \cite{Mazya}. It is well known that \eqref{HS} with $q=p>1$, whence $\alpha=ps$, does not admit optimizers and indeed the concentration-compactness method fails. More precisely, letting \[ [u]_{s,p}^p:=\int_{{\mathbb R}^N\times{\mathbb R}^N}\frac{|u(x)-u(y)|^p}{|x-y|^{N+ps}}\, dx\, dy \] and \begin{equation}\label{I} I_\lambda:=\inf\left\{[u]_{s,p}^p:\int_{{\mathbb R}^N} \frac{|u|^q}{|x|^\alpha}\, dx=\lambda\right\}, \qquad \lambda>0, \end{equation} after scaling one has \begin{equation}\label{Ilambda} I_\lambda=\lambda^{\frac{p}{q}} I_1, \end{equation} and thus the strict subadditivity condition $I_{\lambda+\mu}<I_\lambda+I_\mu$, which represents the main tool of concentration-compactness, holds only if $q>p$.
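Explicitly, the scaling computation behind \eqref{scalingrelation} reads
\[
\int_{{\mathbb R}^N} \frac{|u_\lambda|^q}{|x|^\alpha}\, dx = \lambda^{\alpha - N}\int_{{\mathbb R}^N} \frac{|u|^q}{|x|^\alpha}\, dx, \qquad \int_{{\mathbb R}^N\times{\mathbb R}^N}\frac{|u_\lambda(x)-u_\lambda(y)|^p}{|x-y|^{N+ps}}\, dx\, dy = \lambda^{ps - N}\int_{{\mathbb R}^N\times{\mathbb R}^N}\frac{|u(x)-u(y)|^p}{|x-y|^{N+ps}}\, dx\, dy,
\]
so the two sides of \eqref{HS} scale by the same power of $\lambda$ precisely when $\frac{\alpha-N}{q}=\frac{ps-N}{p}$, i.e., when \eqref{scalingrelation} holds; otherwise, sending $\lambda\to 0^+$ or $\lambda\to+\infty$ forces $C=+\infty$.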
In such a case, the existence of optimizers has mostly been taken for granted thanks to \cite[Remark I.6]{Lions1}. For $p=2$, a full proof was given in \cite{PP}, with $\alpha=0$, exploiting a refined version of Sobolev's embedding (obtained via Morrey spaces), and in \cite{Y}, where $\alpha\in [0, ps[$. We will instead establish the existence of optimizers for general $p>1$ through the original approach of Lions, accordingly considering (as a natural non-local counterpart of $|\nabla u|^p(x)$) the energy-density function \begin{equation}\label{defDsup} |D^s u|^p(x):=\int_{{\mathbb R}^N}\frac{|u(x)-u(x+h)|^p}{|h|^{N+ps}}\, dh. \end{equation} As it turns out, the proof is quite involved at times, mainly due to the fact that non-local interactions arise when one analyzes dichotomy and/or concentration. They are typical of non-local problems and, to treat them, we will use estimates having no analogue in the local framework. Once existence is achieved, standard rearrangement inequalities ensure that the minimizers are radially monotonic. Hence, a natural conjecture is that the family of minimizers consists of constant multiples, translations, and dilations of the function \begin{equation}\label{talentiane} U(x):=\frac{1}{(1+|x|^{\frac{p-\alpha/s}{p-1}})^{\frac{N-sp}{p-\alpha/s}}},\quad x\in{\mathbb R}^N, \end{equation} which coincides with the classical Aubin-Talenti function provided $s=1$, $\alpha=0$. Such a conjecture has been proved in \cite{GY} when $s=1$, $p>1$, $\alpha\in [0, p[$ through the Bliss inequality, and in \cite{CLO} if $s\in\ ]0, 1[$, $p=2$, $\alpha=0$. However, up to now, the explicit form of optimizers is not known for general $s\in \ ]0, 1[$ and $p\neq 2$. Notice that, contrary to the local case, where a simple ODE argument applies, when $s\in \ ]0, 1[$ it is not even clear how to show that \eqref{talentiane} at least solves the corresponding Euler-Lagrange equation.
To attack the problem of finding minimizers in \eqref{I}, a first step might be to check compatibility of the {\em a priori} asymptotic behavior of minimizers with the one exhibited by \eqref{talentiane}, i.e., \[ U(R)\simeq\frac{1}{R^{\frac{N-ps}{p-1}}},\quad \nabla U(R)\simeq \frac{1}{R^{\frac{N-ps}{p-1}+1}},\quad R\to+\infty. \] Concerning the first estimate, we will show that any minimizer $u$ actually obeys the same asymptotics as $U$. Relevant arguments are patterned after those of \cite{BMS}, where $\alpha=0$. The second estimate clearly requires much higher regularity than the natural one for $u$, which seems out of reach when $s$ is very small. Accordingly, we will consider an appropriate weaker version that, if $s=1$, reads as \begin{equation} \label{decloc} \nabla U \in L^\gamma({\mathbb R}^N)\quad \Leftrightarrow\quad \gamma\in \ ]\frac{N(p-1)}{N-1}, p]. \end{equation} Since one is interested in {\em decay} estimates, the {\em lowest possible} summability exponent of $\nabla U$ has to be sought. For $s\in \ ]0, 1[$, we first observe that the asymptotic behavior \[ v(R)\geq \delta\, R^{-\frac{N-ps}{p-1}}, \qquad \delta>0, \quad R\geq 1, \] combined with Hardy's inequality (which holds for any $\gamma\in ]0, N/s[$; see \cite[Theorem 1.1]{D}) \[ [v]_{s,\gamma}^\gamma\geq \frac{1}{C^\gamma}\int_{{\mathbb R}^N} \frac{|v|^\gamma}{|x|^{\gamma s}}\, dx\geq \frac{\delta^\gamma}{C^\gamma}\int_{{\mathbb R}^N\setminus B_1}|x|^{-\gamma\frac{N-s}{p-1}}\, dx \] produces the information \[ [v]_{s, \gamma}<+\infty \quad \Rightarrow\quad \gamma>\frac{N(p-1)}{N-s}. \] This obviously applies to the function $U$ defined in \eqref{talentiane} and, by the obtained asymptotics, to any optimizer $u$ of \eqref{I} as well. Theorem \ref{MT} below ensures that the opposite implication holds true both for every minimizer $u$ and for $U$. The condition $\gamma \in\ ] \frac{N(p-1)}{N-s}, p]$ is thus optimal in the decay sense.
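For the reader's convenience, the exponent arithmetic underlying the last display is
\[
\gamma\left(\frac{N-ps}{p-1}+s\right)=\gamma\,\frac{N-ps+s(p-1)}{p-1}=\gamma\,\frac{N-s}{p-1},
\]
and $\int_{{\mathbb R}^N\setminus B_1}|x|^{-\gamma\frac{N-s}{p-1}}\,dx<+\infty$ exactly when $\gamma\frac{N-s}{p-1}>N$, that is, when $\gamma>\frac{N(p-1)}{N-s}$.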
Our motivation does not rely only on the asymptotic compatibility of the conjectured form \eqref{talentiane} of minimizers. Indeed, the decay estimate \begin{equation}\label{dec} u(x)\simeq \frac{1}{|x|^{\frac{N-ps}{p-1}}}, \quad |x|\to+\infty \end{equation} has proved to be useful for treating the nonlinear, critically perturbed, eigenvalue problem \[ \begin{cases} (-\Delta_p)^s u=\lambda |u|^{p-2}u+ |u|^{p^*-2}u&\text{in $\Omega$},\\ u\equiv 0&\text{in ${\mathbb R}^N\setminus \Omega$} \end{cases} \] analogous to the Brezis-Nirenberg one. Here, $p^*$ is given by \eqref{pstar} while $(-\Delta_p)^s$ denotes the fractional $p-s$ Laplacian, defined as the differential of $u\mapsto \frac 1 p [u]_{s,p}^p$. Very recently, existence results have been obtained in \cite{MPSY} chiefly through cutoff and rescaling of solutions to \eqref{I}, using only the scaling properties of $u_\varepsilon$ and the pointwise decay \eqref{dec}. However, when we deal with more general operators of mixed order, as the $(p,q)$-Laplacian, a precise estimate at other, less natural, differentiability scales (e.g., $\|\nabla u\|_{q}$ if $q<p$) is essential. Such information has been achieved in \cite{DH}, provided $s=1$, $\alpha=0$, through the explicit form of minimizers for \eqref{I}. The corresponding results have been extensively exploited to treat mixed critical problems; see \cite{CMP, YY} and the references therein. We therefore plan to apply the $s$-derivative decay estimate established below to analogous mixed fractional order problems in future works. The results of the paper can be summarized as follows. \begin{theorem}\label{MT} Let $N>ps$, $q>p$, and $\alpha\in [0, ps[$ satisfy \eqref{scalingrelation}. Then: \begin{itemize} \item Problem \eqref{I} has a minimizer. 
\item Every minimizer $u$ is of constant sign, radially monotone, and fulfills \begin{equation}\label{as1} \frac{1}{C|x|^{\frac{N-ps}{p-1}}}\leq |u(x)|\leq \frac{C}{|x|^{\frac{N-ps}{p-1}}},\quad |x|\geq 1, \end{equation} for some constant $C:=C(N, p, s, \alpha, u)$, as well as \begin{equation}\label{as2} \int_{{\mathbb R}^N\times {\mathbb R}^N}\frac{|u(x)-u(y)|^\gamma}{|x-y|^{N+\gamma s}}\, dx\, dy<+\infty\quad \forall\,\gamma\in\ \left] \frac{N(p-1)}{N-s}, p\right]. \end{equation} \item Estimates \eqref{as1}--\eqref{as2} hold for the function $U$ defined in \eqref{talentiane} and thus for any translation, multiple, and rescaling of it. \end{itemize} \end{theorem} \vskip5pt {\bf Sketch of proof}. Since the main novelty is \eqref{as2}, it may be instructive to look at a simple proof of \eqref{decloc} {\em without knowing the minimizer's explicit form}. When $\alpha=0$, nonnegative minimizers satisfy the Euler-Lagrange equation \begin{equation}\label{pik} -\Delta_p u= c\, u^{p^*-1}=:f. \end{equation} Then a variant of the Strauss lemma for radially decreasing functions yields the decay estimate $u(x)\leq C|x|^{-\frac{N-p}{p-1}}$ for large $|x|$, provided $f\in L^1({\mathbb R}^N)$ (a nontrivial fact at the global level). To prove \eqref{decloc}, we first decompose $u$ into its horizontally dyadic components, given by slicing $u$ at heights $u(2^i)$: \[ u=\sum_{i=0}^{+\infty} u_i, \quad\mbox{where}\quad 0\leq u_i\leq u(2^{i-1})\, \chi_{B_{2^i}}\, ,\quad i\geq 1. \] We avoid here more involved arguments and assume $\gamma\geq 1$. Thus, on account of the triangle inequality, it suffices to estimate $\|\nabla u_i\|_{L^\gamma}$ separately. As ${\rm supp}(u_i)\subseteq B_{2^i}$ and $\gamma\leq p$, H\"older's inequality gives \begin{equation}\label{nMin} \|\nabla u_i\|_{L^\gamma}\leq |B_{2^i}|^{\frac{1}{\gamma}-\frac{1}{p}}\, \|\nabla u_i\|_{L^p}\leq C\, 2^{iN(\frac{1}{\gamma}-\frac{1}{p})}\, \|\nabla u_i\|_{L^p}. \end{equation} Now, since $\nabla u_i=\nabla u$ a.e. in $B_{2^i}\setminus B_{2^{i-1}}$, $\nabla u_i=0$ a.e. in $B_{2^{i-1}}$, and $u_i\leq u(2^{i-1})$, testing \eqref{pik} with $u_i$ we obtain \[ \|\nabla u_i\|_{L^p}^p=\langle -\Delta_p u, u_i\rangle=\int_{{\mathbb R}^N} f \, u_i\, dx\leq \|f\|_{L^1}\, u(2^{i-1})\leq C\, \|f\|_{L^1}\, 2^{-i\frac{N-p}{p-1}}, \] where the decay estimate has been used in the last inequality. Finally, raising to the power $1/p$ and inserting into \eqref{nMin}, we achieve \[ \|\nabla u\|_{L^\gamma}\leq \sum_{i=0}^{+\infty}\|\nabla u_i\|_{L^\gamma}\leq C\, \|f\|_{L^1}^{\frac{1}{p}}\sum_{i=0}^{+\infty}2^{iN(\frac{1}{\gamma}-\frac{1}{p})}\, 2^{-i\frac{N-p}{p(p-1)}}, \] which is finite as long as $\gamma>N(p-1)/(N-1)$. \vskip5pt {\bf Difficulties}. Two main issues arise in trying to reproduce the previous proof for the fractional case $s\in \ ]0, 1[$. The first one is technical, because in order to obtain the best summability lower bound $\gamma>N(p-1)/(N-s)$ we may have to deal with exponents $\gamma$ less than $1$. Not only does the triangle inequality fail in such a case but, more importantly, there does not seem to exist a satisfactory interpolation theory for the concrete spaces $W^{s,\gamma}({\mathbb R}^N)$ (contrary to the interpolation theory for Besov-Lizorkin spaces, mainly due to Peetre in the low summability case). The second one, however, is substantial. Inequality \eqref{nMin} evidently implies the natural embedding $W^{1,p}(B_R)\hookrightarrow W^{1,\gamma}(B_R)$ for $p\geq \gamma$, which is actually {\em false} in the fractional case, as an example of Mironescu and Sickel \cite{MiS} shows. Indeed, for any $s\in\ ]0, 1[$ and any $p>\gamma\geq 1$, the space $W^{s,p}(B_R)$ is never a subset of $W^{s,\gamma}(B_R)$. This forces a weakening of \eqref{nMin} through an interpolation inequality and a higher differentiability estimate in Besov spaces for the minimizers. \vskip5pt {\large{\bf Outline of the paper}}. Let us finally discuss the structure of the paper.
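For completeness, the convergence of the last geometric series amounts to the exponent inequality (H\"older's inequality on $B_{2^i}$ gives the factor $|B_{2^i}|^{\frac{1}{\gamma}-\frac{1}{p}}$):
\[
N\left(\frac{1}{\gamma}-\frac{1}{p}\right)<\frac{N-p}{p(p-1)} \quad\Longleftrightarrow\quad \frac{N}{\gamma}<\frac{N}{p}+\frac{N-p}{p(p-1)}=\frac{N-1}{p-1} \quad\Longleftrightarrow\quad \gamma>\frac{N(p-1)}{N-1}.
\]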
Section 2 is devoted to the framework and the known tools that will be employed for proving Theorem \ref{MT}. The existence part is performed in Section 3 via concentration-compactness. We remark that other approaches are available, such as, e.g., the one of Lieb \cite{lieb}, based on rearrangements. However, Lions' technique remains viable in (more general) situations where rearrangement is not available. In developing the concentration-compactness scheme for nonlocal problems, two main difficulties arise. Ruling out dichotomy for minimizing sequences is quite delicate, since splitting a function through cutoffs gives rise to nonlocal effects, which have to be precisely quantified. This is done by observing that the smallness of $|D^su|^p$ as per \eqref{defDsup}, contrary to the local case, entails strong {\em global} information on $u$; for instance, if $|D^s u|^p$ vanishes at some point then $u$ must be constant. The quantitative estimate needed to rule out dichotomy is Lemma \ref{lemmaint} below. This technical result deals with the loss of compactness due to translation (precisely in the dichotomy case), while the other difficulty lies in the nonlocal effects stemming from the loss of compactness due to dilations, i.e., concentration. However, in this respect, the relevant argument has been derived in \cite[Theorem 2.5]{MS} and we refer to the discussion therein for further details. Section 4 contains some general regularity results for the model equation $(-\Delta_p)^su=f$ on the entire space. We will be concerned with both summability and differentiability estimates. The former are more or less already available in the literature, although the fact that we work on the whole ${\mathbb R}^N$ requires some care. The higher differentiability of solutions to the model equation has been treated only recently in \cite{BL} in the superquadratic case, assuming various {\em differentiability} hypotheses on the forcing term $f$.
We are then going to refine and extend the techniques of \cite{BL} to obtain a higher Besov regularity result solely under suitable {\em summability} assumptions on $f$; see Lemma \ref{regest}. Finally, Section 5 contains the proof of Theorem \ref{MT}. As outlined before, the main tool is an $L^1$ estimate of the forcing term of the corresponding Euler-Lagrange equation for minimizers, which readily implies the decay estimate \eqref{as1}. Relevant arguments are patterned after \cite{BMS}. To show \eqref{as2}, we decompose a given minimizer into its horizontally dyadic components and evaluate the corresponding $W^{s, \gamma}$ terms separately. An interpolation inequality (cf. Lemma \ref{intbesov}), together with the slightly better differentiability properties of minimizers, ensures that the $W^{s,\gamma}$ energy of the dyadic components can (almost entirely) be controlled in terms of their $W^{s, p}$ norm. Exploiting \eqref{as1} to estimate them via the Euler-Lagrange equation produces \eqref{as2}. \section{Preliminary material} Let us first fix some notation. For $p> 1$ we put $p':=p/(p-1)$, while $p':=\infty$ when $p=1$. Denote by $B_r(x)$ the open ball of center $x$ and radius $r>0$ in ${\mathbb R}^N$. If the center is not specified then it is to be understood as zero, i.e., $B_r:=B_r(0)$. Given any Lebesgue measurable set $E\subseteq {\mathbb R}^N$, $\chi_E$ denotes its indicator function, $|E|$ its Lebesgue measure, and $E^c:={\mathbb R}^N\setminus E$. Finally, $\omega_N:=|B_1(0)|$. The {\em symmetric-decreasing rearrangement} of a measurable function $u:{\mathbb R}^N\to {\mathbb R}_+$ is, by definition, the radial function $u^*(x)=u^*(|x|)$ such that $u^*$ is non-increasing, right continuous with respect to $r:=|x|$, and \[ |\{u^*>t\}|=|\{u>t\}|\quad \text{for a.e. $t\geq 0$}. \] From now on, the dimension $N\geq 1$ will be fixed, $s\in \ ]0,1[$, and $p\geq 1$ will satisfy $ps<N$.
Moreover, $p^*$ denotes the fractional critical exponent, namely $p^*:=Np/(N-ps)$. Let $u:{\mathbb R}^N\to{\mathbb R}$ be measurable. We say that $u$ {\em vanishes at infinity} if \begin{equation}\label{vi} |\{|u|>a\}|<+\infty\quad \text{for all $a>0$}. \end{equation} Define \[ |D^s u|^p(x)=\int_{{\mathbb R}^N}\frac{|u(x)-u(y)|^p}{|x-y|^{N+ps}}\, dy,\quad x\in{\mathbb R}^N. \] Elementary inequalities ensure that for all measurable $v,w:{\mathbb R}^N\to{\mathbb R}$ one has \begin{equation}\label{Liebn} \int_{{\mathbb R}^N}|D^s(v\, w)|^p\, dx\leq 2^{p-1}\int_{{\mathbb R}^N} \left(|v|^p\, |D^s w|^p+ |w|^p\, |D^s v|^p\right) dx. \end{equation} Next, set \[ [u]_{s,p}:=\left(\int_{{\mathbb R}^N\times{\mathbb R}^N}\frac{|u(x)-u(y)|^p}{|x-y|^{N+ps}}\, dx\, dy\right)^{\frac{1}{p}} \] in addition to \[ \|u\|_{\alpha, q}:=\left(\int_{{\mathbb R}^N} \frac{|u|^q}{|x|^\alpha}\, dx\right)^{\frac{1}{q}}, \] where $\alpha\in [0, N[$ and $q\geq 1$. Obviously, $\|u\|_q:=\|u\|_{0, q}$. The symbol $\dot{W}^{s,p}({\mathbb R}^N)$ denotes the homogeneous Sobolev space \[ \dot{W}^{s,p}({\mathbb R}^N):=\{u: \text{$u$ is measurable, \eqref{vi} holds, and}\ [u]_{s,p}<+\infty\}, \] while for any $\Omega$ open (not necessarily bounded) subset of ${\mathbb R}^N$ we let \[ W^{s,p}_0(\Omega):=\{u\in \dot{W}^{s,p}({\mathbb R}^N): u\equiv 0 \, \text{ a.e. in $\Omega^c$}\}. \] Clearly, $\dot{W}^{s,p}({\mathbb R}^N)=W^{s,p}_0({\mathbb R}^N)$. If $p>1$ then both ${\dot W}^{s,p}({\mathbb R}^N)$ and $W^{s,p}_0(\Omega)$ are reflexive Banach spaces with respect to the norm $[\cdot ]_{s,p}$. Moreover, $C^\infty_c({\mathbb R}^N)$ is a dense subspace of them provided $\Omega={\mathbb R}^N$ or $\Omega$ is bounded and smooth. $\dot{W}^{s,p}({\mathbb R}^N)$ falls inside the wider class of Besov spaces, whose definition we now recall. 
For any $h\in {\mathbb R}^N\setminus\{0\}$ and measurable $g:{\mathbb R}^N\to {\mathbb R}$, put \[ g_h(x):=g(x+h),\quad \delta_h g(x):=g_h(x)-g(x),\quad \delta^2_h g:=\delta_{-h}(\delta_h g)=\delta_h(\delta_{-h} g). \] Given $1\leq p<+\infty$, $1\leq q\leq+\infty$, and $s\in \ ]0, 2[\ \setminus \{1\}$, the classical (homogeneous) Besov space is \[ \dot B^{s}_{p, q}({\mathbb R}^N):=\left\{u: \text{\eqref{vi} holds and} \ [u]_{B^{s}_{p,q}}^q:=\int_{{\mathbb R}^N}\left(\int_{{\mathbb R}^N}\left|\frac{\delta^2_hu(x)}{|h|^s}\right|^p dx\right)^{q/p}\frac{dh}{|h|^N}<+\infty\right\} \] when $q<+\infty$, while \[ \dot B^{s}_{p, \infty}({\mathbb R}^N):=\left\{u: \text{\eqref{vi} holds and} \ [u]_{B^{s}_{p,\infty}}^p:=\sup_{h\neq 0}\int_{{\mathbb R}^N}\left|\frac{\delta^2_hu(x)}{|h|^s}\right|^p dx<+\infty\right\} \] for $q=+\infty$. Recall that $[\cdot ]_{B^{s}_{p, q}}$ turns out to be a norm on $\dot{B}^{s}_{p, q}({\mathbb R}^N)$ and that $(\dot{B}^{s}_{p, q}({\mathbb R}^N),[\cdot ]_{B^{s}_{p, q}})$ is complete. For larger values of $s$, which we do not need in the sequel, the norm actually involves higher order differences $\delta^m_hu$, where $m>s$. Finally, if $s\in \ ]0, 1[$ then $$\dot B^{s}_{p, p}({\mathbb R}^N)=\dot W^{s,p}({\mathbb R}^N),$$ and the respective norms are equivalent; see, e.g., \cite[Chapter 10]{Mazya}. In such a case, by a simple change of variables, one has \[ [u]_{s,p}^p=\int_{{\mathbb R}^N}\left\|\frac{\delta_hu}{|h|^s}\right\|_p^p \frac{dh}{|h|^N}. \] Given an arbitrary nonempty open set $\Omega\subseteq {\mathbb R}^N$, the functional $u\mapsto \frac 1 p [u]_{s,p}^p$, $u\in W^{s,p}_0(\Omega)$, turns out to be convex and differentiable provided $p>1$.
Its differential $(-\Delta_p)^su$ at any point $u$ lies in $$W^{-s,p'}(\Omega):=\big(W^{s,p}_0(\Omega)\big)^*.$$ One clearly has $L^{(p^*)'}({\mathbb R}^N)\hookrightarrow W^{-s, p'}({\mathbb R}^N)$, because $$W^{s,p}_0(\Omega)\hookrightarrow\dot W^{s,p}({\mathbb R}^N)\hookrightarrow L^{p^*}({\mathbb R}^N).$$ Thus, the equation $(-\Delta_p)^s u=f$ makes sense (weakly) in $\Omega$ for all $f\in L^{(p^*)'}({\mathbb R}^N)$. Here, we will be concerned with \emph{more general right-hand sides}. Precisely, let $f\in L^{1}_{\rm loc}({\mathbb R}^N)$. We say that $u\in W^{s,p}_0(\Omega)$ is a weak solution of $(-\Delta_p)^su=f$ provided \begin{equation}\label{locweak} \langle (-\Delta_p)^s u, \varphi\rangle=\int_{{\mathbb R}^N} f\, \varphi\, dx\quad \forall\,\varphi\in W^{s,p}_0(\Omega) \mbox{ such that } f\,\varphi\in L^1({\mathbb R}^N), \end{equation} where $\langle \ , \ \rangle$ denotes the duality pairing between $W^{-s, p'}(\Omega)$ and $W^{s,p}_0(\Omega)$. Every $\varphi\in W^{s,p}_0(\Omega)$ fulfilling $f\,\varphi\in L^1({\mathbb R}^N)$ will be called a \emph{suitable test function}. If $\Omega={\mathbb R}^N$ and $\varphi\in\dot{W}^{s,p}({\mathbb R}^N)$ is bounded and has support of finite measure then $\varphi$ turns out to be a suitable test function. Let us now recall some facts about the space $\dot{W}^{s,\gamma}({\mathbb R}^N)$ with $\gamma\in\ ]0, 1[$. One can introduce again the vector space \[ \dot{W}^{s, \gamma}({\mathbb R}^N)=\{u: \text{$u$ is measurable, \eqref{vi} holds and } [u]_{s,\gamma}<+\infty\}, \] but it is not a Banach space and, for sufficiently small $\gamma>0$, its elements may fail to be locally integrable and therefore to be distributions. On the other hand, the Besov spaces $\dot{B}^s_{\gamma, q}({\mathbb R}^N)$ for $s,q,\gamma>0$ can still be defined via suitable decay properties of the Littlewood-Paley decomposition, giving rise to a space of distributions (which may contain singular measures).
However, unless $\gamma>\frac{N}{N+s}$, it is no longer true that $\dot{W}^{s, \gamma}({\mathbb R}^N)=\dot{B}^{s}_{\gamma, \gamma}({\mathbb R}^N)$; see \cite[Section 2.5.12]{T} for more details. Regarding the basic properties of problem \eqref{HS}, we start by pointing out that, in order to seek optimizers in \eqref{HS}, radial functions suffice. \begin{lemma}\label{radiality} Suppose $u\in\dot W^{s, p}({\mathbb R}^N)$ is a minimizer of \eqref{I}. Then $u$ turns out to be radially non-increasing around some point, which is zero if $\alpha>0$. \end{lemma} \begin{proof} An easy computation based on \eqref{Ilambda} ensures that $u$ realizes the Rayleigh quotient \[ {\mathcal R}:=\inf\left\{\frac{[v]_{s,p}^p}{\|v\|_{\alpha, q}^{p}}: v\in\dot{W}^{s,p}({\mathbb R}^N)\setminus \{0\}\right\}. \] Let $u^*$ be the symmetric-decreasing rearrangement of $u$. Thanks to Theorem 3.4 of \cite{LiebLoss} we have \[ \|u\|_{\alpha, q}\leq \|u^*\|_{\alpha, q}, \] because $x\mapsto 1/|x|^\alpha$ coincides with its symmetric-decreasing rearrangement. If $\alpha:=0$ then equality always holds, while the inequality turns out to be strict whenever $u\neq u^*$ and $\alpha>0$. Moreover, the P\'olya-Szeg\"o principle \cite[Theorem 9.2]{AL} gives \[ [u^*]_{s,p}\leq [u]_{s,p}, \] with strict inequality once $u$ is not a translation of $u^*$. The last assertion is peculiar to the nonlocal nature of $[u]_{s,p}$; see \cite[Theorem A1]{FS}. Consequently, if $u$ were not radially non-increasing around some point (around zero when $\alpha>0$), the Rayleigh quotient would strictly decrease under rearrangement, against minimality. \end{proof} Let us also observe that inequalities \eqref{HS} stem from the borderline Hardy inequality, namely \eqref{HS} written for $\alpha:=ps$ and $q:=p$. The corresponding best constant, say $C_H$, has been explicitly computed in \cite[Theorem 1.1]{FS}. The next result is basically folklore. \begin{lemma} Let $C_H:=C(N, p, s, ps)$ in \eqref{HS}. Then \[ C(N, p, s, \alpha)\leq C_H\left(\frac{N\omega_N}{N-ps}\right)^{\frac{1}{p}\frac{\alpha-ps}{N-\alpha}}. \] \end{lemma} \begin{proof} Pick any $u\in C^\infty_c({\mathbb R}^N)$.
On account of Lemma \ref{radiality}, we may assume $u=u(r)$ both nonnegative and radially non-increasing. It is known \cite{HLP} that if $\lambda\geq 1$ and $h:[0, +\infty)\to {\mathbb R}^+_0$ is non-increasing then \[ \int_0^{+\infty} h(t)^\lambda\, t^{\lambda-1} dt= \int_0^{+\infty} (h(t)t)^{\lambda-1}h(t) dt\leq \int_0^{+\infty}\left(\int_0^t h(s) ds\right)^{\lambda-1} h(t) dt\leq \left(\int_0^{+\infty} h(t) dt\right)^\lambda. \] Choosing $t:=r^{N-ps}$, $h(t):=u(t^{1/(N-ps)})^p$, $\lambda:=q/p$ yields \[ \int_{{\mathbb R}^N} \frac{u^p}{|x|^{ps}}\, dx=\frac{N\omega_N}{N-ps}\int_0^{+\infty} h(t)\, dt\geq \frac{N\omega_N}{N-ps}\left(\int_0^{+\infty} u(t^{\frac{1}{N-ps}})^q\, t^{\frac{q}{p}-1} dt\right)^{\frac{p}{q}}. \] Since $t=r^{N-ps}$, from \eqref{scalingrelation} it follows \[ \int_{{\mathbb R}^N} \frac{u^p}{|x|^{ps}}\, dx\geq \frac{N\omega_N}{N-ps}\left((N-ps)\int_0^{+\infty} \frac{u^q}{r^\alpha} r^{N-1}\, dr\right)^{\frac{p}{q}}= \left(\frac{N\omega_N}{N-ps}\right)^{-\frac{\alpha-ps}{N-\alpha}}\left(\int_{{\mathbb R}^N} \frac{u^q}{|x|^\alpha}\, dx\right)^{\frac{p}{q}}. \] Now, rearranging and inserting inside \eqref{HS} with $q:=p$ and $\alpha:=ps$ leads to the conclusion through the P\'olya-Szeg\"o principle. \end{proof} Finally, we collect here the following two technical lemmas. \begin{lemma} \label{23} Let $\eta\in C^\infty({\mathbb R}^N)$ and let $\gamma>0$. Then \begin{equation}\label{deta1} \||D^s \eta|^\gamma\|_\infty\leq C(N, \gamma)\, \left(\frac{1}{1-s}+\frac{1}{s}\right)\,{\rm Lip}(\eta)^{\gamma s}\, \|\eta\|_\infty^{\gamma(1-s)}. \end{equation} If, moreover, ${\rm supp}(\eta)\subseteq B_R$ then for every $\theta>0$ there exists $C_\theta:=C(\theta, N, \gamma, s)$ such that \begin{equation}\label{deta2} |D^s\eta|^\gamma(x)\leq \frac{C_\theta\, R^N\, \|\eta\|_\infty^\gamma}{|x|^{N+\gamma s}},\qquad x\in B_{(1+\theta) R}^c. \end{equation} \end{lemma} \begin{proof} We may assume that the right-hand side of \eqref{deta1} is finite.
Pick $\lambda>0$ and observe that \[ \begin{split} |D^s\eta|^\gamma(x)&\leq \left(\int_{B_{\lambda}(x)}\frac{{\rm Lip}(\eta)^\gamma\, |x-y|^\gamma}{|x-y|^{N+\gamma s}}\, dy+\int_{B_{\lambda}^c(x)}\frac{2^\gamma\, \|\eta\|_\infty^\gamma}{|x-y|^{N+\gamma s}}\, dy\right)\\ & \leq C\left( \frac{\lambda^{\gamma(1-s)}}{\gamma(1-s)}\, {\rm Lip}(\eta)^\gamma+\frac{2^\gamma\, \|\eta\|_\infty^\gamma}{\lambda^{\gamma s}\gamma s}\right), \end{split} \] where $C=C(N)$. Optimizing this inequality in $\lambda>0$ directly yields \eqref{deta1}.\\ Let us next come to \eqref{deta2}. If $|x|\geq(1+\theta)R$ and $|y|\leq R$ then \[ |x|\leq |x-y|+|y|\leq |x-y|+R\leq |x-y|+\frac{|x|}{1+\theta}\quad \Rightarrow\quad |x|\leq \frac{1+\theta}{\theta}|x-y|. \] Since ${\rm supp}(\eta)\subseteq B_R$, one has \[ |D^s\eta|^\gamma(x)=\int_{B_R}\frac{|\eta(y)|^\gamma}{|x-y|^{N+\gamma s}}\, dy\leq \left(\frac{1+\theta}{\theta}\right)^{N+\gamma s}\frac{\|\eta\|_\gamma^\gamma}{|x|^{N+\gamma s}}, \] which easily entails \eqref{deta2}. \end{proof} Inspecting the previous proof, one can also show that $C(N, \gamma)=C(N)/\gamma$ whenever $\gamma\leq 1$. \begin{lemma}\cite[Lemma A.2]{BP}\label{lemmag} Suppose $g:{\mathbb R}\to {\mathbb R}$ is absolutely continuous and non-decreasing, $p>1$, and \begin{equation} \label{defG} G(t):=\int_0^tg'(\tau)^\frac{1}{p}\, d\tau,\quad t\in{\mathbb R}. \end{equation} Then $[G(u)]_{s,p}^p\leq \langle (-\Delta_p)^s u, g(u)\rangle$ for every $u\in \dot W^{s,p}({\mathbb R}^N)$. \end{lemma} \section{Concentration Compactness} In this section we work out the details of the concentration-compactness scheme for problem \eqref{I}. Some arguments will closely follow \cite[Theorem 2.4]{Lions2}, but serious modifications, which we will explicitly outline below, are needed in order to deal with nonlocal interactions. \begin{theorem}\label{exist} Let $N>ps$, $q>p$, and $\alpha\in[0, ps[$ satisfy \eqref{scalingrelation}.
Then \eqref{I} possesses a nonnegative radially decreasing minimizer. \end{theorem} \begin{proof} It suffices to verify the assertion for $\lambda=1$. Define, provided $u\in \dot{W}^{s,p}({\mathbb R}^N)$, \[ \rho(u):=|D^s u|^p + |u|^{p^*}\in L^1({\mathbb R}^N). \] If $\{u_n\}$ is a minimizing sequence of \eqref{I} (where $\lambda=1$) then, up to subsequences, \[ \lim_{n\to+\infty}\int_{{\mathbb R}^N}\rho(u_n)\, dx= L\geq I_1>0. \] We can choose a rescaling as well as a subsequence, still denoted by $\{u_n\}$, such that, setting \[ Q_n(t):=\sup_{y\in {\mathbb R}^N} \int_{B_t(y)}\rho(u_n)\, dx,\quad n\in{\mathbb N},\; t\geq 0, \] one has \[ Q_n(1)=I_1/2\quad\forall\, n\in{\mathbb N}, \quad \lim_{n\to+\infty}Q_n(t)=Q(t)\quad\forall\, t\geq 0. \] Therefore, vanishing evidently cannot occur in \cite[Lemma I.1]{Lions}. We will show that the same holds true for dichotomy; this is the point where nonlocal effects force a modification of the standard proof. Suppose, on the contrary, that \[ \lim_{t\to +\infty} Q(t)=a\in\ ]0, L[. \] Then for every $\varepsilon>0$ there exist $\{y_n\}\subseteq{\mathbb R}^N$, $R_n\geq R_0>0$, $R_n\uparrow +\infty$ such that \begin{equation}\label{dich} \left|\int_{B_{R_0}(y_n)}\rho(u_n)\, dx-a\right|+\left|\int_{B_{R_n}^c(y_n)}\rho(u_n)\, dx-(L-a)\right|+\left|\int_{B_{R_n}(y_n)\setminus B_{R_0}(y_n)}\rho(u_n)\, dx\right|<\varepsilon. \end{equation} The following elementary but useful inequalities quantify how the local behaviour of $|D^s u|$ controls the magnitude of $u$ nearby. Clearly, the specific numerical values are to some extent arbitrary choices. \begin{lemma} Let $u\in \dot{W}^{s,p}({\mathbb R}^N)$ and let $R>0$.
Then \begin{equation}\label{nl1} \frac{1}{R^{ps}}\int_{B_R} |u|^p\, dx\leq C\int_{B_{3R}\setminus B_{2R}}|D^su|^p+\frac{1}{R^{ps}}|u|^p\, dx, \end{equation} \begin{equation}\label{nl2} R^{N}\int_{B_{4R}^c} \frac{|u|^p}{|x|^{N+ps}}\, dx\leq C\int_{B_{3R}\setminus B_{2R}}|D^su|^p+\frac{1}{R^{ps}}|u|^p\, dx, \end{equation} with appropriate constant $C=C(N,p,s)$. \end{lemma} \begin{proof} By density, we may assume $u\in C^\infty_c({\mathbb R}^N)$ while via scaling one can put $R=1$. Let us first observe that $z\in B_3\setminus B_2$, $x\in B_1$ imply $1\leq |x-z|\leq 4$, whence \[ \int_{B_1} |u|^p\, dx\leq 2^{p-1}\int_{B_1}|u(x)-u(z)|^p+ |u(z)|^p dx\leq C\left(\int_{B_1}\frac{|u(x)-u(z)|^p}{|x-z|^{N+ps}}\, dx+|u(z)|^p\right). \] After integration in $z\in B_3\setminus B_2$, this entails \[ \begin{split} \int_{B_1} |u|^p\, dx&\leq C\left(\int_{B_1\times (B_3\setminus B_2)}\frac{|u(x)-u(z)|^p}{|x-z|^{N+ps}}\, dx\, dz+\int_{B_3\setminus B_2}|u(z)|^p\, dz\right)\\ &\leq C\int_{B_3\setminus B_2}|D^su|^p+|u|^p\, dx, \end{split} \] as desired. Similarly, since $|x-z|\leq |x|+|z|\leq 2|x|$ for all $x\in B_4^c$, $z\in B_3\setminus B_2$, one has \[ \int_{B_4^c} \frac{|u|^p}{|x|^{N+ps}}\, dx\leq C\left(\int_{B_4^c}\frac{|u(x)-u(z)|^p}{|x-z|^{N+ps}}\, dx+|u(z)|^p\right), \] which integrated in $z\in B_3\setminus B_2$ provides \[ \int_{B_{4}^c} \frac{|u|^p}{|x|^{N+ps}}\, dx\leq C\int_{B_{3}\setminus B_{2}}|D^su|^p+|u|^p\, dx, \] and \eqref{nl2} follows. \end{proof} \begin{lemma}\label{lemmaint} Suppose $\eta\in C^\infty({\mathbb R}^N)$ fulfills $0\leq \eta\leq 1$, $\eta\lfloor_{B_4}=1$, $\eta\lfloor_{ B_5^c}=0$, and define $\eta_R(x):=\eta(x/R)$, $R>0$. 
Then there exists $C_\eta:=C(N, p, s, {\rm Lip}(\eta))$ such that for every $u\in \dot{W}^{s,p}({\mathbb R}^N)$ one has \begin{equation}\label{st1} \left|\int_{B_R}|D^su|^p\, dx-\int_{{\mathbb R}^N}|D^s(\eta_R\, u)|^p\, dx\right|\leq C_\eta\int_{B_{20R}\setminus B_{R}}|D^s u|^p\, dx+\frac{C_{\eta}}{R^{ps}}\int_{B_{20R}\setminus B_{R}}|u|^p\, dx. \end{equation} Suppose $\xi\in C^\infty({\mathbb R}^N)$ fulfills $0\leq \xi\leq 1$, $\xi\lfloor_{B_{1/5}}=0$, $\xi\lfloor_{B_{1/4}^c}=1$, and define $\xi_R(x):=\xi(x/R)$, $R>0$. Then there exists $C_\xi:=C(N, p, s, {\rm Lip}(\xi))$ such that for every $u\in \dot{W}^{s,p}({\mathbb R}^N)$ one has \begin{equation}\label{st2} \left|\int_{B_{R}^c}|D^su|^p\, dx-\int_{{\mathbb R}^N}|D^s(\xi_R\, u)|^p\, dx\right|\leq C_\xi\int_{B_{R}\setminus B_{R/20}}|D^s u|^p\, dx+\frac{C_\xi}{R^{ps}}\int_{B_{R}\setminus B_{R/20}}|u|^p\, dx. \end{equation} \end{lemma} \begin{proof} We start by proving \eqref{st1}. Scale to $R=1$ and observe that \begin{equation}\label{diff} \begin{split} \int_{B_1}|D^s u|^p\, dx-&\int_{{\mathbb R}^N}|D^s(\eta\, u)|^p\, dx=-\int_{B_1^c\times {\mathbb R}^N}\frac{|\eta(x)\, u(x)-\eta(y)\, u(y)|^p}{|x-y|^{N+ps}}\, dx\, dy\\ &\quad +\int_{B_1\times B_{4}^c}\frac{|u(x)-u(y)|^p-|u(x)\, \eta(x)-\eta(y)\, u(y)|^p}{|x-y|^{N+ps}}\, dx\, dy. \end{split} \end{equation} Now, \[ \begin{split} \int_{B_1\times B_{4}^c}&\frac{|u(x)-u(y)|^p}{|x-y|^{N+ps}}\, dx\, dy\leq 2^{p-1}\int_{B_1\times B_{4}^c}\frac{|u(x)|^p+|u(y)|^p}{|x-y|^{N+ps}}\, dx\, dy\\ &\leq C\left(\int_{B_1}|u|^p\, dx+\int_{B_4^c}\frac{|u|^p}{|y|^{N+ps}}\, dy\right) \leq C \int_{B_3\setminus B_2}|D^su|^p+|u|^p\, dx. \end{split} \] A similar inequality is true for the term involving $\eta$. Let us next estimate the other integral that appears in \eqref{diff}. 
Evidently, \begin{equation}\label{lieb} |\eta(x)\, u(x)-\eta(y)\, u(y)|^p\leq 2^{p-1}\left(|\eta(x)|^p\, |u(x)-u(y)|^p+|\eta(x)-\eta(y)|^p\, |u(y)|^p\right) \end{equation} and \[ \int_{B_1^c\times {\mathbb R}^N}\frac{|\eta(x)|^p\, |u(x)- u(y)|^p}{|x-y|^{N+ps}}\, dx\, dy\leq C\int_{B_5\setminus B_1}|D^s u|^p\, dx. \] Exploiting \eqref{deta1} on $B_6$ and \eqref{deta2} on $B_6^c$ (recall that ${\rm supp}(\eta)\subseteq B_5$), we get \begin{equation*} \begin{split} \int_{B_1^c\times {\mathbb R}^N}&\frac{|\eta(x)-\eta(y)|^p\, |u(y)|^p}{|x-y|^{N+ps}}\, dx\, dy\leq \int_{{\mathbb R}^N}|D^s\eta|^p\, |u|^p\, dy\\ &\leq \int_{B_6}|D^s\eta|^p\, |u|^p\, dy+\int_{B_6^c}|D^s\eta|^p\, |u|^p\, dy\\ &\leq C\, (1+{\rm Lip}(\eta)^{ps})\left(\int_{B_6}|u|^p\, dy+\int_{B_6^c}\frac{|u|^p}{|y|^{N+ps}}\, dy\right). \end{split} \end{equation*} Thanks to \eqref{nl1}--\eqref{nl2}, this entails \begin{equation*} \begin{split} \int_{B_1^c\times {\mathbb R}^N}&\frac{|\eta(x)-\eta(y)|^p\, |u(y)|^p}{|x-y|^{N+ps}}\, dx\, dy\\ &\leq C_\eta\int_{B_{18}\setminus B_{12}}|D^s u|^p+|u|^p \, dx+ C_\eta\int_{B_{9/2}\setminus B_{3}}|D^s u|^p+|u|^p \, dx. \end{split} \end{equation*} Gathering together the estimates above we achieve \eqref{st1}. The proof of \eqref{st2} is entirely analogous but, for the sake of completeness, we sketch it. Since \[ \begin{split} &\int_{B_{1}^c}|D^su|^p\, dx-\int_{{\mathbb R}^N}|D^s(\xi\, u)|^p\, dx=\\ &\int_{B_1^c\times B_{1/4}}\frac{|u(x)-u(y)|^p-|\xi(x)\, u(x)-\xi(y)\, u(y)|^p}{|x-y|^{N+ps}}\, dx\, dy-\int_{B_1}|D^s(\xi\, u)|^p\, dx, \end{split} \] let us estimate the two terms separately.
Concerning the first one, \[ \begin{split} \left|\int_{B_1^c\times B_{1/4}}\frac{|u(x)-u(y)|^p-|u(x)-\xi(y)\, u(y)|^p}{|x-y|^{N+ps}}\, dx\, dy\right| &\leq 2^p\int_{B_1^c\times B_{1/4}}\frac{|u(x)|^p+|u(y)|^p}{|x-y|^{N+ps}}\, dx\, dy\\ &\leq C\int_{B_1^c}\frac{|u|^p}{|x|^{N+ps}}\, dx +C\int_{B_{1/4}}|u(y)|^p\, dy, \end{split} \] which, via a suitable rescaling of \eqref{nl1}--\eqref{nl2}, provides \[ \left|\int_{B_1^c\times B_{1/4}}\frac{|u(x)-u(y)|^p-|u(x)-\xi(y)\, u(y)|^p}{|x-y|^{N+ps}}\, dx\, dy\right|\leq C\int_{B_{3/4}\setminus B_{1/2}}|D^s u|^p+|u|^p\, dx. \] For the second one, we proceed as in \eqref{lieb} and obtain \[ \begin{split} \int_{B_1}|D^s(\xi \, u)|^p\, dx&\leq C\left(\int_{B_1}|\xi|^p\, |D^s u|^p\, dx+\int_{{\mathbb R}^N}|u|^p\, |D^s\xi|^p\, dx\right)\\ &\leq C\left(\int_{B_1\setminus B_{1/5}}|D^s u|^p\, dx+\int_{{\mathbb R}^N}|u|^p\, |D^s\xi|^p\, dx\right). \end{split} \] Since $|D^s\xi|^p=|D^s(1-\xi)|^p$, due to \eqref{deta1}--\eqref{deta2} one has \[ \begin{split} \int_{{\mathbb R}^N}|u|^p\, |D^s\xi|^p\, dx&=\int_{B_{1/6}}|u|^p\, |D^s(1-\xi)|^p\, dx+\int_{B_{1/6}^c}|u|^p\, |D^s(1-\xi)|^p\, dx\\ &\leq C_\xi\left(\int_{B_{1/6}}|u|^p\, dx +\int_{B_{1/6}^c}\frac{|u|^p}{|x|^{N+ps}}\, dx\right). \end{split} \] Through a suitable rescaling of \eqref{nl1}--\eqref{nl2}, it yields \[ \int_{{\mathbb R}^N}|u|^p\, |D^s\xi|^p\, dy\leq C_\xi\int_{B_{1/2}\setminus B_{1/3}}|D^s u|^p+|u|^p\, dx+C_\xi\int_{B_{1/8}\setminus B_{1/12}}|D^s u|^p+|u|^p\, dx. \] Gathering together the above estimates we arrive at \eqref{st2}. \end{proof} Pick $\eta, \xi$ as in Lemma \ref{lemmaint} and define \[ u_n^1(x):=\eta\big(\frac{x-y_n}{R_0}\big)\, u_n(x),\quad u_n^2(x)=\xi\big(\frac{x-y_n}{R_n}\big)\, u_n(x). \] If $R_n>400R_0$ then, by \eqref{st1}--\eqref{st2} \[ \begin{split} \left|\int_{B_{R_0}(y_n)}\right. 
& \left.|D^s u_n|^p\, dx-\int_{{\mathbb R}^N}|D^s u_n^1|^p\, dx\right|+\left|\int_{B_{R_n}^c(y_n)}|D^s u_n|^p\, dx-\int_{{\mathbb R}^N}|D^s u_n^2|^p\, dx\right|\\ &\leq C_\eta\left(\int_{B_{20R_0}(y_n)\setminus B_{R_0}(y_n)}|D^s u_n|^p\,dx +\frac{1}{R_0^{ps}}\int_{B_{20R_0}(y_n)\setminus B_{R_0}(y_n)} |u_n|^p\, dx\right)\\ &\quad +C_\xi\left(\int_{B_{R_n}(y_n)\setminus B_{\frac{R_n}{20}}(y_n)}|D^s u_n|^p\,dx +\frac{1}{R_n^{ps}}\int_{B_{R_n}(y_n)\setminus B_{\frac{R_n}{20}}(y_n)} |u_n|^p\, dx\right)\\ &\leq C_\eta\int_{B_{20R_0}(y_n)\setminus B_{R_0}(y_n)}|D^s u_n|^p\,dx +C'_\eta\left(\int_{B_{20R_0}(y_n)\setminus B_{R_0}(y_n)} |u_n|^{p^*}\, dx\right)^{\frac{p}{p^*}}\\ &\quad +C_\xi\int_{B_{R_n}(y_n)\setminus B_{\frac{R_n}{20}}(y_n)}|D^s u_n|^p\,dx +C'_\xi\left(\int_{B_{R_n}(y_n)\setminus B_{\frac{R_n}{20}}(y_n)} |u_n|^{p^*}\, dx\right)^{\frac{p}{p^*}}\\ &\leq C_1\int_{B_{R_n}(y_n)\setminus B_{R_0}(y_n)} \rho(u_n)\, dx+C_2\left(\int_{B_{R_n}(y_n)\setminus B_{R_0}(y_n)} \rho(u_n)\, dx\right)^{\frac{p}{p^*}}\leq C_3(\varepsilon+\varepsilon^{\frac{p}{p^*}}). \end{split} \] Therefore, due to \eqref{dich}, \begin{equation}\label{split0} \left|[u_n]_{s,p}^p-[u_n^1]_{s,p}^p-[u_n^2]_{s,p}^p\right|\le C\, (\varepsilon+\varepsilon^{\frac{p}{p^*}}). \end{equation} Concerning $L^{p^*}$-norms, we readily have \[ \left|\int_{B_{R_0}(y_n)}|u_n|^{p^*}\, dx-\|u_n^1\|_{p^*}^{p^*}\right|+\left|\int_{B_{R_n}^c(y_n)}|u_n|^{p^*}\, dx-\|u_n^2\|_{p^*}^{p^*}\right|\le C\int_{B_{R_n}(y_n)\setminus B_{R_0}(y_n)}|u_n|^{p^*}\, dx, \] whence, in view of \eqref{dich} again, \begin{equation} \label{split2} \left|\int_{{\mathbb R}^N}\rho(u_n^1)\, dx-a\right|+\left|\int_{{\mathbb R}^N}\rho(u_n^2)\, dx-(L-a)\right|< C\, (\varepsilon+\varepsilon^{\frac{p}{p^*}}),\quad a\in \ ]0, L[. 
\end{equation} To conclude, suppose that \[ \lim_{n\to+\infty}\int_{{\mathbb R}^N} \frac{|u_n^1|^q}{|x|^\alpha}\, dx=\lambda_1,\quad \lim_{n\to+\infty}\int_{{\mathbb R}^N}\frac{|u_n^2|^q}{|x|^\alpha}\, dx=\lambda_2 \] (where a subsequence is considered when necessary) with appropriate $\lambda_1,\lambda_2\in [0, 1]$ depending on $\varepsilon$, i.e., $\lambda_1:=\lambda_1(\varepsilon)$, $\lambda_2:=\lambda_2(\varepsilon)$, and put \[ \eta_n(x):=\eta(\frac{x-y_n}{R_0}),\quad \xi_n(x):=\xi(\frac{x-y_n}{R_n}),\quad \theta_n(x):=(1-\eta_n(x)^q-\xi_n(x)^q)^{1/q}. \] Clearly, $0\le \theta_n\le \chi_{B_{R_n}(y_n)\setminus B_{R_0}(y_n)}$ because $R_n\geq 400 R_0$. By H\"older's and Hardy's inequalities, besides \eqref{scalingrelation}, one has \[ \begin{split} \int_{{\mathbb R}^N}(|u_n|^q&-|u_n^1|^q-|u_n^2|^q)\frac{dx}{|x|^\alpha}=\int_{{\mathbb R}^N}|\theta_n\, u_n|^q\, \frac{dx}{|x|^\alpha}\le \int_{{\mathbb R}^N}\frac{|u_n|^{\frac{\alpha}{s}}}{|x|^\alpha}|\theta_n\, u_n|^{q-\frac{\alpha}{s}}\, dx\\ &\le \| u_n\|_{ps, p}^{\frac{\alpha}{s}}\, \|\theta_n \,u_n \|_{p^*}^{p^*(1-\frac{\alpha}{ps})}\le C\left(\int_{B_{R_n}(y_n)\setminus B_{R_0}(y_n)}\rho(u_n)\, dx\right)^{1-\frac{\alpha}{ps}}\le C\, \varepsilon^{1-\frac{\alpha}{ps}}. \end{split} \] Thus, \begin{equation}\label{alphabeta} |\lambda_1+\lambda_2-1|= \lim_{n\to+\infty}\left|\int_{{\mathbb R}^N}(|u_n|^q-|u_n^1|^q-|u_n^2|^q)\frac{dx}{|x|^\alpha}\right|\leq C\, \varepsilon^{1-\frac{\alpha}{ps}}. \end{equation} The remaining proof of tightness now follows verbatim from \cite[Section I.2, Step 1]{Lions1}. Nevertheless, we briefly sketch it here. Via \eqref{split2} and Sobolev's inequality, one can find $b>0$, $\varepsilon_0>0$ such that \[ [u^1_n]_{s,p}^p\geq b,\qquad [u^2_n]_{s,p}^p \geq b \] for all $\varepsilon<\varepsilon_0$. Consequently, both numbers $\lambda_1, \lambda_2$ are bounded away from $0$ and $1$ provided $\varepsilon$ is sufficiently small.
Indeed, using \eqref{split0}, the above inequalities, and \eqref{Ilambda} yields \[ I_1=\lim_{n\to+\infty}[u_n]^p_{s,p}\geq \lim_{n\to+\infty}\left([u_n^1]^p_{s,p}+[u^2_n]_{s,p}^p\right)-O(\varepsilon)\geq b+I_{\lambda_2}-O(\varepsilon)=b+\lambda_2^{\frac{p}{q}} I_1-O(\varepsilon). \] This entails an upper bound on $\lambda_2$, as long as $O(\varepsilon)<b/2$, and also a lower bound on $\lambda_1$, thanks to \eqref{alphabeta}. A similar reasoning furnishes an upper bound on $\lambda_1$. Hence, \[ I_1\geq \lim_{n\to+\infty}\left([u_n^1]^p_{s,p}+[u^2_n]_{s,p}^p\right)\geq I_{\lambda_1}+I_{\lambda_2}-O(\varepsilon). \] Now, pick $\varepsilon:=\varepsilon_k\to 0$. Up to subsequences, we evidently have $\lambda_1(\varepsilon_k)\to\bar\lambda\in \ ]0, 1[$ and, by \eqref{alphabeta}, $\lambda_2(\varepsilon_k)\to 1-\bar\lambda\in \ ]0, 1[$. So, due to \eqref{Ilambda}, $$I_1\geq\bar\lambda^{\frac{p}{q}}I_1+(1-\bar\lambda)^{\frac{p}{q}}I_1,$$ which is impossible whenever $q>p$, since $\bar\lambda^{\frac{p}{q}}+(1-\bar\lambda)^{\frac{p}{q}}>\bar\lambda+(1-\bar\lambda)=1$ for every $\bar\lambda\in \ ]0, 1[$ once $p/q<1$. Finally, Conclusion (i) of \cite[Lemma I.1]{Lions} produces a sequence $\{y_n\}\subseteq{\mathbb R}^N$ with the following property: \[ \mbox{For every $\varepsilon>0$ there exists $R>0$ such that}\;\int_{B_R^c(y_n)}\rho(u_n)\, dx<\varepsilon,\;\; n\in{\mathbb N}. \] Let us next show that $\{y_n\}$ is bounded. To do this, pick $\eta$ as in Lemma \ref{lemmaint} and define $\eta_n(x):=\eta((x-y_n)/R)$, $u_n^1(x):=u_n(x)\, \eta_n(x)$. Inequality \eqref{Liebn} provides \[ [u_n-u_n^1]_{s,p}^p\leq 2^{p-1}\left(\int_{{\mathbb R}^N}|1-\eta_n|^p\, |D^s u_n|^p\, dx+\int_{{\mathbb R}^N}|u_n|^p\, |D^s\eta_n|^p\, dx\right), \] while, by construction, \[ \begin{split} \int_{{\mathbb R}^N}|1-\eta_n|^p\, |D^s u_n|^p\, dx&\leq\int_{B^c_{4R}(y_n)}|D^s u_n|^p\, dx\leq\int_{B_R^c(y_n)}\rho(u_n)\, dx <\varepsilon,\\ \int_{{\mathbb R}^N}|u_n|^p\, |D^s\eta_n|^p\, dx&=\int_{B_{6R}(y_n)}|u_n|^p\, |D^s\eta_n|^p\, dx+ \int_{B^c_{6R}(y_n)}|u_n|^p\, |D^s\eta_n|^p\, dx.
\end{split} \] Since ${\rm Lip}(\eta_n)={\rm Lip}(\eta)/R$, through \eqref{deta1}, \eqref{nl1} (rescaled), and H\"older's inequality, we obtain \[ \begin{split} \int_{B_{6R}(y_n)} |u_n|^p\, |D^s\eta_n|^p\, dx&\leq \frac{C_1}{R^{ps}}\int_{B_{6R}(y_n)} |u_n|^p\, dx\leq C_2 \int_{B_{18R}(y_n)\setminus B_{12R}(y_n)}|D^s u_n|^p+\frac{|u_n|^p}{R^{ps}}\, dx\\ &\leq C_3\left[\int_{B_R^c(y_n)}\rho(u_n)\, dx+\left(\int_{B_R^c(y_n)}\rho(u_n)\, dx\right)^{\frac{p}{p^*}}\right] \leq C_3(\varepsilon+\varepsilon^{\frac{p}{p^*}}). \end{split} \] Analogously, on account of \eqref{deta2} and \eqref{nl2} (rescaled), one has \[ \begin{split} \int_{B_{6R}^c(y_n)} |u_n|^p\, |D^s\eta_n|^p\, dx&\leq C_4 \, R^N\int_{B_{6R}^c(y_n)}\frac{|u_n|^p}{|x-y_n|^{N+ps}}\, dx\leq C_5\int_{B_{\frac{9}{2}R}(y_n)\setminus B_{3R}(y_n)}|D^s u_n|^p+\frac{|u_n|^p}{R^{ps}}\, dx\\ &\leq C_6\left[\int_{B_R^c(y_n)}\rho(u_n)\, dx+\left(\int_{B_R^c(y_n)}\rho(u_n)\, dx\right)^{\frac{p}{p^*}}\right] \leq C_6(\varepsilon+\varepsilon^{\frac{p}{p^*}}). \end{split} \] Gathering together the above inequalities produces \[ [u_n-u_n^1]_{s,p}^p\leq C\, (\varepsilon+\varepsilon ^{\frac{p}{p^*}}), \] and, to see that $\{y_n\}$ is bounded, we proceed exactly as in \cite[p. 64]{Lions2}. Finally, the compactness of $\{u_n\}$ stems from the Second Concentration-Compactness Lem\-ma as performed in \cite[Theorem 2.5]{MS}. It suffices to substitute $\|u\|_{p^*}$ with $\|u\|_{\alpha, q}$ in the proof. \end{proof} \section{Regularity estimates} Recall that the weak-$L^q$ quasi-norm of a measurable function $u:{\mathbb R}^N\to {\mathbb R}$ is defined by setting \[ \|u\|_{L^{q, \infty}}:=\sup_{k>0}k|\{|u(x)|>k\}|^{1/q}. \] While in the next result we consider an arbitrary open $\Omega\subseteq {\mathbb R}^N$, we will be mainly interested in the case $\Omega={\mathbb R}^N$. \begin{theorem}[Summability estimates]\label{Rlemma} Let $N>ps$, let $\Omega\subseteq{\mathbb R}^N$ be nonempty open, and let $f\in L^r({\mathbb R}^N)$ for some $r\geq 1$.
Suppose $u\in W^{s,p}_0(\Omega)$ weakly solves $(-\Delta_p)^su= f$ in $\Omega$, in the sense of \eqref{locweak}. Then there exists a constant $C>0$ such that \begin{align} \label{stimalr4} \|u\|_{L^{\frac{p^*}{p'}, \infty}}&\leq C\, \|f\|_1^{\frac{1}{p-1}} & \quad & \text{if}\quad r=1,\\ \label{stimalr1} \|u\|_t&\leq C\, \|f\|_r^{\frac{1}{p-1}}&\quad &\text{if }\quad 1<r<\frac{N}{ps},\; t=\frac{N(p-1)r}{N-psr},\\ \label{stimalr2} \|u\|_t&\leq C\, \|f\|_{N/ps}^{\frac{t-p^*}{t(p-1)}}\, \|u\|_{p^*}^{\frac{p^*}{t}}&\quad &\text{if}\quad r=\frac{N}{ps},\; t\geq p^*,\\ \label{stimalr3} \|u\|_{\infty}&\leq C\, \|f\|_r^{\frac{r'}{p^*-r'}}\, \|u\|_{p^*}^{\frac{p^*-pr'}{p^*-r'}}&\quad &\text{if}\quad \frac{N}{ps}<r\leq +\infty. \end{align} The constant $C$ depends only on $N, p, s, r$ and possibly $t$ in the case $r=\frac{N}{ps}$. \end{theorem} \begin{proof} Given $k>\varepsilon>0$, $\beta\geq 1$ we define $$t_{k, \varepsilon}:=\min\{k, (t-\varepsilon)_+\},\quad g_\beta(t):=(t_{k, \varepsilon})^\beta\quad\forall\, t\in{\mathbb R}.$$ Clearly, $g_\beta$ is non-decreasing, Lipschitz continuous, and \begin{equation}\label{constG} G_\beta(t)=\frac{\beta^{1/p} p}{\beta+p-1} (t_{k, \varepsilon})^{\frac{\beta+p-1}{p}}, \end{equation} with $G_\beta$ as in \eqref{defG}. Moreover, $g_\beta\circ u\in W^{s,p}_0(\Omega)$ turns out to be a suitable test function, because it is bounded and has a finite measure support. Thus, using Lemma \ref{lemmag}, the Sobolev inequality on the left, and the H\"older inequality on the right, yields \begin{equation}\label{moser} C\left\| u_{k, \varepsilon}^{\frac{\beta+p-1}{p}}\right\|_{p^*}^p\leq [G_\beta(u)]_{s, p}^p\leq \langle(-\Delta_p)^s u, g_\beta\circ u\rangle=\int_\Omega f \, g_\beta\circ u\, dx\leq\|f\|_r\, \|u_{k, \varepsilon}^\beta\|_{r'} \end{equation} for some $C=C(N, p, s, \beta)>0$ and any $r\geq 1$. {\em Case 1: $r=1$ (whence $r'=\infty$).}\\ Pick $\beta:=1$ in \eqref{moser}.
By the Tchebychev inequality one has $$k^p\, |\{|u|\geq k\}|^{\frac{p}{p^*}}\leq \|u_{k, \varepsilon}\|_{p^*}^p\leq C\, \|f\|_1\, \|u_{k, \varepsilon}\|_\infty\leq C\, \|f\|_1 \, k,$$ which easily entails \eqref{stimalr4} once $\varepsilon\to 0^+$ and the supremum over $k>0$ is taken. {\em Case 2: $1<r<\frac{N}{ps}$ and $r'\leq p^*$}. \\ These inequalities force \begin{equation}\label{betazero} \beta_0(r):=\frac{(p-1)p^*}{pr'-p^*}\geq 1 \end{equation} as well as \begin{equation}\label{condbeta} \frac{\beta_0+p-1}{p}p^*=\beta_0 r'=\frac{N(p-1)r}{N-psr}. \end{equation} If $\beta:=\beta_0$ then \eqref{moser} becomes \eqref{stimalr1} with $u:=u_{k, \varepsilon}$. Letting $k\to +\infty$, $\varepsilon\to 0^+$ we achieve the conclusion. {\em Case 3: $1<r<\frac{N}{ps}$ and $r'> p^*$}. \\ In this case, $0<\beta_0<1$, with $\beta_0$ given by \eqref{betazero}, and $g_{\beta_0}$ is no longer Lipschitz continuous. Define, provided $k>\varepsilon>0$, $$\tilde g(t):=\min\{k^{\beta_0}, \max\{t, \varepsilon\}^{\beta_0}-\varepsilon^{\beta_0}\} \quad \forall\, t\in{\mathbb R}^+_0,\quad \tilde g (t):=-\tilde g(-t)\quad\forall\, t\in{\mathbb R}^-.$$ The inequality \[ \left(\frac{\beta_0^{1/p}p}{\beta_0+p-1}\right)^{p^*} |\tilde g(t)|^{r'}\leq |\tilde G(t)|^{p^*} \] is reduced to \[ \frac{(\tau^q-1)^{1/q}}{\tau-1}\geq 1,\qquad q:=\frac{r'}{p}>1,\qquad \tau:=(t/\varepsilon)^{\beta_0}\geq 1, \] which can be verified via elementary considerations. Observe also that $\tilde g$ is Lipschitz continuous. So, $\tilde g\circ u\in W^{s,p}_0(\Omega)$ turns out to be a suitable test function, because it is bounded and has finite measure support. On account of \eqref{condbeta}, the same argument employed for proving \eqref{moser} produces here $$C\, \|\tilde g\circ u\|_{r'}^{\frac{pr'}{p^*}}\leq \|f\|_{r}\, \|\tilde g\circ u\|_{r'}.$$ As before, this entails \eqref{stimalr1}. {\em Case 4: $r\geq \frac{N}{ps}$}.\\ Without loss of generality, we may suppose $\|u\|_{p^*}=\|f\|_r=1$.
Indeed, if $v^{\lambda, \mu}(x):=\lambda v (\mu x)$ for every $\lambda, \mu>0$ and measurable $v:{\mathbb R}^N\to{\mathbb R}$, then \[ (-\Delta_p)^s u^{\lambda, \mu}=\lambda^{p-1}\mu^{ps} f(\mu x)=f^{\lambda^{p-1} \mu^{ps}, \mu}. \] Since there obviously exist $\bar\lambda, \bar\mu>0$ such that $\|u^{\bar\lambda, \bar\mu}\|_{p^*}=\|f^{\bar\lambda^{p-1}\bar\mu^{ps}, \bar\mu}\|_r=1$, showing \eqref{stimalr2}--\eqref{stimalr3} for $u^{\bar\lambda, \bar\mu}$ actually gives the general case by scaling and homogeneity. Define \begin{equation*} \tilde\beta_0:=p^*,\quad \tilde\beta_{n+1}:=p^*\frac{\frac{\tilde\beta_{n}}{r'}+p-1}{p}, \end{equation*} and test the equation $(-\Delta_p)^su=f$ with $(u_{k, \varepsilon})^{\beta_n}$, where $$\beta_n:=\tilde\beta_n/r'\geq\tilde\beta_0/r'\geq p>1.$$ Then \eqref{moser} reads \begin{equation}\label{moser2} C_{n+1}\left\|u_{k, \varepsilon}\right\|_{\tilde\beta_{n+1}}^{\tilde\beta_{n+1}\frac{p}{p^*}}\leq \left\|u_{k, \varepsilon}\right\|_{\tilde\beta_n}^{\frac{\tilde\beta_n}{r'}},\quad C_1\|u_{k, \varepsilon}\|_{\tilde\beta_1}^{\tilde\beta_{1}\frac{p}{p^*}}\leq 1 \end{equation} because $\|u\|_{p^*}=\|f\|_r=1$. Now, from $r\geq \frac{N}{ps}$ it follows $\tilde\beta_n\to +\infty$ as $n\to +\infty$. So, if $t\geq p^*$ then $\tilde\beta_{n}\leq t\leq \tilde\beta_{n+1}$ for some $n\in{\mathbb N}$. By interpolation one has $$\|u_{k, \varepsilon}\|_t\leq \|u_{k, \varepsilon}\|_{\tilde\beta_n}^\theta\|u_{k, \varepsilon}\|_{\tilde\beta_{n+1}}^{1-\theta}\leq C'_n \|u_{k, \varepsilon}\|_{\tilde\beta_{n}}^{a_{n}}\leq C'_{n-1} \|u_{k, \varepsilon}\|_{\tilde\beta_{n-1}}^{a_{n-1}}\leq \dots\leq C'_0\|u_{k, \varepsilon}\|_{p^*}^{a_0}= C_0'(t)$$ with appropriate $a_n, C'_n>0$. Letting $k\to +\infty$ and $\varepsilon\to 0^+$ yields \eqref{stimalr2} after scaling back. Finally, suppose $r>\frac{N}{ps}$. Through \eqref{constG} we achieve $C_n\geq C/\tilde\beta_n^{p-1}$ for any sufficiently large $n$.
This polynomial decay ensures that \eqref{moser2} can be iterated {\em ad infinitum} provided $\{\tilde\beta_n\}$ grows geometrically, which holds true since $r>\frac{N}{ps}$. One thus has \[ \|u_{k, \varepsilon}\|_\infty=\lim_{n\to +\infty}\|u_{k, \varepsilon}\|_{\tilde\beta_{n+1}}\leq \lim_{n\to +\infty}C_n^{-\frac{p^*}{p\tilde\beta_{n+1}}}\|u_{k, \varepsilon}\|_{\tilde{\beta}_n}^{\frac{p^*}{pr'} \frac{\tilde\beta_n}{\tilde\beta_{n+1}}}\leq \dots \leq C(N, p, s, r), \] and the proof of \eqref{stimalr3} proceeds as before. \end{proof} The next corollary shows that the (lower) summability threshold on $f=(-\Delta_p)^s u$ at which $u$ exhibits a better decay rate than the natural one is $r=(p^*)'$. \begin{corollary} \label{cordecay} Let $N>ps$ and let $u\in \dot{W}^{s,p}({\mathbb R}^N)$ be a radial, radially decreasing weak solution of $(-\Delta_p)^s u=f$ in ${\mathbb R}^N$, where $f\in L^r({\mathbb R}^N)$ for some $1\leq r\leq \frac{p^*}{p^*-1}$. Then, for a suitable $C=C(N, p, s)$ it holds \begin{equation}\label{decay} |u(R)|\leq \frac{C\, \|f\|_r^{\frac{1}{p-1}}}{R^{\frac{N-psr}{(p-1)r}}}\quad \forall\, R>0. \end{equation} \end{corollary} \begin{proof} The conclusion directly follows from \eqref{stimalr4}, \eqref{stimalr1}, and the decay estimates for radially decreasing functions in Lorentz spaces established in \cite[Lemma 2.9]{BMS}. It suffices to observe that $N>ps$ forces $\frac{p^*}{p^*-1}<\frac{N}{ps}$ and that $r\leq \frac{p^*}{p^*-1}$ means $p^*\geq \frac{N(p-1)r}{N-psr}$. \end{proof} Notice that, if $r\geq \frac{p^*}{p^*-1}$, then the natural summability $u\in L^{p^*}({\mathbb R}^N)$ provides a faster decay rate for radially decreasing functions than the one deduced from \eqref{stimalr1}--\eqref{stimalr3}, namely $$|u(R)|\leq C\, \|u\|_{p^*}\, R^{-\frac{N}{p^*}},\quad R>0.$$ The following lemma provides a higher regularity estimate in Besov spaces.
\begin{lemma}[Regularity estimate]\label{regest} Let $p,r,t>1$ and $\theta\in \ ]0, 1]$ be such that \begin{equation}\label{deftheta} \frac{\theta}{p}+\frac{1-\theta}{t}=\frac{1}{r'}. \end{equation} Suppose $u\in L^t({\mathbb R}^N) \cap \dot W^{s,p}({\mathbb R}^N)$ and $f\in L^r({\mathbb R}^N)\cap \dot{W}^{-s, p'}({\mathbb R}^N)$ satisfy $(-\Delta_p)^s u=f$ weakly in ${\mathbb R}^N$, as per \eqref{locweak}. Then \begin{align}\label{Bp>2} \sup_{|h|>0}\left\|\frac{\delta^2_hu}{|h|^{\frac{sp}{p-\theta}}}\right\|_p&\leq C\, \|f\|_{r}^{\frac{1}{p-\theta}}\, \|u\|_t^{\frac{1-\theta}{p-\theta}}&&\text{if $p\geq 2$},\\ \label{Bp<2} \sup_{|h|>0}\left\|\frac{\delta^2_hu}{|h|^{\frac{2s}{2-\theta}}}\right\|_p&\leq C\, \|f\|_r^{\frac{1}{2-\theta}}\, \|u\|_t^{\frac{1-\theta}{2-\theta}}\, [u]_{s,p}^{\frac{2-p}{2-\theta}}& &\text{if $1<p<2$,} \end{align} with appropriate constant $C:=C(N, p, s, r, t)>0$. \end{lemma} \begin{proof} Pick $h\in {\mathbb R}^N\setminus\{0\}$. By translation invariance one has \[ \langle (-\Delta_p)^s u_h, \varphi\rangle =\int_{{\mathbb R}^N} f_h\, \varphi\, dx,\qquad \langle (-\Delta_p)^s u, \varphi\rangle =\int_{{\mathbb R}^N} f\, \varphi\, dx, \] which entails \begin{equation}\label{eqdiff} \langle (-\Delta_p)^s u_h-(-\Delta_p)^s u, \varphi\rangle =\int_{{\mathbb R}^N}\varphi\, \delta_h f\, dx=\int_{{\mathbb R}^N} f\, \delta_{-h}\varphi\, dx. \end{equation} Observe next that $\delta^2_h u$ turns out to be a viable test function for a.e. $h\neq 0$. Indeed, from $$[u]_{s,p}^p=\int_{{\mathbb R}^N}\|\delta_h u\|_p^p\, \frac{dh}{|h|^{N+ps}}<+\infty$$ we evidently infer $\|\delta_h u\|_p<+\infty$ for almost every $h$, even when neither $u$ nor $u_h$ lie in $L^p({\mathbb R}^N)$. The continuity of $L^p$-norm with respect to translation yields $\delta_h u\in L^p({\mathbb R}^N)$, whence $\delta^2_hu\in L^p({\mathbb R}^N)$, because $\|\delta^2_h u\|_p\leq2\, \|\delta_h u\|_p$. 
Exploiting \eqref{deftheta} (with $\theta:=1$ if $p=r'=t$), H\"older's inequality and the inequality $\|\delta^2_h u\|_t\leq 4\, \|u\|_t$, easily provides \begin{equation}\label{fd2u} \|f\, \delta^2_h u\|_1\leq \|f\|_r\, \|\delta^2_h u\|_{r'}\leq \|f\|_r\, \|\delta^2_h u\|^\theta_p\, \|\delta^2_h u\|_t^{1-\theta}\leq 4\, \|f\|_r\, \|\delta^2_h u\|^\theta_p\, \| u\|_t^{1-\theta}. \end{equation} Hence, $f\, \delta_h^2 u\in L^1({\mathbb R}^N)$, as desired, for a.e. $h\neq 0$. Since we will take the essential supremum in $h$, we can assume that this holds for any $h\neq 0$. We can thus set $\varphi:=\delta_hu$ in \eqref{eqdiff}, whose left-hand side becomes \begin{equation}\label{lhs} \begin{split} &\langle (-\Delta_p)^s u_h-(-\Delta_p)^s u, \delta_h u\rangle=\\ &\int_{{\mathbb R}^{2N}}\frac{\left((u_h(x)-u_h(y))^{p-1}-(u(x)-u(y))^{p-1}\right)\left((u_h(x)-u_h(y))-(u(x)-u(y))\right)}{|x-y|^{N+ps}} \,dx \,dy. \end{split} \end{equation} Now, the proof naturally splits into two cases. {\em Case 1: $p\geq 2$}.\\ The known inequality \begin{equation}\label{dp1} (a^{p-1}-b^{p-1})(a-b)\geq c_{p}|a-b|^p\quad \forall\, a, b\in {\mathbb R} \end{equation} (see, e.g., \cite[10(I)]{L}), when applied to \eqref{lhs} with $a:=u_h(x)-u_h(y)$ and $b:=u(x)-u(y)$, furnishes $$\langle (-\Delta_p)^s u_h-(-\Delta_p)^s u, \delta_h u\rangle\geq c_p[\delta_h u]_{s,p}^p.$$ Through \eqref{eqdiff}--\eqref{fd2u}, this entails \begin{equation}\label{newone} \left[\delta_hu\right]_{s,p}^p\leq C\, \|f\|_r\, \|u\|_t^{1-\theta}\, \|\delta^2_h u\|_p^\theta. 
\end{equation} Since, by Lemma A1 of \cite{BLP}, \begin{equation}\label{BL} \sup_{|h|>0}\left\|\frac{\delta_h^2 v}{|h|^\sigma}\right\|_p\leq 2\sup_{|h|>0}\left\|\frac{\delta_h v}{|h|^\sigma}\right\|_p\le C\, [v]_{\sigma, p}\quad \forall\, \sigma\in \ ]0, 1[, \quad p\geq 1, \end{equation} we have $$\left\|\frac{\delta^2_hu}{|h|^{s+\frac{\theta\beta}{p}}}\right\|_p =\frac{1}{|h|^{\frac{\theta\beta}{p}}}\left\|\frac{\delta_h(\delta_{-h} u)}{|h|^s}\right\|_p \leq \frac{1}{|h|^{\frac{\theta\beta}{p}}}\sup_{|k|>0}\left\|\frac{\delta_{k}(\delta_{-h}u)}{|k|^s}\right\|_p \leq \frac{C}{|h|^{\frac{\theta\beta}{p}}}[\delta_{-h}u]_{s,p} =C\left[\frac{\delta_h u}{|h|^\frac{\theta\beta}{p}}\right]_{s,p},$$ which, on account of \eqref{newone}, easily leads to $$\left\|\frac{\delta^2_hu}{|h|^{s+\frac{\theta}{p}\beta}}\right\|_p^p\leq C\left[\frac{\delta_h u}{|h|^\frac{\theta\beta}{p}}\right]_{s,p}^p\leq C\, \|f\|_r\, \|u\|_t^{1-\theta}\, \left\|\frac{\delta^2_h u}{|h|^{\beta}}\right\|_p^{\theta}$$ for any fixed $h\neq 0$, $\beta>0$. If $\beta:=s$ then the right-hand side is finite, because $u\in \dot{W}^{s,p}({\mathbb R}^N)$ and \eqref{BL} holds. We can thus iterate on the differentiability orders $\beta_n$ defined as \[ \begin{cases} \beta_0:=s,\\ \beta_{n+1}:=s+\frac{\theta}{p}\beta_n, \end{cases} \quad\Rightarrow\quad \lim_{n\to+\infty}\beta_n=\frac{s}{1-\frac{\theta}{p}}=:\beta_\infty, \] producing the inequality \[ \left\|\frac{\delta^2_hu}{|h|^{\beta_{n}}}\right\|_p\leq \left(C\, \|f\|_r\, \|u\|_t^{1-\theta}\right)^{\frac{1}{p}\sum_{i=0}^{n-1}\frac{\theta^i}{p^i}}\left\|\frac{\delta^2_h u}{|h|^{\beta_0}}\right\|^{\frac{\theta^{n}}{p^{n}}}_p,\quad n\in{\mathbb N}. 
\] Since $\theta/p<1$, one arrives at \[ \left\|\frac{\delta^2_hu}{|h|^{\beta_\infty}}\right\|_p=\lim_{n\to +\infty}\left\|\frac{\delta^2_hu}{|h|^{\beta_n}}\right\|_p\leq C\, \|f\|_r^{\frac{1}{p-\theta}}\, \|u\|_t^{\frac{1-\theta}{p-\theta}}, \] and \eqref{Bp>2} follows (recall that the previous inequality holds for a.e. $h\neq 0$). {\em Case 2: $1<p<2$}.\\ It is known that \eqref{dp1} no longer holds. Nevertheless, \begin{equation}\label{dp2} (a^{p-1}-b^{p-1})(a-b)\geq c_p\frac{|a-b|^2}{(a^2+b^2)^{\frac{2-p}{2}}}\quad\forall\, (a,b)\in{\mathbb R}^2\setminus\{(0,0)\}; \end{equation} cf. \cite[Lemma B.4]{BP}. Setting $a:=u_h(x)-u_h(y)$, $b:=u(x)-u(y)$, and raising \eqref{dp2} to $p/2$, we obtain \[ \begin{split} |\delta_hu(x)-\delta_h u(y)|^p\leq &c_p^{\frac{p}{2}}\left[\left( (u_h(x)-u_h(y))^{p-1}-(u(x)-u(y))^{p-1}\right)(\delta_hu(x)-\delta_hu(y))\right]^{\frac{p}{2}}\times\\ &\times \left[ |u_h(x)-u_h(y)|^{2}+|u(x)-u(y)|^{2}\right]^{\frac{2-p}{2}\frac{p}{2}}. \end{split} \] Next, multiply by $|x-y|^{-N-ps}$, integrate over ${\mathbb R}^{2N}$, and apply H\"older's inequality with exponents $\frac{2}{p}$, $\frac{2}{2-p}$. Thanks to \eqref{lhs}, this entails \[ [\delta_h u]^p_{s,p}\leq C\langle (-\Delta_p)^s u_h-(-\Delta_p)^s u, \delta_h u\rangle^{\frac{p}{2}}\left(\int_{{\mathbb R}^{2N}}\frac{\left( |u_h(x)-u_h(y)|^{2}+|u(x)-u(y)|^{2}\right)^{\frac{p}{2}}}{|x-y|^{N+ps}}dxdy\right)^{1-\frac{p}{2}}, \] which, through \eqref{eqdiff}, \eqref{fd2u}, besides the sub-additivity of $\tau\mapsto |\tau|^{\frac{p}{2}}$, gives \[ [\delta_h u]^p_{s,p}\leq C\left(\|f\|_r\, \|\delta^2_h u\|_{r'}\right)^{\frac{p}{2}}\left([u_h]_{s,p}^p+[u]_{s,p}^p\right)^{1-\frac{p}{2}}\leq C\, [u]_{s,p}^{p(1-\frac{p}{2})}\left(\|f\|_r\, \|u\|_t^{1-\theta}\, \|\delta^2_h u\|_{p}^\theta\right)^{\frac{p}{2}}. \] Pick $\beta>0$ and divide by $|h|^{\beta\theta\frac{p}{2}}$. 
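For clarity, the division just performed yields
\[
\left[\frac{\delta_h u}{|h|^{\frac{\beta\theta}{2}}}\right]_{s,p}^p\leq C\, [u]_{s,p}^{p(1-\frac{p}{2})}\left(\|f\|_r\, \|u\|_t^{1-\theta}\right)^{\frac{p}{2}}\left\|\frac{\delta^2_h u}{|h|^{\beta}}\right\|_{p}^{\frac{\theta p}{2}},
\]
and, exactly as in Case 1, \eqref{BL} applied to $\delta_{-h}u$ shows that the left-hand side controls $\left\|\delta^2_hu/|h|^{s+\frac{\beta\theta}{2}}\right\|_p^p$ up to a constant.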
As before, we obtain \[ \left\|\frac{\delta^2_h u}{|h|^{s+\beta\frac{\theta}{2}}}\right\|_p\leq C\, [u]_{s,p}^{(1-\frac{p}{2})}\left(\|f\|_r\, \|u\|_t^{1-\theta}\right)^{\frac{1}{2}}\left\|\frac{\delta^2_h u}{|h|^{\beta}}\right\|_{p}^{\frac{\theta}{2}}. \] Let us finally iterate on the differentiability orders $\beta_n$ defined as \[ \begin{cases} \beta_0:=s,\\ \beta_{n+1}:=s+\frac{\theta}{2}\beta_n \end{cases} \quad\Rightarrow\quad\lim_{n\to +\infty}\beta_n= \frac{2s}{2-\theta}, \] to achieve the inequality \[ \left\|\frac{\delta^2_h u}{|h|^{\frac{2s}{2-\theta}}}\right\|_p\leq \left[C\, [u]_{s,p}^{(1-\frac{p}{2})}\left(\|f\|_r\, \|u\|_t^{1-\theta}\right)^{\frac{1}{2}}\right]^{\sum_{i=0}^{+\infty}\frac{\theta^i}{2^i}}, \] valid for a.e. $h\neq 0$, whence \eqref{Bp<2} follows after an elementary calculation. \end{proof} \begin{remark} Estimates \eqref{Bp>2}--\eqref{Bp<2} can naturally be recast in the framework of Besov spaces. Putting \begin{equation}\label{defsigma} \sigma:= \begin{cases} s\frac{p}{p-\theta}&\text{if $p\geq 2$},\\ s\frac{2}{2-\theta}&\text{if $1<p<2$}, \end{cases} \end{equation} the conditions $s\in \ ]0, 1[$, $\theta\in\ ]0, 1]$ force $\sigma\in\ ]s, 2s]\subseteq \ ]0, 2[$. So, the left-hand sides of \eqref{Bp>2}--\eqref{Bp<2} read as $[u]_{B^{\sigma}_{p, \infty}}$. Further, when $r<\frac{N}{ps}$ and $2\leq p\leq r'<p^*$, combining \eqref{stimalr1} with \eqref{Bp>2} easily yields $$[u]_{B^{\sigma}_{p, \infty}}\leq C\, \|f\|_r^{\frac{1}{p-1}}.$$ \end{remark} \section{Decay estimates} We are now ready to prove the pointwise and Sobolev estimates stated in Section 1. \begin{lemma}[Interpolation inequality]\label{intbesov} Let $p>1>\tau>s>0$, $\gamma\in\ ]0, p[$, and $\mu\in \ ]0, 1[$.
Then there exists a constant $C:=C(N,p,s,\gamma,\mu,\tau)>0$ such that $$[u]_{s, \gamma}\leq C\, R^{\frac{N}{\gamma}-\frac{N}{p}+\mu(\tau-s)}\, [u]_{B^{\tau}_{p, \infty}}^\mu\, [u]_{s,p}^{1-\mu}$$ for every $u\in \dot B^{\tau}_{p,\infty}({\mathbb R}^N)\cap \dot W^{s,p}({\mathbb R}^N)$ with ${\rm supp} (u)\subseteq B_R$. \end{lemma} \begin{proof} Suppose $R=1$. Observe that if $|h|>2$ then $u$, $u_h$, and $u_{2h}$ have disjoint supports. Hence, \begin{equation}\label{ugu} {\rm supp}(u)\subseteq B_1\quad \Rightarrow \quad\|\delta^2_hu\|_q=2^{1/q} \|\delta_h u\|_q=4^{1/q}\|u\|_q\quad \text{for any $|h|>2$, $q>0$,} \end{equation} which implies \[ \begin{split} [u]_{s,\gamma}^\gamma&=\int_{|h|\le 2}\frac{\|\delta_h u\|_\gamma^\gamma}{|h|^{s\gamma}}\frac{dh}{|h|^N}+2\int_{|h|> 2}\frac{\|u\|_\gamma^\gamma}{|h|^{s\gamma}}\frac{dh}{|h|^N}=\int_{|h|\le 2}\frac{\|\delta_h u\|_\gamma^\gamma}{|h|^{s\gamma}}\frac{dh}{|h|^N}+C_1\, \|u\|_\gamma^\gamma\\ &\leq \int_{|h|\le 2}\frac{\|\delta_h u\|_\gamma^\gamma}{|h|^{s\gamma}}\frac{dh}{|h|^N}+C_2\, \|u\|_p^\gamma. 
\end{split} \] The first term will be estimated through successive applications of H\"older's inequality: \[ \begin{split} \int_{|h|\leq 2}\frac{\|\delta_h u\|_\gamma^\gamma}{|h|^{s\gamma}}\frac{dh}{|h|^N}&\leq C_3\int_{|h|\leq 2}\frac{\|\delta_h u\|_p^\gamma}{|h|^{s\gamma}}\frac{dh}{|h|^N}\\ &=C_3\int_{|h|\leq 2}\frac{\|\delta_h u\|_p^{(1-\mu)\gamma}}{|h|^{s(1-\mu)\gamma}}\frac{\|\delta_h u\|_p^{\mu\gamma}}{|h|^{\tau\mu\gamma}}\frac{dh}{|h|^{N-(\tau-s)\mu\gamma}}\\ &\leq C_3\, [u]_{B^{\tau}_{p,\infty}}^{\mu\gamma}\int_{|h|\leq 2}\frac{\|\delta_h u\|_p^{(1-\mu)\gamma}}{|h|^{s(1-\mu)\gamma}}|h|^{(\tau-s)\mu\gamma}\frac{dh}{|h|^{N}}\\ &\leq C_3\, [u]_{B^{\tau}_{p,\infty}}^{\mu\gamma}\left(\int_{|h|\leq 2}\frac{\|\delta_h u\|_p^{p}}{|h|^{sp}}\frac{dh}{|h|^{N}}\right)^{\frac{(1-\mu)\gamma}{p}}\left(\int_{|h|\leq 2}|h|^{(\tau-s)\mu\gamma}\frac{dh}{|h|^{N}}\right)^{1-\frac{(1-\mu)\gamma}{p}}\\ &\leq C_4\, [u]_{B^{\tau}_{p, \infty}}^{\mu\gamma}[u]_{s,p}^{(1-\mu)\gamma}. \end{split} \] To evaluate the other term we use \eqref{ugu} and obtain \[ \|u\|_p^\gamma=C_5\, \|u\|_p^\gamma\left(\int_{|h|>2}\frac{dh}{|h|^{N+ps}}\right)^{\frac{\gamma}{p}}=\frac{C_5}{2^{\gamma/p}}\left(\int_{|h|>2}\frac{\|\delta_hu\|_p^p}{|h|^{ps}}\frac{dh}{|h|^{N}}\right)^{\frac{\gamma}{p}}\leq \frac{C_6}{2^{\gamma/p}}[u]_{s,p}^\gamma. \] Similarly, by \eqref{ugu} again, \[ \|u\|_p^\gamma=2^{\tau\gamma}\sup_{|h|>2} \left(\frac{\|u\|_p}{|h|^\tau}\right)^\gamma=\frac{2^{\tau\gamma}}{4^{\gamma/p}} \left(\sup_{|h|>2}\left\|\frac{\delta_{h}^2u}{|h|^\tau}\right\|_p \right)^\gamma\leq \frac{2^{\tau\gamma}}{4^{\gamma/p}}[u]_{B^{\tau}_{p,\infty}}^\gamma. \] Taking the geometric mean (with weights $\mu$ and $1-\mu$) of the last two bounds yields \[ \|u\|_p^\gamma\leq C_7\, [u]_{B^{\tau}_{p, \infty}}^{\mu\gamma}[u]_{s,p}^{(1-\mu)\gamma}, \] as desired. Now, the general case $R\neq 1$ follows from a standard scaling argument.
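For completeness, the scaling argument can be made explicit: setting $u_R(x):=u(Rx)$, so that ${\rm supp}(u_R)\subseteq B_1$, a change of variables gives
\[
[u_R]_{s,\gamma}=R^{s-\frac{N}{\gamma}}\, [u]_{s,\gamma},\qquad [u_R]_{B^{\tau}_{p,\infty}}=R^{\tau-\frac{N}{p}}\, [u]_{B^{\tau}_{p,\infty}},\qquad [u_R]_{s,p}=R^{s-\frac{N}{p}}\, [u]_{s,p},
\]
and applying the case $R=1$ to $u_R$ produces exactly the factor $R^{\frac{N}{\gamma}-\frac{N}{p}+\mu(\tau-s)}$, since
\[
\mu\Big(\tau-\frac{N}{p}\Big)+(1-\mu)\Big(s-\frac{N}{p}\Big)-\Big(s-\frac{N}{\gamma}\Big)=\frac{N}{\gamma}-\frac{N}{p}+\mu(\tau-s).
\]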
\end{proof} \begin{remark} The conclusion of Lemma \ref{intbesov} actually holds for any $\tau\in\ ]0,p[$, but the proof is slightly more complicated once $\tau\geq 1$, which we do not need here. Moreover, it should be noted that the constant $C$ blows up as $\tau\to s^+$, because $C\geq C_4$ and $$C_4:=C_3\left(\int_{|h|\leq 2}|h|^{(\tau-s)\mu\gamma}\frac{dh}{|h|^{N}}\right)^{1-\frac{(1-\mu)\gamma}{p}}.$$ This is quite natural, since otherwise one would obtain the limiting inequality \[ [u]_{s,\gamma}\leq C\, [u]_{B^s_{p, \infty}}^\mu[u]_{s,p}^{1-\mu},\quad {\rm supp}(u)\subseteq B_1 \] which, when combined with \eqref{BL}, would imply the embedding $\dot{W}^{s,p}(B_1)\hookrightarrow \dot{W}^{s,\gamma}(B_1)$ for all $\gamma<p$. However, this embedding is false as soon as $s\in\ ]0, 1[$ and $1\leq \gamma <p$; cf. \cite{MiS}. \end{remark} \begin{remark} If $\gamma>\frac{N}{N+s}$ then the interpolation inequality above has a very simple proof. In fact, in such a case, the Sobolev space $W^{s,\gamma}(B_1)$ coincides with the Besov space $B^{s}_{\gamma, \gamma}(B_1)$, for which a complete interpolation theory is available. In particular, since $\tau>s$, \cite[Theorem 3.3.6, iii)]{T} gives \[ \left(B^{\tau}_{p, \infty}(B_1); B^s_{p, p}(B_1)\right)_{\mu, \gamma}=B^{\mu \tau+(1-\mu) s}_{p, \gamma}(B_1) \] with $(X; Y)_{\mu, \gamma}$ denoting the Lions-Peetre real interpolation space. Hence, \[ \|u\|_{B^{\mu \tau+(1-\mu) s}_{p, \gamma}(B_1)}\leq C\, \|u\|_{B^{\tau}_{p, \infty}(B_1)}^\mu\|u\|_{B^{s}_{p, p}(B_1)}^{1-\mu}, \] where $\|\cdot\|_{B^{\sigma}_{q, r}(B_1)}=[\cdot]_{B^{\sigma}_{q, r}(B_1)}+\|\cdot\|_q$. 
On the other hand, classical embedding theorems \cite[Theorem 3.3.1, i]{T} yield $B^{\mu \tau+(1-\mu) s}_{p, \gamma}(B_1)\hookrightarrow B_{\gamma, \gamma}^s(B_1)$ because $\tau>s$ and $p>\gamma$, whence one readily infers the interpolation inequality \[ \|u\|_{B_{\gamma, \gamma}^s(B_1)}\leq C\, \|u\|_{B^{\tau}_{p, \infty}(B_1)}^\mu\|u\|_{B^{s}_{p, p}(B_1)}^{1-\mu}, \quad \tau>s, \ \mu\in\ ]0, 1[. \] For us, the issue with this proof is twofold: not only would one need a homogeneous version of the previous inequality, but, more substantially, we will work with values of $\gamma$ which can be smaller than the threshold $\frac{N}{N+s}$. \begin{lemma}\label{Glemma} Suppose $p>1>s>0$, $N>ps$, $\sigma\in \ ]s, 2[$, $r\in[1,\frac{p^*}{p^*-1}]$. If $u\in \dot W^{s,p}({\mathbb R}^N)\cap \dot B^\sigma_{p, \infty}({\mathbb R}^N)$ is a radially non-increasing weak solution of $(-\Delta_p)^su=f$, where $f\in L^r({\mathbb R}^N)$, then \[ [u]_{s,\gamma}<+\infty \quad \text{for every $\gamma\in \ ]r^*(p-1), p]$}. \] \end{lemma} \begin{proof} Fix $\tau\in\ ]s,\min\{\sigma, 1\}[$ and $\lambda\in \ ]0, 1[$ such that $\tau=\lambda s+(1-\lambda)\sigma$. Since $$\frac{\|\delta^2_hu\|_p}{|h|^\tau}=\frac{\|\delta^2_hu\|_p^\lambda}{|h|^{\lambda s}}\, \frac{\|\delta^2_hu\|_p^{1-\lambda}}{|h|^{(1-\lambda)\sigma}},$$ the elementary interpolation inequality $$[u]_{B^{\tau}_{p, \infty}}\leq [u]_{B^{s}_{p, \infty}}^\lambda[u]_{B^\sigma_{p, \infty}}^{1-\lambda}$$ holds. Thus, on account of \eqref{BL}, \begin{equation}\label{hh} [u]_{B^{\tau}_{p, \infty}}\leq C\, [u]_{s, p}^\lambda\, [u]_{B^\sigma_{p, \infty}}^{1-\lambda}. \end{equation} As already pointed out, $N>ps$ forces $\frac{p^*}{p^*-1}<\frac{N}{ps}$, whence $r<\frac{N}{ps}$. So, due to \eqref{decay}, we have $\lim_{t\to +\infty} u(t)=0$, where $u(x)=u(|x|)$ by abuse of notation.
Now, consider the horizontal dyadic layer cake decomposition \begin{equation}\label{hdlcd} u_0(t):=(u(t)-u(1))_+,\quad u_i(t):=\min\{u(2^{i-1})-u(2^i), (u(t)-u(2^{i}))_+\},\quad i\geq 1, \end{equation} of $u$. Setting $A_0:=[0, 1[$ and $A_i:=[2^{i-1}, 2^{i}[$, one can write, whenever $t\in A_k$ for some $k\geq 0$, \[ u_i(t)= \begin{cases} u(2^{i-1})-u(2^{i})&\text{if $i>k$},\\ u(t)-u(2^k)&\text{if $i=k$},\\ 0&\text{if $i<k$}, \end{cases} \] which means \begin{equation}\label{HL} u(t)= u(t)-u(2^k)+\sum_{i=k+1}^{+\infty}[u(2^{i-1})-u(2^{i})]=\sum_{i=0}^{+\infty} u_i(t). \end{equation} The above series converges in $L^\infty({\mathbb R}^N)$ because \[ \|u_i\|_\infty\le u(2^{i-1})\le \frac{C}{2^{b(i-1)}}\|f\|_r^{\frac{1}{p-1}},\quad\mbox{with}\quad b:=\frac{N}{p-1}\left(\frac 1 r-\frac{ps}{N}\right)>0, \] due to \eqref{decay} and the monotonicity of $u$. Observe next that $u_i=g_i\circ u$ for some $1$-Lipschitz continuous function $g_i$. Hence, $|\delta_h u_i|\leq |\delta_h u|$ and using \eqref{BL} in \eqref{hh} produces \[ [u_i]_{B^{\tau}_{p, \infty}}\leq C\, [u]_{s, p}^\lambda\, [u]_{B^\sigma_{p, \infty}}^{1-\lambda}. \] By Lemma \ref{intbesov}, for every $\mu\in \ ]0, 1[$ there exists a constant $C_\mu=C(N,p,s,\gamma,\mu,\tau)>0$ fulfilling \[ [u_i]_{s,\gamma}\leq C_\mu\, 2^{i\left(\frac{N}{\gamma}-\frac{N}{p}+\mu(\tau-s)\right)}[u_i]_{B^\tau_{p, \infty}}^\mu[u_i]_{s,p}^{1-\mu}. \] Therefore, \begin{equation}\label{ui} [u_i]_{s,\gamma}\leq C_\mu\, 2^{i\left(\frac{N}{\gamma}-\frac{N}{p}+\mu(\tau-s)\right)}[u]_{s, p}^{\lambda\mu}\, [u]_{B^\sigma_{p, \infty}}^{\mu(1-\lambda)}\, [u_i]_{s, p}^{1-\mu}. \end{equation} Since $u$ is radially non-increasing, $u_i\in W^{s,p}_0(B_{2^i})\cap L^\infty({\mathbb R}^N)$, $i\geq 0$, namely $u_i$ turns out to be a suitable test function. 
Via Lemma \ref{lemmag}, besides the properties of $u_i$, we thus obtain \[ [u_i]_{s,p}^p\leq \langle (-\Delta_p)^s u, u_i\rangle=\int_{{\mathbb R}^N} f\, u_i\, dx\leq\|f\|_{r}\, u(2^{i-1})\, \omega_N^{\frac{1}{r'}}(2^i)^{\frac{N}{r'}}. \] Using Corollary \ref{cordecay}, this entails, \[ [u_i]_{s,p}^p\leq C_\mu\, \|f\|_r^{p'}(2^{i})^{\frac{N}{r'}+\frac{N}{p-1}(\frac{ps}{N}-\frac{1}{r})}, \] which, when inserted into \eqref{ui}, gives \begin{equation}\label{ui2} [u_i]_{s,\gamma}\leq C_\mu\, \|f\|_r^{\frac{1}{p-1}}\, [u]_{s, p}^{\lambda\mu}\, [u]_{B^\sigma_{p, \infty}}^{\mu(1-\lambda)}\, 2^{iNa_\mu}, \end{equation} where, to avoid cumbersome formulas, \[ a_\mu:=a(p, s, \gamma, r, \tau, \mu):=\frac{1}{\gamma}-\frac{1}{p}+\mu(\tau-s)+\frac{1-\mu}{p}\left(\frac{1}{r'}+\frac{1}{p-1}(\frac{ps}{N}-\frac{1}{r})\right). \] Finally, since \[ \gamma>r^*(p-1)=\frac{Nr(p-1)}{N-sr}\quad \Leftrightarrow \quad a_0=\frac{1}{\gamma}-\frac{1}{p}+\frac{1}{p}\left(\frac{1}{r'}+\frac{1}{p-1}(\frac{ps}{N}-\frac{1}{r})\right)<0 \] we can find a sufficiently small $\mu>0$ such that $a_\mu<0$. If $\gamma\geq 1$ then \eqref{HL}, the triangle inequality, and \eqref{ui2} yield \[ [u]_{s,\gamma}\leq C_\mu\, \|f\|_r^{\frac{1}{p-1}}\, [u]_{s, p}^{\lambda\mu}\, [u]_{B^\sigma_{p, \infty}}^{\mu(1-\lambda)}\sum_{i=0}^{+\infty}2^{iNa_\mu}<+\infty, \] as desired. So, suppose $r^*(p-1)<1$ and $\gamma \in\ ]r^*(p-1), 1[$. From $\sum_{i=0}^{+\infty} u_i=u$ a.e. in ${\mathbb R}^N$ it follows \[ \lim_{n\to +\infty} \frac{\left|\sum_{i=0}^{n} u_i(x)-\sum_{i=0}^{n} u_i(y)\right|^\gamma}{|x-y|^{N+\gamma s}}=\frac{|u(x)-u(y)|^\gamma}{|x-y|^{N+\gamma s}} \] for almost all $(x, y)\in {\mathbb R}^{2N}$. 
Now, Fatou's lemma and the subadditivity of $\vartheta\mapsto \vartheta^\gamma$ lead to \[ [u]_{s, \gamma}^\gamma\leq \liminf_{n\to +\infty}\int_{{\mathbb R}^{2N}} \frac{\left|\sum_{i=0}^{n} u_i(x)-\sum_{i=0}^{n} u_i(y)\right|^\gamma}{|x-y|^{N+\gamma s}}\, dx\, dy\leq \lim_{n\to +\infty}\sum_{i=0}^n[u_i]_{s,\gamma}^\gamma\, , \] and one can conclude as before using \eqref{ui2}. This completes the proof. \end{proof} Theorem \ref{MT} will be a consequence of Lemmas \ref{Rlemma}, \ref{Glemma}, and the next two. \begin{lemma}\label{luno} Let $N>ps$, $q>p$, and $\alpha\in[0,ps[$ satisfy \eqref{scalingrelation}. Let moreover $u\in\dot W^{s,p}({\mathbb R}^N)$ be a nonnegative minimizer of \eqref{I} with $\lambda:=1$. Then $u$ is radially non-increasing around some point and for appropriate $f\in L^1({\mathbb R}^N)$ one has $(-\Delta_p)^s u=f$ weakly as per \eqref{locweak}. \end{lemma} \begin{proof} We first stress that such a $u$ exists due to Theorem \ref{exist}. Moreover, by Lemma \ref{radiality}, it turns out to be radially non-increasing around some point, while standard arguments, chiefly based on \eqref{I}, yield \begin{equation}\label{wf} \langle (-\Delta_p)^s u, v\rangle= I_1 \int_{{\mathbb R}^N}\frac{u^{q-1}}{|x|^\alpha}\, v\, dx\quad \forall\, v\in \dot{W}^{s,p}({\mathbb R}^N), \end{equation} where the integral on the right-hand side is absolutely convergent because of H\"older's and Hardy-Sobolev's inequalities. The conclusion will be achieved once one verifies that $x\mapsto u(x)^{q-1}/|x|^{\alpha}$ lies in $L^1({\mathbb R}^N)$. Set, for any $\varepsilon>0$, \[ \psi_\varepsilon(t):=\int_0^t \left[(\varepsilon+\tau)^{-\frac{1}{q}}-\frac{1}{q}\tau(\varepsilon+\tau)^{-1-\frac{1}{q}}\right]^p\,d\tau, \quad t\in{\mathbb R}^+_0; \] cf. the proof of \cite[Proposition 3.3]{BMS}.
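Incidentally, the integrand defining $\psi_\varepsilon$ is the $p$-th power of an exact derivative: since
\[
\frac{d}{d\tau}\left[\frac{\tau}{(\varepsilon+\tau)^{\frac{1}{q}}}\right]=(\varepsilon+\tau)^{-\frac{1}{q}}-\frac{1}{q}\,\tau\,(\varepsilon+\tau)^{-1-\frac{1}{q}}>0,
\]
the closed formula for $\Psi_\varepsilon$ given below is immediate.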
The function $\psi_\varepsilon:[0,+\infty)\to{\mathbb R}$ is Lipschitz continuous, increasing, and fulfills \begin{equation}\label{geps} 0\le \psi_\varepsilon(t)\leq \int_0^t (\varepsilon+\tau)^{-\frac{p}{q}}\,d\tau=\frac{1}{1-\frac{p}{q}}((\varepsilon+t)^{1-\frac{p}{q}}-\varepsilon^{1-\frac{p}{q}})\leq \frac{1}{1-\frac{p}{q}}(\varepsilon+t)^{-\frac{p}{q}}t. \end{equation} Further, if \[ \Psi_\varepsilon(t):=\int_0^t \psi'_\varepsilon(\tau)^\frac{1}{p}\,d\tau=\frac{t}{(\varepsilon+t)^{\frac{1}{q}}} \] then \eqref{wf}, written with $v:=\psi_\varepsilon\circ u\in \dot{W}^{s,p}(\mathbb{R}^N)$, and Lemma \ref{lemmag} entail \[ [\Psi_\varepsilon(u)]_{s,p}^p\leq \langle (-\Delta_p)^s u, \psi_\varepsilon\circ u\rangle= I_1\int_{{\mathbb R}^N} \frac{u^{q-1}\psi_\varepsilon\circ u}{|x|^\alpha}\, dx. \] Using \eqref{HS} on the left-hand term and \eqref{geps} on the right-hand one, we arrive at \begin{equation}\label{lhrhs} \int_{{\mathbb R}^N}\frac{u^q}{u+\varepsilon}\frac{dx}{|x|^\alpha}\leq C_1\left(\int_{{\mathbb R}^N}\frac{u^{q-1}\psi_\varepsilon\circ u}{|x|^\alpha}\, dx\right)^{\frac{q}{p}}\leq C_2\left(\int_{{\mathbb R}^N}u^{q-p}\frac{u^p}{(u+\varepsilon)^{\frac{p}{q}}}\frac{dx}{|x|^\alpha}\right)^{\frac{q}{p}}. \end{equation} Observe next that to every $\delta>0$ there corresponds $K>0$ satisfying \[ \int_{\{u<K\}}u^{q}\frac{dx}{|x|^\alpha}<\delta. 
\] Consequently, by H\"older's inequality, \begin{equation}\label{bla} \begin{split} \int_{{\mathbb R}^N}u^{q-p}\frac{u^p}{(u+\varepsilon)^{\frac{p}{q}}}&\frac{dx}{|x|^\alpha}=\int_{\{u\geq K\}}u^{q-p} \frac{u^p}{(u+\varepsilon)^{\frac{p}{q}}}\frac{dx}{|x|^\alpha}+\int_{\{u<K\}}u^{q-p}\frac{u^p}{(u+\varepsilon)^{\frac{p}{q}}} \frac{dx}{|x|^\alpha}\\ &\leq \int_{\{u\geq K\}}u^{q-\frac{p}{q}}\frac{dx}{|x|^\alpha}+\left(\int_{\{u<K\}}u^{q}\frac{dx}{|x|^\alpha}\right)^{1-\frac{p}{q}}\left(\int_{{\mathbb R}^N}\frac{u^q}{u+\varepsilon}\frac{dx}{|x|^\alpha}\right)^{\frac{p}{q}}\\ &\leq \int_{\{u\geq K\}}u^{q-\frac{p}{q}}\frac{dx}{|x|^\alpha} +\delta^{1-\frac{p}{q}}\left(\int_{{\mathbb R}^N}\frac{u^q}{u+\varepsilon}\frac{dx}{|x|^\alpha}\right)^{\frac{p}{q}}. \end{split} \end{equation} From \eqref{lhrhs}--\eqref{bla} it now follows \[ \int_{{\mathbb R}^N}\frac{u^q}{u+\varepsilon}\frac{dx}{|x|^\alpha} \leq C_3\left(\int_{\{u\geq K\}}u^{q-\frac{p}{q}}\frac{dx}{|x|^\alpha}\right)^{\frac{q}{p}} +C_4\, \delta^{\frac{q}{p}-1}\int_{{\mathbb R}^N}\frac{u^q}{u+\varepsilon}\frac{dx}{|x|^\alpha}, \] whence \[ \int_{{\mathbb R}^N}\frac{u^q}{u+\varepsilon}\frac{dx}{|x|^\alpha}\leq C\left(\int_{\{u\geq K\}}u^{q-\frac{p}{q}} \frac{dx}{|x|^\alpha}\right)^{\frac{q}{p}} \] provided $\delta$ is sufficiently small. Here, $C:=C(p, s, \alpha)$ and $K:=K(p, s, \alpha, u)$. Since $u^{q-\frac{p}{q}}$ belongs to $ L^1_{\rm loc}(dx/|x|^\alpha)$, letting $\varepsilon\to 0^+$ shows the claim. \end{proof} \begin{lemma}\label{final} Let $N>ps$, $q>p$, $\alpha\in[0,ps[$ satisfy \eqref{scalingrelation} and $u\in\dot W^{s,p}({\mathbb R}^N)$ be a nonnegative minimizer of \eqref{I} with $\lambda:=1$. Then $u\in L^\infty({\mathbb R}^N)$ and $\frac{u^{q-1}}{|x|^\alpha}\in L^r({\mathbb R}^N)$ for every $r\in [1, \frac{N}{\alpha}[$. \end{lemma} \begin{proof} Given $k>0$, $t\geq 0$, and $\beta\geq 1$, we define $g_\beta(t):=t (t_k)^{\beta-1}$, where $t_k:=\min\{t,k\}$. 
An easy verification ensures that $g_\beta\circ u\in \dot W^{s,p}({\mathbb R}^N)$ is a suitable test function in \eqref{wf}. Moreover, since $p\geq 1$, one obtains through elementary considerations \[ G_\beta(t)\geq \frac{p\,\beta^{1/p}}{p+\beta-1}\, g_{\frac{\beta-1}{p}}(t), \] with $G_\beta$ as in \eqref{defG}. Exploiting Lemma \ref{lemmag} and Hardy-Sobolev's inequality entails \[ C_\beta \left(\int_{{\mathbb R}^N} \frac{g_{\frac{\beta-1}{p}}(u)^q}{|x|^\alpha}\, dx\right)^{\frac{p}{q}}\leq [G_\beta]_{s,p}^p\leq \langle (-\Delta_p)^s u, g_\beta\circ u\rangle= \int_{{\mathbb R}^N}\frac{u^q\, u_k^{\beta-1}}{|x|^\alpha}\, dx \] for some $C_\beta=C(N, p, s, \alpha, \beta)$, so that \begin{equation}\label{mhs} \left(\int_{{\mathbb R}^N} \frac{u^q\, u_k^{\frac{q}{p}(\beta-1)}}{|x|^\alpha}\, dx\right)^{\frac{p}{q}}\leq C_\beta\int_{{\mathbb R}^N}\frac{u^q\, u_k^{\beta-1}}{|x|^\alpha}\, dx, \end{equation} for another $C_\beta=C(N, p, s, \alpha, \beta)$. Observe then that for any $K>0$ one has \[ \begin{split} \int_{{\mathbb R}^N}\frac{u^q \, u_k^{\beta-1}}{|x|^\alpha}\, dx&\leq \int_{\{u<K\}}\frac{u^q \, u_k^{\beta-1}}{|x|^\alpha}\, dx+\int_{\{u\geq K\}}\frac{u^q\, u_k^{\beta-1}}{|x|^\alpha}\, dx\\ &\leq K^{\beta-1}\int_{{\mathbb R}^N}\frac{u^q}{|x|^\alpha}\, dx+\left(\int_{\{u\geq K\}}\frac{u^q}{|x|^\alpha}\, dx\right)^{\frac{q-p}{q}}\left(\int_{{\mathbb R}^N}\frac{u^q\, u_k^{\frac{q}{p}(\beta-1)}}{|x|^\alpha}\, dx\right)^{\frac{p}{q}}, \end{split} \] where H\"older's inequality with respect to the measure $u^qdx/|x|^\alpha$ has been used on the second term. Since $u^q\in L^1({\mathbb R}^N, dx/|x|^\alpha)$ and $q>p$, the last term can be reabsorbed on the left of \eqref{mhs} provided $K$ is large enough, thus arriving at \[ \frac{1}{2} \left(\int_{{\mathbb R}^N} \frac{u^q\, u_k^{\frac{q}{p}(\beta-1)}}{|x|^\alpha}\, dx\right)^{\frac{p}{q}}\leq C_\beta\, K^{\beta-1}\int_{{\mathbb R}^N}\frac{u^q}{|x|^\alpha}\, dx. \] Now, let $k\to +\infty$. 
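Since $u_k\uparrow u$ pointwise, the monotone convergence theorem then gives
\[
\left(\int_{{\mathbb R}^N} \frac{u^{q+\frac{q}{p}(\beta-1)}}{|x|^\alpha}\, dx\right)^{\frac{p}{q}}\leq 2\, C_\beta\, K^{\beta-1}\int_{{\mathbb R}^N}\frac{u^q}{|x|^\alpha}\, dx<+\infty,
\]
and the exponent $q+\frac{q}{p}(\beta-1)$ sweeps all of $[q,+\infty)$ as $\beta$ ranges over $[1,+\infty)$.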
As $\beta\geq 1$ was arbitrary, we get $u\in L^t({\mathbb R}^N, dx/|x|^\alpha)$ for all $t\geq q$. By \eqref{stimalr3}, the conclusion $u\in L^\infty({\mathbb R}^N)$ is achieved once one verifies that $u^{q-1}/|x|^\alpha\in L^{\bar r}({\mathbb R}^N)$ for some $\bar r>\frac{N}{ps}$. Hence, fix $R_0>1$ fulfilling $u\leq 1$ on $B_{R_0}^c$. Thanks to Lemma \ref{luno}, the function $f(x):=u(x)^{q-1}/|x|^\alpha$ lies in $L^1({\mathbb R}^N)$. Moreover, $0\leq f\leq 1$ on $B_{R_0}^c$, whence \begin{equation}\label{poi} \int_{B_{R_0}^c} f^\beta\, dx\leq \int_{B_{R_0}^c}f\, dx<+\infty\quad \forall\, \beta\geq 1. \end{equation} Finally, we choose $\bar r\in\ ]\frac{N}{ps},\frac{N}{\alpha}[$ and $t>1$ so large that \[ \alpha(\bar r-\frac{1}{t})t'<N \] so that H\"older's inequality with exponents $t$ and $t'$ yields \[ \int_{B_{R_0}}f^{\bar r} dx=\int_{B_{R_0}}\frac{u^{\bar r(q-1)}}{|x|^{\frac{\alpha}{t}}}\frac{1}{|x|^{\alpha(\bar r-\frac{1}{t})}}\, dx \leq \left(\int_{{\mathbb R}^N}\frac{u^{\bar r(q-1)t}}{|x|^\alpha}\, dx\right)^{\frac{1}{t}} \left(\int_{B_{R_0}}\frac{1}{|x|^{\alpha(\bar r-\frac{1}{t})t'}}\, dx\right)^{1-\frac{1}{t}}. \] Since both integrals are finite, $f\in L^{\bar r}({\mathbb R}^N)$, as desired. The remaining conclusion, namely $f\in L^r({\mathbb R}^N)$ for every $r\in [1, \frac{N}{\alpha}[$, directly follows from the above inequality and \eqref{poi}. \end{proof} \begin{proof}[Proof of Theorem \ref{MT}] Theorem \ref{exist} provides a nonnegative, radially decreasing minimizer $u$ of \eqref{I}. Lemma \ref{luno} and Corollary \ref{cordecay} give the upper bound in \eqref{as1}, while the lower bound can be achieved via the same argument exploited to show \cite[Corollary 3.7]{BMS}. Indeed, by \cite[Corollary 5.5 and Theorem 5.2]{IMS}, $u$ is continuous on ${\mathbb R}^N\setminus \{0\}$ as well as everywhere positive. To get \eqref{as2}, we shall verify the hypotheses of Lemma \ref{Glemma} for $r=1$. 
The summability requirement $f:=(-\Delta_p)^s u\in L^1({\mathbb R}^N)$ is stated in Lemma \ref{luno}. Concerning the regularity of $u$, observe that $N>ps>\alpha$ forces $\frac{p^*}{p^*-1}<\frac{N}{\alpha}$. Consequently, due to Lemma \ref{final}, $u\in L^\infty({\mathbb R}^N)$ and $f\in L^{\frac{p^*}{p^*-1}}({\mathbb R}^N)$. We can now apply Lemma \ref{regest}, with $r:=\frac{p^*}{p^*-1}$, $t:=+\infty$, to arrive at $u\in \dot B^\sigma_{p, \infty}({\mathbb R}^N)$, where $\sigma\in \ ]s, 2[$ is given by \eqref{defsigma}. It remains to show that the function $U$ defined in \eqref{talentiane} satisfies \eqref{as1}--\eqref{as2}. Estimate \eqref{as1} is obvious, so we fix $\gamma>\frac{N(p-1)}{N-s}$ and prove \eqref{as2}. If $U_i$, $i\in{\mathbb N}_0$, denotes the horizontal dyadic layer cake decomposition (see, e.g., \eqref{hdlcd}) of $U$ then ${\rm supp}(U_i)\subseteq B_{2^i}$ and \[ U_i\lfloor_{B_{2^{i-1}}}\equiv U(2^{i-1})-U(2^i)\leq C\, 2^{-i\frac{N-ps}{p-1}},\quad {\rm Lip}(U_i)\leq C\,2^{-i(\frac{N-ps}{p-1}+1)} \] for appropriate $ C=C(p, s, \alpha)>0$. Moreover, $U=\sum_{i=0}^{+\infty} U_i$ pointwise. Using Lemma \ref{23} we obtain \[ \||D^s U_i|^\gamma\|_\infty \le C_1 \, 2^{-i\gamma\frac{N-s}{p-1}},\qquad |D^s U_i|^\gamma(x)\le C_2\, \frac{2^{i(N-\gamma\frac{N-ps}{p-1})}}{|x|^{N+\gamma s}}\quad\forall\,x\in B_{2^{i+1}}^c, \] which entails \[ \begin{split} [U_i]_{s,\gamma}^\gamma&=\int_{B_{2^{i+1}}} |D^s U_i|^\gamma\, dx+\int_{B_{2^{i+1}}^c} |D^s U_i|^\gamma\, dx\\ &\leq C_1\, 2^{-i\gamma\frac{N-s}{p-1}}|B_{2^{i+1}}| +C_2\, 2^{i(N-\gamma\frac{N-ps}{p-1})}\int_{B_{2^{i+1}}^c}\frac{dx}{|x|^{N+\gamma s}}\\ &\leq C_3\, 2^{i(N-\gamma\frac{N-s}{p-1})}+C_4\, 2^{i(N-\gamma\frac{N-ps}{p-1})}2^{-i\gamma s} =C_5\, 2^{i(N-\gamma\frac{N-s}{p-1})}. \end{split} \] Notice that the last exponent is negative precisely because $\gamma>\frac{N(p-1)}{N-s}$, so that proceeding as in the final part of Lemma \ref{Glemma}'s proof gives the claim. \end{proof}
\section{Introduction} The history of what would come to be called glueballs goes back to the early days of hadronic physics, before the emergence of QCD. At that time, hadron-hadron scattering processes at high energies and low transferred momenta, written in terms of the Mandelstam variables as $s \gg m^2 \simeq -t $, were ruled by Regge theory. In that scenario, Regge proposed that in hadronic processes ``particles were exchanged'', for instance, as a meson ($\rho$, $\omega$, etc.) or as a ``Reggeon''. In both cases, their scattering amplitudes, in the $t$ channel, behave like ${\cal A}(s,t) \sim s^{\alpha(t)}$. If one considers a family or a set of resonances sharing the same quantum numbers, one can display them in a plane $[ t\equiv m^2, \alpha(t)\equiv J]$ fulfilling a linear relationship written as \begin{equation}\label{Regge} J(m^2) = \alpha' m^2 + \alpha_0\,, \end{equation} \noindent where $J$ is the total angular momentum, $m$ is the mass of the Reggeized particle, and $\alpha'$ and $\alpha_0$ are two constants. The above relationship plotted in a Chew-Frautschi plane is known as a Regge trajectory. If these Reggeized particles are Reggeized gluons, one has the so-called glueballs. Glueballs are represented by their total angular momentum $J$ and their vacuum quantum numbers $P$, $C$ and $I$, where $P$ is the $P-$parity (or spatial inversion), $C$ is the $C-$parity (or charge conjugation) and $I$ is the isospin. By using the spectroscopic notation one has $J^{PC}$, omitting the isospin $I$ since it is zero for all states considered here. For a review on glueballs, one can see Ref. \cite{Mathieu:2008me}. From now on, let us focus on oddballs, i.e., glueballs with odd angular momentum ($J\geq 1$) and quantum numbers $P=-1$, $C=-1$, and $I=0$, such as $1^{--}, 3^{--}, 5^{--}, \cdots$. Odd spin glueballs are particularly interesting because they lie on the Regge trajectory of an exchanged Reggeon called the odderon.
In the context of perturbative QCD, the odderon is described by the Bartels-Kwiecinski-Praszalowicz (BKP) equation \cite{Bartels:1978fc, Bartels:1980pe, Kwiecinski:1980wb} as a colorless $C$-odd compound state of three reggeons (gluons) in the $t$ channel, as can be seen pictorially in Fig. \ref{3gluons}. An interesting review on odderon physics can be found in Ref. \cite{Ewerz:2003xi}. \begin{figure}[ht] \centering \includegraphics[scale = 0.6]{odderon.jpg} \caption{The odderon as a colorless $C$-odd three-gluon bound state exchanged in a hadron-hadron scattering.} \label{3gluons} \end{figure} The original proposal for the existence of the odderon appeared in the early 1970s in Ref. \cite{Lukaszuk:1973nt}; the first attempts at its measurement were made in Refs. \cite{Hill:1973bq, Bonamy:1973dz} and continued through the 1980s and 1990s \cite{Apokin:1982kw, Augier:1993sz}. Note that none of these collaborations provided reliable experimental evidence for the existence of the odderon. Recently, the outstanding efforts of the TOTEM and D0 Collaborations, analyzing the cross sections for $pp$ and $p \bar{p}$ and their difference, $\Delta \sigma (s) = \sigma^{pp}(s) - \sigma^{p{\bar p}}(s) \propto \ln s$, supported the existence of the odderon with a significance of $3.4 \sigma$ \cite{TOTEM:2017sdy}. In Ref. \cite{TOTEM:2020zzr} the significance was improved to $5.2\sigma - 5.7 \sigma$. The combination of these results may be considered sufficient to regard the odderon as experimentally discovered. Motivated by this recent discovery, in the present work we are interested in odd spin glueballs $J^{--}$, with $J\geq 1$. Our aim here is to contribute new insights and proposals to compute the oddball masses and then calculate the corresponding Regge trajectory related to the odderon. To do so, we will resort to an AdS/QCD model inspired by the duality proposed by Maldacena \cite{Maldacena:1997re}.
AdS/QCD is a suitable approach to deal with QCD phenomenology in the nonperturbative regime, where glueballs are formed. The AdS/QCD model used here is known as the hardwall model, as proposed independently in Refs. \cite{Polchinski:2001tt, BoschiFilho:2002vd, BoschiFilho:2002ta}. In this model, conformal symmetry is broken by introducing an IR cutoff $z_{\rm max}$ in the holographic coordinate $z$ and considering a slice of the anti–de Sitter (AdS) space, given by the interval $0 \leq z \leq z_{\rm max}$. In Ref. \cite{Erlich:2005qh} the authors used the hardwall model to compute the masses of vector mesons. Over the last 20 years the AdS/QCD community has made many contributions, offering various approaches to deal with glueballs and related issues. One can see in Refs. \cite{BoschiFilho:2005yh, Colangelo:2007pt, Wang:2009wx, Huang:2007fv, Afonin:2012jn, BoschiFilho:2012xr, Capossoli:2013kb, Li:2013oda, Capossoli:2015ywa, Brunner:2015oqa, Brunner:2015yha, Capossoli:2016kcr, Chen:2015zhh, Capossoli:2016ydo, Brunner:2016ygk, Rodrigues:2016cdb, FolcoCapossoli:2016ejd, Rinaldi:2017wdn, Afonin:2018era, Rinaldi:2018yhf, FolcoCapossoli:2019imm, Rinaldi:2020ssz, Rinaldi:2021dxh, Zhang:2021itx} an incomplete list of those contributions, which take into account even and odd spin glueballs, top-down and bottom-up holographic models, anomalous dimensions, dynamical AdS/QCD models, deformed AdS metric spaces, and Einstein-Maxwell-dilaton backgrounds, among other proposals. This work is organized as follows: in section \ref{v1} we present our holographic description of the odd spin glueballs with $J^{PC}$=$1^{--}$, $3^{--}$, $5^{--}$, etc., starting from a twist-five operator related to a massive vector gauge boson. In section \ref{res}, we calculate oddball masses using Dirichlet and Neumann boundary conditions and construct some proposals for the odderon Regge trajectory.
In this section we also compare our results for masses and trajectories with known results from the literature. Finally, in section \ref{conc} we present our final comments, interpretations and conclusions. \section{Holographic description of Odd spin glueballs}\label{v1} In this section, we present the description of a vector glueball state within the AdS/QCD model, compute the masses for $J^{PC}$=$1^{--}$, $3^{--}$, $5^{--}$, etc., and construct the Regge trajectory associated with the odderon. First of all, let us emphasize the main feature of this work. Since the ground state of the odd spin glueballs, $1^{--}$, is a vector object living at the UV boundary, we start our calculation within the holographic hardwall model by relating it to a five-dimensional massive gauge boson field defined in the AdS$_5$ space. This procedure, which relates operators in the four-dimensional theory to fields in the bulk of the five-dimensional space, realizes the AdS/CFT correspondence. The twist, or twist dimension, represented by $\tau$, is given by the conformal dimension ($\Delta$) of an operator minus its spin. In particular, it will be shown that the conformal dimension of the state $1^{--}$ is $\Delta = 6$, and then $\tau = \Delta - J =5$. In this sense we are going to refer to our model as a twist-five approach. Note that in Ref. \cite{Capossoli:2013kb} the authors dealt with oddballs and odderon Regge trajectories, also using the hardwall model, however relating the ground state of the odd glueballs, $1^{--}$, to a massive scalar field in the AdS$_5$ space. In Ref. \cite{Chen:2015zhh} the authors also started their computation, within the hardwall model, from a massive boson field on the AdS side. However, among many exotic glueball states, the authors considered only one odd glueball state, namely the state $1^{--}$.
Now let us introduce the action for a five-dimensional massive gauge boson field $A_m$, which represents the physical vector glueball in the four-dimensional boundary theory: \begin{equation}\label{vec_hw} S = -\frac{1}{2}\int d^5 x \sqrt{-g}\; [ \frac{1}{2} g^{pm} g^{qn} F_{mn} F_{pq} +M_5^2 g^{pm} A_p A_m]\,. \end{equation} \noindent Note that the field strength tensor is given by $F_{mn} = \partial_m A_n - \partial_n A_m$ and $M_5$ is the mass of the gauge boson field. Besides, $g$ is the determinant of the metric $g_{mn}$ of the $AdS_5$ space, given by: \begin{equation}\label{metric} ds^2 = g_{mn} dx^m dx^n= \frac{L^2}{z^2} \, (dz^2 + \eta_{\mu \nu}dy^\mu dy^\nu)\,, \end{equation} \noindent where $z$ is the holographic coordinate and $L$ is the AdS radius. From now on we take $L=1$ throughout the text, and $\eta_{\mu \nu}$, with signature $(-,+,+,+)$, is the Minkowski flat spacetime metric. By computing $\delta S / \delta A_n = 0$, one obtains the corresponding equations of motion: \begin{equation}\label{f16} \partial_p [ \sqrt{-g} g^{m p} g^{nq}\; F_{mn}] - M_5^2 \sqrt{-g} g^{nq} A_n = 0\,. \end{equation} Plugging the AdS metric into the above equation and splitting the index as $p=z,\mu$, one finds: \begin{equation}\label{eom_vec_4} \partial_z \left[\left(\frac{1}{z}\right) F_{zn} \eta^{nq}\right] + \partial_{\mu}\left[\left(\frac{1}{z}\right) \eta^{m \mu} F_{m n} \eta^{nq}\right] - M_5^2 \left(\frac{1}{z}\right)^3 A_n \;\eta^{nq}= 0, \end{equation} \noindent with $g^{mn} = z^{2} \eta^{mn}$.
In order to solve the above equation, we first use an ansatz for a plane wave with four-momentum $q_{\mu}$, propagating in the transverse coordinates $x^{\mu}$, given by: \begin{equation}\label{anvec} A_{\rho} (z, x^{\mu}) = \epsilon_{\rho} \, v(z) e^{i q_{\mu} x^{\mu}} , \end{equation} \noindent where $\epsilon_{\rho}$ is the polarization four-vector defined in the space transverse to the $z$ coordinate, and the plane wave amplitude $v(z)$ depends only on the $z$ coordinate. Note that $\epsilon^{\rho} \epsilon_{\rho} = \eta^{\rho \lambda} \epsilon_{\rho} \epsilon_{\lambda}=1$. Following Ref. \cite{Erlich:2005qh} we are going to consider $A_z=0$, which implies that $F_{zn}=\partial_{z} A_n$. Besides, we choose $\partial_{\mu} A^{\mu}=0$, which implies $q^{\mu} \epsilon_{\mu} = \eta^{\mu \lambda} q_{\mu} \epsilon_{\lambda} = q \cdot \epsilon = 0$, ensuring that the field can be written as a plane wave. Therefore, one can get \begin{eqnarray} \label{fields} \eta^{m \mu} \partial_{\mu} F_{m n} & = & \eta^{m \mu} (i q_{\mu}) (i q_m A_n - \partial_n A_m) \nonumber \\ & = & -q^2 A_n - \partial_n (\partial_{\mu} A^{\mu}) \nonumber \\ & = & -q^2 A_n \,. \end{eqnarray} At this point, we can rewrite Eq. \eqref{eom_vec_4} as \begin{equation} \partial_z \left[\left(\frac{1}{z}\right) \partial_z A_{n} \eta^{nq}\right] - \left(\frac{1}{z}\right) q^2 A_n \eta^{nq} - M_5^2 \left(\frac{1}{z}\right)^3 A_n \;\eta^{nq}= 0, \end{equation} or, using \eqref{anvec}, one has: \begin{equation}\label{eom_vec_5} \left\{ \partial_z \left[\left(\frac{1}{z}\right) \partial_z v(z)\right] - \left(\frac{1}{z}\right) q^2 v(z) - M_5^2 \left(\frac{1}{z}\right)^3 v(z) \right\} \cdot e^{i q_{\mu} x^{\mu}} \epsilon^q = 0\,.
\end{equation} Defining $v(z) = z\,\psi(z)$ and plugging it into the above equation, one obtains: \begin{eqnarray} z^2 \frac{d^2 \psi(z)}{dz^2} + z \frac{d \psi(z)}{dz} - [(1 + M_5^2) + q^2 z^2]\,\psi(z) = 0\,, \end{eqnarray} whose solutions are given by a linear combination of Bessel ($J_{\nu}$) and Neumann ($Y_{\nu}$) functions: \begin{equation}\label{psi} \psi (z) = {\cal A}_{\nu, k} J_{\nu}(m_{\nu, k} \,z)+ {\cal B}_{\nu, k} Y_{\nu}(m_{\nu, k} \,z)\,, \end{equation} where ${\cal A}_{\nu, k}$ and ${\cal B}_{\nu, k}$ are normalization constants, the index $\nu = \sqrt{M_5^2 +1}$, and $m_{\nu, k}^2 = -q^2$ will be the mass squared of the odd spin glueballs at the boundary. Note that $k=1,2,3, \cdots$ denotes radial excitations, with $k= 1$ for the ground state. As we are interested in solutions which are regular at $z \to 0$, we keep only the Bessel solution and disregard the Neumann one. Now, by plugging Eq. \eqref{psi} in Eq. \eqref{anvec} we can construct the complete solution for the field $A_{\rho} (z, x^{\mu})$: \begin{equation}\label{anveccomp} A_{\rho} (z, x^{\mu}) = {\cal A}_{\nu, k}\, z \, J_{\nu}(m_{\nu, k} \,z) e^{i q_{\mu} x^{\mu}} \epsilon_{\rho}\,. \end{equation} In order to get the odd spin glueball masses we are going to impose boundary conditions, such as Dirichlet and Neumann, on the vector field $A_{\rho} (z, x^{\mu})$. Before we impose those boundary conditions, one has to resort to the AdS/CFT dictionary and learn how to relate the gauge boson bulk mass ($M_5$) to the conformal dimension ($\Delta$) of the corresponding operator (${\cal O}$) in the four-dimensional theory. Such a relationship is written as: \begin{equation}\label{brod} M^2_5 = (\Delta - p) (\Delta + p - 4), \end{equation} where $p$ represents the $p-$form index. Here we will consider $p=1$.
In particular, the glueball ground state $1^{--}$ is associated to an operator ${\cal O}_6$ at the UV boundary, given by \cite{Wang:2009wx, Capossoli:2013kb, Csaki:1998qr, Brower:2000rp}: \begin{equation} {\cal O}_{6} =\text{Sym}\,\text{Tr}\left( {\tilde{F}_{\mu \nu}}F^2\right)\,. \end{equation} From this operator one can infer that the scaling or conformal dimension is $\Delta = 6$. As a consequence the ground state for oddballs is associated with a twist-five operator, since the twist $\tau$ is defined as the dimension minus the spin, and then $\tau = \Delta - J = 5$. To construct higher spin glueball states we follow Ref. \cite{deTeramond:2005su}, where the authors proposed to raise the total angular momentum by inserting symmetrised covariant derivatives in a given operator with spin $S$. After this insertion one gets \begin{equation}\label{6+J} {\cal O}_{6 + \ell} = \text{Sym}\,\text{Tr}\left( {\tilde{F}_{\mu \nu}}F D_{\lbrace\mu_1 \cdots} D_{\mu_\ell \rbrace}F\right), \end{equation} \noindent with conformal dimension $\Delta = 6 + \ell$ and total angular momentum $J= 1+\ell$. So, to obtain the states $3^{--}, 5^{--},$ etc., we take $\ell=2, 4, \cdots$. Then, all odd spin states in this formulation have twist $\tau=5$. Now, replacing $\Delta \to \Delta + \ell$ in Eq. \eqref{brod}, with $\Delta = 6$, one has \begin{equation}\label{hsi} M^2_{5} = (\Delta +\ell - p) (\Delta + \ell + p - 4)\,; \qquad ({\rm even}\, \ell \geq 0, \,\,p=1). \end{equation} In this work we consider all odd spin glueball states associated with $p=1$ forms. \section{Results achieved}\label{res} In this section we present our results for the masses of higher odd spin glueballs, as well as the Regge trajectories associated with the odderon, achieved from our holographic hardwall model within a twist-five operator approach, considering the usual Dirichlet and Neumann boundary conditions. In order to compare our results for oddball masses, we consider as benchmarks other results found within different approaches.
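Before turning to the numbers, the index bookkeeping of the previous section can be verified directly: with $\Delta \to 6+\ell$ and $p=1$ in Eq. \eqref{brod}, the bulk mass always yields an integer index $\nu = \sqrt{M_5^2+1} = 4+\ell$ for the Bessel solutions used below. A short check (a sketch in Python):

```python
import math

# With Delta -> 6 + ell and p = 1, Eq. (brod) gives M5^2 = (5 + ell)(3 + ell),
# so that nu = sqrt(M5^2 + 1) = 4 + ell exactly, for every even ell >= 0.
for ell in range(0, 12, 2):
    M5_sq = (6 + ell - 1) * (6 + ell + 1 - 4)
    nu = math.sqrt(M5_sq + 1)
    assert nu == 4 + ell   # the state J = 1 + ell carries Bessel index 4 + ell
```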
These benchmark data were extracted from the literature and are summarized in Table \ref{oddlit}. Note that there are no experimental data for glueball masses, and there are only a few values from lattice simulations, QCD sum rules, Wilson loops, and semirelativistic potentials. In particular, lattice simulations require strong computational efforts to compute high spin glueball masses. Regarding the odderon's Regge trajectory, one should note that there is no consensus on the precise values of its slope ($\alpha'$) and intercept ($\alpha_0$), which are still open questions. Almost twenty years ago, Ref. \cite{Kovchegov:2003dm} pointed out that Refs. \cite{Janik:1998xj, Korchemsky:2001nx, Bartels:1999yt}, considering different solutions of the BKP equation, found divergent values for the odderon's intercept. Besides, in Ref. \cite{Bartels:1999yt} one can see the largest reported intercept, which is close to unity. In particular, two different odderon Regge trajectories were proposed in Ref. \cite{LlanesEstrada:2005jf}, which are \begin{equation}\label{l1} J^{RMB}(m^2) = 0.23 m^2 -0.88\,, \end{equation} \noindent obtained by using a relativistic many-body (RMB) model, and \begin{equation}\label{l2} J^{NRCM}(m^2) = 0.18 m^2 +0.25\,, \end{equation} \noindent based on a nonrelativistic constituent model (NRCM). \begin{table} {\small \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline Models used & \multicolumn{6}{c|}{{Odd Glueball States $J^{PC}$}} \\ \cline{2-7} &\qquad $1^{ - - }$ \qquad & \qquad $3^{- - }$ \qquad & \qquad $5^{- - }$ \qquad & \qquad $7^{- - }$ \qquad & \qquad $9^{- - }$ \qquad & \qquad $11^{- - }$ \qquad \\ \hline \hline $SU(3)$ gauge th. \cite{Athenodorou:2020ani} & 4.03(7) & & & & & \\ \hline { Iso. lattice} \cite{Meyer:2004jc, Meyer:2004gx} & { 3.240(330)(150)} & {4.330(260)(200)} & & && \\ \hline {Anis. lattice} \cite{Chen:2005mg} & {3.830(40)(190)} & {4.200(45)(200)} & & && \\ \hline {Anis.
lattice} \cite{Morningstar:1999rf} & {3.850 (50) (190)} & {4.130 (90) (200)} & & && \\ \hline {QCD sum rules} \cite{Chen:2021cjr} & {$3.29^{+1.49}_{-0.32}$} & {$3.47^{+?}_{-0.50}$} & & && \\ \hline {Doub. pole model} \cite{Szanyi:2019kkn} & {$3.001$} & {$4.416$} & 5.498 & && \\ \hline Relat. many body \cite{LlanesEstrada:2005jf} & 3.95 & 4.15 & 5.05 & 5.90 && \\ \hline Nonrelat. const. \cite{LlanesEstrada:2005jf} & 3.49 & 3.92 & 5.15 & 6.14 && \\ \hline Wilson loop \cite{Kaidalov:1999yd} & 3.49 & 4.03 & & && \\ \hline Vac. correlator \cite{Kaidalov:2005kz} & 3.02 & 3.49 & 4.18 & 4.96 && \\ \hline Vac. correlator \cite{Kaidalov:2005kz} & { 3.32} & 3.83 & { 4.59} & {5.25} && \\ \hline Semirelat. pot. \cite{Mathieu:2008pb} & {3.99} & {4.16} & {5.26} & && \\ \hline Hardwall twist 4 D \cite{Capossoli:2013kb} & 3.24 & 4.09 & 4.93 & 5.75 & 6.57 & 7.38 \\ \hline Hardwall twist 4 N \cite{Capossoli:2013kb} & 3.24 & 4.21 & 5.17 & 6.13 & 7.09 & 8.04 \\ \hline Modified softwall \cite{Capossoli:2015ywa} & 2.82 & 3.94 & 5.03 & 6.11 & 7.19 & 8.26 \\ \hline \end{tabular}} \caption{{Glueball masses for $J^{PC}$ states expressed in GeV, with odd $J$, achieved with nonholographic and some holographic models from the literature. The abbreviations in the first column of this table can be read as: $SU(3)$ gauge th. ($SU(3)$ gauge theory in (3+1)d); Iso. lattice (Isotropic lattice); Anis. lattice (Anisotropic lattice); Doub. pole model (Double pole model); Relat. many body (Relativistic many body); Nonrelat. const. (Nonrelativistic constituent); Vac. correlator (Vacuum correlator) and Semirelat. pot. (Semirelativistic potential).}} \label{oddlit} \end{table} \subsubsection{Dirichlet boundary condition} In order to compute the masses of oddballs with the Dirichlet boundary condition, we impose the following condition on Eq.
\eqref{anveccomp}: \begin{equation} A_{\nu} (z, x^{\mu})|_{z=z_{\rm max}}= 0 \Rightarrow J_{\nu}(m_{\nu, k} \,z)|_{z=z_{\rm max}} = 0\,, \end{equation} meaning that the odd glueball masses will be given by the roots of the Bessel function. From the above equation, one has \begin{equation}\label{massaD} m_{\nu, k}^D = \frac{\xi_{\nu, k} }{z_{\rm max}}\,, \end{equation} where $\xi_{\nu, k}$ is the $k$-th zero of the Bessel function of order $\nu$. Due to the lack of experimental/theoretical data regarding higher radial excitations of odd glueballs, we focus only on the ground state and fix $k=1$. Then Eq. \eqref{massaD} becomes \begin{equation}\label{massaDk1} m_{\nu, 1}^D = \frac{\xi_{\nu, 1} }{z_{\rm max}}\,. \end{equation} As we are interested in higher odd spin glueballs, let us take a look at the Bessel function index $\nu$ in Eq. \eqref{anveccomp}. Such an index is related to the bulk mass $M_5$ by \begin{equation}\label{index} \nu = \sqrt{M_5^2 +1}\,. \end{equation} Now, by plugging Eq. \eqref{hsi} in the above equation, one gets a relationship between the Bessel function index and the glueballs' angular momentum: \begin{equation}\label{indexl} \nu = \sqrt{(\Delta +\ell - p) (\Delta + \ell + p - 4)+1}\,. \end{equation} In particular, for the state $1^{--}$ one has $\ell=0$, $\Delta=6$, $p=1$ and then $\nu=4$. The IR cutoff ${z_{\rm max}}$ will be fixed by using the mass of this state, $m_{4, 1}^D$, as an input. At this point, we can eliminate $z_{\rm max}$ in Eq. \eqref{massaDk1} by dividing the mass of an arbitrary odd spin state by the mass of the ground state $1^{--}$, obtaining an expression for the masses of higher odd spin glueball states [$\ell\,\, \rm{(even)} \geq 2$]: \begin{equation}\label{massageral} m_{4+\ell, 1}^D = \frac{\xi_{4+\ell, 1} }{\xi_{4, 1}}m_{4, 1}^D\,. \end{equation} Note that we will choose $m_{4, 1}^D = 3.02$ GeV as an input from \cite{Kaidalov:2005kz}.
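Eqs. \eqref{massaDk1} and \eqref{massageral} are straightforward to check numerically. As a sketch in pure Python (the helper names are ours; the integer-order Bessel function is evaluated through its integral representation, and its first zero is located by a scan followed by bisection):

```python
import math

def bessel_j(n, x, steps=2000):
    """J_n(x) for integer n via (1/pi) * int_0^pi cos(n*t - x*sin(t)) dt (Simpson's rule)."""
    h = math.pi / steps
    s = 0.0
    for i in range(steps + 1):
        w = 1.0 if i in (0, steps) else (4.0 if i % 2 else 2.0)
        s += w * math.cos(n * (i * h) - x * math.sin(i * h))
    return s * h / (3.0 * math.pi)

def first_zero(n):
    """First positive zero xi_{n,1} of J_n; it lies above x = n."""
    x = float(n)
    while bessel_j(n, x) * bessel_j(n, x + 0.1) > 0:   # scan upward for a sign change
        x += 0.1
    a, b = x, x + 0.1
    for _ in range(60):                                # bisection refinement
        m = 0.5 * (a + b)
        if bessel_j(n, a) * bessel_j(n, m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

m_input = 3.02                 # input mass of the 1^{--} ground state (GeV)
xi4 = first_zero(4)            # nu = 4 for the ground state
z_max = xi4 / m_input          # IR cutoff, approximately 2.51 GeV^{-1}
# m^D_{4+l,1} = xi_{4+l,1}/xi_{4,1} * m^D_{4,1} for J = 1 + l, even l
masses = {1 + l: first_zero(4 + l) / xi4 * m_input for l in range(0, 12, 2)}
```

With this input one finds $z_{\rm max} \approx 2.51$ GeV$^{-1}$ and, e.g., $m_{6,1}^D \approx 3.95$ GeV for the state $3^{--}$.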
For this chosen input, one has ${z_{\rm max}}=2.51$ GeV$^{-1}$. This value of ${z_{\rm max}}$ was obtained from a chi-squared minimization procedure with the rms error given by \begin{equation}\label{rms} \delta_{{\rm RMS}}= \sqrt{ \frac 1{N-N_p} \sum_{i=1}^N \left( \frac {\delta O_i}{O_i} \right)^2} \times 100\,, \end{equation} for the glueball masses presented in Table \ref{mgd} with Dirichlet boundary condition. Here $N$ is the number of measurements (glueball masses) and $N_p=1$ is our only free parameter ($z_{\rm max}$). Note that in the original hardwall model presented in Ref. \cite{Erlich:2005qh}, the $\rho$-meson mass was chosen to set the scale for the other particle masses. In that case this is quite appropriate, since they were considering three different meson families, all described by operators with conformal dimension $\Delta=3$. In our case, the oddballs are characterized by $\Delta=6+\ell$. Thus it is natural for the present model to take the mass of the oddball ground state $1^{--}$ to fix $z_{\rm max}$. \bigskip \begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{6}{c|}{Odd Glueball states $J^{PC}$} \\ \cline{2-7} & $1^{--}$ & $3^{--} $ & $5^{--}$ & $7^{--}$ & $9^{--}$ & $11^{--}$ \\ \hline \hline Dirichlet b.c. &\, 3.02 \, &\, 3.95 \,&\, 4.87 \,& \, 5.76 \, &\, 6.45 \, &\, 7.52 \, \\ \hline \end{tabular} \caption{ Odd spin glueball masses expressed in {\rm GeV} considering Dirichlet boundary condition, given by Eq. \eqref{massageral}.} \label{mgd} \end{table} Now, we are going to consider different sets of oddball states to construct possible odderon Regge trajectories. By considering the set $1^{--}, \cdots, 11^{--}$, and taking the masses in Table \ref{mgd}, one can construct the following Regge trajectory associated with the odderon: \begin{equation}\label{rgd1} J_{{\rm Dir}}^{\{1-11\}} (m^2) = (0.21 \pm 0.01) m^2 - (0.35 \pm 0.48).
\end{equation} Analogously, for the states $1^{--}, \cdots, 9^{--}$, one gets \begin{equation}\label{rgd2} J_{{\rm Dir}}^{\{1-9\}} (m^2) = (0.24 \pm 0.01) m^2 - (0.95 \pm 0.24), \end{equation} and for $3^{--}, \cdots, 11^{--}$, one finds \begin{equation}\label{rgd3} J_{{\rm Dir}}^{\{3-11\}} (m^2) = (0.19 \pm 0.01) m^2 + (0.26 \pm 0.53). \end{equation} It is worthwhile to mention that these Regge trajectories, Eqs. \eqref{rgd1}-\eqref{rgd3}, were obtained from a standard linear regression method by using the glueball masses from Table \ref{mgd}. The errors for the slope and intercept come from this analysis. These Regge trajectories are displayed in Figs. \ref{dirbc1}, \ref{dirbc2} and \ref{dirbc3}. \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.5]{D1_11.pdf} \caption{Odderon Regge trajectory with Dirichlet boundary condition corresponding to Eq. \eqref{rgd1}.}\label{dirbc1} \end{center} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.5]{D1_9.pdf} \caption{Odderon Regge trajectory with Dirichlet boundary condition corresponding to Eq. \eqref{rgd2}.}\label{dirbc2} \end{center} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.5]{D3_11.pdf} \caption{Odderon Regge trajectory with Dirichlet boundary condition corresponding to Eq. \eqref{rgd3}.}\label{dirbc3} \end{center} \end{figure} \newpage \subsubsection{Neumann boundary condition} The Neumann boundary condition on Eq. \eqref{anveccomp} requires \begin{equation} \frac{d}{dz} A_{\nu} (z, x^{\mu})|_{z=z_{\rm max}}= 0 \Rightarrow \frac{d}{dz} [z J_{\nu}(m_{\nu, k} \,z)]|_{z=z_{\rm max}} = 0\,. \end{equation} One then gets \begin{equation}\label{} J_{\nu}(m^N_{\nu, k} z_{\rm max}) + m^N_{\nu, k} z_{\rm max} \, J'_{\nu }(m^N_{\nu, k} z_{\rm max}) = 0\,,
\end{equation} where the prime denotes the derivative with respect to the argument. By using the property \begin{equation}\label{} \frac{d}{dx} J_{\alpha }(x) = J_{\alpha -1 }(x) - \frac{\alpha}{x} J_{\alpha}(x)\,, \end{equation} one has \begin{equation}\label{massaN} m^N_{\nu, k} z_{\rm max} J_{\nu -1}(m^N_{\nu, k} z_{\rm max}) + (1- \nu) J_{\nu }(m^N_{\nu, k} z_{\rm max}) = 0\,, \end{equation} so that the odd glueball mass computed in the hardwall model with Neumann boundary condition is given by \begin{equation}\label{} m_{\nu, k}^N = \frac{\chi_{\nu, k} }{z_{\rm max}}\,, \end{equation} where $\chi_{\nu, k}$ denotes the $k$-th positive root of Eq. \eqref{massaN}. Here, we will fix ${z_{\rm max}}=1.89$ GeV$^{-1}$ taking $m_{4, 1}^N = 3.02$ GeV, coming from \cite{Kaidalov:2005kz}, as an input. The value of ${z_{\rm max}}$ is determined by a chi-square minimization procedure analogous to the one performed for the Dirichlet boundary condition, but now with the masses coming from Table \ref{t2}. To get higher odd spin glueball states we proceed as in the Dirichlet case, and then we can rewrite Eq. \eqref{massaN} as \begin{equation}\label{massan} \chi_{\nu+\ell, k} J_{\nu+\ell -1}(\chi_{\nu+\ell, k}) + (1- \nu - \ell) J_{\nu +\ell }(\chi_{\nu+\ell, k}) = 0\,, \end{equation} so that \begin{equation}\label{massageralN} m_{\nu+\ell, k}^N = \frac{\chi_{\nu+\ell, k} }{z_{\rm max}}\,. \end{equation} As before, we consider only $k=1$, corresponding to radially nonexcited states. Then, from our model with Neumann boundary condition we get the set of masses presented in Table \ref{t2}. \bigskip \begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{6}{c|}{Odd Glueball states $J^{PC}$} \\ \cline{2-7} & $1^{--}$ & $3^{--} $ & $5^{--}$ & $7^{--}$ & $9^{--}$ & $11^{--}$ \\ \hline \hline Neumann b.c. &\, 3.02 \, &\, 4.14 \,&\, 5.26 \,& \, 6.38 \, &\, 7.48 \, &\, 8.59 \, \\ \hline \end{tabular} \caption{ Odd spin glueball masses expressed in {\rm GeV} considering Neumann boundary condition, given by Eq.
\eqref{massageralN}.} \label{t2} \end{table} Considering different sets of oddball states to construct possible odderon Regge trajectories from our model with Neumann boundary condition, we get for $1^{--}, \cdots, 11^{--}$, \begin{equation}\label{rgn1} J_{{\rm Neu}}^{\{1-11\}} (m^2) = (0.16 \pm 0.01) m^2 + (0.33 \pm 0.45). \end{equation} In the same way, for $1^{--}, \cdots, 9^{--}$, \begin{equation}\label{rgn2} J_{{\rm Neu}}^{\{1-9\}} (m^2) = (0.17 \pm 0.01) m^2 - (0.06\pm 0.41), \end{equation} and for $1^{--}, \cdots, 5^{--}$, \begin{equation}\label{rgn3} J_{{\rm Neu}}^{\{1-5\}} (m^2) = (0.22 \pm 0.02) m^2 - (0.83 \pm 0.30). \end{equation} Once again, these Regge trajectories, Eqs. \eqref{rgn1}-\eqref{rgn3}, were obtained from a standard linear regression method by using the glueball masses from Table \ref{t2}. The errors for the slope and intercept come from this analysis. These Regge trajectories are displayed in Figs. \ref{neubc1}, \ref{neubc2}, and \ref{neubc3}. \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.5]{N1_11.pdf} \caption{Odderon Regge trajectory with Neumann boundary condition corresponding to Eq. \eqref{rgn1}.}\label{neubc1} \end{center} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.5]{N1_9.pdf} \caption{Odderon Regge trajectory with Neumann boundary condition corresponding to Eq. \eqref{rgn2}.}\label{neubc2} \end{center} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.5]{N1_5.pdf} \caption{Odderon Regge trajectory with Neumann boundary condition corresponding to Eq. \eqref{rgn3}.}\label{neubc3} \end{center} \end{figure} In order to compare these results for the glueball masses, we calculate the rms error with Eq. \eqref{rms}. Taking the values of the glueball masses from $1^{--}$ to $7^{--}$ of the vacuum correlator model in Ref.
\cite{Kaidalov:2005kz} as our benchmarks, from Eq. \eqref{rms} with $N=4$, one finds that $\delta_{\rm {RMS}} = 3.60 \%$ for the Dirichlet boundary condition from Table \ref{mgd} and $\delta_{\rm {RMS}} = 5.61 \%$ for the Neumann boundary condition from Table \ref{t2}. From this point of view, within the hardwall model with the twist-five operator approach, the Dirichlet boundary condition seems to work better. \section{Conclusions}\label{conc} In this section we present our final comments and some interpretations of the results achieved. Here, we have used the holographic hardwall model to compute the masses of odd spin glueball states from a twist-five operator approach, as well as to derive the corresponding Regge trajectories related to the odderon with Dirichlet and Neumann boundary conditions. As the oddball ground state $1^{--}$ has spin 1 and the corresponding operator has conformal dimension $\Delta=6$, the twist of this state is $\tau=5$. In this sense, the twist-five operator approach seemed appropriate to deal with the odd glueball ground state. To implement it, we started with a massive gauge boson field living in AdS$_5$, related to the vector glueball of the boundary theory. The higher spin oddballs $J^{--}=(1+\ell)^{--}$ (with even $\ell$) are then represented by operators with conformal dimension $6+\ell$, so these states also have twist $\tau=5$. One may wonder whether it is possible to accommodate higher even spin glueball states in our model. Note, however, that the two possible even spin ground states $0^{++}$ and $2^{++}$ are twist-four and twist-two objects, respectively, while here we are dealing with twist-five objects only. In order to compute the odd spin glueball masses, we had to introduce an IR cutoff $z_{\rm max}$ by using the mass of the ground state $1^{--}$ as an input. For our purposes, our input was taken from the vacuum correlator model as in Ref. \cite{Kaidalov:2005kz}.
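The Neumann spectrum entering this comparison can be reproduced numerically in the same spirit as the Dirichlet one. A sketch in pure Python (helper names are ours; $J_\nu$ is again evaluated via its integral representation, and the first root of Eq. \eqref{massaN} is located by a scan followed by bisection):

```python
import math

def bessel_j(n, x, steps=2000):
    """J_n(x) for integer n via (1/pi) * int_0^pi cos(n*t - x*sin(t)) dt (Simpson's rule)."""
    h = math.pi / steps
    s = 0.0
    for i in range(steps + 1):
        w = 1.0 if i in (0, steps) else (4.0 if i % 2 else 2.0)
        s += w * math.cos(n * (i * h) - x * math.sin(i * h))
    return s * h / (3.0 * math.pi)

def neumann_root(nu):
    """First positive root chi_{nu,1} of x*J_{nu-1}(x) + (1 - nu)*J_nu(x) = 0."""
    f = lambda y: y * bessel_j(nu - 1, y) + (1 - nu) * bessel_j(nu, y)
    x = float(nu)
    while f(x) * f(x + 0.1) > 0:      # scan upward for a sign change
        x += 0.1
    a, b = x, x + 0.1
    for _ in range(60):               # bisection refinement
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

m_input = 3.02                        # 1^{--} input mass (GeV)
chi4 = neumann_root(4)
# m^N_{4+l,1} = chi_{4+l,1}/chi_{4,1} * m^N_{4,1} for J = 1 + l, even l
masses_N = {1 + l: neumann_root(4 + l) / chi4 * m_input for l in range(0, 12, 2)}
```

Under these assumptions one recovers, e.g., $m_{6,1}^N \approx 4.14$ GeV for the state $3^{--}$.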
Besides, in Ref. \cite{Kaidalov:2005kz} one can also find values for higher odd spin glueball masses, as in the other Refs. mentioned in Table \ref{oddlit}. As one can see, the masses computed here for higher spin oddballs, considering Dirichlet and Neumann boundary conditions (Tables \ref{mgd} and \ref{t2}, respectively), are fully compatible with most of the models presented in Table \ref{oddlit}. It is worth mentioning that the mass for the state $3^{--}$ computed in this work is also in agreement with the one obtained using a holographic QCD model, as reported recently in Ref. \cite{Zhang:2021itx}. It is also worth mentioning that the results coming from the Dirichlet boundary condition seem to give better glueball masses than the Neumann one, taking as benchmarks the results from Ref. \cite{Kaidalov:2005kz}, since the respective rms errors are 3.60\% and 5.61\%, as discussed at the end of the previous section. Another point of interest in this work is to derive, from the odd spin glueball masses, the Regge trajectories associated with the odderon. By taking a look at the masses in Table \ref{mgd}, within the Dirichlet boundary condition, one can construct Regge trajectories for the odderon. For the oddballs considered in this work, from the ground state $1^{--}$ to the state $11^{--}$ and from the ground state $1^{--}$ to the state $9^{--}$, one obtains the Regge trajectories presented in Eqs. \eqref{rgd1} and \eqref{rgd2}, respectively. These Regge trajectories are compatible with the one presented in Eq. \eqref{l1} within the RMB model of Ref. \cite{LlanesEstrada:2005jf}. On the other hand, if one considers the set of states $3^{--}, 5^{--}, 7^{--}, 9^{--}$ and $11^{--}$, the hardwall model used here provides a Regge trajectory, given by Eq. \eqref{rgd3}, compatible with the one in Eq. \eqref{l2} within the NRCM, also in Ref. \cite{LlanesEstrada:2005jf}.
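The trajectories discussed here are ordinary least-squares fits of $J$ against $m^2$. As a sketch in pure Python (helper name ours), the central values of Eq. \eqref{rgd1} are recovered from the Dirichlet masses of Table \ref{mgd}:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b; returns (a, b)."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    a = sxy / sxx
    return a, ybar - a * xbar

masses = [3.02, 3.95, 4.87, 5.76, 6.45, 7.52]   # Dirichlet b.c. masses (GeV)
spins = [1, 3, 5, 7, 9, 11]
slope, intercept = fit_line([m * m for m in masses], spins)
# slope ~ 0.21 and intercept ~ -0.35, the central values of Eq. (rgd1)
```

Restricting `masses` and `spins` to the other subsets of states yields the remaining fits in the same way.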
Regarding the Neumann boundary condition, from Table \ref{t2} one can also consider different sets of oddball states and derive the corresponding Regge trajectories related to the odderon. The Regge trajectories presented in Eqs. \eqref{rgn1} and \eqref{rgn2}, which consider the sets from the ground state $1^{--}$ to the state $11^{--}$ and to the state $9^{--}$, respectively, are compatible with the one presented in Eq. \eqref{l2} within the NRCM in Ref. \cite{LlanesEstrada:2005jf}. Nevertheless, the Regge trajectory in Eq. \eqref{rgn3}, considering the states $1^{--}, 3^{--}$ and $5^{--}$, is compatible with the one presented in Eq. \eqref{l1} within the RMB model of Ref. \cite{LlanesEstrada:2005jf}. One should notice that the values of the odd spin glueball masses within the Neumann boundary condition are greater than the ones coming from the Dirichlet boundary condition. To build a Regge trajectory one has to choose a set of oddball states, and, since the trajectories are not exactly linear, increasing the number of elements in the chosen set decreases the slope of the fitted trajectory. This explains the difference between the slopes and intercepts of the Regge trajectories obtained in this work with both boundary conditions. Even though the hardwall model may be the simplest among the AdS/QCD models, it provides good estimates of glueball masses, despite the fact that the corresponding Regge trajectories are not intrinsically linear. In any case, the hardwall model can provide approximately linear trajectories, such as the ones presented in this work, compatible with other holographic and nonholographic approaches. In particular, one can note that the rms errors found here for glueballs are smaller than the corresponding ones for other hadrons as presented, for instance, in Ref. \cite{Erlich:2005qh}. To conclude, we should keep in mind that although the oddballs discussed here still lack direct observation, the odderon itself was discovered experimentally \cite{TOTEM:2017sdy, TOTEM:2020zzr}.
We hope that the oddball quest will come to a good end in future experiments. \begin{acknowledgments} J.P.M.G. is supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) under Grant No. 151701/2020-2. H.B.-F. is partially supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) under Grant No. 311079/2019-9. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), finance code 001. \end{acknowledgments} {\it Note added}.—Recently Refs. \cite{Vega:2021yyj} and \cite{Rinaldi:2021xat} appeared on arXiv proposing a holographic study for glueballs at finite temperature, within softwall and hardwall models, respectively.
\section{Introduction} According to Gr\"unbaum \cite[Ch.~12]{grue03}, a polytope $P$ is \emph{dimensionally $k$-ambiguous} if the $k$-skeleton of $P$ is isomorphic to that of a polytope $Q$ with $\dim Q \not= \dim P$. So, not only is the $k$-skeleton of such a polytope \emph{not} characteristic but, even worse, it does not even give away the dimension in which to look for it. Unfortunately, there is no effective way to decide when a polytope is dimensionally ambiguous, and even the list of known instances of such polytopes is rather short. The prime example of a dimensionally $\lfloor\frac{d-3}{2}\rfloor$-ambiguous polytope is the $d$-simplex, as certified by the existence of \emph{neighborly simplicial polytopes} such as the cyclic polytopes (cf.\ \cite{zie95}). However, in recent years two more families of polytopes joined the list: the family of cubes via the existence of \emph{neighborly cubical polytopes} \cite{jos00} and the family of products of even polygons in the guise of \emph{projected deformed products of polygons} \cite{zie04, sz07}. In both cases, the construction principle (unified in \cite{sz07}) is to give a special realization of the combinatorial type and to verify that a projection to lower dimensions strictly preserves the skeleton in question. The main motivation for this paper was to investigate the limitations of this approach. To be more precise: What are necessary conditions for the existence of a polytope $P \subset \R^d$ of a fixed combinatorial type and a projection $\pi: \R^d \rightarrow \R^e$ such that $P$ and $\pi(P)$ have isomorphic $k$-skeleta? Building on technology developed in \cite{san07}, we devise tools that give fairly good necessary conditions for the existence of such pairs $(P,\pi)$ in terms of topological combinatorics.
The main observation is that if $\pi: P \rightarrow \pi(P)$ retains the $k$-skeleton for $k \ge 0$ then there is an associated pair of spaces $(\partial\A,\|\Sigma_k\|)$ with $\|\Sigma_k\| \hookrightarrow \partial\A$ where $\Sigma_k$ is a simplicial complex and $\partial\A$ is a (polyhedral) sphere. The simplicial complex $\Sigma_k$ is defined in terms of the combinatorics of $P$ whereas the dimension of the sphere $\partial\A$ depends on $e$. Thus, the existence of $(P,\pi)$ implies that $\Sigma_k$ is embeddable into a sphere of a specific dimension. Obstructing the embeddability of $\Sigma_k$ into a sphere of this dimension then impedes the existence of $(P,\pi)$. Drawing from methods of topological combinatorics \cite{MatousekBZ:BU}, our obstructions take the form of graph coloring problems and integer linear programs. We focus on polytopes of \emph{product type} for which the factorization of the skeleta allows us to replace the simplicial complex $\Sigma_k$ by somewhat simpler subcomplexes. We apply the tools to the following three classes of polytopes: \begin{asparaenum} \item[\it Products of polygons.] One curiosity left in connection with the deformed products of polygons of \cite{sz07} is that the general construction scheme fails for \emph{odd} polygons, i.e.\ polygons with an odd number of vertices. With respect to the number of \emph{even and odd} polygons we prove necessary conditions on products of polygons to be dimensionally ambiguous via projection. Along the way, we obtain interesting byproducts. For example, it is known, though apparently nowhere written up properly, that there is no realization of a product of two odd polygons such that a projection to the plane retains all vertices. As a teaser, we generalize this result to \noindent \emph{There is no realization of a product of $r$ odd polygons such that a projection to $r$-space retains all vertices.} \item[\it Products of simplices.] 
Products of simplices are ubiquitous in geometric and topological combinatorics. Most notable are their appearances in work on tropical geometry and subdivisions \cite{santos05}, game theory and polynomial equations \cite{sturm02}, and as building blocks for prodsimplicial complexes such as Hom-complexes \cite{pfeifle07}. It is known to both discrete geometers and topologists that no $d$-polytope is dimensionally $k$-ambiguous for $k \ge \lfloor \frac{d}{2} \rfloor$ (cf.~Theorem \ref{thm:vanKampen}). Essentially, the reason is that the statement is already false for the $d$-simplex. In Section~\ref{sec:ProductsOfSimplices}, we investigate obstructions to the projectability of products of simplices -- calculating these obstructions is intricately related to the coloring of \emph{Kneser graphs}. We generalize a result in~\cite{san07} stating that products of $r \ge d$ simplices of dimension $d$ cannot retain all vertices under projection to lower dimensions. \item[\it Wedge products.] The properties of (combinatorial) products that we exploit for the calculation of the obstructions hold for more general polytope constructions, most notably the wedge product. The wedge product, introduced in \cite[Ch.~4]{Z109} (see also \cite{RZ09}), is a degeneration of the product that may be described purely combinatorially. The interest in this class stems from the original context in which wedge products were introduced: the (straight) realization of (equivelar) polyhedral surfaces. The equivelar surfaces of type $\{r,2n\}$ are topological surfaces \emph{glued} exclusively from $r$-gons, $2n$ of which meet at every vertex. The discrete-geometric realization question now is to find a geometric embedding in which all the polygons are convex and flat. In~\cite{Z109} it is shown that certain equivelar families of type $\{r,2n\}$ are naturally embedded into the wedge products~$\wp{r}{n-1}$ of $r$-gons and $(n-1)$-simplices.
Techniques similar to the deformed products (cf.~\cite{sz07}) allow for the realization of the subfamily $\{r,4\}$ in euclidean $3$-space. In Section~\ref{sec:WedgeProducts} we show (cf.\ Theorem~\ref{thm:ObstructionWedgeProductSurface}) that this is probably the only family that embeds into $3$-space via projection: \noindent \emph{For $r \ge 4$ and $n \ge 3$ there is no realization of the wedge product $\wp{r}{n-1}$ such that a projection to $4$-space retains the surface $\WPsurf{r}{2n}$.} \noindent Our methods do not yield an obstruction for $r=3$ in which case the surface is triangulated and the wedge product of triangle and $(n-1)$-simplex is a simplex. \end{asparaenum} {\bf Acknowledgments.} We would like to thank G\"{u}nter Ziegler for stimulating discussions and comments on an earlier draft of this paper. \section{Combinatorial types, projections, and obstructions} \label{sec:projections} In this section we develop a general framework for investigating the projectability of skeleta or more general subcomplexes of the boundary of a polytope. We briefly recap the necessary polytopal background and then proceed to reduce polytopes to their combinatorial structure -- their combinatorial types. The benefit will be apparent in our results which state conditions under which there is no polytope with specific combinatorial and geometric qualities. Throughout a (convex) \emph{polytope} $P \subset \R^d$ is the convex hull of finitely many points $P = \conv \{ v_1,\dots, v_n \}$ and, equivalently, the bounded intersection of finitely many halfspaces $P = \{ x \in \R^d : a_i\cdot x \le b_i \text{ for all } i = 1,\dots,m \}$. In both representations, we assume that the collection of \emph{vertices} $v_1,\dots, v_n$ and of \emph{facet-defining inequalities} $a_i \cdot x \le b_i$ is irredundant, that is, no vertex or linear inequality can be omitted. It is customary to write the system of linear inequalities succinctly as $Ax \le b$.
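The facet incidences $I_P(F)$ used throughout can be read off directly from the two representations. A minimal sketch for the unit square -- the square and its inequality description are our own illustrative choices, not data from the text:

```python
# V- and H-representation of the unit square: vertices, and
# facet-defining inequalities a_i . x <= b_i listed as (a_i, b_i).
V = [(0, 0), (1, 0), (1, 1), (0, 1)]
ineqs = [((-1, 0), 0),  # -x <= 0
         ((1, 0), 1),   #  x <= 1
         ((0, -1), 0),  # -y <= 0
         ((0, 1), 1)]   #  y <= 1

def facet_incidences(v):
    """I_P(v): indices of the facet-defining inequalities tight at v."""
    return {i for i, (a, b) in enumerate(ineqs)
            if a[0] * v[0] + a[1] * v[1] == b}

# A polygon is simple: every vertex lies on exactly two facets.
incidences = [facet_incidences(v) for v in V]
```

Each vertex is then recovered as the locus where its two tight inequalities hold with equality, which is exactly the facet description below.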
A hyperplane $H = \{ x \in \R^d : c \cdot x = \delta \}$ is \emph{supporting} $P$ if $P \subseteq H^- = \{ x \in \R^d: c \cdot x \le \delta \}$ and $F = P \cap H$ is called a \emph{face} of $P$ -- the empty set and $P$ are also faces of~$P$. In particular, every hyperplane $\{ x \in \R^d : a_i \cdot x = b_i \}$ is supporting and the corresponding faces are called \emph{facets}. The dimension $\dim F$ of a face $F \subseteq P$ is the dimension of its affine span. Vertices are faces of dimension $0$ and facets are faces of dimension $\dim P - 1$. We abbreviate the notions of $k$-dimensional face and $d$-dimensional polytope with $k$-face and $d$-polytope, respectively. \begin{prop}[{\cite[Prop.~2.3]{zie95}}]\label{prop:face_reps} Let $P \subset \R^d$ be a polytope with vertex set $V \subset \R^d$ and facets $F_i$ defined by $a_i \cdot x \le b_i$ for $i = 1,\dots,m$. If $F \subseteq P$ is a face, then \begin{enumerate} \item $F = \conv(F \cap V)$ and \item $F = \{ x \in P : a_i \cdot x = b_i \text{ for all } i \in I_P(F) \}$ with $ I_P(F) := \{ i \in [m] : F \subseteq F_i \} $. \end{enumerate} \end{prop} The collection of faces $\fl(P)$ of a polytope $P$ ordered by inclusion is called the \emph{face lattice} of $P$. The face lattice is a graded lattice of rank $\dim P + 1$ and it can be thought of as the combinatorial structure of $P$. We call two polytopes \emph{combinatorially isomorphic} if $\fl(P) \cong \fl(Q)$ as graded lattices. It follows from Proposition~\ref{prop:face_reps} that the face lattice has two canonical representations. \begin{cor} Let $P$ be a polytope with vertex set $V$ and facets indexed by $[m] = \{ 1, 2, \dots, m\}$. Then $\fl(P)$ is isomorphic to \begin{enumerate} \item $\{ F \cap V : F \subseteq P \text{ a face} \} \subseteq 2^V$ ordered by inclusion and \hfill {\rm(vertex description)} \item $\{ I_P(F) : F \subseteq P \text{ a face } \} \subseteq 2^{[m]}$ ordered by reverse inclusion.
\hfill {\rm(facet description)} \end{enumerate} \end{cor} Our main results will be concerned with the non-existence of geometric realizations of polytopes with given combinatorial features under projection. In order to avoid cumbersome formulations, we wish to abstract from the geometry of a polytope $P$. \begin{dfn} A graded lattice $\P$ is called a \emph{combinatorial type} of dimension $d$, or \emph{$d$-type} for short, if $\P \cong \fl(P)$ for some $d$-polytope $P$. \end{dfn} We want to think about combinatorial types as polytopes stripped from their geometric realization but we will nevertheless stick to our geometric terminology and, for example, call $F \in \P$ a face of $\P$. Moreover, when no confusion arises we use $P$ and $\P$ interchangeably. Identifying the collection of facets of $\P$ with $F_1, \dots, F_m$, we write \[ I_\P(F) = \{ i : F \subseteq F_i \} \subseteq [m] \] for the \emph{facet-incidences} of $\P$. The collection of all faces of $\P$ of dimension at most $k$ is the \emph{$k$-skeleton} of $\P$ and we call a $d$-type $\P$ \emph{simple} if every $k$-face $F$ is contained in exactly $d-k$ facets. \subsection{Geometry and topology of projections} \label{sec:GenCase} Let $P$ be a $d$-polytope and let $\pi: P \rightarrow \pi(P) \subseteq \R^e$ be an affine projection. Throughout it is understood that $d \ge e$ and that $\pi(P)$ is full-dimensional. We want to find conditions under which $P$ and $\pi(P)$ have isomorphic $k$-skeleta. The key concept for establishing such conditions is that of faces strictly preserved under $\pi$. \enlargethispage{3em} \begin{dfn}[Preserved and strictly preserved faces; {\cite{san07,zie04}}] Let $P$ be a polytope, $F \subset P$ a proper face and $\pi: P \rightarrow \pi(P)$ a projection of polytopes. The face $F$ is \emph{preserved} under $\pi$ if \begin{enumerate} \item[ i)] $G = \pi(F)$ is a proper face of $\pi(P)$ and \item[ ii)] $F$ and $G$ are combinatorially isomorphic. 
\end{enumerate} If, in addition, \begin{enumerate} \item[iii)] $\pi^{-1}(G)$ is equal to $F$ \end{enumerate} then $F$ is \emph{strictly preserved}. \end{dfn} With the notion of strictly preserved faces at our disposal, whether $P$ and $\pi(P)$ have isomorphic $k$-skeleta can be decided one face at a time. \begin{lem}\label{lem:iso_skel} Let $P$ be a polytope and let $\pi: P \rightarrow \pi(P)$ be a projection of polytopes. For \mbox{$0 \le k < \dim P$} the polytopes $P$ and $\pi(P)$ have isomorphic $k$-skeleta if and only if every $k$-face of $P$ is strictly preserved under~$\pi$. \end{lem} \begin{proof} Assume that $P$ and $\pi(P)$ have isomorphic $k$-skeleta. We show by induction on $k$ that all preserved $(k+1)$-faces are then strictly preserved. Since $f_l(P) = f_l(\pi(P))$ for $0 \le l \le k$, the $0$-skeleton is strictly preserved. If for $l \ge 1$ the $(l-1)$-skeleton is strictly preserved under projection, then the preimage of every $l$-face of $\pi(P)$ is an $l$-face. Indeed, let $\bar{F}$ be an $l$-face of~$\pi(P)$ and let $F = \pi^{-1}(\bar{F})$. Then the map $\pi|_{F}: F \rightarrow \pi(F) = \bar{F}$ is a projection of polytopes that strictly preserves the $(l-1)$-skeleton of $F$. Thus the $(l-1)$-skeleton of $F$ is a subcomplex of an $(l-1)$-sphere. Hence $F$ is an $l$-face of $P$ and $\bar{F}$ is strictly preserved. Therefore all $k$-faces are strictly preserved since all $k$-faces are preserved and $P$ and $\pi(P)$ have isomorphic $(k-1)$-skeleta. Conversely, since every $i$-face for $i \le k$ is strictly preserved we have that the \mbox{$k$-skeleton} of $P$ is isomorphic to a subposet of the $k$-skeleton of $\pi(P)$. Assume that the inclusion is strict and let $H \subset P$ be a proper face of dimension greater than $k$ and $\pi(H)$ a $k$-face of $\pi(P)$. As a polytope, $H$ has a proper face $F$ of dimension $k$. But~$F$ is a $k$-face of~$P$ with $\pi(F) = \pi(H)$, since $\pi(F) \subseteq \pi(H)$ and $\dim \pi(F) = \dim \pi(H) = k$.
Thus $F$ is not strictly preserved. \end{proof} In \cite{san07}, for every simple polytope $P$ a simplicial complex $\Sigma_0 = \Skel{0}{P}$ is defined in terms of the combinatorics of the vertices of $P$. Furthermore, it is shown that if $\pi : P \rightarrow \pi(P)$ is a projection strictly preserving the vertices, then $\Sigma_0$ is realized as a subcomplex of a (simplicial) sphere whose dimension depends on $\dim \pi(P)$. Theorem~\ref{thm:main} below is a generalization of this result for which we separate the technical part in the following proposition. \begin{prop} \label{prop:main} Let $P$ be a $d$-polytope on $m$ facets and let $\pi : P \rightarrow \pi(P) $ be a projection retaining all vertices of $P$. Then there is a polytope $\A = \A(P,\pi)$ of dimension $m-d-1+\dim\pi(P)$ with vertices $a_1, a_2,\dots, a_m$ such that the following holds: For every strictly preserved face $G \subset P$ the set \[ \A_G := \conv \{ a_i : i \in [m] \setminus \eqset_P(G) \} \] is a simplex face of $\A$. \end{prop} \begin{proof} Let $e = \dim \pi(P)$. Fix a strictly preserved face $G$ and let $I = \eqset_P(G)$. Proposition 3.8 and Lemma 3.2 in \cite{san07} assert that there exists a polytopal Gale transform $\mathcal{G} = \{ g_1, g_2, \dots, g_m \} \subset \R^{d-e}$ with the property that $\mathcal{G}_I := \{ g_i : i \in I \}$ positively spans $\R^{d-e}$. Let $\A = \conv\{a_1,\dots,a_m\}$ be the $(m-d-1+e)$-dimensional polytope Gale-dual to $\mathcal{G}$. Gale duality implies that $\A_G = \conv \{ a_i : i \not\in I\}$ is a face of $\A$. Clearly, every set $\mathcal{G}_J$ with $I \subseteq J \subseteq [m]$ is positively spanning as well. So Gale duality implies that every subset of the vertices of $\A_G$ is also a face. Hence $\A_G$ is a simplex face of $\A$. \end{proof} We call the polytope $\A(P,\pi)$ the \emph{projection polytope}. 
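For vertices, strict preservation can be tested mechanically: a vertex $v$ is strictly preserved iff $\pi(v)$ is a vertex of the shadow and no other vertex of $P$ has the same image, since the fiber over a vertex of $\pi(P)$ is a face of $P$ and hence determined by its vertices. A brute-force sketch in the plane for the triangular prism $\Delta_1 \times \Delta_2$ -- the coordinates and the Carath\'eodory-style extremality test are our own illustrative choices, not part of the text's construction:

```python
from itertools import combinations

def orient(a, b, c):
    """Twice the signed area of the triangle (a, b, c)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def on_segment(p, a, b):
    return (orient(a, b, p) == 0
            and min(a[0], b[0]) <= p[0] <= max(a[0], b[0])
            and min(a[1], b[1]) <= p[1] <= max(a[1], b[1]))

def in_triangle(p, a, b, c):
    d = [orient(p, a, b), orient(p, b, c), orient(p, c, a)]
    return not (any(x < 0 for x in d) and any(x > 0 for x in d))

def is_hull_vertex(p, pts):
    """p is a vertex of conv(pts) iff p lies on no segment and in no
    triangle spanned by the other points (Caratheodory in the plane)."""
    others = [q for q in set(pts) if q != p]
    if any(on_segment(p, a, b) for a, b in combinations(others, 2)):
        return False
    return not any(in_triangle(p, a, b, c)
                   for a, b, c in combinations(others, 3))

def strictly_preserved_vertices(verts, proj):
    """Vertices whose image is a shadow vertex with a singleton fiber."""
    shadows = [proj(v) for v in verts]
    return [v for v, s in zip(verts, shadows)
            if shadows.count(s) == 1 and is_hull_vertex(s, shadows)]

# Triangular prism Delta_1 x Delta_2: six vertices in R^3.
prism = [(x, y, z) for (x, y) in [(0, 0), (1, 0), (0, 1)] for z in (0, 1)]
```

Projecting out the first coordinate, only the two vertices over the triangle vertex $(0,1)$ survive; projecting out the height coordinate collapses every fiber, and no vertex is strictly preserved.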
The collection of strictly preserved faces induces the following simplicial complex in the boundary of $\A$ that certifies the strict preservation. \begin{dfn} Let $P$ be a polytope on $m$ facets and let $\pi: P \rightarrow \pi(P)$ be a projection of polytopes retaining all vertices. We define the \emph{strict projection complex} $\K(P,\pi) \subseteq 2^{[m]}$ as the simplicial complex generated by the sets $\{ [m] \setminus I_P(G) : G \text{ strictly preserved under } \pi\}.$ \end{dfn} We may now rephrase Proposition \ref{prop:main} as follows. \begin{thm}\label{thm:main} Let $P$ be a $d$-polytope on $m$ facets and let $\pi : P \rightarrow \pi(P)$ be a projection strictly preserving all vertices. Then $\K(P,\pi)$ is embedded in a (polytopal) sphere of dimension \mbox{$m-d-2 + \dim \pi(P)$}. \qed \end{thm} \begin{rem} The conditions of Theorem \ref{thm:main} can be weakened to the requirement that for each facet $F$ there is a strictly preserved vertex $v$ with $v \not\in F$. The proof relies on a slight variation of \cite[Proposition 3.8]{san07} which verifies that the set $\mathcal{G}$ is indeed a polytopal Gale transform. \end{rem} As we are primarily interested in the preservation of full skeleta of a given dimension we introduce the following complex of a combinatorial type. \begin{dfn} Let $\P$ be a combinatorial type of dimension $d$ on $m$ facets. For $-1 \le k \le d$, the \emph{$k$-th coskeleton complex} is the simplicial complex \[ \Skel{k}{\P} = \{ \tau \subseteq [m] : \tau \cap I_\P(G) = \emptyset \text{ for some $k$-face } G \in \P \} \subseteq 2^{[m]}. \] \end{dfn} The maximal faces of $\Skel{k}{\P}$ are in bijection with the $k$-faces of $\P$ under the correspondence $G \mapsto [m] \setminus I_\P(G)$. The connection to $\K(P,\pi)$ is the following. 
\begin{obs*}\label{obs:tower} If $\pi : P \rightarrow \pi(P)$ is a projection retaining the $k$-skeleton then \[ \{\emptyset\} = \Skel{-1}{P} \ \subset\ \Skel{0}{P} \ \subset\ \Skel{1}{P} \ \subset\ \cdots \ \subset\ \Skel{k}{P} \ \subset\ \K(P,\pi) \] is an increasing sequence of subcomplexes. \end{obs*} As every $k$-face is contained in at least $d-k$ facets, the dimension of $\Skel{k}{\P}$ is at most $m + k - d - 1$. If $\P$ is a simple $d$-type, then $\Skel{k}{\P}$ is pure of this dimension. In \cite{san07}, $\Skel{0}{\P}$ was defined for simple $d$-types in terms of the \emph{complement complex} of the boundary complex of the dual of~$\P$. Here, we abandon the restriction to simple polytopes. Every simplicial complex can be embedded in a sphere of some dimension. We will be interested in the smallest dimension of such a sphere. \begin{dfn}[Embeddability dimension] Let $\K \subseteq 2^{[m]}$ be a simplicial complex on $m$ vertices. The \emph{embeddability dimension} $\Edim{\K}$ is the smallest integer $d$ such that $\|\K\|$ may be embedded into the $d$-sphere, i.e.\ $\|\K\|$ is homeomorphic to a closed subset of the $d$-sphere. \end{dfn} Theorem \ref{thm:main} can be read as an upper bound on the embeddability dimension of the strict projection complex $\K(P,\pi)$. However, $\K(P,\pi)$ heavily depends on the geometry of the projection and hence a priori our knowledge about $\K(P,\pi)$ is rather limited. The virtue of the coskeleton complex is that it is a subcomplex of $\K(P,\pi)$ defined entirely in terms of the combinatorics of~$P$. \begin{cor}\label{cor:proj_obstruct} Let $\P$ be a $d$-type on $m$ facets and, for $0 \le k < d$, let $\Lskel = \Lskel(\P)$ be the $k$-th coskeleton complex of $\P$. If \[ e \ <\ \Edim{\Sigma_k} + d - m + 2 \] then there is no realization of $\P$ such that a projection to $\R^e$ retains the $k$-skeleton. 
\end{cor} \begin{proof} By contradiction, assume that $P$ is a realization of $\P$ and $\pi: P \rightarrow \pi(P)$ is a projection retaining the $k$-skeleton with $\dim \pi(P) = e < \Edim{\Sigma_k} + d - m + 2$. By Theorem \ref{thm:main}, the above observation, and the fact that the embeddability dimension is monotone along subcomplexes, the complex $\Sigma_k$ is realized in a sphere of dimension \[ \Edim{\Sigma_k} \le m - d - 2 + e < \Edim{\Sigma_k}. \] \end{proof} The following, well-known fact bounds the embeddability dimension of a simplicial complex in terms of its dimension. \begin{prop}[{\cite[Thm.\ 11.1.8, Ex.\ 4.8.25]{grue03}}] \label{prop:edim_triv_bnds} Let $\K$ be a simplicial complex of dimension $\dim \K = \ell$. Then \[ \ell \ \le\ \Edim{\K} \ \le\ 2\ell + 1. \] \end{prop} It is instructive to consider the statement of Corollary~\ref{cor:proj_obstruct} in the extreme cases of Proposition~\ref{prop:edim_triv_bnds}. If the embeddability dimension of the $(m+k-d-1)$-dimensional complex $\Sigma_k$ attains the lower bound then Corollary~\ref{cor:proj_obstruct} implies that the dimension of the target space has to satisfy $e \ge k+1$. This is reassuring as the projection embeds $\Skel{k}{\P}$ into a sphere of dimension $e-1$. Now, suppose that $\Edim{\Sigma_k}$ attains the upper bound and that $\P$ is a simple type. Then $\dim \Skel{k}{\P} = m - (d-k) - 1$ and the $k$-skeleton is not projectable to $e$-space if $e < m-d+2k+1$. This is the linear Van~Kampen--Flores result: \begin{thm}[{\cite[Thm.~2]{grue65}}]\label{thm:vanKampen} Let $\P$ be a $d$-type and let $0 \le k \le \lfloor \frac{d-2}{2} \rfloor$. If \[ e \ \le \ 2k + 1 \] then there is no realization of $\P$ such that a projection to $e$-space retains the $k$-skeleton.
\end{thm} \subsection{Cotype complexes of products} \label{sec:ProductTypes} For our purposes we need better bounds than provided by Proposition~\ref{prop:edim_triv_bnds} and so we need more sophisticated techniques to determine or at least bound the embeddability dimension $\Edim{\Sigma_k}$. In this and the next section we introduce two notions that approximate the coskeleton complex as well as the embeddability dimensions and allow us to calculate bounds. For the cases in which we want to apply Corollary~\ref{cor:proj_obstruct}, the combinatorial types under consideration are products or, at least, closely related (cf. Section~\ref{sec:WedgeProducts}). Let $P \subset \R^d$ and $P^\prime \subset \R^{d^\prime}$ be two polytopes of combinatorial types $\P$ resp.\ $\P^\prime$. The product of $P$ and $P^\prime$ is the polytope $P \times P^\prime = \conv \{ (p,p^\prime) : p \in P, p^\prime \in P^\prime\}$. Combinatorially we define \[ \P \times \P^\prime := \fl(P \times P^\prime) = \{ (F,F^\prime) : F \in \fl(P), F^\prime \in \fl(P^\prime) \text{ such that } F = \emptyset \text{ iff } F^\prime = \emptyset \}. \] Note that this product of combinatorial types differs from the usual direct product of lattices inasmuch as every face of the product $\P\times \P^\prime$ is a product of non-empty faces of $\P$ and $\P^\prime$. In particular, we have $\dim (F,F^\prime) = \dim F \times F^\prime = \dim F + \dim F^\prime$ and the facet incidences of the product are given by \[ I_{P \times P^\prime}(F \times F^\prime) = I_P(F) \uplus I_{P^\prime}(F^\prime). \] The following definition distinguishes the faces of the product by their ``type''. \begin{dfn}\label{dfn:FaceComplex} Let $\P = \P_1 \times \P_2 \times \cdots \times \P_r$ with $\P_i$ a $d_i$-type on $m_i$ facets for $i = 1,\dots,r$. 
For a fixed $0 \le k < d = d_1 + \cdots + d_r$ we call a composition $\tl = (\l_1,\dots,\l_r) \in \Z^r$ with $0 \le \l_i \le d_i$ and $\l_1 + \cdots + \l_r = k$ a \emph{face type} of dimension $k$. We denote by $\fType{\P}$ the collection of $k$-dimensional face types for $\P$. For $\tl \in \fType{\P}$ we define the \emph{cotype complex} of type $\tl$ as the join of coskeleton complexes \[ \Sigma_{\tl}(\P) := \Sigma_{\l_1}(\P_1) \ * \ \Sigma_{\l_2}(\P_2) \ * \ \cdots \ * \ \Sigma_{\l_r}(\P_r). \] \end{dfn} It is clear from the definition of the product that every face of $\P$ belongs to some type and this yields a partition of the coskeleton complex. \begin{prop} \label{prop:ProductFaceTypes} Let $\P = \P_1 \times \P_2 \times \cdots \times \P_r$ and $0 \le k < \dim \P$. Then \[ \Skel{k}{\P} = \bigcup_{\tl \in \fType{\P}} \Skel{\tl}{\P}. \] \\[-5ex]\qed \end{prop} The monotonicity of the embeddability dimension along subcomplexes yields our first bound for the projectability of products. \begin{cor} \label{cor:projectabilityBound} Let $\P$ be a product and $0 \le k < \dim \P$. If there is a face type $\tl \in \fType{\P}$ such that \[ e < \Edim{\Sigma_\tl} + d - m + 2 \] then there is no realization of $\P$ such that a projection to $\R^e$ retains the $k$-skeleton. \qed \end{cor} \begin{example} To illustrate the usefulness of the cotype complex, consider the following question: Is there a realization of $\P = \Delta_1 \times \Delta_2$, a prism over a triangle, such that a projection to the plane preserves the three \emph{vertical} edges (see Figure \ref{fig:Desargue})? An ad-hoc argument for a negative answer is that by \emph{Desargues' Theorem} (cf.~\cite[Sect.~14.3]{cox89}) the lines spanned by the three vertical edges of the prism meet in a common point (possibly at infinity) and a linear projection retains this property.
Using the developed machinery, we see that the assumed projection satisfies the conditions of Proposition \ref{prop:main} and the vertical edges correspond to the face type $\tl = (1,0)$. The cotype complex is also shown in Figure \ref{fig:Desargue}: it consists of three triangles that share a common edge. Corollary \ref{cor:projectabilityBound} implies that such a projection does not exist as $\Skel{(1,0)}{\P}$ is not planar, i.e.\ $\Edim{\Skel{(1,0)}{\P}} = 3$. But $d=3$ and $m=5$ and hence Corollary~\ref{cor:projectabilityBound} yields the non-projectability because $\Edim{\Skel{(1,0)}{\P}} + 3 - 5 + 2 = 3>2=e$. \begin{figure}[ht] \centering \includegraphics[width=.18\linewidth]{figs/desargue} \hspace{.1\linewidth} \includegraphics[width=.18\linewidth]{figs/desargue-proj} \hspace{.1\linewidth} \includegraphics[width=.18\linewidth]{figs/tri} \caption{The triangular prism to the left with bold vertical edges. An alleged projection in the middle with preserved vertical edges. And the associated cotype complex to the right.} \label{fig:Desargue} \end{figure} \end{example} \begin{rem} The definition of the cotype complex relies on properties of the product that are shared by other polytope constructions such as joins, direct sums, and wedge products (see Section \ref{sec:WedgeProducts}). The common generalization is that of a \emph{compound type} (cf.~\cite{raman-thesis}) which is subject to further study. \end{rem} \subsection{Bounding the embeddability dimension} \label{sec:SarkariaIndex} In general it is hard to decide the embeddability of a complex $\K$ into some $\R^e$. The following notions, taken and adapted from \cite{MatousekBZ:BU}, show that in fortunate cases bounds on $\Edim{\K}$ can be obtained combinatorially. For a simplicial complex $\K \subseteq 2^{[m]}$ we denote by $\nf(\K)$ the set of {\it minimal non-faces}, i.e.\ the inclusion-minimal sets in $2^{[m]} \setminus \K$. 
The {\it Kneser graph} $\KG(\nf)$ on a set system $\nf \subseteq 2^{[m]}$ has the elements of $\nf$ as vertices and $F,G \in \nf$ share an edge iff $F$ and $G$ are disjoint. Furthermore, for a graph $G$ we denote by $\chi(G)$ the \emph{chromatic number} of $G$. \begin{dfn} Let $\K$ be a simplicial complex on $m$ vertices and $\nf = \nf(\K)$ the collection of minimal non-faces. The {\it Sarkaria index} of $\K$ is \[ \Sind \K := m - \chi( \KG( \nf ) ) - 1. \] \end{dfn} \begin{thm}[Sarkaria's coloring/embedding theorem {\cite[Sect.~5.8]{MatousekBZ:BU}}] \label{thm:SarkariaColoringEmbedding} Let $\K$ be a simplicial complex. Then \[ \Edim{\K} \ge \Sind \K. \] \end{thm} Every embedding of a simplicial complex $\K$ into a $d$-sphere gives rise to a $\Z_2$-equivariant map of the \emph{deleted join} $\K^{*2}_\Delta$ to a $d$-sphere. The \emph{$\Z_2$-index} of $\K^{*2}_\Delta$ is the smallest such $d$ for which an equivariant map exists. In its original form in~\cite{MatousekBZ:BU} the above theorem bounds from below the $\Z_2$-index of $\K^{*2}_\Delta$ and thus also bounds from below the embeddability dimension of $\K$. The next observation reduces the calculation of the Sarkaria index of a product to its factors. \begin{prop}[{\cite[Prop.~3.10]{san07}}] \label{prop:IndexJoin} Let $\K$ and $\L$ be simplicial complexes. Then \[ \Sind( \K * \L) = \Sind \K + \Sind \L + 1. \] \end{prop} Thus it follows directly from Definition~\ref{dfn:FaceComplex} that the Sarkaria index of a cotype complex is determined by its factors. \begin{cor}\label{cor:LinearIndex} Let $\P = \P_1 \times \P_2 \times \cdots \times \P_r$ and let $\tl \in \fType{\P}$. Then \[ \Sind \Skel{\tl}{\P} = \sum_{i=1}^r \Sind \Skel{\l_i}{\P_i} + r - 1. \] \\[-2em]\qed \end{cor} We determine the exact embeddability dimensions and Sarkaria indices for two coskeleton complexes of an arbitrary combinatorial type. The result depends only on the number of facets. 
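For small complexes the Sarkaria index can be evaluated by brute force straight from the definition: enumerate the minimal non-faces, form their Kneser graph, and compute its chromatic number by exhaustive coloring. A sketch (exponential in the input, so only for toy examples; the polygon helper encodes the cyclic facet labeling of an $m$-gon as an illustration):

```python
from itertools import combinations, product

def sarkaria_index(m, max_faces):
    """Sind(K) = m - chi(KG(nf(K))) - 1 for a complex on vertex set
    {0,...,m-1} given by its maximal faces.  Brute force only."""
    faces = {frozenset(s) for f in max_faces
             for r in range(len(f) + 1) for s in combinations(sorted(f), r)}
    nf = []  # inclusion-minimal non-faces, enumerated by cardinality
    for r in range(1, m + 1):
        for s in combinations(range(m), r):
            fs = frozenset(s)
            if fs not in faces and not any(n <= fs for n in nf):
                nf.append(fs)
    # Kneser graph on nf: edges between disjoint minimal non-faces.
    edges = [(i, j) for i, j in combinations(range(len(nf)), 2)
             if not nf[i] & nf[j]]
    chi = 0  # chromatic number is 0 iff the graph has no vertices
    for k in range(1, len(nf) + 1):
        if any(all(c[i] != c[j] for i, j in edges)
               for c in product(range(k), repeat=len(nf))):
            chi = k
            break
    return m - chi - 1

def coskeleton0_polygon(m):
    """Maximal faces of Sigma_0 of an m-gon: the complement of the
    incidence set I(v) = {i, (i+1) mod m} for each vertex v."""
    return [set(range(m)) - {i, (i + 1) % m} for i in range(m)]
```

For an $m$-gon this reproduces the dichotomy $m-3$ (even) versus $m-2$ (odd) established in Lemma~\ref{lem:Sind_mgon}.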
\begin{prop} \label{prop:IndexSimplePolytopes} Let $\P$ be a $d$-type on $m$ facets. Then $\Sigma_d(\P) = \simplex{m-1}$ is homeomorphic to an $(m-1)$-ball and \[ m-1 = \Edim{\Skel{d}{\P}} = \Sind \Skel{d}{\P}. \] For the $(d-1)$-skeleton we have that $\Sigma_{d-1}(\P) = \partial \simplex{m-1} \cong S^{m-2}$ and \[ m-2 = \Edim{\Skel{d-1}{\P}} = \Sind \Skel{d-1}{\P}. \] \end{prop} \begin{proof} Both identifications follow directly from the definition of the coskeleton complex. Thus the embeddability dimensions are $m-1$ and $m-2$, respectively. For the Sarkaria index we get in the former case that the Kneser graph of the minimal non-faces of $\Sigma_d(\P)$ has no vertices, whereas in the latter case the graph has no edges. \end{proof} In the special case that we have an $r$-fold product $\P^r = \P \times \P \times \cdots \times \P$ of the same combinatorial type $\P$, bounds on the embeddability dimension of $\Skel{k}{\P^r}$ can be obtained by solving a \emph{knapsack-type} problem. \begin{lem}\label{lem:knap} Let $\P$ be a $d$-type and let $r \ge 1$ and $0 \le k \le rd - 1$. For $i = 0, \dots, d$ set $s_i = \Sind \Skel{i}{\P}$ and let $s^*$ be the optimal value of the integer linear program \[ \begin{array}{rr@{\ +\ }r@{\ +\ }r@{\ +\ }rl} \max & s_0\,\mu_0 & s_1\,\mu_1 & \cdots & s_d\,\mu_d \\ \mathsf{s.t.}& 0\,\mu_0 & 1\,\mu_1 & \cdots & d\,\mu_d & =\ k \\ & \mu_0 & \mu_1 & \cdots & \mu_d & =\ r \\ \end{array} \] with $\mu_0,\dots,\mu_d \in \Z_{\ge 0}$. Then $\Edim{\Skel{k}{\P^r}} \ge s^* + r - 1$. \end{lem} \begin{proof} To a face type $\tl \in \fType{\P^r}$ associate the non-negative numbers $(\mu_0, \mu_1,\dots,\mu_d)$ with \[ \mu_i \ =\ \#\{ j \in [r] : \lambda_j = i \}.
\] They satisfy \[ \begin{array}{r@{\ +\ }r@{\ +\ }r@{\ +\ }rl} 0\,\mu_0 & 1\,\mu_1 & \cdots & d\,\mu_d & =\ k \text{ and} \\ \mu_0 & \mu_1 & \cdots & \mu_d & =\ r \\ \end{array} \] since $\tl$ is a partition of $k$ in $r$ parts and the Sarkaria index of $\Skel{\tl}{\P^r}$ is given by $\sum_i s_i\, \mu_i + r - 1$. Vice versa, every such non-negative collection of numbers $\mu_i$ that satisfies the conditions of the integer program gives rise to a valid face type. \end{proof} \section{Products of Polygons} \label{sec:PolygonProducts} Denote by $\P_m$ the combinatorial type of an $m$-gon, that is, a $2$-dimensional combinatorial type on $m \ge 3$ facets labeled in cyclic order. In this section we determine necessary conditions for the existence of a realization of $\P = \P_{m_1} \times \P_{m_2} \times \cdots \times \P_{m_r}$, a product of polygons, that retains the $k$-skeleton under a suitable projection. To that end, we need to determine (bounds on) the embeddability dimension of $\Skel{k}{\P}$ for $0 \le k < 2r = \dim \P$. For a single polygon, Proposition~\ref{prop:IndexSimplePolytopes} leaves us to determine the Sarkaria index for the $0$-th coskeleton complex of an $m$-gon. \begin{lem} \label{lem:Sind_mgon} Let $m \ge 3$ and $\P_m$ the combinatorial type of an $m$-gon. The Sarkaria index for the $0$-th coskeleton complex is given by \[ \Sind \Sigma_0(\P_m) = \left\{ \begin{array}{ll} m-3, & \text{ if $m$ is even, and } \\ m-2, & \text{ if $m$ is odd. } \\ \end{array}\right. \] \end{lem} \begin{proof} We show that the Kneser graph of minimal non-faces of $\Sigma_0(\P_m)$ has chromatic number $2$ and $1$, respectively. For that let us determine the minimal non-faces of $\Skel{0}{\P_m}$: A subset $\sigma \subseteq [m]$ of the facets of $\P_m$ is a non-face of $\Sigma_0(\P_m)$ if and only if every vertex of $\P_m$ is incident to at least one facet $F_i$ of $\P_m$ with $i \in \sigma$.
If a vertex of $\P_m$ is covered twice by $\sigma$ then every other minimal non-face intersects $\sigma$ and thus $\sigma$ is an isolated vertex in the Kneser graph. If $\sigma$ covers every vertex exactly once, then $[m] \setminus \sigma$ is again a minimal non-face. It follows that for odd $m$ the Kneser graph consists of isolated vertices alone while for even $m$ there is exactly one edge. \end{proof} \begin{example} Let us consider $\Skel{0}{\P_5}$, the $0$-th coskeleton complex of the pentagon. Figure~\ref{fig:Moeb} shows the triangles of the $0$-th coskeleton complex of the pentagon, which fit together to form a M\"obius strip. Hence $\Skel{0}{\P_5}$ is not embeddable in the $2$-sphere. \begin{figure}[ht] \centering \includegraphics[width=.6\linewidth]{figs/moeb2} \vspace{.5cm} \caption{The five triangles of the $0$-th coskeleton complex of a pentagon fit together to form a M\"obius strip.} \label{fig:Moeb} \end{figure} \end{example} The example shows that the $0$-th coskeleton complex of an \emph{odd} polygon has a certain twist to it that obstructs the embeddability into $(m-3)$-dimensional space. Lemma~\ref{lem:Sind_mgon} implies that projectability bounds for products of polygons arising from Corollary~\ref{cor:LinearIndex} will only depend on the total number of facets and the number of odd and even polygons. Thus, it suffices to consider the generic product \[ \P \ =\ \P_{\mathsf{even}}^{r_e} \ \times \ \P_{\mathsf{odd}}^{r_o}, \] of $r_e$ \emph{even} and $r_o$ \emph{odd} polygons. We denote by $m$ the total number of facets and by $r = r_e + r_o$ the number of factors. For a product of polygons, we utilize the \emph{knapsack-type} integer program introduced in Lemma~\ref{lem:knap}. \begin{thm}\label{thm:EdimProductPolygons} Let $\P$ be a product of polygons with $r = r_o + r_e$ factors and $m$ facets.
For $0 \le k \le 2r$ the embeddability dimension of the $k$-th coskeleton complex is bounded by \[ \Edim{\Skel{k}{\P}} \ \ge \ m - 1 - r + \left\lfloor \frac{k}{2} \right\rfloor + \min\Big\{ 0, \left\lceil \frac{k}{2} \right\rceil - r_e \Big\}. \] \end{thm} \begin{proof} In the spirit of Lemma~\ref{lem:knap} consider the following integer linear program \[ \newcommand\1{\phantom{1}} \newcommand\e{\mathsf{even}} \renewcommand\o{\mathsf{odd}} \begin{array}{llllll} \mathsf{min} & 2\,\mu_0^\e &+\ \1\,\mu_0^\o &+\ \1\,\mu_1 \\ \mathsf{s.t.} & & &\phantom{+}\ \1\,\mu_1 &+\ 2\, \mu_2&=\ k\\ & \1\,\mu_0^\e &+\ \1\,\mu_0^\o &+\ \1\,\mu_1 &+\ \1\, \mu_2&=\ r\\ & \1\,\mu_0^\e & & & &\le\ r_e\\ & &\phantom{+}\ \1\,\mu_0^\o & & &\le\ r_o\\ \end{array} \] with $\mu_0^\mathsf{even}, \mu_0^\mathsf{odd},\mu_1,\mu_2 \in \Z_{\ge0}$. Every face type $\tl = (\lambda_1,\dots,\lambda_r)\in \Lambda_k(\P)$ gives rise to a feasible solution by the association \[ \begin{array}{lll} \mu_2 &:=\ \# \{ i : \l_i = 2 \} & \text{ (polygons)} \\ \mu_1 &:=\ \# \{ i : \l_i = 1 \} & \text{ (edges)} \\ \mu_0^\mathsf{odd} &:=\ \# \{ i : \l_i = 0, 1 \le i \le r_o \} & \text{ (odd vertices)} \\ \mu_0^\mathsf{even} &:=\ \# \{ i : \l_i = 0, r_o < i \le r \} & \text{ (even vertices)} \\ \end{array} \] and, vice versa, every feasible solution yields a face type. The integer program reduces to a problem in essentially two variables and the optimal solution is easily seen to be \[ \mu^* = r - \left\lfloor \frac{k}{2} \right\rfloor + \max\Big\{ 0, r_e - \left\lceil\frac{k}{2}\right\rceil \Big\}. \] The result then follows from the fact that $\Edim{\Skel{k}{\P}} \ge m - 1 - \mu^*$. \end{proof} In order to put the above result in perspective, let us calculate upper bounds on the embeddability dimension. \begin{prop} \label{prop:prod_poly_upper} Let $\P$ be a product of polygons with $r = r_o + r_e$ factors and let $m$ be the number of facets. 
For $0 \le k < 2r$ the embeddability dimension is bounded by \[ \Edim{\Skel{k}{\P}} \le \left\{ \begin{array}{ll} m - r - r_e - 1,& \text{ if } k = 0\\ m - r - 1,& \text{ if } k = 1\\ m - 2,& \text{ otherwise}. \end{array} \right. \] \end{prop} \begin{proof} \newcommand\bl{{\boldsymbol \ell}} \newcommand\Hskel{\hat{\Sigma}} Let $\P = \P_{m_e}^{r_e} \times \P_{m_o}^{r_o}$ and for $\ell = \min \{ k, 2 \}$ define \[ \hat{\Sigma} \ =\ \Skel{\ell}{\P_{m_e}}^{*r_e} \ * \ \Skel{\ell}{\P_{m_o}}^{*r_o}. \] We claim that $\hat{\Sigma}$ contains $\Sigma = \Skel{k}{\P}$ as a subcomplex. By construction, $\hat{\Sigma}$ and $\Sigma$ have identical vertex sets. For every admissible face type $\tl = (\l_1,\dots,\l_r) \in \fType{\P}$ we have $\l_i \le \ell$ for $i = 1,\dots,r$ and, by Observation \ref{obs:tower} and the relation of subcomplexes among joins, this shows $\Skel{\tl}{\P} \subseteq \Hskel$. Since $\Sigma$ is the union of all cotype complexes, this proves the claim. We will therefore bound the embeddability dimension of $\Hskel$ from above. For $\ell = 2$, we have by Proposition \ref{prop:IndexSimplePolytopes} that $\Skel{2}{\P_n} = \simplex{n-1} \hookrightarrow \partial\simplex{n}$ and thus $\Hskel$ embeds into the boundary of $\simplex{m_e}^{\oplus r_e} \oplus \simplex{m_o}^{\oplus r_o}$, a simplicial sphere of dimension $r_e m_e + r_o m_o - 1 = m - 1$. For $\ell = 1$, we again make use of Proposition \ref{prop:IndexSimplePolytopes} to get $\Skel{1}{\P_n} = \partial\simplex{n-1}$ and therefore $\Hskel \hookrightarrow \partial(\simplex{m_e-1}^{\oplus r_e} \oplus \simplex{m_o-1}^{\oplus r_o})$, which is a simplicial sphere of dimension $r_e(m_e-1) + r_o(m_o-1) - 1 = m - r - 1$. For $\ell = 0$, the $0$-th coskeleton complex of $\P_n$ may be embedded into the boundary of an $(n-1)$-simplex. 
However, for even $n = 2t$ we can do better: Consider the $(n - 2)$-dimensional polytope $Q_t = \simplex{t-1} \oplus \simplex{t-1}$ and the mapping from the vertices of $\Skel{0}{\P_n}$ that maps the $i$-th vertex to the $\lfloor\frac{i}{2}\rfloor$-th vertex of the first summand if $i$ is even and of the second otherwise. We claim that this gives an embedding. Every vertex $v$ of $\P_n$ is the intersection of an odd and an even edge. Thus the corresponding facet $[n] \setminus \eqset(v)$ is the disjoint union of $t-1$ odd and $t-1$ even vertices. These sets correspond to facets of $Q_t$. Thus $\Skel{0}{\P} = \Hskel$ embeds into the boundary of $Q_t^{\oplus r_e} \oplus \simplex{m_o - 1}^{\oplus r_o}$ with $t = \frac{m_e}{2}$. \end{proof} Combining the bounds on the embeddability dimensions of the coskeleton complexes of Theorem~\ref{thm:EdimProductPolygons} with Corollary~\ref{cor:proj_obstruct} we obtain the following obstructions to projectability of products of polygons. \begin{thm} \label{thm:ProjectionProdPolygons} Let $r = r_o + r_e$ and $0 \le k < 2r$. There is no realization of a product of $r_o$ odd and $r_e$ even polygons such that a projection to $e$-dimensional space strictly preserves the $k$-skeleton if \[ e < r + 1 + \left\lfloor \frac{k}{2} \right\rfloor + \min\Big\{0,\left\lceil\frac{k}{2}\right\rceil-r_e\Big\}. \] \\[-3em]\qed \end{thm} In~\cite{sz07}, $e$-dimensional polytopes with the $\left\lfloor\tfrac{e-2}{2}\right\rfloor$-skeleton of the $r$-fold product of \emph{even} polygons are obtained as projections of suitable products of even polygons. The following corollary shows that this construction technique does not generalize to products of odd polygons. \begin{cor} \label{cor:ProjectionOddPolygons} There is no realization of an $r_o$-fold product of \emph{odd} polygons such that the $k$-skeleton is strictly preserved under projection to $\R^e$ if \[ e < r_o + 1 + \left\lfloor\frac{k}{2}\right\rfloor.
\] \end{cor} In the special case of $r_o=2$ and $k=0$ the result reduces to the well-known fact that a product of two odd $m$-gons does not project to an $m^2$-gon. Another case of interest is $k = \lfloor\tfrac{e}{2}\rfloor - 1$. If such a realization and projection exists, the resulting polytope is called neighborly, in analogy to the simplicial neighborly polytopes. \begin{cor} \label{cor:ProjectionMoreEvenPolygons} Let $r = r_e + r_o$ and $e \ge 1$. If \[ \left\{ \begin{array}{rcll} \left\lceil\frac{3e-2}{4}\right\rceil &<& r &\text{ for $r_e < \left\lfloor\frac{e}{4}\right\rfloor$,} \\[.5em] \left\lfloor\frac{e}{2}\right\rfloor &<& r_o &\text{ for $r_e \ge \left\lfloor\frac{e}{4}\right\rfloor$} \end{array}\right. \] then there is no realization of a product of $r_e$ even and $r_o$ odd polygons such that a projection to $e$-space is neighborly. \end{cor} Paraphrasing the situation for products of odd polygons, the result puts an upper bound of $\lceil\tfrac{3e-2}{4}\rceil$ on the number of odd polygons for a ``neighborly'' projection to $\R^e$. \section{Products of simplices} \label{sec:ProductsOfSimplices} In this section we investigate obstructions to skeleta-preserving projections of products of simplices. Appealing to the results from Section \ref{sec:projections}, we bound the embeddability dimension for the respective coskeleton complexes. We denote by $\simplex{n-1} = 2^{[n]}$ the combinatorial type of an $(n-1)$-simplex. The key to determining the embeddability dimension and the Sarkaria index of $\Skel{k}{\simplex{n-1}}$ is the following observation. \begin{obs*} For $n \ge 1$ and $0 \le k \le n - 1$ the $k$-th coskeleton complex $\Skel{k}{\simplex{n-1}}$ of the $(n-1)$-simplex is isomorphic to the $k$-skeleton of $\simplex{n-1}$.
\end{obs*} Thus $\Skel{k}{\simplex{n-1}}$ is a \emph{well-known} complex and the calculation of the Sarkaria index involves the \emph{classical} Kneser graphs $\KG_{n,\ell} = \KG \tbinom{[n]}{\ell}$ for $0 \le \ell \le n$, that is, the Kneser graphs on the collection of $\ell$-sets of an $n$-set. Their chromatic numbers are a celebrated result in topological combinatorics. \begin{thm}[Lov\'{a}sz~\cite{lov78}]\label{thm:lov_kneser} For $1 \le \ell \le n$ the chromatic number of $\KG_{n,\ell}$ is given by \[ \chi(\KG_{n,\ell}) = \begin{cases} n - 2\ell + 2& \text{if } \ell \le \tfrac{n+1}{2} \\ 1 & \text{otherwise.} \end{cases} \] \end{thm} This result immediately implies the Sarkaria index of the $k$-th coskeleton complex $\Skel{k}{\simplex{n-1}}$. \begin{lem} \label{lem:IndexSimplex} For $n \ge 2$ and $0 \le k \le n-1$ the Sarkaria index of the $k$-th coskeleton complex $\Sigma_k = \Skel{k} {\simplex{n-1}}$ of the $(n-1)$-simplex is \[ \Sind \Sigma_k = \left\{ \begin{array}{rl@{\ }cl} 2k+1, & \text{ if } & 0 &\le\ k \ \le \ \tfrac{n-3}{2}, \\ n-2, & \text{ if } & \tfrac{n-3}{2} &< \ k \ \le \ n-2, \\ n-1, & \text{ if } & \multicolumn{2}{r}{ k \ = \ n-1.} \end{array}\right. \] \end{lem} \begin{proof} By the above observation, we have $\KG(\nf(\Sigma_k)) = \KG_{n,k+2}$. The first two cases follow directly from Theorem~\ref{thm:lov_kneser}. The last case follows from Proposition~\ref{prop:IndexSimplePolytopes}. \end{proof} In combination with Proposition \ref{prop:edim_triv_bnds} we obtain the following corollary. \begin{cor}\label{lem:IndexSimplexUpper} Let $\Sigma_k = \Skel{k}{\simplex{n-1}}$ be the $k$-th coskeleton complex of the $(n-1)$-simplex for $n \ge 2$. Then the embeddability dimension satisfies \[ \Edim{\Sigma_k} = \left\{ \begin{array}{ll@{\ }cl} 2 k+1,& \text{ if }& 0 &\le\ k \ \le\ \tfrac{n-3}{2}, \\ n-2, & \text{ if }& \tfrac{n-3}{2} &<\ k \ \le\ n-2,\\ n-1, & \multicolumn{3}{l}{\text{ otherwise.}} \end{array} \right.
\] \qed \end{cor} In the following we denote by \[ \Psimplex = \underbrace{ \simplex{n-1} \times \simplex{n-1} \times \cdots \times \simplex{n-1}}_{r} \] an $r$-fold product of $(n-1)$-simplices. \begin{thm} \label{thm:IndexProdSimplices} Let $n \ge 2$, $r \ge 1$ and $0 \le k < r(n-1)$. The embeddability dimension of the $k$-th coskeleton complex $\Sigma_k = \Skel{k}{\Psimplex}$ of the product of simplices $\Psimplex$ is bounded from below by \[ \Edim{\Sigma_k} \ge \left\{ \begin{array}{l@{\text{ if }}ccccc} 2r + 2k - 1, & 0 & \le & k & \le & r \lfloor\tfrac{n-3}{2}\rfloor\\ \tfrac{1}{2} rn + k - 1, & r\lfloor\tfrac{n-3}{2}\rfloor & < & k & \le & r\lfloor\tfrac{n-2}{2}\rfloor\\ r(n-1)+\alpha-1, & r\lfloor\tfrac{n-2}{2} \rfloor & < & k & < & r(n-1) \end{array} \right. \] and \[ \alpha = \left\lfloor \frac{ k - r \lfloor\frac{n-2}{2}\rfloor }{\lfloor\frac{n+1}{2}\rfloor} \right\rfloor. \] \end{thm} \begin{proof} We use the knowledge gained from Lemma \ref{lem:IndexSimplex} to set up the integer linear program as in Lemma~\ref{lem:knap}. Set $c = \lfloor\frac{n-3}{2}\rfloor$ and let $0 \le k < r(n-1)$. The program is \[ \begin{array}{rll} \max & \sum_{j=0}^{c} (2j+1)\, \mu_j + (n-2) \sum_{j = c+1}^{n-2} \mu_j + (n-1) \mu_{n-1}\\ \mathsf{s.t.} & \phantom{0\,}\mu_0 + \phantom{1\,}\mu_1 + \cdots +\phantom{(n-1)\,}\mu_{n-1}&= r\\ & 0\,\mu_0 + 1\,\mu_1 + \cdots + (n-1)\mu_{n-1} &= k\\ \end{array} \] and subject to the condition that the $\mu_i$ are non-negative and integral. Any feasible solution with value $s$ gives the bound $\Edim{\Sigma_k} \ge r - 1 + s$. Using the two above constraints we rewrite the objective function \[ r + 2k - \min \left( \sum_{j=c+1}^{n-2} (2j-n+3) \mu_j + n \mu_{n-1} \right) \] Note that all coefficients are non-negative and thus the minimum is at least $0$. For $0 < k \le r\lfloor\frac{n-2}{2}\rfloor$ set $\ell = \lceil \frac{k}{r} \rceil \le c+1$. 
Define $\mu = (\mu_0,\dots,\mu_{n-1}) \in \Z^{n}$ by \[ \left(\begin{array}{l} \mu_{\ell-1} \\ \mu_{\ell} \end{array}\right) = \begin{pmatrix} 1 & 1 \\ \ell-1 & \ell \end{pmatrix}^{-1} \begin{pmatrix} r \\ k \end{pmatrix} = \begin{pmatrix} r\ell - k \\ k - r( \ell-1) \end{pmatrix} \] and $\mu_j = 0$ otherwise. For $n$ odd we have $\ell \le \lfloor\frac{n-2}{2}\rfloor = c$ and $\mu$ gives a feasible solution with value $0$ in the minimization above. If $n$ is even and $\ell = c+1$, only $\mu_{\ell}$ contributes to the minimization (with coefficient $2\ell - n + 3 = 1$), and the feasible solution yields a total value of $r + 2k - (k - r(\ell-1)) = k + \tfrac{1}{2}r(n-2)$. Note that the second case is vacuous for $n$ odd. For $r\lfloor\frac{n-2}{2}\rfloor < k$, let $h = k - r \lfloor \frac{n-2}{2} \rfloor - \alpha\lfloor\frac{n+1}{2}\rfloor$ and set \begin{align*} \mu_{n-1} &= \alpha & \mu_{c\phantom{+1}} &= r - \alpha - 1 & \mu_{c+h\phantom{+1}} &= 1 \intertext{ for $n$ odd and } \mu_{n-1} &= \alpha & \mu_{c+1} &= r - \alpha - 1 & \mu_{c+h+1} &= 1 \end{align*} for $n$ even and $\mu_j = 0$ for all other $j$. \end{proof} As can be seen in the proof, the feasible solution for $\ell \le \lfloor\frac{n-3}{2}\rfloor$ is given by a basic solution to the linear program relaxation and it can be checked that this indeed gives the optimal solution. However, the coefficient for $\mu_{n-1}$ prevents this from being the case for $\ell > \lfloor\frac{n-3}{2}\rfloor$. In conjunction with Corollary~\ref{cor:proj_obstruct} this gives the following definitive result concerning the non-projectability of skeleta of $\Psimplex$. \begin{thm} \label{thm:ProjectionProdSimplices} Let $n \ge 2$ and $r \ge 1$ and set $\alpha = \left\lfloor \frac{ k - r \lfloor\frac{n-2}{2}\rfloor }{\lfloor\frac{n+1}{2}\rfloor} \right\rfloor$.
If \[ e < \left\{ \begin{array}{l@{\text{ \ \ for \ \ }}ccccc} r + 2k + 1, & 0 & \le & k & \le & r \lfloor\tfrac{n-3}{2}\rfloor\\ \tfrac{1}{2} r(n-2) + k + 1, & r\lfloor\tfrac{n-3}{2}\rfloor & < & k & \le & r\lfloor\tfrac{n-2}{2}\rfloor\\ r(n-2)+\alpha+1, & r\lfloor\tfrac{n-2}{2} \rfloor & < & k & < & r(n-1) \end{array} \right. \] then there exists no realization of the $r$-fold product $\Psimplex$ of $(n-1)$-simplices such that a projection to $\R^e$ retains the $k$-skeleton.\qed \end{thm} For $r=1$ Theorem~\ref{thm:ProjectionProdSimplices} states that there is no affine projection of the $(2k+2)$-simplex to $\R^{(2k+1)}$ which preserves the $k$-skeleton. This is exactly the linear Van~Kampen--Flores Theorem. Thus, in some sense Theorem~\ref{thm:ProjectionProdSimplices} is a generalization of the Van~Kampen--Flores Theorem from simplices to products of simplices. As a special case it gives yet another proof that no product of two triangles maps linearly to a $9$-gon. Again, let us view the statement of Theorem~\ref{thm:ProjectionProdSimplices} in comparison with upper bounds on the embeddability dimension of the complexes $\Skel{k}{\Psimplex}$. \begin{prop} \label{lem:IndexProdSimplicesUpper} Let $\Sigma_k = \Skel{k}{\Psimplex}$ be the $k$-th coskeleton complex of the $r$-fold product of $(n-1)$-simplices with $n \ge 2$ and $0 \le k < r(n-1)$. Then \[ \Edim{ \Sigma_k } \le \min \{ 2k + 2r - 1, rn - 1 \}. \] \end{prop} \begin{proof} We work along the same lines as in the proof of Proposition \ref{prop:prod_poly_upper} and we use the fact that \[ \Skel{\ell}{\simplex{n-1}} \cong \tbinom{[n]}{\le \ell+1} \hookrightarrow \partial\simplex{n} \] for all $0 \le \ell \le n-1$. Therefrom it follows that $\Sigma_k \hookrightarrow \partial\simplex{n}^{\oplus r} = \partial (\simplex{n^r})^{\triangle}$ and thus $\Edim{\Sigma_k} \le rn -1$. However, since $\dim \Sigma_k = r+k-1$ the bound given by Proposition \ref{prop:edim_triv_bnds} is better for $k \le \frac{1}{2} r(n-2)$. 
\end{proof} Combining the upper bounds with the lower bounds from Theorem \ref{thm:IndexProdSimplices} yields that the result of Theorem \ref{thm:ProjectionProdSimplices} is sharp for $k \le r \lfloor\frac{n-3}{2}\rfloor$. On the geometric side, this is complemented in the work of Matschke, Pfeifle, and Pilaud \cite{mpp09} on \emph{prodsimplicial-neighborly polytopes}. The constructions given in \cite{mpp09} yield products of simplices for which the projections retain the $k$-skeleta for $k \le r \lfloor\frac{n-3}{2}\rfloor$. Their constructions also include products of simplices of different dimensions and they generalize the topological obstructions to give bounds in the mixed case. \section{Wedge products} \label{sec:WedgeProducts} \newcommand\1{\mathbf{1}} The \emph{wedge product} $P \wedgeproduct Q$ of two polytopes $P$ and $Q$ is a geometric degeneration of the product $Q^m$ that bears very interesting combinatorial properties. It corresponds to an iterated subdirect product in the sense of McMullen~\cite{mcm76} and is dual to a \emph{wreath product} as studied by Joswig \& Lutz \cite{lutz05:_one}. Our motivation for studying wedge products stems from the work of R\"orig \& Ziegler \cite[Ch.~4]{Z109} on questions concerning the realizability of equivelar surfaces. In short, an equivelar surface is a $2$-dimensional polytopal surface that satisfies certain regularity conditions. It is both combinatorially and geometrically challenging to construct equivelar surfaces as they exhibit extremal combinatorial behavior. For example, unlike triangulated surfaces, equivelar surfaces need not possess a geometric realization with flat and convex faces (cf.~Betke~\&~Gritzmann~\cite{betke82}). In \cite{Z109} it is shown that a certain family of wedge products $\wp{r}{n-1}$ contains equivelar surfaces in their $2$-skeleta. Furthermore, for all $r \ge 3$ the surface contained in $\wp{r}{1}$ possesses a straight-line realization in $3$-space.
The approach is to give a geometric realization of $\wp{r}{1}$ such that a projection to $\R^4$ strictly preserves the surface and the resulting polytope carries the surface in its lower hull. In this section we prove non-projectability results regarding skeleta of wedge products and, in particular, we show that for $r \ge 4$ and $n \ge 3$ there is no realization of the wedge product $\wp{r}{n-1}$ such that a projection to $\R^4$ strictly preserves the equivelar surface. \subsection{Wedge products and products} \label{ssec:wedge-prod-prod} Wedge products of polytopes were introduced in \cite{Z109} from several perspectives such as an iteration of a generalized wedge construction and in terms of interior and exterior presentations. In this paper, we will only need the description in terms of facet-defining halfspaces. \begin{dfn}[{\cite[Def.~4.10]{Z109}}] For polytopes $P = \{ y \in \R^d : a_i \cdot y \le 1 \text{ for all } i = 1,\dots,m \}$ and $Q = \{ x \in \R^{d^\prime} : B x \le \1 \}$ the \emph{wedge product} of $P$ and $Q$ is the polytope \[ P \wedgeproduct Q := \left\{ (x_1,\dots,x_m,y) \in (\R^{d^\prime})^m \times \R^d : B x_i \le (1 - a_i \cdot y) \1 \text{ for all } i = 1,\dots, m \right\}. \] \end{dfn} The geometry and the combinatorics of wedge products are studied in \cite{Z109}. For our purposes it is sufficient to know the combinatorial type of $P \wedgeproduct Q$ in the form of intersections of facets. \begin{thm}\label{thm:wp_lattice} Let $P$ and $Q$ be polytopes with facets indexed by $[m]$ and $[n]$, respectively. The face lattice of $P \wedgeproduct Q$ is given by the collection of tuples $(H_1,\dots,H_m)$ with $H_1,\dots,H_m \subseteq [n]$ such that \begin{enumerate} \item $H_i = I_Q(F_i)$ for some face $F_i \subseteq Q$ for all $i$, and \item $\{ j \in [m] : H_j = [n] \} = I_P(G)$ for some face $G \subseteq P$. \end{enumerate} The order relation is given by componentwise reverse inclusion. 
The dimension of the face $(H_1,\dots,H_m)$ is given by $\sum \dim F_i + \dim G$. \end{thm} \begin{proof} It follows from the lattice structure of $\fl(P)$ and $\fl(Q)$ that the stated poset is an atomic and coatomic lattice. It is known that two atomic and coatomic lattices are isomorphic if and only if they have isomorphic atom-coatom incidences. The bijection on the collection of facets is clear and the vertices are determined by Theorem~4.13 in \cite{Z109} and correspond to admissible tuples $(H_1,H_2,\dots,H_m)$ with $F_i$ of dimension at most $0$ and $G$ a vertex. \end{proof} An alternative approach to wedge products and Theorem~\ref{thm:wp_lattice} appears in \cite[Thm.~2.21]{raman-thesis}. The following observation links the wedge product to the usual product. \begin{prop}[{\cite[Prop.~4.12]{Z109}}] The intersection of the wedge product $P \wedgeproduct Q$ with the linear space $L = (\R^{d^\prime})^m \times \{ 0 \} \subset (\R^{d^\prime})^m \times \R^d$ is affinely isomorphic to $Q^m$. In particular, the intersection is given by the faces $(H_1,\dots,H_m)$ with $H_i \not= [n]$. \end{prop} It follows from Theorem~\ref{thm:wp_lattice} that every $k$-face for $k \ge 0$ of the product $Q^m$ is the unique intersection of $L$ with a face of dimension $k+\dim P$ of $P \wedgeproduct Q$. \begin{lem} \label{lem:WedgeProdSkeleta} Let $P$ and $Q$ be polytopes with $m$ being the number of facets of $P$. Then for any $0 \le k \le m\,\dim Q$ we have \[ \Skel{k}{Q^m} \hookrightarrow \Skel{k + \dim P}{P \wedgeproduct Q}. \] \qed \end{lem} We call the image of the $k$-skeleton of the product $Q^m$ in $P \wedgeproduct Q$ the \emph{special} $(k+\dim P)$-faces of the wedge product. These special faces cover all vertices of the wedge product by Theorem~\ref{thm:wp_lattice}. The bottom line is that we can re-use the bounds obtained in Section~\ref{sec:ProductsOfSimplices} to handle projections of wedge products of polygons and simplices.
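A minimal illustration of Lemma~\ref{lem:WedgeProdSkeleta} (the choice of $P$ and $Q$ here is ours, for orientation only): take $P = \P_r$ an $r$-gon, so that $m = r$ and $\dim P = 2$, and $Q = \simplex{1}$ a segment. Then $Q^m$ is the $r$-dimensional cube and the lemma yields

```latex
\[
\Skel{k}{(\simplex{1})^{r}} \ \hookrightarrow\ \Skel{k+2}{\wp{r}{1}}
\qquad \text{for } 0 \le k \le r,
\]
```

so that, in particular, each of the $2^r$ vertices of the cube corresponds to a special $2$-face of the wedge product $\wp{r}{1}$.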
\subsection{Projections of wedge products of polygons and simplices} \label{ssec:projectability} In the following we restrict ourselves to the wedge product $\wp{r}{n-1} = \P_r \wedgeproduct \simplex{n-1}$ of an $r$-gon $\P_r$ and an $(n-1)$-simplex $\simplex{n-1}$. It follows from Theorem~\ref{thm:wp_lattice} that $\wp{r}{n-1}$ is an $(r(n-1)+2)$-dimensional polytope with $rn$ facets. Using the correspondence established in Lemma~\ref{lem:WedgeProdSkeleta} we apply the result of Section~\ref{sec:ProductsOfSimplices} to the projectability of the $k$-skeleta of wedge products. \begin{prop} \label{prop:ObstructionSpecialKFaces} There exists no realization of the wedge product $\wp{r}{n-1}$ of an $r$-gon and an $(n-1)$-simplex with $r\ge 4$ and $n \ge 2$ such that the projection to $\R^e$ preserves its special $k$-faces for $k\ge 2$ if \[ e < \left\{ \begin{array}{llccccc} r+2k-1 &\text{if}&2&\le& k &\le&r\lfloor\tfrac{n-3}{2}\rfloor + 2\\ \tfrac{1}{2} r(n-2) + k + 1&\text{if} &r\lfloor\tfrac{n-3}{2}\rfloor + 2 & < & k &\le&r\lfloor\tfrac{n-2}{2}\rfloor + 2\\ r(n-2) + \alpha + 3&\text{if}&r\lfloor\tfrac{n-2}{2}\rfloor + 2 &<& k&<&r(n-1)+2 \\ \end{array} \right. \] and \[ \alpha = \left\lfloor \frac{k-2-r\left\lfloor\frac{n-2}{2}\right\rfloor} {\left\lfloor\frac{n+1}{2}\right\rfloor} \right\rfloor . \] \end{prop} \begin{proof} We are able to apply Theorem~\ref{thm:main} since the special faces cover all vertices and every face of a strictly preserved face is also strictly preserved. The strict projection complex of a projection strictly preserving the special $k$-faces of the wedge product contains the $(k-2)$-nd coskeleton complex of the product $\Psimplex$ (see Lemma~\ref{lem:WedgeProdSkeleta}). Hence the embeddability dimension of the special $k$-faces of the wedge product is at least the embeddability dimension of $\Skel{k-2}{\Psimplex}$, which is bounded from below in Theorem~\ref{thm:IndexProdSimplices}.
Plugging these bounds into Corollary~\ref{cor:proj_obstruct} we obtain: \[ e < \Edim{\Skel{k-2}{\Psimplex}} - r + 4 \le \Edim{\Skel{k}{\wp{r}{n-1}}} + (r(n-1) + 2) - rn + 2.\qedhere \] \end{proof} Since the $k$-skeleton of the wedge product obviously contains the special $k$-faces we obtain the following result for the projectability of skeleta of the wedge product $\wp{r}{n-1}$. \begin{thm} \label{thm:ObstructionWedgeProductSkeleta} There exists no realization of the wedge product $\wp{r}{n-1}$ of an $r$-gon and an $(n-1)$-simplex with $r\ge 4$ and $n \ge 2$ such that the projection to $\R^e$ preserves the $k$-skeleton for $k \ge 0$ if \[ e < \left\{ \begin{array}{llccccc} r+2k-1 &\text{if}&0&\le& k &\le&r\lfloor\tfrac{n-3}{2}\rfloor + 2\\ \tfrac{1}{2} r(n-2) + k + 1&\text{if} &r\lfloor\tfrac{n-3}{2}\rfloor + 2 & < & k &\le&r\lfloor\tfrac{n-2}{2}\rfloor + 2\\ r(n-2) + \alpha + 3&\text{if}&r\lfloor\tfrac{n-2}{2}\rfloor + 2 &<& k&<&r(n-1)+2 \\ \end{array} \right. \] and \[ \alpha = \left\lfloor \frac{k-2-r\left\lfloor\frac{n-2}{2}\right\rfloor} {\left\lfloor\frac{n+1}{2}\right\rfloor} \right\rfloor . \] \end{thm} \begin{proof} The vertices of the wedge product correspond to the vectors $\H_V$ given by: \begin{equation} \label{eq:WPVerts} \H_V = \left\{ (H_1,\dots,H_r) \in \wp{r}{n-1} : H_i \not= [n] \Rightarrow |H_i| = n-1 \right\}. \end{equation} We pick a subfamily of vertices corresponding to the vectors $([n],[n],H_3,\dots,H_r)$ with $|H_i|=n-1$ for $i = 3,\dots,r$. Considering only the last $r-2$ components of the vector we obtain the following inclusion of coskeleton complexes: \[ \Skel{0}{\simplex{n-1}^{r-2}} \hookrightarrow \Skel{0}{\wp{r}{n-1}}. \] The embeddability dimension of $\Skel{0}{\simplex{n-1}^{r-2}}$ is $2r-5$ by Theorem~\ref{thm:IndexProdSimplices}. Thus we obtain the following bound on $e$ with Corollary~\ref{cor:proj_obstruct}: \[ \Edim{\Skel{0}{\wp{r}{n-1}}} + r(n-1)+2 - rn + 2 \ge \Edim{\Skel{0}{\simplex{n-1}^{r-2}}} - r + 4 = r - 1 > e.
\] The $1$-skeleton of the wedge product contains a subfamily of edges corresponding to the vectors $([n],H_2,\dots,H_r)$. As for the vertices we obtain an inclusion of coskeleton complexes: \[ \Skel{0}{\simplex{n-1}^{r-1}} \hookrightarrow \Skel{1}{\wp{r}{n-1}}. \] Since by Theorem~\ref{thm:IndexProdSimplices} the embeddability dimension of $\Skel{0}{\simplex{n-1}^{r-1}}$ is $2r-3$ we obtain the following bound on the target dimension $e$ using Corollary~\ref{cor:proj_obstruct}: \[ \Edim{\Skel{1}{\wp{r}{n-1}}} - r + 4 \ge \Edim{\Skel{0}{\simplex{n-1}^{r-1}}} - r + 4 = r + 1 > e. \] For $k \ge 2$ we simply use Proposition~\ref{prop:ObstructionSpecialKFaces}. \end{proof} \subsection{Equivelar surfaces in wedge products} It is shown in \cite{Z109} that the wedge product $\wp{r}{n-1} = \P_r \wedgeproduct \simplex{n-1}$ of an $r$-gon and an $(n-1)$-simplex carries a very interesting equivelar surface $\WPsurf{r}{2n}$ in its $2$-skeleton. A main result of \cite{Z109} is that in some cases this combinatorial embedding can be used to obtain a geometric embedding in $3$-space. Using the machinery developed in Section~\ref{sec:projections} and the results of Section~\ref{sec:ProductsOfSimplices} we complement the above result about projections of equivelar surfaces. The $2$-skeleton of the wedge product $\wp{r}{n-1}$ is a fertile ground for embedding equivelar surfaces. Consider the special $2$-faces of Lemma~\ref{lem:WedgeProdSkeleta}. They correspond to: \[ \H_R = \left\{ (H_1,\dots,H_r) \in \wp{r}{n-1} : |H_i| = n-1 \text{ for all } i \in [r] \right\}. \] So for every choice $j_1,\dots,j_r \in [n]$ the tuple \[ H = ([n] \setminus j_1, [n] \setminus j_2, \dots, [n] \setminus j_r) \] represents a special $2$-face of $\wp{r}{n-1}$ and each such face is isomorphic to an $r$-gon.
Indeed, for every $i \in [r]$ the tuple \[ H^i = ([n] \setminus j_1, \dots, [n] \setminus j_{i-1}, [n], [n] \setminus j_{i+1}, \dots, [n] \setminus j_r) \] corresponds to an edge of $H$ by Theorem~\ref{thm:wp_lattice} and hence $H$ is a $2$-dimensional face with $r$ edges. We denote the collection of these $r$-gon edges by $\H_E$: they correspond to tuples $(H_1,\dots,H_r)$ with $|H_i| = n -1$ for all but a unique $i_0 \in [r]$ with $H_{i_0} = [n]$. In~\cite{Z109} the following subcomplex of the wedge product $\wp{r}{n-1}$ is discussed: For $r \ge 3$ and $n \ge 2$ consider the subcomplex $\WPsurf{r}{2n}$ generated by the following collection of $r$-gons of the wedge product $\wp{r}{n-1}$: \[ \WPsurf{r}{2n} = \Big\{ ([n] \setminus j_1,\ldots,[n]\setminus j_{r}) : \sum_{k=1}^{r} j_k \equiv 0,1 \mod n\Big\} \subseteq \H_R. \] The subcomplex $\WPsurf{r}{2n}$ contains all the vertices of $\wp{r}{n-1}$ and all the edges of $\H_E$. It is shown in~\cite{Z109} that $\WPsurf{r}{2n}$ is a regular (polyhedral) surface of type $\{r,2n\}$, i.e.\ an (orientable) polyhedral $2$-manifold that is \begin{compactitem} \item \emph{equivelar}: all faces are $r$-gons and every vertex is incident to $2n$ faces, and even \item \emph{regular}: the automorphism group acts transitively on the flags of the surface. \end{compactitem} For the special case $n=2$ there are deformed realizations of the wedge products $\wp{r}{1}$ and projections that yield embeddings of the surfaces $\WPsurf{r}{4}$ in $\R^3$. \begin{thm}[{\cite[Thm.~4.26]{Z109}}] The wedge product $\wp{r}{1}$ has a realization such that all the faces corresponding to the surface $\WPsurf{r}{4}\subset \wp{r}{1}$ are preserved by the projection to $\R^4$. Hence there is a realization of $\WPsurf{r}{4}$ in $\R^3$. \end{thm} So there was hope that some realizations of the wedge products for other parameters $r$ and $n$ would yield realizations in $\R^3$ as well.
But with the techniques developed in this article we obtain the following negative result. \begin{thm} \label{thm:ObstructionWedgeProductSurface} There is no realization of the wedge product $\wp{r}{n-1}$, with $n \ge 3$ and $r \geq 4$, such that all the faces corresponding to the surface $\WPsurf{r}{2n}$ are strictly preserved by the projection $\pi:\R^{2+r(n-1)}\rightarrow\R^{e}$ for $e < r+1$. \end{thm} \begin{proof} We prove the theorem by contradiction. So assume that there exists a realization of $\wp{r}{n-1}$ such that the surface $\WPsurf{r}{2n}$ is strictly preserved by the projection to $\R^e$ with $e < r+1$. By Theorem~\ref{thm:main} the embeddability dimension of the strict projection complex $\K = \K(\wp{r}{n-1},\pi)$ is then \begin{equation} \Edim{\K} \le rn-(r(n-1)+2)+e-2 = r + e - 4 < 2r-3. \tag{\sf WP} \label{eq:edim_wp} \end{equation} Since the polygons of the wedge product surface $\WPsurf{r}{2n}$ are strictly preserved by the projection $\pi$ the simplicial complex $\K$ contains a subcomplex $\Sigma$ corresponding to the polygons of $\WPsurf{r}{2n}$. The strict projection complex of all special $r$-gons is $\Skel{0}{\Psimplex}$ by Lemma~\ref{lem:WedgeProdSkeleta}. Hence \[ \Sigma = \{(j_1,\dots,j_r)\,|\, \sum_{k=1}^{r} j_k \equiv 0,1 \mod n\} \ \subset\ \Skel{0}{\Psimplex}. \] We remove the asymmetry from $\Sigma$ by only considering the edges $([n],[n]\setminus j_2,\dots,[n]\setminus j_{r})$ for $j_i \in [n]$ of the wedge product. The strict projection complex of these edges is $\Skel{0}{\simplex{n-1}^{r-1}}$. By Theorem~\ref{thm:IndexProdSimplices} the embeddability dimension of $\Skel{0}{\simplex{n-1}^{r-1}}$ is $2r-3$. Hence the embeddability dimension of $\K$ is at least $2r-3$ because $\Skel{0}{\simplex{n-1}^{r-1}} \subseteq \Sigma \subseteq \K(\wp{r}{n-1},\pi)$. This is a contradiction to Equation~\eqref{eq:edim_wp}. 
So there exists no realization of $\wp{r}{n-1}$ such that the surface $\WPsurf{r}{2n}$ is strictly preserved by the projection to $\R^e$. \end{proof} Theorem~\ref{thm:ObstructionWedgeProductSurface} does not obstruct straight-line realizations of the surfaces $\WPsurf{r}{2n}$ in $\R^3$ in general. It only exhibits the limitations of the approach taken in \cite{Z109}. \bibliographystyle{siam} \begin{small}
\section{Introduction} In the seminal papers \cite{campa1, campa2} Sergio Campanato introduced the spaces that nowadays are named after him, and used them to prove embedding theorems for Sobolev-Morrey spaces defined on bounded open sets $\Omega$ in $\mathbb{R}^n$. In particular, it was proved that if $f$ is a function belonging to the Campanato space $ {\mathcal L}^{\lambda}_{p }(\Omega )$ with $n<\lambda $ (and $\lambda \le n+p$) then $f$ is H\"{o}lder continuous with exponent $(\lambda -n)/p$, that is, for some $c>0$ \begin{equation} \label{intro1} |f(x)-f(y)| \le c |x-y|^{\frac{\lambda -n}{p}}, \end{equation} for all $x,y\in \Omega $, and it was also proved that if $f$ is a function in the Sobolev-Morrey space $W^{l,\lambda}_p(\Omega )$ with $0\le \lambda <n$, $ n-\lambda <pl $ (and $pl < n-\lambda +p$) then $f$ is H\"{o}lder continuous with exponent $l+\frac{\lambda -n}{p}$, that is, for some $c>0$ \begin{equation} \label{intro2} |f(x)-f(y)| \le c |x-y|^{l+\frac{\lambda -n}{p}}, \end{equation} for all $x,y\in \Omega $. Here, for simplicity, $l\in \mathbb{N}$ and $W^{l,\lambda}_p(\Omega )$ is the space of functions with weak derivatives up to order $l$ in the classical Morrey space $L_p^{\lambda }(\Omega )$, but we note that the focus of \cite{campa1, campa2} was mainly on the case of fractional order of smoothness $l$ since the case of integer exponents was already discussed in \cite{greco, nirenberg}. See Section~\ref{embe} for precise definitions. The importance of these spaces is evident in regularity theory and harmonic analysis. The classical regularity approach was based on the theory of singular integrals introduced by A.P.~Calder\'{o}n and A.~Zygmund~\cite{CZ}. Using an approach based on the heat kernel, J.~Nash~\cite{N} was able to solve the XIX Hilbert problem about the analyticity of the solutions to regular problems in the calculus of variations. One year before, E.~De Giorgi~\cite{DG} proved the same result with a different approach.
He introduced a suitable function space, the so-called De Giorgi class, proved that any solution to regular problems in the calculus of variations belongs to this class and showed the embeddings of the De Giorgi classes in the space of H\"older continuous functions. It was a natural question to ask if this approach could recover the classical Calder\'{o}n and Zygmund theory for equations with regular coefficients (i.e., continuous or H\"older continuous coefficients). This question was proposed by Ennio De Giorgi and Guido Stampacchia and solved by S. Campanato with the introduction of the Campanato spaces. The regularity in $L^p$ spaces was proved by S.~Campanato and G.~Stampacchia~\cite{CS} with the supplementary hypothesis of the H\"older continuity of the coefficients (for a proof of such a result, with only the assumption of the continuity of the coefficients, see \cite{CTV}). These spaces were used for proving regularity of solutions to elliptic/parabolic systems/equations in variational/nonvariational form of the second (and higher) order (see, for instance, \cite{C} and \cite{G}). The other important field of application of these function spaces is harmonic analysis. T.~Walsh~\cite{W} proved that the dual space of the Hardy space $H^p(R^N)$ is exactly the Campanato space. The theory of Hardy spaces $H^p(R^N)$ has important applications in harmonic analysis and partial differential equations (for instance, see \cite{FS, S, S1, SW}). We recall that when $p \in (1, \infty)$, $L^p(R^N)$ and $H^p(R^N)$ are isomorphic; but when $p \in (0, 1]$, some singular integrals (for example, Riesz transforms) are bounded on $H^p(R^N)$ but not on $L^p(R^N)$, and this fact makes the space $H^p(R^N)$ the right space in which to study the boundedness of operators. In \cite{FS} C.~Fefferman and E.M.~Stein characterised the Hardy space $H^1(R^N)$ as the predual of the space BMO$(R^N)$.
The atomic and the molecular characterizations of $H^p (R^N)$ and their applications were studied by many authors; see, for example, \cite{Co, Co1, L, NS, SW, TW}. These characterizations (atomic and molecular) are necessary to extend the theory of Hardy spaces to spaces of homogeneous type in the sense of R.R.~Coifman and G.~Weiss \cite{CoW, CoW1}, which is, by far, one of the most general settings for singular integrals. Going back to the initial work of S. Campanato, we note that inequality \eqref{intro1} was proved under the assumption that $\Omega$ satisfies the so-called property $(A)$ which requires the existence of a constant $M>0$ such that \begin{equation} \label{conditiona} |\Omega \cap B(x,r)|\geq Mr^n\, , \end{equation} for all $x\in \Omega$ and all $r>0$ smaller than the diameter of $\Omega$. Inequality \eqref{intro2} was obtained under the stronger assumption that $\Omega$ is of class $C^{0,1}$, which means that, locally at the boundary, $\Omega$ can be represented as the subgraph of a Lipschitz continuous function (possibly after a rotation of coordinates). Note that for $\lambda =0$ we have $W^{l,\lambda}_p(\Omega )= W^{l}_p(\Omega )$ and inequality \eqref{intro2} is the celebrated Sobolev-Morrey inequality. In this paper, we consider the case of open sets $\Omega$ of class $C^{0,\gamma}$ with $0<\gamma \le 1$, which means that the functions describing the boundary of $\Omega$ are H\"{o}lder continuous of exponent $\gamma$. It is a matter of folklore that passing from Lipschitz to H\"{o}lder continuity assumptions at the boundary of an open set is highly nontrivial (see, e.g., the recent paper \cite{lapi}), and it is interesting to note that S. Campanato himself devoted his paper \cite{campa3} to the study of embeddings for Sobolev spaces on open sets with power-type cusps at the boundary. We refer to the extensive monograph \cite{mazpob} for a recent introduction to the analysis of function spaces on irregular domains.
We also refer to the classical monograph \cite{kufner} for an introduction to Morrey-Campanato spaces on regular domains. Broadly speaking, one may say that classical embedding theorems for Sobolev-Morrey-Campanato spaces hold on $C^{0,\gamma}$ domains provided one replaces (in the inequalities involved) the dimension $n$ of the underlying space by $n_{\gamma}=(n-1)/\gamma +1$, a fact which also appeared in \cite{campa3}. It is important to note that $n_{\gamma}>n$ if $\gamma <1$, and this typically leads to a deterioration in the estimates. For instance, if $\Omega$ is a domain with outer power-type cusps with exponent $\gamma$, property (A) above holds provided $n$ is replaced by $n_{\gamma}$ in the right-hand side of \eqref{conditiona}. On the other hand, we observe that if one wishes to control $|\Omega \cap B(x,r) |$ from above, the best one can do is to write $|\Omega \cap B(x,r) |\le cr^n $, since it is impossible here to use $n_{\gamma}$. This discrepancy between the upper and lower bounds for $|\Omega \cap B(x,r) |$ indicates that the standard Euclidean metric is not suitable to deal with cusps and suggests adapting the balls $B(x,r)$ to the type of domain under consideration. For example, if $\Omega$ is given by the cusp \begin{equation} \label{cusp} \{(\bar x, x_n)\in \mathbb R^n:\ \bar x \in \mathbb{R}^{n-1}, \ x_n>|\bar x|^{\gamma} \} \end{equation} with $\gamma <1$, then one should replace the Euclidean ball $B(x,r)$ by the anisotropic ball \begin{equation} \label{ballintro}B_{\gamma}(x,r)=\bigl\{y\in \mathbb R^n:\ |\bar x-\bar y|<r^{\frac{1}{\gamma}},\ |x_n-y_n|<r \bigr\} \end{equation} in which case $ |\Omega \cap B_{\gamma }(x,r) |$ is asymptotic to $r^{n_{\gamma}} $ as $r\to 0$, and the discrepancy above disappears. Accordingly, in the right-hand side of inequalities \eqref{intro1}, \eqref{intro2} one has to replace the Euclidean distance $|x-y|$ by the anisotropic one $|\bar x -\bar y|^{\gamma} +|x_n-y_n| $.
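For the model cusp \eqref{cusp} the asymptotics $|\Omega\cap B_{\gamma}(x,r)|\asymp r^{n_{\gamma}}$ can be verified directly at the vertex $x=0$ (a sketch; $c_{n,\gamma}$ denotes a constant depending only on $n$ and $\gamma$):

```latex
\[
|\Omega \cap B_{\gamma}(0,r)|
= \int_{|\bar y| < r^{1/\gamma}} \bigl( r - |\bar y|^{\gamma} \bigr)\, d\bar y
= r^{\frac{n-1}{\gamma}} \int_{|\bar z| < 1} r \bigl( 1 - |\bar z|^{\gamma} \bigr)\, d\bar z
= c_{n,\gamma}\, r^{\frac{n-1}{\gamma}+1}
= c_{n,\gamma}\, r^{n_{\gamma}},
\]
```

where the first equality uses that on $\Omega \cap B_{\gamma}(0,r)$ one has $|\bar y|^{\gamma} < y_n < r$, and the second follows from the substitution $\bar y = r^{1/\gamma} \bar z$.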
This idea was already used by G.C.~Barozzi~\cite{ba}, where some results of S. Campanato are extended to the case of domains with power-type cusps, and was further developed by Giuseppe Da Prato in the fundamental paper \cite{da}, where more general metrics are considered. We also note that final results for domains satisfying horn-type conditions are contained in the classical monograph \cite{be1, be2}. Although the existing literature seems to provide a complete picture of this subject, we have found it quite surprising that some results contained in the above-mentioned papers incorporate quite restrictive assumptions. In particular, in the analysis of inequality \eqref{intro2} for anisotropic metrics, \cite[Theorem~3]{ba} eventually assumes for simplicity that $\Omega$ is a parallelepiped, \cite[Theorem~4.1]{da} assumes that $\Omega$ is convex, and the estimate in \cite[Theorem~27.4.2]{be2} is proved for all $x,y\in \Omega$ such that the segment $[x,y]$ is contained in $\Omega$. A different approach to the analysis of function spaces in domains of class $C^{0,\gamma}$ was suggested by Victor I. Burenkov in \cite{burpaper2, burpaper1}, where he defined a new extension operator which, contrary to other classical extension operators, allows one to deal not only with Lipschitz domains but also with $C^{0,\gamma}$ domains (as well as with anisotropic Sobolev spaces and extensions from manifolds of dimension $m<n$). Note that the flexibility of Burenkov's Extension Operator has been recently exploited in \cite{fala}, where it is proved that this operator preserves general Sobolev-Morrey spaces, including the case of the classical Sobolev-Morrey spaces $W^{l,\lambda}_p(\Omega )$. If $\gamma <1$ then a deterioration in the smoothness of the extended functions is expected and, in fact, Burenkov's Extension Operator maps the Sobolev space $W^{l}_p(\Omega)$ to the Sobolev space $W^{[\gamma l]}_p(\mathbb{R}^n)$, where $[\gamma l]$ is the integer part of $\gamma l$. 
The exponent $[\gamma l]$ is sharp (in terms of Sobolev spaces). Thus, having a function extended to the whole of $\mathbb{R}^n$ allows one to apply embedding theorems in $\mathbb{R}^n$ and eventually to return to $\Omega$ by mere restriction. Although the target space $W^{[\gamma l]}_p(\mathbb{R}^n)$ is sharp, it is observed already in \cite{burpaper1} that in general the embedding theorems proved via this procedure are not sharp, since the deterioration given by $[\gamma l]$ is too much for this purpose. However, this procedure has the advantage of giving at least some information even in the most difficult cases. The goal of the present paper is twofold. First, we revise the above-mentioned old results by adapting their formulation to the case of elementary domains of class $C^{0,\gamma}$. In passing, we also indicate how it is possible to replace the convexity assumption in \cite[Theorem~4.1]{da} by the assumption that a Poincar\'{e} inequality for balls holds, see Theorem~\ref{dapratovariant}. Secondly, we indicate how to adapt the proofs of \cite{fala} to the case of domains of class $C^{0,\gamma}$, in order to prove that Burenkov's Extension Operator maps the Sobolev-Morrey space $W^{l,\lambda}_p(\Omega )$ to the Sobolev-Morrey space $W^{[\gamma l],\gamma \lambda}_p(\mathbb R^n )$, analysing also the case of Morrey norms defined by even more general weights, see Theorem~\ref{sobolevmorreybis}. Note the extra deterioration in the Morrey exponent, which passes from $\lambda $ to $\gamma \lambda$. Moreover, we apply this extension result to recover an estimate of type \eqref{intro2} in domains of class $C^{0,\gamma}$, see Corollary~\ref{maincor}. We observe that, although the new estimate is not sharp, it is obtained without any extra geometric assumptions on $\Omega$ or on the points $x,y\in \Omega$ as done by other authors. 
It is important to observe that our extension result is obtained by using Morrey norms involving Euclidean balls both in $\Omega$ and in $\mathbb{R}^n$, even though for elementary domains of form \eqref{cusp} it would be natural to use anisotropic balls of type \eqref{ballintro} in $\Omega$. This is due to technical reasons involved in our proofs, which prevent us from controlling reflected balls in an anisotropic way, see Lemma~\ref{inc0}. On the other hand, since our final goal is to deal with general open sets $\Omega$ of class $C^{0,\gamma}$ (where cusps may have a different orientation depending on the part of the boundary under consideration), in principle there is no special reason why one should use the balls of type \eqref{ballintro} in the whole of $\Omega$. Thus, either one changes the definition of the Morrey spaces, adapting balls to the orientation of each local chart, or uses, for uniformity, Euclidean balls in the whole of $\Omega$. Our approach eventually leads us to choose the second option. With reference to the problem of the extension of Sobolev-Morrey spaces, besides \cite{fala}, we would also like to quote the papers \cite{koskzhou}, \cite{lavio}, \cite{vitolo}. \section{Embedding theorems on elementary $C^{0,\gamma}$ domains } \label{embe} In this paper the elements of ${\mathbb R}^n$, $n\geq 2$, are denoted by $x=(\overline x,x_n)$ with $\overline x=(x_1,\dots , x_{n-1})\in{\mathbb R}^{n-1}$ and $x_n\in {\mathbb R}$. For any $\gamma\in ]0,1]$, we consider the metric $\delta_{\gamma}$ in ${\mathbb R}^n$ defined by $$ \delta_{\gamma}(x,y)= \max\{ |\bar x-\bar y|^{\gamma}, |x_n-y_n| \}\, , $$ for all $x,y\in {\mathbb R}^n$ and we denote by $B_{\gamma}(x,r)$ the corresponding open balls of centre $x$ and radius $r$, that is \begin{eqnarray*}\lefteqn{ B_{\gamma}(x,r)=\{y\in {\mathbb R}^n:\ \delta_{\gamma}(x,y)<r \} } \\ & & \qquad\qquad\qquad=\bigl\{y\in {\mathbb R}^n:\ |\bar x-\bar y|<r^{\frac{1}{\gamma}},\ |x_n-y_n|<r \bigr\} . 
\end{eqnarray*} Note that the Lebesgue measure of $B_{\gamma}(x,r)$ is given by $$ |B_{\gamma}(x,r) | =2\omega_{n-1}r^{n_{\gamma}} $$ where $$ n_{\gamma}= \frac{n-1}{\gamma}+1\, , $$ and $\omega_{n-1}$ is the measure of the unit ball in ${\mathbb R}^{n-1}$. Note also that $n_{\gamma} = n+(n-1)(\frac{1}{\gamma}-1 )$, hence $n_{\gamma}\geq n$ and equality occurs if and only if either $n=1$ or $\gamma=1$. Given $p\in [1,\infty [$, a function $\phi : ]0,\infty [\to ]0,\infty [$ and an open set $\Omega$ in ${\mathbb R}^n$, for all $f\in L^p (\Omega )$ we set \begin{equation*} \| f \|_{L^{\phi }_{p, \gamma }(\Omega ) }:=\sup_{x\in \Omega}\sup_{r>0 }\left(\frac{1}{\phi(r)}\int_{B_{\gamma}(x, r)\cap \Omega}|f(y) |^pdy \right)^{\frac{1}{p}} \end{equation*} and \begin{equation*} | f |_{{\mathcal L}^{\phi }_{p, \gamma }(\Omega ) }:=\sup_{x\in \Omega}\sup_{r>0 }\left(\frac{1}{\phi(r)}\int_{B_{\gamma}(x, r)\cap \Omega}|f(y) -\Xint-_{B_{\gamma}(x, r)\cap \Omega} f(z)dz |^pdy \right)^{\frac{1}{p}}\, . \end{equation*} The generalised Morrey spaces are defined by $$ L^{\phi }_{p, \gamma }(\Omega )=\{ f\in L^p (\Omega ):\ \| f \|_{L^{\phi }_{p, \gamma }(\Omega ) }< \infty \} , $$ and the generalised Campanato spaces are defined by $$ {\mathcal L}^{\phi }_{p, \gamma }(\Omega )=\{ f\in L^p (\Omega ):\ | f |_{{\mathcal L}^{\phi }_{p, \gamma }(\Omega ) }< \infty \} . $$ For any $l\in {\mathbb N}$, we consider also the Sobolev-Morrey spaces $$ W^{l,\phi }_{p,\gamma}(\Omega )=\{ f\in L^p (\Omega):\ D^{\alpha }f\in L^{\phi }_{p, \gamma }(\Omega ),\ \forall |\alpha|\le l \} $$ endowed with the norm $$ \| f \|_{W^{l,\phi}_{p,\gamma}(\Omega )}=\sum_{|\alpha | \le l}\| D^{\alpha }f \|_{ L^{\phi }_{p, \gamma }(\Omega )}\, . $$ If $\lambda \geq 0$ and $\phi (r)=\min \{r^{\lambda }, 1 \}$ for all $r>0$ then the corresponding spaces will be denoted by $L^{\lambda }_{p, \gamma }(\Omega )$, ${\mathcal L}^{\lambda }_{p, \gamma }(\Omega )$, $W^{l,\lambda }_{p,\gamma}(\Omega )$. 
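As a simple example clarifying the role of the exponent $n_{\gamma}$ in these definitions, note that if $\Omega$ is bounded then every constant function belongs to $L^{\lambda }_{p, \gamma }(\Omega )$ whenever $\lambda \le n_{\gamma}$: indeed, for $f\equiv 1$ we have \begin{equation*} \frac{1}{\phi(r)}\int_{B_{\gamma}(x, r)\cap \Omega} dy \le \frac{\min \{ 2\omega_{n-1}r^{n_{\gamma}}, |\Omega| \}}{\min \{r^{\lambda },1\}}\,, \end{equation*} which is bounded uniformly in $r>0$, since for $r\le 1$ the right-hand side does not exceed $2\omega_{n-1}r^{n_{\gamma}-\lambda }\le 2\omega_{n-1}$, while for $r>1$ it does not exceed $|\Omega|$. 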
Since $| \cdot |_{{\mathcal L}^{\lambda }_{p, \gamma }(\Omega ) } $ is a semi-norm, it is customary to endow the Campanato space ${\mathcal L}^{\lambda }_{p, \gamma }(\Omega )$ with the norm defined by $$ \| f \|_{{\mathcal L}^{\lambda }_{p, \gamma }(\Omega ) } := \|f \|_{L^p(\Omega)}+ | f |_{{\mathcal L}^{\lambda }_{p, \gamma }(\Omega ) } \, , $$ for all $f\in {\mathcal L}^{\lambda }_{p, \gamma }(\Omega )$. Note that ${L^{\lambda }_{p, 1 }(\Omega ) }$, ${{\mathcal{L}}^{\lambda }_{p, 1 }(\Omega ) }$ are the classical Morrey and Campanato spaces respectively (recall that ${L^{\lambda }_{p, 1 }(\Omega ) }$ contains only the zero function for $\lambda >n$ and it coincides with $L^{\infty}(\Omega)$ for $\lambda =n$ by the Lebesgue differentiation theorem, see \cite{kufner} for more details concerning the limiting cases). \\ We consider elementary H\"older continuous domains $\Omega$ in ${\mathbb R}^n$ with exponent $\gamma\in ]0,1]$ of the form \begin{equation}\label{eldom}\Omega = \{x=(\overline x, x_n)\in {\mathbb R}^n:\ \bar x\in W,\ a< x_n<\varphi(\overline x) \}\,, \end{equation} where $-\infty \le a<\infty $, $W$ is a smooth or convex open set in ${\mathbb R}^{n-1}$, and $\varphi : W\to {\mathbb R}$ is a H\"older continuous function with exponent $\gamma$ satisfying the condition $\varphi (\bar x)> a+\delta$ for all $\bar x\in W$, for some $\delta >0$. In particular, there exists a positive constant $M$ such that \begin{equation}\label{lip1} |\varphi(\overline x)-\varphi(\overline y)| \le M|\overline x-\overline y|^{\gamma}\,, \,\,\forall\ \overline x, \overline y\in W\,. \end{equation} The best constant $M$ in inequality \eqref{lip1} is denoted by ${\rm Lip}_{\gamma} \varphi$. For $\gamma=1$ we obtain Lipschitz continuous domains. It is well known that Lipschitz continuous domains satisfy the usual cone condition. Similarly, H\"older continuous domains satisfy a generalisation of that condition, which we call the cusp condition. 
Namely, for any $x\in {\mathbb R}^n$ and $h>0$, we set \begin{equation} C_{\gamma}(x, h, M)=\{y\in {\mathbb R}^n:\ x_n-h< y_n<x_n-M|\bar y -\bar x|^{\gamma} \} \end{equation} and we call it a cusp with exponent $\gamma$, vertex $x$, height $h$ and opening $M$. Then we can prove the following simple lemma which, by the way, is essential in order to apply the general results of \cite{ba, be2, da}. \begin{lemma} \label{lemmacusp} Let $\gamma\in ]0,1]$ and $\Omega $ be an elementary H\"older continuous domain in ${\mathbb R}^n$ as in \eqref{eldom} with $W={\mathbb R}^{n-1}$ and $a=-\infty$. Then for all $x\in \bar \Omega$ and $h>0$, we have \begin{equation} \label{lemmacusp0}C_{\gamma}(x, h, {\rm Lip}_{\gamma}\varphi )\subset \Omega . \end{equation} Moreover, there exists $c>0$ depending only on $n,\gamma $ and ${\rm Lip }_{\gamma}\varphi $ such that \begin{equation} \label{lemmacusp1} |B_{\gamma}(x,r)\cap \Omega| \geq cr^{n_{\gamma}}, \end{equation} for all $x\in \bar \Omega$ and $r>0$. \end{lemma} \begin{proof} Given a cusp $C_{\gamma}(x, h, {\rm Lip}_{\gamma}\varphi )$ as in the statement, for any point $y\in C_{\gamma}(x, h, {\rm Lip}_{\gamma}\varphi ) $ we have $$ y_n<x_n- {\rm Lip}_{\gamma } \varphi\, |\bar x-\bar y |^{\gamma}\le \varphi (\bar x)- {\rm Lip}_{\gamma } \varphi\, |\bar x-\bar y |^{\gamma} \le \varphi (\bar y)\, , $$ where the second inequality uses the fact that $x_n\le \varphi (\bar x)$ (since $x\in \bar \Omega$), and the third inequality follows from the H\"{o}lder continuity of $\varphi$. Thus, $C_{\gamma}(x, h, {\rm Lip}_{\gamma}\varphi )\subset \Omega$. Inequality \eqref{lemmacusp1} easily follows from \eqref{lemmacusp0}, the inclusion $C_{\gamma}(x, r, 1)\subset B_{\gamma}(x,r)$ and the fact that $|C_{\gamma}(x, h, M)|=ch^{n_{\gamma}} $ where $c$ is a positive constant depending only on $n, \gamma ,M $. 
\end{proof} Given two function spaces $X(\Omega)$ and $Y(\Omega)$, we write $X(\Omega)\simeq Y(\Omega)$ to indicate that any function $f\in X(\Omega )$ equals almost everywhere in $\Omega $ a function $g\in Y(\Omega) $ and vice versa, and that the two norms $\| \cdot \|_{X(\Omega )} $, $\| \cdot \|_{Y(\Omega )} $ are equivalent. Note that, for the sake of simplicity, two functions $f,g$ as above will be denoted by the same symbol (being aware of this identification is particularly important when stating H\"{o}lder continuity estimates). The following theorem can be deduced from the general result \cite[Theorem~3.1]{da} combined with inequality \eqref{lemmacusp1}, which guarantees that $\Omega$ is of type (A) as required in \cite[Theorem~3.1]{da}. Here, $C^{0,\alpha }(\bar \Omega , \delta_{\gamma})$ denotes the space of H\"{o}lder continuous functions with exponent $\alpha$ with respect to the metric $\delta_{\gamma}$. \begin{theorem}[Campanato-Da Prato]\label{campa} Let $\Omega $ be a bounded elementary H\"{o}lder continuous domain with exponent $\gamma \in ]0,1]$, $p\in [1,\infty [$ and $\lambda >0$. The following statements hold: \begin{itemize} \item[(i)] If $\lambda < n_{\gamma}$ then $ {\mathcal{L}}^{\lambda }_{p, \gamma }(\Omega )\simeq {L}^{\lambda }_{p, \gamma }(\Omega )$. \item[(ii)] If $\lambda > n_{\gamma}$ then $ {\mathcal{L}}^{\lambda }_{p, \gamma }( \Omega )\simeq C^{0,\alpha }(\bar \Omega , \delta_{\gamma})$ where $$ \alpha =\frac{\lambda -n_{\gamma }}{p}\, ; $$ in particular, there exists $c>0$ such that for all $f\in {\mathcal{L}}^{\lambda }_{p, \gamma }( \Omega )$ and for all $x,y\in \Omega $ we have \begin{equation} \label{campa2} |f(x)-f(y)|\le c | f|_{ {\mathcal{L}}^{\lambda }_{p, \gamma }( \Omega )} (|\bar x-\bar y|^{\gamma}+|x_n-y_n|)^{\frac{\lambda -n_{\gamma } }{p} }\, . 
\end{equation} \end{itemize} \end{theorem} The following result is a direct application of a general result in \cite[Theorem~27.4.2, Remark~27.4.3]{be2} combined with inclusion \eqref{lemmacusp0}, which guarantees that $\Omega$ satisfies the $\gamma$-horn condition described in \cite[p. 153]{be1}. As customary, we denote by $[x,y]$ the segment connecting two points $x$ and $y$ in ${\mathbb R}^n$. \begin{theorem}[Sobolev-Morrey Embedding for elementary $C^{0,\gamma }$ domains] \label{besov} Let $\Omega $ be an elementary H\"{o}lder continuous domain with exponent $\gamma \in ]0,1]$. Let $l\in {\mathbb N}$, $\lambda >0$, $p\in [1, \infty [$ be such that $$ pl> n_{\gamma} -\lambda $$ and\footnote{If, vice versa, $\gamma (l+\frac{\lambda -n_{\gamma } }{p} )>1$ then one has Lipschitz continuity; in the case $\gamma (l+\frac{\lambda -n_{\gamma } }{p} )=1$ one gets H\"{o}lder continuity with any exponent less than $1$. } $\gamma (l+\frac{\lambda -n_{\gamma } }{p} )<1$. Then there exists $c>0$ such that for all $f\in W^{l,\lambda}_{p,\gamma}(\Omega ) $ and for all $x,y\in \Omega $ such that $[x,y]\subset \Omega $ we have \begin{equation}\label{besov1} |f(x)-f(y)|\le c \| f\|_{ W^{l,\lambda}_{p,\gamma}(\Omega ) } |x-y|^{\gamma \left(l+\frac{\lambda -n_{\gamma } }{p} \right)}\, . \end{equation} \end{theorem} Note that by formally setting $l=0$ in \eqref{besov1}, one essentially obtains estimate \eqref{campa2}. It is interesting to observe that the previous result (with minor modifications) was proved in \cite{ba} in the case of a parallelepiped. \begin{theorem}[Barozzi]\label{barozzi} Let $\Omega $ be a parallelepiped in ${\mathbb R}^n$ of the form $\Omega =\Pi_{i=1}^n]a_i,b_i[$ with $-\infty <a_i<b_i<\infty $ for all $i=1,\dots ,n$. Let $\gamma \in ]0, 1]$, $l\in {\mathbb N}$, $\lambda >0$, $p\in [1, \infty [$ be such that $$ pl> n_{\gamma} -\lambda \, $$ and such that $ l+\frac{\lambda -n_{\gamma } }{p}\le 1$. 
Then for any $\epsilon>0$ there exists $c>0$ such that for all $f\in W^{l,\lambda}_{p,\gamma}(\Omega ) $ and for all $x,y\in \Omega $ we have $$ |f(x)-f(y)|\le c \|f \|_{W^{l,\lambda}_{p,\gamma}(\Omega ) }(|\bar x-\bar y|^{\gamma}+|x_n-y_n|)^{l+\frac{\lambda -n_{\gamma } }{p} -\epsilon}\, . $$ \end{theorem} Moreover, the following theorem can be deduced from a more general result obtained by G. Da Prato in \cite[Theorem 4.1]{da} for $l=1$ in the case of a convex set. \begin{theorem}\label{daprato} Let $\Omega$ be a bounded convex domain in ${\mathbb R}^n$. Let $\gamma \in ]0, 1]$, and $\eta =\frac{n_{\gamma}}{n}+n-n_{\gamma} $. Let $\lambda >0$, $p\in [1, \infty [$ be such that $$ p \eta > n_{\gamma} -\lambda \, . $$ Then there exists $c>0$ such that for all $f\in W^{1,\lambda}_{p,\gamma}(\Omega ) $ and for all $x,y\in \Omega $ we have $$ |f(x)-f(y)|\le c\| f\|_{W^{1,\lambda}_{p,\gamma}(\Omega ) } (|\bar x-\bar y|^{\gamma}+|x_n-y_n|)^{ \eta +\frac{\lambda -n_{\gamma } }{p} }\, . $$ \end{theorem} \begin{remark} We note that the constant $\eta $ in Theorem~\ref{daprato} replaces the constant $l=1$ in the previous theorems. Since $\eta <1$ for $\gamma <1$, we have a deterioration in the estimates. This seems to be due to the fact that the result in \cite[Theorem 4.1]{da} is quite general and is stated in order to embrace more general types of metrics. \end{remark} We now explain where the exponent $\eta $ in Theorem~\ref{daprato} comes from. The main ingredient is a quantitative Poincar\'{e}-Wirtinger inequality for bounded convex domains $B$ in ${\mathbb R}^n$, namely the inequality \begin{equation}\label{poincare0} \| f-f_B \|_{L^p(B)}\le \left( \frac{\omega_n}{|B|} \right )^{1-\frac{1}{n}}d^n\| \nabla f \|_{L^p(B)}, \ \ \forall f\in W^{1}_p(B), \end{equation} where $\omega _n$ denotes the Lebesgue measure of the unit ball in ${\mathbb R}^n$, $d$ denotes the Euclidean diameter of $B$ and $f_B=\Xint-_Bf(x)dx$ (see, e.g., \cite[p.164]{gitr}). 
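Before proceeding, it is worth observing the size of the exponent $\eta$ appearing in Theorem~\ref{daprato}. Since $$ \eta =\frac{n_{\gamma}}{n}+n-n_{\gamma}= n-n_{\gamma}\,\frac{n-1}{n}\,, $$ we have $\eta >0$ if and only if $n_{\gamma}<\frac{n^2}{n-1}$, that is, if and only if $\gamma >\frac{(n-1)^2}{n^2-n+1}$. For example, for $n=2$ and $\gamma =1/2$ one has $n_{\gamma}=3$ and $\eta =1/2$, while for $\gamma =1$ one recovers $\eta =1$, in agreement with the exponent $l=1$ appearing in Theorem~\ref{besov}. 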
It follows from \eqref{poincare0} and H\"{o}lder's inequality that if $\Omega $ is a convex domain in ${\mathbb R}^n$ and $ f\in W^{1}_p(\Omega )$ then for all $x\in \Omega $ and $r>0$ we have \begin{eqnarray}\label{poincare1}\lefteqn{ \| f-f_{\Omega \cap B_{\gamma}(x,r)} \|_{L^1(\Omega \cap B_{\gamma}(x,r))}} \nonumber \\ & & \qquad\qquad\qquad\le \omega_n ^{1-\frac{1}{n}} |\Omega \cap B_{\gamma}(x,r) |^{\frac{1}{n}-\frac{1}{p}} d^n_r\| \nabla f \|_{L^p(\Omega \cap B_{\gamma}(x,r))}, \end{eqnarray} where $d_r$ denotes the Euclidean diameter of $\Omega \cap B_{\gamma}(x,r)$. If in addition we have that $\nabla f\in L^{\lambda }_{p,\gamma}(\Omega )$, we obtain \begin{equation*} \| f-f_{\Omega \cap B_{\gamma}(x,r)} \|_{L^1(\Omega \cap B_{\gamma}(x,r))}\le c |\Omega \cap B_{\gamma}(x,r) |^{\frac{1}{n}-\frac{1}{p}} d^n_r r^{\frac{\lambda}{p}} \end{equation*} hence \begin{equation}\label{poincare3} \| f-f_{\Omega \cap B_{\gamma}(x,r)} \|_{L^1(\Omega \cap B_{\gamma}(x,r))}\le c r^{n_{\gamma} ( \frac{1}{n}-\frac{1}{p})} r^{\frac{\lambda}{p}} d^n_r \end{equation} since $ |\Omega \cap B_{\gamma}(x,r) |$ is controlled from above by a multiple of $r^{n_{\gamma}}$. In the general framework of \cite{da}, it is then assumed that $d_r\le c r^{\beta }$ for some constant $\beta \geq 1$, which in our case is $\beta =1$ and cannot be better. Keeping track of $\beta$, we obtain from \eqref{poincare3} that \begin{equation} \| f-f_{\Omega \cap B_{\gamma}(x,r)} \|_{L^1(\Omega \cap B_{\gamma}(x,r))}\le c r^{n_{\gamma} ( \frac{1}{n}-\frac{1}{p})} r^{\frac{\lambda}{p}} r^{n\beta } \end{equation} which means that $f\in {\mathcal{L}}^{\theta}_{1, \gamma }(\Omega ) $ with $$ \theta = \frac{n_{\gamma}}{n}+n\beta + \frac{ \lambda -n_{\gamma }}{p}\, . 
$$ If $\theta >n_{\gamma }$, that is $ (n_{\gamma}/n+n\beta -n_{\gamma} )p>n_{\gamma}-\lambda $, we deduce from Theorem~\ref{campa}~(ii) that $f\in C^{0,\alpha }(\bar \Omega , \delta_{\gamma} )$ with $$ \alpha = \frac{n_{\gamma}}{n}+n\beta +\frac{ \lambda -n_{\gamma }}{p} -n_{\gamma}, $$ which for $\beta =1 $ yields $$ \alpha = \eta +\frac{\lambda - n_{\gamma }}{p}. $$ This explains the appearance of $\eta $ in Theorem~\ref{daprato}. \\ We now reformulate the statement of \cite[Theorem 4.1]{da} in order to relax a bit the convexity assumption on $\Omega$. Namely, assume that $\Omega$ is a bounded domain in ${\mathbb R}^n$ such that condition \eqref{lemmacusp1} is satisfied and such that the following $p$-Poincar\'{e} inequality holds \begin{equation}\label{poin} \Xint-_{ \Omega \cap B_{\gamma}(x,r)} | f- f_{\Omega \cap B_{\gamma}(x,r)} |dx\le c_pr^{ \tilde \eta } \left(\Xint-_{ \Omega \cap B_{\gamma}(x,\tau r) }|\nabla f|^pdx\right)^{\frac{1}{p}}, \end{equation} for all $ f\in W^1_p(\Omega)$ and $r>0$, where $\tau \geq 1$ and $ \tilde\eta >0$ are fixed constants. In particular, if $f\in W^{1,\lambda}_{p,\gamma}(\Omega ) $ we have \begin{eqnarray*}\lefteqn{ \| f-f_{\Omega \cap B_{\gamma}(x,r)} \|_{L^1(\Omega \cap B_{\gamma}(x,r))}} \nonumber \\ & & \qquad\qquad\qquad\le c_pr^{ \tilde \eta } |\Omega \cap B_{\gamma}(x,\tau r) |^{1-\frac{1}{p}} \| \nabla f \|_{L^p(\Omega \cap B_{\gamma}(x,\tau r))} \nonumber \\ & & \qquad\qquad\qquad\le c r^{ \tilde \eta } (\tau r)^{n_{\gamma}(1-\frac{1}{p})} \| \nabla f \|_{L^p(\Omega \cap B_{\gamma}(x,\tau r))} \le c r^{n_{\gamma}(1-\frac{1}{p}) +\frac{\lambda }{p}+ \tilde \eta }, \end{eqnarray*} for some $c>0$ independent of $r$. This implies that $f\in {\mathcal{L}}^{\theta}_{1, \gamma }(\Omega ) $ with $$ \theta = n_{\gamma}+ \tilde \eta +\frac{ \lambda -n_{\gamma }}{p}\, . 
$$ If $\theta >n_{\gamma }$, that is $ p \tilde \eta >n_{\gamma}-\lambda $, by the original result of \cite[Theorem~3.1]{da} we deduce that $f\in C^{0,\alpha }(\bar \Omega , \delta_{\gamma} )$ with $$ \alpha = \tilde \eta +\frac{ \lambda -n_{\gamma }}{p}. $$ Note that for applying \cite[Theorem~3.1]{da} we only need condition \eqref{lemmacusp1}. In conclusion, the following variant of Theorem~\ref{daprato} holds. \begin{theorem} \label{dapratovariant} Let $\Omega$ be a bounded domain in ${\mathbb R}^n$ such that condition \eqref{lemmacusp1} holds, and let $p\in [1, \infty [$. Assume that the $p$-Poincar\'{e} inequality \eqref{poin} holds. Let $\gamma \in ]0, 1]$ and $\lambda >0$ be such that $$ p \tilde \eta > n_{\gamma} -\lambda \, . $$ Then there exists $c>0$ such that for all $f\in W^{1,\lambda}_{p,\gamma}(\Omega ) $ and for all $x,y\in \Omega $ we have $$ |f(x)-f(y)|\le c\| f\|_{W^{1,\lambda}_{p,\gamma}(\Omega ) } (|\bar x-\bar y|^{\gamma}+|x_n-y_n|)^{ \tilde \eta +\frac{\lambda -n_{\gamma } }{p} }\, . $$ \end{theorem} We observe that inequality \eqref{poincare1} implies the validity of inequality \eqref{poin} with $\tilde \eta = \frac{n_{\gamma}}{n}+n-n_{\gamma} $, which is the constant $\eta$ used in Theorem~\ref{daprato}. We also note that assuming the validity of $p$-Poincar\'{e} inequalities of type \eqref{poin} is nowadays standard in Analysis on Metric Spaces. For instance, we refer to the celebrated paper \cite{koskela} where general $p$-Poincar\'{e} inequalities of the form \begin{equation}\label{poinko} \Xint-_{ B } | f- f_{B} |d\mu \le c_p r\left(\Xint-_{ \tau B} g^pd\mu \right)^{\frac{1}{p}} \end{equation} are considered. Here $g$ is the upper gradient of $f$, $B$ is an arbitrary ball of radius $r$ in a metric space $X$, $\tau B$ is the concentric ball of radius $\tau r$ for a fixed $\tau \ge 1$ and $\mu$ is a suitable measure in $X$. 
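For the reader's convenience, we also sketch why inequality \eqref{poincare1} yields \eqref{poin} with $\tilde \eta =\eta$, under the additional assumption that condition \eqref{lemmacusp1} holds. Dividing \eqref{poincare1} by $|\Omega \cap B_{\gamma}(x,r)|$ gives \begin{equation*} \Xint-_{\Omega \cap B_{\gamma}(x,r)} | f- f_{\Omega \cap B_{\gamma}(x,r)} |\,dy \le \omega_n^{1-\frac{1}{n}}\, |\Omega \cap B_{\gamma}(x,r)|^{\frac{1}{n}-1}\, d_r^n \left(\Xint-_{\Omega \cap B_{\gamma}(x,r)}|\nabla f|^p\,dy\right)^{\frac{1}{p}}\,, \end{equation*} and, since the exponent $\frac{1}{n}-1$ is negative, the lower bound $|\Omega \cap B_{\gamma}(x,r)|\ge cr^{n_{\gamma}}$ in \eqref{lemmacusp1}, together with $d_r\le cr$, allows one to estimate the right-hand side by $c\, r^{\frac{n_{\gamma}}{n}+n-n_{\gamma}}\bigl(\Xint-_{\Omega \cap B_{\gamma}(x,r)}|\nabla f|^p\,dy\bigr)^{1/p}= c\, r^{\eta }\bigl(\Xint-_{\Omega \cap B_{\gamma}(x,r)}|\nabla f|^p\,dy\bigr)^{1/p}$, which is exactly \eqref{poin} with $\tau =1$ and $\tilde \eta =\eta$. 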
Sufficient conditions ensuring the validity of \eqref{poinko} are known in the literature and are discussed, e.g., in \cite[\S~10]{koskela}. See also \cite{durand} for a more recent work on this subject. We note that the study of inequalities of the type \eqref{poin} in domains with cusps or domains of class $C^{0,\gamma}$ is very delicate and in general one does not expect their validity, in particular for outer cusps. Conditions for the validity of global $(p,p)$-Poincar\'{e} inequalities (which means that the power $p$ appears also in the left-hand side of \eqref{poinko}) in domains with inner cusps and, more generally, in John domains or $L^p$-averaging domains are given in \cite{staple} where, besides an interesting counterexample, a class of domains admitting moderately sharp outer `spires' is also analyzed. \section{ Extension of Sobolev-Morrey spaces for $C^{0,\gamma}$ domains } \subsection{The case of elementary domains of class $C^{0,\gamma}$} Let $\Omega$ be an elementary H\"older continuous domain in ${\mathbb R}^n$ with exponent $\gamma\in ]0,1]$ as in \eqref{eldom}, with $W={\mathbb R}^{n-1}$ and $a=-\infty$. Following \cite{burpaper1, b}, we set $G={\mathbb R}^n\setminus \overline \Omega$ and $$G_k=\{x\in G: 2^{-k-1}<\rho_n(x)\le 2^{-k}\}$$ for all $k\in{\mathbb Z}$, where $\rho_n(x)=x_n-\varphi(\overline x)$ is the signed distance from $x\in {\mathbb R}^n$ to $\partial G$ in the $x_n$ direction, and we consider a partition of unity associated with the covering $\{G_k \}_{k\in {\mathbb Z}}$ of $G$ satisfying a number of properties. 
Namely, it is proved in \cite{burpaper1} that for every $k\in {\mathbb Z}$ there exists $\psi_k\in C^{\infty}({\mathbb R}^n)$ such that \begin{itemize} \item[(i)] $\displaystyle{ \sum_{k=-\infty}^\infty\psi_k=\begin{cases}1, \,\, {\rm if}\,\, x\in G,\vspace{2mm}\\ 0, \,\, {\rm if }\,\, x\notin G;\end{cases}}$ \vspace{2mm} \item[(ii)] $ G=\displaystyle \cup_{k=-\infty}^\infty {\rm supp} \psi_k $ and the covering $\{{\rm supp} \psi_k\}_{k\in {\mathbb Z}}$ has multiplicity equal to $2$;\vspace{2mm} \item[(iii)] $G_k\subset {\rm supp} \psi_k\subset G_{k-1}\cup G_k\cup G_{k+1}$, for all $k\in {\mathbb Z}$;\vspace{2mm} \item[(iv)] $|D^\alpha\psi_k(x)|\le c(\alpha) 2^{k\left(\frac{|\bar \alpha|}{\gamma }+\alpha_n \right) }$, for all $x\in {\mathbb R}^n,k\in {\mathbb Z}, \alpha\in {\mathbb N}^n_0$. \end{itemize} Note the appearance of $\gamma$ in the exponent in item (iv) above. Burenkov's Extension Operator was defined in \cite{burpaper1} as follows. Let $l\in{\mathbb N}$ and $1\le p\le \infty$. For every $f\in W^{l}_p(\Omega)$, we set \begin{equation} \label{burext}(Tf)(x)=\begin{cases}f(x), \,\, {\rm if}\,\,x \in \Omega,\vspace{2mm}\\ \displaystyle\sum_{k=-\infty}^\infty \psi_k(x)f_k(x), \,\, {\rm if}\,\,x \in G, \end{cases} \end{equation} where \begin{multline*} f_k(x)=\int_{{\mathbb R}^n}f(\overline x -2^{-\frac{k}{\gamma}}\overline z,x_n-A2^{-k}z_n)\omega(z)dz =\\ =A^{-1}2^{\frac{k}{\gamma}(n-1)+k}\int_{{\mathbb R}^n}\omega(2^{\frac{k}{\gamma}}(\overline x-\overline y), A^{-1}2^{k}(x_n-y_n))f(y)dy\, , \end{multline*} $A$ is a sufficiently large constant depending only on $n$ and $M$ in \eqref{lip1} (in \cite{burpaper1} it is chosen, for example, as $A=200(1+Mn)$) and $\omega\in C^\infty_c({\mathbb R}^n)$ is a mollification kernel defined by $$\omega (x)=\omega_1(x_1)\cdots \omega_n(x_n),\ \omega_i\in C^{\infty}_c(]1/2,1[),\ \int_{-\infty}^{+\infty}\omega_i (x_i)dx_i=1,\ \int_{-\infty }^{+\infty }\omega_i(x_i)x_i^kdx_i=0 $$ for all $i=1,\dots ,n$, $k=1,\dots ,l$. 
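The second expression for $f_k$ follows from the first one by the change of variables $\bar y=\bar x -2^{-\frac{k}{\gamma}}\bar z$, $y_n=x_n-A2^{-k}z_n$, for which \begin{equation*} dz= A^{-1}2^{\frac{k}{\gamma}(n-1)+k}\, dy\,. \end{equation*} Moreover, since ${\rm supp}\,\omega \subset [1/2,1]^n$, the value $f_k(x)$ depends only on the values of $f$ at points $y$ with $\bar x-\bar y\in 2^{-\frac{k}{\gamma}}[1/2,1]^{n-1}$ and $x_n-y_n\in A2^{-k}[1/2,1]$: such points lie strictly below $x$ at distance comparable to $A2^{-k}$ in the $x_n$ direction, and this is precisely the reason why, for $A$ sufficiently large, they belong to $\Omega$ whenever $x\in {\rm supp}\,\psi_k$, so that $f_k$ is well defined on ${\rm supp}\,\psi_k$. 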
Among other results (in particular, concerning anisotropic Sobolev spaces), it is proved in \cite{burpaper1} that the operator $T$ is a linear continuous operator from $W^{l}_p(\Omega)$ to $W^{[\gamma l]}_p({\mathbb R}^n)$ where $[\gamma l]$ is the integer part of $\gamma l$. \\ The following theorem is a generalisation of the extension theorem proved in \cite{fala} in the case of Lipschitz domains, that is for $\gamma =1$. Considering a number of technical issues appearing in the case $\gamma <1$, we assume for simplicity that the function $\phi$ defining the Morrey norm satisfies the condition $\phi (r)=1$ for all $r>1$. \begin{theorem}\label{sobolevmorrey} Let $\Omega$ be an elementary H\"older continuous domain in ${\mathbb R}^n$ with exponent $\gamma\in ]0,1]$, with $W={\mathbb R}^{n-1}$ and $a=-\infty$. Let $l\in {\mathbb N}$, $p\in [1, \infty [$, and $\phi : ]0, \infty [ \to ]0,\infty [$ satisfy the condition $\phi (r)=1$ for all $r>1$. Then the operator $T$ maps $W^{l,\phi }_{p , 1 }(\Omega )$ continuously to $W^{[\gamma l],\phi_{\gamma} }_{p , 1 }({\mathbb R}^n )$, where $\phi_{\gamma}$ is defined by $\phi_{\gamma}(r)=\phi (r^{\gamma})$ for all $r> 0$. In particular, $T$ maps the space $W^{l,\lambda }_{p,1}(\Omega )$ to the space $W^{[\gamma l],\gamma \lambda }_{p,1}(\mathbb R^n )$, for any $ \lambda \geq 0$. \end{theorem} The proof of Theorem~\ref{sobolevmorrey} can be carried out by adapting the corresponding proof of \cite{fala} in a suitable way. Since the adaptation is quite technical and touches a number of delicate points, we indicate here the main steps, starting from the first, but crucial, lemmas which we combine in the following statement. Here $\tilde G_k=G_{k-1}\cup G_k\cup G_{k+1} =\{x\in G: 2^{-k-2}<\rho_n(x)\le 2^{-k+1}\}$ for all $k\in {\mathbb Z}$ and ${\rm diam}\, C$ denotes the Euclidean diameter of a set $C$. \begin{lemma}\label{controllo_geo} Assume that $B_{ 1 }(x,r)\cap G\ne\emptyset$ for some $x\in {\mathbb R}^n$ and $r>0$. 
Let $h\in {\mathbb Z}$ be the minimal integer such that $B_{ 1 }(x,r) \cap G_h\ne\emptyset$. Let $k\in {\mathbb Z}$ be such that $k\ge h+3$ and $B_{ 1 }(x,r) \cap \tilde G_k \ne\emptyset$. Then \begin{equation}\label{due} |2^{-(h+3)}-2^{-k}|\le c (r+r^{\gamma})\,, \end{equation} where $c$ depends only on $\gamma$ and ${\rm Lip}_{\gamma}\varphi$. Moreover, given $E>0$ there exists $S>0$ depending only on $\gamma$, ${\rm Lip}_{\gamma}\varphi$, $ E$, and a lower bound for $h$ such that for every $\eta\in {\mathbb R}^n$, with $|\eta|<E$, \begin{equation}\label{inc0} {\rm diam }\left( \bigcup_{k=h+3}^\infty \left( B_{ 1 }(x,r) \cap \tilde G_k- (2^{-\frac{k}{\gamma}} \bar \eta, 2^{-k} \eta_n ) \right) \right) \le S(r+r^{\gamma})\,. \end{equation} \end{lemma} \begin{proof} By our assumptions we deduce that $\{y\in B_{ 1 }(x,r):\, \rho_n(y)=2^{-h-2} \} , \{y\in B_{ 1 }(x,r):\, \rho_n(y)=2^{-k+1} \}\ne \emptyset$, hence there exist $y,w\in B_{ 1 }(x,r)$ with $y_n-\varphi (\bar y)=2^{-h-2}$ and $w_n-\varphi(\bar w)=2^{-k+1} $. Since $|y_n-w_n|, |\bar y-\bar w|<2r $, by the H\"{o}lder continuity of $\varphi$ we get \begin{multline*} |2^{-(h+3)}-2^{-k}|=\frac{1}{2}|2^{-h-2}-2^{-k+1}|=\frac{1}{2}|y_n-\varphi (\bar y)-w_n+\varphi (\bar w)|\\ \le \frac{1}{2}(|y_n-w_n|+{\rm Lip}_{\gamma}\varphi |\bar y-\bar w|^{\gamma} )\le \frac{1}{2}\left( 2r +{\rm Lip}_{\gamma}\varphi (2r)^{\gamma} \right) \end{multline*} and \eqref{due} follows. We now prove \eqref{inc0}. Let $k\ge h+3$ be such that $B_{ 1 }(x,r) \cap \tilde G_k\ne\emptyset$. Let $a\in B_{ 1 }(x,r) \cap \tilde G_{h+3}$ and $b\in B_{ 1 }(x,r) \cap \tilde G_{k}$. 
By \eqref{due}, for all $ \eta\in {\mathbb R}^n$ with $|\eta| <E$, we have \begin{multline*} |b_n- 2^{-k}\eta_n- (a_n-2^{-(h+3)}\eta_n)|\le |b_n-a_n|+ |2^{-k}-2^{-(h+3)}||\eta_n|\\ \le 2r+ cE(r+r^{\gamma}) \, \end{multline*} and \begin{multline*} |\bar b- 2^{-\frac{ k}{\gamma} }\bar \eta- (\bar a-2^{- \frac{ h+3}{\gamma}}\bar \eta)|\le |\bar b-\bar a|+ |2^{-\frac{k}{\gamma}}-2^{-\frac{h+3}{\gamma}}||\bar \eta|\\ \le c\max\{1, 2^{-h(1-\gamma)/\gamma }\} ( r +r^{\gamma}) \end{multline*} which proves \eqref{inc0}. \end{proof} Another crucial step in the proof is \cite[Lemma~2.4, (ii)]{fala}, which has to be modified as follows. As in \cite[Chap.~6]{b}, for every $k\in {\mathbb Z}$ we set \begin{equation*}\tilde \Omega_k=\{x\in \Omega: 2^{-k-2}<|\rho_n(x) |\le b2^{-k+1}\}\,, \end{equation*} where $b=10A$. \begin{lemma}\label{mainlem} Assume that $B_{ 1 }(x,r)\cap G\ne\emptyset$ for some $x\in {\mathbb R}^n$ and $r>0$. Let $f\in W^{l}_p(\Omega)$ and ${\mathcal{U}}\subset {\mathbb R}^n$ be a fixed measurable set with $d:=\sup\{\rho_n(y):\ y\in B_1(x,r)\cap {\mathcal{U}} \}<\infty $. Then there exist $c>0$ and $m\in {\mathbb N}$ depending only on $n$, $l$, $p$, $M$, $\omega$, $ d$, and for every $\alpha \in {\mathbb N}^n_0$ with $|\alpha |\le l$ there exists a function $g_\alpha$ independent of $r, {\mathcal {U}}$, such that for every $z\in {\mathbb R}^n$ with $|z| \le c$ there exist $m $ balls $B_{ 1}(x^{(i)}_{z} , r^{\gamma})$, $i=1,\dots,m $, such that \begin{multline}\label{mainlem2} \| D^{\alpha}f_k-g_{\alpha}\|^p_{L^p(B_{1 }(x,r)\cap {\mathcal{U}}\cap \tilde G_k)}\\ \le c2^{pk( \frac{ |\bar \alpha |}{\gamma}+\alpha_n-l)} \int_{|z|\le c } \sum_{|\beta |=l} \| D^{\beta }f\|^p_{L^{p}(\cup_{i=1}^{m} B_{ 1}(x^{(i)}_z, r^{\gamma} )\cap \tilde \Omega_k) } dz, \end{multline} for all $k\in {\mathbb N}$. \end{lemma} The proof of the previous lemma follows the lines of \cite[Lemma~2.4, (ii)]{fala}. 
We omit the lengthy details, but we explain how this lemma is used and how the modified exponent $pk( \frac{ |\bar \alpha |}{\gamma}+\alpha_n-l)$ affects the final result. Namely, in order to prove Theorem~\ref{sobolevmorrey}, one has to estimate the derivatives $D^{\alpha}Tf$ of the extension $Tf$ of a function $f$. By applying the Leibniz rule, one ends up estimating $D^{\alpha -\beta}\psi_k D^{\beta }f_k $ for all $\beta \le \alpha$. The difficult part of the work concerns the case $\beta <\alpha $ and $k>0$. One observes that $\sum_{k\in {\mathbb Z}}D^{\alpha -\beta}\psi_k D^{\beta }f_k=\sum_{k\in {\mathbb Z}}D^{\alpha -\beta}\psi_k (D^{\beta }f_k -g_{\beta})$ for $\beta <\alpha$, since $g_{\beta}$ does not depend on $k$ and $\sum_{k\in {\mathbb Z}}D^{\alpha -\beta}\psi_k =D^{\alpha -\beta}\bigl(\sum_{k\in {\mathbb Z}}\psi_k \bigr)=0$ on $G$. Thus, one has to estimate $D^{\beta }f_k -g_{\beta}$. By combining the previous lemma with property (iv) of the partition of unity, we have \begin{multline}\label{mainlem4} \| D^{\alpha -\beta}\psi_k( D^{\beta }f_k-g_{\beta} ) \|^p_{L^p(B_1(x,r)\cap {\mathcal{U}}\cap \tilde G_k )}\\ \le c 2^{ p k(\frac{|\bar \alpha -\bar \beta|}{\gamma} +\alpha_n-\beta_n )} \| D^{\beta }f_k-g_{\beta} \|^p_{L^p(B_1(x,r)\cap {\mathcal{U} }\cap \tilde G_k )}\\ \le c 2^{ p k(\frac{|\bar \alpha -\bar \beta|}{\gamma} +\alpha_n-\beta_n )} 2^{pk( \frac{ |\bar \beta |}{\gamma}+\beta_n-l)} \int_{|z|\le c} \sum_{|\beta|=l}\|D^{\beta} f\|^p_{L^{p}(\cup_{i=1}^{m } B_{ 1}(x^{(i)}_z,r^{\gamma})\cap \tilde \Omega_k ) } dz\, . \end{multline} We note that the exponent of the power of $2$ in the right-hand side of \eqref{mainlem4} equals $$ p k\biggl(\frac{|\bar \alpha -\bar \beta|}{\gamma} +\alpha_n-\beta_n \biggr) +pk\left( \frac{ |\bar \beta |}{\gamma}+\beta _n-l\right) = pk \left( \frac{|\bar \alpha |}{\gamma} +\alpha_n -l \right) $$ hence one can control the right-hand side of \eqref{mainlem4}, provided that exponent is non-positive, that is \begin{equation}\label{indexbound} |\bar \alpha | +\gamma \alpha_n\le \gamma l\, . 
\end{equation} Inequality \eqref{indexbound} explains why one gets $[\gamma l]$ as index of smoothness in the target Sobolev space $W^{[\gamma l],\phi }_{\lambda , \gamma }({\mathbb R}^n )$ in Theorem~\ref{sobolevmorrey}. Moreover, in estimate \eqref{mainlem2} we have the quantity $$ \| D^{\beta }f\|^p_{L^{p}(\cup_{i=1}^{m} B_{ 1}(x^{(i)}_z, r^{\gamma} )\cap \tilde \Omega_k) } $$ and, since the balls have radius $r^{\gamma}$, one eventually controls that quantity via $$ \phi (r^{\gamma} ) \| D^{\beta }f \|^p_{L^{\phi}_{p,1}(\Omega ) } $$ which explains the appearance of the new weight $\phi_{\lambda}$ in Theorem~\ref{sobolevmorrey}. For further details, we refer to the proof of \cite[Theorem 2.5]{fala}.\\ \vspace{12pt} \subsection{ The case of general domains of class $C^{0,\gamma}$} We recall the definition of open sets with $C^{0,\gamma}$ boundary. Here and in the sequel, given a set $C$ in ${\mathbb R}^n$ and $d>0$ we denote by $C_d$ the set $\{x\in C:{\rm dist} (x,\partial C)>d\}$. \begin{definition} Let $\gamma\in ]0,1]$, $d >0$, $M\geq 0$, $s\in {\mathbb N} \cup \{\infty \}$. Let $\{V_{j}\}_{j=1}^s$ be a family of cuboids, i.e. for every $j=\overline{1,s}$ there exists an isometry $\lambda_j$ in ${\mathbb R}^n$ such that $$ \lambda_j( V_j )= \Pi_{i=1}^n]a_{i,j}, b_{i,j}[ $$ where $0<a_{i,j}<a_{i,j}+d<b_{i,j}$. Assume that $D:=\sup_{j=\overline{1,s}}{\rm diam }V_j< \infty $, $(V_j)_{d}\ne \emptyset $ for all $j=\overline{1,s}$, and that the multiplicity of the covering $\{V_{j}\}_{j=1}^s$ is finite. We then say that ${\mathcal{A} }=(s,d, \{V_{j}\}_{j=1}^s, \{\lambda_{j}\}_{j=1}^s )$ is an atlas. Let $M\geq 0$. We say that an open set $\Omega$ in ${\mathbb R}^n$ is of class $C^{0,\gamma}_M({\mathcal{A}})$ if the following conditions are satisfied:\\ (i) For every $j=\overline{1,s}$, we have $\Omega \cap (V_j)_d\ne \emptyset$. 
\\ (ii) $\Omega \subset \cup_{j=1}^{s}(V_j)_d$.\\ (iii) For every $j=\overline{1,s}$, the set ${\mathcal{H}}_j:=\lambda_j(\Omega \cap V_j)$ satisfies the following condition: either ${\mathcal{H}}_j= \Pi_{i=1}^n]a_{i,j}, b_{i,j}[ $ (in which case $V_j\subset \Omega $), or ${\mathcal{H}}_j$ is a bounded elementary H\"{o}lder continuous domain of the form \begin{equation*}\label{ele1bis} {\mathcal{H}}_j=\left\{x\in {\mathbb R}^n:\ \bar x\in W_j,\ a_{n,j}<x_n<\varphi_j (\bar x) \right\} \end{equation*} where $\varphi_j $ is a real-valued H\"{o}lder continuous function with exponent $\gamma$, defined on $W_j= \Pi_{i=1}^{n-1}]a_{i,j}, b_{i,j}[$ such that $$ a_{n,j}+d<\varphi_j\ \ {\rm and}\ \ {\rm Lip }_{\gamma}\varphi_j \le M $$ (in which case $V_j\cap \partial \Omega \ne \emptyset$). Finally, we say that an open set $\Omega$ in ${\mathbb R}^n$ is of class $C^{0,\gamma}$ if it is of class $C^{0,\gamma}_M({\mathcal{A}})$ for some $M$ and ${\mathcal A}$. \end{definition} The definition of Burenkov's Extension Operator for a general domain of class $C^{0,\gamma}$ is given by pasting together the extension operators defined on each chart of the atlas as follows. Following \cite[p.265]{b}, given an open set $\Omega $ of class $C^{0,\gamma}_M({\mathcal{A}})$, we consider a family of functions $\{\psi_j\}_{j=1}^s$ such that $\psi_j\in C^{\infty }_c({\mathbb R}^n)$, $ {\rm supp} \psi_j\subset (V_j)_{d} $, $0\le \psi_j\le 1$, $\sum_{j=1}^s\psi_j^2(x)=1$ for all $x\in \Omega$ and such that $\| D^{\alpha }\psi_j\|_{L^{\infty }({\mathbb R}^n)}\le M$ for all $j=\overline{1,s}$ and $\alpha\in {\mathbb N}_0^n$ with $|\alpha |\le l$, where $M$ depends only on $n,l,d$. Burenkov's Extension Operator $T$ is defined from $W^{l}_p(\Omega )$ to $W^{[\gamma l]}_p({\mathbb R}^n)$ by \begin{equation} \label{burextgen} Tf= \sum_{j=1}^s\psi_j T_j(f\psi_j), \end{equation} for all $f\in W^{l,p}(\Omega )$, where $T_j$ are the extension operators defined on each domain $\Omega \cap V_{j}$. 
See \cite{fala} for details. Then we have the following. Recall that $\phi_{\gamma}$ is defined by $\phi_{\gamma}(r)=\phi (r^{\gamma})$ for all $r\geq 0$. \begin{theorem}\label{sobolevmorreybis} Let $\Omega$ be an open set in ${\mathbb R}^n$ of class $C^{0,\gamma}$ with $\gamma\in ]0,1]$. Let $l\in {\mathbb N}$, $p\in [1, \infty [$, and $\phi : ]0, \infty [ \to ]0,\infty [$ satisfying the condition $\phi (r)=1$ for all $r>1$. Then the operator $T$ maps $W^{l,\phi }_{p , 1 }(\Omega )$ continuously to $W^{[\gamma l],\phi_{\gamma} }_{p , 1 }({\mathbb R}^n )$. In particular, $T$ maps the space $W^{l,\lambda }_{p,1}(\Omega )$ to the space $W^{[\gamma l],\gamma \lambda }_{p,1}(\mathbb R^n )$, for any $\lambda \geq 0$. \end{theorem} The proof of Theorem~\ref{sobolevmorreybis} can be carried out by pasting together the local extension operators provided by Theorem~\ref{sobolevmorrey} in each cuboid of the covering of $\Omega$. This argument is described in detail in the proof of \cite[Theorem~3.3]{fala}. Finally, we can deduce the following \begin{corollary}\label{maincor} Let $\Omega$ be an open set in ${\mathbb R}^n$ of class $C^{0,\gamma}$ with $\gamma\in ]0,1]$. Let $l\in {\mathbb N}$, $p\in [1, \infty [$, and $\lambda>0$. If $$ p [\gamma l]> n -\gamma \lambda \, $$ and $ [\gamma l]+\frac{\gamma \lambda -n }{p} <1$ then there exists $c>0$ such that for all $f\in W^{l,\lambda}_{p,1}(\Omega ) $ and for all $x,y\in \Omega $ we have \begin{equation}\label{besov1tris} |f(x)-f(y)|\le c \|f\|_{W^{l,\lambda}_{p,1}(\Omega ) } |x-y|^{ [\gamma l]+\frac{\gamma \lambda -n}{p} }\, . \end{equation} \end{corollary} The proof of the previous corollary follows immediately from Theorem~\ref{sobolevmorreybis} and estimate \eqref{besov1} applied with $\gamma =1$ and $l$ replaced by $ [\gamma l]$. 
Indeed, by Theorem~\ref{sobolevmorreybis}, any function $f\in W^{l,\lambda}_{p,1}(\Omega ) $ is extended to the whole of $\mathbb{R}^n$ as a function in $W^{[\gamma l],\gamma \lambda }_{p , 1 }({\mathbb R}^n )$, to which the classical Sobolev-Morrey Theorem applies. \section*{ Acknowledgments} The authors are very thankful to Prof. Victor I. Burenkov for useful discussions and for suggesting the study of the action of his extension operator on Sobolev-Morrey spaces defined on domains with H\"{o}lder continuous boundaries. The authors are members of the Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM).
\section{Introduction}\label{sec:intro} General relativity (GR), our currently best theory of gravity and of spacetime, permits time travel into one's own past in the sense that it contains models of spacetime with \textit{closed timelike curves}, i.e., worldlines potentially traced out by matter in spacetime, which intersect themselves. If a particle follows such a closed worldline, it returns not only to its earlier position in space---which is common enough---, but in space\textit{time}, i.e., also to its earlier position in time. An early example of such a spacetime is what has become known as \textit{G\"odel spacetime} \citep{god49}, but in fact there are innumerably many such solutions in GR.\footnote{For a recent review, see \citet{smewut11}.} Should we thus conclude that time travel is, in fact, physically possible, i.e., in accord with the laws of nature? We should not, as there are good reasons to think that, despite its phenomenal empirical success, GR is not the last word on gravity and on the fundamental structure of what plays the role of spacetime: GR assumes that matter has essentially classical properties, e.g., by having a determinate spatiotemporal distribution. But of course we have learned from quantum physics that matter degrees of freedom behave rather differently. Thus, at a minimum, GR ought to be replaced by a theory which can accommodate the quantum nature of matter. It is for this simple but conclusive reason that we need a quantum theory of gravity.\footnote{And much less for a whole list of other reasons routinely given in the literature, and critically discussed by \citet{hugcal01,wut05,mat06}.} No such theory yet exists in its fully articulated and empirically confirmed form, but candidate theories are string theory, loop quantum gravity, causal set theory, and many more. Thus arises the question of whether these candidates for a more fundamental theory of gravity permit time travel in the same or a similar sense as does GR. 
In fact, there are two ways in which a quantum theory of gravity might do so. First, it may permit time travel by admitting models which contain the (analogue of) closed timelike curves. In this case, time travel would accord with the laws of nature stipulated by that theory. This would straightforwardly licence time travel's physical possibility. Second, although that theory itself may prohibit time travel in this same sense, it could allow for the emergent relativistic spacetime---which well approximates the fundamental structure at some scale---to contain closed timelike curves. Although the fundamental theory would then remain inhospitable to time travel itself, it would tolerate the possibility of time travel at some other, less fundamental, scale. It is this possibility in particular that I wish to explore in this article. After setting things up in Section \ref{sec:classical}, I will introduce four theories of quantum gravity in Section \ref{sec:qg}: semi-classical quantum gravity, causal set theory, loop quantum gravity, and string theory, and discuss the possibility of time travel directly in those theories. In Section \ref{sec:emergent}, I will turn to the second possibility, viz., that these theories themselves disallow time travel, but fail to prevent it at the emergent level. Conclusions follow in Section \ref{sec:conc}. \section{Global hyperbolicity and energy conditions}\label{sec:classical} Does a theory succeeding GR include or exclude closed timelike curves and similar causal pathologies inside the bounds of what it deems physically possible? Since, despite valiant efforts, no quantum theory of gravity has been fully articulated, let alone empirically confirmed, our discussion must remain preliminary and speculative. Still, from considering candidate theories and their presumed verdict on the question, we hope to glean intimations of an answer and at least start to survey the dialectical landscape of possibilities. 
There are at least three ways in which quantum gravity may prejudge the case for or against the possibility of time travel. First, it may rule it out {\em by fiat} by imposing global hyperbolicity or a kindred mathematical condition. Such an imposition may be metaphysically motivated to rule out causal pathologies, or it may be occasioned by the pragmatic desire to apply a particular mathematical apparatus, which requires the condition. This {\em a priori} restriction to causally benign structures may, of course, eventually be justified {\em a posteriori} by the empirical success of the theory. Second, it may be the case that although no such condition is demanded at the outset, it can be derived from the resources of the theory itself. In particular, time travel may be ruled out as a consequence of well-justified assumptions concerning what is physically reasonable or even possible. In a situation like this, it might appear as if time travel is ruled out on physical grounds, and that causality-violating spacetimes ought to be deemed unphysical artefacts of the mathematical formalism of overly permissive GR.\footnote{It certainly appeared so to me when Smeenk and I wrote \citet[\S8]{smewut11}.} Although suggestive, we will see in \S\ref{sec:emergent} that this may not follow. The third possibility is that we find closed timelike curves (or an analogous feature) prevalent in the more fundamental theory of quantum gravity, or at least in its physical applications. This outcome would suggest (though perhaps again not entail, see \S\ref{sec:emergent} below) that the intriguing possibility of violations of causality, first encountered in GR, may remain in quantum gravity. 
Before turning to approaches to quantum gravity in the next section, the remainder of this section (\S\ref{ssec:classical}) offers a brief discussion at the classical level of two notions central to the possibility of closed timelike curves and thus of time travel: global hyperbolicity and so-called `energy conditions'. \subsection{At the classical level}\label{ssec:classical} Let me start by fixing some terminology. A relativistic spacetime $\langle M, g_{ab}\rangle$, described by a four-dimensional differentiable manifold $M$ with a metric $g_{ab}$ of Lorentz signature, is \textit{time orientable} just in case it permits a globally consistent time direction in the form of an everywhere defined continuous timelike vector field. As a temporal direction is picked as the `future', the time orientation of such a spacetime is thereby determined. A \textit{worldline} is a continuous timelike curve whose orientation agrees with the time orientation of the spacetime in which it is contained. A \textit{closed timelike curve} is a closed worldline. The existence of closed timelike curves in a spacetime marks the violation of a so-called `causality condition'. It turns out that there is a whole hierarchy of stronger and stronger causality conditions (\citealt[\S\S6.4-6.6]{hawell73}; \citealt[\S\S8.2-8.3]{wal84}). The weakest condition requires that there are no closed timelike curves. The strongest condition demands that the spacetime be `globally hyperbolic'. Thus, if a spacetime is globally hyperbolic, it does not contain closed timelike curves. Let us unpack the notion of global hyperbolicity. A spacelike hypersurface $\Sigma \subset M$ with no edges is called a \textit{global time slice}. If such a global time slice $\Sigma$ is \textit{achronal}, i.e., if it is not intersected more than once by any future-directed causal curve, it is called a \textit{partial Cauchy surface}. 
The \textit{future domain of dependence} $D^+(\Sigma)$ of a partial Cauchy surface $\Sigma$ is the set of events $p\in M$ such that every past inextendible causal curve through $p$ intersects $\Sigma$. The \textit{past domain of dependence} $D^-(\Sigma)$ is defined analogously. A partial Cauchy surface $\Sigma$ is a \textit{Cauchy surface} just in case the total domain of dependence $D^+(\Sigma) \cup D^-(\Sigma)$ is $M$. A spacetime $\langle M, g_{ab}\rangle$ which admits a Cauchy surface is said to be \textit{globally hyperbolic}. The future domain of dependence of a global time slice $\Sigma$ is of interest because it characterises the set of events for which any signal or information which reaches them must have passed through $\Sigma$. Thus, assuming that signals and information cannot travel faster than the speed of light, conditions on $\Sigma$ should determine the complete state at $p$---assuming deterministic dynamics.\footnote{This expectation is confirmed, e.g., for physical fields in curved spacetimes, which propagate in accordance to hyperbolic wave equations \citep[Ch.\ 10]{wal84}.} Similarly, the conditions on $\Sigma$ should determine the state at any event in the past domain of dependence. In the case of a globally hyperbolic spacetime, therefore, any event at all in the spacetime is similarly determined by the conditions on $\Sigma$. Consequently, there cannot be (among other things) closed timelike curves in such a spacetime: regardless of whether or not these closed timelike curves intersect $\Sigma$, $\Sigma$ would not be a Cauchy surface, and so the spacetime would not be globally hyperbolic. Global hyperbolicity or the existence of closed timelike curves are global properties of a spacetime in the sense that, although properties ascribable to spacetimes, they are not possessed by individual events and do not supervene on any such local properties. 
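The definitions just given can be checked in the simplest case; the following is a standard textbook illustration, not drawn from the discussion above:

```latex
% Standard textbook illustration (not part of the original text).
In Minkowski spacetime $\langle \mathbb{R}^4, \eta_{ab}\rangle$, the
hyperplane $\Sigma = \{(t,x,y,z): t=0\}$ is a Cauchy surface: every past
inextendible causal curve through an event with $t>0$ intersects
$\Sigma$, so $D^+(\Sigma) = \{t \geq 0\}$, and analogously
$D^-(\Sigma) = \{t \leq 0\}$; hence
$D^+(\Sigma) \cup D^-(\Sigma) = \mathbb{R}^4$ and Minkowski spacetime is
globally hyperbolic. By contrast, deleting a single event $p$ with
$t(p)>0$ from the manifold destroys this: a past inextendible causal
curve that runs into the excised point never reaches $\Sigma$, so the
punctured spacetime is not globally hyperbolic, even though it contains
no closed timelike curves.
```

The second half of the example also shows that failures of global hyperbolicity need not involve causal pathologies as dramatic as closed timelike curves.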
Before we get to quantum gravity, it should be noted that it would be surprising if moving beyond GR would mean relapsing into imposing global, non-dynamical constraints on spacetime structure such as prohibiting the existence of closed timelike curves by fiat, as it appears as if GR owes its success precisely to abandoning such constraints. We should expect one kind of constraint, however, to restrict the models of classical GR: energy conditions. These are universally valid (but local) constraints on the matter sector of the theory and capture the thought that not just any stress-energy tensor $T_{ab}$ can adequately represent the physical matter content of the universe. Thus, they express general conditions which any matter or non-gravitational field is required to satisfy in order to qualify as `physical'. Through the Einstein equation, \begin{equation}\label{eq:efe} G_{ab} = 8\pi T_{ab}, \end{equation} where the \textit{Einstein tensor} $G_{ab} = G_{ab} [g_{ab}]$ is constructed from the spacetime metric $g_{ab}$ and its first and second derivatives, and Newton's constant $G$ and the speed of light $c$ are set to 1, that matter content is related to the geometry of the spacetime. Just like the metric, the Einstein tensor is defined on a four-dimensional pseudo-Riemannian manifold $M$ and describes the curvature of the spacetime $\langle M, g_{ab}\rangle$. Importantly, the (classical) energy conditions are defined in tangent spaces and so obtain locally, i.e., point-wise. Since these conditions hold only strictly locally, they do not have the power to rule out causal pathologies such as closed timelike curves, which are global (or at least `regional') properties of a spacetime in that they are topological features of a spacetime that can only be exemplified in at least a region of spacetime. 
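Two standard examples of such point-wise conditions, reproduced here for illustration (they are not stated in the text), are the weak and the strong energy conditions:

```latex
% Standard examples of point-wise energy conditions (for illustration).
% For every timelike vector $\xi^a$ in the tangent space at any event:
\begin{align*}
  \text{weak energy condition:}\quad
    & T_{ab}\,\xi^a \xi^b \geq 0\,,\\
  \text{strong energy condition:}\quad
    & \Bigl(T_{ab} - \tfrac{1}{2}\,T\,g_{ab}\Bigr)\,\xi^a \xi^b \geq 0\,,
    \qquad T := T^{a}{}_{a}\,.
\end{align*}
% The weak energy condition says that every observer measures a
% non-negative energy density. Both inequalities are imposed point-wise,
% in the tangent space, which is why they cannot by themselves rule out
% global features such as closed timelike curves.
```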
As it turns out, these point-wise energy conditions can only be satisfied by `classical' types of matter; they all fail for quantum fields (due to arbitrarily negative expectation values of energy densities of quantum fields at any point). Hence, the classical conditions have been relaxed to `non-local' energy conditions, which hold in extended regions of spacetime, rather than at single events. Thus, they could at least potentially disqualify spacetimes with closed timelike curves as unphysical. Although the final verdict is still out, it seems that this hope will not be borne out.\footnote{For a discussion of this point, see \citet[\S7]{smewut11}; for a primer on energy conditions, see \citet{cur17}.} \section{Theories of quantum gravity}\label{sec:qg} This section will introduce four approaches to quantum gravity and discuss the viability of time travel in each of them: semi-classical quantum gravity (\S\ref{ssec:semi}), causal set theory (\S\ref{ssec:cst}), loop quantum gravity (\S\ref{ssec:lqg}), and string theory (\S\ref{ssec:st}). \subsection{Semi-classical quantum gravity}\label{ssec:semi} The research programme of quantum field theory on curved spacetime offers a first stab at a quantum theory of gravity. Although mathematically demanding, the approach is physically simple: take a classical relativistic spacetime and treat it as a fixed background for quantum fields. For a linear field $\phi$ defined over globally hyperbolic spacetimes $\langle M, g_{ab}\rangle$, there is a mathematically rigorous and physically well-behaved procedure for writing down an algebra $\mathfrak{A}(\langle M, g_{ab}\rangle)$ of observables \citep[Ch.\ 4]{wal94}. 
For more general fields, however, semi-classical quantum gravity may not be well-behaved, and the procedure cannot be applied to non-globally hyperbolic spacetimes.\footnote{For a recent---and optimistic---review, see \citet{ver12}.} If the approach demands global hyperbolicity, it cannot accommodate time travel on closed timelike curves. Since the spacetime is already set in place---and fixed---, there is also no option of emergent time travel under the assumption of global hyperbolicity. This will play out rather differently in loop quantum gravity (see below). However, the most severe limitation of the approach is that the spacetime structure is assumed to be fixed. This stands in obvious tension with the insight in GR that the spacetime geometry not only acts upon the matter content of the world, but is also acted upon by it. Thus, spacetime geometry is dynamical, and one must countenance the `backreaction' of the matter field on the metric. The most basic way to construct a quantum theory of gravity which does this is to combine classical relativistic spacetime geometry---the left-hand side of (\ref{eq:efe})---with an account of quantum matter which will determine the right-hand side of (\ref{eq:efe}). The quantum matter fields, described by an appropriate quantum field theory (QFT), propagate in a classical spacetime. The backreaction of the matter fields on the spacetime geometry is computed through the \textit{semi-classical Einstein field equation}: \begin{equation}\label{eq:semi} G_{ab} = 8 \pi \langle \psi |\hat{T}_{ab} |\psi\rangle, \end{equation} where $\langle \psi |\hat{T}_{ab} |\psi\rangle$ is the expectation value of the stress-energy tensor of the quantum fields (which now is of course an operator) in a (physically reasonable) state $|\psi\rangle$. Semi-classical quantum gravity is a quantum theory of gravity as defined above: it combines gravity---in the form of spacetime curvature---with a quantum theory of matter. 
In general, semi-classical quantum gravity is expected to offer a valid extension of GR for some relatively simple cases when quantum and gravitational effects are not too strong \citep[\S2]{wut19a}. Does semi-classical quantum gravity permit time travel? As the only difference between the fully classical equation (\ref{eq:efe}) and the semi-classical one (\ref{eq:semi}) is in the description of the matter on the right-hand side, the relevant issue is whether the quantum nature of matter is less, equally, or more constraining on the spacetime geometry on the left-hand side than is classical matter. As mentioned above, the most direct way in which it is \textit{less} constraining and so more permissive of time travel than classical matter is by violating the energy conditions believed to hold for classical matter. However, the expectation value $\langle \psi |\hat{T}_{ab} |\psi\rangle$ may also act in ways which are \textit{more} constraining than classical matter. For instance, \citet{haw92} argued that since $\langle \psi |\hat{T}_{ab} |\psi\rangle$ appears to diverge on or near the boundary of the region of spacetime containing closed timelike curves (assuming there were none `before'),\footnote{These boundaries are so-called `future Cauchy horizons', i.e., boundaries of future domains of dependence of global time slices, where these domains are defined as those regions such that every past inextendible causal or timelike curve through any event in the region intersects the global time slice.} it effectively `cuts off' the region of spacetime with the causal pathologies, rendering it inaccessible from the causally well-behaved domain and thus effectively protecting `chronology'. 
Hawking took the divergence of the expectation value of the energy-momentum tensor as the Cauchy horizon is approached and thus as closed timelike curves are `about to form' to strongly support his `chronology protection conjecture', according to which the ``\textit{laws of physics do not allow the appearance of closed timelike curves}'' \citep[603, emphasis in original]{haw92}. Stated in this way, given the pervasive existence of closed timelike curves in relativistic spacetimes, there thus seems to be little reason to think that the chronology protection conjecture is true in semi-classical quantum gravity, and no reason at all to accept it in the context of GR. If successful, Hawking's argument might well only establish that the region with the closed timelike curves is beyond the reach of physical denizens of the causally well-behaved region on this side of the Cauchy horizon as they would have to pass through a wall of arbitrarily high energy density in order to be able to travel along closed timelike curves. But these curves might still exist beyond the Cauchy horizon in an inaccessible region of spacetime, and in fact could be taken advantage of by would-be time travellers on the far side of the Cauchy horizon. In this case, the laws of physics would not prevent the \textit{existence} of closed timelike curves, though perhaps their \textit{accessibility}. However, it is not clear whether Hawking's argument succeeds in the first place. A theorem due to \citet{kayeal97} establishes that the expectation value of the energy-momentum tensor for a scalar field is not everywhere well-defined on compactly generated Cauchy horizons. The authors suggest that this result may be taken as further support of Hawking's chronology protection conjecture in that it suggests that the Cauchy horizon cordons off the region with closed timelike curves. 
However, the result can just as well be taken to indicate that semi-classical quantum gravity is simply no longer valid at the horizon\footnote{See for example \citet{vis03}, as well as the discussion in \citet[\S5]{earsmewut}.} and that, therefore, Hawking's argument fails, at least if based solely on semi-classical quantum gravity. Thus, only a more fundamental theory of quantum gravity can deliver a final verdict on the matter. The prospect of probing more deeply to see whether chronology protection obtains not only motivates the present inquiry, but---as should become clear---also promises to shed light on the nature of quantum gravity itself. \subsection{Causal set theory}\label{ssec:cst} Like the research programmes introduced in \S\ref{ssec:lqg} and \S\ref{ssec:st}, causal set theory aims at offering a `full' quantum theory of gravity, i.e., a theory in which the gravitational sector, too, is subjected to a quantum treatment. It is motivated by a result in classical GR, which shows that at least for an important class of relativistic spacetimes, the causal structure determines the metric structure of the spacetime up to a conformal factor.\footnote{This is a paraphrase of a theorem due to \citet{mal77}. More precisely, the theorem states that for any two `distinguishing' (and temporally oriented) spacetimes $\langle M, g_{ab}\rangle$ and $\langle M', g'_{ab}\rangle$, a causal isomorphism $\phi: M \rightarrow M'$ is a smooth conformal isometry. A bijection $\phi: M \rightarrow M'$ is a \textit{causal isomorphism} just in case for all $p, q \in M$, $p$ is in the chronological past of $q$ if and only if $\phi(p)$ is in the chronological past of $\phi(q)$. A spacetime $\langle M, g_{ab}\rangle$ is \textit{distinguishing} just in case for all $p, q \in M$, if the chronological past of $p$ is identical to the chronological past of $q$, then $p=q$, and if the chronological future of $p$ is identical to the chronological future of $q$, then $p=q$. 
A causal isomorphism $\phi$ is a conformal isometry just in case it is a diffeomorphism and there exists a conformal factor $\Omega: M' \rightarrow \mathbb{R}$ such that $\phi_\ast (g_{ab}) = \Omega^2 g'_{ab}$ with $\Omega \neq 0$.} This result is interpreted to suggest that the causal structure of a (causally well-behaved) spacetime contains almost the full information concerning its geometry; in fact, all but some information about local `size'. In the causal set theory programme, this missing `size' information is naturally supplied by the number of discrete `atoms' of spacetime contained in any region. In slogan form, causal set theory assumes spacetime to be causal structure plus number. Accordingly, the fundamental structures postulated by causal set theory---the `causal sets'---are discrete sets of elementary events, which are partially ordered by a relation of causal precedence or of causal connectibility. As it stands today, causal set theory frames a promising research programme but is still a long way from offering a complete quantum theory of gravity. The promise of the research programme remains unfulfilled in three ways. First, merely requiring the fundamental structure of our world to be a discrete, partially ordered set falls far short of constraining the boundless possible combinations of such structures to serious candidates with a promise to reproduce our physical world: there are just too many discrete partial orders, almost all of which do not resemble our universe. How can one identify the `physical sector' of the theory? The most popular strategy for taming the unruly possibilities is to impose additional constraints; in particular, advocates of causal set theory favour imposing dynamical laws in response to the problem (e.g.\ the classical sequential growth dynamics proposed in \citealt{ridsor99}). 
Second, even the successful resolution of this trouble would at best result in a purely classical theory: neither does the state space have the structure of a vector space, nor is there anything quantum about the dynamics. If the theory is truly to incorporate the quantum nature of matter, then causal set theory as it stands can at best be a stepping stone toward a full quantum theory of gravity. Third, causal set theory suffers from the same affliction as all other approaches to quantum gravity: a full understanding of the relationship between the fundamental physics postulated and the emergent relativistic spacetime with its dynamics between spacetime and matter as encoded in the Einstein field equation remains elusive. Whatever the eventual resolution of these challenges may look like, what does the present state of the theory suggest regarding the possibility of time travel? In special-relativistic theories, the causal structure of spacetime is expressed by the usual and well-behaved lightcones of Minkowski spacetime. In Minkowski spacetime, causal precedence thus merely \textit{partially} orders events, as spacelike related events do not stand in this relation. Since causal set theory also permits `spacelike separated' events, the ordering is equally merely partial. For our present purposes what matters, however, is that the ordering is also no weaker than partial. 
This means, in particular, that it is not a mere \textit{pre-order}, i.e., a reflexive and transitive order, which is not, in general, antisymmetric.\footnote{A binary relation $R$ on a domain $D$ is \textit{antisymmetric} just in case for all $x, y \in D$, if $Rxy$ and $Ryx$, then $x=y$.} The demand that the causal relation be antisymmetric (and so not a mere pre-order) thus precludes the possibility of causal loops of the form of cycles containing numerically distinct events $a$ and $b$ such that $a$ causally precedes $b$ and $b$ causally precedes $a$, as was possible in spacetimes in GR which contain closed timelike curves. In other words, causal set theory prohibits, in its central axiom, that the fundamental structure accommodates what would be the natural analogue of closed timelike curves in causal set theory, i.e., closed chains of events connected by the relation of causal connectibility. This choice simplifies the technical demands of the approach, as well as its metaphysics (\citealt{wut12c} and \citealt[Ch.\ 3]{hugwut}), but it imperils causal set theory's capacity to give rise to relativistic spacetime models which do include closed timelike curves. Although physicists are generally happy to give up non-globally hyperbolic models of GR, a theory's inability to reproduce that sector of GR may turn out to be a vice rather than a virtue. 
This may happen, e.g., if models with closed timelike curves turn out to be physically significant.\footnote{As is argued in \cite{earsmewut} and in \citet{smewut11}, this is a possibility that should not be ignored at the present stage of inquiry, given that it is difficult to know antecedently which parts of a new theory reveal important new physics.} That causal set theory cannot lend itself to spacetimes with closed timelike curves is not, however, a foregone conclusion: it might be that causal sets, although free of causal loops at the fundamental level, nevertheless can combine in ways such that at higher levels, causal loops emerge. If this turns out to be the case, however, then the emergent structure would necessarily violate the strictures of causal set theory and thus cannot be a model of it. I will return to this possibility in \S\ref{sec:emergent} below. \subsection{Loop quantum gravity}\label{ssec:lqg} Just like causal set theory, loop quantum gravity also starts from GR in its attempt to articulate a quantum theory of gravity. Instead of attempting this via the formulation of a classical discrete structure, it applies a canonical quantization procedure to GR. A canonical quantization of a classical theory attempts to preserve the core structure of the classical theory and convert it, in the most faithful way possible, into a quantum theory. This core structure consists in the canonical variables and their algebraic structure expressed by their Poisson bracket. The classical variables, such as position and momentum, are turned into quantum operators on a Hilbert space and the Poisson bracket becomes the commutation relation between the basic canonical operators. 
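The recipe can be made concrete in the familiar case of a single mechanical degree of freedom; the following is a standard illustration, not specific to gravity:

```latex
% Canonical quantization of one mechanical degree of freedom
% (standard illustration, not specific to gravity).
% Classical canonical pair and Poisson bracket:
\begin{equation*}
  \{q, p\} = 1\,.
\end{equation*}
% Quantum theory: operators on the Hilbert space $L^2(\mathbb{R})$,
\begin{equation*}
  \hat{q}\,\psi(q) = q\,\psi(q)\,, \qquad
  \hat{p}\,\psi(q) = -i\hbar\,\frac{\partial\psi}{\partial q}\,, \qquad
  [\hat{q}, \hat{p}] = i\hbar\,,
\end{equation*}
% so that the commutator of the basic operators mirrors the classical
% Poisson bracket, $\{\cdot,\cdot\} \mapsto \tfrac{1}{i\hbar}[\cdot,\cdot]$.
```

Canonical quantum gravity applies the same template, with the spatial geometry (or, in loop quantum gravity, the connection) playing the role of $q$.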
Any canonical approach to quantum gravity assumes that spacetime $\langle M, g_{ab}\rangle$ is globally hyperbolic and thus of topology $\Sigma\times\mathbb{R}$, where $\Sigma$ is again a three-dimensional spacelike submanifold of $M$.\footnote{An accessible textbook for both approaches to canonical quantum gravity described in this section is \citet{gampul}.} In this case (of global hyperbolicity), there exists a timelike vector field $t^a$ everywhere on $M$. This vector is tangent to a family of curves which can be parametrized by a `time' parameter $t$. The resulting three-dimensional surfaces $\Sigma_t$ of constant $t$ are totally ordered time slices in those spacetimes. An important technical choice for any canonical approach to quantum gravity is to select a pair of canonical variables as coordinates in the classical phase space of GR. In the traditional canonical approach, the four-dimensional metric of spacetime is rewritten as a function of the spatial three-metric defined on the $\Sigma_t$ and of the `lapse' $N$ and the `shift vector' $N^a$ resulting from the decomposition of $t^a = N n^a + N^a$, where $n^a$ is a vector field normal to the $\Sigma_t$'s. The pair of canonical variables in this approach is then given by the spatial three-metric as the `configuration' variable and what is essentially the extrinsic curvature as its conjugate momentum variable. Capturing the content of the globally hyperbolic sector of GR using this choice of canonical variables leads to a representational surplus resulting from expressing the physical content of the theory using more variables than are needed to capture the true degrees of freedom. As a consequence, `constraint equations' arise. The symmetric three-metric encodes six configuration degrees of freedom. The four constraint equations then leave us with two degrees of freedom for each point in space, as expected from GR. Solving the constraint equations thus gives us the true physical state space. 
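The decomposition and the counting just sketched can be made explicit; what follows is the standard ADM presentation from the canonical-gravity literature, supplied here only for concreteness:

```latex
\mathrm{d}s^2 \;=\; -N^2\,\mathrm{d}t^2
  \;+\; h_{ab}\,\bigl(\mathrm{d}x^a + N^a\,\mathrm{d}t\bigr)\bigl(\mathrm{d}x^b + N^b\,\mathrm{d}t\bigr),
```

where $h_{ab}$ is the spatial three-metric on the slices $\Sigma_t$, $N$ the lapse, and $N^a$ the shift. The symmetric $h_{ab}$ carries six components per space point; the Hamiltonian constraint and the three momentum constraints remove four of them, leaving the two physical degrees of freedom per point expected from GR.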
Although its canonical variables permit a natural geometric interpretation, this choice is marred with insurmountable technical difficulties: the constraint equations are non-polynomial and no technique is known for solving them. This problem has essentially halted progress along these lines. An alternative choice of basic variables promised to revive the canonical quantization programme and led to the approach known as `loop quantum gravity'. In loop quantum gravity, one proceeds not by using `metrical' variables to capture the geometry of spacetime, but instead variables based on the `connection'. Rewriting the metric in terms of `triads', the connection enters their covariant derivative, yielding an expression of the geometry of spacetime equivalent to that based on metrical variables. Loop quantum gravity then selects the so-called `Ashtekar variables', i.e., the (densitized) triads as momentum variables and the connection as canonically conjugate configuration variables. Re-expressing the Einstein-Hilbert action in terms of the components of the connection and the triads, it turns out that there are three sets of constraint equations which must be satisfied in order for the rewritten theory to be equivalent to (the globally hyperbolic sector of) GR. Among these, the Hamiltonian constraint is of particular interest and will be discussed in a moment. Moving to the quantum theory by means of canonical quantization, it seems natural to consider the `connection representation' of the wave function, i.e., expressing the wave function of the system as a function of the connection variable, similarly to Maxwell and Yang-Mills theories. 
However, technical difficulties suggest replacing the connection representation with the `loop representation', in which the wave function is given as a functional of `holonomies' around closed loops.\footnote{See \citet[Ch.\ 8]{gampul}.} Working in this loop representation renders two of the three families of constraint equations solvable. With just one constraint remaining to be solved, we arrive at what is known as the `kinematical Hilbert space'. The so-called `spin network states' can be constructed from the loop states and constitute an orthonormal basis of this Hilbert space \citep[\S6.3]{rov08lrr}. A spin network can be represented by a `coloured' graph such that both its nodes and the links between them carry spin representations. The spin network states can naturally be interpreted as forming a kind of discrete space where the nodes of the network represent the `atoms' of this granular space, and the links the surfaces where adjacent atoms `touch' \citep[\S1.2.2]{rov04}. On this interpretation, physical space is, fundamentally, a quantum superposition of spin networks.\footnote{For a further discussion concerning the physical interpretation of these spin networks, see \citet[\S2.1]{wut17}.} Although the quantum measurement problem prohibits a straightforward interpretation of this structure as chunky space, the geometric properties of the spin networks are at least suggestive of this natural interpretation. Time, on the other hand, seems to have disappeared entirely in canonical quantum gravity. The remaining constraint equation to be solved turns out to demand that the Hamiltonian operator sends the physical states to zero. Unlike in quantum mechanics, where the Schr\"odinger equation mandates how the Hamiltonian governs the dynamical evolution of the system, the Hamiltonian constraint equation here suggests that there is no change over time for genuinely physical states. 
In fact, there remains no quantity that could reasonably be interpreted as time in the Hamiltonian constraint equation.\footnote{\citet[\S2]{hugeal13} offers a more detailed explanation of the problem and brief survey of reactions to this `problem of time'.} Furthermore, this equation has so far resisted being solved, stalling the programme of loop quantum gravity. Without progress on this problem, however, we seem to have no prayer of even articulating what time travel could mean in this theory. There are two workarounds. First, some physicists have symmetry-reduced the physical system under study, restricting the classical theory to homogeneous and isotropic spaces before subjecting it to quantization. This `cosmological sector' is much simpler than the full theory such that the corresponding Hamiltonian constraint equation can be solved. Unfortunately, these systems are too simple to permit anything that could reasonably be interpreted as time travel.\footnote{Though they are philosophically rich in other ways \citep{hugwut18}.} The second workaround is more relevant for our present purposes. The idea here is to forego the canonical description of the dynamical evolution in favour of a covariant formulation of the evolution. Hence, instead of the Hamiltonian operator, we express the dynamics of the theory in terms of transition amplitudes between `initial' and `final' kinematical states. These transition amplitudes are computed as weighted sums over `histories', i.e., ways in which the theory says the `final' state could have been obtained from the `initial' state. The details of how this is accomplished are irrelevant for our purposes (and are given in \citealt{rovvid}). What matters is that on a natural, but arguably overly simplistic, interpretation, both the `initial' and the `final' states deserve to be unquoted and correspond to quantum states of spatial hypersurfaces---indeed of global time slices of spacetime. 
Thus, we seem to be faced with a temporally innocuous structure in which no meaningful sense of time travel is permitted. This interpretation is supported by the fact that any canonical quantization scheme of GR starts out by restricting itself to globally hyperbolic spacetimes. The canonical quantization recipe simply requires the classical spacetime structure of the physical system to be quantized to be globally hyperbolic, and thus causally well behaved. Just as for causal set theory, loop quantum gravity really only considers the globally hyperbolic sector of GR. One would therefore naturally assume that the theory also prohibits an analogue of closed timelike curves at the fundamental level, as did causal set theory. This conclusion would be premature, though. First, even though macroscopically the ordering of the initial and final states at earlier and later global times precludes closed curves in time, it could be that there exist tiny loops like this at the microscopic level. Second, just like causal sets, the spin networks of loop quantum gravity may combine such that causal loops emerge at a higher level even though there are none at the fundamental level. This second option will be discussed in \S\ref{sec:emergent} below, so let me finish with a brief word on the first possibility. Given that the problem of time has so far resisted resolution in the canonical approach to solving the Hamiltonian constraint of the full theory, the possibility of microscopic causality violations remains undecidable on this approach. On the covariant alternative, the possibility can be ruled out: the transition amplitudes are constructed from oriented `simplices' which are constructed from considering, among other things, the action of the Hamiltonian on the nodes of the spin network \citep[\S\S4.4, 5.3, 7.3]{rovvid}. Thus, the distinction between `timelike' and `spacelike' directions is maintained at the fundamental level. 
Given the construction rules of these simplices, microscopic causal loops are ruled out. \subsection{String theory}\label{ssec:st} As a third example of full quantum gravity, let us consider the fate of causal loops in string theory \citep{pol98,zwi04}. String theory is the dominant approach to quantum gravity. Unlike causal set theory and loop quantum gravity, it starts out from the standard model of particle physics and tries to extend the framework to incorporate gravity. As string theory is based on the paradigm of particle physics, it does not conceive of gravity as a feature of a dynamical spacetime, but instead as arising from an exchange of force particles, so-called `gravitons'. Furthermore, the point particles of earlier theories are replaced by 1-dimensional `strings' (or higher-dimensional `branes') in order to circumvent the problem of `non-renormalizability', which befell earlier attempts to incorporate gravity into the framework of particle physics \citep{wit96}. String theory exists at two levels. First, there is the perturbative level. At this level, string theorists have developed mathematical tools in order to define the string perturbative expansion over a given background spacetime. Second, the perturbative level is expected to be grounded in the more fundamental non-perturbative theory. This elusive `M-theory' does not yet exist. In fact, its existence is just inferred from the usual assumption that a perturbative expansion only ever gives an approximation to the true physical situation, which must be precisely captured by a more fundamental, and non-perturbative, theory. M-theory is thought to relate five different perturbative string theories by `dualities', i.e., symmetries equating strong coupling limits in one string theory to a weak coupling limit in another string theory. As M-theory does not yet exist, it is impossible to determine its verdict on time travel. 
However, supersymmetric gravity---widely considered a stepping stone towards full string theory---offers guidance on whether we should expect string theory to permit time travel. Although most of the results I am aware of have been obtained in five-dimensional supersymmetric gravity rather than in higher-dimensional theories, it turns out that solutions of five-dimensional supergravity can straightforwardly be extended to solutions of ten- or eleven-dimensional supergravity \citep[4590]{gaueal03}. Consequently, it appears as if string theory will likely admit time travel in case five-dimensional supersymmetric gravity does. And it turns out that five-dimensional supersymmetric gravity admits many solutions with closed timelike curves. The systematic investigation of closed timelike curves in supersymmetric gravity began two decades ago in \citealt{gibher99}. Since then, at least three important classes of solutions in five-dimensional supersymmetric gravity which contain closed timelike curves have been identified. First, there are supersymmetric solutions of flat space with a periodically identified time coordinate, resulting in a construction analogous to a rolled-up Minkowski spacetime of topology $S^1\times \mathbb{R}^3$ in GR. These solutions are topologically not simply-connected. In this case, passing to a covering spacetime avoids the closed timelike curves. In general, however, the supersymmetric solutions with closed timelike curves have a simply-connected topology, so that their closed timelike curves cannot be avoided in this way. The second class of supersymmetric solutions with closed timelike curves consists in an analogue of G\"odel spacetime \citep{gaueal03}. Like G\"odel spacetime, these solutions model a topologically trivial, rotating, and homogeneous (and so not asymptotically flat) universe containing closed timelike curves.
Whether these G\"odel-type solutions really permit time travel has been contested: holography may effectively act to protect the chronology of G\"odel-type solutions in that closed timelike curves are either hidden behind a `holographic screen' and thus made inaccessible for timelike observers, or else broken up into pieces such that no closed timelike curves remain intact \citep{boyeal03}. The third class are the so-called `BMPV black hole' solutions, named after the initials of \citet{breeal97}. BMPV black holes are charged, rotating black holes in simply-connected, asymptotically flat spacetime. Thus, they are the supersymmetric counterparts of the general-relativistic Kerr-Newman black holes. Just as Kerr-Newman spacetimes can be maximally analytically extended to encompass a region inside the event horizon of the black hole to contain closed timelike curves \citep{wut99}, so can BMPV black holes, as has been shown by \citet{gibher99}. More precisely, Gibbons and Herdeiro show this to be the case for \textit{extremal} black holes, i.e., black holes whose angular momentum equals their mass (in natural units). It is unclear whether their result generalizes to include physically more realistic cases. Although they firmly establish their result only for a rather finely tuned combination of black hole parameters, \citet{gibher99} show that the presence of closed timelike curves for BMPV black holes is rather robust: this hyper-critical solution represents a simply connected, geodesically complete, asymptotically flat, non-singular, time-orientable, supersymmetric spacetime with finite mass, satisfying the dominant energy condition. Thus, `cosmic censorship', whatever its details, will struggle to eliminate this case.\footnote{See \citet[623]{smewut11} for more details.} None of these three classes of supersymmetric spacetimes with closed timelike curves conclusively establishes the possibility of time travel in supersymmetric gravity, let alone string theory. 
Having said that, however, it should be noted, with \citet{gaueal03}, that closed timelike curves appear generically in physically important classes of five-dimensional supersymmetric spacetimes. \citet{gaueal03} even complain about how difficult it is to find five-dimensional solutions of supersymmetric gravity which do \textit{not} contain either closed timelike curves or singularities. Of course, this finding may be counted as a strike against supersymmetric gravity, rather than as a point in favour of time travel. Nevertheless, the emerging picture is one pointing toward the suggestion that closed timelike curves arise naturally in string theory, or at least in its vassal theories. Clearly, this suggestion remains preliminary in that it is entirely open to what extent these results translate into a fundamental, non-perturbative version of string theory, and indeed whether string theory or any of the other approaches presented in this section are viable approaches to quantum gravity for that matter. \section{Emergent time travel?}\label{sec:emergent} In the last section, we have discussed the possibility that theories beyond GR directly issue a verdict on the permissibility of time travel. However, as stated in \S\ref{sec:intro}, we need to consider a second possibility, according to which an effective theory renders time travel physically possible, even though it is a valid approximation to a more fundamental theory, which in itself rules out time travel. This is the topic of this section. Of the four approaches discussed in \S\ref{sec:qg}, two seem to directly permit closed timelike curves and so time travel: while this was conjectured to be the case for string theory based on incomplete results from five-dimensional supersymmetric gravity, the prospects of some form of chronology protection obtaining are rather remote for semi-classical quantum gravity. Leaving aside the case of semi-classical quantum gravity, a note of caution concerning string theory is in order.
The results noted in the previous section pertain to the spacetime structure of `target space', which is the spacetime background for strings;\footnote{Strictly speaking, it is not even target space, or at least not the metric $g$ in it, that is fundamental; rather, given a general metric in the action of a theory, one obtains a quantum theory of perturbations around a coherent state, which corresponds to the classical relativistic metric \citep[\S3]{hugvis15}.} it does not correspond to observed, `phenomenal' spacetime, which is an emergent phenomenon in string theory \citep{hug17}. If this is right, then regardless of whether the target spacetime contains closed timelike curves, what will be of interest is whether the emergent phenomenal spacetime will have a structure such as to permit time travel. As the emergence of spacetime, and particularly of its global properties, is at present only very partially understood in string theory,\footnote{See \citet{hugwut}.} further analysis of this will be left for another day. What is the situation in the two approaches which ruled out closed timelike curves (or their analogues) at the fundamental level, viz., causal set theory and loop quantum gravity? In a way, the situation for both causal set theory and loop quantum gravity is similar: fundamentally, they prohibit the equivalent of closed causal curves and so rule out time travel, as we have seen in the previous section. However, depending on what the relationship between the fundamental theory and emergent spacetime may be in each case, we may find that the emergent, macroscopic spacetime structure permits time travel. A consideration of the precise role and ambit of the theory for each case is necessary in order to appreciate this point. One can distinguish between the astrophysical and the cosmological ambit of GR.
On the one hand, GR furnishes a theory of gravity applicable to individual stars, or `small' isolated systems consisting of stars and smaller bodies such as our solar system. As such, it can describe the orbits of planets around their central star, the gravitational collapse of a star, a black hole, the merger of two black holes and the gravitational waves emitted on the occasion, and similar astrophysical phenomena involving gravity. On the other hand, since gravity is the dominant interaction at large distances, GR also delivers a cosmological theory, i.e., a theory describing the large-scale structure of the cosmos in its entirety and throughout most of its history. This should not be confused with a `theory of everything', which it clearly need not be despite the fact that it describes our world at the largest distances and over the longest durations. Qua cosmological theory, GR still supplies the backbone of the current cosmological standard model in the form of the Friedmann-Lema\^{\i}tre-Robertson-Walker spacetimes. These two applications of GR are---though connected---nevertheless distinct. Relativistic spacetimes describing phenomena of the astrophysical kind, just as the cosmological models, are often `global' or `large-scale' in that they encompass large (typically infinite) spatial distances and temporal durations as well. For instance, a Schwarzschild black hole is represented by a spacetime of infinite extent. However, such an astrophysical spacetime is not thought to correctly describe the large-scale structure of the cosmos at all: its description is accurate only near the astrophysical object it is thought to capture. 
The demand that such astrophysical spacetimes be asymptotically flat---roughly that the curvature vanishes away from the astrophysical object, i.e., in the `asymptotic' region---encodes the idea that the system at stake is, at least to a good approximation, isolated from the influence of other systems or indeed the rest of the cosmos.\footnote{To articulate precisely what asymptotic flatness amounts to, and, connectedly, what it is for a system to be isolated in a background-independent theory such as GR is far from trivial and requires some unpacking, as it is offered, e.g., in \citet[\S11]{wal84}.} In principle, spacetimes representing individual systems could then be stitched together in order to obtain a more complete description of the physics of ever larger and more encompassing parts of the cosmos. Closed timelike curves arise in both types of relativistic spacetimes. Astrophysical spacetimes such as Kerr-Newman spacetimes may contain causality-violating regions (in this case inside the maximal analytic extension of the interior of the black hole). For these spacetimes, closed timelike curves are typically confined to a region of spacetime. Thus, in general, there are events such that no closed timelike curves pass through them. Cosmological solutions such as G\"odel spacetime also accommodate time travel. In those cases, closed timelike curves are sometimes not confined to a region, and thus in general every event lies on some closed timelike curve; the opportunity to time travel is then democratically awarded to all events. Returning to quantum gravity, if causal set theory and loop quantum gravity are considered cosmological theories, their laws reign supreme and one would not expect the possibility of time travel to arise. Indeed, since such cosmological models would have to be consistent with the causality-enforcing features of these theories, the possibility of time travel would in this case be precluded universally.
Let us consider this case for both theories separately. \subsection{Causal set theory as a cosmological theory} \label{ssec:causalcosmo} Turning to causal set theory first, if the fundamental structure thus covers the cosmos and this structure is a causal set, then the condition of asymmetry entails that there cannot be a causal loop anywhere in the entire cosmos. Would it be possible, however, that even though causal loops are globally ruled out at the level of the fundamental structure the relativistic spacetime that emerges from this fundamental causal set contained closed timelike curves? In order to answer this question, we need to consider how relativistic spacetimes are thought to emerge from causal sets. While the emergence of spacetime in causal set theory has so far resisted resolution, the outlines are sufficiently clear for us to be in a position to settle the question.\footnote{For a much more detailed account of the emergence of spacetime in causal set theory, see \citet[Chs.\ 3, 4]{hugwut}.} As a necessary condition on the relationship between the underlying causal set and the emergent spacetime, there exists an embedding of the causal set into the spacetime. An embedding of a causal set into a spacetime is an injective map from the domain of the elements of the causal set into the manifold of the spacetime that preserves the causal structure in the sense that for any two elements $x$ and $y$ of the causal set, $x$ causally precedes $y$ if and only if the image of $x$ is contained in the causal past of the image of $y$. This condition and the asymmetry of the causal set together entail that any spacetime events in the image of the causal set cannot be part of a closed timelike curve. Now it is consistent with the condition (and with the asymmetry of the causal set) that the emergent spacetime nevertheless contains closed timelike curves. 
If so, however, at most one of the events on the closed timelike curve could be in the image of the elements of the causal set. Thus, if there exist closed timelike curves in the emergent spacetime, there could be absolutely no trace of this fact in the fundamental structure. As a causal set is discrete and a relativistic spacetime a continuum structure, it will in general not be the case that the fundamental causal set contains all the `information' present in a relativistic spacetime. That the emergent spacetime not contain any relevant geometric features not already in some form present in the causal set motivates the additional demand that the embedding be \textit{faithful}, i.e., that the map distributes the images of the elements of the causal set approximately uniformly on the spacetime manifold, which is assumed to be approximately flat below the discreteness scale.\footnote{For a more precise formulation, see \citet[Ch.\ 4]{hugwut}.} The idea behind imposing faithfulness is precisely that the geometry of the emergent spacetime be `boring' below the scale captured (and \textit{capturable}) by the fundamental discrete structure. If the emergent spacetime contained---presumably at Planckian scales---very thin slices disconnecting from the bulk of the spacetime, looping back to reconnect to it at earlier times in a way such that it contained closed timelike curves running along these slices, then it may not violate faithfulness: the spacetime could be flat (locally Minkowskian) everywhere with just no image point on the thin slice looping back. Though thus consistent with the letter of faithfulness, such a situation would arguably violate its spirit: that the emergent spacetime not contain any relevant features not at least implicitly present in the fundamental structure. In sum, if causal set theory is regarded as a cosmological theory, there appears to be quite literally little space for an emergent spacetime to naturally accommodate closed timelike curves. 
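Since the embedding condition invoked above is purely order-theoretic, it can be checked mechanically on finite examples. The following sketch is merely a toy illustration (the four-element causal set, the map \texttt{image}, and the `spacetime' causal pasts are all invented for this example, and no claim is made that causal set theorists proceed this way):

```python
from itertools import product

# A toy causal set: elements and the relation "causally precedes"
# (irreflexive and transitive, hence a strict partial order).
elements = {"a", "b", "c", "d"}
precedes = {("a", "b"), ("a", "c"), ("a", "d"), ("b", "d"), ("c", "d")}

def is_transitive(rel):
    return all((x, z) in rel
               for (x, y1), (y2, z) in product(rel, rel) if y1 == y2)

def is_antisymmetric(rel):
    # For an irreflexive relation, this rules out two-cycles,
    # i.e. causal loops with a < b and b < a.
    return all((y, x) not in rel for (x, y) in rel)

# A candidate embedding into a toy 'spacetime' whose causal structure is
# given by the causal past J^-(p) of each image point: the condition is
# x precedes y  iff  image(x) lies in J^-(image(y)).
image = {"a": 0, "b": 1, "c": 2, "d": 3}
causal_past = {0: set(), 1: {0}, 2: {0}, 3: {0, 1, 2}}

def embeds(rel, image, causal_past):
    injective = len(set(image.values())) == len(image)
    order_ok = all(((x, y) in rel) == (image[x] in causal_past[image[y]])
                   for x in image for y in image if x != y)
    return injective and order_ok

print(is_transitive(precedes), is_antisymmetric(precedes),
      embeds(precedes, image, causal_past))
```

Antisymmetry together with irreflexivity rules out two-cycles, and transitivity then excludes longer cycles as well; this is the finite analogue of the prohibition of closed causal chains discussed in the text.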
\subsection{Loop quantum gravity as a cosmological theory} \label{ssec:loopcosmo} Many of the conclusions arrived at in the case of causal set theory qua cosmological theory hold in the case of loop quantum gravity in this regime as well. In fact, there is explicit consideration of a sector of loop quantum gravity, known as `loop quantum cosmology', which studies symmetry-reduced models of loop quantum gravity. By imposing isotropy and homogeneity already at the classical level, the constraint equations simplify sufficiently to admit explicit solutions \citep{boj11}. In those models, a `cosmic' time totally orders all events and there is consequently no possibility for time travel. More generally, the Hamiltonian formalism presupposes that the physical system at stake---be it a pendulum, a planet, or spacetime itself---is spatially extended and evolves over time, following the dynamics of Hamilton's equation. Classically, this assumes, as we saw above, that the spacetime has topology $\Sigma\times \mathbb{R}$ such that the spatial time slices $\Sigma$ are again totally ordered in `time' by the reals. Moving to the quantum theory, as (to repeat) the canonical programme has stalled, the question remains open what the states in the physical Hilbert space are, and so how they ought to be interpreted physically. Alternatively, covariant loop quantum gravity does not easily lend itself to a cosmological interpretation. The `initial' and `final' time slices are intended as such, and the spacetime region they enfold is finite and generally rather small. Independently of the size of the region enveloped by the time slices, a truly cosmological model cannot in general be expected to have a first or last `moment' in time. Although one could in principle identify the initial and final time slices and so create a model with the equivalent of closed timelike curves, such constructions would be an abuse of the theory clearly beyond its intended ambit.
Thus, fundamentally, cosmological loop quantum gravity does not permit time travel. Could there be time travel in emergent spacetime, perhaps by means of some more or less artificial construction? Unfortunately, this cannot be conclusively answered, since the emergence of spacetime from states in loop quantum gravity is yet to be fully understood.\footnote{See \citet{wut17} for a more detailed sketch of the current state of the art.} Although the possibility of finely carved emergent spacetimes with closed timelike curves cannot be excluded, it seems as if such spacetimes should not emerge from full-sized cosmological states in loop quantum gravity. \subsection{Quantum gravity as `astrophysics'} \label{ssec:qgastro} There is an alternative to considering a quantum theory of gravity as offering a cosmological theory: it may be deemed, rather, as describing much more local phenomena, such as astrophysical black holes or the very early universe.\footnote{The latter is of course not really a `local' phenomenon as it concerns the early stages of the whole cosmos; however, since the description is really of a very small universe during the first few `Planck times', the description would be only of what is really a very small part of spacetime. This is indeed the remit of `quantum cosmology', which thus becomes an `astrophysical' theory under the present use of the term.} In fact, these are the phenomena where most physicists expect that only a quantum theory of gravity could deliver a satisfactory account, motivating quantum gravity in the first place. Although there are some efforts in this direction (such as the estimation of an entropy bound in \citealt{ridzoh06}), causal set theory lacks well-developed astrophysical applications. This is largely owed to the fact that it is to date a classical theory whose transformation to a quantum theory has been but roughly sketched. 
Apart from the cosmological applications mentioned in \S\ref{ssec:loopcosmo}, loop quantum gravity has also seen some research on black holes (such as the derivation of an expression for the black hole entropy similar to the usual Bekenstein-Hawking formula in \citealt{rov96} or studies of black hole singularity resolution e.g.\ in \citealt{gampul13}). As far as I can tell, there is no indication of the possibility of time travel in any of these applications. But there remains another option. Perhaps quantum gravity will only ever be concerned with local phenomena, offering a fundamental description of the finest threads of what is spacetime macroscopically, while never amounting to a theory of global structure. If so, a quantum theory of gravity should not be considered cosmological. Instead, the global structure would emerge from patching together smaller pieces of fundamental quantum gravitational structures to cosmological totalities following principles or laws distinct from those asserted in quantum gravity. In fact, in GR itself, we cannot infer from a locally causally well behaved spacetime that it contains no closed timelike curves and so is globally well behaved. There exist pairs of locally isometric spacetimes such that one of them contains closed timelike curves while the other does not. For instance, Minkowski spacetime $\langle \mathbb{R}^4, \eta\rangle$ and a slice of Minkowski spacetime rolled up along the timelike direction are locally isometric and so physically indistinguishable, as is illustrated in Figure \ref{fig:twospts}. \begin{figure} \centering \epsfig{figure=twospacetimes2,scale=0.46} \caption{\label{fig:twospts} Two locally isometric spacetimes, only one of which contains closed timelike curves.} \end{figure} Whether or not time travel remains possible in those constructs thus depends on the nature of these laws governing the global structure.
If they are as permissive as those in GR (or indeed \textit{are} those of GR), then the resulting global structure will admit (whatever corresponds to) closed timelike curves and time travel in this sense is possible. Of course, these laws may also be more restrictive and preclude the possibility of time travel. For now, the question remains wide open. \section{Conclusions}\label{sec:conc} One may hope, with \citet*{andnemwut}, that a theory more fundamental than GR would deliver insight into the physical mechanism (such as rotation or `antirotation') behind `acausalities' arising in GR such as the presence of closed timelike curves in some relativistic spacetimes. This hope may be disappointed, even though a more fundamental theory may well admit structures amounting to closed timelike curves and thus permit time travel. As the deliberations in this article show, this clearly remains a live option at the present stage of knowledge. Unfortunately, it is also presently impossible to pronounce any even tentatively conclusive lessons concerning the possibility of time travel to be drawn from quantum gravity. Any more definite insight must await a fuller development of the field. In fact, the preliminary analysis above illustrates just how little we currently know regarding the relationship between these more fundamental theories of quantum gravity and GR. While a fuller analysis of the relationship between quantum gravity and GR is beyond the scope of the present article,\footnote{\citet{hugwut} consider the state of the art regarding the relationship between quantum theories of gravity and GR much more fully.} the issue of what can be said about the causal structure of spacetime as a `classical' limit of the underlying theories of quantum gravity in general, and about the emergence of closed timelike curves in particular, exemplifies that much work remains to be done in quantum gravity.\footnote{I thank the anonymous referee for pressing this conclusion. 
I agree that this is an important upshot of my discussion.} Formulated more positively, although we have yet to learn whether time travel is possible or not, our study blazes a trail forward: using the possibility of time travel and its attendant consideration of the causal structure of spacetime as our foil, the above analysis has led us into the heart of the nature of quantum gravity, its ambit, and---centrally---its relation to GR. For this reason alone, the question of time travel beyond GR is worth our while, even as we await more determinate answers. \bibliographystyle{plainnat}
\section{Introduction} Norway has a large tolerance towards dialectal variation \cite{nhs} and, as such, one can find examples of dialectal use in many areas of the public sphere, including politics, news media, and social media. Although there has been much variation in writing Norwegian since the debut of Nynorsk in the 1850s, the acceptance of dialect use in certain settings is relatively new. The official language policy after World War 2 was to include forms belonging to all layers of society into the written norms, and a ``dialect wave'' has been going on since the 1970s \cite[235-238]{nhs}. From 1980 to 1983 there was a project called \textit{Den første lese- og skriveopplæring på dialekt} `The first training in reading and writing in dialect' \cite{lesing-og-barns-talemaal-bull}, led by Tove Bull, in which primary school students were allowed to use their own dialect in school. \newcite{nhs} also point out that the later interest in writing in dialect in media such as e-mail and text messages can be seen as an extension of the interest in dialectal writing in the 1980s \cite[239]{nhs}. They also note that the tendency was initially strongest in the county of Trøndelag, but later spread to other parts of the country and among adults. At the same time, there are two official main writing systems, \textit{i.e.}\xspace Bokmål and Nynorsk, which offer prescriptive rules for how to write the spoken variants. This leads to a situation where people who typically use their dialect when speaking often revert to one of the written standards when writing. However, despite there being only two official writing systems, there is considerable variation within each system, as the result of years of language policies.
Today we can find both `radical' and `conservative' versions of each writing system, where the radical ones try to bridge the gap between the two norms, while the conservative versions attempt to preserve differences. However, it is still natural that these standards have a regularizing effect on the written varieties of people who normally speak their dialect in most situations \cite{Gal2017}. As such, it would be interesting to know \emph{to what degree dialect users deviate from these established norms and use dialect traits when writing informal texts}, \textit{e.g.}\xspace on social media. This could also provide evidence of the vitality of certain dialectal traits. In this paper, we propose a first step towards creating a corpus of written dialectal Norwegian by identifying the best methods to collect, clean, and annotate tweets into Bokmål, Nynorsk, or dialectal Norwegian. We concentrate on geolects, rather than sociolects, as we observe these are easier to collect on Twitter, \textit{i.e.}\xspace the traits that identify a geolect are more likely to be written than those that identify a sociolect. This is a necessary simplification, as dialect users rarely write with full phonetic awareness, making it impossible to find dialect traits that lie mainly in the phonology. As such, our corpus relies more on lexical and clear phonetic traits to determine whether a tweet is written in a dialect. We collect a corpus of 1,073 tweets which are annotated as \texttt{Bokmål}, \texttt{Nynorsk}, \texttt{Dialect}, or \texttt{Mixed} and perform a first set of experiments to classify tweets as containing dialectal traits using state-of-the-art methods. We find that fine-tuning a Norwegian BERT model (NB-BERT) leads to the best results.
We perform an analysis of the data to find useful features for searching for tweets in the future, confirming several linguistic observations of common dialectal traits, and find that certain dialectal traits (those from Trøndelag) are more likely to be written, suggesting that because these traits diverge strongly from Bokmål and Nynorsk, their users are more likely to deviate from the established norms when composing tweets. Finally, we release the annotations and dialect prediction models for future research.\footnote{Available at \url{https://github.com/jerbarnes/norwegian_dialect}} \section{Related Work} The importance of incorporating language variation into natural language processing approaches has gained visibility in recent years. The VarDial workshop series deals with computational methods and language resources for closely related languages, language varieties, and dialects, and has offered shared tasks on language variety identification for Romanian, German, and Uralic languages \cite{zampieri-etal-2019-report}, among others. Similarly, there have been shared tasks on Arabic dialect identification \cite{bouamor2019madar, abdulmageedetal2020nadi}. To our knowledge, however, there are no available written dialect identification corpora for Norwegian. Many successful approaches to dialect identification use linear models (\textit{e.g.}\xspace Support Vector Machines, Multinomial Naive Bayes) with word and character n-gram features \cite{wu-etal-2019-language,jauhiainen-etal-2019-discriminating}, while neural approaches often perform poorly \cite{zampieri-etal-2019-report} (see \citet{jauhiainen-etal-2019} for a full discussion). More recent uses of pretrained language models based on transformer architectures \cite{devlin-etal-2019-bert}, however, have shown promise \cite{bernier-colborne-etal-2019-improving}. Corpus-related work on Norwegian dialects has mainly focused on spoken varieties.
There are two larger corpora available for Norwegian: the newer Nordic Dialect Corpus \cite{johannessen-etal-2009-nordic}, which contains spoken data from several Nordic languages, and the Language Infrastructure made Accessible (LIA) Corpus, which in addition to Norwegian also contains S\'{a}mi language clips.\footnote{\href{https://www.hf.uio.no/iln/english/research/projects/language-infrastructure-made-accessible/}{https://www.hf.uio.no/iln/english/research/projects/language-infrastructure-made-accessible/}} There is also the \textit{Talk of Norway} Corpus \cite{lapponi-etal-2018}, which contains transcriptions of parliamentary speeches in a variety of language varieties. While these corpora contain rich dialectal information, this information is not kept in writing, as the transcriptions are normalized to Bokmål and Nynorsk. These resources are useful for working with speech technology and questions about Norwegian dialects as they are spoken, but they are likely not sufficient to answer research questions about how dialects are expressed when written. The transcriptions in these corpora also differ from written dialect sources in the sense that they are in a way truer representations of the dialects in question. In writing, dialect representations tend to focus more on a few core words, even if the actual phonetic realization of certain words could have been marked in writing. \section{Data collection} In this first round of annotations, we search for tweets containing Bokmål, Nynorsk, and Dialect terms (see Appendix \ref{appendix}), discarding tweets that are shorter than 10 tokens. The terms were collected by gathering bigram frequency lists from the written representation of the dialectal varieties in the Nordic Dialect Corpus \cite{johannessen-etal-2009-nordic}. Two native speakers annotated these tweets with four labels: \texttt{Bokmål}, \texttt{Nynorsk}, \texttt{Dialect}, and \texttt{Mixed}.
The \texttt{Mixed} class refers to tweets where there is a clear separation of dialectal and non-dialectal texts, \textit{e.g.}\xspace reported speech in \texttt{Bokmål} with comments in \texttt{Dialect}. This class can be very problematic for our classification task, as the content can be a mix of all the other three classes. We nevertheless keep it, as it still reflects one of the written representations of Norwegian. In Example \ref{ex:nordiaexample}, we show two phrases from the Nordic Dialect Corpus, from a speaker in Ballangen, Nordland county. We show it in dialectal form (a) and the Bokmål (b) transcription, but with added punctuation marks. To exemplify the two other categories we have manually translated it to Nynorsk (c) and added a mixed version (d), as well as an English translation (e) for reader comprehension. \begin{table}[] \centering \resizebox{.45\textwidth}{!}{ \begin{tabular}{lrrrrrrrrrrrr} \toprule & Bokmål & Nynorsk & Dialect & Mixed & \textbf{Total }\\ \cmidrule(lr){2-2}\cmidrule(lr){3-3}\cmidrule(lr){4-4}\cmidrule(lr){5-5}\cmidrule(lr){6-6} Train & 348 & 174 & 274 & 52 & 848 \\ Dev & 52 & 20 & 30 & 4 & 106\\ Test & 38 & 31 & 35 & 6 & 110\\ \cmidrule(lr){2-2}\cmidrule(lr){3-3}\cmidrule(lr){4-4}\cmidrule(lr){5-5}\cmidrule(lr){6-6} \textbf{Total} & 438 & 225 & 348 & 62 & 1,073 \\ \bottomrule \end{tabular} } \caption{Data statistics for the corpus, including number of tweets per split.} \label{tab:stats} \end{table} \begin{covexample} \begin{itemize} \item[(a)] Æ ha løsst å fær dit. Æ har løsst å gå på skole dær.\\ \item[(b)] Jeg har lyst å fare dit. Jeg har lyst å gå på skole der.\\ \item[(c)] Eg har lyst å fara dit. Eg har lyst å gå på skule der.\\ \item[(d)] Æ ha løsst å fær dit. Jeg har lyst å gå på skole der. \\ \item[(e)] I want to go there. I want to go to school there. \end{itemize} \label{ex:nordiaexample} \end{covexample} The two annotators doubly annotated a subset of the data in order to assess inter annotator agreement. 
On a subset of 126 tweets, they achieved a Cohen's Kappa score of 0.76, which corresponds to substantial agreement. Given the strong agreement on this subset, we did not require double annotations for the remaining tweets. Table \ref{tab:stats} shows the final distribution of tweets in the training, development, and test splits. \texttt{Bokmål} tweets are the most common, followed by \texttt{Dialect} and \texttt{Nynorsk}, and as can be seen, \texttt{Mixed} represents a smaller subset of the data. Certain traits made the annotation difficult. Many tweets, especially those written in dialect, are informal, and therefore contain more slang and spelling mistakes. For example, \textit{jeg} `I' can be misspelled as \textit{eg}, which if found in a non-Nynorsk setting could indicate dialectal variation. Spelling mistakes should not interfere with dialect identification, but as some tweets can contain as few as a single token that serves to identify the language variety as dialectal, this can cause problems. Some dialects are also quite similar to either Bokmål or Nynorsk, and speakers might switch between them when speaking or writing. Similarly, certain elements can be indicative of either a geolect or a sociolect, \textit{e.g.}\xspace the pronoun \textit{dem} `they' as the third person plural subject pronoun (\textit{de} in Bokmål and Nynorsk), which in a rural setting might be typical for an East Norwegian dialect, while in an urban setting might be a strong sociolectal indicator. Tweets with similar problems are annotated in favor of the dialect class. Additionally, there is the problem of internal variation. A tweet can belong to a radical or conservative variety of standardized Norwegian, \textit{e.g.}\xspace Riksmål, and thereby not be dialectal.
However, this distinction can be difficult to make if a writer uses forms that are now removed from the main standards (Bokmål and Nynorsk), and therefore become more marked, such as \textit{sprog} instead of \textit{språk} `language'. \section{Dialectal traits} \begin{table}[] \centering \begin{tabular}{lrlr} \toprule \multicolumn{2}{c}{Bokmål-Dialect} & \multicolumn{2}{c}{Nynorsk-Dialect} \\ \cmidrule(lr){1-2}\cmidrule(lr){3-4} `e' & 288.7 & `e' & 131.8\\ `æ' & 188.0 & `æ' & 92.5\\ `ska' & 55.0 & `ska' & 23.9\\ `hu' & 36.6 & `ei' & 18.9\\ `te' & 28.9 & `berre' & 14.5\\ (`æ', `e') & 27.5 & `hu' & 14.4\\ `ka' & 22.0 & `heilt' & 13.8\\ `mæ' & 21.6 & (`æ', `e') & 13.2\\ `går' & 19.9 & `meir' & 12.3\\ `va' & 12.4 & `mæ' & 11.9\\ \bottomrule \end{tabular} \caption{Top 10 features and $\chi^{2}$ values between Bokmål -- Dialect tweets and Nynorsk -- Dialect.} \label{tab:chisquared} \end{table} To find the most salient written dialect traits compared to Bokmål and Nynorsk, we perform a $\chi^{2}$ test \cite{Pearson1900} on the occurrence of unigrams, bigrams, and trigrams pairwise between Bokmål and Dialect, and then Nynorsk and Dialect, with a significance threshold of $p = 0.05$. The most salient features (see Table \ref{tab:chisquared}) are mainly unigrams that contain dialect features, \textit{e.g.}\xspace \textit{æ} `I', \textit{e} `am/is/are', \textit{ska} `shall/will', \textit{te} `to', \textit{mæ} `me', \textit{frå} `from', although there are also two statistically significant bigrams, \textit{e.g.}\xspace \textit{æ e} `I am', \textit{æ ska} `I will'. We notice that many of these features likely correspond to Trøndersk and Nordnorsk variants. Similar features from other dialects (\textit{i}, \textit{jæ}, \textit{je} `I') are not currently found in the corpus. This may reflect natural usage, but it is also possible that the original search query should be improved.
Example \ref{ex:dialect_example} shows an example of a \texttt{Dialect} tweet (the English translation is `Now you know how I've felt for a few years') where the dialectal words have been highlighted. \begin{covexample} \texttt{Nå vet du \trait{åssen æ} har hatt det i noen år \emoji} \label{ex:dialect_example} \end{covexample} \section{Experiments} We propose baseline experiments on an 80/10/10 split for training, development, and testing, using Multinomial Naive Bayes (MNB) and a linear SVM. As features, we use tf–idf word and character (1-5) n-gram features, with a minimum document frequency of 5 for words and 2 for characters. We use MNB with $\alpha=0.01$ and SVM with hinge loss and a regularization parameter of 0.5, and use grid search to identify the best combination of parameters and features. We also compare two Norwegian BERT models: NorBERT\footnote{\url{https://huggingface.co/ltgoslo/norbert}} \cite{KutBarVel21} and NB-BERT\footnote{\url{https://huggingface.co/NbAiLab/nb-bert-base}} \cite{Kummervold2021}, both of which use the same architecture as BERT base cased \cite{devlin-etal-2019-bert}. NorBERT uses a 28,600-entry Norwegian-specific sentence piece vocabulary and was jointly trained on 200M sentences in Bokmål and Nynorsk, while NB-BERT uses the vocabulary from multilingual BERT and is trained on 18 billion tokens from a variety of sources\footnote{See \url{https://github.com/NBAiLab/notram}.}, including historical texts, which presumably contain more examples of written dialect. We use the huggingface transformers implementation and feed the final `[CLS]' embedding to a linear layer, followed by a softmax for classification. The only hyperparameter we optimize is the number of training epochs. We use weight decay on all parameters except for the biases and layer norms, set the learning rate for AdamW \cite{loshchilov2018decoupled} to $1e-5$, and leave all other hyperparameters at default settings.
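The classical baselines can be reproduced along the following lines with scikit-learn. This is only a sketch under the stated settings: the toy tweets are invented, \texttt{min\_df} is lowered because the sample is tiny (we use 5 for words and 2 for characters on the full corpus), and \texttt{C=0.5} is one reading of the regularization value of 0.5.

```python
# Sketch of the MNB and SVM baselines with combined word and character
# tf-idf features, assuming scikit-learn; toy data stands in for the corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

features = FeatureUnion([
    ("word", TfidfVectorizer(analyzer="word", min_df=1)),                      # min_df=5 on the full corpus
    ("char", TfidfVectorizer(analyzer="char", ngram_range=(1, 5), min_df=1)),  # min_df=2 on the full corpus
])

mnb = Pipeline([("feats", features), ("clf", MultinomialNB(alpha=0.01))])
svm = Pipeline([("feats", features), ("clf", LinearSVC(loss="hinge", C=0.5))])

train_texts = ["jeg har lyst å gå på skole der",   # Bokmål
               "æ ska te byen no",                 # Dialect
               "eg har lyst å gå på skule der",    # Nynorsk
               "æ e heime no"]                     # Dialect
train_labels = ["bokmaal", "dialect", "nynorsk", "dialect"]

for model in (mnb, svm):
    model.fit(train_texts, train_labels)
    print(model.predict(["æ e på skolen no"]))
```

The character n-grams are what make one-letter dialect markers such as \textit{æ} and \textit{e} visible to the linear models, since the default word tokenizer discards them.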
We train the model for 20 epochs, and keep the model that achieves the best macro F$_1$\xspace on the dev set. \begin{table}[t] \centering \begin{tabular}{llrrr} \toprule & & Precision & Recall & F$_1$\xspace \\ \cmidrule(lr){2-2}\cmidrule(lr){3-3}\cmidrule(lr){4-4}\cmidrule(lr){5-5} \multirow{4}{*}{\rotatebox{90}{DEV}} & MNB & 0.70 & 0.67 & 0.68 \\ & SVM & 0.87 & 0.69 & 0.73\\ & NorBERT & 0.73 & 0.72 & 0.72 \\ & NB-BERT & \textbf{0.89} & \textbf{0.90} & \textbf{0.89} \\ \cmidrule(lr){2-2}\cmidrule(lr){3-3}\cmidrule(lr){4-4}\cmidrule(lr){5-5} \multirow{4}{*}{\rotatebox{90}{TEST}} & MNB & 0.60 & 0.61 & 0.60 \\ & SVM & \textbf{0.86} & 0.67 & 0.69 \\ & NorBERT & 0.73 & 0.72 & 0.72\\ & NB-BERT & 0.81 & \textbf{0.78} & \textbf{0.79} \\ \bottomrule \end{tabular} \caption{Precision, recall, and macro F$_1$\xspace for each model, on the dev and test sets.} \label{tab:results} \end{table} Table \ref{tab:results} shows the results for all models. MNB is the weakest model on both dev and test on all metrics. Despite the fact that it usually gives good results for dialect identification, it is quite clear that it does not fit our dataset. We think that this might mainly be due to the large vocabulary overlap between the datasets, especially in the \texttt{Mixed} class. SVM has the best precision on both dev (0.87) and test (0.86) and the best F$_1$\xspace on dev, while recall on each is lower (0.69/0.67). NB-BERT has the best recall on both dev and test, and is the best overall model on F$_1$\xspace (0.79), followed by NorBERT. \section{Error analysis} \begin{figure}[t] \centering \includegraphics[width=.45\textwidth]{notram_confusion.pdf} \caption{Confusion matrix of NB-BERT on Bokmål (BK), Nynorsk (NN), Dialect (DI), and Mixed (MIX). } \label{fig:confusion} \end{figure} Figure \ref{fig:confusion} shows a confusion matrix of NB-BERT's predictions on the test data. 
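A confusion matrix such as the one in Figure \ref{fig:confusion}, together with per-label scores, can be computed with scikit-learn. This is a minimal sketch: the gold and predicted label sequences below are invented and do not reproduce the model's actual output.

```python
# Minimal sketch of building the confusion matrix and per-label scores,
# assuming scikit-learn; the label sequences are invented examples.
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

labels = ["BK", "NN", "DI", "MIX"]
y_true = ["BK", "BK", "NN", "DI", "DI", "MIX", "NN", "DI"]
y_pred = ["BK", "BK", "NN", "DI", "NN", "DI", "NN", "DI"]

cm = confusion_matrix(y_true, y_pred, labels=labels)
print(cm)  # rows = gold label, columns = predicted label

# Precision/recall/F1 restricted to the Dialect class
p, r, f, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=["DI"], zero_division=0)
print(p[0], r[0], f[0])
```

Restricting \texttt{labels} to a single class is how the Dialect-only precision, recall, and F$_1$\xspace discussed below can be extracted from the same predictions.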
The main three categories (\texttt{Bokmål}, \texttt{Nynorsk}, and \texttt{Dialect}) are generally well predicted, while \texttt{Mixed} is currently the hardest category to predict. This is expected, as the \texttt{Mixed} class comprises all of the three other forms. The model has a tendency to predict \texttt{Nynorsk} or \texttt{Mixed} for \texttt{Dialect} and struggles with \texttt{Mixed}, predicting either \texttt{Bokmål} or \texttt{Dialect}. The same observations apply to the NorBERT, MNB, and SVM classifiers. Given that our main interest lies in the ability to predict future \texttt{Dialect} tweets, we compute precision, recall, and F$_1$\xspace on only this label. The NB-BERT model achieves 0.82, 0.91, and 0.86, respectively, while NorBERT follows with 0.84, 0.77, and 0.81. The SVM model achieves 0.80, 0.69, and 0.74, respectively, while MNB obtains slightly lower scores of 0.77, 0.66, and 0.71. This suggests that future experiments should consider using NB-BERT. \begin{comment} \begin{figure}[t] \centering \includegraphics[width=.4\textwidth]{svm_cm.pdf} \caption{Confusion matrix of SVM on Bokmål (BK), Nynorsk (NN), Dialect (DI), and Mixed (MIX). } \label{fig:confusionSVM} \end{figure} \end{comment} \section{Conclusion and Future Work} In this paper we have described our first annotation effort to create a corpus of dialectal variation in written Norwegian. In the future, we plan to use our trained models to expand the corpus in a semi-supervised fashion by refining our searches for tweets with dialectal traits in order to obtain a larger corpus of dialectal tweets, effectively pursuing a high-precision, low-recall path. In parallel, we will begin to download large numbers of tweets and use our trained models to automatically annotate these (low-precision, high-recall).
At the same time, we plan to perform continuous manual evaluations of small amounts of the data in order to identify a larger variety of dialectal tweets, which we will incorporate into the training data for future models. Second, we would like to annotate these dialectal tweets with their specific dialect. To avoid collecting too many tweets from overrepresented dialects, we will first annotate the current dialectal tweets with their dialect, and then perform a balanced search to find a similar number of tweets for each dialect. Finally, we would like to incorporate texts from different sources which contain rich dialectal variation, such as books, music, and poetry. \bibliographystyle{acl_natbib}
\section{Introduction} \label{intro} Let $X$ be a smooth projective variety over $\mathbb{C}$ of dimension $n$. The Chow groups $A^iX$ (of codimension $i$ algebraic cycles modulo rational equivalence) are notoriously hard to understand. For instance, the following conjecture dating from 1974 is still completely open for $i>1$: \begin{conjecture}[Hartshorne \cite{H}]\label{harts} If $Y\subset X$ is a smooth hyperplane section, restriction induces isomorphisms \[ A^iX_{\mathbb{Q}}\ \stackrel{\cong}{\to}\ A^{i}Y_{\mathbb{Q}}\] for $2i<n-1$. \end{conjecture} Since this seems a very difficult problem, in this note we try and formulate a covariant weak Lefschetz property for Chow groups and hope this is easier. To emphasize that we consider the Chow groups as a homology theory, we now switch to the notation $A_iX=A^{n-i}X$. Let $A_i^{hom}$ and $A_i^{AJ}$ denote the subgroup of homologically trivial resp. Abel--Jacobi trivial cycles. To fix ideas, let's now consider $A_0X$, the Chow group of $0$--cycles. Since $H^{2n}(X,\mathbb{Q})$ is one--dimensional, obviously \[ A_0Y_{\mathbb{Q}}\ \to\ {A_0X_{\mathbb{Q}}/ A_0^{hom}X_{\mathbb{Q}}}\] is surjective for any point $Y$ of $X$---and in particular, for a $0$--dimensional complete intersection $Y\subset X$. The next step is that (by weak Lefschetz applied to $H^{2n-1}(X,\mathbb{Q})$) \[ A_0Y_{\mathbb{Q}}\ \to\ {A_0X_{\mathbb{Q}}/ A_0^{AJ}X_{\mathbb{Q}}}\] is surjective, for any smooth complete intersection curve $Y\subset X$. Going beyond the Abel--Jacobi map, it is conjectured there is a filtration $F^\ast$ on $A_0$, of which the first two steps are $F^1=A_0^{hom}$ and $F^2=A_0^{AJ}$ (cf. \cite{J2}, \cite{Mu}, \cite{MNP}, \cite{Vo}). One can then ask: \begin{question}\label{question} Is it true that \[ A_0Y_{\mathbb{Q}}\ \to\ {A_0X_{\mathbb{Q}}/ F^{\ell+1}}\] is surjective, for any smooth complete intersection $Y\subset X$ of dimension $\ell$ ? 
\end{question} This question is motivated (pun intended) by the expectation that the quotient ${A_0X_{\mathbb{Q}}/ F^{\ell+1}}$ is determined by the cohomology groups $H^{2n}X, H^{2n-1}X, \ldots, H^{2n-\ell}X$. Since the filtration $F^\ast$ only exists conjecturally, this question is not falsifiable. However, it is expected that $F^{\ell+1}$ vanishes exactly when $H^nX,\ldots, H^{\ell+1}X$ are supported in codimension $1$. This gives the following conjecture, in which $F^\ast$ does not appear: \begin{conjecture}\label{conjecture} Let $X$ be a smooth projective variety, and suppose $H^i(X,\mathbb{Q})=N^1 H^i(X,\mathbb{Q})$ for $i\in[\ell+1,n]$. Then \[ A_0Y_{\mathbb{Q}}\ \to\ A_0X_{\mathbb{Q}}\] is surjective, for any smooth complete intersection $Y\subset X$ of dimension $\ell$. \end{conjecture} The main result of this note provides a verification of this conjecture in some special cases. As a by--product, we also get the injectivity part of conjecture \ref{harts} in these special cases: \begin{nonumbering}{(=Theorem \ref{main})} Suppose the Voisin standard conjecture (conjecture \ref{csv}) holds. Let $X$ be a smooth projective variety of dimension $n$, and suppose \item{(\rom1)} Either the motive of $X$ is finite--dimensional, or $\hbox{Griff}^n(X\times X)_{\mathbb{Q}}=0$; \item{(\rom2)} The Lefschetz standard conjecture $B(X)$ holds; \item{(\rom3)} There exists $r$ such that $H^i(X,\mathbb{Q})_{}= N^r H^i(X,\mathbb{Q})$ for all $i\in [n-r+1,n]$. Then for any codimension $r$ smooth complete intersection $Y\subset X$ of class $[Y]=L^{r}\in H^{2r}(X,\mathbb{Q})$ with $L$ ample, push--forward maps \[A_i(Y)_{\mathbb{Q}}\ \to\ A_i(X)_{\mathbb{Q}}\] are surjective for $i< r$. Moreover, restriction maps \[ A^i_{AJ}(X)_{\mathbb{Q}}\ \to\ A^i_{AJ}(Y)_{\mathbb{Q}}\] are injective for $i\le r+1$. \end{nonumbering} In certain cases some of the hypotheses are automatically satisfied, and the statement simplifies: \begin{nonumberingc}{(cf. 
Corollary \ref{nocsv})} Let $X$ be a smooth projective 3fold which is dominated by a product of curves. Suppose \[H^3(X,\mathbb{Q})=N^1 H^3(X,\mathbb{Q})\ .\] Then for any smooth ample hypersurface $Y\subset X$, the push--forward map \[ A_0(Y)_{\mathbb{Q}}\ \to\ A_0(X)_{\mathbb{Q}}\] is surjective, and \[A^2_{AJ}(X)_{\mathbb{Q}}\ \to\ A^2_{AJ}(Y)_{\mathbb{Q}}\] is injective. \end{nonumberingc} \begin{nonumberingc}{(=Corollary \ref{nocsv2})} Let $X$ be a product of smooth projective surfaces \[ X=S_1\times\cdots\times S_m\ ,\] where each $S_j$ is either a $K3$ surface of Picard number $19$ or $20$, or has $A_0^{AJ}(S_j)_{\mathbb{Q}}=0$. Suppose at least one $S_j$ has $A_0^{AJ}(S_j)_{\mathbb{Q}}=0$. Then for any smooth ample hypersurface $Y\subset X$, the push--forward map \[ A_0(Y)_{\mathbb{Q}}\ \to\ A_0(X)_{\mathbb{Q}}\] is surjective, and \[A^2_{AJ}(X)_{\mathbb{Q}}\ \to\ A^2_{AJ}(Y)_{\mathbb{Q}}\] is injective. \end{nonumberingc} It was already known that in situations like these two corollaries, $A_0X_{\mathbb{Q}}$ is supported on {\sl some\/} divisor (this follows for instance from \cite[Theorem 3.32]{Vo}); thus, our only contribution is the precision that any ample hypersurface does the job. The injectivity statement, on the other hand, seems to be genuinely new: as far as we know, these are the first examples of varieties with non--trivial $A^2_{AJ}$ for which this injectivity is known to hold.\footnote{This is not strictly true: indeed, \cite[Corollary 5]{Fu} gives non--trivial examples of varieties where the injectivity part of conjecture \ref{harts} is verified.} The proof of the theorem is an easy exercise in using the meccano of correspondences; the only ``deep'' ingredient is Kimura's nilpotence theorem \cite{Kim3}. We end this introduction with a challenge. As is well--known \cite{BS}, the hypothesis of conjecture \ref{conjecture} is verified when $A_0X_{\mathbb{Q}}$ is supported in dimension $\ell$.
This gives the following special case of conjecture \ref{conjecture}: \begin{conjecture}\label{support} Let $X$ be a smooth projective variety, and suppose $A_0X_{\mathbb{Q}}$ is supported on a closed subvariety of dimension $\ell$. Then any smooth complete intersection $Y\subset X$ of dimension $\ell$ supports $A_0X_{\mathbb{Q}}$. \end{conjecture} This is true for $\ell\le 1$, but for $\ell>1$ I have no idea how to prove this... \begin{convention} In this note, the word {\sl variety\/} refers to a quasi--projective algebraic variety over $\mathbb{C}$. A {\sl subvariety\/} will be a (possibly reducible) reduced subscheme which is equidimensional. The Chow group of $i$--dimensional cycles on $X$ is denoted $A_iX$; for $X$ smooth of dimension $n$ the notations $A_iX$ and $A^{n-i}X$ will be used interchangeably. The Griffiths group $\hbox{Griff}_i$ is the group of $i$--dimensional cycles that are homologically trivial modulo algebraic equivalence. In diagrams, we will sometimes write $H^jX$ or $H_jX$ to designate singular cohomology $H^j(X,\mathbb{Q})$ resp. Borel--Moore homology $H_j(X,\mathbb{Q})$. \end{convention} \section{Preliminary} \begin{definition}[Coniveau filtration \cite{BO}] Let $X$ be a quasi--projective variety. The coniveau filtration on cohomology and on homology is defined as \[\begin{split} N^c H^i(X,\mathbb{Q})&= \sum \hbox{Im}\bigl( H^i_Y(X,\mathbb{Q})\to H^i(X,\mathbb{Q})\bigr)\ ;\\ N_c H_i(X,\mathbb{Q})&=\sum \hbox{Im} \bigl( H_i(Z,\mathbb{Q})\to H_i(X,\mathbb{Q})\bigr)\ ,\\ \end{split}\] where $Y$ runs over codimension $\ge c$ subvarieties of $X$, and $Z$ over dimension $\le c$ subvarieties. \end{definition} We recall the statement of the ``Voisin standard conjecture'': \begin{conjecture}[Voisin standard conjecture \cite{V0}]\label{csv} Let $X$ be a smooth projective variety, and $Y\subset X$ closed with complement $U$. 
Then the natural sequence \[ N_i H_{2i}(Y,\mathbb{Q})\to N_i H_{2i}(X,\mathbb{Q})\to N_i H_{2i}(U,\mathbb{Q})\to 0\] is exact for any $i$. \end{conjecture} \begin{remark} Hodge theory gives an exact sequence \[ \hbox{Gr}^W_{-2i} H_{2i}Y\cap F^{-i}\to H_{2i}X\cap F^{-i}\to \hbox{Gr}^W_{-2i} H_{2i}U\cap F^{-i}\to 0\ ,\] where $W$ denotes Deligne's weight filtration, and $F$ the Hodge filtration on $H_\ast(-,\mathbb{C})$. Hence if the Hodge conjecture (that is, its homology version for singular varieties \cite{J}) is true, then conjecture \ref{csv} is true. What's more, this conjecture fits in very neatly with the classical standard conjectures: Voisin shows that conjecture \ref{csv} plus the algebraicity of the K\"unneth components of the diagonal is equivalent to the Lefschetz standard conjecture \cite[Proposition 1.6]{V0}. \end{remark} \begin{remark}\label{csvtrue} Conjecture \ref{csv} is obviously true for $i\le 1$ (this follows from the truth of Hodge conjecture for curve classes), and for $i\ge \dim Y-1$ (where it follows from the Hodge conjecture for divisors). \end{remark} The main ingredient used in this note is Kimura's nilpotence theorem: \begin{theorem}[Kimura \cite{Kim3}]\label{nilp} Let $X$ be a smooth projective variety of dimension $n$ with finite--dimensional motive. Let $\Gamma\in A^n(X\times X)_{\mathbb{Q}}$ be a correspondence which is homologically trivial. Then there is $N\in\mathbb{N}$ such that \[ \Gamma^{\circ N}=0\ \ \ \ \in A^n(X\times X)_{\mathbb{Q}}\ .\] \end{theorem} \begin{remark}\label{examples} We refer to \cite{Kim3}, \cite{An}, \cite{MNP} for the definition of finite--dimensional motive. Conjecturally, any variety has finite--dimensional motive \cite{Kim3}. 
What mainly concerns us in the scope of this note, is that there are quite a few examples which are known to have finite--dimensional motive: varieties dominated by products of curves \cite{Kim3}, $K3$ surfaces with Picard number $19$ or $20$ \cite{P}, any surface with vanishing geometric genus for which Bloch's conjecture has been verified \cite[Theorem 2.11]{GP}, 3folds with nef tangent bundle \cite{I}, certain 3folds of general type \cite[Section 8]{Vial}. \end{remark} There is also the following nilpotence result, which predates Kimura's theorem: \begin{theorem}[Voisin \cite{V9}, Voevodsky \cite{Voe}]\label{VV} Let $X$ be a smooth projective algebraic variety of dimension $n$, and $\Gamma\in A^n(X\times X)_{\mathbb{Q}}$ a correspondence which is algebraically trivial. Then there is $N\in\mathbb{N}$ such that \[ \Gamma^{\circ N}=0\ \ \ \ \in A^n(X\times X)_{\mathbb{Q}}\ .\] \end{theorem} \section{Main} We now proceed with the proof of the main result of this note: \begin{theorem}\label{main} Suppose the Voisin standard conjecture holds. Let $X$ be a smooth projective variety of dimension $n$, and suppose \item{(\rom1)} Either the motive of $X$ is finite--dimensional, or $\hbox{Griff}^n(X\times X)_{\mathbb{Q}}=0$; \item{(\rom2)} The Lefschetz standard conjecture $B(X)$ holds; \item{(\rom3)} $H^i(X,\mathbb{Q})_{}= N^r H^i(X,\mathbb{Q})$ for all $i\in [n-r+1,n]$. Then for any codimension $r$ smooth complete intersection $Y\subset X$ of class $[Y]=L^{r}\in H^{2r}(X,\mathbb{Q})$ with $L$ ample, push--forward maps \[A_i(Y)_{\mathbb{Q}}\ \to\ A_i(X)_{\mathbb{Q}}\] are surjective for $i< r$. Moreover, restriction maps \[ A^i_{AJ}(X)_{\mathbb{Q}}\ \to\ A^i_{AJ}(Y)_{\mathbb{Q}}\] are injective for $i\le r+1$. 
\end{theorem} In certain cases, some of the hypotheses can be removed: \begin{corollary}\label{nocsv} Let $X$ be a smooth projective variety of dimension $n\le 3$, and suppose \item{(\rom1)} Either the motive of $X$ is finite--dimensional, or $\hbox{Griff}^n(X\times X)_{\mathbb{Q}}=0$; \item{(\rom2)} $H^n(X,\mathbb{Q})= N^1H^n(X,\mathbb{Q})$. Then for any smooth ample hypersurface $Y\subset X$, push--forward maps \[A_0(Y)_{\mathbb{Q}}\ \to\ A_0(X)_{\mathbb{Q}}\] are surjective, and restriction \[ A^2_{AJ}(X)_{\mathbb{Q}}\ \to\ A^2_{AJ}(Y)_{\mathbb{Q}}\] is injective. \end{corollary} \begin{corollary}\label{nocsv2} Let $X$ be a product of smooth projective surfaces \[ X=S_1\times\cdots\times S_m\ ,\] where each $S_j$ is either a $K3$ surface of Picard number $19$ or $20$, or has $A_0^{AJ}(S_j)_{\mathbb{Q}}=0$. \item{(\rom1)} Suppose at least one $S_j$ has $A_0^{AJ}(S_j)_{\mathbb{Q}}=0$. Then for any smooth ample hypersurface $Y\subset X$, \[ A_0(Y)_{\mathbb{Q}}\ \to\ A_0(X)_{\mathbb{Q}}\] is surjective, and \[A^2_{AJ}(X)_{\mathbb{Q}}\ \to\ A^2_{AJ}(Y)_{\mathbb{Q}}\] is injective. \item{(\rom2)} Suppose there are at least $4$ surfaces $S_j$ with $A_0^{AJ}(S_j)_{\mathbb{Q}}=0$. Let $Y\subset X$ be a codimension $2$ complete intersection of class $[Y]=L^2\in H^4(X,\mathbb{Q})$ with $L$ ample. Then \[ A_i(Y)_{\mathbb{Q}}\ \to\ A_i(X)_{\mathbb{Q}}\] is surjective for $i\le 1$, and \[A^i_{AJ}(X)_{\mathbb{Q}}\ \to\ A^i_{AJ}(Y)_{\mathbb{Q}}\] is injective for $i\le 3$. \end{corollary} \begin{proof}(of theorem \ref{main}) Let $\tau\colon Y\hookrightarrow X$ be a smooth complete intersection of class $L^{r}$ as in the statement of the theorem. Let \[ L^j\colon H^i(X,\mathbb{Q})\to H^{i+2j}(X,\mathbb{Q})\] denote the result of cupping with a power of $L$; we use the same notation $L^j$ for the correspondence inducing this action.
Since $B(X)$ is true, for any $i<n$ there exists a correspondence $C_i\in A^{i}(X\times X)_{\mathbb{Q}}$ inducing an isomorphism \[ (C_i)_\ast\colon H^{2n-i}(X,\mathbb{Q})\ \stackrel{\cong}{\to}\ H^i(X,\mathbb{Q})\] that is inverse to $L^{n-i}$. $B(X)$ being true, the K\"unneth components $\pi_i$ of the diagonal of $X$ are algebraic \cite{K}. Since $B(X)$ implies $B(Y)$ \cite{K}, the same holds for the K\"unneth components $\pi_i^Y$ of $Y$. We now proceed to relate them: \begin{lemma} For each $i\le n-r$, define \[ \Pi_i:=(C_i)\circ(L^{n-i-r})\circ ((\tau\times\tau)_\ast (\pi_i^Y))\ \ \in A^{n}(X\times X,\mathbb{Q})\ .\] Then for each $i\le n-r$, we have equality \[ \Pi_i=\pi_i\ \ \in H^{2n}(X\times X,\mathbb{Q})\ .\] \end{lemma} \begin{proof} We consider the action on $H^j(X,\mathbb{Q})$. There is a factorization \[ \begin{array}[c]{ccccccc} H^jX& \xrightarrow{((\tau\times\tau)_\ast (\pi_i^Y))_\ast}& H^{j+2r}X&\xrightarrow{L^{n-i-r}}& H^{2n-2i+j}X&\xrightarrow{(C_i)_\ast}&H^jX\\ \downarrow&&\uparrow &&&&\\ H^jY&\xrightarrow{(\pi_i^Y)_\ast}& H^jY &&&&\\ \end{array}\] Hence, if $j\not=i$ then \[ (\Pi_i)_\ast H^jX=0\ ,\] and for $j=i$ we have \[ \Pi_i=(C_i)\circ(L^{n-i-r})\circ(L^r)=\hbox{id}\ \ \colon H^iX\to H^iX\ .\] It follows that for any variety $Z$, the action of $\Pi_i$ on $H^j(X\times Z)$ is projection on $H^iX\otimes H^{j-i}Z$; thus by Manin's identity principle, $\Pi_i$ and $\pi_i$ coincide as homological correspondences. 
\end{proof} \begin{lemma}\label{noaction} For each $i\le n-r$, and each $j<r$, we have \[ (\Pi_i)_\ast A_jX_{\mathbb{Q}}=0\ .\] \end{lemma} \begin{proof} For any correspondence $C\in A^{n-r}(Y\times Y)_{\mathbb{Q}}$, there is a factorization \[\begin{array}[c]{ccc} A_jX_{\mathbb{Q}}&\xrightarrow{((\tau\times\tau)_\ast C)_\ast}& A_{j-r}X_{\mathbb{Q}}\\ \downarrow&&\uparrow\\ A_{j-r}Y_{\mathbb{Q}}&\xrightarrow{C_\ast}&A_{j-r}Y_{\mathbb{Q}}\\ \end{array}\] In particular, taking $C=\pi_i^Y$, we see that the action of $(\tau\times\tau)_\ast (\pi_i^Y)$ on $A_jX_{\mathbb{Q}}$ factors over $A_{j-r}Y_{\mathbb{Q}}$, hence is $0$ for $j<r$. \end{proof} \begin{lemma}\label{supported} Let ${}^t\Pi_i$ denote the transpose of $\Pi_i$. For each $i\le n-r$, and each $j$, we have \[ ({}^t\Pi_i)_\ast A_jX_{\mathbb{Q}}\subset \hbox{Im}\bigl( A_jY_{\mathbb{Q}}\to A_jX_{\mathbb{Q}}\bigr)\ .\] Moreover, for each $j\le r+1$, we have \[ ({}^t\Pi_i)_\ast A^j_{AJ}X_{\mathbb{Q}}=0\ .\] \end{lemma} \begin{proof} It is immediate from the definition that \[ {}^t\Pi_i={} ((\tau\times\tau)_\ast ({}^t \pi_i^Y))\circ {}^t(L^{n-i-r})\circ{}^tC_i\ \ \in A^n(X\times X)_{\mathbb{Q}}\ .\] Using the same diagram as in the proof of lemma \ref{noaction}, one can find a factorization \[\begin{array}[c]{ccccc} A_jX_{\mathbb{Q}}&\ \ \xrightarrow{({}^t(L^{n-i-r})\circ{}^tC_i)_\ast}\ \ \ & A_{j+r}X_{\mathbb{Q}}&\xrightarrow{{}((\tau\times\tau)_\ast ({}^t\pi_i^Y))_\ast}& A_{j}X_{\mathbb{Q}}\\ && \downarrow&&\uparrow\\ && A_jY_{\mathbb{Q}}&\xrightarrow{{}^t(\pi_i^Y)_\ast}&A_jY_{\mathbb{Q}}\ ,\\ \end{array}\] and the lemma is proven. \end{proof} By hypothesis (\rom3), we have \[H^i(X,\mathbb{Q})=N^r H^i(X,\mathbb{Q})\ \ \forall n-r<i\le n\ .\] Applying hard Lefschetz, one finds \[ H^i(X,\mathbb{Q})=N^r H^i(X,\mathbb{Q})\ \ \forall n-r<i<n+r\ .\] This means that in the range $n-r<i<n+r$, the K\"unneth component $\pi_i$ is supported in codimension $r$.
That is, there exists a subvariety $Z\subset X$ of codimension $r$, such that for each $n-r<i<n+r$, $\pi_i$ goes to $0$ under the restriction \[ H^{2n}(X\times X,\mathbb{Q})\ \to\ H^{2n}((X\times X)\setminus (Z\times Z),\mathbb{Q})\ .\] Using the Voisin standard conjecture (conjecture \ref{csv}), this implies the existence of an algebraic cycle $P^\prime_i\in A_n(Z\times Z)_{\mathbb{Q}}$ such that (denoting by $P_i$ the push--forward of $P_i^\prime$ to $X\times X$) we have \[ P_i=\pi_i\ \ \in H^{2n}(X\times X,\mathbb{Q})\ \ \forall n-r<i<n+r\ .\] \begin{lemma}\label{noaction2} For any $i\in [n-r+1,n+r-1]$, and any $j<r$, we have \[ (P_i)_\ast A_jX_{\mathbb{Q}}=0\ .\] Moreover, for any $j\le r+1$, we have \[ (P_i)_\ast A^j_{AJ}X_{\mathbb{Q}}=0\ .\] \end{lemma} \begin{proof} Let $\psi\colon Z\to X$ denote the inclusion, so $P_i=(\psi\times\psi)_\ast (P_i^\prime)$. Similar to lemma \ref{noaction}, there is a factorization \[\begin{array}[c]{ccc} A_jX_{\mathbb{Q}}&\xrightarrow{(P_i)_\ast}& A_{j}X_{\mathbb{Q}}\\ \downarrow&&\uparrow\\ A_{j-r}Z_{\mathbb{Q}}&\xrightarrow{(P_i^\prime)_\ast}&A_{j}Z_{\mathbb{Q}}\ .\\ \end{array}\] That is, the action of $P_i$ in the indicated range factors over groups that vanish for dimension reasons and the lemma follows. \end{proof} Putting together the various parts, we find a decomposition of the diagonal \[ \Delta= \sum_{i=0}^{n-r} \Pi_i + \sum_{i={n-r+1}}^{n+r-1} P_i+\sum_{i=0}^{n-r} {}^t \Pi_i\ \ \in H^{2n}(X\times X,\mathbb{Q})\ .\] This is an equality of cycles modulo homological equivalence.
Now, applying Kimura's nilpotence theorem (theorem \ref{nilp}), we get that there exists $N$ such that \[ \Bigl( \Delta - \sum_{i=0}^{n-r} \Pi_i - \sum_{i={n-r+1}}^{n+r-1} P_i-\sum_{i=0}^{n-r} {}^t \Pi_i \Bigr)^{\circ N}=0\ \ \in A^n(X\times X)_{\mathbb{Q}}\ .\] Developing this expression (and noting that $\Delta^{\circ N}=\Delta$, since $\Delta$ is the unit for composition of correspondences), we find \[ \Delta=\sum_j Q_j\ \ \in A^n(X\times X)_{\mathbb{Q}}\ ,\] where each $Q_j$ is a composition of elements $\Pi_\ell$ and $P_{\ell^\prime}$ and ${}^t\Pi_{\ell^{\prime\prime}}$. Let $Q_j^0$ denote the ``tail element'' of $Q_j$, i.e. we write \[ Q_j= Q_j^0\circ Q_j^1\circ\cdots\circ Q_j^{N^\prime}\ \ \in A^n(X\times X)_{\mathbb{Q}}\ ,\] with $Q_j^0\not=\Delta$ (so that $N^\prime\le N$). Let's consider the action of $Q_j$ on $A_iX_{\mathbb{Q}}$, for $i< r$: If $Q_j^0$ is a $\Pi_\ell$ (for some $\ell\in[0,n-r]$), it follows from lemma \ref{noaction} that \[(Q_j)_\ast \bigl(A_iX_{\mathbb{Q}}\bigr)=0\ .\] Likewise, if $Q_j^0$ is of the form $P_\ell$ (for some $n-r<\ell<n+r$), then applying lemma \ref{noaction2}, we find again \[ (Q_j)_\ast \bigl(A_iX_{\mathbb{Q}}\bigr)=0\ .\] Finally, if $Q_j^0$ is of the form ${}^t\Pi_\ell$ (for some $\ell\in[0,n-r]$), it follows from lemma \ref{supported} that \[ (Q_j)_\ast \bigl(A_iX_{\mathbb{Q}}\bigr)\subset \hbox{Im}\bigl( A_iY_{\mathbb{Q}}\to A_iX_{\mathbb{Q}}\bigr)\ .\] Since $\Delta$ acts as the identity, we conclude that for $i< r$, push--forward \[A_iY_{\mathbb{Q}}\ \to\ A_iX_{\mathbb{Q}}\] is surjective. The argument for the injectivity statement is similar: we consider the action of $\Delta=\sum_j Q_j$ on $A^i_{AJ}X_{\mathbb{Q}}$ for $i\le r+1$. If $Q_j$ is such that its ``head'' $Q_j^{N^\prime}$ is of type ${}^t\Pi_\ell$ or $P_\ell$, then $Q_j$ does not act (by lemma \ref{supported} resp. lemma \ref{noaction2}).
It follows that we can write \[ A^i_{AJ}X_{\mathbb{Q}}=\Delta_\ast A^i_{AJ}X_{\mathbb{Q}}= \bigl(\sum \hbox{something}\circ (\tau\times\tau)_\ast (\hbox{something})\bigr)_\ast A^i_{AJ}X_{\mathbb{Q}}\ ;\] the injectivity is then obvious. Finally, if the hypothesis in (\rom1) of the theorem is that \[\hbox{Griff}^n(X\times X)_{\mathbb{Q}}=0\ ,\] the proof goes as follows: the decomposition of $\Delta$ is now an equality modulo algebraic equivalence (since by hypothesis, algebraic and homological equivalence coincide on $X\times X$). Then, instead of applying Kimura's theorem, we apply the Voisin/Voevodsky nilpotence theorem (theorem \ref{VV}). The rest of the proof is verbatim the same. \end{proof} \begin{proof} (of corollary \ref{nocsv}) In case $n=2$, we know $B(X)$ holds since it holds for any surface \cite{K}. The Voisin standard conjecture is used to get that some Hodge classes in $H_4(Z\times Z,\mathbb{Q})$ are algebraic, where $\dim Z=1$; this is trivially true. Next, the case $n=3$. Under the hypothesis $H^3X=N^1H^3X$, $X$ is ``motivated by a surface'' in the sense of \cite{A}, so $B(X)$ is known to hold \cite{A}. The Voisin standard conjecture is only used to get that some Hodge classes in $H_6(Z\times Z,\mathbb{Q})$ are algebraic, where $\dim Z=2$; this is OK by the Hodge conjecture for divisors (remark \ref{csvtrue}). \end{proof} \begin{proof} (of corollary \ref{nocsv2}) As we noted in remark \ref{examples}, it follows from work of Pedrini \cite{P} and Guletski\u{\i}--Pedrini \cite{GP} that the $S_j$ have finite--dimensional motive. Hence $X$ has finite--dimensional motive. We also know $B(X)$ is true since the Lefschetz standard conjecture is true for all surfaces \cite{K}.
In case (\rom1), since there is at least one surface with $H^2(S_j,\mathbb{Q})=N^1H^2(S_j,\mathbb{Q})$, we obviously have \[ H^{2m}(X,\mathbb{Q})=N^1 H^{2m}(X,\mathbb{Q})\ .\] The corollary now follows from theorem \ref{main}; note that we don't need to assume the Voisin standard conjecture, since we can find cycles $P_i^\prime$ by using the Hodge conjecture on the surfaces with vanishing geometric genus. In case (\rom2), the assumptions imply \[ \begin{split} H^{2m}(X,\mathbb{Q})&=N^2 H^{2m}(X,\mathbb{Q})\ ;\\ H^{2m-1}(X,\mathbb{Q})&=N^2 H^{2m-1}(X,\mathbb{Q})\ ,\\ \end{split} \] and we again apply theorem \ref{main}. \end{proof} \begin{remark} The hypothesis on $\hbox{Griff}^n(X\times X)$ in theorem \ref{main} is mainly of theoretical interest, and not practically useful. Indeed, there are precise conjectures predicting when Griffiths groups should vanish \cite{J3}; for instance, if $X$ is a 4fold with $h^{2,0}=h^{4,0}=h^{3,0}=h^{2,1}=0$, \cite[Corollary 6.8]{J3} implies that if the Bloch--Beilinson conjectures are true then \[ \hbox{Griff}^4(X\times X)_{\mathbb{Q}}=0\ .\] Unfortunately, no non--trivial examples seem to be known. Specifically, I am not aware of any example of a variety $X$ of dimension $n$ that satisfies $\hbox{Griff}^n(X\times X)_{\mathbb{Q}}=0$, but not $A^i_{AJ}X_{\mathbb{Q}}=0$ for all $i$. \end{remark} \begin{remark} In \cite{Lat}, I study a certain hard Lefschetz property for Chow groups. Using arguments similar to the present note, this hard Lefschetz property can be proven in some special cases \cite{Lat}. \end{remark} \begin{acknowledgements} This note was written while preparing for the Strasbourg ``groupe de travail'' based on the monograph \cite{Vo}. I wish to thank all the participants of this groupe de travail for the very pleasant and stimulating atmosphere. \end{acknowledgements}
\section{Introduction} \label{intro} The discovery of a Higgs boson at the Large Hadron Collider (LHC) heralds the completion of the standard model (SM)~\cite{Aad:2012tfa,Chatrchyan:2012xdj} and a great hope for the discovery of new physics. Obviously, the completion of the SM naturally leads the quest for microscopic structure to its next chapter, which will be further pursued at the LHC \cite{Morrissey:2009tf}. In the long list of questions which might be the key to the next chapter, a few are especially interesting and crucial. For example, what is the dynamics of electroweak (EW) symmetry breaking, what is the origin of neutrino masses \cite{Mohapatra:2006gs}, how are the parity and CP symmetries broken, and what is the nature of dark matter and dark energy \cite{Sahni:2004ai}, etc. Answering these questions has been motivating various new physics models beyond the SM (BSM) at the TeV scale. In the history of the early universe, from the Planck time to today, phase transitions might have occurred when symmetries at different energy scales were broken. For example, the symmetry breaking of a grand unified theory (GUT) and supersymmetry (SUSY) breaking can induce the corresponding phase transitions at the GUT scale and the SUSY breaking scale. In new physics beyond the SM, new dynamics and a larger symmetry are usually introduced at the TeV region or a higher energy scale. Such new physics models are of special interest, as they might accommodate baryogenesis and thus explain the matter-antimatter asymmetry observed in the universe \cite{Cohen:1993nk,Rubakov:1996vz,Trodden:1998ym,Morrissey:2012db}. Furthermore, some of these new physics models are within the reach of the LHC and the future high-energy colliders, such as the International Linear Collider (ILC)~\cite{Baer:2013cma}, Circular Electron-Positron Collider (CEPC)~\cite{CEPC-SPPCStudyGroup:2015csa}, Future Circular Collider (FCC-hh)~\cite{FCC-hh} and Super Proton-Proton Collider (SPPC)~\cite{Tang:2015qga}.
A first-order phase transition (FOPT) can fulfil one of Sakharov's conditions for successful baryogenesis~\cite{Sakharov:1967dj}. One byproduct of a strong FOPT is a sizeable production of gravitational waves (GWs). The production of GWs involves three physical processes~\cite{Cai:2017cbj}: bubble collisions~\cite{Kosowsky:1991ua, Kosowsky:1992vn, Huber:2008hg, Kosowsky:1992rz, Kamionkowski:1993fg, Caprini:2007xq}, acoustic wave production~\cite{Hindmarsh:2013xza, Giblin:2013kea, Giblin:2014qia, Hindmarsh:2015qta}, and chaotic magnetohydrodynamic (MHD) turbulence~\cite{Caprini:2006jb, Kahniashvili:2008pf, Kahniashvili:2008pe, Kahniashvili:2009mf, Caprini:2009yp}. In the non-runaway scenario, acoustic wave production is the dominant source. The strong FOPTs caused by new physics can produce a significant magnitude of GWs \cite{Grojean:2006bp,Ellis:2019oqb}, which can be probed by the proposed GW experiments TianQin~\cite{Luo:2015ght}, Taiji~\cite{Guo:2018npi}, LISA~\cite{Audley:2017drz, Cornish:2018dyw}, ALIA~\cite{Gong:2014mca}, MAGIS~\cite{Coleman:2018ozp}, DECIGO~\cite{Musha:2017usi}, BBO~\cite{Corbin:2005ny}, Cosmic Explorer (CE)~\cite{Evans:2016mbw}, Einstein Telescope (ET)~\cite{Punturo:2010zz}, aLIGO~\cite{LIGOScientific:2019vkc} and aLIGO+~\cite{aLIGO+}. Since the successful detection of GWs produced by the merger of two massive objects~\cite{Abbott:2016blz, TheLIGOScientific:2017qsa}, direct GW detection has been established as a novel method to probe the early universe. Furthermore, the direct detection of thermal GWs becomes accessible to probe phase transitions of the early universe in the multi-messenger era \cite{Meszaros:2019xej}. Compared to the chirp-like GW signals from the merger of massive objects, which have clear sources and exist only for a short period, the thermal GW signal is continuous, isotropic, and lasts for a very long time.
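For orientation, the acoustic (sound-wave) contribution to the stochastic background is commonly described by a numerical fit of the form (see e.g.~\cite{Hindmarsh:2015qta}; the normalization quoted here is the standard one from the literature and is only meant to illustrate the parametric dependence)
\begin{eqnarray}
h^2 \Omega_{\rm sw}(f) \ \simeq \ 2.65\times 10^{-6} \left(\frac{H_\ast}{\beta}\right) \left(\frac{\kappa_v \alpha}{1+\alpha}\right)^2 \left(\frac{100}{g_\ast}\right)^{1/3} v_w\, S_{\rm sw}(f) \,,
\end{eqnarray}
where $\alpha$ characterizes the strength of the phase transition, $\beta/H_\ast$ is its inverse duration in Hubble units, $v_w$ is the bubble wall velocity, $\kappa_v$ is the fraction of released vacuum energy converted into bulk fluid motion, $g_\ast$ is the number of relativistic degrees of freedom, and $S_{\rm sw}(f)$ is a spectral shape function whose peak frequency is set by $\beta/H_\ast$ and the transition temperature $T_\ast$.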
Generally speaking, its peak frequencies are intimately related to the dynamics of the phase transition \cite{Dev:2016feu,Weir:2017wfa}. This opens up an active and interesting line of research: exploring phase transitions of new physics beyond the SM at the TeV scale and the corresponding signals at colliders and GW detectors. For example, such a study has been conducted in the effective field theory approach \cite{Huang:2016odd,Huang:2016cjm}. The conditions for a strong FOPT in new physics beyond the SM can be more easily realized when the Higgs sector includes more scalars \cite{Ivanov:2017dad}. For example, there are works on a singlet extension of the SM~\cite{Vaskonen:2016yiu,Beniwal:2017eik,Alves:2018jsw,Chen:2019ebq} or more than one singlet extension~\cite{Kakizaki:2015wua, Hashino:2016rvx,Hashino:2016xoj,Kang:2017mkl}, two-Higgs-doublet models (2HDMs)~\cite{Cline:1996mga,Basler:2016obg,Dorsch:2016nrg,Huang:2017rzf} or other doublet extensions~\cite{Wang:2019pet,Paul:2020wbz}, models with triplet extension~\cite{Chala:2018opy}, SUSY models~\cite{Apreda:2001us,Huber:2015znp,Huber:2007vva,Demidov:2017lzf}, composite models~\cite{Chala:2016ykx,Bruggisser:2018mrt,Bian:2019kmg} and walking technicolor models~\cite{Jarvinen:2009mh, Chen:2017cyc,Miura:2018dsy}, twin Higgs models~\cite{Fujikura:2018duw}, the Pati-Salam model~\cite{Croon:2018kqn,Huang:2020bbe}, the left-right SU(4) model \cite{Fornal:2018dqn,Fornal:2020ngq} motivated by the B physics anomalies, the Georgi-Machacek model~\cite{Zhou:2018zli}, axion or axion-like particle models~\cite{Dev:2019njv, DelleRose:2019pgi, Ghoshal:2020vud}, extra dimensional models~\cite{Yu:2019jlb}, models with a charged singlet~\cite{Ahriche:2018rao}, seesaw models~\cite{Brdar:2018num}, models with hidden sectors~\cite{Espinosa:2008kw, Croon:2018erz, Fairbairn:2019xog} and dark matter (DM) models~\cite{Jaeckel:2016jlh, Bird:2016dcv,Beniwal:2018hyi,Bertone:2019irm,Huang:2020mso,Ghosh:2020ipy}, etc.
These models reveal that a strong FOPT can produce GW signatures near or above the EW scale~\cite{Dev:2016feu,Weir:2017wfa}. Among various new physics candidates, besides interpreting the EW symmetry breaking via the Higgs mechanism, the minimal left-right symmetric model (LRSM)~\cite{Pati:1974yy, Mohapatra:1974gc, Senjanovic:1975rk} offers an elegant solution to some key fundamental questions in or beyond the SM, such as parity violation/restoration, CP violation, and the generation of tiny neutrino masses at the TeV-scale, which are among the focuses of experimental searches of new physics at the high-energy colliders and high-precision experiments. In this work, we examine phase transitions in the LRSM and the resultant features of the corresponding GWs. Compared with the recent study \cite{Brdar:2019fur}, the new aspects of this paper are the following: (i) we have implemented the correct EW vacuum conditions \cite{Chauhan:2019fji} and set $\alpha_2 = 0$ ($\alpha_2$ is a quartic coupling in the scalar potential Eq.~(\ref{eqn:potential})), (ii) we have taken into account more recent LHC experimental bounds, which are collected in Table~\ref{table:bounds} and Fig.~\ref{fig:spectra}, (iii) we have found more general parameter space where a strong FOPT can occur and detectable GWs can be produced, and (iv) we have also explored the complementarity of GW probes of the LRSM and the direct searches of the heavy (or light long-lived) particles in the LRSM at the high-energy colliders, and examined how the self-couplings of the SM Higgs can be affected in the LRSM. With all the theoretical and experimental limits taken into consideration, it is found that a strong FOPT at the right-handed scale $v_R$ in the LRSM favors relatively small quartic and neutrino Yukawa couplings, which correspond to relatively light BSM scalars and right-handed neutrinos (RHNs), as seen in Figs.~\ref{figvT}, \ref{fig:random1} and \ref{fig:random2}.
The scatter plot in Fig.~\ref{GWpeak} reveals that the phase transition in the LRSM can generate GW signals with strengths of $10^{-17}$ to $10^{-12}$, with a frequency ranging from 0.1 to 10 Hz, which can be probed by the experiments BBO and DECIGO, or even by ALIA and MAGIS. The GW spectra for five benchmark points (BPs) are demonstrated in Fig.~\ref{fig:GWcurves}, which shows that the GW signal strength and frequency are very sensitive to the value of $\rho_1$. Although some other quartic and neutrino Yukawa couplings are very important for the GW production, the quartic coupling $\rho_1$ plays the most crucial role, and it also determines the mass of the $SU(2)_R$-breaking scalar $H_3^0$. In the parameter space where it does not mix with other scalars, the scalar $H_3^0$ couples only to the heavy scalars, gauge bosons and RHNs in the LRSM~\cite{Dev:2016dja}, which makes it effectively a singlet-like particle, and thus the experimental limits on it are very weak~\cite{Dev:2016vle,Dev:2017dui}. As presented in Fig.~\ref{fig:complementarity}, the GW probe of $H_3^0$ is largely complementary to the direct searches of $H_3^0$ at the high-energy colliders~\cite{Dev:2016dja} as well as the searches for it as a long-lived particle (LLP) at the high-energy frontier~\cite{Dev:2016vle,Dev:2017dui}. In addition, in a sizeable region of parameter space, the strong FOPT and GWs are sensitive to a large quartic coupling $\lambda_{hhhh}$ of the SM-like Higgs, which is potentially accessible at a future high-energy muon collider~\cite{Chiesa:2020awd}. The rest of the paper is organized as follows. In Section~\ref{sec:lrsm} we briefly review the minimal LRSM and summarize the main existing experimental and theoretical constraints on the BSM particles in this model. Phase transitions are explored in Section~\ref{sec:phasetransition}, and the GW production is presented in Section~\ref{sec:GW}.
Section~\ref{sec:complementarity} focuses on the complementarity of the GW probes and the collider signals of the LRSM. After some discussions, we conclude in Section~\ref{sec:conclusion}. For the sake of completeness, the masses and thermal self-energies are collected in Appendix~\ref{appendix:masses}, and the conditions for vacuum stability and the correct vacuum are given in Appendix~\ref{appendix:vacuum}. \section{A brief review of left-right symmetric models} \label{sec:lrsm} \subsection{Left-right symmetric model} \label{sec:model} The basic idea of LRSMs is to extend the EW sector $SU(2)_L \times U(1)_Y$ of the SM gauge group to a left-right symmetric one, i.e. $SU(2)_L \times SU(2)_R \times U(1)_{B-L}$. Various LRSMs have been proposed to understand parity and CP violation in the SM, the origin of the masses of matter fields, possible DM candidates, and the matter-antimatter asymmetry of the universe. The main differences between these LRSMs could be in the gauge structure, the scalar fields, the matter contents, and/or the seesaw mechanisms. The most popular, or conventional, LRSM is the version with a Higgs bidoublet $\Phi$, a left-handed triplet $\Delta_L$ and a right-handed triplet $\Delta_R$~\cite{Pati:1974yy, Mohapatra:1974gc, Senjanovic:1975rk} \begin{eqnarray} \Phi = \left(\begin{array}{cc}\phi^0_1 & \phi^+_2\\\phi^-_1 & \phi^0_2\end{array}\right) , \; && \Delta_L = \left(\begin{array}{cc}\Delta^+_L/\sqrt{2} & \Delta^{++}_L\\\Delta^0_L & -\Delta^+_L/\sqrt{2}\end{array}\right), \;\;\; \Delta_R = \left(\begin{array}{cc}\Delta^+_R/\sqrt{2} & \Delta^{++}_R\\\Delta^0_R & -\Delta^+_R/\sqrt{2}\end{array}\right). \label{eq:scalar} \end{eqnarray} When the right-handed triplet $\Delta_R$ acquires a vacuum expectation value (VEV) $v_R$, the gauge symmetry $SU(2)_L \times SU(2)_R \times U(1)_{B-L}$ in the LRSM is broken to the SM gauge group $SU(2)_L \times U(1)_{Y}$.
The two triplets $\Delta_L$ and $\Delta_R$ are introduced to give Majorana masses to the active neutrinos and RHNs, respectively, which enables the type-I~\cite{Minkowski:1977sc, Mohapatra:1979ia, Yanagida:1979as, GellMann:1980vs, Glashow:1979nm} and type-II~\cite{Mohapatra:1980yp, Magg:1980ut, Schechter:1980gr, Cheng:1980qt, Lazarides:1980nt} seesaw mechanisms for the tiny neutrino masses. The $SU(2)_R \times U(1)_{B-L}$ symmetry can also be broken only by a right-handed doublet $H_R$~\cite{Babu:1988mw,Babu:1989rb}. In this case, heavy vector-like fermions have to be introduced to generate the SM quark and lepton masses via a seesaw mechanism (see also~\cite{Mohapatra:2014qva}). There are also LRSM scenarios with inverse seesaw~\cite{Mohapatra:1986aw,Mohapatra:1986bd}, linear seesaw~\cite{Akhmedov:1995ip,Malinsky:2005bi}, or extended seesaw~\cite{Gavela:2009cd,Barry:2011wb,Zhang:2011vh,Dev:2012sg} in the literature. Cold DM is not included in the conventional LRSM (a light RHN can only be a warm DM candidate \cite{Nemevsek:2012cd}), but it is easy to add a fermion or boson multiplet, whose lightest neutral component is naturally stabilized by the residual $Z_2$ symmetry from $U(1)_{B-L}$ breaking~\cite{Heeck:2015qra, Garcia-Cely:2015quu}. Alternatively, based on the gauge group $SU(2)_L \times SU(2)_R \times U(1)_{Y_L} \times U(1)_{Y_R}$ (with $Y_L$ the hypercharge in the SM and $Y_R$ its ``right-handed'' counterpart), heavy RHNs can be the cold DM candidate~\cite{Dev:2016qbd, Dev:2016xcp, Dev:2016qeb}. In this work, we focus on the minimal LRSM with one bidoublet $\Phi$ and two triplets $\Delta_L$ and $\Delta_R$ in the scalar sector.
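For reference, under the gauge group $SU(2)_L \times SU(2)_R \times U(1)_{B-L}$ the scalar fields above carry the standard quantum numbers
\begin{eqnarray}
\Phi \ \sim \ ({\bf 2},\, {\bf 2},\, 0) \,, \qquad \Delta_L \ \sim \ ({\bf 3},\, {\bf 1},\, 2) \,, \qquad \Delta_R \ \sim \ ({\bf 1},\, {\bf 3},\, 2) \,,
\end{eqnarray}
so that $\Delta_{L,R}$ can couple to the lepton bilinears and generate the Majorana masses, while $\Phi$ connects the left- and right-handed fermion doublets and generates the Dirac masses.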
The most general scalar potential in the LRSM can be written as~\cite{Deshpande:1990ip} \begin{eqnarray} \label{eqn:potential} {\cal V} &=& -\mu_{1}^{2} \operatorname{Tr}[\Phi^{\dagger} \Phi]-\mu_{2}^{2}\left(\operatorname{Tr}[\tilde{\Phi} \Phi^{\dagger}]+\operatorname{Tr}[\tilde{\Phi}^{\dagger} \Phi]\right)-\mu_{3}^{2}\left(\operatorname{Tr}[\Delta_{L} \Delta_{L}^{\dagger}]+\operatorname{Tr}[\Delta_{R} \Delta_{R}^{\dagger}]\right) \nonumber \\ && +\rho_{1}\left(\operatorname{Tr}[\Delta_{L} \Delta_{L}^{\dagger}]^{2}+\operatorname{Tr}[\Delta_{R} \Delta_{R}^{\dagger}]^{2}\right)+\rho_{2}\left(\operatorname{Tr}[\Delta_{L} \Delta_{L}] \operatorname{Tr}[\Delta_{L}^{\dagger} \Delta_{L}^{\dagger}]+\operatorname{Tr}[\Delta_{R} \Delta_{R}] \operatorname{Tr}[\Delta_{R}^{\dagger} \Delta_{R}^{\dagger}]\right) \nonumber \\ && +\rho_{3} \operatorname{Tr}[\Delta_{L} \Delta_{L}^{\dagger}] \operatorname{Tr}[\Delta_{R} \Delta_{R}^{\dagger}]+\rho_{4}\left(\operatorname{Tr}[\Delta_{L} \Delta_{L}] \operatorname{Tr}[\Delta_{R}^{\dagger} \Delta_{R}^{\dagger}]+\operatorname{Tr}[\Delta_{L}^{\dagger} \Delta_{L}^{\dagger}] \operatorname{Tr}[\Delta_{R} \Delta_{R}]\right) \nonumber \\ && +\lambda_{1} \operatorname{Tr}[\Phi^{\dagger} \Phi]^{2}+\lambda_{2}\left(\operatorname{Tr}[\tilde{\Phi} \Phi^{\dagger}]^{2}+\operatorname{Tr}[\tilde{\Phi}^{\dagger} \Phi]^{2}\right) \nonumber \\ && +\lambda_{3} \operatorname{Tr}[\tilde{\Phi} \Phi^{\dagger}] \operatorname{Tr}[\tilde{\Phi}^{\dagger} \Phi]+\lambda_{4} \operatorname{Tr}[\Phi^{\dagger} \Phi]\left(\operatorname{Tr}[\tilde{\Phi} \Phi^{\dagger}]+\operatorname{Tr}[\tilde{\Phi}^{\dagger} \Phi]\right)\nonumber \\ && +\alpha_{1} \operatorname{Tr}[\Phi^{\dagger} \Phi]\left(\operatorname{Tr}[\Delta_{L} \Delta_{L}^{\dagger}]+\operatorname{Tr}[\Delta_{R} \Delta_{R}^{\dagger}]\right)+\alpha_{3}\left(\operatorname{Tr}[\Phi \Phi^{\dagger} \Delta_{L} \Delta_{L}^{\dagger}]+\operatorname{Tr}[\Phi^{\dagger} \Phi \Delta_{R} \Delta_{R}^{\dagger}]\right)\nonumber \\ && 
+\left[ \alpha_{2} e^{i \delta} \left(\operatorname{Tr}[\Delta_{L} \Delta_{L}^{\dagger}] \operatorname{Tr}[\tilde{\Phi} \Phi^{\dagger}]+\operatorname{Tr}[\Delta_{R} \Delta_{R}^{\dagger}] \operatorname{Tr}[\tilde{\Phi}^{\dagger} \Phi] \right )+\mathrm{H.c.} \right] \nonumber \\ && +\beta_{1}\left(\operatorname{Tr}[\Phi \Delta_{R} \Phi^{\dagger} \Delta_{L}^{\dagger}]+\operatorname{Tr}[\Phi^{\dagger} \Delta_{L} \Phi \Delta_{R}^{\dagger}]\right)+\beta_{2}\left(\operatorname{Tr}[\tilde{\Phi} \Delta_{R} \Phi^{\dagger} \Delta_{L}^{\dagger}]+\operatorname{Tr}[\tilde{\Phi}^{\dagger} \Delta_{L} \Phi \Delta_{R}^{\dagger}]\right) \nonumber \\ && +\beta_{3}\left(\operatorname{Tr}[\Phi \Delta_{R} \tilde{\Phi}^{\dagger} \Delta_{L}^{\dagger}]+\operatorname{Tr}[\Phi^{\dagger} \Delta_{L} \tilde{\Phi} \Delta_{R}^{\dagger}]\right), \label{pot} \end{eqnarray} where $\tilde{\Phi} = \sigma_2 \Phi^\ast \sigma_2$ (with $\sigma_2$ the second Pauli matrix). Required by left-right symmetry, all the quartic couplings in the potential above are real parameters. The CP violating phase $\delta$ associated with $\alpha_2$ is shown explicitly. At the zero temperature, the neutral components of the scalar fields can develop non-zero VEVs, i.e. \begin{equation} \label{eqn:vev} \langle \Phi \rangle=\frac{1}{\sqrt{2}}\left( \begin{array}{cc} \kappa_1 & 0 \\0 & \kappa_2 e^{i \theta_\kappa}\end{array} \right),\quad \langle \Delta_{L} \rangle=\frac{1}{\sqrt{2}}\left( \begin{array}{cc} 0 & 0 \\v_L e^{i \theta_L}& 0 \end{array} \right),\quad \langle \Delta_{R} \rangle=\frac{1}{\sqrt{2}}\left( \begin{array}{cc} 0 & 0 \\v_R & 0 \end{array} \right) \,, \end{equation} where $\theta_\kappa$ and $\theta_L$ are CP violating phases. The two bidoublet VEVs are related to the EW VEV $v_{\rm EW} \simeq (\sqrt 2 G_F)^{-1/2}\simeq 246$ GeV (with $G_F$ the Fermi constant) via $\sqrt{\kappa_1^2+\kappa_2^2}= v_{\rm EW}$. 
In light of the hierarchy of top and bottom quark masses $m_b \ll m_t$ in the SM, it is a reasonable assumption that $\kappa_2 \ll \kappa_1$~\cite{Deshpande:1990ip}. There are three key energy scales in the LRSM, i.e. the right-handed scale $v_R$, the EW scale $v_{\rm EW}$ and the scale $v_{L}$, which is relevant to the tiny active neutrino masses via the type-II seesaw. Furthermore, from the first derivatives of the scalar potential~(\ref{eqn:potential}), $v_L$ is related to the EW and right-handed VEVs via~\cite{Mohapatra:1980yp, Deshpande:1990ip,Kiers:2005gh} \begin{eqnarray} v_L = \frac{v_{\rm EW}^2/v_R}{ (1 + \xi^2 ) (2 \rho_1 - \rho_3) }\left[ \beta_1 \xi \cos(\alpha - \theta_L) + \beta_2 \cos \theta_L + \beta_3 \xi^2 \cos(2 \alpha - \theta_L )\right] \,, \end{eqnarray} where $\xi = \kappa_2/\kappa_1$. Due to the tiny masses of active neutrinos, it is a good approximation to set $v_L=0$; we therefore set $\beta_i=0$ so as to simplify our discussions below. With $v_L = 0$, there are only two energy scales in the LRSM, i.e. the EW scale $v_{\rm EW}$ and the right-handed scale $v_R$. In light of the hierarchy structure $v_{\rm EW} \ll v_R$, a two-step phase transition is expected to occur in the LRSM. In the early universe, the temperature is so high ($T\gg v_R$) that the symmetry $SU(2)_L \times SU(2)_R \times U(1)_{B-L}$ is restored. As the universe keeps expanding, the temperature decreases. When the temperature is lower than a critical temperature but still much higher than the EW scale, i.e. $v_{\rm EW} \ll T \sim v_R$, $\Delta_R^0$ develops a non-vanishing VEV and the gauge symmetry $SU(2)_L \times SU(2)_R \times U(1)_{B-L}$ is spontaneously broken to $SU(2)_L \times U(1)_Y$. When the temperature becomes lower than the EW scale, $T \sim v_{\rm EW}$, the neutral bidoublet components $\phi_{1,2}^0$ obtain their VEVs and the symmetry is further broken into the electromagnetic (EM) group $U(1)_{\rm EM}$.
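Schematically, the two-step symmetry breaking pattern described above reads
\begin{eqnarray}
SU(2)_L \times SU(2)_R \times U(1)_{B-L} \ \xrightarrow{\ v_R\ } \ SU(2)_L \times U(1)_Y \ \xrightarrow{\ v_{\rm EW}\ } \ U(1)_{\rm EM} \,.
\end{eqnarray}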
After symmetry breaking at the $v_R$ scale, we can rewrite the bidoublet $\Phi$ in terms of two $SU(2)_L$ doublets, i.e. $\Phi = \big( i\sigma_2 H_1^\ast | H_2 \big)$. Then the bidoublet-relevant terms in the potential (\ref{eqn:potential}) can be recast in terms of $H_{1,\,2}$: \begin{align} \label{eqn:potential2} {\cal V} (\Phi) \ \supset \ &-m_{11}^2H_1^{\dagger}H_1+m_{22}^2H_2^{\dagger}H_2-m_{12}^2(H_1^{\dagger}H_2+\text{H.c.}) \nonumber \\ &+\lambda_1(H_1^{\dagger}H_1)^2+\lambda_1(H_2^{\dagger}H_2)^2+2\lambda_1H_1^{\dagger}H_1H_2^{\dagger}H_2+4\lambda_3 H_1^{\dagger}H_2 H_2^{\dagger}H_1 \nonumber \\ &+[4\lambda_2(H_1^{\dagger}H_2)^2+2\lambda_4(H_1^{\dagger}H_1+H_2^{\dagger}H_2)H_1^{\dagger}H_2+\text{H.c.}] \,, \end{align} where the mass terms are respectively \begin{eqnarray} \label{eqn:m11} m_{11}^2& \ = \ & -\frac{\alpha_{3}}{2} \frac{\kappa_{2}^{2} v_{R}^{2}}{\kappa_{1}^{2}-\kappa_{2}^{2}}+\lambda_{1} v_{\rm EW}^2 +2 \lambda_{4} \kappa_{1} \kappa_{2} \,, \\ \label{eqn:m22} m_{22}^2& \ = \ & -m_{11}^2+\frac{\alpha_3}{2}v_R^2 \,, \\ \label{eqn:m12} m_{12}^2& \ = \ & \frac{\alpha_3}{2}\frac{\kappa_1\kappa_2v_R^2}{\kappa_{1}^{2}-\kappa_{2}^{2}}+2(2\lambda_2+\lambda_3)\kappa_1\kappa_2+\lambda_4 v_{\rm EW}^2 \,. \end{eqnarray} Although the potential in Eq.~(\ref{eqn:potential2}) seems to be very similar to that in a general 2HDM~\cite{Branco:2011iw}, there are still some obvious differences: In the presence of the scale $v_R$, all the states predominately from the heavy doublet $H_2$ are at the $v_R$ scale, and their masses are degenerate at leading order, which is clearly distinct from the 2HDMs, where all the scalars are at the EW scale and the BSM scalar masses depend on different quartic couplings~\cite{Branco:2011iw}.
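To make the leading-order degeneracy explicit: in the limit $\xi = \kappa_2/\kappa_1 \to 0$, Eq.~(\ref{eqn:m11}) gives $m_{11}^2 = {\cal O}(v_{\rm EW}^2)$, so that Eq.~(\ref{eqn:m22}) yields
\begin{eqnarray}
m_{22}^2 \ = \ \frac{\alpha_3}{2}\, v_R^2 \left[ 1 + {\cal O}\left( \frac{v_{\rm EW}^2}{v_R^2} \right) \right] ,
\end{eqnarray}
i.e. all the states residing predominately in the heavy doublet $H_2$ share the common mass $\sqrt{\alpha_3/2}\, v_R$ at leading order, with splittings suppressed by $v_{\rm EW}^2/v_R^2$.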
In the LRSM, the BSM particles include the heavy $W_R$ and $Z_R$ bosons, three RHNs $N_i$ (with $i= 1,\, 2,\,3$), the neutral CP-even scalar $H_1^0$, CP-odd scalar $A_1^0$ and singly-charged scalar $H_1^\pm$ predominantly from the bidoublet $\Phi$, the neutral CP-even scalar $H_2^0$, CP-odd scalar $A_2^0$, singly-charged scalar $H_2^\pm$ and doubly-charged scalar $H_1^{\pm\pm}$ mostly from the left-handed triplet $\Delta_L$, and the neutral CP-even scalar $H_3^0$ and doubly-charged scalar $H_2^{\pm\pm}$ mostly from the right-handed triplet $\Delta_R$. Thorough studies of the scalar sector of the LRSM at future high-energy colliders can be found e.g. in Refs.~\cite{Gunion:1989in, Deshpande:1990ip, Polak:1991vf, Barenboim:2001vu, Azuelos:2004mwa, Zhang:2007da, Jung:2008pz, Bambhaniya:2013wza, Dutta:2014dba, Bambhaniya:2014cia, Bambhaniya:2015ipg, Maiezza:2015lza, Bambhaniya:2015wna, Bonilla:2016fqd, Maiezza:2016ybz, Maiezza:2016bzp, Nemevsek:2016enw, Chakrabortty:2016wkl, Dev:2016dja, Dev:2016vle, Dev:2017dui, Cao:2017rjr, Dev:2018foq, Dev:2018kpa, Chauhan:2019fji}. In this paper, we assume that the gauge coupling $g_R$ for $SU(2)_R$ can be different from the gauge coupling $g_L$ for $SU(2)_L$, which might originate from renormalization group running effects such as in the $D$-parity breaking LRSM versions~\cite{Chang:1983fu}. \subsection{Theoretical Constraints} \label{sec:theoretical} For completeness, we collect all the theoretical constraints on the gauge and scalar sectors of the LRSM in the literature, which will be taken into consideration in the calculations of phase transition and GW production below. \begin{itemize} \item {\it Perturbativity limits}: In some versions of the LRSM, the right-handed gauge coupling $g_R$ can be different from $g_L$~\cite{Chang:1983fu}.
As the gauge couplings have the relationship (with $g_{BL}$ the gauge coupling for $U(1)_{B-L}$) \begin{eqnarray} \frac{1}{e^2} \ = \ \frac{1}{g_L^2} + \frac{1}{g_Y^2} \ = \ \frac{1}{g_L^2} + \frac{1}{g_R^2} + \frac{1}{g_{BL}^2} \,, \end{eqnarray} the gauge couplings $g_R$ and $g_{BL}$ can be neither too large nor too small if they are to remain perturbative. Renormalization group running of these gauge couplings up to a higher energy scale puts more stringent limits on them. Perturbativity up to the GUT scale requires the ratio $r_g \equiv g_R / g_L$ to satisfy~\cite{Chauhan:2018uuy}\footnote{Note that the perturbativity limits in Ref.~\cite{Chauhan:2018uuy} are on the LRSM without the left-handed triplet $\Delta_L$ at the TeV-scale. In the presence of $\Delta_L$ at the TeV-scale, the perturbativity limits should be to some extent different. As an approximation we will adopt the limits from Ref.~\cite{Chauhan:2018uuy}.} \begin{eqnarray} 0.65 < r_g < 1.60 \,. \end{eqnarray} Furthermore, as the masses $\sqrt{\alpha_3/2} v_R$ of $H_1^0$, $A_1^0$ and $H_1^\pm$ (cf. Table~\ref{tab:mass} in Appendix~\ref{appendix:masses}) are severely constrained by the neutral meson mixings (see Section~\ref{sec:experimental} and Table~\ref{table:bounds}), perturbativity also implies a lower bound on the $v_R$ scale~\cite{Chauhan:2018uuy}: \begin{eqnarray} v_R \gtrsim 10\, {\rm TeV} \,. \end{eqnarray} For $v_R$ below this value, $\alpha_3$ is so large that all the quartic and gauge couplings will hit the Landau pole very quickly before reaching the GUT or Planck scale~\cite{Rothstein:1990qx, Chakrabortty:2013zja, Chakrabortty:2016wkl, Maiezza:2016ybz}. \item {\it Unitarity conditions}: The parameters in the potential (\ref{eqn:potential}) should satisfy the unitarity conditions~\cite{Chakrabortty:2016wkl} when we consider the scattering amplitudes of the scalar fields at the high-energy scale $\sqrt{s} \gg \mu_i$ (for simplicity we neglect here the effects of all the scalar masses).
In other words, the partial wave amplitudes should not violate the unitarity bound, so that probability is conserved. The tree-level unitarity conditions turn out to be~\cite{Chakrabortty:2016wkl} \begin{eqnarray} && \lambda_{1,\,4} < \frac{4 \pi}{3} \,, \quad \lambda_1 + 4 \lambda_2 + 2 \lambda_3 < 4 \pi \,, \quad \lambda_1 - 4 \lambda_2 + 2 \lambda_3 < \frac{4 \pi}{3} \,, \nonumber \\ && \rho_1 < \frac{4 \pi}{3} \,, \quad \rho_1 + \rho_2 < 2 \pi \,, \quad \rho_{2,\,4} < 2 \sqrt{2} \pi \,, \quad \rho_3 < 8 \pi \,, \nonumber \\ && \alpha_1 < 8 \pi \,, \quad \alpha_2 < 4 \pi \,, \quad \alpha_1 + \alpha_3 < 8 \pi \,. \end{eqnarray} \item {\it Vacuum stability conditions}: The vacuum stability conditions require that \cite{Chakrabortty:2013zja, Chakrabortty:2013mha, Chakrabortty:2016wkl} (see also~\cite{Kannike:2016fmd}) \begin{eqnarray} \lambda_1 > 0\,\,, \quad \rho_1 > 0\,\,, \quad \rho_1 + \rho_2 > 0\,\,, \quad \rho_1 + 2 \rho_2 > 0\,\,. \end{eqnarray} \item {\it Correct vacuum criteria}: After the spontaneous symmetry breaking, all the scalar fields have to form some specific structure in the phase space such that we reside in the correct vacuum, i.e. the vacuum with the lowest value of the potential~\cite{Dev:2018foq, Chauhan:2019fji}. For completeness, the correct vacuum criteria have been collected in Appendix~\ref{appendix:vacuum}, which are obtained with the assumption $\alpha_2=0$. Therefore, we will set $\alpha_2=0$ throughout this paper. In the limit of $\kappa_2 \ll \kappa_1 \ll v_R$, the quadratic coefficient of the $H_2$ term in Eq.~(\ref{eqn:m22}) is proportional to $\alpha_{3}v_R^2/2$; thus the heavy doublet scalars $H_1^0$, $A_1^0$, $H_1^\pm$ will obtain a mass of $\sqrt{\alpha_3/2}\, v_R$ at the leading order. To get the correct EW vacuum, a necessary condition is $m_{11}^2>0$, i.e.
\begin{equation} -\frac{\alpha_{3}}{2} \frac{\kappa_{2}^{2} v_{R}^{2}}{\kappa_{1}^{2}-\kappa_{2}^{2}}+\lambda_{1} v_{\rm EW}^2 +2 \lambda_{4} \kappa_{1} \kappa_{2}>0. \end{equation} This yields an upper bound on $\xi$. Approximately, we have \begin{eqnarray} \label{eqn:xi} \xi & \ \lesssim \ & \frac{\sqrt{\lambda_1} v_{\rm EW}}{M_{H_1^0}} \nonumber \\ & \ \lesssim \ & 8.9 \times 10^{-3} \left( \frac{\lambda_1}{0.13} \right)^{1/2} \left( \frac{M_{H_1^0}}{10\,{\rm TeV}} \right)^{-1} \,. \end{eqnarray} \end{itemize} \subsection{Experimental Constraints} \label{sec:experimental} All the current LHC limits on the BSM particles in the LRSM are collected in Table~\ref{table:bounds} and also depicted in Fig.~\ref{fig:spectra}. Here are more details: \begin{itemize} \item At the LHC, the $W_R$ boson in the LRSM can be produced via the right-handed charged quark currents. After its production, it can decay predominantly into two quark jets (including the $\bar{t}b$ channel) and RHNs plus a charged lepton, i.e. $W_R \to jj,\, \bar{t}b,\, N_i^{(\ast)} \ell_\alpha$ (with $\alpha = e,\,\mu,\,\tau$). If the RHNs are lighter than the $W_R$ boson, as a result of the Majorana nature of RHNs, the same-sign dilepton plus jets signature $W_R \to N \ell \to \ell_\alpha \ell_\beta j j$ constitutes a smoking-gun signal of the $W_R$ boson~\cite{Keung:1983uu}. Assuming $g_R = g_L$, the current most stringent LHC data require the $W_R$ mass to satisfy $m_{W_R}> (3.8 - 5)$ TeV for a RHN mass $100 \, {\rm GeV} < m_{N} < 1.8$ TeV~\cite{Aaboud:2018spl, Aaboud:2019wfg}. The dijet~\cite{Aad:2019hjw,Sirunyan:2019vgj} and $\bar{t}b$~\cite{Sirunyan:2017ukk,Sirunyan:2017vkm} limits are relatively weaker, respectively 4 TeV and 3.4 TeV. The strongest $W_R$ limit of $(3.8 - 5)$ TeV is presented in Fig.~\ref{fig:spectra}. \item The most stringent limits on the $Z_R$ boson are from the dilepton data $pp \to Z_R \to \ell^+ \ell^-$.
The current dilepton limit on a sequential $Z'$ boson is 5.1 TeV~\cite{Aad:2019fac}. Following e.g. Ref.~\cite{Chauhan:2018uuy}, one can rescale the production cross section times branching fraction $\sigma (pp \to Z' \to \ell^+ \ell^-)$ for the sequential $Z'$ model, which leads to the LHC dilepton limit of 4.82 TeV on the $Z_R$ boson in the LRSM. This is shown in Fig.~\ref{fig:spectra} as the $Z_R$ limit. There are also dijet searches for the $Z'$ boson; however, the corresponding limits are relatively weaker~\cite{Aad:2019hjw,Sirunyan:2019vgj}. \item At the leading order, the scalars $H_2^0$, $A_2^0$, $H_2^\pm$ and $H_1^{\pm\pm}$ from the left-handed triplet $\Delta_L$ have the same mass~\cite{Zhang:2007da} (see Table~\ref{tab:mass}). The doubly-charged scalar $H_1^{\pm\pm}$ can decay into either same-sign dileptons or same-sign $W$ bosons, i.e. $H_1^{\pm\pm} \to \ell_\alpha^\pm \ell_\beta^\pm,\, W^\pm W^\pm$, which constitute the most promising channels to probe $\Delta_L$ at the LHC; the branching fractions ${\rm BR} (H_1^{\pm\pm} \to \ell_\alpha^\pm \ell_\beta^\pm)$ and ${\rm BR} (H_1^{\pm\pm} \to W^\pm W^\pm)$ depend on the Yukawa coupling $f_L$ and the left-handed triplet VEV $v_L$. Assuming $H_1^{\pm\pm}$ decays predominantly into electrons and muons, the current LHC limits are around 770 to 870 GeV, depending on the flavor structure~\cite{Aaboud:2017qph}. In the di-tau channel $H_1^{\pm\pm} \to \tau^\pm \tau^\pm$, the LHC limit is relatively weaker, i.e. 535 GeV~\cite{CMS:2017pet}.\footnote{ As the singly-charged scalar $H_2^{\pm}$ and doubly-charged scalar $H_1^{\pm\pm}$ are mass degenerate at the leading order in the LRSM, here we have adopted the combined LHC limit from the pair production $pp \to H_1^{++} H_1^{--}$ and the associated production $pp \to H_1^{\pm\pm} H_2^{\mp}$.
The separate limits in these two channels are respectively 396 GeV and 479 GeV~\cite{CMS:2017pet}.} If the doubly-charged scalar $H_1^{\pm\pm}$ decays predominantly into same-sign $W$ bosons, the LHC limits are much weaker, around 200 to 220 GeV~\cite{Aaboud:2018qcu}. There are also some searches for the singly-charged scalar $H_2^{\pm} \to \tau^\pm \nu$ at the LHC~\cite{Aaboud:2016dig, Aaboud:2018gjj, Sirunyan:2019hkq}. However, these searches assume that $H_2^\pm$ is produced from its interaction with top and bottom quarks; therefore these limits are not applicable to $H_2^{\pm}$ in the LRSM, which does not couple directly to the SM quarks. The strongest same-sign dilepton limits of $(535 - 870)$ GeV on $H_1^{\pm\pm}$ (and also on other scalars from $\Delta_L$) are shown in Fig.~\ref{fig:spectra}. \item As the $W_R$ boson is very heavy, the TeV-scale right-handed doubly-charged scalar $H_2^{\pm\pm}$ decays only into same-sign dileptons. The couplings of $H_2^{\pm\pm}$ to the photon and $Z$ boson have opposite signs; therefore the production cross section of $H_2^{\pm\pm}$ at the LHC is smaller than that for the left-handed doubly-charged scalar $H_1^{\pm\pm}$. Rescaling the LHC13 cross section of $H_1^{\pm\pm}$ by a factor of 1/2.4, the same-sign dilepton limits on $H_2^{\pm\pm}$ turn out to be 271 to 760 GeV for all the six combinations $ ee,\, e\mu,\, \mu\mu,\, e\tau,\, \mu\tau,\, \tau\tau$ of lepton flavors, which are presented in Fig.~\ref{fig:spectra}. \item The scalars $H_1^0$, $A_1^0$ and $H_1^\pm$ from the bidoublet $\Phi$ are degenerate in mass at the leading order. $H_1^0$ and $A_1^0$ have tree-level flavor-changing neutral-current (FCNC) couplings to the SM quarks, and contribute significantly to $K-\overline{K}$, $B_d-\overline{B}_d$ and $B_s-\overline{B}_s$ mixings.
As a result, their masses are required to be at least $(10 - 25)$ TeV, depending on the nature of left-right symmetry (either generalized parity or generalized charge conjugation), the hadronic uncertainties~\cite{Ecker:1983uh, Zhang:2007da, Maiezza:2010ic, Bertolini:2014sua} and the potentially large QCD corrections~\cite{Bernard:2015boz}. The stringent FCNC limits on the heavy bidoublet scalars are shown in Fig.~\ref{fig:spectra}. \item The neutral scalar $H_3^0$ from the right-handed triplet $\Delta_R$ is hadrophobic, i.e. it does not couple directly to the SM quarks in the Lagrangian. It can be produced at the LHC and future higher energy colliders either in the scalar portal through coupling to the SM Higgs (and the heavy scalars $H_1^0$ and $A_1^0$), or in the gauge portal via coupling to the $W_R$ and $Z_R$ bosons. Therefore the {\it direct} LHC limits are very weak~\cite{Dev:2016vle,Dev:2017dui}. However, when it is sufficiently light, say at the GeV-scale, $H_3^0$ can be produced from (invisible) decay of the SM Higgs or even from the meson decays~\cite{Dev:2016vle,Dev:2017dui}. More details can be found in Section~\ref{sec:H3}. \item The RHNs in the LRSM can be either very light, e.g. at the keV scale to be a warm dark matter (DM) candidate~\cite{Nemevsek:2012cd}, or very heavy at the $v_R$ scale, and there are almost no laboratory limits on their masses, although their mixings with the active neutrinos are tightly constrained in some regions of the parameter space~\cite{Bolton:2019pcu}. For simplicity, in the following sections we will set the masses of RHNs to be free parameters and neglect their mixings with the active neutrinos. \end{itemize} \begin{center} \begin{table} \begin{center} \caption{Current most stringent experimental limits on the masses of $W_R$, $Z_R$, $H_1^{\pm\pm}$, $H_2^{\pm\pm}$, and $H_1^0$, $A_1^0$ in the LRSM. The particles in parentheses are mass degenerate with them, if any. See text for more details.
\vspace{5pt} \label{table:bounds}} \begin{tabular}{c|ccc} \hline\hline Particle & Channel & Lower Limit & References \\ \hline \multirow{3}{*}{$W_R$} & $\ell\ell jj$ & $3.8 - 5.0$ TeV & \cite{Aaboud:2019wfg, Aaboud:2018spl} \\ & $jj$ & $4.0$ TeV & \cite{Aad:2019hjw,Sirunyan:2019vgj} \\ & $t\bar{b}$ & $3.4$ TeV & \cite{Sirunyan:2017ukk,Sirunyan:2017vkm} \\ \hline $Z_R$ & $\ell^+\ell^-$ & $4.8$ TeV & \cite{Aad:2019fac} \\ \hline $H_1^{\pm\pm}$ & $\ell_\alpha^{\pm} \ell_\beta^{\pm}$ & $535 - 870$ GeV & \cite{Aaboud:2017qph, CMS:2017pet} \\ ($H_2^0$, $A_2^0$, $H_2^\pm$) & $W^{\pm}W^{\pm}$ & $200 - 220$ GeV & \cite{Aaboud:2018qcu} \\ \hline $H_2^{\pm\pm}$ & $\ell_\alpha^{\pm} \ell_\beta^{\pm}$ & $271 - 760$ GeV & \cite{Aaboud:2017qph} \\ \hline $H_1^0$, $A_1^0$ ($H_1^{\pm}$) & meson mixing & 10 - 25 TeV & \cite{Ecker:1983uh, Zhang:2007da, Maiezza:2010ic, Bertolini:2014sua} \\ \hline \hline \end{tabular} \end{center} \end{table} \end{center} \begin{figure}[!t] \centering \includegraphics[width=0.75\textwidth]{mass.pdf} \caption{Experimental limits on the scalars and gauge bosons in Table~\ref{table:bounds}, indicated by the blue and pink arrows, with the heights of the horizontal lines denoting the ranges of experimental limits. The horizontal black lines are the masses of SM Higgs $h$, top quark $t$, and $W$, $Z$ bosons. \label{fig:spectra} } \end{figure} To be complete, the masses of 100 GeV scale SM particles, i.e. the SM Higgs $h$, the top quark $t$ and the $W$ and $Z$ bosons, are depicted in Fig.~\ref{fig:spectra} as horizontal black lines. See Fig.~\ref{fig:cmbspectra} for complementarity of GW prospects of the BSM particle masses and the current experimental limit. \section{Phase transition in LRSM} \label{sec:phasetransition} \subsection{One-loop effective potential} To study phase transitions in the LRSM, we consider the effective potential at finite temperature, which includes contributions of the one-loop corrections and daisy resummations. 
Renormalized in the $\overline{\text{MS}}$ scheme, the effective potential can be cast into the following form \cite{Basler:2018cwe} \begin{eqnarray} \label{eqn:Veff} {\cal V}_{\rm eff}(\phi_i,v) & \ = \ & V_0(\phi_i,v)+V_1^{T=0}(\phi_i,v)+V_1^{T \neq 0}(\phi_i,v)+V_D(\phi_i,v) \nonumber \\ & \ = \ & V_0(\phi_i,v)+\frac{1}{64 \pi^2}\sum_{i} g_i m_i^4(\phi_i,v) \left(\log\frac{m_i^2(\phi_i,v)}{\mu^2}-C_i \right) \nonumber \\ && +\frac{T^4}{2 \pi^2}\sum_{i} g_i J_{\pm}\left(\frac{m_i^2(\phi_i,v)}{T^2} \right) \nonumber \\ && -\frac{T}{12\pi}\sum_{i={\rm bosons}} \left[\left( m_i^2(\phi_i,v)+\Pi_i(T)\right)^{3/2} - \left(m_i^2(\phi_i,v) \right)^{3/2} \right], \end{eqnarray} where $V_0(\phi_i,v)$ is the tree-level potential, $V_1^{T=0}$ is the Coleman-Weinberg one-loop effective potential~\cite{Coleman:1973jx}, and $V_1^{T\neq 0}$ and $V_D$ are the thermal contributions at finite temperature. The $V_1^{T\neq0}$ term includes only the one-loop contributions, and $V_D$ denotes the higher-order contributions from daisy diagrams. In Eq.~(\ref{eqn:Veff}) the sum runs over all the particles in the model. The scalar mass matrices $m_i^2(\kappa_i,v_R)$ in the LRSM can be found in Ref.~\cite{Deshpande:1990ip}, and the corresponding thermal self-energies $\Pi_i(T)$ are provided in Appendix~\ref{appendix:masses}. As for the fermions, we consider only the third-generation quarks and three RHNs. In the LRSM their masses are respectively \begin{eqnarray} m_t=\frac{1}{\sqrt{2}}(y_t\kappa_1+y_b\kappa_2) \,, \quad m_b=\frac{1}{\sqrt{2}}(y_b\kappa_1+y_t\kappa_2) \,, \quad M_N=\sqrt{2}y_Nv_R \,, \end{eqnarray} with $y_{t,\,b}$ the Yukawa couplings for top and bottom quarks in the SM, $M_N$ the RHN masses and $y_N$ the corresponding Yukawa coupling. In the following study, for the sake of simplicity, we will assume the three RHNs are mass degenerate and do not mix with each other.
The degrees of freedom $g_i$ and constants $C_i$ in Eq.~(\ref{eqn:Veff}) are given by \begin{equation} \begin{aligned} (g_i,C_i)=\left\{\begin{array}{ll} (1,\frac{3}{2}), & \mbox{for scalars} \,,\\ (-2\lambda,\frac{3}{2}), & \mbox{for fermions} \,, \\ (3,\frac{5}{6}), & \mbox{for gauge bosons} \,, \end{array}\right. \end{aligned} \end{equation} with $\lambda=1 \, (2)$ for Weyl (Dirac) fermions, and the functions $J_{-} (J_{+})$ for bosons (fermions) are defined as \begin{equation} J_{\pm}(x^2)= \int_{0}^{\infty}dk k^2 \log \left(1 \pm e^{-\sqrt{x^2+k^2}} \right) \,. \end{equation} In the limit of small $x^2 = m^2/T^2$, we can use the approximations~\cite{Basler:2018cwe}: \begin{eqnarray} J_{+}\left(x^{2}\right) & \ = \ & \frac{7 \pi^{4}}{360}-\frac{\pi^{2}}{24} x^{2}-\frac{1}{32} x^{4}\log \frac{x^{2}}{a_F}+\mathcal{O}(x^4) \,, \\ J_{-}\left(x^{2}\right) & \ = \ & -\frac{\pi^{4}}{45}+\frac{\pi^{2}}{12} x^{2}-\frac{\pi}{6}\left(x^{2}\right)^{3 / 2}-\frac{1}{32} x^{4}\log \frac{x^{2}}{a_B}+\mathcal{O}(x^4) \,, \end{eqnarray} where \begin{eqnarray} a_F \ = \ \pi^2 e^{3/2 -2\gamma_E} \,, \quad a_B \ = \ 16\pi^2 e^{ 3/2 -2\gamma_E }\,. \end{eqnarray} In this paper we focus on the phase transition at the $v_R$ scale, thus as an approximation all the effects of SM components on the symmetry breaking $ SU(2)_R \times U(1)_{B-L} \to U(1)_Y$ can be neglected. 
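As a quick numerical cross-check of the thermal functions defined above, the sketch below evaluates the defining integral of $J_\pm$ by composite Simpson quadrature and compares with the exact values $J_+(0) = 7\pi^4/360$ and $J_-(0) = -\pi^4/45$ that lead the small-$x$ expansions; the cutoff and step size are assumed numerical choices.

```python
import math

def J(x2, sign, kmax=40.0, n=4000):
    """Thermal function J_pm(x^2) = int_0^inf dk k^2 log(1 pm exp(-sqrt(x^2+k^2))),
    with sign=+1 for fermions (J_+) and sign=-1 for bosons (J_-).
    Composite Simpson's rule on [eps, kmax]; the exp(-k) tail beyond kmax
    is negligible for kmax ~ 40."""
    a = 1e-9                      # avoid log(0) of the bosonic integrand at k = 0
    h = (kmax - a) / n
    def f(k):
        return k * k * math.log(1.0 + sign * math.exp(-math.sqrt(x2 + k * k)))
    s = f(a) + f(a + n * h)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

print(J(0.0, +1), 7.0 * math.pi**4 / 360.0)   # J_+(0) vs exact
print(J(0.0, -1), -math.pi**4 / 45.0)         # J_-(0) vs exact
```

The quadrature reproduces the leading constants of the $J_\pm$ expansions to well below per-mille accuracy, which is the regime in which the high-temperature approximations above are used.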
Neglecting the daisy contributions, the effective potential ${\cal V}_{\rm eff}$ can be written down explicitly in the following form~\cite{Cohen:1993nk}: \begin{equation} \begin{aligned} V_{eff}( v, \Pi_i =0) \ \simeq \ D \, (T^2 - T_0^2)\, v^2 - E \, T \, v^3 + \frac{\rho_T}{4} \, v^4\,, \end{aligned} \label{vpara} \end{equation} where $D$, $T_0$, $E$ and $\rho_T$ can be expressed in terms of the model parameters as \begin{eqnarray} \label{eqn:D} D & \ = \ & \frac{1}{8 v_R^2}\left(M_{Z_R}^2+2M_{W_R}^2+M_{N}^2\right)+D_H \,, \\ \label{eqn:T0} T_0^2 & = & \frac{M_{H_3^0}^2}{4D}+T^2_H \,, \\ \label{eqn:E} E & \ = \ & \frac{ M_{Z_R}^3+2M_{W_R}^3}{4\pi v_R^3}+E_H \,, \\ \label{eqn:rhoT} \rho_T & \ = \ & \rho_1-\frac{3\left(M_{Z_R}^4+2M_{W_R}^4\right)}{16\pi^2 v_R^4}\left(\frac{5}{6}+\log\frac{\mu^2}{a_BT^2}\right) \nonumber \\ && +\frac{6M_{N}^4}{16\pi^2 v_R^4}\left(\frac{3}{2}+\log\frac{\mu^2}{a_FT^2}\right)+\rho_H \,, \label{eqply1} \end{eqnarray} where $M_X$ is the mass of the particle $X$, and $\mu$ is the renormalization scale. Since there are many scalars in the LRSM, we deliberately separate their contributions from those of the vector bosons and RHNs. The scalar contributions to each of the terms in Eqs.~(\ref{eqn:D}) to (\ref{eqn:rhoT}) can be written in terms of the scalar masses via \begin{eqnarray} \label{eqn:DH} D_H & \ = \ & \frac{1}{24 v_R^2}\left( 4M^2_{H_1^0} +6M_{H_2^0}^2 +7M_{H_3^0}^2 +2M_{H_2^{\pm\pm}}^2 \right) \,, \\ \label{eqn:TH} T^2_H & \ = \ & \frac{M_{H_3^0}^2}{D}\frac{ 6M_{H_2^0}^2 + 7M_{H_3^0}^2+2M_{H_2^{\pm\pm}}^2}{64\pi^2v_R^2}\left(\frac{3}{2}+\log \frac{\mu^2}{a_BT^2} \right) \,, \\ \label{eqn:EH} E_H & \ = \ & \frac{1}{16\pi v_R^3} \left\{ \frac{16}{3} M_{H_1^0}^3 + \sqrt2 M_{H_3^0}^3 \left( 1 - r_v \right)^{3/2} + \sqrt6 M_{H_3^0}^3 \left( 1 - \frac13 r_v \right)^{3/2} \right. \nonumber \\ && + \left.
2\sqrt2 \left[ M_{H_3^0}^2 \left( 1 - r_v \right) + 2 M_{A_2^0}^2 \right]^{3/2} + \frac{2\sqrt2}{3} \left[ M_{H_3^0}^2 \left( 1 - r_v \right) + 2 M_{H_2^{\pm\pm}}^2 \right]^{3/2} \right\} \,, \\ \label{eqn:rhoH} \rho_H & \ = \ & -\frac{ 4M_{H_1^0}^4 + 6M_{H_2^0}^4 + 5M_{H_3^0}^4 + 2M_{H_2^{\pm\pm}}^4 +6M_{H_2^0}^2M_{H_3^0}^2 + 2M_{H_3^0}^2M_{H_2^{\pm\pm}}^2}{16\pi^2 v_R^4} \nonumber \\ && \quad\times\left(\frac{3}{2}+\log\frac{\mu^2}{a_BT^2}\right) \,, \end{eqnarray} where we have defined $r_v \equiv v_R^2/v^2$. It should be pointed out that all the masses in Eqs.~(\ref{eqn:DH}) to (\ref{eqn:rhoH}) depend upon the right-handed VEV $v_R$ instead of $v$. It is observed that the RHNs can also contribute to the symmetry breaking $SU(2)_R \times U(1)_{B-L} \to U(1)_{Y} $ by affecting the parameters $D$, $T_0$ and $\rho_T$, while the parameter $E$ receives contributions only from the scalars and gauge bosons. As seen in Eqs.~(\ref{eqn:rhoT}) and (\ref{eqn:rhoH}), the parameter $\rho_T$ receives not only a tree-level contribution from the quartic coupling $\rho_1$, which corresponds to the $H_3^0$ mass via $\rho_1 \simeq M_{H_3^0}^2/2v_R^2$ (see Table~\ref{tab:mass}), but also loop-level contributions from the heavy scalars, gauge bosons and RHNs in the LRSM. In particular, when the quartic coupling $\rho_1$ is small, or equivalently when the mass of the scalar $H_3^0$ is much smaller than the $v_R$ scale, which is the parameter space of interest for phase transition and GW production in the LRSM (cf. Figs.~\ref{figvT}, \ref{fig:random1} and \ref{fig:GWcurves}), the loop-level contributions in Eq.~(\ref{eqn:rhoT}) might dominate $\rho_T$. Furthermore, $\rho_T$ depends also on the gauge coupling $g_R$ via the heavy gauge boson masses $M_{W_R}$ and $M_{Z_R}$. To have a strong FOPT, the cubic term $- E T v^3$ is crucial. In the limit of $E \to 0$, the phase transition is of second order.
In the SM, the effective coefficient $E$ of the $\phi^3$ term is dominated by the gauge boson contributions, while in the LRSM it receives contributions from both the scalars and the gauge bosons. As a result of the large number of degrees of freedom in the scalar sector of the LRSM, the scalar contributions to $E$ can even be much larger. The order parameter describing the FOPT is given by $v_c/T_c$, where $v_c$ is the non-vanishing location of the minimum at the critical temperature $T_c$ at which the effective potential ${\cal V}_{\rm eff}$ has two degenerate minima. In EW baryogenesis~\cite{Kuzmin:1985mm, Shaposhnikov:1986jp, Shaposhnikov:1987tw}, to avoid the washout effects in the broken phase within the bubble wall, a strong FOPT is typically required to satisfy the following condition \begin{eqnarray} \label{eqn:vcTc} \frac{v_c}{T_c} = \frac{2 E}{\rho_T} \geq 1 \,. \end{eqnarray} \subsection{Strong first-order phase transition at the $v_R$ scale} The effective potential (\ref{eqn:Veff}) is a function of temperature $T$, and its minima vary when the temperature changes. In order to find the quantity $v_c/T_c$, which measures the strength of the FOPT, we need to find both the critical temperature $T_c$ and the critical VEV $v_c$.\footnote{There might be some theoretical uncertainties in perturbative calculations of FOPTs and resultant GWs, which can be found, e.g. in Ref.~\cite{Croon:2020cgk}. } In terms of the parametrization given in Eq.~(\ref{vpara}), the critical temperature can be approximately expressed as \begin{eqnarray} T_c^2 \simeq T_0^2 \frac{\rho_T D }{\rho_T D - E^2}\,. \end{eqnarray} Thus it is clear that $T_c \sim v_R \gg v_{\rm EW}$.
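The relations $v_c/T_c = 2E/\rho_T$ of Eq.~(\ref{eqn:vcTc}) and $T_c^2 \simeq T_0^2\,\rho_T D/(\rho_T D - E^2)$ follow from imposing degenerate minima on the quartic parametrization of Eq.~(\ref{vpara}), and can be checked numerically; the parameter values below are assumed toy inputs, not an LRSM benchmark.

```python
import math

# Toy check of the FOPT relations for V(v,T) = D(T^2-T0^2)v^2 - E T v^3 + (rho/4)v^4.
D, T0, E, rho = 0.4, 1.0e4, 0.05, 0.2   # assumed toy values, T0 in GeV

# closed-form critical temperature quoted in the text
Tc = T0 * math.sqrt(rho * D / (rho * D - E**2))

# at T = Tc the nontrivial root of V/v^2 = 0 is a double root (degenerate minima):
# (rho/4) v^2 - E Tc v + D(Tc^2 - T0^2) = 0
a, b, c = rho / 4.0, -E * Tc, D * (Tc**2 - T0**2)
disc = max(b * b - 4.0 * a * c, 0.0)    # analytically zero at T = Tc
vc = (-b + math.sqrt(disc)) / (2.0 * a)

print(f"vc/Tc = {vc / Tc:.6f}, 2E/rho_T = {2.0 * E / rho:.6f}")
```

The discriminant vanishing at $T_c$ is precisely the statement that the broken minimum is degenerate with the symmetric one, and the double root reproduces $v_c/T_c = 2E/\rho_T$.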
Therefore, it is justified to neglect the contributions of SM particles to the phase transition at the right-handed scale $v_R$, since their masses $m_{\rm SM}$ are at most close to $v_{\rm EW}$ and their contributions are suppressed due to their tiny couplings to the right-handed triplet. For given $v_R$ and heavy particle masses in the LRSM, the two key parameters $T_c$ and $v_c$ can be obtained from the effective potential (\ref{eqn:Veff}) by requiring the two conditions ${\cal V}_{\rm eff}(T_c; v_c)={\cal V}_{\rm eff}(T_c; 0)$ and $v_c \neq 0$. In the numerical evaluations, we change the temperature from a sufficiently high energy scale, say $v_R$, toward lower values around the EW scale. A reasonable critical temperature $T_c$ for the phase transition $SU(2)_R \times U(1)_{B-L} \to U(1)_Y$ is assumed to be within this range. The dependence of $v_c/T_c$ on the parameters in the LRSM is exemplified in Fig.~\ref{figvT}, where in the numerical calculations we have included all the contributions in Eq.~(\ref{eqn:Veff}). Taking into account all the theoretical and experimental constraints in Section~\ref{sec:lrsm}, we first consider scenarios with the simplifications $\lambda_2=\lambda_3=\lambda_4=\alpha_1=\alpha_2=0$. In order to identify the parameter space where the phase transition is of first order, we calculate $v_c/T_c$ at the critical temperature $T_c$ with different values of the quartic couplings $\rho_1$, $\rho_2$, $\rho_3 - 2 \rho_1$ and $\alpha_3$. When we calculate the dependence of $v_c/T_c$ on two of the quartic couplings, all the others are fixed such that their corresponding scalar masses equal the $W_R$ mass, and the gauge coupling $g_R = g_L$. To be concrete, we have set the renormalization scale $\mu$ to be the $v_R$ scale in Eq.~(\ref{eqn:Veff}). The corresponding results are shown in the first three panels of Fig.~\ref{figvT}.
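The numerical procedure just described, lowering the temperature and detecting when the symmetric and broken minima become degenerate, can be sketched for the quartic parametrization of Eq.~(\ref{vpara}); the coefficients are assumed toy inputs, and for the full potential of Eq.~(\ref{eqn:Veff}) the same logic applies with a numerical minimizer.

```python
import math

# Toy temperature scan for V(v,T) = D(T^2-T0^2)v^2 - E T v^3 + (rho/4)v^4:
# step T downward until the broken minimum becomes degenerate with v = 0,
# then refine T_c by bisection.  D, T0, E, rho are assumed toy values.
D, T0, E, rho = 0.4, 1.0e4, 0.05, 0.2

def V(v, T):
    return D * (T**2 - T0**2) * v**2 - E * T * v**3 + 0.25 * rho * v**4

def broken_minimum(T):
    """Nontrivial local minimum of V from V'(v) = 0, or None if absent."""
    disc = 9.0 * E**2 * T**2 - 8.0 * rho * D * (T**2 - T0**2)
    if disc < 0.0:
        return None
    return (3.0 * E * T + math.sqrt(disc)) / (2.0 * rho)

T = 1.5 * T0                       # start well above T0
while True:                        # coarse downward scan
    vm = broken_minimum(T)
    if vm is not None and V(vm, T) <= 0.0:
        break                      # broken minimum now degenerate or deeper
    T -= 1e-3 * T0
T_lo, T_up = T, T + 1e-3 * T0      # bracket around T_c
for _ in range(60):                # bisection refinement
    T_mid = 0.5 * (T_lo + T_up)
    vm = broken_minimum(T_mid)
    if vm is None or V(vm, T_mid) > 0.0:
        T_up = T_mid
    else:
        T_lo = T_mid
Tc = 0.5 * (T_lo + T_up)
print(f"scanned T_c = {Tc:.2f} GeV, v_c/T_c = {broken_minimum(Tc) / Tc:.4f}")
```

The scanned $T_c$ agrees with the closed-form expression $T_c^2 \simeq T_0^2\,\rho_T D/(\rho_T D - E^2)$, confirming that the degeneracy condition and the analytic approximation are consistent for this parametrization.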
The dependence of $v_c/T_c$ on the couplings $\rho_1$ and $\alpha_3$, $\rho_3 - 2 \rho_1$ and $\rho_1$, and $\rho_2$ and $\rho_1$ is shown respectively in the upper left, upper right and lower left panels. The quantity $v_c/T_c$ is a dimensionless parameter and is independent of the right-handed scale $v_{R}$ in the limit of $v_R \gg v_{\rm EW}$. As the quartic couplings $\rho_1$, $\rho_2$, $\rho_3 - 2 \rho_1$ and $\alpha_3$ are related directly to the scalar masses $M_{H_3^0}$, $M_{H_2^{\pm\pm}}$, $M_{H_2^0}$ and $M_{H_1^0}$ (cf. Table~\ref{tab:mass}), the dependence of $v_c/T_c$ on the quartic couplings in Fig.~\ref{figvT} can also be understood effectively as the dependence of $v_c/T_c$ on the mass-to-$v_R$ ratios $M_{H_1^0}/v_R$, $M_{H_2^0}/v_R$, $M_{H_3^0}/v_R$ and $M_{H_2^{\pm\pm}}/v_R$. Through the gauge boson masses $M_{W_R}$ and $M_{Z_R}$, the parameter $v_c/T_c$ depends also on the gauge coupling ratio $r_g$, or equivalently on the right-handed gauge coupling $g_R$. This is shown in the lower right panel of Fig.~\ref{figvT}; as seen in this figure, the $v_c/T_c$ limit on $\rho_1$ has a moderate or weak dependence on $r_g$, depending on the value of $\rho_1$. \begin{figure}[!t] \includegraphics[height=0.4\linewidth]{vcTc_1.pdf} \hspace{0.45cm} \includegraphics[height=0.415\linewidth]{vcTc_2.pdf} \\ \includegraphics[height=0.4\linewidth]{vcTc_3.pdf} \hspace{-0.3cm} \includegraphics[height=0.41\linewidth]{vcTc_4.pdf} \caption{$v_c/T_c$ at the critical temperature in the plane of $\rho_1$ versus $\alpha_3$ (upper left), $\rho_1$ versus $\rho_3 -2\rho_1$ (upper right), $\rho_1$ versus $\rho_2$ (lower left) and $\rho_1$ versus $r_g$ (lower right). The color indicates the value of $v_c/T_c$. In all the panels, the other parameters are fixed such that their corresponding scalar masses are set to be the $W_R$ mass.
\label{figvT}} \end{figure} Given the information on $v_c/T_c$ in Fig.~\ref{figvT}, a few more comments are now in order: \begin{itemize} \item As seen in Fig.~\ref{figvT}, a strong FOPT in the LRSM requires a relatively small quartic coupling $\rho_1 \lesssim 0.07$ for the parameter space we are considering, which is qualitatively similar to the SM case, where a light Higgs boson (say $M_h < 80 $ GeV) is needed in order to have a first-order EW phase transition~\cite{Cline:2006ts}. It turns out that a small $\rho_1$ (and resultantly a light $H_3^0$) is not only crucial for the prospects of GWs in future experiments (cf. Fig.~\ref{fig:GWcurves}), but also triggers rich phenomenology for the searches of long-lived particles (LLPs) at high-energy colliders and dedicated detectors~\cite{Dev:2016vle, Dev:2017dui}. \item The phase transition at the $v_R$ scale occurs when the neutral component $\Delta_R^0$ of the right-handed triplet $\Delta_R$ develops a non-vanishing VEV $v_R$. As a result, the strong FOPT is more sensitive to the mass of $H_3^0$, or equivalently to the value of $\rho_1$, than to the other heavy scalar masses. This is also clearly demonstrated in the plots of Fig.~\ref{figvT}. As seen in the upper left, upper right and lower left panels, the quartic couplings $\alpha_3$, $\rho_3 - 2\rho_1$ and $\rho_2$ can reach up to order one, while $\rho_1 \lesssim 0.1$ in Fig.~\ref{figvT}. \item Although the quartic couplings $\alpha_3$, $\rho_2$ and $\rho_3 - 2\rho_1$ are less constrained by the FOPT than the critical coupling $\rho_1$, as seen in the first three panels of Fig.~\ref{figvT}, if any of these couplings is sufficiently large, it will invalidate the strong FOPT at the $v_R$ scale, no matter how small $\rho_1$ is. Meanwhile, the white areas in the plots of Fig.~\ref{figvT} indicate that in these regions the perturbation method starts to break down and theoretical predictions become more difficult.
\end{itemize} In Fig.~\ref{figvT} we have fixed some parameters in the LRSM and varied two of them. To see more details of the correlation between $v_c/T_c$ and the parameters in the LRSM, we perform a more thorough scan of the parameter space of the LRSM. To be specific, we adopt the following ranges: \begin{eqnarray} & \xi =10^{-3}, \quad \alpha_2=\beta_i=\lambda_{2,3,4}=0,\quad r_g=1,\quad v_R = 10\, {\rm TeV},\;\; 20 \, {\rm TeV} \,, \nonumber \\ &\rho_1 \in [0, 0.5], \quad \alpha_3 \in [0, 10], \quad \rho_3-2\rho_1,\rho_2,y_{N} \in [0,2],\quad \lambda_1\in [0.13,2] \label{scan} \end{eqnarray} and apply all the theoretical and experimental constraints in Section~\ref{sec:lrsm}. A few comments follow: \begin{itemize} \item We have chosen $\xi = \kappa_2/\kappa_1=0.001$ in order to satisfy the theoretical constraint in Eq.~(\ref{eqn:xi}). \item We have chosen $\alpha_2=0$ in order to meet the requirement of the correct vacuum conditions given in Eq.~(\ref{eqn:correctvacuum}). \item It is known from Fig.~\ref{figvT} that a strong FOPT needs a small $\rho_1$; therefore we have chosen $\rho_{1} < 0.5$. \item $\rho_3-2\rho_1$ has been set to be larger than zero, as it corresponds to the masses of the left-handed triplet scalars (see Table~\ref{tab:mass}). \item The quartic coupling $\alpha_1$ is not a free parameter here, as it is related to $\lambda_1$ and the SM coupling $\lambda$ via Eq.~(\ref{eqn:lambda1}). As $\alpha_1^2/4\rho_1$ is always positive, it turns out that the quartic coupling $\lambda_1 \geq \lambda \simeq 0.13$. \item We have chosen two benchmark values of $10$ TeV and $20$ TeV for the right-handed scale $v_R$ to examine the dependence of the FOPT on $v_R$. It turns out that the phase transition is almost independent of the value of $v_R$, as expected.
\end{itemize} \begin{figure}[!t] \centering \subfigure{\includegraphics[width=0.49\linewidth]{scatter1.pdf}} \subfigure{\includegraphics[width=0.49\linewidth]{scatter2.pdf}} \caption{Scatter plots of $\rho_1$ and $\alpha_3$, where the blue points have $v_c/T_c<1$ and the red ones $v_c/T_c>1$. In the left panel, the FCNC limits on $\alpha_3$ for $v_R = 10$ TeV and $M_{H_1^0} < 15$ TeV are indicated by the pink shaded regions. In the right panel, the case with $v_R = 20$ TeV is shown.\label{fig:random1}} \end{figure} The resultant scatter plots of $v_c/T_c$ are presented in Fig.~\ref{fig:random1} as functions of the parameters $\rho_1$ and $\alpha_3$. The data points of strong FOPT with $v_c/T_c>1$ are shown in red, while those with $v_c/T_c<1$ are in blue. When we set $v_R = 10$ TeV and take the FCNC limit of $M_{H_1^0} > 15$ TeV~\cite{Zhang:2007da}, the quartic coupling $\alpha_3$ should meet the condition $\alpha_3 > 2 M_{H_1^0}^2/v_R^2 = 4.5$. The region shaded in light pink in the left panel of Fig.~\ref{fig:random1} is excluded by this condition. It is found that only a small fraction of the data points survive and have a strong FOPT. When the $v_R$ scale is higher, say $v_R = 20$ TeV, the lower limit on the quartic coupling $\alpha_3$ is significantly weaker, i.e. $\alpha_3 > 1.13$. The corresponding light pink shaded region in the right panel of Fig.~\ref{fig:random1} is excluded. There are then more points with a strong FOPT with $v_c/T_c>1$, as clearly shown in the right panel of Fig.~\ref{fig:random1}. \section{Gravitational waves} \label{sec:GW} The thermal stochastic GWs can be generated by three physical processes in the phase transition~\cite{Caprini:2015zlo}: collisions of bubbles, sound waves (SWs) in the plasma after the bubble collisions, and the magnetohydrodynamic (MHD) turbulence formed after the bubble collisions.
For non-runaway scenarios, GWs are dominated by the latter two sources~\cite{Caprini:2015zlo}, and the corresponding GW spectrum can be approximated as \begin{equation} h^2\Omega_{\rm GW} \ \simeq \ h^2\Omega_{\rm SW}+h^2\Omega_{\rm MHD} \,. \end{equation} The SW contribution has the form~\cite{Hindmarsh:2015qta} \begin{eqnarray} h^2\Omega_{\rm SW} (f) & \ \simeq \ & 2.65 \times 10^{-6}\left(\frac{H_*}{\beta}\right)\left(\frac{\kappa_v \alpha}{1+\alpha}\right)^2\left(\frac{100}{g_*}\right)^{1/3} v_w \left( \frac{f}{f_{\rm SW}}\right) ^3\left[ \frac{7}{4+3 \left(\frac{f}{f_{\rm SW}}\right)^2} \right]^{7/2} \,, \nonumber \\ && \end{eqnarray} where $f$ is the frequency, $g_\ast$ and $H_\ast$ are respectively the number of relativistic degrees of freedom in the plasma and the Hubble parameter at the temperature $T_\ast$, $v_w$ is the bubble wall velocity, $\alpha$ describes the strength of the phase transition, $\beta/H_{*}$ measures the rate of the phase transition, and \begin{equation} \kappa_v=\frac{\alpha}{0.73+0.083\sqrt{\alpha}+\alpha} \end{equation} is the fraction of vacuum energy that is converted to bulk motion. The peak frequency $f_{\rm SW}$ is approximated by \begin{eqnarray} f_{\rm SW} & \ \simeq \ & 1.9\times 10^{-2}\frac{1}{v_w}\left(\frac{\beta}{H_*} \right)\left(\frac{T_*}{100~\text{GeV}} \right) \left(\frac{g_*}{100} \right)^{1/6} \text{mHz} \,.
\end{eqnarray} The MHD turbulence contribution is~\cite{Caprini:2009yp, Binetruy:2012ze} \begin{eqnarray} h^2\Omega_{\rm MHD} (f) & \ \simeq \ & 3.35 \times 10^{-4}\left(\frac{H_*}{\beta}\right)\left(\frac{\kappa_{\rm MHD} \alpha}{1+\alpha}\right)^{3/2} \left(\frac{100}{g_*}\right)^{1/3} v_w \frac{\left( \frac{f}{f_{\rm MHD}} \right)^3}{\left( 1+ \frac{f}{f_{\rm MHD}}\right)^{11/3} \left( 1+ \frac{8\pi f}{h_*} \right)} \,, \nonumber \\ && \label{omigaturb} \end{eqnarray} where $\kappa_{\rm MHD} \simeq 0.05\kappa_v$ is the fraction of vacuum energy that is transformed into the MHD turbulence, and $h_\ast$ is the inverse Hubble time at GW production (red-shifted to today), given by \begin{equation} h_*=16.5\times 10^{-6}\left(\frac{T_*}{100~\text{GeV}} \right) \left(\frac{g_*}{100} \right)^{1/6} \text{Hz} \,, \end{equation} and the peak frequency is \begin{eqnarray} f_{\rm MHD} & \ \simeq \ & 2.7\times 10^{-2}\frac{1}{v_w}\left(\frac{\beta}{H_*} \right)\left(\frac{T_*}{100~\text{GeV}} \right) \left(\frac{g_*}{100} \right)^{1/6} \text{mHz} \,. \end{eqnarray} As shown in the formulas above, the gravitational wave spectrum from FOPTs is generally characterized by two parameters related to the phase transition, namely $\alpha$ and $\beta$ \cite{Grojean:2006bp}. The parameter $\alpha$ is defined as the ratio of the vacuum energy density $\epsilon_\ast$ released at the phase transition temperature $T_*$ to the energy density of the universe in the radiation era, i.e. \begin{eqnarray} \alpha \ = \ \frac{\epsilon_{*}}{g_{*}\pi^2 T_{*}^4 / 30} \,, \end{eqnarray} where $\epsilon_{*}$ is the latent heat and can be expressed as \begin{eqnarray} \epsilon_{*} \ = \ \left.\left( - \Delta V_{\rm eff} +T \frac{d \Delta V_{\rm eff}}{d T} \right) \right|_{T=T_{*}} \,. \end{eqnarray} Here $\Delta V_{\rm eff}$ denotes the difference of potential energy between the false and true vacua, i.e.
$\Delta V_{\rm eff} = - V_{\rm eff} (0, T) + V_{\rm eff} (v, T)$, which can be simply determined by $T_*$ and the parameters of the LRSM. The parameter $\beta$ describes the rate of variation of the bubble nucleation rate during the phase transition, and its inverse describes the duration of the phase transition. To describe the rate of the phase transition, the dimensionless parameter $\beta/H_{*}$ is defined from the following equation \begin{equation} \frac{\beta}{H_{*}}=\left. T \frac{d(S_3/T)}{dT}\right|_{T=T_{*}}, \end{equation} where $S_3$ denotes the three-dimensional Euclidean action of a critical bubble. $T_*$ denotes the temperature at which the phase transition ends, and can be determined by requiring that the probability for nucleating one bubble per horizon volume equals 1, i.e. \begin{equation} \int_{T_*}^{T_c}\frac{dT}{T}\frac{\Gamma(T)}{H^4}=1 \,, \end{equation} where $\Gamma(T)$ is the probability of bubble nucleation per horizon volume, which can be expressed as $\Gamma(T) = \Gamma_0 \exp\{- {S_3}/{T} \}$, with $\Gamma_0 = T^4 (S_3/2\pi T)^{3/2}$~\cite{Coleman:1977py,Linde:1980tt,Linde:1981zj}. In this paper, $S_3$ is computed using the code {\tt CosmoTransitions}~\cite{Wainwright:2011kj} to solve the bounce equation of bubbles. The parameters $\alpha$ and $\beta$ set respectively the strength and the rate of the phase transition, and their typical values in the LRSM are shown respectively in the left and right panels of Fig.~\ref{alphabetavrTc}. As demonstrated by the data points, the value of $\alpha$ varies roughly from $0.001$ to $0.1$, and $\beta / H_\ast$ can range from $10^2$ to $10^4$. In the numerical calculations, all the data points in Fig.~\ref{alphabetavrTc} have a strong FOPT.
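As a rough numerical illustration of the formulas above, the following Python sketch evaluates the sound-wave contribution at its peak. The values $g_* = 100$ and $v_w = 1$ are illustrative assumptions (not fixed by the formulas themselves), and the input $(\alpha, \beta/H_*, T_*) = (0.056, 265, 2.17~{\rm TeV})$ corresponds to BP1 of Table~\ref{tab:BPs}.

```python
import math

def kappa_v(alpha):
    # Fraction of vacuum energy converted to bulk motion of the plasma
    return alpha / (0.73 + 0.083 * math.sqrt(alpha) + alpha)

def f_sw_peak(beta_over_H, T_star_GeV, v_w=1.0, g_star=100.0):
    # Peak frequency of the sound-wave spectrum, converted from mHz to Hz
    return 1.9e-5 / v_w * beta_over_H * (T_star_GeV / 100.0) * (g_star / 100.0) ** (1 / 6)

def omega_sw(f, alpha, beta_over_H, T_star_GeV, v_w=1.0, g_star=100.0):
    # h^2 Omega_SW(f): amplitude prefactor times the spectral shape function
    fp = f_sw_peak(beta_over_H, T_star_GeV, v_w, g_star)
    kv = kappa_v(alpha)
    shape = (f / fp) ** 3 * (7.0 / (4.0 + 3.0 * (f / fp) ** 2)) ** 3.5
    return (2.65e-6 / beta_over_H * (kv * alpha / (1.0 + alpha)) ** 2
            * (100.0 / g_star) ** (1 / 3) * v_w * shape)

# BP1-like strong-FOPT point: alpha = 0.056, beta/H* = 265, T* = 2.17 TeV
fp = f_sw_peak(265.0, 2170.0)
print(f"f_SW ~ {fp:.2f} Hz, h^2 Omega_SW(f_SW) ~ {omega_sw(fp, 0.056, 265.0, 2170.0):.1e}")
```

With these inputs the peak sits near $0.1$ Hz with an amplitude of order $10^{-13}$, in line with the numbers quoted for the strongest BPs in this section.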
\begin{figure}[!t] \includegraphics[width=0.49\linewidth]{alpha} \includegraphics[width=0.5\linewidth]{beta} \caption{The values of $\alpha$ (left) and $\beta/H_*$ (right) for the data points which have a strong FOPT, as functions of $v_c/T_c$.} \label{alphabetavrTc} \end{figure} \begin{figure}[!t] \includegraphics[height=0.3\linewidth]{GW_1} \includegraphics[height=0.305\linewidth]{GW_2} \caption{ GW peaks for the data points in Fig.~\ref{alphabetavrTc}, as functions of $v_c/T_c$ (left) and frequency $f$ (right). Also shown in the right panel are the prospects of LISA~\cite{Audley:2017drz, Cornish:2018dyw}, TianQin~\cite{Luo:2015ght}, Taiji~\cite{Guo:2018npi}, ALIA~\cite{Gong:2014mca}, MAGIS~\cite{Coleman:2018ozp}, BBO~\cite{Corbin:2005ny}, DECIGO~\cite{Musha:2017usi}, ET~\cite{Punturo:2010zz}, and CE~\cite{Evans:2016mbw}.} \label{GWpeak} \end{figure} Assuming the bubble wall velocity $v_w\sim 1$, the corresponding GW signals of the data points in Fig.~\ref{alphabetavrTc} are shown in Fig.~\ref{GWpeak}. The correlation between the ratio $v_c/T_c$ and the GW signal peaks is presented in the left panel. We can read from Fig.~\ref{alphabetavrTc} and the left panel of Fig.~\ref{GWpeak} that for large $v_c/T_c$ the value of $\alpha$ is typically larger, thus yielding stronger GW signals. The GW strength and frequency peaks are shown in the right panel of Fig.~\ref{GWpeak}. The potential sensitivities of LISA~\cite{Audley:2017drz, Cornish:2018dyw}, TianQin~\cite{Luo:2015ght}, Taiji~\cite{Guo:2018npi}, ALIA~\cite{Gong:2014mca}, MAGIS~\cite{Coleman:2018ozp}, BBO~\cite{Corbin:2005ny}, DECIGO~\cite{Musha:2017usi}, ET~\cite{Punturo:2010zz}, and CE~\cite{Evans:2016mbw} are also depicted in the right panel of Fig.~\ref{GWpeak}. As seen in this figure, the frequency peak in the LRSM can range from $10^{-1}$ to $10^2$ Hz.
Furthermore, there are some data points of the LRSM with frequencies in the range of roughly $0.1$ to $10$ Hz and GW strength larger than $10^{-17}$, which can be detected in the future by BBO and DECIGO, or even by ALIA and MAGIS. \begin{figure}[!t] \centering \subfigure{\includegraphics[width=0.49\linewidth]{GW_mass_1}} \subfigure{\includegraphics[width=0.49\linewidth]{GW_mass_2}} \caption{ Distributions of data points as functions of the masses of $H_1^0$, $H_2^0$, $H_2^{\pm\pm}$, $H_3^0$ and $N$, with the strong FOPT $v_c/T_c>1$ (left), and for the data points that can be detected by BBO and DECIGO (right).} \label{fig:massofsamples} \end{figure} For the data points in Fig.~\ref{GWpeak} with strong FOPT, the mass spectra of the scalars $H_1^0$, $H_2^0$, $H_3^0$, $H_2^{\pm\pm}$ and the mass of RHNs $N$ are shown in the left panel of Fig.~\ref{fig:massofsamples}, and the mass spectra of these particles for the data points that are achievable in the BBO and DECIGO experiments are presented in the right panel of Fig.~\ref{fig:massofsamples}. The two plots of Fig.~\ref{fig:massofsamples} clearly show that the masses of $H_1^0$, $H_2^0$ and $H_2^{\pm\pm}$ can reach up to a few times 10 TeV, with their lower mass limits roughly around the experimental constraints in Section~\ref{sec:experimental} (see also Table~\ref{table:bounds} and Fig.~\ref{fig:spectra}). The mass of $H_3^0$ can go to much smaller values, i.e. from 20 GeV up to 10 TeV. This can be easily understood: on one hand, the theoretical and experimental constraints on the $H_3^0$ mass are rather weak (see Section~\ref{sec:lrsm}); on the other hand, the strong FOPT and GW production in the LRSM favor a relatively light $H_3^0$ (see Figs.~\ref{figvT}, \ref{fig:random1} and \ref{fig:GWcurves}). As seen in Fig.~\ref{fig:massofsamples}, the RHN masses $M_N$ can range roughly from 300 GeV up to 40 TeV.
It is expected that the GW probes of $H_3^0$ and the RHNs are largely complementary to their direct searches at the high-energy colliders, including the searches of long-lived $H_3^0$ and $N$. See Section~\ref{sec:H3} for more details. \begin{figure}[!t] \centering \includegraphics[width=0.75\textwidth]{mass2.pdf} \caption{Combined plot of the experimental limits in Fig.~\ref{fig:spectra} (blue and pink blocks with arrows) and the GW prospects of the masses of $H_1^0$, $H_2^0$, $H_2^{\pm\pm}$, $H_3^0$ and $N$ in the right panel of Fig.~\ref{fig:massofsamples} (green hatched regions). The horizontal black lines are the masses of the SM Higgs $h$, top quark $t$, and $W$, $Z$ bosons. \label{fig:cmbspectra} } \end{figure} For the purpose of comparison, we present in Fig.~\ref{fig:cmbspectra} the experimental limits on the masses of $H_1^0$, $H_2^0$ and $H_2^{\pm\pm}$ in Fig.~\ref{fig:spectra} and the GW sensitive ranges of the masses of $H_1^0$, $H_2^0$, $H_2^{\pm\pm}$, $H_3^0$ and $N$ in Fig.~\ref{fig:massofsamples}, where the mass ranges within the sensitivities of GW detectors are represented by green hatched areas. It is clear that the GWs from the phase transition can probe a large region of parameter space in the LRSM that goes beyond the current collider limits. To expose more features of GWs from the phase transition at the $v_R$ scale in the LRSM, we have chosen five specific BPs. For the sake of concreteness and simplicity, we have chosen $v_R =10$ TeV, $\xi = 10^{-3}$, and set the quartic couplings $\lambda_1= \lambda = 0.13$, $\alpha_1=\alpha_2=\lambda_2=\lambda_3=\lambda_4=0$. The BSM particle masses $M_{H_1^0}$, $M_{H_2^0}$, $M_{H_3^0}$, $M_{H_2^{\pm\pm}}$ and $M_N$ are collected in the first few columns of Table~\ref{tab:BPs}. The resultant $v_c$, $T_c$, $T_\ast$ and the parameters $\alpha$ and $\beta/H_\ast$ are also shown in Table~\ref{tab:BPs}.
The GW spectra $h^2 \Omega$ as function of the frequency $f$ for the five BPs are presented in Fig.~\ref{fig:GWcurves}. There are a few comments on the five BPs. \begin{figure}[!t] \centering \includegraphics[width=0.6\linewidth]{GW_BP} \caption{The same as in the right panel of Fig.~\ref{GWpeak}, but for the five BPs in Table~\ref{tab:BPs}.} \label{fig:GWcurves} \end{figure} \begin{table}[!t] \begin{center} \caption{Five BPs studied in this paper. Parameters not shown in the table are set to be $v_R=10$ TeV, $\xi=10^{-3}$, $\lambda_1=0.13$, $\alpha_1=\alpha_2=\lambda_2=\lambda_3=\lambda_4=0$. Their GW spectra are shown in Fig.~\ref{fig:GWcurves}. It is also noticed that all these BPs are non-runaway scenarios in terms of the criteria defined in Eq.~(25) of \cite{Caprini:2015zlo}. The suppression factor $\Upsilon$ in the last row is defined in Eq.~(\ref{eqn:upsilon})~\cite{Guo:2020grp,Fornal:2020esl}. } \label{tab:BPs} \vspace{5pt} \begin{tabular}{cccccc} \hline\hline BPs & BP1 & BP2 & BP3 & BP4 & BP5 \\ \hline $M_{H_1^0}$ & 10 TeV & 10 TeV & 10 TeV & 10 TeV & 10 TeV \\ \hline $M_{H_2^0}$ & 8 TeV & 8 TeV & 8 TeV & 8 TeV & 10 TeV \\ \hline $M_{H_3^0}$ & 40 GeV & 500 GeV & 1 TeV & 2 TeV & 2 TeV \\ \hline $M_{H_2^{\pm\pm}}$ & 8 TeV & 8 TeV & 8 TeV & 8 TeV & 10 TeV \\ \hline $M_{N}$ & 1 TeV & 1 TeV & 1 TeV & 1 TeV & 2 TeV \\ \hline $v_c$ & 8.02 TeV & 8.01 TeV & 7.98 TeV & 7.72 TeV & 7.18 TeV \\ \hline $T_c$ & 3.42 TeV & 3.50 TeV & 3.73 TeV & 4.49 TeV & 5.44 TeV \\ \hline $T_*$ & 2.17 TeV & 2.27 TeV & 2.75 TeV & 3.92 TeV & 4.89 TeV \\ \hline $\alpha$ & 0.056 & 0.053 & 0.037 & 0.019 & 0.0083 \\ \hline $\alpha_\infty$ & 0.18 & 0.16 & 0.11 & 0.053 & 0.037 \\ \hline $\beta/H_*$ & 265 & 272 & 493 & 1373 & 1908 \\ \hline $\Upsilon$ & 0.16 & 0.16 & 0.13 & 0.10 & 0.15\\ \hline\hline \end{tabular} \end{center} \end{table} \begin{itemize} \item It is clear in Fig.~\ref{fig:GWcurves} that the BPs (from BP1 to BP4) with the same values of $M_{H_1^0}$, $M_{H_2^0}$, $M_{H_2^{\pm\pm}}$
and $M_N$ but different $M_{H_3^{0}}$ can be probed in the future by BBO and DECIGO, and even by ALIA and MAGIS. It seems that the $H_3^0$ mass $M_{H_3^0}$, or equivalently the quartic coupling $\rho_1$, is crucial for the GWs in the LRSM. The BPs (like BP5) with a heavier $H_3^0$, or equivalently larger $\rho_1$, tend to generate a small $\alpha$ and large $\beta$, and thus produce weaker GW signals with a larger frequency. This is consistent with the findings in Ref.~\cite{Brdar:2019fur}. BP1 and BP2, with $H_3^0$ masses below the TeV scale, can produce GWs of order $10^{-13}$ with frequency at around 0.1 Hz, well above the sensitivities of BBO and DECIGO. BP4, with a 2 TeV $H_3^0$, can only produce GWs of order $10^{-16}$ with frequency peaked at 1 Hz, which can be marginally detected by BBO and DECIGO. \item Comparing BP4 and BP5, it is clear that only the masses of $H_2^{0}$, $H_2^{\pm\pm}$ and $N$ are heavier in BP5 than in BP4, while all other parameters are the same. As seen in Fig.~\ref{fig:GWcurves}, the GW signal in BP5 is so weak that it can escape the detection of all the planned GW experiments in the figure. This reveals that the masses $M_{H_2^{0}}$, $M_{H_2^{\pm\pm}}$ and $M_N$, or equivalently the couplings $\rho_3 - 2\rho_1$, $\rho_2$ and $y_N$, are also important for GW production in the LRSM. More data points in the numerical calculations reveal that the coupling $\alpha_3$ is also very important for the GW signals in the LRSM.
\end{itemize} \section{Complementarity of GW signals and collider searches of the LRSM} \label{sec:complementarity} In spite of the large number of BSM scalars, fermions and gauge bosons in the LRSM and the large number of quartic couplings in the potential~(\ref{eqn:potential}), it is phenomenologically meaningful to examine the role of some couplings, or equivalently the BSM particle masses, in the strong FOPT and the subsequent GW production in the early universe, as well as the potential correlations of GWs with the direct laboratory searches of these particles and the SM precision data at the high-energy colliders. In this section, we will elaborate on (i) the effects of the quartic coupling $\lambda_1$ in the scalar potential (\ref{eqn:potential}), which corresponds to the self-coupling $\lambda$ in the SM, and (ii) the complementarity of the GW signal and the collider searches of (light) $H_3^0$ and the heavy (or light) RHNs in the LRSM. \subsection{Self-couplings of SM-like Higgs boson in the LRSM} \begin{table}[!t] \begin{center} \caption{Comparison of the masses squared, trilinear and quartic couplings of the SM-like Higgs $h$ in the SM and LRSM~\cite{Dev:2016dja, Maiezza:2016ybz}. \label{hselfcpl}} \vspace{5pt} \begin{tabular}{cccc} \hline\hline models & mass squared & $\lambda_{hhh}$ & $\lambda_{hhhh}$ \\ \hline SM & $2 \lambda^{} v_{\rm EW}^2$& $\lambda^{} v_{\rm EW} $ & $\frac14 \lambda^{}$\\ \hline LRSM& $(2 \lambda_1 - \frac{\alpha_1^2}{2\rho_1}) v_{\rm EW}^2$ & $\frac{1}{4}\left(4\lambda_1-\frac{\alpha_1^2}{\rho_1}\right)v_{\rm EW} + \left(4\lambda_4-\frac{\alpha_1\alpha_2}{\rho_1}\right)\xi v_{\rm EW}$ & $\frac14 {\lambda_{1}}$ \\ \hline\hline \end{tabular} \end{center} \end{table} It is interesting to examine how the self-coupling $\lambda$ of the SM-like Higgs boson $h$ can be affected by the BSM scalars in the LRSM.
The SM-like Higgs mass squared, the trilinear coupling $\lambda_{hhh}$ and the quartic coupling $\lambda_{hhhh}$ in the SM and LRSM are collected in Table~\ref{hselfcpl}. Comparing the mass squared of $h$ in the SM and LRSM, we can approximately identify the following relation among the SM and LRSM quartic couplings~\cite{Dev:2016dja, Maiezza:2016ybz} \begin{eqnarray} \label{eqn:lambda1} \lambda_1 - \frac{\alpha_1^2}{4\rho_1} \simeq \lambda \,. \end{eqnarray} As seen in the third column of Table~\ref{hselfcpl}, the trilinear coupling $\lambda_{hhh}$ of the SM-like Higgs in the LRSM only differs from the SM value by a small amount of $\xi\sim10^{-3}$~\cite{Dev:2016dja, Maiezza:2016ybz}. On the contrary, the quartic coupling $\lambda_{hhhh}$ in the LRSM might be significantly different from the SM prediction: as shown in the last column of Table~\ref{hselfcpl}~\cite{Dev:2016dja, Maiezza:2016ybz}, \begin{eqnarray} \label{eqn:diff} \frac14 \lambda_1 - \frac14 \lambda \simeq \frac{\alpha_1^2}{16\rho_1} \,. \label{ldev} \end{eqnarray} In other words, at leading order in the approximations of $v_R \gg v_{\rm EW} \simeq \kappa_1 \gg \kappa_2$, the difference of the quartic coupling of the SM-like Higgs boson between the SM and the LRSM is dominated by the $\alpha_1^2/16\rho_1$ term. As the FOPT and GW in the LRSM favor a small $\rho_1$ coupling, the difference in Eq.~(\ref{eqn:diff}) tends to be significant for sufficiently large $\alpha_1$. Adopting the parameter ranges in Eq.~(\ref{scan}) and taking into account the theoretical and experimental limits in Section~\ref{sec:lrsm}, the scatter plots of the quartic coupling $\lambda_{hhhh}$ and the couplings $\rho_1$, $\alpha_1$ and $y_N$ are shown respectively in the left, middle and right panels of Fig.~\ref{fig:random2}, where the data points with strong FOPT $v_c/T_c>1$ are shown in red, while those with $v_c/T_c<1$ are in blue.
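The size of this deviation is easy to quantify: Eq.~(\ref{eqn:lambda1}) implies $\lambda_1/\lambda = 1 + \alpha_1^2/(4\rho_1\lambda)$. The short Python sketch below evaluates this ratio for a few illustrative sample values of $\alpha_1$ and $\rho_1$ (chosen by hand, not taken from the scan), with $\lambda \simeq 0.13$.

```python
LAMBDA_SM = 0.13  # SM quartic coupling lambda

def lambda1_ratio(alpha1, rho1):
    # lambda_1 / lambda from Eq. (eqn:lambda1): lambda_1 = lambda + alpha_1^2 / (4 rho_1)
    return 1.0 + alpha1 ** 2 / (4.0 * rho1 * LAMBDA_SM)

# A small rho_1 (favored by the strong FOPT) amplifies the deviation
for alpha1, rho1 in [(0.1, 0.1), (0.5, 0.1), (0.5, 0.02)]:
    print(f"alpha_1 = {alpha1}, rho_1 = {rho1}: lambda_1/lambda = {lambda1_ratio(alpha1, rho1):.1f}")
```

For $\rho_1 \sim {\cal O}(10^{-2})$ and $\alpha_1 \sim {\cal O}(0.1)$ the quartic coupling of the SM-like Higgs can thus deviate from the SM value by an order of magnitude, consistent with the scatter plots in Fig.~\ref{fig:random2}.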
It is very clear in Fig.~\ref{fig:random2} that the deviation of the quartic scalar coupling $\lambda_1$ from the SM value $\lambda$ is always positive and can be very large, even up to the order of 10, as expected in Table~\ref{hselfcpl} and Eq.~(\ref{eqn:diff}). We can also read from the left and middle panels of Fig.~\ref{fig:random2} that a large deviation of the quartic coupling of the SM-like Higgs needs a relatively small $\rho_1$ and/or large $\alpha_1$. As given in Eq.~(\ref{eqn:rhoT}), a large $y_N$ tends to decrease $\rho_T$, thus increasing the value of $v_c/T_c$. However, if $y_N$ is too large, say $y_N \gtrsim 1.5$, a negative $\rho_T$ will be obtained, which leads to an unstable vacuum. Thus, the phase transition and GW in the LRSM favor a Yukawa coupling $y_N \sim {\cal O}(0.1)$ to ${\cal O}(1)$. \begin{figure}[!t] \centering \subfigure{\includegraphics[width=0.32\linewidth]{scatter3}} \subfigure{\includegraphics[width=0.32\linewidth]{scatter4}} \subfigure{\includegraphics[width=0.32\linewidth]{scatter5}} \caption{Scatter plots of $\lambda_1/\lambda$ and $\rho_1$ (left), $\alpha_1$ (middle) and $y_N$ (right), where the blue points have $v_c/T_c<1$ and the red ones $v_c/T_c>1$. } \label{fig:random2} \end{figure} On the experimental side, the combined results of di-Higgs searches can be found e.g. in Refs.~\cite{Sirunyan:2018ayu,Aad:2019uzh}. Data from LHC 13 TeV with a luminosity of $36$ fb$^{-1}$ only set a weak constraint $\lambda_{hhh}/\lambda^{\rm SM}_{hhh}\in(-5, 12)$. The LHC 14 TeV with an integrated luminosity of 3 ab$^{-1}$ can probe the trilinear coupling of the SM Higgs within the range of $\lambda_{hhh}/\lambda^{\rm SM}_{hhh}\in(0.7,1.3)$~\cite{Barger:2013jfa}, while the future 100 TeV collider with a luminosity of 30 ab$^{-1}$ can improve the sensitivity to $\lambda_{hhh}/\lambda^{\rm SM}_{hhh}\in(0.9,1.1)$~\cite{Kilian:2017nio}.
However, this is not precise enough to see the deviation of the trilinear coupling in the LRSM, which is of order $10^{-3}$ or smaller. Although the quartic coupling measurements cannot be greatly improved at hadron colliders~\cite{Kilian:2017nio, Chen:2015gva}, a future muon collider with a center-of-mass energy of 14 TeV and a luminosity of 33 ab$^{-1}$ can probe a deviation of the quartic Higgs self-coupling at the level of $50\%$~\cite{Chiesa:2020awd}. This can probe a sizable region of parameter space in Fig.~\ref{fig:random2}. \subsection{Searches of $H_3^0$ and RHNs in the LRSM} \label{sec:H3} As implied by the BPs in Figs.~\ref{fig:massofsamples} and \ref{fig:GWcurves}, the GW signals favor a relatively light $H_3^0$ in the LRSM, and this can be correlated with the direct searches of a (light) $H_3^0$ at the high-energy frontier. At the high-energy colliders, the scalar $H_3^0$ can be produced via two portals~\cite{Dev:2016dja}: \begin{itemize} \item The scalar portal, i.e. the production of $H_3^0$ through its coupling to the SM Higgs $h$. This includes the channels $pp \to h^\ast \to h H_3^0$ and $pp \to h^{(\ast)} \to H_3^0 H_3^0$. The production amplitudes in both channels are proportional to the quartic coupling $\alpha_1$. As the trilinear couplings $\lambda_{hH_3^0 H_3^0}$ and $\lambda_{hh H_3^0}$ are respectively proportional to the VEVs $v_{\rm EW}$ and $v_R$, even if $\alpha_1$ is small, say $\alpha_1 \sim 10^{-2}$, the production cross sections are still sizable. Assuming $\alpha_1= 0.01$ and $v_R = 10$ TeV, the prospects of $H_3^0$ at the LHC 14 TeV with an integrated luminosity of 3 ab$^{-1}$ and the future 100 TeV collider with a luminosity of 30 ab$^{-1}$ are shown as the yellow and brown bands in Fig.~\ref{fig:complementarity}~\cite{Dev:2016dja}. \item The gauge portal, i.e.
the production of $H_3^0$ through its couplings to the heavy $W_R$ and $Z_R$ gauge bosons, in the Higgsstrahlung process $pp \to V_R^\ast \to H_3^0 V_R$ (with $V_R = W_R,\; Z_R$) and the vector boson fusion (VBF) process $pp \to H_3^0 jj$. In light of the current direct LHC constraints on $W_R$ and $Z_R$ (see Section~\ref{sec:experimental}), the prospects of $H_3^0$ at the LHC in these channels are very limited, which however can be largely improved at future 100 TeV colliders. The FCC-hh prospects in the $H_3^0 jj$ and $H_3^0 V_R$ channels are shown respectively as the green and magenta bands in Fig.~\ref{fig:complementarity}. \end{itemize} In obtaining both the scalar and gauge portal prospects, we have set a lower bound on the $H_3^0$ mass, i.e. $M_{H_3^0} > m_h/2\simeq 62.5$ GeV, such that the exotic decay of the SM Higgs $h \to H_3^0 H_3^0$ is kinematically forbidden~\cite{Curtin:2013fra}. The scalar $H_3^0$ mixes with the SM Higgs $h$ and the heavy bidoublet scalar $H_1^0$, which induces the tree-level FCNC couplings of $H_3^0$ to the SM quarks. Therefore, a sufficiently light $H_3^0$ can be produced from flavor-changing meson decays, such as $K \to \pi H_3^0$~\cite{Dev:2016vle,Dev:2017dui}. The high-precision SM meson data have set very severe constraints on the mixing angles of $H_3^0$ with $h$ and $H_1^0$. Therefore in a large region of parameter space the light $H_3^0$ decays predominantly into two photons $H_3^0 \to \gamma\gamma$ through the $W_R$ and heavy charged scalar loops in the LRSM. Suppressed by the heavy particle masses in the loops, the scalar $H_3^0$ tends to be long-lived, and can thus be searched for in the multi-purpose detectors at the high-energy colliders as well as in the dedicated long-lived particle (LLP) experiments. The prospects of long-lived $H_3^0$ at the LHC 14 TeV, FCC-hh, and MATHUSLA~\cite{Curtin:2018mvb} are presented in Fig.~\ref{fig:complementarity} respectively as the orange, red and pink bands~\cite{Dev:2016vle,Dev:2017dui}.
\begin{figure}[!t] \centering \subfigure{\includegraphics[width=0.55\linewidth]{complementarity}} \caption{Complementarity of $H_3^0$ at the colliders and GWs: the orange, pink and red bands are the prospects of a light $H_3^0$ at FCC-hh (100 TeV and 30 ab$^{-1}$), MATHUSLA and LHC (14 TeV and 3 ab$^{-1}$), the brown, yellow, green and magenta bands are the prospects of direct searches of $H_3^0$ at the FCC-hh (and LHC) in the channels $H_3^0 H_3^0$, $h H_3^0$, $H_3^0jj$ and $H_3^0V_R$. The blue band is the GW prospect of the $H_3^0$ mass in the right panel of Fig.~\ref{fig:massofsamples}. } \label{fig:complementarity} \end{figure} The GW prospect of $M_{H_3^0}$ in Fig.~\ref{fig:massofsamples} is indicated by the blue band in Fig.~\ref{fig:complementarity}. As clearly seen in Fig.~\ref{fig:complementarity}, the direct searches of $H_3^0$ at the LHC and future 100 TeV colliders can probe a mass range from roughly 100 GeV up to 3 TeV, while the searches of a long-lived $H_3^0$ at the high-energy colliders can cover the mass range from 10 GeV down to 100 MeV. As a new avenue to probe the phase transition in the LRSM, GWs are sensitive to a wide mass range of $H_3^0$, from the 10 GeV scale up to 10 TeV, which is largely complementary to the searches of (light) $H_3^0$ at the high-energy colliders. Note that one of the important decay modes of $H_3^0$ is the RHN channel, i.e. $H_3^0 \to NN$, which will induce the strikingly clean signal of same-sign dileptons plus jets~\cite{Dev:2016dja,Maiezza:2015lza, Nemevsek:2016enw}. The heavy RHNs can also be produced through their gauge couplings to the $W_R$ and $Z_R$ bosons, e.g. the smoking-gun Keung-Senjanovi\'{c} signal $pp \to W_R \to N \ell^\pm \to \ell^\pm \ell^\pm jj$ at the high-energy $pp$ colliders~\cite{Keung:1983uu}. If the RHNs are very light, say below the 100 GeV scale, the decay widths of RHNs will be highly suppressed by the $W_R$ mass, which makes the RHNs long-lived~\cite{Helo:2013esa, Cottin:2018kmq}.
The light long-lived RHNs can be searched for directly at the high-energy colliders via displaced vertices, or even from meson decays~\cite{Helo:2010cw, Cvetic:2010rw, Drewes:2015iva, Bondarenko:2018ptm}. The prospects of RHNs at the high-energy colliders and in meson decays depend largely on the heavy scalar or gauge boson masses (see also~\cite{Mitra:2016kov, Ruiz:2017nip}). However, it is worth pointing out that, as seen in Fig.~\ref{fig:massofsamples}, GWs are sensitive to the RHN masses in the range of 300 GeV up to 40 TeV, which is largely complementary to the direct searches of (light) RHNs at the high-energy frontier. \section{Discussions and Conclusion} \label{sec:conclusion} Before the conclusion we would like to comment on some open questions in the phase transition and GW production in the LRSM: \begin{itemize} \item In the calculations we have assumed that at the epoch of the phase transition the bubbles expanding in the plasma can reach a relativistic terminal velocity, i.e. the non-runaway scenario, where the bubble wall velocity is taken to be $v_w \simeq 1 $ in our analysis, which corresponds to the detonation case \cite{Espinosa:2010hh}. A recent numerical analysis~\cite{Cutting:2019zws} has revealed that the SW contribution might be suppressed by a factor of $10^{-3}$ in the deflagration case when $\alpha > 0.1$, where the reheated droplets can suppress the formation of GW signals. Since there is no such huge suppression for the detonation case with $\alpha<0.1$, our results should remain valid, although the GW signals might be suppressed by a factor of two or three. The bubble wall velocity can, in principle, be computed from the parameters of a given model, as demonstrated in \cite{Moore:1995ua, Moore:1995si, Bodeker:2009qy}.
Furthermore, according to the recent calculations in Ref.~\cite{Guo:2020grp}, it is found that the finite lifetime of SWs can lead to a suppression factor $\Upsilon$, which can be parameterized in the following form~\cite{Fornal:2020esl} \begin{eqnarray} \label{eqn:upsilon} \Upsilon = 1-\left[1+\frac{8\pi^{1/3}}{\sqrt{3}}v_{w}\frac{H_*}{\beta}\left(\frac{\alpha\kappa_v}{1+\alpha}\right)^{-1/2}\right]^{-1/2}\,. \end{eqnarray} We have calculated the $\Upsilon$ factors for the five BPs in Table~\ref{tab:BPs}, and listed them in the last row of the table. It is observed that the GW signals in these BPs might be suppressed by a factor of roughly 6 to 10. It might be interesting to explore how the model parameters of the LRSM can affect the bubble wall velocity and the effects of the suppression factor $\Upsilon$, which will be a topic for our future study. \item It is remarkable that for the scalar $H_3^0$, which is mainly the CP-even neutral component of the right-handed triplet $\Delta_R$, both the theoretical and experimental constraints are very weak. As a result, its mass could span a wide range, say from below the GeV scale up to tens of TeV. In the case that all other new particles in the LRSM are heavier than 5 TeV while $H^0_3$ is relatively light, below the TeV scale (for instance the BPs BP1 and BP2 in Table~\ref{tab:BPs}), at the scale below 1 TeV, the scalar potential of the LRSM given in Eq.~(\ref{eqn:potential}) reduces to that of an effective model in which the SM is extended by a real singlet $S$, where the scalar potential has the following form: \begin{eqnarray} V(H, S) & \ = \ & -\mu^2 (H^\dagger H) + \frac{1}{2} m_S^2 S^2 + \frac{1}{4} \lambda (H^\dagger H)^2 + \lambda_{3S} S^3 + \lambda_{4S} S^4 \nonumber \\ && + \lambda_{3X} S (H^\dagger H) + \lambda_{4X} S^2 (H^\dagger H) \,.
\label{xsm} \end{eqnarray} The trilinear and quartic couplings in Eq.~(\ref{xsm}) can be written as functions of the right-handed VEV $v_R$ and the quartic couplings in the LRSM, which are collected in Table~\ref{xsmcpl}. Obviously, when $\alpha_1$ is switched off, $H^0_3$ will not affect the EW phase transition directly, and the EW phase transition should be of second order as in the SM. When $\alpha_1$ is switched on, it might be interesting to examine whether the light $H_3^0$ can affect the phase transitions at both the $v_R$ scale and the EW scale. If so, a multi-step strong FOPT could be expected \cite{Angelescu:2018dkk}. \end{itemize} \begin{table}[!t] \begin{center} \caption{Trilinear and quartic couplings in Eq.~(\ref{xsm}) for the SM+singlet model derived from the LRSM. \label{xsmcpl}} \begin{tabular}{|c|c|} \hline trilinear couplings & expressions \\ \hline $\lambda_{3S}$ & $ \sqrt{2} \rho_1 v_R$ \\ \hline $\lambda_{3X}$ & $\frac{1}{\sqrt{2}} \alpha_1 v_R$ \\ \hline \hline quartic couplings & expressions \\ \hline $\lambda$ & $\lambda_1$\\ \hline $\lambda_{4S}$ & $\frac{1}{4} \rho_1$ \\ \hline $\lambda_{4X}$ & $\frac{1}{2} \alpha_1$ \\ \hline \end{tabular} \end{center} \end{table} To summarize, in this paper we have studied the prospects of GW signals from the phase transition in the minimal LRSM with a bidoublet $\Phi$, a left-handed triplet $\Delta_L$ and a right-handed triplet $\Delta_R$, which is a well-motivated framework to restore parity and accommodate the seesaw mechanisms for tiny neutrino masses at the TeV scale. We have considered the theoretical limits on the LRSM from perturbativity, unitarity, vacuum stability and correct vacuum criteria, as well as the experimental constraints on the heavy gauge bosons and the BSM scalars. The experimental limits are collected in Table~\ref{table:bounds} and Fig.~\ref{fig:spectra}.
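Returning to the suppression factor in Eq.~(\ref{eqn:upsilon}), the last row of Table~\ref{tab:BPs} can be reproduced directly from the listed values of $\alpha$ and $\beta/H_*$. A minimal numerical cross-check (with $v_w = 1$ as assumed throughout):

```python
import math

def kappa_v(alpha):
    # Efficiency factor for bulk motion, as defined in the GW section
    return alpha / (0.73 + 0.083 * math.sqrt(alpha) + alpha)

def upsilon(alpha, beta_over_H, v_w=1.0):
    # Finite-lifetime suppression of the sound-wave signal, Eq. (eqn:upsilon)
    prefactor = 8.0 * math.pi ** (1 / 3) / math.sqrt(3.0)
    x = prefactor * v_w / beta_over_H * (alpha * kappa_v(alpha) / (1.0 + alpha)) ** (-0.5)
    return 1.0 - (1.0 + x) ** (-0.5)

# (alpha, beta/H*) of BP1 and BP5 from Table tab:BPs
print(round(upsilon(0.056, 265.0), 2))    # -> 0.16 (BP1)
print(round(upsilon(0.0083, 1908.0), 2))  # -> 0.15 (BP5)
```

Both values agree with the $\Upsilon$ entries quoted in Table~\ref{tab:BPs}.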
With these theoretical and experimental constraints taken into account, we have analyzed the parameter space of strong FOPT and the resultant GWs in the LRSM. As demonstrated in Figs.~\ref{figvT}, \ref{fig:random1} and \ref{fig:random2}, the strong FOPT at the $v_R$ scale favors relatively small quartic and Yukawa couplings, which correspond to relatively light BSM scalars and RHNs. The GWs for some BPs in the LRSM in Fig.~\ref{GWpeak} reveal that the phase transition in the LRSM can generate GW signals of strength $10^{-17}$ to $10^{-12}$, with a frequency ranging from 0.1 to 10 Hz, which can be probed by the experiments BBO and DECIGO, or even by ALIA and MAGIS. Setting $v_R = 10$ TeV, as seen in Fig.~\ref{fig:massofsamples}, the GWs are sensitive to the following mass ranges: \begin{itemize} \item The heavy bidoublet scalars $H_1^0$, $A_1^0$, $H_1^\pm$, the scalars $H_2^0$, $A_2^0$, $H_2^\pm$ and $H_1^{\pm\pm}$ from the left-handed triplet $\Delta_L$, and the doubly-charged scalar $H_2^{\pm\pm}$ from the right-handed triplet $\Delta_R$, with masses up to tens of TeV, whose lower bounds are roughly set by the experimental limits in Fig.~\ref{fig:spectra}. \item The scalar $H_3^0$ with mass in the range of roughly 20 GeV up to 10 TeV. As presented in Fig.~\ref{fig:complementarity}, the GW prospects of $H_3^0$ are largely complementary to the direct searches of heavy $H_3^0$ at the LHC and future 100 TeV colliders, and the searches of light $H_3^0$ from displaced vertex signals at the LHC, future higher energy colliders, and the LLP experiments such as MATHUSLA. \item The RHNs with masses from roughly 300 GeV up to 40 TeV. The GW sensitivity of $M_N$ is also largely complementary to the direct searches of prompt signals and displaced vertices from RHNs at the high-energy colliders, as well as the production of RHNs from meson decays.
\end{itemize} The GW spectra in Fig.~\ref{fig:GWcurves} for the BPs in Table~\ref{tab:BPs} show that the quartic coupling $\rho_1$ is crucially important for both the frequency and strength of the GW signals in the LRSM, while other couplings such as $\rho_2$, $\rho_3-2\rho_1$, $\alpha_3$ and $y_N$ are also important. In addition, the precision measurement of the quartic coupling of the SM Higgs at a future muon collider can probe a sizable region of the parameter space in the LRSM, which can have strong FOPT and observable GW signals, as exemplified in Fig.~\ref{fig:random2}. \section*{Acknowledgments} This work is supported by the Natural Science Foundation of China under grant No.~11575005. Y.Z. would like to thank P. S. Bhupal Dev and Yiyang Zhang for the helpful discussions at the early stage of this paper. The authors would also like to thank Dr. Yi-Dian Chen, Dr. Huaike Guo, Dr. Bartosz Fornal, Dr. Graham Albert White, and Dr. Zhi-Wei Wang for some useful information.
\section{Introduction} \label{sec:introduction} In recent years convex relaxations of many fundamental, yet combinatorially hard, optimization problems in engineering, applied mathematics, and statistics have been introduced \citep{Tro2006}. Good, and sometimes nearly optimal, solutions can be achieved at affordable computational prices for problems that appear at first blush to be computationally intractable. In this paper, we introduce two new algorithmic frameworks based on variable splitting that generalize and extend recent efforts to convexify the classic unsupervised problem of clustering. \citet{LinOhlLju2011} and \citet{HocVerBac2011} formulate the clustering task as a convex optimization problem. Given $n$ points $\bm{\mathbf{x}}_1,\ldots,\bm{\mathbf{x}}_n$ in $\mathbb{R}^p$, they suggest minimizing the convex criterion \begin{eqnarray} F_{\gamma}(\bm{\mathbf{U}}) & = & \frac{1}{2}\sum_{i=1}^n \|\bm{\mathbf{x}}_i-\bm{\mathbf{u}}_i\|_2^2 + \gamma \sum_{i<j}w_{ij} \|\bm{\mathbf{u}}_i-\bm{\mathbf{u}}_j \|, \label{eq:objective_function} \end{eqnarray} where $\gamma$ is a positive tuning constant, $w_{ij}$ is a nonnegative weight, and the $i$th column $\bm{\mathbf{u}}_i$ of the matrix $\bm{\mathbf{U}}$ is the cluster center attached to point $\bm{\mathbf{x}}_i$. \citet{LinOhlLju2011} consider an $\ell_p$ norm penalty on the differences $\bm{\mathbf{u}}_i - \bm{\mathbf{u}}_j$ while \citet{HocVerBac2011} consider $\ell_1$, $\ell_2,$ and $\ell_\infty$ penalties. In the current paper, an arbitrary norm defines the penalty. The objective function bears some similarity to the fused lasso signal approximator \citep{TibSauRos2005}. When the $\ell_1$ penalty is used in definition (\ref{eq:objective_function}), we recover a special case of the General Fused Lasso \citep{Hoe2010,TibTay2011}. In the graphical interpretation of clustering, each point corresponds to a node in a graph, and an edge connects nodes $i$ and $j$ whenever $w_{ij} > 0$.
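To make the criterion concrete, the following minimal Python sketch (our own illustration, not part of the paper; the choice of the $\ell_2$ penalty norm and the column-per-point array layout are assumptions) evaluates $F_{\gamma}(\bm{\mathbf{U}})$ directly:

```python
import numpy as np

def clustering_objective(X, U, W, gamma):
    """Evaluate F_gamma(U): squared-error fidelity plus the weighted sum
    of penalty norms of centroid differences. Columns of the p-by-n arrays
    X and U hold the points x_i and their cluster centers u_i; W is an
    n-by-n array of nonnegative weights w_ij (only entries i < j are read)."""
    n = X.shape[1]
    fidelity = 0.5 * np.sum((X - U) ** 2)
    penalty = sum(
        W[i, j] * np.linalg.norm(U[:, i] - U[:, j])
        for i in range(n) for j in range(i + 1, n) if W[i, j] > 0
    )
    return fidelity + gamma * penalty
```

At $\gamma = 0$ the minimizer is $\bm{\mathbf{U}} = \bm{\mathbf{X}}$ with zero objective value; increasing $\gamma$ trades fidelity for coalescence of the centers.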
\Fig{graph} depicts an example. In this case, the objective function $F_{\gamma}(\bm{\mathbf{U}})$ separates over the connected components of the underlying graph. Thus, one can solve for the optimal $\bm{\mathbf{U}}$ component by component. Without loss of generality, we assume the graph is connected. \begin{figure} \centering \begin{tikzpicture}[>=stealth',shorten >=1pt,auto,node distance=2cm, main node/.style={circle,draw}] \node[main node] (1) {1}; \node[main node] (2) [right of=1] {2}; \node[main node] (3) [right of=2] {3}; \node[main node] (4) [right of=3] {4}; \node[main node] (5) [right of=4] {5}; \path[every node/.style={font=\sffamily\small}] (1) edge [bend right] node[left] {} (5) edge node [left] {} (2) (3) edge node [right] {} (4); \end{tikzpicture} \caption{A graph with positive weights $w_{12}$, $w_{15}$, $w_{34}$ and all other weights $w_{ij} = 0$.} \label{fig:graph} \end{figure} When $\gamma=0$, the minimum is attained when $\bm{\mathbf{u}}_i=\bm{\mathbf{x}}_i$, and each point occupies a unique cluster. As $\gamma$ increases, the cluster centers begin to coalesce. Two points $\bm{\mathbf{x}}_i$ and $\bm{\mathbf{x}}_j$ with $\bm{\mathbf{u}}_i=\bm{\mathbf{u}}_j$ are said to belong to the same cluster. For sufficiently high $\gamma$ all points coalesce into a single cluster. Because the objective function $F_{\gamma}(\bm{\mathbf{U}})$ in equation \Eqn{objective_function} is strictly convex and coercive, it possesses a unique minimum point for each value of $\gamma$. If we plot the solution matrix $\bm{\mathbf{U}}$ as a function of $\gamma$, then we can ordinarily identify those values of $\gamma$ giving $k$ clusters for any integer $k$ between $n$ and $1$. In theory, $k$ can decrement by more than 1 as certain critical values of $\gamma$ are passed. Indeed, when points are not well separated, we observe that many centroids will coalesce abruptly unless care is taken in choosing the weights $w_{ij}$. The benefits of this formulation are manifold.
As we will show, convex relaxation admits a simple and fast iterative algorithm that is guaranteed to converge to the unique global minimizer. In contrast, the classic $k$-means problem has been shown to be NP-hard \citep{AloDesHan2009,DasFre2009}. In addition, the classical greedy algorithm for solving $k$-means clustering often gets trapped in suboptimal local minima \citep{For1965,Llo1982, Mac1967}. \begin{figure} \centering \includegraphics[scale=0.35]{cluster_path} \caption{Cluster path assignment: The simulated example shows five well separated clusters and the assigned clustering from applying the convex clustering algorithm using an $\ell_2$-norm. The lines trace the path of the individual cluster centers as the regularization parameter $\gamma$ increases.} \label{fig:clusterpath} \end{figure} Another vexing issue in clustering is determining the number of clusters. Agglomerative hierarchical clustering \citep{GowRos1969,Joh1967,LanWil1967,Mur1983,War1963} finesses the problem by computing an entire clustering path. Agglomerative approaches, however, can be computationally demanding and tend to fall into suboptimal local minima since coalescence events are not reversed. The alternative convex relaxation considered here performs continuous clustering just as the lasso \citep{CheDonSau1998, Tib1996} performs continuous variable selection. \Fig{clusterpath} shows how the solutions to the alternative convex problem trace out an intuitively appealing, globally optimal, and computationally tractable solution path. \subsection{Contributions} Our main contributions are two new methods for solving the convex relaxation and their application to clustered regression problems. Relatively little work has been published on algorithms for solving this optimization problem. In fact, the only other paper introducing dedicated algorithms for minimizing criterion (\ref{eq:objective_function}) that we are aware of is \cite{HocVerBac2011}.
\citet{LinOhlLju2011} used the off-the-shelf convex solver CVX \citep{CVX2012,GraBoy2008} to generate solution paths. \citet{HocVerBac2011} note that CVX is useful for solving small problems but a dedicated formulation is required for scalability. Thus, they introduced three distinct algorithms for the three most commonly encountered norms. Given the $\ell_1$ norm and unit weights $w_{ij}$, the objective function separates, and they solve the convex clustering problem by the exact path following method designed for the fused lasso \citep{Hoe2010}. For the $\ell_1$ and $\ell_2$ norms with arbitrary weights $w_{ij}$, they employ subgradient descent in conjunction with active sets. Finally, they solve the convex clustering problem under the $\ell_\infty$ norm by viewing it as minimization of a Frobenius norm over a polytope. In this guise, the problem succumbs to the Frank-Wolfe algorithm \citep{FraWol1956} of quadratic programming. In contrast to this piecemeal approach, we introduce two similar generic frameworks for minimizing the convex clustering objective function with an arbitrary norm. One approach solves the problem by the alternating direction method of multipliers (ADMM), while the other solves it by the alternating minimization algorithm (AMA). The key step in both cases computes the proximal map of a given norm. Consequently, both of our algorithms apply provided the penalty norm admits efficient computation of its proximal map. In addition to introducing new algorithms for solving the convex clustering problem, the current paper contributes in other concrete ways: (a) We combine existing results on AMA and ADMM with the special structure of the convex clustering problem to characterize both of the new algorithms theoretically. In particular, the clustering problem formulation gives a minimal set of extra assumptions needed to prove the convergence of the ADMM iterates to the unique global minimum. 
We also explicitly show how the computational and storage complexity of our algorithms scales with the connectivity of the underlying graph. Examination of the dual problem enables us to identify a fixed step size for AMA that is associated with the Laplacian matrix of the underlying graph. Finally, our complexity analysis enables us to rigorously quantify the efficiency of the two algorithms so the two methods can be compared. (b) We provide new proofs of intuitive properties of the solution path. These results are tied solely to the minimization of the objective function \Eqn{objective_function} and hold regardless of the algorithm used to find the minimum point. (c) We provide guidance on how to choose the weights $w_{ij}$. Our suggested choices diminish computational complexity and enhance solution quality. In particular, we show that employing $k$-nearest neighbor weights allows the storage and computation requirements of our algorithms to grow linearly in the problem size. \subsection{Related Work} \label{sec:related} The literature on clustering is immense; the reader can consult the books \citep{Gor1999,Har1975,KauRou1990,Mir1996,WuWun2009} for a comprehensive review. The clustering function \Eqn{objective_function} can be viewed as a convex relaxation of either $k$-means clustering \citep{LinOhlLju2011} or hierarchical agglomerative clustering \citep{HocVerBac2011}. Both of these classical clustering methods \citep{Sne1957,So1948,War1963} come in several varieties. The literature on $k$-means clustering reports notable improvements in the computation \citep{Elk2003} and quality of solutions \citep{ArtVas2007,BraManStr1997,KauRou1990} delivered by the standard greedy algorithms. Faster methods for agglomerative hierarchical clustering have been developed as well \citep{Fra1998}. Many statisticians view the hard cluster assignments of $k$-means as less desirable than the probabilistic assignments generated by mixture models \citep{McL2000,TitSmiMak1985}. 
Mixture models have the advantage of gracefully assigning points to overlapping clusters. These models are amenable to an EM algorithm and can be extended to infinite mixtures \citep{Fer1973,Ras2000,Nea2000}. Alternative approaches to clustering involve identifying components in the associated graph via its Laplacian matrix. Spectral clustering \citep{Lux2007} can be effective in cases when the clusters are non-convex and linearly inseparable. Although spectral clustering is valuable, it does not conflict with convex relaxation. Indeed, \citet{HocVerBac2011} demonstrate that convex clustering can be effectively merged with spectral clustering. Although we agree with this point, the solution path uncovered by convex clustering is meritorious in its own right because it partially obviates the persistent need for determining the number of clusters. \subsection{Notation} \label{sec:notation} Throughout, scalars are denoted by lowercase letters ($a$), vectors by boldface lowercase letters ($\bm{\mathbf{u}}$), and matrices by boldface capital letters ($\bm{\mathbf{U}}$). The $j$th column of a matrix $\bm{\mathbf{U}}$ is denoted by $\bm{\mathbf{u}}_{j}$. At times in our derivations, it will be easier to work with vectorized matrices. We adopt the convention of denoting the vectorization of a matrix $(\bm{\mathbf{U}})$ by its lower case letter in boldface ($\bm{\mathbf{u}}$). Finally, we denote sets by upper case letters ($B$). \subsection{Organization} The rest of the paper is organized as follows. We first characterize the solution path theoretically. Previous papers take intuitive properties of the path for granted. We then review the ADMM and AMA algorithms and adapt them to solve the convex clustering problem. Once the algorithms are specified, we discuss their computational and storage complexity, convergence, and acceleration. We then present some numerical examples of clustering. The paper concludes with a general discussion. 
\section{Properties of the solution path} \label{sec:solutionpath} The solution path $\bm{\mathbf{U}}(\gamma,\bm{\mathbf{w}})$ has several nice properties as a function of the regularization parameter $\gamma$ and its weights $\bm{\mathbf{w}} = \{w_{ij}\}$ that expedite its numerical computation. The proofs of the following two propositions can be found in the Supplemental Materials. \begin{proposition} \label{prop:solution_path_continuity} The solution path $\bm{\mathbf{U}}(\gamma)$ exists and depends continuously on $\gamma$. The path also depends continuously on the weight matrix $\bm{\mathbf{w}}$. \end{proposition} Existence and uniqueness of $\bm{\mathbf{U}}$ set the stage for a well-posed optimization problem. Continuity of $\bm{\mathbf{U}}$ suggests employing homotopy continuation. Indeed, empirically we find great time savings in solving a sequence of problems over a grid of $\gamma$ values when we use the solution of a previous value of $\gamma$ as a warm start or initial value for the next larger $\gamma$ value. We also would like a rigorous argument that the centroids eventually coalesce to a common point as $\gamma$ becomes sufficiently large. For the example shown in \Fig{graph}, we intuitively expect for sufficiently large $\gamma$ that the columns of $\bm{\mathbf{U}}$ satisfy $\bm{\mathbf{u}}_3 = \bm{\mathbf{u}}_4 = \bar{\bm{\mathbf{x}}}_{34}$ and $\bm{\mathbf{u}}_1 = \bm{\mathbf{u}}_2 = \bm{\mathbf{u}}_5 = \bar{\bm{\mathbf{x}}}_{125}$, where $\bar{\bm{\mathbf{x}}}_{34}$ is the mean of $\bm{\mathbf{x}}_3$ and $\bm{\mathbf{x}}_4$ and $\bar{\bm{\mathbf{x}}}_{125}$ is the mean of $\bm{\mathbf{x}}_1$, $\bm{\mathbf{x}}_2,$ and $\bm{\mathbf{x}}_5$. The next proposition confirms our intuition. \begin{proposition} \label{prop:coalesce} Suppose each point corresponds to a node in a graph with an edge between nodes $i$ and $j$ whenever $w_{ij} > 0$.
If this graph is connected, then $F_{\gamma}(\bm{\mathbf{U}})$ is minimized by $\bar{\bm{\mathbf{X}}}$ for $\gamma$ sufficiently large, where each column of $\bar{\bm{\mathbf{X}}}$ equals the average $\bar{\bm{\mathbf{x}}}$ of the $n$ vectors $\bm{\mathbf{x}}_i$. \end{proposition} We close this section by noting that in general the clustering paths are not guaranteed to be agglomerative. In the special case of the $\ell_1$ norm with uniform weights $w_{ij} =1$, \citet{HocVerBac2011} prove that the path is agglomerative. In the same paper they give an $\ell_2$ norm example where the centroids fuse and then unfuse as the regularization parameter increases. This behavior, however, does not seem to occur very frequently in practice. Nonetheless, in the algorithms we describe next, we allow for such fission events to ensure that our computed solution path is truly the global minimizer of the convex criterion (\ref{eq:objective_function}). \section{Algorithms to Compute the Clustering Path} \label{sec:algorithms} Having characterized the solution path $\bm{\mathbf{U}}(\gamma)$, we now tackle the task of computing it. We present two closely related optimization approaches: the alternating direction method of multipliers (ADMM) \citep{BoyParChu2011,GabMer1976, GloMar1975} and the alternating minimization algorithm (AMA) \citep{Tse1991}. Both approaches employ variable splitting to handle the shrinkage penalties in the convex clustering criterion \Eqn{objective_function}. \subsection{Reformulation of Convex Clustering} \label{sec:reformulation} Let us first recast the convex clustering problem as the equivalent constrained problem \begin{equation} \label{eq:split_objective_cluster} \begin{split} &\text{minimize} \; \frac{1}{2}\sum_{i=1}^n \|\bm{\mathbf{x}}_i-\bm{\mathbf{u}}_i\|_2^2 + \gamma \sum_{l \in \mathcal{E}} w_{l} \|\bm{\mathbf{v}}_{l} \| \\ &\text{subject to} \; \bm{\mathbf{u}}_{l_1} - \bm{\mathbf{u}}_{l_2} - \bm{\mathbf{v}}_{l} = \bm{\mathbf{0}}. 
\end{split} \end{equation} Here we index a centroid pair by $l = (l_1, l_2)$ with $l_1 < l_2$, define the set of edges over the non-zero weights $\mathcal{E} = \{ l = (l_1,l_2) : w_l > 0\}$, and introduce a new variable $\bm{\mathbf{v}}_l = \bm{\mathbf{u}}_{l_1} - \bm{\mathbf{u}}_{l_2}$ to account for the difference between the two centroids. The purpose of variable splitting is to simplify optimization with respect to the penalty terms. Splitting methods such as ADMM and AMA have been successfully used to attack similar problems in image restoration \citep{GolOsh2009}. ADMM and AMA are now motivated as variants of the augmented Lagrangian method (ALM) \citep{Hes1969,NocWri2006,Pow1969,Roc1973}. Let us review how ALM approaches the constrained optimization problem \begin{equation} \label{eq:split_objective} \begin{split} &\text{minimize} \; f(\bm{\mathbf{u}}) + g(\bm{\mathbf{v}}) \\ &\text{subject to} \; \bm{\mathbf{A}}\bm{\mathbf{u}} + \bm{\mathbf{B}}\bm{\mathbf{v}} = \bm{\mathbf{c}}, \end{split} \end{equation} which includes the constrained minimization problem \Eqn{split_objective_cluster} as a special case. ALM solves the equivalent problem \begin{equation} \begin{split} \label{eq:split_objective_alm} &\text{minimize}\; f(\bm{\mathbf{u}}) + g(\bm{\mathbf{v}}) + \frac{\nu}{2} \| \bm{\mathbf{c}} - \bm{\mathbf{A}}\bm{\mathbf{u}} - \bm{\mathbf{B}}\bm{\mathbf{v}} \|_2^2, \\ &\text{subject to} \; \bm{\mathbf{A}}\bm{\mathbf{u}} + \bm{\mathbf{B}}\bm{\mathbf{v}} = \bm{\mathbf{c}} \end{split} \end{equation} by imposing a quadratic penalty on deviations from the feasible set. The two problems \Eqn{split_objective} and \Eqn{split_objective_alm} are equivalent because their objective functions coincide for any point $(\bm{\mathbf{u}}, \bm{\mathbf{v}})$ satisfying the equality constraint. We will see in a moment what the purpose of the quadratic penalty term is.
First, recall that finding the minimizer of an equality constrained optimization problem is equivalent to identifying the saddle point of the associated Lagrangian function. The Lagrangian for the ALM problem \begin{eqnarray*} \mathcal{L}_{\nu}(\bm{\mathbf{u}},\bm{\mathbf{v}},\bm{\mathbf{\lambda}}) & = & f(\bm{\mathbf{u}}) + g(\bm{\mathbf{v}}) + \langle \bm{\mathbf{\lambda}}, \bm{\mathbf{c}} - \bm{\mathbf{A}}\bm{\mathbf{u}} - \bm{\mathbf{B}}\bm{\mathbf{v}} \rangle + \frac{\nu}{2} \| \bm{\mathbf{c}} - \bm{\mathbf{A}}\bm{\mathbf{u}} - \bm{\mathbf{B}}\bm{\mathbf{v}} \|_2^2 \end{eqnarray*} invokes the dual variable $\bm{\mathbf{\lambda}}$ as a vector of Lagrange multipliers. If $f(\bm{\mathbf{u}})$ and $g(\bm{\mathbf{v}})$ are convex and $\bm{\mathbf{A}}$ and $\bm{\mathbf{B}}$ have full column rank, then the objective \Eqn{split_objective_alm} is strongly convex, and the dual problem reduces to the unconstrained maximization of a concave function with Lipschitz continuous gradient. The dual problem is therefore a candidate for gradient ascent. In fact, this is the strategy that ALM takes in the updates \begin{equation} \label{eq:alm_updates} \begin{split} (\bm{\mathbf{u}}^{m+1}, \bm{\mathbf{v}}^{m+1}) & \mathop{\:\:\,}\nolimits = \mathop{\:\:\,}\nolimits \underset{\bm{\mathbf{u}}, \bm{\mathbf{v}}}{\arg\min}\; \mathcal{L}_\nu(\bm{\mathbf{u}},\bm{\mathbf{v}}, \bm{\mathbf{\lambda}}^m) \\ \bm{\mathbf{\lambda}}^{m+1} & \mathop{\:\:\,}\nolimits = \mathop{\:\:\,}\nolimits \bm{\mathbf{\lambda}}^m + \nu (\bm{\mathbf{c}} - \bm{\mathbf{A}}\bm{\mathbf{u}}^{m+1} - \bm{\mathbf{B}}\bm{\mathbf{v}}^{m+1}). \\ \end{split} \end{equation} Unfortunately, the minimization of the augmented Lagrangian over $\bm{\mathbf{u}}$ and $\bm{\mathbf{v}}$ jointly is often difficult. ADMM and AMA adopt different strategies in simplifying the minimization subproblem in the ALM updates \Eqn{alm_updates}.
ADMM minimizes the augmented Lagrangian one block of variables at a time. This yields the algorithm \begin{equation} \label{eq:admm_updates} \begin{split} \bm{\mathbf{u}}^{m+1} & \mathop{\:\:\,}\nolimits = \mathop{\:\:\,}\nolimits \underset{\bm{\mathbf{u}}}{\arg\min}\; \mathcal{L}_\nu(\bm{\mathbf{u}},\bm{\mathbf{v}}^m, \bm{\mathbf{\lambda}}^m) \\ \bm{\mathbf{v}}^{m+1} & \mathop{\:\:\,}\nolimits = \mathop{\:\:\,}\nolimits \underset{\bm{\mathbf{v}}}{\arg\min}\; \mathcal{L}_\nu(\bm{\mathbf{u}}^{m+1},\bm{\mathbf{v}}, \bm{\mathbf{\lambda}}^m) \\ \bm{\mathbf{\lambda}}^{m+1} & \mathop{\:\:\,}\nolimits = \mathop{\:\:\,}\nolimits \bm{\mathbf{\lambda}}^m + \nu(\bm{\mathbf{c}} - \bm{\mathbf{A}}\bm{\mathbf{u}}^{m+1} - \bm{\mathbf{B}}\bm{\mathbf{v}}^{m+1}). \end{split} \end{equation} AMA takes a slightly different tack and updates the first block $\bm{\mathbf{u}}$ without augmentation, assuming $f(\bm{\mathbf{u}})$ is strongly convex. This change is accomplished by setting the tuning constant $\nu$ to 0 in the $\bm{\mathbf{u}}$ update. Later we will see that this seemingly innocuous change will pay large dividends in the convex clustering problem. The overall algorithm iterates according to \begin{equation} \label{eq:ama_updates} \begin{split} \bm{\mathbf{u}}^{m+1} & \mathop{\:\:\,}\nolimits = \mathop{\:\:\,}\nolimits \underset{\bm{\mathbf{u}}}{\arg\min}\; \mathcal{L}_0(\bm{\mathbf{u}},\bm{\mathbf{v}}^m, \bm{\mathbf{\lambda}}^m) \\ \bm{\mathbf{v}}^{m+1} & \mathop{\:\:\,}\nolimits = \mathop{\:\:\,}\nolimits \underset{\bm{\mathbf{v}}}{\arg\min}\; \mathcal{L}_\nu(\bm{\mathbf{u}}^{m+1},\bm{\mathbf{v}}, \bm{\mathbf{\lambda}}^m) \\ \bm{\mathbf{\lambda}}^{m+1} & \mathop{\:\:\,}\nolimits = \mathop{\:\:\,}\nolimits \bm{\mathbf{\lambda}}^m + \nu (\bm{\mathbf{c}} - \bm{\mathbf{A}}\bm{\mathbf{u}}^{m+1} - \bm{\mathbf{B}}\bm{\mathbf{v}}^{m+1}). \end{split} \end{equation} Although block descent appears to complicate matters, it often markedly simplifies optimization in the end.
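The ADMM and AMA updates can be exercised on a tiny concrete instance of the splitting problem: minimize $\frac{1}{2}\|\bm{\mathbf{u}}-\bm{\mathbf{x}}\|_2^2 + \gamma\|\bm{\mathbf{v}}\|_1$ subject to $\bm{\mathbf{u}} - \bm{\mathbf{v}} = \bm{\mathbf{0}}$, where both block minimizations admit closed forms. The following Python sketch is our own illustration (not the paper's algorithm for clustering, which appears later); its unique minimizer is the soft-thresholded data:

```python
import numpy as np

def soft(z, t):
    # element-wise soft-thresholding = prox of t*||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def admm_toy(x, gamma, nu=1.0, iters=500):
    """ADMM iterates for: minimize 0.5||u-x||^2 + gamma||v||_1
    subject to u - v = 0, i.e. A = I, B = -I, c = 0."""
    u, v, lam = x.copy(), np.zeros_like(x), np.zeros_like(x)
    for _ in range(iters):
        u = (x + lam + nu * v) / (1.0 + nu)   # argmin_u of L_nu(u, v, lam)
        v = soft(u - lam / nu, gamma / nu)    # argmin_v of L_nu(u, v, lam)
        lam = lam + nu * (v - u)              # multiplier (dual ascent) step
    return u

def ama_toy(x, gamma, nu=1.0, iters=500):
    """AMA iterates: same problem, but u minimizes the ordinary
    Lagrangian (nu = 0), exploiting strong convexity of f."""
    u, v, lam = x.copy(), np.zeros_like(x), np.zeros_like(x)
    for _ in range(iters):
        u = x + lam                           # argmin_u of L_0(u, v, lam)
        v = soft(u - lam / nu, gamma / nu)
        lam = lam + nu * (v - u)
    return u
```

Both iterations converge to `soft(x, gamma)`, the prox of $\gamma\|\cdot\|_1$ at $\bm{\mathbf{x}}$; on this toy problem AMA's unaugmented $\bm{\mathbf{u}}$ update converges in very few passes.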
In the case of convex clustering, the updates are either simple linear transformations or evaluations of proximal maps. \subsection{Proximal Map} \label{sec:proximal} For $\sigma > 0$ the function \begin{eqnarray*} \mathop{\rm prox}\nolimits_{\sigma \Omega}(\bm{\mathbf{u}}) & = & \underset{\bm{\mathbf{v}}}{\arg\min}\;\left[\sigma \Omega(\bm{\mathbf{v}})+ \frac{1}{2} \| \bm{\mathbf{u}} - \bm{\mathbf{v}} \|_2^2 \right] \end{eqnarray*} is a well-studied operation called the proximal map of the function $\Omega(\bm{\mathbf{v}})$. The proximal map exists and is unique whenever the function $\Omega(\bm{\mathbf{v}})$ is convex and lower semicontinuous. Norms satisfy these conditions, and for many norms of interest the proximal map can be evaluated by either an explicit formula or an efficient algorithm. \Tab{prox} lists some common examples. The proximal maps for the $\ell_1$ and $\ell_2$ norms have explicit solutions and can be computed in $\mathcal{O}(p)$ operations for a vector $\bm{\mathbf{v}} \in \mathbb{R}^p$. Another common example is the $\ell_{1,2}$ norm \begin{eqnarray*} \|\bm{\mathbf{v}}\|_{1,2} & = & \sum_{g \in \mathcal{G}} \| \bm{\mathbf{v}}_g \|_2, \end{eqnarray*} which partitions the components of $\bm{\mathbf{v}}$ into non-overlapping groups $\mathcal{G}$. In this case there is also a simple shrinkage formula. The proximal map for the $\ell_\infty$ norm requires projection onto the unit simplex and lacks an explicit solution. However, there are good algorithms for projecting onto the unit simplex \citep{DucShaSin2008,Mic1986}. In particular, Duchi et al.\@'s projection algorithm makes it possible to evaluate $\mathop{\rm prox}\nolimits_{\sigma\| \cdot \|_\infty}(\bm{\mathbf{v}})$ in $\mathcal{O}(p\log p)$ operations. \begin{table}[th] \caption[Proximal Map]{Proximal maps for common norms. 
\label{tab:prox}} \centering \begin{tabular}{cccc}\\ \toprule Norm & $\Omega(\bm{\mathbf{v}})$ & $\mathop{\rm prox}\nolimits_{\sigma\Omega}(\bm{\mathbf{v}})$ & Comment \\ \midrule $\ell_1$ & $\| \bm{\mathbf{v}} \|_1$ & $\left [ 1 - \frac{\sigma}{| v_l |} \right ]_+ v_l$ & Element-wise soft-thresholding \\ \midrule $\ell_2$ & $\| \bm{\mathbf{v}} \|_2$ & $\left [1 - \frac{\sigma}{\| \bm{\mathbf{v}} \|_2} \right]_+ \bm{\mathbf{v}}$ & Block-wise soft-thresholding \\ \midrule $\ell_{1,2}$ & $\sum_{g \in \mathcal{G}} \| \bm{\mathbf{v}}_g \|_2$ & $\left [1 - \frac{\sigma}{\| \bm{\mathbf{v}}_g \|_2} \right]_+ \bm{\mathbf{v}}_g$ & $\mathcal{G}$ is a partition of $\{1, \ldots, p\}$ \\ \midrule $\ell_\infty$ & $\| \bm{\mathbf{v}} \|_\infty$ & $\bm{\mathbf{v}} - \mathcal{P}_{\sigma S}(\bm{\mathbf{v}})$ & $S$ is the unit simplex \\ \bottomrule \end{tabular} \end{table} \subsection{ADMM updates} \label{sec:ADMM} The augmented Lagrangian is given by \begin{equation} \label{eq:augmented_Lagrangian} \begin{split} \mathcal{L}_\nu(\bm{\mathbf{U}},\bm{\mathbf{V}},\bm{\mathbf{\Lambda}}) & \mathop{\:\:\,}\nolimits = \mathop{\:\:\,}\nolimits \frac{1}{2}\sum_{i=1}^n \|\bm{\mathbf{x}}_i-\bm{\mathbf{u}}_i\|_2^2 + \gamma \sum_{l \in \mathcal{E}} w_{l} \|\bm{\mathbf{v}}_{l} \| \\ & + \sum_{l \in \mathcal{E}} \langle \bm{\mathbf{\lambda}}_{l}, \bm{\mathbf{v}}_{l}-\bm{\mathbf{u}}_{l_1} +\bm{\mathbf{u}}_{l_2} \rangle + \, \frac{\nu}{2}\, \sum_{l \in \mathcal{E}} \|\bm{\mathbf{v}}_{l}- \bm{\mathbf{u}}_{l_1} +\bm{\mathbf{u}}_{l_2} \|_2^2, \end{split} \end{equation} where $\mathcal{E}$ is the set of edges corresponding to non-zero weights. 
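The only entry in the table of proximal maps without an explicit formula is the $\ell_\infty$ case. A minimal Python sketch of it (our own implementation, written via the dual-norm $\ell_1$ ball and the sort-based projection of Duchi et al., which reduces to the simplex projection for nonnegative inputs) is:

```python
import numpy as np

def project_l1_ball(v, r):
    """Euclidean projection of v onto the l1 ball of radius r,
    by sorting |v| (Duchi et al., 2008); O(p log p) operations."""
    if np.sum(np.abs(v)) <= r:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]          # sorted magnitudes, descending
    css = np.cumsum(u)
    # largest index k with u_k * k > css_k - r (1-based condition)
    k = np.nonzero(u * np.arange(1, len(v) + 1) > css - r)[0][-1]
    theta = (css[k] - r) / (k + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(v, sigma):
    """prox of sigma*||.||_inf: v minus its projection onto the
    sigma-scaled ball of the dual (l1) norm."""
    return v - project_l1_ball(v, sigma)
```

When $\sigma \geq \|\bm{\mathbf{v}}\|_1$ the projection returns $\bm{\mathbf{v}}$ itself and the prox collapses to $\bm{\mathbf{0}}$, consistent with the subdifferential of $\|\cdot\|_\infty$ at the origin being the unit $\ell_1$ ball.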
To update $\bm{\mathbf{U}}$ we need to minimize the following function \begin{eqnarray*} f(\bm{\mathbf{U}}) = \frac{1}{2}\sum_{i=1}^n \|\bm{\mathbf{x}}_i-\bm{\mathbf{u}}_i\|_2^2 + \frac{\nu}{2}\, \sum_{l \in \mathcal{E}} \|\bm{\tilde{\mathbf{v}}}_{l} - \bm{\mathbf{u}}_{l_1} +\bm{\mathbf{u}}_{l_2} \|_2^2, \end{eqnarray*} where $\bm{\tilde{\mathbf{v}}}_{l} = \bm{\mathbf{v}}_{l} + \nu^{-1}\bm{\mathbf{\lambda}}_{l}$. We can rewrite the above function in terms of $\bm{\mathbf{u}}$ instead of the columns $\bm{\mathbf{u}}_i$ of the matrix $\bm{\mathbf{U}}$, namely \begin{eqnarray*} f(\bm{\mathbf{u}}) = \frac{1}{2} \lVert \bm{\mathbf{x}} - \bm{\mathbf{u}} \rVert_2^2 + \frac{\nu}{2}\sum_{l \in \mathcal{E}} \| \bm{\mathbf{A}}_{l} \bm{\mathbf{u}} - \bm{\tilde{\mathbf{v}}}_{l} \|_2^2, \end{eqnarray*} where $\bm{\mathbf{A}}_{l} = \left [ (\bm{\mathbf{e}}_{l_1} - \bm{\mathbf{e}}_{l_2})^t \Kron \bm{\mathbf{I}} \right]$ and $\Kron$ denotes the Kronecker product. One can see this by noting that $\bm{\mathbf{u}}_{l_1} - \bm{\mathbf{u}}_{l_2} = \bm{\mathbf{U}}(\bm{\mathbf{e}}_{l_1} - \bm{\mathbf{e}}_{l_2})$ and applying the identity \begin{eqnarray} \label{eq:vec} \text{vec$(\bm{\mathbf{S}}\bm{\mathbf{T}})$ = $[\bm{\mathbf{T}}^t \Kron \bm{\mathbf{I}}]$vec$(\bm{\mathbf{S}})$.} \end{eqnarray} We can further simplify $f(\bm{\mathbf{u}})$.
If $\varepsilon = \lvert \mathcal{E} \rvert$ denotes the number of non-zero weights, then \begin{eqnarray*} f(\bm{\mathbf{u}}) = \frac{1}{2}\lVert \bm{\mathbf{u}} - \bm{\mathbf{x}} \rVert_2^2 + \frac{\nu}{2} \lVert \bm{\mathbf{A}} \bm{\mathbf{u}} - \bm{\tilde{\mathbf{v}}} \rVert_2^2, \end{eqnarray*} where \begin{eqnarray*} \begin{gathered} \bm{\mathbf{A}}^t \mathop{\:\:\,}\nolimits = \mathop{\:\:\,}\nolimits \begin{pmatrix} \bm{\mathbf{A}}_{1}^t & \cdots & \bm{\mathbf{A}}_{\varepsilon}^t \end{pmatrix} \qtext{and} \bm{\tilde{\mathbf{v}}}^t \mathop{\:\:\,}\nolimits = \mathop{\:\:\,}\nolimits \begin{pmatrix} \bm{\tilde{\mathbf{v}}}_{1}^t & \cdots & \bm{\tilde{\mathbf{v}}}_{\varepsilon}^t \end{pmatrix}. \end{gathered} \end{eqnarray*} The stationarity condition requires solving the linear system of equations \begin{eqnarray*} [\bm{\mathbf{I}} + \nu\bm{\mathbf{A}}^t\bm{\mathbf{A}}]\bm{\mathbf{u}} = \bm{\mathbf{x}} + \bm{\mathbf{A}}^t\bm{\tilde{\mathbf{v}}}. \end{eqnarray*} The above system consists of $np$ equations in $np$ unknowns but has quite a bit of structure that we can exploit. In fact, solving the above linear system is equivalent to solving a smaller system of $n$ equations in $n$ unknowns. Note that \begin{eqnarray*} \bm{\mathbf{I}} + \nu\bm{\mathbf{A}}^t\bm{\mathbf{A}} & = & \left [ \bm{\mathbf{I}} + \nu\sum_{l \in \mathcal{E}} (\bm{\mathbf{e}}_{l_1} - \bm{\mathbf{e}}_{l_2})(\bm{\mathbf{e}}_{l_1} - \bm{\mathbf{e}}_{l_2})^t \right ] \Kron \bm{\mathbf{I}} \\ \bm{\mathbf{A}}^t\bm{\tilde{\mathbf{v}}} & = & \sum_{l \in \mathcal{E}} [(\bm{\mathbf{e}}_{l_1} - \bm{\mathbf{e}}_{l_2}) \Kron \bm{\mathbf{I}}]\bm{\tilde{\mathbf{v}}}_{l}.
\end{eqnarray*} Applying the above equalities, the identity (\ref{eq:vec}), and the fact that $[\bm{\mathbf{S}} \Kron \bm{\mathbf{T}}]^{-1} = \bm{\mathbf{S}}^{-1} \Kron \bm{\mathbf{T}}^{-1}$ when $\bm{\mathbf{S}}$ and $\bm{\mathbf{T}}$ are invertible gives the following equivalent linear system \begin{eqnarray} \label{eq:u_update_linear_system} \bm{\mathbf{U}} \bm{\mathbf{M}} = \bm{\mathbf{X}} + \sum_{l \in \mathcal{E}} \bm{\tilde{\mathbf{v}}}_{l} (\bm{\mathbf{e}}_{l_1} - \bm{\mathbf{e}}_{l_2})^t, \end{eqnarray} where \begin{eqnarray*} \bm{\mathbf{M}} = \bm{\mathbf{I}} + \nu\sum_{l \in \mathcal{E}} (\bm{\mathbf{e}}_{l_1} - \bm{\mathbf{e}}_{l_2})(\bm{\mathbf{e}}_{l_1} - \bm{\mathbf{e}}_{l_2})^t. \end{eqnarray*} If the edge set $\mathcal{E}$ contains all possible edges, then the update for $\bm{\mathbf{U}}$ can be computed analytically. The key observation is that in the completely connected case \begin{eqnarray*} \sum_{l \in \mathcal{E}} (\bm{\mathbf{e}}_{l_1} - \bm{\mathbf{e}}_{l_2})(\bm{\mathbf{e}}_{l_1} - \bm{\mathbf{e}}_{l_2})^t = n\bm{\mathbf{I}} - \bm{\mathbf{1}}\bm{\mathbf{1}}^t. \end{eqnarray*} Thus, the matrix $\bm{\mathbf{M}}$ can be expressed as the sum of a diagonal matrix and a rank-1 matrix, namely \begin{eqnarray*} \bm{\mathbf{M}} = (1 + n\nu)\bm{\mathbf{I}} - \nu \bm{\mathbf{1}}\bm{\mathbf{1}}^t. \end{eqnarray*} Applying the Sherman-Morrison formula, we can write the inverse of $\bm{\mathbf{M}}$ as \begin{eqnarray*} \bm{\mathbf{M}}^{-1} = \frac{1}{1 + n\nu} \left [\bm{\mathbf{I}} + \nu \bm{\mathbf{1}}\bm{\mathbf{1}}^t \right]. \end{eqnarray*} Thus, \begin{eqnarray*} \bm{\mathbf{U}} = \frac{1}{1 + n\nu} \left [\bm{\mathbf{X}} + \sum_{l \in \mathcal{E}} \bm{\tilde{\mathbf{v}}}_{l} (\bm{\mathbf{e}}_{l_1} - \bm{\mathbf{e}}_{l_2})^t \right ]\left [\bm{\mathbf{I}} + \nu\bm{\mathbf{1}}\bm{\mathbf{1}}^t \right ].
\end{eqnarray*} After some algebraic manipulations on the above equations, we arrive at the following updates \begin{eqnarray*} \bm{\mathbf{u}}_i & = & \frac{1}{1+n\nu} \bm{\mathbf{y}}_i + \frac{n \nu}{1 + n\nu} \bar{\bm{\mathbf{x}}}, \end{eqnarray*} where $\bar{\bm{\mathbf{x}}}$ is the average column of $\bm{\mathbf{X}}$ and \begin{eqnarray*} \bm{\mathbf{y}}_i & = & \bm{\mathbf{x}}_i + \sum_{l_1 = i} [\bm{\mathbf{\lambda}}_{l} + \nu \bm{\mathbf{v}}_{l} ]-\sum_{l_2 = i}[\bm{\mathbf{\lambda}}_{l} + \nu \bm{\mathbf{v}}_{l} ]. \end{eqnarray*} Before deriving the updates for $\bm{\mathbf{V}}$, we remark that while using a fully connected weight graph allows us to write explicit updates for $\bm{\mathbf{U}}$, doing so comes at the cost of increasing the number of variables $\bm{\mathbf{v}}_l$ and $\bm{\mathbf{\lambda}}_l$. Such choices are not immaterial, and we will discuss these tradeoffs later in the paper. To update $\bm{\mathbf{V}}$, we first observe that the Lagrangian $\mathcal{L}_\nu(\bm{\mathbf{U}},\bm{\mathbf{V}},\bm{\mathbf{\Lambda}})$ is separable in the vectors $\bm{\mathbf{v}}_{l}$. A particular difference vector $\bm{\mathbf{v}}_{l}$ is determined by the proximal map \begin{equation} \label{eq:update_v} \begin{split} \bm{\mathbf{v}}_{l} & \mathop{\:\:\,}\nolimits = \mathop{\:\:\,}\nolimits \underset{\bm{\mathbf{v}}_l}{\arg\min} \; \frac{1}{2} \|\bm{\mathbf{v}}_l - (\bm{\mathbf{u}}_{l_1} - \bm{\mathbf{u}}_{l_2} - \nu^{-1}\bm{\mathbf{\lambda}}_{l})\|_2^2 + \frac{\gamma w_{l}}{\nu} \| \bm{\mathbf{v}}_l \| \\ & \mathop{\:\:\,}\nolimits = \mathop{\:\:\,}\nolimits \mathop{\rm prox}\nolimits_{\sigma_l \| \cdot \|}(\bm{\mathbf{u}}_{l_1} - \bm{\mathbf{u}}_{l_2} - \nu^{-1}\bm{\mathbf{\lambda}}_{l}), \end{split} \end{equation} where $\sigma_l = \gamma w_l/\nu$.
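Under the $\ell_2$ penalty norm, this $\bm{\mathbf{v}}$-update is just block-wise soft-thresholding applied edge by edge. A minimal Python sketch (function names, the edge list, and the dictionary layout for the multipliers are our own, not the paper's):

```python
import numpy as np

def prox_l2(z, sigma):
    # block-wise soft-thresholding: prox of sigma*||.||_2
    nz = np.linalg.norm(z)
    if nz == 0.0:
        return z
    return max(0.0, 1.0 - sigma / nz) * z

def update_v(U, Lam, edges, weights, gamma, nu):
    """One pass of v-updates: for each edge l = (l1, l2),
    v_l = prox_{sigma_l ||.||}(u_{l1} - u_{l2} - lambda_l / nu)
    with sigma_l = gamma * w_l / nu. Columns of U are centroids;
    Lam maps an edge index to its multiplier vector."""
    V = {}
    for l, (i, j) in enumerate(edges):
        sigma_l = gamma * weights[l] / nu
        V[l] = prox_l2(U[:, i] - U[:, j] - Lam[l] / nu, sigma_l)
    return V
```

When $\sigma_l$ exceeds the norm of the shifted difference, the edge variable is shrunk all the way to $\bm{\mathbf{0}}$, which is exactly the mechanism by which centroid pairs fuse along the regularization path.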
Finally, the Lagrange multipliers are updated by \begin{eqnarray*} \bm{\mathbf{\lambda}}_{l} & = & \bm{\mathbf{\lambda}}_{l} + \nu( \bm{\mathbf{v}}_{l}-\bm{\mathbf{u}}_{l_1}+\bm{\mathbf{u}}_{l_2}). \end{eqnarray*} \Alg{ADMM} summarizes the updates. To track the progress of ADMM we use standard methods given in \citep{BoyParChu2011} based on primal and dual residuals. Details on the stopping rules that we employ are given in the Supplemental Materials. \begin{algorithm}[t] \caption{ADMM} \label{alg:ADMM} Initialize $\bm{\mathbf{\Lambda}}^0$ and $\bm{\mathbf{V}}^0$. \begin{algorithmic}[1] \For{$m = 1, 2, 3, \ldots$} \For{$i = 1, \ldots, n$} \State $\bm{\mathbf{y}}_i = \bm{\mathbf{x}}_i + \sum_{l_1 = i} [\bm{\mathbf{\lambda}}^{m-1}_l + \nu \bm{\mathbf{v}}^{m-1}_l] - \sum_{l_2 = i} [\bm{\mathbf{\lambda}}^{m-1}_l + \nu \bm{\mathbf{v}}^{m-1}_l] $ \EndFor \State $\bm{\mathbf{U}}^{m} = \frac{1}{1+n\nu} \bm{\mathbf{Y}} + \frac{n\nu}{1 + n\nu} \bar{\bm{\mathbf{X}}}$ \ForAll{$l$} \State $\bm{\mathbf{v}}^m_l = \mathop{\rm prox}\nolimits_{\sigma_l \| \cdot \|}(\bm{\mathbf{u}}^{m}_{l_1} - \bm{\mathbf{u}}^{m}_{l_2} - \nu^{-1}\bm{\mathbf{\lambda}}^{m-1}_{l})$ \State $\bm{\mathbf{\lambda}}^m_{l} = \bm{\mathbf{\lambda}}^{m-1}_{l} + \nu( \bm{\mathbf{v}}^m_{l}-\bm{\mathbf{u}}^m_{l_1}+\bm{\mathbf{u}}^m_{l_2})$ \EndFor \EndFor \end{algorithmic} \end{algorithm} \subsection{AMA updates} \label{sec:AMA} Since AMA shares its update rules for $\bm{\mathbf{V}}$ and $\bm{\mathbf{\Lambda}}$ with ADMM, consider updating $\bm{\mathbf{U}}$. Recall that AMA updates $\bm{\mathbf{U}}$ by minimizing the ordinary Lagrangian ($\nu =0$ case), namely \begin{eqnarray*} \bm{\mathbf{U}}^{m+1} & = & \underset{\bm{\mathbf{U}}}{\arg\min}\; \frac{1}{2} \sum_{i=1}^n \| \bm{\mathbf{x}}_i - \bm{\mathbf{u}}_i \|_2^2 + \sum_{l} \langle \bm{\mathbf{\lambda}}^m_{l}, \bm{\mathbf{v}}_l - \bm{\mathbf{u}}_{l_1} + \bm{\mathbf{u}}_{l_2} \rangle. 
\\ \end{eqnarray*} In contrast to ADMM, this minimization separates in each $\bm{\mathbf{u}}_i$ and gives an update that does not depend on $\bm{\mathbf{v}}_l$ \begin{eqnarray*} \bm{\mathbf{u}}_i^{m+1} & = & \bm{\mathbf{x}}_i + \sum_{l_1 = i} \bm{\mathbf{\lambda}}^m_{l} - \sum_{l_2 = i} \bm{\mathbf{\lambda}}^m_{l}. \\ \end{eqnarray*} Further scrutiny of the updates for $\bm{\mathbf{V}}$ and $\bm{\mathbf{\Lambda}}$ reveals additional simplifications. Moreau's decomposition \citep{ComWaj2005} \begin{eqnarray*} \bm{\mathbf{z}} & = & \mathop{\rm prox}\nolimits_{t h}(\bm{\mathbf{z}})+t \mathop{\rm prox}\nolimits_{t^{-1}h^\star}(t^{-1}\bm{\mathbf{z}}) \end{eqnarray*} allows one to express the proximal map of a function $h$ in terms of the proximal map of its Fenchel conjugate $h^\star$. This decomposition generalizes the familiar orthogonal projection decomposition, namely $\bm{\mathbf{z}} = \mathcal{P}_W(\bm{\mathbf{z}}) + \mathcal{P}_{W^\perp}(\bm{\mathbf{z}})$ where $W$ is a closed Euclidean subspace and $W^\perp$ is its orthogonal complement. If $h(\bm{\mathbf{z}}) = \|\bm{\mathbf{z}}\|$ is a norm, then $h^\star(\bm{\mathbf{z}}) = \delta_B(\bm{\mathbf{z}})$ is the convex indicator function of the unit ball $B=\{ \bm{\mathbf{y}} : \| \bm{\mathbf{y}} \|_\dagger \leq 1\}$ of the dual norm $\| \cdot \|_\dagger$, namely the function that is 0 on $B$ and $\infty$ otherwise. 
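Moreau's decomposition is easy to verify numerically. For the $\ell_2$ norm, whose dual norm is again the $\ell_2$ norm, positive homogeneity gives $t \mathop{\rm prox}\nolimits_{t^{-1}\delta_B}(t^{-1}\bm{\mathbf{z}}) = \mathcal{P}_{tB}(\bm{\mathbf{z}})$, so the decomposition says the proximal map and the projection onto the ball $tB$ split any $\bm{\mathbf{z}}$ exactly. A short numerical check (an illustration, not part of the derivation):

```python
import numpy as np

def prox_l2_norm(z, t):
    """Proximal map of t * ||.||_2 via the closed-form shrinkage."""
    nrm = np.linalg.norm(z)
    return np.zeros_like(z) if nrm <= t else (1.0 - t / nrm) * z

def project_l2_ball(z, t):
    """Euclidean projection onto the l2 ball of radius t."""
    nrm = np.linalg.norm(z)
    return z if nrm <= t else (t / nrm) * z

# Moreau: z = prox_{t||.||_2}(z) + P_{tB}(z) for every z and t > 0.
```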
Because the proximal map of the indicator function of a closed convex set collapses to projection onto the set, Moreau's decomposition leads to the identity \begin{eqnarray} \mathop{\rm prox}\nolimits_{t h}(\bm{\mathbf{z}}) & = & \bm{\mathbf{z}}- t \mathop{\rm prox}\nolimits_{t^{-1}\delta_B}(t^{-1}\bm{\mathbf{z}}) \mathop{\:\:\,}\nolimits = \mathop{\:\:\,}\nolimits \bm{\mathbf{z}} - t \mathcal{P}_{B}(t^{-1}\bm{\mathbf{z}}) \label{eq:Moreau_projection} \mathop{\:\:\,}\nolimits = \mathop{\:\:\,}\nolimits \bm{\mathbf{z}} - \mathcal{P}_{tB}(\bm{\mathbf{z}}), \end{eqnarray} where $\mathcal{P}_B(\bm{\mathbf{z}})$ denotes projection onto $B$. In this derivation the identity $t^{-1}\delta_B = \delta_B$ holds because $\delta_B$ takes only the values 0 and $\infty$. Applying the projection formula \Eqn{Moreau_projection} to the $\bm{\mathbf{v}}_l$ update \Eqn{update_v} yields the revised update \begin{eqnarray*} \bm{\mathbf{v}}^{m+1}_{l} & = & \bm{\mathbf{u}}^{m+1}_{l_1} - \bm{\mathbf{u}}^{m+1}_{l_2} - \nu^{-1}\bm{\mathbf{\lambda}}^{m}_l - \mathcal{P}_{tB}[\bm{\mathbf{u}}^{m+1}_{l_1} -\bm{\mathbf{u}}^{m+1}_{l_2} - \nu^{-1}\bm{\mathbf{\lambda}}^m_l], \end{eqnarray*} for the constant $t = \sigma_l = \gamma w_l/\nu$. The update for $\bm{\mathbf{\lambda}}_l$ is given by \begin{eqnarray*} \bm{\mathbf{\lambda}}^{m+1}_{l} & = & \bm{\mathbf{\lambda}}^{m}_{l} + \nu( \bm{\mathbf{v}}^{m+1}_{l}-\bm{\mathbf{u}}^{m+1}_{l_1}+\bm{\mathbf{u}}^{m+1}_{l_2}). \end{eqnarray*} Substituting for the above alternative expression for $\bm{\mathbf{v}}_l^{m+1}$ leads to substantial cancellations and the revised formula \begin{eqnarray*} \bm{\mathbf{\lambda}}^{m+1}_{l} & = & - \nu \mathcal{P}_{tB}[\bm{\mathbf{u}}^{m+1}_{l_1} - \bm{\mathbf{u}}^{m+1}_{l_2} - \nu^{-1}\bm{\mathbf{\lambda}}^m_{l}]. 
\end{eqnarray*} The identities $-\mathcal{P}_{tB}(\bm{\mathbf{z}}) = \mathcal{P}_{tB}(-\bm{\mathbf{z}})$ and $a\mathcal{P}_{tB}(\bm{\mathbf{z}}) = \mathcal{P}_{atB}(a\bm{\mathbf{z}})$ for $a > 0$ further simplify the update to \begin{eqnarray*} \bm{\mathbf{\lambda}}^{m+1}_{l} & = & \mathcal{P}_{C_l}(\bm{\mathbf{\lambda}}^{m}_{l} - \nu \bm{\mathbf{g}}^{m+1}_l), \end{eqnarray*} where $\bm{\mathbf{g}}^{m}_l = \bm{\mathbf{u}}^{m}_{l_1} - \bm{\mathbf{u}}^{m}_{l_2}$ and $C_l = \{\bm{\mathbf{\lambda}}_l: \|\bm{\mathbf{\lambda}}_l\|_\dagger \le \gamma w_l\}$. \Alg{AMA} summarizes the AMA algorithm. We highlight the fact that we no longer need to compute and store $\bm{\mathbf{v}}$ to perform the AMA updates. Note that the algorithm looks remarkably like a projected gradient algorithm. Indeed, \citet{Tse1991} shows that AMA is actually performing proximal gradient ascent to maximize the dual problem. The dual of the convex clustering problem (\ref{eq:split_objective_cluster}) is \begin{equation} \label{eq:dual_function} \begin{split} D_\gamma(\bm{\mathbf{\Lambda}}) & \mathop{\:\:\,}\nolimits = \mathop{\:\:\,}\nolimits \underset{\bm{\mathbf{U}}, \bm{\mathbf{V}}}\inf \; \mathcal{L}_{0}(\bm{\mathbf{U}},\bm{\mathbf{V}},\bm{\mathbf{\Lambda}}) \\ & \mathop{\:\:\,}\nolimits = \mathop{\:\:\,}\nolimits -\frac{1}{2} \sum_{i=1}^n \| \bm{\mathbf{\Delta}}_i \|_2^2 - \sum_l \langle \bm{\mathbf{\lambda}}_l, \bm{\mathbf{x}}_{l_1} - \bm{\mathbf{x}}_{l_2} \rangle - \sum_{l} \delta_{C_l}(\bm{\mathbf{\lambda}}_l), \end{split} \end{equation} where \begin{eqnarray*} \bm{\mathbf{\Delta}}_i & = & \sum_{l : l_1 = i} \bm{\mathbf{\lambda}}_l - \sum_{l : l_2 = i} \bm{\mathbf{\lambda}}_l . \end{eqnarray*} A derivation of the dual is given in the Supplemental Materials. Since the dual is essentially a constrained least squares problem, it is hardly surprising that it can be solved numerically by the classic projected gradient algorithm.
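To make the updates concrete, the sketch below implements plain AMA for the $\ell_2$ norm on a toy fully connected problem with unit weights. With $\gamma$ large enough that every pair of centroids fuses, the recovered centroids $\bm{\mathbf{u}}_i = \bm{\mathbf{x}}_i + \bm{\mathbf{\Delta}}_i$ should all collapse to the grand mean. This is an illustrative sketch with made-up data and parameters, not our released R/C implementation:

```python
import numpy as np

def ama_l2(X, edges, weights, gamma, nu, iters=2000):
    """Plain AMA for convex clustering with the l2 norm.

    X is p-by-n; edges is a list of ordered pairs (l1, l2);
    nu should satisfy nu < 2 / rho(L) for the graph Laplacian L.
    """
    p, n = X.shape
    lam = np.zeros((len(edges), p))
    for _ in range(iters):
        # Delta_i = sum_{l1 = i} lambda_l - sum_{l2 = i} lambda_l
        Delta = np.zeros((p, n))
        for l, (i, j) in enumerate(edges):
            Delta[:, i] += lam[l]
            Delta[:, j] -= lam[l]
        for l, (i, j) in enumerate(edges):
            # gradient g_l = x_{l1} - x_{l2} + Delta_{l1} - Delta_{l2}
            g = X[:, i] - X[:, j] + Delta[:, i] - Delta[:, j]
            z = lam[l] - nu * g
            # project onto C_l, the l2 ball of radius gamma * w_l
            radius = gamma * weights[l]
            nz = np.linalg.norm(z)
            lam[l] = z if nz <= radius else (radius / nz) * z
    # recover the centroids u_i = x_i + Delta_i
    Delta = np.zeros((p, n))
    for l, (i, j) in enumerate(edges):
        Delta[:, i] += lam[l]
        Delta[:, j] -= lam[l]
    return X + Delta
```

With four points and a complete graph, the step size $\nu = 0.4 < 2/n$ respects the convergence bound discussed in the next section.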
We will see in the next section that in addition to providing a simple interpretation of the AMA method, the dual allows us to derive a rigorous stopping criterion for AMA. Before proceeding, however, let us emphasize that AMA requires tracking of only as many dual variables $\bm{\mathbf{\lambda}}_l$ as there are non-zero weights. We will find later that sparse weights often produce better-quality clusterings. Thus, when relatively few weights are non-zero, the number of variables introduced by splitting does not become prohibitive under AMA. \begin{algorithm}[t] \caption{AMA} \label{alg:AMA} Initialize $\bm{\mathbf{\lambda}}^0$. \begin{algorithmic}[1] \For{$m = 1, 2, 3, \ldots$} \For{$i = 1, \ldots, n$} \State $\bm{\mathbf{\Delta}}^{m}_i = \sum_{l_1 = i} \bm{\mathbf{\lambda}}^{m-1}_l - \sum_{l_2 = i} \bm{\mathbf{\lambda}}_l^{m-1}$ \EndFor \ForAll{$l$} \State $\bm{\mathbf{g}}_l^{m} = \bm{\mathbf{x}}_{l_1} - \bm{\mathbf{x}}_{l_2} + \bm{\mathbf{\Delta}}_{l_1}^{m} - \bm{\mathbf{\Delta}}_{l_2}^{m}$ \State $\bm{\mathbf{\lambda}}^{m}_l = \mathcal{P}_{C_l}(\bm{\mathbf{\lambda}}^{m-1}_l - \nu \bm{\mathbf{g}}_l^{m})$ \EndFor \EndFor \end{algorithmic} \end{algorithm} \subsubsection{Stopping Criterion for AMA} Recall that the duality gap at the $m$th iterate, $F_\gamma(\bm{\mathbf{U}}^{m}) - D_\gamma(\bm{\mathbf{\Lambda}}^m)$, is an upper bound on how far $F_\gamma(\bm{\mathbf{U}}^{m})$ is from the optimal value of the objective function. It is a certificate of optimality as there is a zero duality gap at an optimal solution. In short, if we can compute the duality gap, we can compute how suboptimal the last iterate is when the algorithm terminates.
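Both losses take only a few lines to evaluate. The sketch below computes $F_\gamma$ and $D_\gamma$ for the $\ell_2$ norm and, in the accompanying check with made-up data, confirms weak duality, $F_\gamma(\bm{\mathbf{U}}) \ge D_\gamma(\bm{\mathbf{\Lambda}})$, at a dual-feasible $\bm{\mathbf{\Lambda}}$ with $\bm{\mathbf{U}} = \bm{\mathbf{X}} + \bm{\mathbf{\Delta}}$ (illustrative code, not our production stopping rule):

```python
import numpy as np

def primal_loss(X, U, edges, w, gamma):
    """F_gamma(U): least squares fit plus weighted l2 fusion penalty."""
    fit = 0.5 * np.sum((X - U) ** 2)
    pen = gamma * sum(w[l] * np.linalg.norm(U[:, i] - U[:, j])
                      for l, (i, j) in enumerate(edges))
    return fit + pen

def dual_loss(X, lam, edges):
    """D_gamma(Lambda) for dual-feasible lam (indicator terms vanish)."""
    p, n = X.shape
    Delta = np.zeros((p, n))
    for l, (i, j) in enumerate(edges):
        Delta[:, i] += lam[l]
        Delta[:, j] -= lam[l]
    linear = sum(lam[l] @ (X[:, i] - X[:, j])
                 for l, (i, j) in enumerate(edges))
    return -0.5 * np.sum(Delta ** 2) - linear
```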
The explicit functional forms \Eqn{split_objective_cluster} and \Eqn{dual_function} of the primal and dual functions make it trivial to evaluate the duality gap for feasible variables, since they depend on the quantities $\bm{\mathbf{\Delta}}_i$ and $\bm{\mathbf{g}}_l = \bm{\mathbf{u}}^{m}_{l_1} - \bm{\mathbf{u}}^{m}_{l_2}$, which are computed in the process of making the AMA updates. Thus, we stop the AMA algorithm when \begin{eqnarray*} F_\gamma(\bm{\mathbf{U}}^{m}) - D_\gamma(\bm{\mathbf{\Lambda}}^m) < \tau \end{eqnarray*} for $\tau>0$ small. \section{Convergence} \label{sec:convergence} Both ADMM and AMA converge under reasonable conditions. Of the two, however, ADMM converges under broader conditions, as its convergence is guaranteed for any $\nu > 0$. Convergence for AMA is guaranteed provided that $\nu$ is not too large. As we will see below, however, that bound is modest and easily identified in the convex clustering problem. \subsection{AMA} \label{sec:convergence_ama} \cite{Tse1991} provides sufficient conditions to ensure the convergence of AMA. In the following list of assumptions, the functions $f(\bm{\mathbf{u}})$ and $g(\bm{\mathbf{v}})$ and parameters $\bm{\mathbf{A}}, \bm{\mathbf{B}},$ and $\bm{\mathbf{c}}$ refer to problem \Eqn{split_objective}. \begin{assumption}[Assumptions B and C in \cite{Tse1991}] \label{as:Tseng} \begin{itemize} \item[(a)] $f(\bm{\mathbf{u}})$ and $g(\bm{\mathbf{v}})$ are convex lower-semicontinuous functions. \item[(b)] $f(\bm{\mathbf{u}})$ is strongly convex with modulus $\alpha > 0$. \item[(c)] Problem \Eqn{split_objective} is feasible. \item[(d)] The function $g(\bm{\mathbf{v}}) + \| \bm{\mathbf{B}}\bm{\mathbf{v}} \|_2^2$ has a minimum. \item[(e)] The dual of \Eqn{split_objective} has an optimal Lagrange multiplier corresponding to the constraint $\bm{\mathbf{A}}\bm{\mathbf{u}} + \bm{\mathbf{B}}\bm{\mathbf{v}} = \bm{\mathbf{c}}$.
\end{itemize} \end{assumption} It is straightforward to verify that the functions and parameters in problem \Eqn{split_objective} satisfy \As{Tseng}. In particular, the strong convexity modulus is $\alpha = 1$ for the convex clustering problem. In the derivation of the dual problem given in the Supplemental Materials, we briefly discuss how these assumptions are related to sufficient conditions for ensuring the convergence of the proximal gradient method applied to the dual problem. \begin{proposition}[Proposition 2 in \cite{Tse1991}] \label{prop:ama_convergence} Under \As{Tseng} the iterates generated by the AMA updates \Eqn{ama_updates} satisfy the following: \begin{itemize} \item[(a)] $\lim_{m \to \infty} \bm{\mathbf{u}}^m = \bm{\mathbf{u}}^*$, \item[(b)] $\lim_{m \to \infty} \bm{\mathbf{B}}\bm{\mathbf{v}}^m = \bm{\mathbf{c}} - \bm{\mathbf{A}}\bm{\mathbf{u}}^*$, \item[(c)] $\lim_{m \to \infty} \bm{\mathbf{\lambda}}^m = \bm{\mathbf{\lambda}}^*$, \end{itemize} provided that $\nu < 2\alpha/\rho(\bm{\mathbf{A}}^t\bm{\mathbf{A}})$, where $\rho(\bm{\mathbf{A}}^t\bm{\mathbf{A}})$ denotes the largest eigenvalue of $\bm{\mathbf{A}}^t\bm{\mathbf{A}}$. \end{proposition} Since $\alpha = 1$ here, the parameter $\nu$ controlling the gradient step must be strictly less than $2/\rho(\bm{\mathbf{A}}^t\bm{\mathbf{A}})$, twice the reciprocal of the Lipschitz constant $\rho(\bm{\mathbf{A}}^t\bm{\mathbf{A}})$ of the dual gradient. To gain insight into how to choose $\nu$, let $\varepsilon \leq \binom{n}{2}$ denote the number of edges.
Then $\bm{\mathbf{A}} = \bm{\mathbf{\Phi}} \Kron \bm{\mathbf{I}}$, where $\bm{\mathbf{\Phi}}$ is the $\varepsilon \times n$ oriented edge-vertex incidence matrix \begin{eqnarray*} \bm{\mathbf{\Phi}}_{lv} = \begin{cases} 1 & \text{if node $v$ is the head of edge $l$} \\ -1 & \text{if node $v$ is the tail of edge $l$} \\ 0 & \text{otherwise.} \end{cases} \end{eqnarray*} Therefore, $\bm{\mathbf{A}}^t\bm{\mathbf{A}} = \bm{\mathbf{L}} \Kron \bm{\mathbf{I}}$, where $\bm{\mathbf{L}} = \bm{\mathbf{\Phi}}^t\bm{\mathbf{\Phi}}$ is the Laplacian matrix of the associated graph. It is well known that the eigenvalues of $\bm{\mathbf{Z}} \Kron \bm{\mathbf{I}}$ coincide with the eigenvalues of $\bm{\mathbf{Z}}$. See for example Theorem~6 in Chapter 9 of \cite{Mil1987}. Therefore, $\rho(\bm{\mathbf{A}}^t\bm{\mathbf{A}}) = \rho(\bm{\mathbf{L}})$. In lieu of computing $\rho(\bm{\mathbf{L}})$ numerically, one can bound it by theoretical arguments. In general $\rho(\bm{\mathbf{L}}) \leq n$ \citep{AndMor1985}, with equality when the graph is fully connected and $w_{ij} > 0$ for all $i < j$. Choosing a fixed step size of $\nu < 2/n$ works in practice when there are fewer than 1000 data points and the graph is dense. For a sparse graph with bounded node degrees, the sharper bound \begin{eqnarray*} \rho(\bm{\mathbf{L}}) & \leq & \max\{ d(i) + d(j) : (i,j) \in \mathcal{E} \} \label{node_degree_bound} \end{eqnarray*} is available, where $d(i)$ is the degree of the $i$th node \citep{AndMor1985}. This bound can be computed quickly in $\mathcal{O}(n + \varepsilon)$ operations. Section \ref{timing_section} demonstrates the overwhelming speed advantage of AMA on sparse graphs. \subsection{ADMM} Modest convergence results for the ADMM algorithm have been proven under minimal assumptions, which we now restate.
\begin{proposition} \label{prop:ADMM_convergence} If the functions $f(\bm{\mathbf{x}})$ and $g(\bm{\mathbf{x}})$ are closed, proper, and convex, and the unaugmented Lagrangian has a saddle point, then the ADMM iterates satisfy \begin{eqnarray*} \lim_{m \to \infty} \bm{\mathbf{r}}^m & = & \bm{\mathbf{0}} \\ \lim_{m \to \infty} \left[ f(\bm{\mathbf{U}}^m) + g(\bm{\mathbf{V}}^m) \right] & = & F^\star \\ \lim_{m \to \infty} \bm{\mathbf{\lambda}}^m & = & \bm{\mathbf{\lambda}}^*, \end{eqnarray*} where $\bm{\mathbf{r}}^m = \bm{\mathbf{c}} - \bm{\mathbf{A}}\bm{\mathbf{u}}^m - \bm{\mathbf{B}}\bm{\mathbf{v}}^m$ denotes the primal residuals and $F^\star$ denotes the minimal objective value of the primal problem. \end{proposition} Proofs of the above result can be found in the references \citep{BoyParChu2011,EckBer1992,Gab1983}. Note, however, that the above results do not guarantee that the iterates $\bm{\mathbf{U}}^m$ converge to $\bm{\mathbf{U}}^*$. Since the convex clustering criterion $F_\gamma(\bm{\mathbf{U}})$ defined by equation \Eqn{objective_function} is strictly convex and coercive, we next show the stronger result that the ADMM iterate sequence converges to the unique global minimizer $\bm{\mathbf{U}}^*$ of $F_\gamma(\bm{\mathbf{U}})$. \begin{proposition} The iterates $\bm{\mathbf{U}}^m$ in \Alg{ADMM} converge to the unique global minimizer $\bm{\mathbf{U}}^*$ of the clustering criterion $F_\gamma(\bm{\mathbf{U}})$. \end{proposition} \begin{proof} The conditions required by \Prop{ADMM_convergence} are obviously met by $F_\gamma(\bm{\mathbf{U}})$. In particular, the unaugmented Lagrangian possesses a saddle point since the primal problem has a global minimizer. To validate the conjectured limit, we first argue that the iterates $(\bm{\mathbf{U}}^m,\bm{\mathbf{V}}^m)$ are bounded.
If on the contrary some subsequence is unbounded, then passing to the limit along this subsequence contradicts the limit \begin{eqnarray} \lim_{m \to \infty} H_{\gamma}(\bm{\mathbf{U}}^m,\bm{\mathbf{V}}^m) & = & F_{\gamma}(\bm{\mathbf{U}}^*) \label{eq:fgamma_limit} \end{eqnarray} guaranteed by \Prop{ADMM_convergence} for the continuous function \begin{eqnarray*} H_{\gamma}(\bm{\mathbf{U}},\bm{\mathbf{V}}) & = & \frac{1}{2}\sum_{i=1}^n \|\bm{\mathbf{x}}_i-\bm{\mathbf{u}}_i\|_2^2 + \gamma \sum_{l}w_{l} \| \bm{\mathbf{v}}_l \| .\end{eqnarray*} To prove convergence of the sequence $(\bm{\mathbf{U}}^m,\bm{\mathbf{V}}^m)$, it therefore suffices to check that every limit point coincides with the minimum point of $F_\gamma(\bm{\mathbf{U}})$. Let $(\bm{\mathbf{U}}^{m_n},\bm{\mathbf{V}}^{m_n})$ be a subsequence with limit $(\bm{\tilde{\mathbf{U}}}, \bm{\tilde{\mathbf{V}}})$. According to \Prop{ADMM_convergence}, the differences $\bm{\mathbf{u}}^m_{l_1} - \bm{\mathbf{u}}^m_{l_2} - \bm{\mathbf{v}}^m_l$ tend to $\bm{\mathbf{0}}$. Thus, the limit $(\bm{\tilde{\mathbf{U}}},\bm{\tilde{\mathbf{V}}})$ is feasible. Furthermore, \begin{eqnarray*} \lim_{n \to \infty} H_\gamma(\bm{\mathbf{U}}^{m_n},\bm{\mathbf{V}}^{m_n}) & = & H_\gamma(\bm{\tilde{\mathbf{U}}},\bm{\tilde{\mathbf{V}}}) = F_\gamma(\bm{\tilde{\mathbf{U}}}). \end{eqnarray*} This limit contradicts the limit \Eqn{fgamma_limit} unless $F_\gamma(\bm{\tilde{\mathbf{U}}}) = F_\gamma(\bm{\mathbf{U}}^*)$. Because $\bm{\mathbf{U}}^*$ uniquely minimizes $F_\gamma(\bm{\mathbf{U}})$, it follows that $\bm{\tilde{\mathbf{U}}} = \bm{\mathbf{U}}^*$. \end{proof} \section{Acceleration} \label{sec:acceleration} Both AMA and ADMM admit acceleration at little additional computational cost. Given that AMA is a proximal gradient algorithm, \citet{GolODSet2012} show that it can be effectively accelerated via Nesterov's method \citep{BecTeb2009}. \Alg{AMA_fast} conveys the accelerated AMA method. 
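The momentum weights in the accelerated method follow the standard Nesterov recursion $\alpha_m = (1 + \sqrt{1 + 4\alpha_{m-1}^2})/2$ with $\alpha_0 = 1$. A short sketch (purely illustrative) checks the well-known growth $\alpha_m \ge (m+2)/2$, which underlies the improved $\mathcal{O}(1/m^2)$ rate:

```python
import math

def momentum_schedule(iters):
    """Nesterov weights alpha_0, ..., alpha_iters with alpha_0 = 1.

    Each weight satisfies alpha_m = (1 + sqrt(1 + 4 alpha_{m-1}^2)) / 2,
    which grows at least linearly: alpha_m >= (m + 2) / 2.
    """
    alphas = [1.0]
    for _ in range(iters):
        alphas.append((1.0 + math.sqrt(1.0 + 4.0 * alphas[-1] ** 2)) / 2.0)
    return alphas
```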
\citet{GolODSet2012} also present methods for accelerating ADMM not considered in this paper. \begin{algorithm}[t] \caption{Fast AMA} \label{alg:AMA_fast} Initialize $\bm{\mathbf{\lambda}}^{-1} = \bm{\tilde{\mathbf{\lambda}}}^{0}, \alpha_0 = 1$ \begin{algorithmic}[1] \For{$m = 0, 1, 2, \ldots$} \For{$i = 1, \ldots, n$} \State $\bm{\mathbf{\Delta}}^{m}_i = \sum_{l_1 = i} \bm{\mathbf{\lambda}}^{m-1}_l - \sum_{l_2 = i} \bm{\mathbf{\lambda}}_l^{m-1}$ \EndFor \ForAll{$l$} \State $\bm{\mathbf{g}}_l^{m} = \bm{\mathbf{x}}_{l_1} - \bm{\mathbf{x}}_{l_2} + \bm{\mathbf{\Delta}}_{l_1}^{m} - \bm{\mathbf{\Delta}}_{l_2}^{m}$ \State $\bm{\tilde{\mathbf{\lambda}}}^m_l = \mathcal{P}_{C_l}(\bm{\mathbf{\lambda}}^{m-1}_l - \nu \bm{\mathbf{g}}_l^{m})$ \EndFor \State $\alpha_{m} = (1 + \sqrt{1 + 4\alpha_{m-1}^2})/2$ \State $\bm{\mathbf{\lambda}}^{m+1} = \bm{\tilde{\mathbf{\lambda}}}^{m} + \frac{\alpha_{m-1}}{\alpha_{m}}[\bm{\tilde{\mathbf{\lambda}}}^m - \bm{\tilde{\mathbf{\lambda}}}^{m-1}]$ \EndFor \end{algorithmic} \end{algorithm} \section{Computational Complexity} \label{sec:complexity} \subsection{AMA} In the sequel, we apply existing theory on the computational complexity of AMA to estimate the total number of iterations required by our AMA algorithm. The amount of work per iteration is specific to the variable splitting formulation of the clustering problem and depends on the sparsity of the matrix $\bm{\mathbf{A}}$. Suppose we wish to compute for a given $\gamma$ a solution such that the duality gap is at most $\tau$. We start by tallying the computational burden for a single round of AMA updates. Inspection of \Alg{AMA} shows that computing all $\bm{\mathbf{\Delta}}_i$ requires $p(2\varepsilon - n)$ total additions and subtractions. Computing all vectors $\bm{\mathbf{g}}_l$ in \Alg{AMA} takes $\mathcal{O}(\varepsilon p)$ operations, and taking the subsequent gradient step also costs $\mathcal{O}(\varepsilon p)$ operations.
Computing the needed projections costs $\mathcal{O}(\varepsilon p)$ operations for the $\ell_1$ and $\ell_2$ norms and $\mathcal{O}(\varepsilon p\log p)$ operations for the $\ell_\infty$ norm. Finally, computing the duality gap costs $\mathcal{O}(np + \varepsilon p)$ operations. Assuming that $n$ is $\mathcal{O}(\varepsilon)$ simplifies these costs. A single iteration with gap checking then costs just $\mathcal{O}(\varepsilon p)$ operations for the $\ell_1$ and $\ell_2$ norms and $\mathcal{O}(\varepsilon p \log p)$ operations for the $\ell_\infty$ norm. Estimation of the number of iterations until convergence for proximal gradient descent and its Nesterov variant completes our analysis. The $np \times \varepsilon p$ matrix $\bm{\mathbf{A}}^t$ is typically short and fat. Consequently, the function $f^\star(\bm{\mathbf{A}}^t\bm{\mathbf{\lambda}})$ is not strongly convex, and the best known convergence bounds for the proximal gradient method and its accelerated variant are sublinear \citep{BecTeb2009}. Specifically, we have the following non-asymptotic bounds on the convergence of the objective values: \begin{eqnarray*} D_\gamma(\bm{\mathbf{\lambda}}^*) - D_\gamma(\bm{\mathbf{\lambda}}^m) & \le & \frac{\rho(\bm{\mathbf{A}}^t\bm{\mathbf{A}}) \|\bm{\mathbf{\lambda}}^* - \bm{\mathbf{\lambda}}^0\|_{2}^2}{2m} \end{eqnarray*} for the unaccelerated proximal gradient ascent and \begin{eqnarray*} D_\gamma(\bm{\mathbf{\lambda}}^*) - D_\gamma(\bm{\mathbf{\lambda}}^m) & \le & \frac{2 \rho(\bm{\mathbf{A}}^t\bm{\mathbf{A}}) \|\bm{\mathbf{\lambda}}^* - \bm{\mathbf{\lambda}}^0\|_{2}^2}{(m+1)^2}, \end{eqnarray*} for its Nesterov accelerated alternative.
Thus, taking into account operations per iteration, we see that the unaccelerated and accelerated algorithms require a computational effort of $\mathcal{O}(\frac{\varepsilon p}{\tau})$ and $\mathcal{O}(\frac{\varepsilon p}{\sqrt{\tau}})$, respectively, for the $\ell_1$ and $\ell_2$ norms to attain a duality gap less than $\tau$. These bounds are respectively $\mathcal{O}(\frac{\varepsilon p\log p}{\tau})$ and $\mathcal{O}(\frac{\varepsilon p\log p}{\sqrt{\tau}})$ for the $\ell_\infty$ norm. Total storage is $\mathcal{O}(p\varepsilon + np)$. In the worst case $\varepsilon$ is $\binom{n}{2}$. However, if we limit a node's connectivity to its $k$ nearest neighbors, then $\varepsilon$ is $\mathcal{O}(kn)$. Thus, the computational complexity of the problem in the worst case is quadratic in the number of points $n$ and linear under the restriction to $k$-nearest neighbors connectivity. The storage is quadratic in $n$ in the worst case and linear in $n$ under the $k$-nearest neighbors restriction. Thus, limiting a point's connectivity to its $k$-nearest neighbors renders both the storage requirements and operation counts linear in the problem size, namely $\mathcal{O}(knp)$. \subsection{ADMM} We have two cases to consider. First consider the explicit updates outlined in \Alg{ADMM}, in which the edge set $\mathcal{E}$ contains every possible node pairing. By arguments nearly identical to those above, a single round of ADMM updates with primal and dual residual calculation requires $\mathcal{O}(n^2 p)$ operations for the $\ell_1$ and $\ell_2$ norms and $\mathcal{O}(n^2 p\log p)$ operations for the $\ell_\infty$ norm. As with AMA, it has been established that $\mathcal{O}(1/\tau)$ ADMM iterations are required to obtain a $\tau$-suboptimal solution \citep{HeYua2012}. Thus, the ADMM algorithm using explicit updates requires the same computational effort as AMA in its worst case, namely when all pairs of centroids are shrunk together.
Moreover, the storage requirements are $\mathcal{O}(p n^2 + np)$. The situation does not improve by much when we consider the more storage-frugal alternative in which $\mathcal{E}$ contains only node pairings corresponding to non-zero weights. In this case, the variables $\bm{\mathbf{\Lambda}}$ and $\bm{\mathbf{V}}$ have only as many columns as there are non-zero weights. Now the storage requirements are $\mathcal{O}(p\varepsilon + np)$ like AMA, but the cost of updating $\bm{\mathbf{U}}$, the most computationally demanding step, remains quadratic in $n$. Recall that we need to solve a linear system of equations (\ref{eq:u_update_linear_system}) \begin{eqnarray*} \bm{\mathbf{U}} \bm{\mathbf{M}} = \bm{\mathbf{X}} + \sum_{l \in \mathcal{E}} \bm{\tilde{\mathbf{v}}}_{l} (\bm{\mathbf{e}}_{l_1} - \bm{\mathbf{e}}_{l_2})^t, \end{eqnarray*} where $\bm{\mathbf{M}} \in \mathbb{R}^{n \times n}$. Since $\bm{\mathbf{M}}$ is positive definite and does not change throughout the ADMM iterations, the prudent course of action is to compute and cache its Cholesky factorization. The factorization requires $\mathcal{O}(n^3)$ operations to calculate but that cost can be amortized across the repeated ADMM updates. With the Cholesky factorization in hand, we can update each row of $\bm{\mathbf{U}}$ by solving two sets of $n$-by-$n$ triangular systems of equations, which together requires $\mathcal{O}(n^2)$ operations. Since $\bm{\mathbf{U}}$ has $p$ rows, the total amount of work to update $\bm{\mathbf{U}}$ is $\mathcal{O}(n^2 p)$. Therefore, the overall amount of work per ADMM iteration is $\mathcal{O}(n^2 p + \varepsilon p)$ operations for the $\ell_1$ and $\ell_2$ norms and $\mathcal{O}(n^2 p + \varepsilon p\log p)$ operations for the $\ell_\infty$ norm. Thus, in stark contrast to AMA, both ADMM approaches grow quadratically, either in storage requirements or computational costs, regardless of how we might limit the size of the edge set $\mathcal{E}$.
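The Cholesky-based $\bm{\mathbf{U}}$ update is easy to sketch. The code below builds $\bm{\mathbf{M}}$ from an edge list, caches its factor, and solves the system; the accompanying check confirms that, on a complete graph, the result matches the explicit Sherman-Morrison update derived earlier. This is an illustrative NumPy sketch (a production code would reuse the cached factor across iterations and exploit triangularity in the solves):

```python
import numpy as np

def cached_u_update(X, edges, V_tilde, nu):
    """Solve U M = X + sum_l vtilde_l (e_{l1} - e_{l2})^t for U.

    M = I + nu * sum_l (e_{l1} - e_{l2})(e_{l1} - e_{l2})^t is symmetric
    positive definite, so we factor it once as M = C C^t and solve two
    triangular systems per right-hand side.
    """
    p, n = X.shape
    M = np.eye(n)
    rhs = X.copy()
    for l, (i, j) in enumerate(edges):
        d = np.zeros(n)
        d[i], d[j] = 1.0, -1.0
        M += nu * np.outer(d, d)
        rhs += np.outer(V_tilde[l], d)
    C = np.linalg.cholesky(M)  # computed once, cached across iterations
    # U M = rhs  <=>  M U^t = rhs^t since M is symmetric
    Ut = np.linalg.solve(C.T, np.linalg.solve(C, rhs.T))
    return Ut.T
```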
\section{Practical Implementation} \label{sec:practice} This section addresses practical issues of algorithm implementation. \subsection{Choosing weights} \label{sec:weights} The choice of the weights can dramatically affect the quality of the clustering path. We set the value of the weight between the $i$th and $j$th points to be $w_{ij} = \iota^k_{\{i,j\}} \exp(-\phi \| \bm{\mathbf{x}}_i - \bm{\mathbf{x}}_j \|_2^2)$, where $\iota^k_{\{i,j\}}$ is 1 if $j$ is among $i$'s $k$ nearest neighbors or vice versa and 0 otherwise. The second factor is a Gaussian kernel that slows the coalescence of distant points. The constant $\phi$ is nonnegative; the value $\phi = 0$ corresponds to uniform weights. As noted earlier, limiting positive weights to nearest neighbors improves both computational efficiency and clustering quality. Although the two factors defining the weights act similarly, their combination increases the sensitivity of the clustering path to the local density of the data. \subsection{Making cluster assignments} \label{sec:cluster_assignments} We would like to be able to read off which centroids have fused as the regularization increases, namely determine clustering assignments as a function of $\gamma$. For both ADMM and AMA, such assignments can be performed in $\mathcal{O}(n)$ operations, using the difference variable $\bm{\mathbf{V}}$. In the case of AMA, where we do not store a running estimate of $\bm{\mathbf{V}}$, we compute $\bm{\mathbf{V}}$ using (\ref{eq:update_v}) after the algorithm terminates. In any case, once we have the variable $\bm{\mathbf{V}}$, we simply apply breadth-first search to identify the connected components of the following graph induced by $\bm{\mathbf{V}}$. The graph identifies a node with every data point and places an edge between the $l$th pair of points if and only if $\bm{\mathbf{v}}_l = \bm{\mathbf{0}}$. Each connected component corresponds to a cluster.
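Both recipes in this section are short in code. The sketch below computes the $k$-nearest-neighbor Gaussian kernel weights and assigns clusters by breadth-first search on the graph induced by the difference vectors (illustrative Python; function and variable names are ours, not from our released R/C code):

```python
import numpy as np
from collections import deque

def knn_gaussian_weights(X, k, phi):
    """w_ij = exp(-phi * ||x_i - x_j||^2) if j is among i's k nearest
    neighbors or vice versa, and 0 otherwise.  X is p-by-n."""
    n = X.shape[1]
    D2 = np.array([[np.sum((X[:, i] - X[:, j]) ** 2) for j in range(n)]
                   for i in range(n)])
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(D2[i])[1:k + 1]:  # k nearest neighbors of i
            W[i, j] = W[j, i] = np.exp(-phi * D2[i, j])
    return W

def assign_clusters(n, edges, V, tol=1e-10):
    """Label connected components of the graph with an edge for each
    pair l whose difference vector v_l is numerically zero."""
    adj = [[] for _ in range(n)]
    for l, (i, j) in enumerate(edges):
        if np.linalg.norm(V[l]) <= tol:
            adj[i].append(j)
            adj[j].append(i)
    labels, cluster = [-1] * n, 0
    for start in range(n):
        if labels[start] != -1:
            continue
        labels[start] = cluster
        queue = deque([start])
        while queue:  # breadth-first search from this seed
            u = queue.popleft()
            for v in adj[u]:
                if labels[v] == -1:
                    labels[v] = cluster
                    queue.append(v)
        cluster += 1
    return labels
```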
Note that the graph described here is a function of $\bm{\mathbf{V}}$ and is unrelated to the graph described earlier, which is a function of the weights $w_{ij}$. \section{Numerical Experiments} \label{sec:experiments} We now report numerical experiments on convex clustering for a synthetic data set and three real data sets. In particular, we focus on how the choice of the weights $w_{ij}$ affects the quality of the clustering solution. Prior research on this question is limited. Both Lindsten et al.\@ and Hocking et al.\@ suggest weights derived from Gaussian kernels and $k$-nearest neighbors. Because Hocking et al.\@ try only Gaussian kernels, in this section we follow up on their untested suggestion of combining Gaussian kernels and $k$-nearest neighbors. We also compare the run times of our splitting methods to the run times of the subgradient algorithm employed by Hocking et al.\@ for $\ell_2$ paths. We focus our attention on solving the $\ell_2$ path since the rotational invariance of the 2-norm makes it a robust choice in practice. Hocking et al.\@ provide R and C++ code for their algorithms. Our algorithms are implemented in R and C. To make a fair comparison, we run our algorithm until it reaches a primal objective value that is less than or equal to the primal objective value obtained by the subgradient algorithm. To be specific, we first run the Hocking et al.\@ code to generate a clusterpath and record the sequence of $\gamma$'s it generates. We then run our algorithms over the same sequence of $\gamma$'s and stop once our primal objective value falls below that of Hocking et al. We also keep the native stopping rule computations employed by our splitting methods, namely the dual loss calculations for AMA and residual calculations for ADMM. Since AMA already calculates the primal loss, this is not an additional burden.
Although convergence monitoring creates additional work for ADMM, the added primal loss calculation at worst only changes the constant in the complexity bound. This follows since the primal loss requires only $\mathcal{O}(np + \varepsilon p)$ operations to compute. \subsection{Qualitative Comparisons} Our next few examples demonstrate how the character of the solution paths can vary drastically with the choice of weights $w_{ij}$. \subsubsection{Two Half Moons} \label{sec:moons} Consider the standard simulated data of two interlocking half moons in $\mathbb{R}^2$ composed of 100 points each. \Fig{halfmoons} shows four convex clustering paths computed assuming two different numbers of nearest neighbors (10 and 50) and two different kernel constants $\phi$ (0 and 0.5). The upper right panel makes it evident that limiting the number of nearest neighbors ($k=10$) and imposing non-uniform Gaussian kernel weights ($\phi=0.5$) produce the best clustering path. Using too many neighbors and assuming uniform weights results in little agglomerative clustering until late in the clustering path (lower left panel). The two intermediate cases diverge in interesting ways. The hardest set of points to cluster are the points in the upper half moon's right tip and the lower half moon's left tip. Limiting the number of nearest neighbors and omitting the Gaussian kernel (upper left panel) correctly agglomerates the easier points, but waffles on the harder points, agglomerating them only at the very end when all points coalesce at the grand mean. Conversely, using too many neighbors and the Gaussian kernel (lower right panel) leads to a clustering path that does not hedge but incorrectly assigns the harder points. \begin{figure} \centering \includegraphics[scale=0.45]{halfmoons} \caption{Halfmoons Example: The first and second rows show results using $k=10$ and $50$ nearest neighbors respectively.
The first and second columns show results using $\phi = 0$ and $0.5$ respectively.} \label{fig:halfmoons} \end{figure} \subsubsection{Fisher's Iris Data} Fisher's Iris data \citep{Fis1936} consists of four measurements on 150 samples of iris flowers. There are three species present: setosa, versicolor, and virginica. \Fig{iris} shows the resulting clustering paths under two different choices of weights. On the left $w_{ij} = 1$ for all $i < j$, and on the right we use 5 nearest neighbors and $\phi = 4$. Since there are four variables, to visualize results we project the data and the fitted clustering paths onto the first two principal components of the data. Again, more sensible clustering is observed when we choose weights to be sensitive to the local data density. We even get some separation between the overlapping species virginica and versicolor. \subsubsection{Senate Voting} We consider Senate voting in 2001 on a subset of 15 issues selected by Americans for Democratic Action \citep{LeeMai2009,Dem2002}. The data is binary. We limited our study to the 29 senators with unique voting records. The issues ranged over a wide spectrum: domestic, foreign, economic, military, environmental, and social concerns. The final group of senators included 15 Democrats, 13 Republicans, and 1 Independent. \Fig{senate} shows the resulting clustering paths under two different choices of weights. On the left $w_{ij} = 1$ for all $i < j$, and on the right we use 15 nearest neighbors and $\phi = 0.5$. As observed previously, better clustering is observed when we choose the weights to be sensitive to the local data density. In particular, we get clear party separation. Note that we identify an outlying Democrat in Zell Miller and that the clustering seen agrees well with what PCA exposes. \subsubsection{Dentition of mammals} Finally, we consider the problem of clustering mammals based on their dentition \citep{LeeMai2009,Har1975}.
Eight different kinds of teeth are tallied up for each mammal: the number of top incisors, bottom incisors, top canines, bottom canines, top premolars, bottom premolars, top molars, and bottom molars. Again we removed observations with teeth distributions that were not unique, leaving us with 27 mammals. \Fig{mammals} shows the resulting clustering paths under two different choices of weights. On the left $w_{ij} = 1$ for all $i < j$, and on the right we use 5-nearest neighbors and $\phi = 0.5$. Once again, weights sensitive to the local density give superior results. In contrast to the iris and Senate data, the cluster path gives a different and perhaps more sensible solution than projection onto the first two PCA components. For example, the brown bat is considered more similar to the house bat and red bat, even though it is closer in the first two PCA coordinates to the coyote and opossum. \begin{figure} \centering \begin{tabular}{c} \subfloat[Iris Data: Panel on the right (Set B) used $k=5$ nearest neighbors and $\phi=4$.]{\label{fig:iris} \includegraphics[scale=0.2725]{iris_paths_l2}}\\ \subfloat[Senate: Panel on the right (Set B) used $k=15$ nearest neighbors and $\phi=0.5$.]{\label{fig:senate} \includegraphics[scale=0.2725]{senate_paths_l2}} \\ \subfloat[Mammal Data: Panel on the right (Set B) used $k=5$ nearest neighbors and $\phi=0.5$.]{\label{fig:mammals} \includegraphics[scale=0.2625]{mammals_paths_l2}} \\ \end{tabular} \caption{Clustering path under the $\ell_2$ norm. All panels on the left (Set A) used $w_{ij}=1$ for all $i < j$.} \end{figure} \subsection{Timing Comparisons \label{timing_section}} We now present results on two batches of experiments, with dense weights in the first batch and sparse ones in the second.
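Before presenting the timings, it may help to make the weight construction used throughout these comparisons concrete. The snippet below is a minimal sketch of our own (not the paper's code) of weights of the form $w_{ij}=\iota^k_{\{i,j\}}\exp(-\phi\|x_i-x_j\|_2^2)$; the convention that an edge survives when either endpoint lists the other among its $k$ nearest neighbors is our assumption.

```python
import numpy as np

def clustering_weights(X, k, phi):
    """Sketch of w_ij = indicator * exp(-phi * ||x_i - x_j||^2).

    The indicator keeps the pair {i, j} when either point lists the
    other among its k nearest neighbors (this symmetrization rule is
    our assumption, not necessarily the authors' exact convention).
    """
    n = X.shape[0]
    # pairwise squared Euclidean distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]  # skip i itself (distance 0)
        W[i, nbrs] = np.exp(-phi * d2[i, nbrs])
    return np.maximum(W, W.T)  # keep an edge if either endpoint chose it
```

Setting $\phi=0$ recovers uniform weights on the $k$-nearest-neighbor graph, while taking $k=n-1$ recovers the full-connectivity case used in the dense experiments below.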
For the first set of experiments, we compared the run times of the subgradient descent algorithm of Hocking et al.\@, ADMM, and accelerated AMA on 10 replicates of simulated data consisting of 100, 200, 300, 400, and 500 points in $\mathbb{R}^2$ drawn from a multivariate standard normal. We limited our study to at most 500 points because the subgradient algorithm took several hours on a single realization of 500 points. Limiting the number of data points allowed us to use the simpler, but less storage efficient, ADMM formulation. For AMA, we fixed the step size at $\nu = 1/n$. For all tests, we assigned full-connectivity weights based on $\iota^k_{\{i,j\}}=1$ and $\phi = 10^{-2}$. The parameter $\phi$ was chosen to ensure that the smallest weight was bounded safely away from zero. The full-connectivity assumption illustrates the superiority of AMA even under its least favorable circumstances. To trace out the entire clusterpath, we ran the Hocking subgradient algorithm to completion and invoked its default stopping criterion, namely a gradient with an $\ell_2$ norm below $0.001$. As noted earlier, we stopped our ADMM and AMA algorithms once their centroid iterates achieved a primal loss less than or equal to that achieved by the subgradient algorithm. Table~\ref{tab:L2} shows the resulting mean times in seconds, and Figure~\ref{fig:times} shows box-plots of the square root of run time against the number of data points $n$. All three algorithms scale quadratically in the number of points. This is expected for ADMM and AMA because all weights $w_{ij}$ are positive. Nonetheless, the three algorithms possess different rate constants, with accelerated AMA possessing the slowest median growth, followed by the subgradient algorithm and ADMM. Again, to ensure fair comparisons with the subgradient algorithm, we required ADMM to make extra primal loss computations. This change tends to inflate its rate constant.
Even so, we see that the spread in run times for the subgradient algorithm becomes very wide at 500 points, so much so that on some realizations even ADMM, with its additional overhead, is faster. In summary, we see that fast AMA leads to affordable computation times, on the order of minutes for hundreds of data points, in contrast to subgradient descent, which incurs run times on the order of hours for 400 to 500 data points. In the second batch of experiments, the same setup is retained except for assignments of weights and step length choice for AMA. We used $\phi = 10^{-2}$ again, but this time we zeroed out all weights except those corresponding to the $k = \frac{n}{4}$ nearest neighbors of each point. For AMA we used step sizes based on the bound (\ref{node_degree_bound}). Table~\ref{tab:L3} shows the resulting mean run times in seconds, and Figure~\ref{fig:times_sparse} shows box-plots of the square root of run time against the number of data points $n$. As attested by the shorter run times for all three algorithms, incorporation of sparse weights appears to make the problems easier to solve. Sparse weights also make ADMM competitive with the subgradient method for small to modest $n$. Even more noteworthy is the pronounced speed advantage of AMA over the other two algorithms for large $n$. When clustering 500 points, AMA requires on average a mere 7 seconds compared to 6 to 7 minutes for the subgradient and ADMM algorithms. \begin{table}[t] \centering \begin{tabular}{cccccc} & 100 & 200 & 300 & 400 & 500 \\ \hline Subgradient & 44.40 & 287.86 & 2361.84 & 3231.21 & 13895.50\\ AMA & 16.09 & 71.67 & 295.23 & 542.45 & 1109.67 \\ ADMM & 109.13 & 922.72 & 3322.83 & 7127.22 & 13087.09 \\ \end{tabular} \caption{Timing comparison under the $\ell_2$ norm: Dense weights. Mean run times are in seconds. Different methods are listed on each row.
Each column reports times for varying number of points.} \label{tab:L2} \end{table} \begin{figure} \centering \includegraphics[scale=0.55]{comparison_times} \caption{Comparison of run times: Dense weights. The square root of the time is plotted against the number of points clustered.} \label{fig:times} \end{figure} \begin{table}[t] \centering \begin{tabular}{cccccc} & 100 & 200 & 300 & 400 & 500 \\ \hline Subgradient & 6.52 & 37.42 & 161.68 & 437.32 & 386.45 \\ AMA & 1.50 & 2.94 & 4.46 & 6.02 & 7.44 \\ ADMM & 5.42 & 30.93 & 88.63 & 192.54 & 436.49 \\ \end{tabular} \caption{Timing comparison under the $\ell_2$ norm: Sparse weights. Mean run times are in seconds. Different methods are listed on each row. Each column reports times for varying number of points.} \label{tab:L3} \end{table} \begin{figure} \centering \includegraphics[scale=0.55]{comparison_times_sparse} \caption{Comparison of run times: Sparse weights. The square root of the time is plotted against the number of points clustered.} \label{fig:times_sparse} \end{figure} \section{Conclusion \& Future Work} \label{sec:conclusion} In this paper, we introduce two splitting algorithms for solving the convex clustering problem. The splitting perspective encourages path following, one of the chief benefits of convex clustering. The splitting perspective also permits centroid penalties to invoke an arbitrary norm. The only requirement is that the proximal map for the norm be readily computable. Equivalently, projection onto the unit ball of the dual norm should be straightforward. Because proximal maps and projection operators are generally well understood, it is possible for us to quantify the computational complexity and convergence properties of our algorithms. It is noteworthy that ADMM did not fare as well as AMA. ADMM has become quite popular in machine learning circles in recent years. 
Applying variable splitting and using ADMM to iteratively solve the convex clustering problem seemed like an obvious and natural initial strategy. Only later during our study did we implement the less favored AMA algorithm. Considering how trivial the differences are between the generic block updates for ADMM (\ref{eq:admm_updates}) and AMA (\ref{eq:ama_updates}), we were surprised by the performance gap between them. In the convex clustering problem, however, there is a non-trivial difference between minimizing the augmented and unaugmented Lagrangian in the first block update. This task can be accomplished in less time and space by AMA. Two features of the convex clustering problem make it an especially good candidate for solution by AMA. First, the objective function is strongly convex and therefore has a Lipschitz differentiable dual. Lipschitz differentiability is a standard condition ensuring the convergence of proximal gradient algorithms. For this reason Tseng's Assumption (b) invokes strong convexity. Second, a good step size can be readily computed from the Laplacian matrix generated by the edge set $\mathcal{E}$. Without this prior bound, we would have to employ a more complicated line search. Our complexity analysis and simulations show that the accelerated AMA method appears to be the algorithm of choice. Nonetheless, given that alternative variants of ADMM may close the performance gap \citep{DenYin2012,GolMaSch2012}, we are reluctant to dismiss ADMM too quickly. Both algorithms deserve further investigation. For instance, in both ADMM and AMA, updates of $\bm{\mathbf{\Lambda}}$ and $\bm{\mathbf{V}}$ could be parallelized. Hocking et al.\@ also employed an active set approach to reduce computations as the centroids coalesce. A similar strategy could be adopted in our framework, but it incurs additional overhead as checks for fission events have to be introduced.
An interesting and practical question brought up by Hocking et al.\@ remains open, namely under what conditions or weights are fusion events guaranteed to be permanent as $\gamma$ increases? In all our experiments, we did not observe any fission events. Identifying those conditions would eliminate the need to check for fission in such cases and expedite computation. For AMA, the storage demands and computational complexity of convex clustering depend on the number of edges of the associated weight graph, which in the worst case grows quadratically in the number of data points. Limiting a point's connections to its $k$-nearest neighbors, for example, ensures that the number of edges in the graph is linear in the number of nodes in the graph. Eliminating long-range dependencies is often desirable anyway. Choosing sparse weights can improve both cluster quality and computational efficiency. Moreover, finding the exact $k$-nearest neighbors is likely not essential, and we conjecture that the quality of solutions would not suffer greatly if approximate nearest neighbors are used and algorithms for fast computation of approximate nearest neighbors are leveraged \citep{SlaCas2008}. On very large problems, the best strategy might be to exploit the continuity of solution paths in the weights. This suggests starting with even sparser graphs than the desired one and generating a sequence of solutions to increasingly dense problems. A solution with fewer edges can serve as a warm start for the next problem with more edges. The splitting perspective also invites extensions that impose structured sparsity on the centroids. Witten and Tibshirani \citep{WitTib2010} discuss how sparse centroids can improve the quality of a solution, especially when only relatively few features of data drive clustering. Structured sparsity can be accomplished by adding a sparsity-inducing norm penalty to the $\bm{\mathbf{U}}$ updates.
The centroid updates for both AMA and ADMM would then rely on another proximal map applied to a gradient step. Introducing a sparsifying norm, however, raises the additional complication of choosing the amount of penalization. Except for a few hints about weights, our analysis leaves the topic of optimal clustering untouched. Recently, \citet{Lux2010} suggested some principled approaches to assessing the quality of a clustering assignment via data perturbation and resampling. These clues are worthy of further investigation.
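The proximal maps recurring in these updates are inexpensive for the common norms. As a hedged illustration (our own sketch, not code from the paper), the $\ell_2$ case reduces to blockwise shrinkage, and the Moreau decomposition ties it to projection onto the unit ball of the dual norm, scaled here by $\sigma$:

```python
import numpy as np

def prox_l2(v, sigma):
    """Proximal map of sigma * ||.||_2: blockwise shrinkage toward zero."""
    nrm = np.linalg.norm(v)
    return np.zeros_like(v) if nrm <= sigma else (1.0 - sigma / nrm) * v

def project_dual_ball_l2(v, sigma):
    """Projection onto {u : ||u||_2 <= sigma}, the scaled dual-norm ball."""
    nrm = np.linalg.norm(v)
    return v if nrm <= sigma else (sigma / nrm) * v

# Moreau decomposition: v = prox_l2(v, sigma) + project_dual_ball_l2(v, sigma)
```

For an $\ell_1$ penalty the dual ball is an $\ell_\infty$ ball, so the projection becomes coordinatewise clipping; swapping in the appropriate projection is all that changes.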
\section{\label{sec:level1}Introduction} Ghost imaging (GI) provides a way to recover the object images via intensity correlation between reference patterns and bucket intensity signals. It was first demonstrated using entangled light \cite{Pittman1995}, then was also experimentally realized with thermal or pseudo-thermal (a laser passing through a rotating ground glass) light \cite{DaZhang2005,Ferri2005,Jun2005,Liu2014}, as well as X-ray \cite{Hong2016,Zhang2018}. As long as the light fields of the reference arm and the object arm are conjugated, the lenses in GI with thermal light can be removed \cite{CaoPRA2005,Scarcelli2006}, which makes the imaging setup simpler and more flexible. Thus, the thermal light GI has been widely used in many fields, such as microscopic imaging \cite{YuOC2016}, optical encryption \cite{Clemente2010,YuAO2013,YuAO2019} and lidar \cite{Gong2015,YuSR2014}. To solve two key problems existing in GI, i.e., the image quality and measurement number, various GI methods have sprung up, such as background-removal GI \cite{Gatti2004}, differential GI (DGI) \cite{Ferri2010}, adaptive GI \cite{YuOE2014}, iterative denoising GI \cite{Yao2014}, blind GI \cite{Bertolotti2019}, super sub-Nyquist GI \cite{YuSensors2019}. Among these methods, the bucket values serve as the weights, reflecting the total intensities from the modulated object. Recently, an interesting experimental study found that one could generate the positive and negative ghost images by only conditionally averaging partial reference patterns. This method was named correspondence imaging (CI) \cite{Luo2011,Luo2012,Shih2012,MJSunAO2015}. It seemed that the bucket weights no longer participated in the correlation calculations involved in the second-order or high-order correlation functions, but actually they were completely binarized.
Some confusing questions were raised: why could CI generate positive-negative images using only a few reference patterns, and why could CI work without involving bucket weights in the calculations? Their theoretical explanations have been hot spots in this field for a long time, but after a few attempts \cite{Wen2012,YuCPB2015,YaoCOL2015}, researchers were still exploring the path. Lately, a strict explanation based on probability theory \cite{CaoPRA2018} was provided, which regarded the light intensities as stochastic variables and deduced a joint probability density function between the bucket and reference signals, giving us some inspiration. However, this theory was based on a fundamental assumption of a simplified model that consists of a negative-exponential-distributed light field and binary objects, thus it still had its limitations, especially in universality. The imaging mechanism of CI deserves further research. In this paper, we assume a general model in which the targets are of gray-scale (each gray value has a large enough number of pixels), any two thermal speckles in the light field are independent of each other, all following an arbitrary identical distribution, and the whole reference speckles constitute a set of independent stochastic variables. The bucket values can be treated as many linear combinations of all pixels, also constituting a random variable. With the above assumptions, we can deduce the joint probability density function between the bucket variable and each reference thermal speckle variable. After that, the formation formulas of the positive and negative images are also provided. Both simulation and experimental results have demonstrated the correctness of our derivation. Furthermore, we use this theoretical model to investigate how image quality varies with specific selection intervals used to average reference patterns.
\section{\label{sec:level2}Probability theory} \subsection{\label{sec:level2.1}Statistical model of ghost imaging} As we know, for a continuous random variable $X$, the probability of $X<x$ (i.e., the distribution function) can be written as $F_X(x)=P\{X<x\}$, then we have $F_X(-\infty)=0$, and $F_X(+\infty)=1$. Suppose the probability density function $f_X(x)$ of $X$ is the derivative of $F_X(x)$, i.e., $f_X(x)=F'_X(x)$, then $\int_{-\infty}^{+\infty}f_X(x)dx=F_X(+\infty)-F_X(-\infty)=1$. Next, we will use two typical mathematical properties of the random variable $X$: one is the mathematical expectation (also known as the mean) $E(X)$, defined as: \begin{equation} E(X)=\int_{-\infty}^{+\infty}xf_X(x)dx, \end{equation} the other is the variance $D(X)$, defined as: \begin{align} D(X)&=\int_{-\infty}^{+\infty}(x-E(X))^2f_X(x)dx\nonumber\\ &=E(X^2)-E(X)^2. \end{align} We assume that the gray-scale object has a total of $M$ pixels, with $d$ representing the gray value of a pixel. The gray value of the $m$th point (pixel) is denoted by $d_m$, ranging from 0 to 1, with 0 being completely opaque and 1 being completely transparent. Accordingly, each reference pattern can also be divided into $M$ pixels, each of which has a light intensity expressed by $I_m$. This intensity value can be regarded as a random variable, which obeys an arbitrary identical probability distribution $I$. For mathematical simplicity, it is assumed that the intensities of any two thermal speckles (pixels) in the reference spatial light field are statistically independent of each other. Then, the distribution function of the $m$th random variable $I_m$ can be written as $F_{I_m}(i_m)$, and its probability density function can be denoted by $f_{I_m}(i_m)$, where $i_m\in[0,\infty)$. The $m$th pixel of the object is illuminated by the corresponding thermal speckle. On the plane after the thermal light passing through the gray-scale object, the $m$th point will have the value $d_mI_m$.
Not only that, since there is still a certain distance between the object plane and the bucket detector, along with some existing influence factors such as diffraction, refraction, etc., a certain loss of light intensity should be considered here, expressed by the coefficient factor $a$. Thus, when the light reaches the sensing surface of the bucket detector, the intensity becomes $Y_m=ad_mI_m$. Then, we have the following relationship between the $m$th point in the object arm and the $n$th point in the reference arm: \begin{align}\label{eq:BE} E(Y_mI_n)&=ad_mE(I_mI_n)\nonumber\\ &=\begin{cases} ad_nE(I^2)& m=n,\\ ad_mE(I)^2& m\ne n. \end{cases} \end{align} The above formula is the basic equation of ghost imaging with thermal light (GITL). Besides, the bucket light intensity can be written as \begin{equation} S=\sum_{m}^M Y_m=a\sum_{m}^Md_mI_m, \end{equation} whose distribution function and probability density function are denoted by $F_S(s)$ and $f_S(s)$ ($s\in[0,\infty)$), respectively. For the convenience of calculation, suppose the subscript of the point of our interest is $n$, then we define a physical quantity $S_n$ that is very similar to the bucket value $S$, but excluding the bucket intensity with the subscript $n$: \begin{equation} S_n=\sum_{m\ne n}^M Y_m=a\sum_{m\ne n}^M d_mI_m. \end{equation} Obviously, $S_n$ is independent of $I_n$. According to the definition of $S_n$, we can immediately have \begin{equation} S=S_n+Y_n. \end{equation} We let $F_{S_n}(s_n)$ and $f_{S_n}(s_n)$ ($s_n\in[0,\infty)$) denote the distribution function and the probability density function of $S_n$, respectively.
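The basic equation above lends itself to a quick Monte Carlo check. The sketch below is our own illustration, assuming unit attenuation $a=1$ and i.i.d. exponential speckles (for which $E(I)=1$ and $E(I^2)=2$); the relation holds for any identical distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
a = 1.0                    # attenuation factor (set to unity here)
d = np.array([0.5, 1.0])   # gray values of two object pixels
T = 200_000                # number of speckle realizations

I = rng.exponential(1.0, size=(T, d.size))  # i.i.d. speckles: E(I)=1, E(I^2)=2
Y = a * d * I                                # intensities reaching the bucket

# corr[m, n] estimates E(Y_m I_n); the basic equation predicts
# a*d_n*E(I^2) on the diagonal and a*d_m*E(I)^2 off the diagonal.
corr = (Y[:, :, None] * I[:, None, :]).mean(axis=0)
```

With these parameters the prediction is $\begin{pmatrix}1.0 & 0.5\\ 1.0 & 2.0\end{pmatrix}$, which the empirical matrix approaches as $T$ grows.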
With the above definitions, it is natural to calculate the second-order correlation $E(SI_n)$: \begin{align} E(SI_n)&=E[(S_n+Y_n)I_n]\nonumber\\ &=E(S_nI_n)+E(Y_nI_n)\nonumber\\ &=E\left[\left(\sum_{m\ne n}^M Y_m\right)I_n\right]+E(Y_nI_n)\nonumber\\ &=\sum_{m\ne n}^ME(Y_mI_n)+E(Y_nI_n)\nonumber\\ &=\sum_{m\ne n}^Mad_mE(I)^2+ad_nE(I^2)\nonumber\\ &=a\sum_m^M d_mE(I)^2+a[E(I^2)-E(I)^2]d_n\nonumber\\ &=\gamma_1+\gamma_2d_n, \end{align} where both $\gamma_1$ and $\gamma_2$ are constants. Since $d_n$ is the gray value of any object point, the physical meaning of the second-order correlation function is to perform the same linear transformation on the gray value of each object point. This is the essential reason why the second-order correlation algorithm can recover the object images. Thus, the basic formula of GITL, i.e., Eq.~(\ref{eq:BE}), plays a decisive role. \subsection{\label{sec:level2.2}Approximation of model} In this section, we begin by proving the following theorem to deduce the approximate distribution expressions of $S$ and $S_n$, which is only related to the mean $E(I)$ and the variance $D(I)$ of the light intensity $I$, independent of the specific distribution of $I$. \noindent\textbf{Theorem 1:} \textit{When each gray value in the object image has infinite points (pixels), the bucket value} $S$ \textit{in GITL strictly obeys a normal distribution.} \textit{Proof:} Let an arbitrary gray value of the object be $d^{(k)}$ ($k\in\{1,2,\ldots,K\}$), and its number of points (pixels) be $l^{(k)}$, which tends to infinity. We define the variable $S^{(k)}$ as the sum of all the points with the same gray value $d^{(k)}$ in the object arm as \begin{align} S^{(k)}&=\sum_{\{d_m=d^{(k)}\}}Y_m=\sum_{\{d_m=d^{(k)}\}}ad_mI_m\nonumber\\ &=ad^{(k)}\sum_{m=1}^{l^{(k)}}I_m.
\end{align} Since $l^{(k)}$ tends to infinity, according to the central limit theorem for independently and identically distributed variables in probability theory, $S^{(k)}$ follows a normal (Gaussian) distribution with a mean of $\mu^{(k)}=l^{(k)}ad^{(k)}E(I)$ and a variance of $(\sigma^{(k)})^2=l^{(k)}a^2(d^{(k)})^2D(I)$. Therefore, according to the gray value, we can rewrite the definition $S=\sum_m^M Y_m$ of $S$ as \begin{equation} S=\sum_m^M Y_m=\sum_k^K \left(\sum_{\{d_m=d^{(k)}\}}Y_m\right)=\sum_k^K S^{(k)}. \end{equation} Then, $S$ is the sum of $K$ independent Gaussian variables. According to probability theory, $S$ obeys a Gaussian distribution \begin{align} &F_S(s)\approx\Phi\left(\frac{s-\mu}{\sigma}\right),\\ &f_S(s)\approx\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(s-\mu)^2}{2\sigma^2}}, \end{align} with a mean of \begin{equation} \mu=\sum_k^K \mu^{(k)}=\sum_k^K l^{(k)}ad^{(k)}E(I)=a\sum_m^Md_mE(I) \end{equation} and a variance of \begin{align} \sigma^2&=\sum_k^K (\sigma^{(k)})^2=\sum_k^K l^{(k)}a^2(d^{(k)})^2D(I)\nonumber\\ &=a^2\sum_m^Md_m^2D(I).\ \blacksquare \end{align} Thus, provided each gray value has sufficiently many points (pixels), the requirements of the above theorem can be satisfied. Then, we will have that $S$ approximately follows a normal distribution with a mean of $\mu=a\sum_m^M d_mE(I)$ and a variance of $\sigma^2=a^2\sum_m^Md_m^2D(I)$. Similarly, $S_n$ also approximately fulfills a normal distribution with a mean of $\mu_n=a\sum_{m\ne n}^M d_mE(I)$ and a variance of $\sigma_n^2=a^2\sum_{m\ne n}^M d_m^2D(I)$. \subsection{\label{sec:level2.3}Explanation for correspondence imaging} With the obtained distributions of $S$ and $S_n$, we will start the calculation for CI. Since $S_n$ and $Y_n$ are independent, the joint probability density function between $S$ and $Y_n$ can be deduced as \begin{align} f_{S,Y_n}(s,y_n)&=f_{S_n,Y_n}(s-y_n,y_n)\nonumber\\ &=f_{S_n}(s-y_n)f_{Y_n}(y_n).
\end{align} To average the patterns corresponding to the bucket value $S$ above or below its ensemble average, we define \begin{align} &s_+=\begin{cases} 1&s\ge\mu,\\ 0&s<\mu; \end{cases}\\ &s_-=1-s_+. \end{align} Obviously, there are \begin{align} &\int s_+f_S(s)ds=\int_{\mu}^\infty f_S(s)ds=\frac{1}{2},\\ &\int s_-f_S(s)ds=\int (1-s_+)f_S(s)ds=\frac{1}{2}. \end{align} To obtain the average of the patterns that correspond to the bucket values above the ensemble average, i.e., $E(s_+I_n)$, we should first compute \begin{align} E(s_+Y_n)&=\frac{\int s_+y_nf_{S,Y_n}(s,y_n)dsdy_n}{\int s_+f_S(s)ds}\nonumber\\ &=2\int_\mu^\infty\left[\int_0^sf_{S_n}(s-y_n)y_nf_{Y_n}(y_n)dy_n\right]ds. \end{align} Since $E(Y)\ll\mu$, we can treat $y_n$ in the above integral as a very small amount: $f_{S_n}(s-y_n)\approx f_{S_n}(s)-f'_{S_n}(s)y_n$. Besides, $s$ can be regarded as a very large amount: $\int_0^s\approx\int_0^\infty$. Then, we have \begin{align} &E(s_+Y_n)\nonumber\\ \approx&2\int_\mu^\infty\left \{\int_0^\infty[f_{S_n}(s)-f'_{S_n}(s)y_n]y_nf_{Y_n}(y_n)dy_n\right\}ds\nonumber\\ =&2E(Y_n)\int_\mu^\infty f_{S_n}(s)ds-2E(Y_n^2)\int_\mu^\infty f'_{S_n}(s)ds\nonumber\\ =&2E(Y_n)[1-F_{S_n}(\mu)]-2E(Y_n^2)[0-f_{S_n}(\mu)]\nonumber\\ =&2E(Y_n)\{1-F_{S_n}[\mu_n+E(Y_n)]\}\nonumber\\ &+2E(Y_n^2)f_{S_n}[\mu_n+E(Y_n)]\nonumber\\ \approx&2E(Y_n)[1-F_{S_n}(\mu_n)-F'_{S_n}(\mu_n)E(Y_n)]\nonumber\\ &+2E(Y_n^2)[f_{S_n}(\mu_n)+f'_{S_n}(\mu_n)E(Y_n)]. \end{align} Since \begin{align} &F_{S_n}(\mu_n)=\frac{1}{2},\\ &F'_{S_n}(\mu_n)=f_{S_n}(\mu_n)=\frac{1}{\sqrt{2\pi}\sigma_n},\\ &f'_{S_n}(\mu_n)=0, \end{align} then \begin{align} E(s_+Y_n)\approx&2E(Y_n)[\frac{1}{2}-\frac{1}{\sqrt{2\pi}\sigma_n}E(Y_n)]+2E(Y_n^2)\frac{1}{\sqrt{2\pi}\sigma_n}\nonumber\\ =&E(Y_n)+2D(Y_n)\frac{1}{\sqrt{2\pi}\sigma_n}, \end{align} where \begin{align} &E(Y_n)=E(ad_nI_n)=ad_nE(I),\\ &D(Y_n)=D(ad_nI_n)=a^2d_n^2D(I),\\ &E(s_+Y_n)=E[s_+(ad_nI_n)]=ad_nE(s_+I_n). 
\end{align} So, we will get \begin{equation}\label{eq:Ep} E(s_+I_n)=E(I)+\sqrt{\frac{2}{\pi}}\frac{a}{\sigma_n}D(I)d_n. \end{equation} Using the standard deviation $\sigma=a\sqrt{\sum_m^Md_m^2D(I)}$ of $S$ to approximately replace the standard deviation $\sigma_n$ of $S_n$, we can acquire \begin{align} E(s_+I_n)&\approx E(I)+\sqrt{\frac{2D(I)}{\pi\sum_m^M d_m^2}}d_n\nonumber\\ &=C_2+C_1d_n, \end{align} where \begin{align} &C_1=\sqrt{\frac{2D(I)}{\pi\sum_m^M d_m^2}}\\ &C_2=E(I). \end{align} Similarly, to calculate the average of the patterns that correspond to the bucket values below the ensemble average, i.e., $E(s_-I_n)$, we should first compute \begin{align} E(s_-Y_n)&=\frac{\int s_-y_nf_{S,Y_n}(s,y_n)dsdy_n}{\int s_-f_S(s)ds}\nonumber\\ &=2\int(1-s_+)y_nf_{S,Y_n}(s,y_n)dsdy_n\nonumber\\ &=2E(Y_n)-E(s_+Y_n)\nonumber\\ &\approx E(Y_n)-2D(Y_n)\frac{1}{\sqrt{2\pi}\sigma_n}. \end{align} Using the exact same processing method as $E(s_+Y_n)$, we will have \begin{align} E(s_-I_n)&\approx E(I)-\sqrt{\frac{2D(I)}{\pi\sum_m^M d_m^2}}d_n\nonumber\\ &=C_2-C_1d_n. \end{align} Then we can compute the formula of the difference image: \begin{align} CI_\pm&=E(s_+I_n)-E(s_-I_n)\nonumber\\ &=2C_1d_n. \end{align} Since $C_1$ and $C_2$ are both constants, the positive and negative images and $CI_\pm$ are all linear transformations of the original object. Since the coefficient $C_1$ of $d_n$ in $E(s_+I_n)$ is positive, its result presents a positive image; since the coefficient $-C_1$ of $d_n$ in $E(s_-I_n)$ is negative, that result is rendered as a negative image. \section{\label{sec:level3}Verification for correspondence imaging} The theoretical averages of the positive and negative images and $CI_\pm$ have been given above, but the gray value of each pixel in the actual reconstructed images generally fluctuates around the mean, following a certain distribution. Below, we will focus on this distribution and make a verification.
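Before turning to that verification, the formation formulas just derived can be illustrated with a minimal simulation of our own (unit attenuation $a=1$, exponential speckles with $E(I)=D(I)=1$, and a two-level object in which each gray value owns many pixels, as the derivation assumes): the conditional averages should approach $C_2\pm C_1d_n$, and their difference $2C_1d_n$.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two gray levels, each owning many pixels, as the derivation assumes.
d = np.concatenate([np.full(200, 0.2), np.full(200, 0.9)])
T = 20_000

I = rng.exponential(1.0, size=(T, d.size))  # speckles with E(I) = D(I) = 1
S = I @ d                                   # bucket values (attenuation a = 1)

pos = I[S >= S.mean()].mean(axis=0)         # conditional average: "bright" frames
neg = I[S < S.mean()].mean(axis=0)          # conditional average: "dark" frames
ci = pos - neg                              # should approach 2*C1*d_n

C1 = np.sqrt(2 * 1.0 / (np.pi * np.sum(d ** 2)))
```

No bucket weights enter the averages; the binarized selection alone produces a positive image in `pos`, a negative image in `neg`, and a difference image proportional to the object.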
Let us suppose there are a total of $T$ measurements, containing $T_+$ bucket values $S\ge\langle S\rangle$, and $T_-$ bucket values $S<\langle S\rangle$, where $\langle\cdots\rangle$ denotes the ensemble average of the signal. The operators $s_+$ and $s_-$ still use the definitions mentioned above. We denote the $n$th point in the $t$th speckle pattern as $I_{nt}$. Then, the positive and negative image formulas in the actual image reconstruction can be written as \begin{align} &\langle s_+I_n\rangle=\frac{1}{T_+}\sum_{t=1}^{T}s_{+}I_{nt},\\ &\langle s_-I_n\rangle=\frac{1}{T_-}\sum_{t=1}^{T}s_{-}I_{nt}. \end{align} According to the central limit theorem for independently and identically distributed variables, when $T_+$ is large enough, $\langle s_+I_n\rangle$ approximatively obeys a Gaussian distribution with a mean of $E(s_+I_n)$ and a variance of $\frac{D(s_+I_n)}{T_+}$; and similarly, when $T_-$ is large enough, $\langle s_-I_n\rangle$ approximatively follows a Gaussian distribution with a mean of $E(s_-I_n)$ and a variance of $\frac{D(s_-I_n)}{T_-}$. Now, we will compute the variances $D(s_+I_n)$ and $D(s_-I_n)$. In a similar way of calculating $E(s_+I_n)$ and $E(s_-I_n)$, we first derive the following functions $E(s_+I_n^2)$ and $E(s_-I_n^2)$: \begin{align} &E(s_+I_n^2)\nonumber\\ \approx&E(I^2)+\sqrt{\frac{2}{\pi\sum_m^Md_m^2D(I)}}[E(I^3)-E(I^2)E(I)]d_n,\\ &E(s_-I_n^2)\nonumber\\ \approx&E(I^2)-\sqrt{\frac{2}{\pi\sum_m^Md_m^2D(I)}}[E(I^3)-E(I^2)E(I)]d_n. \end{align} By using the formula $D(X)=E(X^2)-E(X)^2$, the variances can be calculated as \begin{align} \label{eq:Dp}D(s_+I_n)&=E(s_+I_n^2)-E(s_+I_n)^2,\\ D(s_-I_n)&=E(s_-I_n^2)-E(s_-I_n)^2. \end{align} So far, we can theoretically calculate the distribution curve of a certain gray value $d^{(k)}$ (occupying a region that consists of several pixels) after reconstructing the images. 
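These moment formulas can be sanity-checked numerically. The following sketch is our own construction (unit attenuation, exponential speckles, so $E(I)=1$, $E(I^2)=2$, $E(I^3)=6$, $D(I)=1$): within one gray-value region, the across-pixel variance of the reconstructed positive image should be close to $D(s_+I_n)/T_+$.

```python
import numpy as np

rng = np.random.default_rng(2)
# One region of interest (d_n = 0.5) plus a background region (d_n = 1.0).
d = np.concatenate([np.full(200, 0.5), np.full(200, 1.0)])
T = 20_000

I = rng.exponential(1.0, size=(T, d.size))  # E(I)=1, E(I^2)=2, E(I^3)=6, D(I)=1
S = I @ d                                   # bucket values (attenuation a = 1)
sel = S >= S.mean()
T_plus = int(sel.sum())
pos = I[sel].mean(axis=0)                   # reconstructed positive image

# Theoretical moments for the d_n = 0.5 region (c equals C1 since D(I) = 1)
c = np.sqrt(2.0 / (np.pi * np.sum(d ** 2)))
mean_th = 1.0 + c * 0.5                     # E(s_+ I_n)
ex2_th = 2.0 + c * (6.0 - 2.0 * 1.0) * 0.5  # E(s_+ I_n^2)
var_th = ex2_th - mean_th ** 2              # D(s_+ I_n)
```

The empirical mean of the region should sit near `mean_th`, and the empirical across-pixel variance, multiplied by $T_+$, near `var_th`, up to sampling fluctuations.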
For the positive image, it follows a Gaussian distribution with a mean of $E(s_+I_n)$ and a variance of $\frac{D(s_+I_n)}{T_+}$; while for the negative image, it obeys a Gaussian distribution with a mean of $E(s_-I_n)$ and a variance of $\frac{D(s_-I_n)}{T_-}$. In both simulation and experiments, we calculate the probability of the recovered pixel values falling in each pixel region where the gray value of the original image equals $d^{(k)}$, and plot the corresponding probability density curves, compared with the theoretical Gaussian curve to demonstrate the correctness of the theory. The Gaussian distribution theoretical curves are obtained from the computed theoretical means and variances. \subsection{\label{sec:level3.1}Simulation} Here, we chose an object image of $200\times200$ pixels, as shown in Fig.~\ref{fig:simulation}(a), and its statistical data of the gray values was given in Table~\ref{tab:table1}. As an example, we took speckle variables of the patterns that obeyed an identical gamma distribution, parameterized in terms of a shape parameter $\alpha=3.57$ and a scale parameter $\theta=1.4$, whose probability density function could be expressed as \begin{equation} f_I(i)=\frac{i^{\alpha-1}e^{-i/\theta}}{\theta^\alpha\Gamma(\alpha)},\textrm{ for }i>0, \end{equation} as plotted in Fig.~\ref{fig:simulation}(b). The positive and negative images with a total of 50000 frames and their difference image $CI_\pm$ were given in Figs.~\ref{fig:simulation}(c)--\ref{fig:simulation}(e). \begin{figure}[htbp] \centering \includegraphics[width=0.9\linewidth]{figure-1} \caption{\label{fig:simulation}Simulation results. (a) is the original image, a modified head phantom image. (b) is the chosen probability density function curve.
(c)--(e) are the reconstructions of positive-negative images and their difference image, respectively.} \end{figure} \begin{table}[htbp] \caption{\label{tab:table1}Statistical data of gray values in the original image} \begin{ruledtabular} \begin{tabular}{ccc} Gray value&Total number of pixels&Proportion\\ \colrule 0 & 23353 & 58.38\%\\ 0.5 & 13147 & 32.87\%\\ 0.7 & 1733 & 4.33\%\\ 1 & 1767 & 4.42\%\\ \end{tabular} \end{ruledtabular} \end{table} Then, for both positive and negative images, we separately computed the probability of the reconstructed pixel values falling in each pixel region corresponding to the one that consists of pixel positions with the same gray value $d^{(k)}$ of the original image, and drew their probability density curves to compare with the theoretical Gaussian curves, as shown in Figs.~\ref{fig:PDF}(a)--\ref{fig:PDF}(b). From the graphs, we could clearly see that the recovered pixel-value data is highly consistent with the theoretical Gaussian curves derived from the presupposed gamma speckle distribution. \begin{figure}[htbp] \centering \includegraphics[width=0.9\linewidth]{figure-2} \caption{\label{fig:PDF}Probability density function curves for the recovered pixel values, compared with the theoretical Gaussian curves. (a)--(b) are the probability density distributions and theoretical Gaussian curves of reconstructed pixel values falling in each pixel region where the gray value of the original image equals $d^{(k)}$, for positive and negative images, respectively. The abscissa is the reconstructed pixel value, and the ordinate indicates the probability of occurrence of these values.} \end{figure} \subsection{\label{sec:level3.2}Experiment} In practical optical experiments, there are many kinds of noise. It is hard to determine the noise distribution, but by the central limit theorem the superposition of many independent noise sources results in an approximately Gaussian distribution. In this case, we may assume the measurement noise fulfills Gaussian statistics.
In a similar way, suppose the Gaussian noise is a random variable, denoted by $X$, with a mean of $E(X)=0$ and an unknown variance $D(X)$. We add this noise to the bucket variable, then get \begin{equation} S=S_n+Y_n+X. \end{equation} As in the previous discussion, one only needs to replace the previous $S_n$ with $S_n+X$ for the calculation in an actual measurement environment. Then $S_n+X$ satisfies a Gaussian distribution with a mean $\mu_n+E(X)$ and a variance $\sigma_n^2+D(X)$. Here, we directly present the results: \begin{align} E(s_+I_n)\approx& E(I)+\sqrt{\frac{2}{\pi}}\sqrt{\frac{1}{\sum_m^M d_m^2D(I)+\frac{D(X)}{a^2}}}D(I)d_n,\\ E(s_-I_n)\approx& E(I)-\sqrt{\frac{2}{\pi}}\sqrt{\frac{1}{\sum_m^M d_m^2D(I)+\frac{D(X)}{a^2}}}D(I)d_n,\\ E(s_+I_n^2)\approx& E(I^2)+\sqrt{\frac{2}{\pi}}\sqrt{\frac{1}{\sum_m^Md_m^2D(I)+\frac{D(X)}{a^2}}}\nonumber\\ &\times[E(I^3)-E(I^2)E(I)]d_n,\\ E(s_-I_n^2)\approx&E(I^2)-\sqrt{\frac{2}{\pi}}\sqrt{\frac{1}{\sum_m^Md_m^2D(I)+\frac{D(X)}{a^2}}}\nonumber\\ &\times[E(I^3)-E(I^2)E(I)]d_n,\\ D(s_+I_n)=&E(s_+I_n^2)-E(s_+I_n)^2,\\ D(s_-I_n)=&E(s_-I_n^2)-E(s_-I_n)^2. \end{align} There is only one pending term introduced by noise and light intensity attenuation, i.e., $\frac{D(X)}{a^2}$. Its specific value is hard to know in advance; it can only be determined empirically, so as to match the experimental data to the theoretical curve as closely as possible. Our experiment was based on a widely used computational GI setup, as shown in Fig.~\ref{fig:setup}. Unlike double-arm GI, it could modulate the illumination light according to the preset patterns without the help of an array detector with spatial resolution. A digital micromirror device (DMD), which consisted of 1,024 $\times$ 768 micro-mirrors, each of size $13.68\times13.68\ \mu\textrm{m}^2$, was used here to perform light intensity modulation.
Since each of its micromirrors could be oriented at either +12$^\circ$ or -12$^\circ$ with respect to the normal of the DMD work plane, corresponding to the bright pixel 1 or the dark pixel 0, the light would be reflected in two directions. In our experiment, the light from a halogen lamp illuminated the DMD through an aperture diaphragm and a beam expander, then the modulated patterns were projected onto an object, which was a black-and-white film printed with ``A'', as shown in Fig.~\ref{fig:expresults}(a). Its statistical data of binary values was provided in Table~\ref{tab:table2}. The 0-1 random patterns used occupied the central $160\times160$ micromirrors (pixels) of the DMD. In each pattern, 0 and 1 had the same probability of occurrence. A 1/1.8 inch charge-coupled device (CCD) was used as a bucket detector to integrate the gray values of all pixels in one frame. The recovered images with 7761 frames were presented in Figs.~\ref{fig:expresults}(b)--\ref{fig:expresults}(d). From the curves shown in Fig.~\ref{fig:expPDF}, the experimental data was in good agreement with the theoretical Gaussian curves. \begin{figure}[htbp] \centering \includegraphics[width=0.95\linewidth]{figure-3} \caption{\label{fig:setup}Optical setup for CI. The thermal light emitted from a halogen lamp passes through an aperture diaphragm and a beam expander, and illuminates a DMD. Then, the modulated light is projected onto a black-and-white film (i.e., the object). The total intensities are recorded by a bucket detector.} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.9\linewidth]{figure-4} \caption{\label{fig:expresults}Experimental results. (a) is the binarized image obtained by a camera.
(b)--(d) are the recovered positive-negative images and their difference image, respectively.} \end{figure} \begin{table}[htbp] \caption{\label{tab:table2}Statistical data of binary values in the binarized image taken by a camera} \begin{ruledtabular} \begin{tabular}{ccc} Gray value&Total number of pixels&Proportion\\ \colrule 0 & 24847 & 97.06\%\\ 1 & 753 & 2.94\%\\ \end{tabular} \end{ruledtabular} \end{table} \begin{figure}[htbp] \centering \includegraphics[width=0.9\linewidth]{figure-5} \caption{\label{fig:expPDF}Probability density function curves for the recovered pixel values, compared with theoretical Gaussian function curves. (a)--(b) are the probability density distributions and theoretical Gaussian curves of recovered pixel values falling in each pixel region where the original gray value equals 0 or 1, for positive and negative images, respectively. Here, the value of the pending term $\frac{D(X)}{a^2}$ is set to 120.} \end{figure} \section{\label{sec:level4}Conditional-averaging ghost imaging with a potential application} As mentioned before, the statistical curve of each gray value within a certain pixel region in the positive or negative image corresponds to a Gaussian curve. In Fig.~\ref{fig:twocurves}, we drew two Gaussian curves obtained from two pixel regions corresponding to two gray values. Obviously, the farther the Gaussian curves of two gray values are separated, the bigger is the difference between the two recovered gray values, and the better is the quality of the reconstructed image. We can choose an appropriate measure to describe this distance, e.g., the overlapping area of the two curves, denoted by $\Omega$, which can be treated as a criterion for the reconstruction quality.
Analogously, it is easy to find that for the reconstructed images using the correlation functions, such as $G_2=\langle S\cdot I_n\rangle$, $g_2=\frac{\langle S\cdot I_n\rangle}{\langle S\rangle\langle I_n\rangle}$, $DGI=\langle S\cdot I_n\rangle-\frac{\langle S\rangle}{\langle S_R\rangle}\langle S_R\cdot I_n\rangle$, etc., the conclusion that the reconstructed pixels in each pixel region obey a Gaussian or Gaussian-like distribution is still valid. Thereby, these functions can also use this overlapping area as the image quality measure. Now, let us calculate this overlapping area $\Omega$. In Fig.~\ref{fig:twocurves}, the two curves that correspond to any two original gray values $\varsigma$ and $\tau$ have two means, i.e., $\mu_1$ and $\mu_2$. Generally, as long as the algorithm can reconstruct the object image, it is obvious that there must be a linear relationship between the reconstructed image and the original image, which will be at most affected by noise. For simplicity of mathematics, we suppose the standard deviations are approximately equal, i.e., $\sigma_1\approx\sigma_2=\sigma$. Actually, in both simulation and experiments, we also observed that the standard deviations of the Gaussian curves for all different original gray values were very close to each other. Because the original speckle intensities are independent and identically distributed, when the number of pixels contained in each pixel region is large enough, the standard deviations of the average values of the reference patterns inside these pixel regions will also tend to the same value. Without loss of generality, we can set $\mu_1<\mu_2$. It is easy to calculate the abscissa of the intersection of two curves, i.e., $\frac{\mu_1+\mu_2}{2}$. The shaded area in Fig.~\ref{fig:twocurves} is $\Omega=2\phi(-\frac{\mu_2-\mu_1}{2\sigma})$, where $\phi(x)$ is the standard Gaussian distribution function (the integral of the standard Gaussian probability density function). 
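As a quick numerical sanity check (a sketch, not part of the original analysis), the closed form $\Omega=2\phi(-\frac{\mu_2-\mu_1}{2\sigma})$ for two equal-width Gaussians can be compared against a direct integration of $\min(f_1,f_2)$:

```python
import math

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def norm_cdf(z):
    # Standard Gaussian distribution function Phi(z).
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def overlap_closed_form(mu1, mu2, sigma):
    # Equal-sigma Gaussians cross at (mu1 + mu2)/2, giving
    # Omega = 2 * Phi(-(mu2 - mu1) / (2 * sigma)).
    return 2.0 * norm_cdf(-abs(mu2 - mu1) / (2.0 * sigma))

def overlap_numeric(mu1, mu2, sigma, n=40000):
    # Midpoint-rule integration of min(f1, f2) as an independent check.
    lo = min(mu1, mu2) - 8.0 * sigma
    hi = max(mu1, mu2) + 8.0 * sigma
    dx = (hi - lo) / n
    return dx * sum(min(norm_pdf(lo + (i + 0.5) * dx, mu1, sigma),
                        norm_pdf(lo + (i + 0.5) * dx, mu2, sigma))
                    for i in range(n))

omega = overlap_closed_form(0.0, 1.0, 0.5)  # 2 * Phi(-1), about 0.3173
```

The two evaluations agree, and $\Omega$ shrinks as the means move apart, which is exactly why it can serve as a separability criterion.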
Note that the area is negatively correlated with $\frac{\mu_2-\mu_1}{2\sigma}$, which is a term related to the original gray values $\varsigma$ and $\tau$. If the standard deviations are assumed to be approximately equal, then the well-known formula of contrast-to-noise ratio (CNR) \cite{Chan2010} differs from this term $\frac{\mu_2-\mu_1}{2\sigma}$ only by a constant factor $\sqrt{2}$. To some extent, for binary objects, the CNR is a special case of the overlapping area, and can be derived from the latter; thus the physical meaning of CNR is manifested here. However, $\frac{\mu_2-\mu_1}{2\sigma}$ is not very suitable as an assessment metric of reconstruction quality, for the following reason. For the same reconstructed image, the value of $\frac{\mu_2-\mu_1}{2\sigma}$ calculated from two distant original gray-scale values (such as 0 and 1) is much larger than that of two original gray values which are close to each other (e.g., 0.4 and 0.6), but this does not mean that the former result is much better than the latter. Since both are obtained from the same recovered image, the former merely takes a larger value because it is calculated from two original gray-scale values that are much easier to resolve. To provide a fair comparison, we will introduce a new imaging quality factor named crosspoint-to-standard-deviation ratio (CSR), which is defined as \begin{equation} \textrm{CSR}=\frac{(\mu_2-\mu_1)/2}{\sigma}\delta, \end{equation} where $\delta=\frac{1}{\tau-\varsigma}$. Since the means are related to the original gray values by one and the same linear relationship, the product between the terms $\frac{1}{\tau-\varsigma}$ and $\mu_2-\mu_1$ eliminates the effects of the specific gray values, so that the CSR values obtained by choosing any two original gray values for the reconstructed images are the same.
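The invariance of the CSR under the choice of gray-value pair is easy to verify numerically. In the sketch below, the linear relationship $\mu(d)=a\,d+b$ and the numbers $a$, $b$, $\sigma$ are purely hypothetical stand-ins for a reconstruction:

```python
def csr(mu1, mu2, sigma, g1, g2):
    # CSR = ((mu2 - mu1)/2) / sigma * 1/(g2 - g1)
    return (mu2 - mu1) / (2.0 * sigma * (g2 - g1))

# Hypothetical linear reconstruction mu(d) = a*d + b with a common sigma;
# the specific numbers are illustrative only.
a, b, sigma = 0.54, 7.0, 0.02
mu = lambda d: a * d + b

csr_far = csr(mu(0.0), mu(1.0), sigma, 0.0, 1.0)    # distant gray values
csr_near = csr(mu(0.4), mu(0.6), sigma, 0.4, 0.6)   # nearby gray values
```

Both choices give the same value $a/(2\sigma)$, whereas the bare term $\frac{\mu_2-\mu_1}{2\sigma}$ would differ by a factor of five between the two pairs.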
For any two given original gray values, the larger is the CSR value, the smaller is the overlapping area, the more obviously the two gray values are separated, and the better is the imaging quality. \begin{figure}[htbp] \centering \includegraphics[width=0.9\linewidth]{figure-6} \caption{\label{fig:twocurves}Schematic diagram of two Gaussian curves corresponding to two gray values.} \end{figure} As mentioned above, the positive or negative image is obtained by averaging just the partial set of reference patterns corresponding to bucket values above or below a threshold. Now, we will use the CSR to discuss the effect of using different intervals of partial reference patterns on the reconstruction quality. For the positive image, we define a logic signal, \begin{equation} s_\beta=\begin{cases} 1&s\ge\beta\mu,\\ 0&s<\beta\mu. \end{cases} \end{equation} The number of the patterns that correspond to the bucket values larger than $\beta\mu$ is $T_\beta=T[\int_{\beta\mu}^\infty f_S(s)ds]=T[1-F_S(\beta\mu)]$, where $T$ is the total number of measurements. Then, we can acquire \begin{align} &E(s_\beta Y_n)\nonumber\\ =&\frac{\int s_\beta y_nf_{S,Y_n}(s,y_n)dsdy_n}{\int s_\beta f_S(s)ds}\nonumber\\ =&\frac{\int_{\beta \mu}^\infty\left[\int_0^s f_{S_n}(s-y_n)y_nf_{Y_n}(y_n)dy_n\right]ds}{\int_{\beta\mu}^\infty f_S(s)ds}\nonumber\\ \approx&\frac{\int_{\beta\mu}^\infty\left\{\int_0^\infty [f_{S_n}(s)-f'_{S_n}(s)y_n]y_nf_{Y_n}(y_n)dy_n\right\}ds}{1-F_S(\beta\mu)}\nonumber\\ =&\frac{E(Y_n)[1-F_{S_n}(\beta\mu)]-E(Y_n^2)[0-f_{S_n}(\beta\mu)]}{1-F_S(\beta\mu)}\nonumber\\ =&\frac{E(Y_n)[1-F_{S_n}(\beta\mu)]+E(Y_n^2)f_{S_n}(\beta\mu)}{1-F_S(\beta\mu)}. \end{align} In a similar way, we acquire the formula of $E(s_\beta Y_n^2)$: \begin{equation} E(s_\beta Y_n^2)=\frac{E(Y_n^2)[1-F_{S_n}(\beta\mu)]+E(Y_n^3)f_{S_n}(\beta\mu)}{1-F_S(\beta\mu)}.
\end{equation} Then, there are \begin{align} E(s_\beta I_n)&=\frac{E(I)[1-F_{S_n}(\beta\mu)]+aE(I^2)f_{S_n}(\beta\mu)d_n}{1-F_S(\beta\mu)},\\ E(s_\beta I_n^2)&=\frac{E(I^2)[1-F_{S_n}(\beta\mu)]+aE(I^3)f_{S_n}(\beta\mu)d_n}{1-F_S(\beta\mu)}. \end{align} Thus, the CSR can be written as \begin{equation}\label{eq:CSR} \textrm{CSR}=\frac{|E(s_\beta I_n)|_{d_n=\varsigma}-E(s_\beta I_n)|_{d_n=\tau}|}{2\sqrt{\frac{|E(s_\beta I_n^2)|_{d_n=\tau}-E(s_\beta I_n)^2|_{d_n=\tau}|}{T_\beta}}}\frac{1}{|\varsigma-\tau|}. \end{equation} Now, we discuss the general behavior of the CSR, i.e., its trend as $\beta$ changes, without pursuing its specific values. In Eq.~(\ref{eq:CSR}), since each gray value has little effect on the standard deviation, we set $\tau$ in the denominator equal to 0; because $E(I)\ll\mu$, the distributions of $S$ and $S_n$ can be considered to be approximately the same, and $\beta\mu$ is not much different from $\mu$. Then, the CSR formula can be simplified to \begin{equation} \textrm{CSR}=\frac{aE(I^2)f_S(\beta\mu)\sqrt{T}}{2[1-F_S(\beta\mu)]^{\frac{1}{2}}\sqrt{D(I)}}. \end{equation} Obviously, the larger is the total number of measurements, the higher is the CSR value, and the better is the reconstruction quality. Apart from this, the CSR value also depends on the following factor \begin{equation} g(\beta\mu)=\frac{f_S(\beta\mu)}{[1-F_S(\beta\mu)]^{\frac{1}{2}}}. \end{equation} We take the derivative of this factor with respect to $\beta\mu$ (note that $f^\prime_S(\beta\mu)=0$ under the first-order approximation): \begin{equation} g^\prime(\beta\mu)=\frac{\frac{1}{2}f_S^2(\beta\mu)}{[1-F_S(\beta\mu)]^\frac{3}{2}}>0. \end{equation} It can be concluded that $g(\beta\mu)$ is an increasing function, and the CSR value increases gradually as $\beta$ increases. This means that the patterns that correspond to much larger bucket values (above the mean) will undoubtedly generate a positive image with much higher quality, and vice versa for the negative image formation.
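The monotonic growth of $g(\beta\mu)$ can be checked numerically near the mean of the bucket distribution, where the first-order approximation $f'_S\approx0$ used above is valid. The sketch below takes a standard Gaussian as a stand-in for $f_S$ (an assumption; the analysis above does not fix $f_S$):

```python
import math

def g_factor(x):
    # g = f_S(x) / [1 - F_S(x)]^(1/2), with a standard Gaussian as a
    # stand-in for the bucket distribution; x = 0 marks the mean.
    f = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    F = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return f / math.sqrt(1.0 - F)

# Thresholds beta*mu at and slightly above the mean.
values = [g_factor(x) for x in (0.0, 0.1, 0.25, 0.5)]
```

Note that far above the mean the neglected $f'_S$ term becomes important for a Gaussian $f_S$, so the check is deliberately restricted to thresholds in the vicinity of $\mu$, consistent with the approximation used in the derivation.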
It also helps to explain the inner mechanism of the previous work in super sub-Nyquist single-pixel imaging \cite{YuSensors2019}. \section{\label{sec:level5}Conclusion} In summary, we have developed a probability theory to explain the formation mechanism of CI in which the bucket values are binarized, based on a general model in which the targets are gray-scale and any two thermal reference speckles are independent of each other, all following an arbitrary identical distribution. By building the joint probability density function between the bucket variable and each reference thermal speckle variable, and deducing the related means and variances, we find that the positive-negative images and their difference image are all linear transformations of the object image. Provided that each original gray-scale value has a large enough number of pixels, the reconstructed values falling in every pixel region of the same original gray value will obey a Gaussian distribution, no matter what kind of distribution the speckles obey. The measurement noise is also considered. We have demonstrated the validity of the derived formulas through both simulation and experiments. On the basis of our theory, we also introduce a new image quality metric, the CSR, and prove that the patterns that correspond to much larger bucket values (above the mean) will help generate a positive image of much higher quality, and vice versa for the negative one. Therefore, this work will give rise to many potential practical applications. \begin{acknowledgments} This work was supported by the Natural Science Foundation of Beijing Municipality (Grant No. 4184098), the National Natural Science Foundation of China (Grant No. 61801022), the National Key Research and Development Program of China (Grant No. 2016YFE0131500), the Civil Space Project of China (Grant No. D040301), the International Science and Technology Cooperation Special Project of Beijing Institute of Technology (Grant No.
GZ2018185101), and the Beijing Excellent Talents Cultivation Project - Youth Backbone Individual Project. \end{acknowledgments} \nocite{*}
\section{Introduction} In a very interesting paper Aharonov et al \cite{Aharonov} proposed the idea of a quantum random walk. Here a random walker is constrained to move left or right depending on the state of an auxiliary quantum mechanical system. One then examines the state of the random walker subject to the measurement of the state of the auxiliary system. As an interesting consequence of this quantum random walk, Aharonov et al \cite{Aharonov} found that the walker's distribution could shift by an amount which could be larger than the width of the initial distribution. Further, the displacement could be much larger than the classical displacement. Several proposals \cite{Cavity,Aharonov,Sanders,milburn,lattice,exp,knight,bouwmeester} exist for realizations of the quantum random walk. For example, Aharonov et al gave a cavity QED model where the photon number distribution can get displaced. Sanders et al \cite{Sanders} considered a dispersive interaction in the cavity of the form $ S^{z}(a+a^{\dag})$ and studied the random walk of the field on states on a circle. Other interesting theoretical schemes for implementing quantum walks have been suggested in ion traps \cite{milburn} and in optical lattices \cite{lattice}. Knight et al \cite{knight} further showed that an earlier experiment \cite{bouwmeester} was a realization of quantum random walks. A scheme using linear optical elements has been recently implemented \cite{exp}. Here we propose a method which yields precisely the quantum random walk proposed by Aharonov et al. We use cavity QED; however, we drive the atoms with an external field. Currently there is considerable progress in realizing a variety of high-quality cavities and interactions, and thus proposals like the one presented here are likely to be implemented. The organization of the paper is as follows.
In Sec.II we present the details of our model and show the conditions under which such a model gives rise to an effective Hamiltonian, which we use in Sec.III to realize the quantum random walk. In this section we also present the results for the Wigner function for the state of the quantum walker. In Sec.IV we show how homodyne measurements of the field can be used to check the characteristics of the quantum random walk. In Sec.V we incorporate the effects of decoherence due to the decay of the field in the cavity. In the appendix we discuss the state of the walker if no conditional measurements are made and establish the relation to classical random walks. \section{Effective Hamiltonian for Quantum Random Walk using driven atoms} We consider a two-level Rydberg atom having its higher energy state $|e\rangle$ and lower energy state $|g\rangle$, interacting with a single mode of the electromagnetic field in a cavity. The atom passes through the cavity and interacts resonantly with the field. Further, the atom is driven by a strong classical field. For simplicity we choose the atomic transition frequency, the cavity frequency and the frequency of the driving field to be the same. The Hamiltonian for the system in the interaction picture is written as \begin{equation} H=-i\hbar g\left(S^{+} a-a^{\dag}S^{-}\right)+ \hbar\left(S^{+} {\cal{E}} +S^{-}{\cal{E}}^{*}\right),\label{ham} \end{equation} where $g$ and ${\cal{E}}$ are the coupling constants of the interaction of the atom with the cavity field and with the driving field. We have chosen $g$ as real and ${\cal{E}}$ as complex. The annihilation (creation) operator for the field in the cavity is $a (a^{\dag})$ and $S^{+},~S^{-}$ are atomic spin operators. The last term in Eq.(\ref{ham}) is the interaction with the external field. We further rewrite the above Hamiltonian in a picture in which the interaction with the external field has already been diagonalized.
\begin{equation} |\bar{\psi}\rangle=e^{iht}|\psi\rangle;~h=S^+{\cal{E}}+S^-{\cal{E}}^*, \label{h} \end{equation} where $|\bar{\psi}\rangle$ is transformed atomic state in new picture from old atomic state $|\psi\rangle$. The Hamiltonian in this picture is \begin{eqnarray} \label{newH} &&\bar{H}=-ige^{iht}(S^+a-S^-a^{\dag})e^{-iht},\\ \label{trnsf} &&e^{iht}\equiv\cos(|{\cal{E}}|t)+\frac{i h}{|{\cal{E}}|}\sin(|{\cal{E}}|t). \end{eqnarray} The atomic spin operators $S^{\pm}$ transform as \begin{eqnarray} \label{s+} e^{iht}S^{+}e^{-iht}\equiv S^+\cos^2(|{\cal{E}}|t)+\frac{{\cal{E}}^{*2}} {|{\cal{E}}|^2}\sin^2(|{\cal{E}}|t)S^{-}\nonumber\\-\frac{2i{\cal{E}^{*}}}{|{\cal{E}}|} S^{z}\sin(|{\cal{E}}|t)\cos(|{\cal{E}}|t),\\ \label{s-} e^{iht}S^{-}e^{-iht}\equiv S^{-}\cos^2(|{\cal{E}}|t)+\frac{{\cal{E}}^{2}} {|{\cal{E}}|^2}\sin^2(|{\cal{E}}|t)S^{+}\nonumber\\+\frac{2i{\cal{E}}}{|{\cal{E}}|} S^{z}\sin(|{\cal{E}}|t)\cos(|{\cal{E}}|t). \end{eqnarray} Using Eqs.(\ref{s+}) and (\ref{s-}), Eq.(\ref{newH}) becomes \begin{eqnarray} \bar{H}=-ig\left(S^+\cos^2(|{\cal{E}}|t)+\frac{{\cal{E}}^{*2}} {|{\cal{E}}|^2}\sin^2(|{\cal{E}}|t)S^{-}\right.\nonumber\\ \left.-\frac{2i{\cal{E}^{*}}}{|{\cal{E}}|} S^{z}\sin(|{\cal{E}}|t)\cos(|{\cal{E}}|t)\right)a-H.c. \label{Hbar} \end{eqnarray} We note that the Hamiltonians of the above form have been previously used to treat the inhibition of the spontaneous emission \cite{spe} and for the production of Schrodinger cat states \cite{Solano}. We assume that the atom is driven strongly so that $|{\cal{E}}|$ is large and hence we drop rapidly oscillating terms from Eq.(\ref{Hbar}) {\it i.e.} $e^{\pm2i|{\cal{E}}|t}\Rightarrow0$. Then Eq.(\ref{Hbar}) reduces to \begin{equation} \bar{H}=-\frac{ig}{2}\left(S^++\frac{{\cal{E}}^{*2}}{|{\cal{E}}|^2}S^{-}\right)a-H.c. \label{newHbar} \end{equation} We choose ${\cal{E}}^{*2}/{|\cal{E}}|^2=1$, in general, this can also be done by adjusting phases with atomic operators. 
Then Eq.(\ref{newHbar}) takes the form \begin{equation} \bar{H}_{eff}=gS^{x}\left(\frac{a-a^{\dag}}{i}\right). \label{netH} \end{equation} Note the appearance of the generator of the well-known displacement operator $D(\alpha)=\exp(a^{\dag}\alpha-a\alpha^{*})$ in Eq.(\ref{netH}). In particular we have the momentum operator (the out-of-phase quadrature of the field). Further, it should also be noted that $h$ as defined by Eq.(\ref{h}) commutes with $\bar{H}_{eff}$. In the original interaction picture the Hamiltonian for our model will be \begin{equation} H_{eff}=gS^{x}\left(\frac{a-a^{\dag}}{i}\right)+2|{\cal{E}}|S^{x}. \label{heff} \end{equation} In the effective Hamiltonian (\ref{heff}) the field displacement generator appears together with an atomic operator, which can produce a displacement of the field state conditioned on the atomic state. \section{realization of random walk} We next examine the evolution of the system of the two-level atom and the field inside the cavity. Let us consider that initially the atom is in the superposition state $|\Phi\rangle=(c_1|e\rangle+c_2|g\rangle)$ and the field is in a coherent state $|\alpha\rangle$. Using Eq.(\ref{heff}) the combined state of the atom-cavity system after time $t$ is given by \begin{eqnarray} |\psi(t)\rangle&=&\exp\left[gtS^{x}(a^{\dag}-a)-2i|{\cal{E}}|tS^{x}\right]|\Phi\rangle|\alpha\rangle,\\ &=&\frac{c_+e^{-i\phi}}{2}\left(|g\rangle+|e\rangle\right)|\alpha+gt/2\rangle\nonumber\\ &+&\frac{c_-e^{i\phi}}{2}\left(|g\rangle-|e\rangle\right)|\alpha-gt/2\rangle,\\ \label{app} &=&|g\rangle \left[\frac{c_+e^{-i\phi}}{2}|\alpha+gt/2\rangle+\frac{c_-e^{i\phi}}{2}|\alpha-gt/2\rangle\right]\nonumber\\ &+&|e\rangle\left[\frac{c_+e^{-i\phi}}{2}|\alpha+gt/2\rangle-\frac{c_-e^{i\phi}}{2}|\alpha-gt/2\rangle\right];\\ \phi&=&\left(|{\cal{E}}|+\frac{g}{2}Im(\alpha)\right)t; \end{eqnarray} where $c_+=c_1+c_2$ and $c_-=c_1-c_2$. Using the normalization of the atomic states we can select $c_-/c_+=\tan\theta$.
Thus the detection of the atom in state $|e\rangle$ or $|g\rangle$ leaves the cavity field in a superposition of the states $|\alpha+gt/2\rangle$ and $|\alpha-gt/2\rangle$. For small values of $gt$ the states $|\alpha+gt/2\rangle$ and $|\alpha-gt/2\rangle$ overlap almost completely, and thus quantum interference effects between $|\alpha+gt/2\rangle$ and $|\alpha-gt/2\rangle$ become significant. Assume that the atom is detected in its ground state $|g\rangle$. Then the state of the field inside the cavity can be written as \begin{eqnarray} |\psi_f\rangle\propto\left[e^{-i|{\cal{E}}|t}D(gt/2)+e^{i|{\cal{E}}|t}\tan(\theta)D(-gt/2)\right]|\alpha\rangle. \end{eqnarray} Clearly, after one atom passes through the cavity, the field inside the cavity is displaced backward or forward along a line, in a random way, by a step of $gt/2$. We can now iterate the above step to obtain the state of the field after the passage of $N$ atoms. We assume that the atoms enter the cavity in the state $|\Phi\rangle$ and, after interacting with the field inside the cavity, are detected in their ground state $|g\rangle$. Note that the displacement operators appearing in the above state commute with each other, $[D(gt/2),D(-gt/2)]=0$, for real $gt$.
Thus the field state after the passage of $N$ atoms is given by \begin{eqnarray} |\psi_f(N)\rangle&=& C\left[e^{-i|{\cal{E}}|t}D(gt/2)+e^{i|{\cal{E}}|t}\tan(\theta)D(-gt/2)\right]^N|\alpha\rangle,\nonumber\\ &=&C\sum_{m=0}^{N} \left( \begin{array}{c} N \\ m \\ \end{array} \right)\left[e^{-im|{\cal{E}}|t}D^m\left(\frac{gt}{2}\right)\times\right.~~~~~~~~\nonumber\\ &&\left.e^{i(N-m)|{\cal{E}}|t} (\tan\theta)^{N-m}D^{N-m}\left(-\frac{gt}{2}\right)\right]|\alpha\rangle,\nonumber\\ &=&C\sum_{m=0}^{N} \left( \begin{array}{c} N \\ m \\ \end{array} \right) e^{i(N-2m)|{\cal{E}}|t}(\tan\theta)^{N-m}\nonumber\\ &&D^{N-2m}(-gt/2)|\alpha\rangle,\\ &=&C\sum_{m=0}^{N} \left( \begin{array}{c} N \\ m \\ \end{array} \right) e^{i(N-2m)\phi}(\tan\theta)^{N-m}\nonumber\\ &&|\alpha-(N-2m)gt/2)\rangle, \label{final} \end{eqnarray} where $C$ is normalization constant and we have used the property of the displacement operator $D^{-1}(\alpha)=D(-\alpha)$. On writing the above result in coordinate space representation, we get the wavefunction $\psi_N(x,\alpha)=\langle x|\psi_f(N)\rangle$ \begin{eqnarray} \psi_N(x,\alpha)=C\sum_{m=0}^{N} \left( \begin{array}{c} N \\ m \\ \end{array} \right) e^{i(N-2m)\phi}(\tan\theta)^{N-m}\nonumber\\\psi_{\alpha}\left(x+[N-2m]l\right),\label{wave} \end{eqnarray} where $\psi_{\alpha}(x)\equiv\langle x|\alpha\rangle$ is the wavefunction corresponding to the initial cavity field state $|\alpha\rangle$ which is centered at $x=\alpha$ and the step size of the random walker is $l=gt/2$. We note that we have recovered the result of Aharonov et al \cite{Aharonov}. In Fig.\ref{fig1} we have plotted the probability amplitude distribution for initial wave function $\psi_{\alpha}(x)\sim \exp[-(x-\alpha)^2/2]$ for real values of $x$ and $\alpha=0$. The displacement depends on $\theta$, $\phi$ and the number of steps $N$. 
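The distribution of Fig.~\ref{fig1} can be reproduced by direct summation of the wavefunction above. The following sketch (not the authors' code) uses the Fig.~1 parameters with an unnormalized Gaussian $\psi_{\alpha}$, computes $P(x)$ on a grid, and evaluates its mean, illustrating that the displacement can exceed the classical bound $Nl$:

```python
import math

# Parameters of Fig. 1: alpha = 0, step l = 0.05, theta = 2*pi/3, phi = 2*pi.
N, l, theta = 10, 0.05, 2.0 * math.pi / 3.0

def psi(x):
    # The wavefunction psi_N(x, alpha) with psi_alpha(x) ~ exp(-x^2/2);
    # for phi = 2*pi every phase factor exp(i(N-2m)phi) equals 1, so the
    # superposition is real.
    t = math.tan(theta)
    return sum(math.comb(N, m) * t ** (N - m)
               * math.exp(-0.5 * (x + (N - 2 * m) * l) ** 2)
               for m in range(N + 1))

dx = 12.0 / 2000
xs = [-6.0 + i * dx for i in range(2001)]
p = [psi(x) ** 2 for x in xs]
norm = sum(p) * dx
mean_x = sum(x * w for x, w in zip(xs, p)) * dx / norm
```

The mean of $P(x)$ comes out well beyond $-Nl=-0.5$, i.e., the constructive interference between the shifted Gaussians displaces the walker far more than any classical sequence of $N$ steps of size $l$ could.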
The unexpected displacement in the state of the random walker is the result of constructive quantum interference between the states generated in the various steps, which comes from the off-diagonal terms in $P(x)=|\psi_N(\alpha,x)|^2$. We have checked this by dropping the off-diagonal terms in $P(x)$; in that case $P(x)$ retains the shape of the initial wave packet but shifts by an amount $Nl$. The displacement of the random walker is not bounded by the classically possible maximum and minimum displacements $\pm Nl$. The quantum interference leads to an arbitrary displacement in the random walker's position, which can be much larger than $\pm Nl$. A small squeezing of the wavepacket is also generated by these interference effects. The choice of the phase $\phi$ is also critical for the displacement of the quantum walker: for example, for the parameters used in Fig.\ref{fig1} the maximum displacement occurs when $\phi$ is an integer multiple of $\pi$, and the displacement is minimal when $\phi$ is a half-integer multiple of $\pi$. \begin{figure} \includegraphics[width=3in, height=3in]{FIG1.eps} \caption{The probability distribution $P(x)$ for the position of the quantum random walker, assuming an initial Gaussian wave packet $\exp[-(x-\alpha)^2/2]$ with $\alpha=0$, step size $l=0.05$, $\phi=2\pi$ and $\theta=2\pi/3$.} \label{fig1} \end{figure} \begin{figure} \includegraphics[width=5in, height=6in]{FIG2.eps} \caption{The Wigner function $W(x,p)$ of the state of the random walker after (a) $N=0$ and (b) $N=10$ steps, using the same parameters as in Fig.\ref{fig1}.} \label{fig2} \end{figure} To visualize the quantum interference we plot the Wigner function of the random walker in Fig.\ref{fig2}. The Wigner distribution for any state $\psi(x)$ can be obtained by using the definition \cite{wigner}, \begin{equation} W(x,p)=\frac{1}{\pi\hbar}\int e^{2ipy/\hbar} \psi(x-y)\psi^{*}(x+y)dy.
\end{equation} In Fig.\ref{fig2}(a) the field is in its initial coherent state and the Wigner function is a perfect Gaussian. As the field is displaced in random steps, by passing atoms through the cavity, quantum interference effects start to deform the Wigner function away from the Gaussian. After a few steps the Wigner function is squeezed in the $x$ quadrature and gets displaced by an arbitrary distance in $x$. In Fig.\ref{fig2}(b) (see also Fig.\ref{fig4}(a)) we have shown the Wigner function after $10$ random steps for an initial Gaussian wave packet. The squeezing is also clear from Fig.\ref{fig1}, which shows the narrowing of the distribution $P(x)$. It is clear that the displacement in the position of the random walker results from quantum interference, which is a consequence of quantum coherence between the states generated in the random steps. \section{measurement of the state of the random walker} We next discuss how we can probe the quantum state of the random walker. We propose homodyne techniques \cite{homodyne} for measuring the state of the random walker. Such a homodyne measurement can be performed by mixing an external resonant coherent field into the cavity and then probing the resultant cavity field by passing a test atom through the cavity. In the previous section, we have shown how the cavity field is displaced backward or forward in a random step by passing a single atom through the cavity. The state of the field in the cavity after such $N$ steps can be monitored by homodyne measurements, which can be implemented in the same experimental setup. After displacing the field inside the cavity by $N$ random steps, by passing $N$ atoms, a resonant external coherent field $|\beta\rangle$ is injected into the cavity.
After adding the external field, the state of the resultant field in the cavity is \begin{eqnarray} |\psi_H\rangle&=&C\sum_{m=0}^{N} \left( \begin{array}{c} N \\ m \\ \end{array} \right) e^{i(N-2m)\phi}(\tan\theta)^{N-m}\nonumber\\ &&D(\beta)|\alpha-(N-2m)gt/2)\rangle,\nonumber\\ &=&C\sum_n\sum_{m=0}^{N} \left( \begin{array}{c} N \\ m \\ \end{array} \right) e^{i(N-2m)\phi}(\tan\theta)^{N-m}\nonumber\\ &&\langle n|D(\beta)|\alpha-(N-2m)gt/2)\rangle|n\rangle,\nonumber\\ \label{disp} &=&\sum_n F_n|n\rangle\\ \label{fm} F_n&=&C\sum_{m=0}^{N} \left( \begin{array}{c} N \\ m \\ \end{array} \right) e^{i(N-2m)\phi}(\tan\theta)^{N-m}\nonumber\\ &&\langle n|D(\beta)|\alpha-(N-2m)gt/2)\rangle. \end{eqnarray} Now we bring a similar atom in its lower energy state $|g\rangle$ to probe the cavity field. The probability of detecting the probe atom in its lower state $|g\rangle$ after crossing the cavity in time $t_p$ is \begin{equation} P_g=\sum_n|F_n|^2\cos^{2}(gt_p\sqrt{n}). \end{equation} The interaction time $t_p$ for the probe atom is selected such that if there are photons in the cavity it leaves the cavity in its higher energy state $|e\rangle$ with larger probability. If we choose the external field $|\beta\rangle$ such that $\beta=-\alpha+\delta$, the probe atom will leave the cavity in its ground state with larger probability when $\delta$ is equal and opposite to the displacement of the random walker from the initial position $\alpha$. Thus the probability of the probe atom leaving the cavity in its lower state $|g\rangle$ would, as a function of $\delta$, have a peak corresponding to the position of the random walker after $N$ steps. In Fig.\ref{fig3}, we plot the probability of detecting the probe atom in its lower state as a function of $\delta$. The solid curve is the result of the homodyne measurement of the position of the random walker in its initial state.
The dashed curve corresponds to the homodyne measurement after $10$ steps, using the same parameters as in Fig.\ref{fig1}. Clearly the homodyne measurement yields the state of the quantum walker (Fig.\ref{fig1}). Thus the homodyne measurement can be an elegant way of monitoring the position of the random walker in our model of realizing quantum random walks. \begin{figure} \centering \includegraphics[width=3in]{FIG3.eps} \caption{The probability of detecting the probe atom in its ground state as a function of $\delta$ for the state of the quantum random walker after $N=0$ (solid line) and $N=10$ (dashed line) steps. The parameters used are the same as in Fig.\ref{fig1}, and the interaction time for the probe atom is selected such that $gt_p=1.5 \pi$.} \label{fig3} \end{figure} \begin{figure}[h] \includegraphics[width=5in, height=5in]{FIG4.eps} \caption{The decoherence of the state of the random walker in terms of the Wigner function at different times: (a) $\kappa t=0$, (b) $\kappa t=1/(4N^2l^2)$, (c) $\kappa t=1/(2N^2l^2)$, (d) $\kappa t =2/(N^2l^2)$; the other parameters are the same as in Fig.\ref{fig2}(b).} \label{fig4} \end{figure} \section{decoherence of the generated state of the random walker} Quantum random walks differ from classical random walks through quantum interference, which may lead to much larger displacements of the quantum random walker than the classically possible maximum displacement. These quantum interferences are a consequence of coherence in the system. Clearly we need the coherence to live for a long time, and thus it is important to study the effects of decoherence on the system. In this section we study the decoherence of the state of the random walker due to damping in the cavity.
This can be done using the master equation \begin{equation} \dot{\rho}=-\frac{\kappa}{2}(a^{\dag}a\rho-2a\rho a^{\dag}+\rho a^{\dag}a), \end{equation} where $\kappa$ is the cavity field decay rate; we carry out the analysis in the absence of thermal photons. For the initial state (\ref{final}) we find the density matrix after time $t$, \begin{eqnarray} \rho(t)&=&|C|^2\sum_{m=0}^{N}\sum_{n=0}^{N}\left( \begin{array}{c} N \\ m \\ \end{array} \right)\left( \begin{array}{c} N \\ n \\ \end{array} \right) e^{2i(n-m)\phi}(\tan\theta)^{2N-m-n}\nonumber\\ &&\langle \alpha-(N-2m)l|\alpha-(N-2n)l\rangle^{(1-e^{-\kappa t})}\nonumber\\ &&|\alpha-(N-2m)l\rangle_t\langle\alpha-(N-2n)l|_t~, \label{deco} \end{eqnarray} where $|\zeta\rangle_t\equiv|\zeta e^{-\kappa t/2}\rangle$. In the limit $\kappa t\ll 1$, Eq.~(\ref{deco}) simplifies to \begin{eqnarray} \rho(t)&=&|C|^2\sum_{m=0}^{N}\sum_{n=0}^{N}\left( \begin{array}{c} N \\ m \\ \end{array} \right)\left( \begin{array}{c} N \\ n \\ \end{array} \right) e^{2i(n-m)\phi}(\tan\theta)^{2N-m-n}\nonumber\\ &&e^{-2\kappa tl^2(n-m)^2}|\alpha-(N-2m)l\rangle\langle\alpha-(N-2n)l|. \end{eqnarray} Thus the coherence of the state decays on the time scale $\kappa t\sim 1/(2N^2l^2)$. In Fig.~\ref{fig4} we show the decoherence effects due to cavity damping in the state of the quantum random walker in terms of the Wigner function. As time progresses from (a) to (d), the decoherence reduces the quantum interference effects and the state of the random walker decays to its initial state. In Fig.~\ref{fig4}(a) the Wigner function for the state of the random walker after $10$ steps, using the parameters of Fig.~\ref{fig2}(b), is plotted; it is squeezed in the $x$ quadrature and centered at $x\approx-2$. As a result of decoherence due to cavity damping, the quantum interferences start decaying and the Wigner function evolves to a perfect Gaussian shape centered at $x=Nl$, Fig.~\ref{fig4}(c).
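For completeness, the exponential decoherence factor $e^{-2\kappa t l^2(n-m)^2}$ obtained above can be checked against the standard coherent-state overlap; the following intermediate step is our sketch:

```latex
% For coherent states, |<z_1|z_2>| = exp(-|z_1-z_2|^2/2). Here the two
% amplitudes differ by 2(m-n)l, which is real for real alpha and l, so
\[
  \langle \alpha-(N-2m)l\,|\,\alpha-(N-2n)l\rangle
  \;=\; e^{-2l^{2}(n-m)^{2}}.
\]
% Raising this overlap to the power (1-e^{-\kappa t}) \approx \kappa t,
% valid for \kappa t \ll 1, reproduces the factor
\[
  e^{-2\kappa t\, l^{2}(n-m)^{2}}
\]
% appearing in the short-time form of the density matrix.
```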
Now the field inside the cavity is almost in a coherent state and decays at the cavity damping rate. Further, the lifetime of the state of the quantum random walker is given by $T_N=T_c/(2N^2l^2)$, where $T_c=1/\kappa$ is the lifetime of the field in the cavity. \section{conclusions} In conclusion, we have shown a simple possible realization of quantum random walks using cavity QED. We have proposed homodyne detection for monitoring the position of the random walker. We have also discussed the decoherence effects and the time scales on which the quantum nature of the random walk survives. Thanks to newly emerging technologies, various improved cavities are feasible these days \cite{cavity}, which makes our proposal all the more interesting and realistic. Such a realization of quantum random walks may be useful for implementing various algorithms \cite{algorithms} based on quantum random walks. Finally, it should be noted that generalizations of the present work to more than one dimension are possible.
\section{Introduction} By a ring we mean a commutative ring with unity. Let $R$ be an integral domain. We denote by $R^{\ast}$ the group of all invertible elements of $R$. \medskip The main motivation of this paper is the description of polynomial composites as algebraic objects. The related work started in \cite{mm1}, where basic algebraic properties were investigated, and continued in \cite{mm2}, where the focus was on the ACCP property and atomicity. This paper is the finalization of fundamental research on polynomial composites. \medskip D.D.~Anderson, D.F.~Anderson and M.~Zafrullah in \cite{1} called the object $A+XB[X]$ a composite, where $A\subset B$ are fields. \medskip There are many works where composites are used as examples exhibiting certain properties; the most important ones are recalled below. \medskip In 1976 the authors of \cite{y1} considered structures of the form $D+M$, where $D$ is a domain and $M$ is a maximal ideal of a ring $R$ with $D\subset R$. Next, Costa, Mott and Zafrullah (\cite{y2}, 1978) considered composites of the form $D+XD_S[X]$, where $D$ is a domain and $D_S$ is the localization of $D$ relative to a multiplicative subset $S$. In 1988 Anderson and Ryckaert \cite{y5} studied class groups of $D+M$ constructions. Zafrullah in \cite{y3} continued research on the structure $D+XD_S[X]$ and showed that if $D$ is a GCD-domain, then the behaviour of $D^{(S)}=\{a_0+\sum a_iX^i\mid a_0\in D, a_i\in D_S\}=D+XD_S[X]$ depends upon the relationship between $S$ and the prime ideals $P$ of $D$ such that $D_P$ is a valuation domain (\cite{y3}, Theorem 1). Fontana and Kabbaj in 1990 (\cite{y4}) studied the Krull and valuative dimensions of the composite $D+XD_S[X]$. In 1991 the article \cite{1} collected all previous results about composites and began to develop a further theory; it was there that the considered structures were officially called composites.
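As a simple illustration of the construction $D+XD_S[X]$ (our own example, not taken from the cited works), take $D=\mathbb{Z}$ and $S=\{2^n\mid n\geq 0\}$, so that $D_S=\mathbb{Z}[\frac{1}{2}]$:

```latex
% Illustrative example: D = Z and S = {2^n : n >= 0}, so D_S = Z[1/2].
\[
  D^{(S)} = \mathbb{Z}+X\,\mathbb{Z}[\tfrac{1}{2}][X]
  = \bigl\{a_0+a_1X+\dots+a_kX^k \bigm|
      a_0\in\mathbb{Z},\; a_1,\dots,a_k\in\mathbb{Z}[\tfrac{1}{2}]\bigr\}.
\]
% For instance, 3+(5/4)X+(7/8)X^2 lies in D^{(S)},
% while (1/2)+X does not, since its constant term is not an integer.
```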
\medskip In the second section we present many properties of polynomial composites as domains. Recall that a domain $R$ satisfies the ACCP condition (has ACCP) if each increasing sequence of principal ideals is stationary (Proposition \ref{pr1}). A domain $R$ is atomic if every nonzero noninvertible element can be presented as a product of irreducible elements (atoms) (Proposition \ref{pr1}). The domain $R$ is a bounded factorization domain (BFD) if $R$ is atomic and for each nonzero nonunit of $R$ there is a bound on the length of factorizations into products of irreducible elements (Propositions \ref{pr2}, \ref{pr3}). We say that $R$ is a half-factorial domain (HFD) if it is atomic and each factorization of a nonzero nonunit of $R$ into a product of irreducible elements has the same length (Propositions \ref{pr4}, \ref{pr10}). The domain $R$ is an idf-domain (for irreducible-divisor-finite) if each nonzero element of $R$ has at most a finite number of nonassociate irreducible divisors (Propositions \ref{pr5}, \ref{pr6}). A domain is called a finite factorization domain (FFD) if each nonzero nonunit element has only a finite number of nonassociate divisors (Proposition \ref{pr7}). In general, $$\begin{array}{ccccccccc} &&HFD\\ &\mbox{\rotatebox{-135}{$\Leftarrow$}} & \Uparrow & \mbox{\rotatebox{135}{$\Rightarrow$}} \\ UFD&\Rightarrow &FFD&\Rightarrow &BFD&\Rightarrow &ACCP&\Rightarrow& atomic \\ &\mbox{\rotatebox{135}{$\Leftarrow$}}&\Downarrow \\ &&idf \end{array}$$ Recall that $R$ is an S-domain if for each height-one prime ideal $P$ of $R$, $ht P[X]=1$ in $R[X]$ (Proposition \ref{pr8}). A commutative ring $R$ is called a Hilbert ring if every prime ideal of $R$ is an intersection of maximal ideals of $R$ (Theorem \ref{tw9}). In Proposition \ref{pr12} we give information about the composite cover. \medskip In the third section we present statements about polynomial composites as Dedekind domains.
It turns out that polynomial composites of the form $K+XL[X]$ are Dedekind domains for finite field extensions $K\subset L$ (Theorem \ref{Dedekind}). \section{Results} In the papers \cite{mm1} and \cite{mm2}, polynomial composites with the properties of atomicity and ACCP are presented. The results below are complementary. \begin{pr} \label{pr1} Let $T=K+XL[X]$, where $K$, $L$ are fields with $K\subset L$. Let $D$ be a subring of $K$ and $R=D+XL[X]$. Then: \begin{itemize} \item[(a) ] $R$ is atomic if and only if $T$ is atomic and $D$ is a field. \item[(b) ] $R$ satisfies ACCP if and only if $T$ satisfies ACCP and $D$ is a field. \end{itemize} \end{pr} \begin{proof} First suppose that $D$ is not a field and let $d$ be a nonzero nonunit of $D$. Then $f=d\dfrac{f}{d}$ for each $f\in XL[X]$, so no element of $XL[X]$ is irreducible ($XL[X]$ is a maximal ideal of $T$). Hence if $R$ is either atomic or satisfies ACCP, $D$ must be a field. So let $D$ be a field. \begin{itemize} \item[(a) ] Up to multiplication by an $\alpha\in K^{\ast}$ (resp. $\alpha\in D^{\ast}$), each element of $T$ (resp. $R$) has the form $f$ or $1+f$ for some $f\in XL[X]$. Each of these elements is irreducible in $R$ if and only if it is irreducible in $T$ (\cite{16}, Lemma 1.5; 27). If $x$ is a product of irreducibles, we may assume that each irreducible factor has the form $f$ or $1+f$ for some $f\in XL[X]$. Thus $x$ is a product of irreducible elements in $R$ if and only if it is a product of irreducible elements in $T$. Hence $R$ is atomic if and only if $T$ is atomic. \item[(b) ] We first observe that a principal ideal of $R$ or $T$ may be generated by either $f$ or $1+f$ for some $f\in XL[X]$. Let $f$, $g\in XL[X]$. It is easily verified that $(1+f)R\subset (1+g)R$ if and only if $(1+f)T\subset (1+g)T$, that $fR\subset (1+g)R$ if and only if $fT\subset (1+g)T$, and that $fR\subset gR$ if and only if $fT\subset gT$. Also, if $fT\subset gT$, then $fR\subset (\alpha g)R$ for some $\alpha\in K^{\ast}$.
Hence, to each chain of principal ideals of length $s$ in $R$ starting at $fR$ (resp., $(1+f)R$), there corresponds a chain of principal ideals of length $s$ in $T$ starting at $fT$ (resp., $(1+f)T$), and conversely. Thus $R$ satisfies ACCP if and only if $T$ satisfies ACCP. \end{itemize} \end{proof} In \cite{0} Anderson, Anderson and Zafrullah asked the following question: \medskip \noindent {\bf Question 1.} If $R$ is atomic, is $R[X]$ atomic? \medskip In \cite{mm2} I considered this question and concluded that the answer is negative. \medskip Propositions \ref{pr2} and \ref{pr3} concern the BFD property of polynomial composites. \begin{pr} \label{pr2} If $A+XB[X]$ is a Noetherian domain, where $A\subset B$ are domains, then $A+XB[X]$ is a BFD. \end{pr} \begin{proof} \cite{0}, Proposition 2.2. \end{proof} \begin{pr} \label{pr3} Let $T=K+XL[X]$, where $K\subset L$ are fields. Let $D$ be a subring of $K$ and $R=D+XL[X]$. Then $R$ is a BFD if and only if $T$ is a BFD and $D$ is a field. \end{pr} \begin{proof} First suppose that $R$ is a BFD. Then $D$ must be a field (\cite{0}, Proposition 1.2). Again from the proof of (\cite{0}, Proposition 1.2) we get that $R$ is a BFD if and only if $T$ is a BFD. \end{proof} Propositions \ref{pr4} and \ref{pr10} concern the HFD property of polynomial composites. \begin{pr} \label{pr4} Let $T=K+XL[X]$, where $K\subset L$ are fields. Let $D$ be a subring of $K$ and $R=D+XL[X]$. Then $R$ is a HFD if and only if $D$ is a field and $T$ is a HFD. \end{pr} \begin{proof} As in Proposition 1.2 of \cite{0}, $D$ is necessarily a field. The proof of that proposition shows that a factorization into irreducibles in $R$ has the same length as such a factorization in $T$. Hence $R$ is a HFD if and only if $T$ is a HFD. \end{proof} \begin{pr} \label{pr10} Let $A$ be a subring of a field $K$. Then $R=A+XK[X]$ is a HFD if and only if $A$ is a field. \end{pr} \begin{proof} ($\Rightarrow$) Clearly, $R$ a HFD implies that $A$ is a HFD.
Suppose that $A$ is not a field, so there is an irreducible element $a\in A$. Then $X=a^n(X/a^n)$ for all $n\in\mathbb{N}$, so $X$ admits factorizations of unbounded length, a contradiction. Thus $A$ must be a field. \medskip ($\Leftarrow$) Suppose that $A$ is a field. By (moje) $R=A+XK[X]$ is atomic. The proof of Theorem 2.1 of \cite{mm1} shows that an irreducible element of $R$ is of the form $aX$, where $a\in K$, or $a(1+Xf(X))$, where $a\in A$, $f(X)\in K[X]$, and $1+Xf(X)$ is irreducible in $K[X]$. Thus for any $g(X)\in R$, the number of irreducible factors from $R$ is the same as the number of irreducible factors in a representation of $g(X)$ as a product of irreducible factors from the PID $K[X]$. Hence $R$ is a HFD. \end{proof} Recall that $R$ is an idf-domain if each nonzero element of $R$ has at most a finite number of nonassociate irreducible divisors. \begin{pr} \label{pr5} Let $T=K+XL[X]$, where $K\subset L$ are fields. Let $M$ be a subfield of $K$ and $R=M+XL[X]$. Then: \begin{itemize} \item[(a) ] Suppose that $XL[X]$ contains an irreducible element. Then $R$ is an idf-domain if and only if $T$ is an idf-domain and the multiplicative group $K^{\ast}/M^{\ast}$ is finite. \item[(b) ] Suppose that $XL[X]$ contains no irreducible elements. Then $R$ is an idf-domain if and only if $T$ is an idf-domain. \end{itemize} \end{pr} \begin{proof} (a) We first note that an element of $XL[X]$ is irreducible in $R$ if and only if it is irreducible in $T$. Let $f\in XL[X]$ be irreducible. First suppose that $R$ is an idf-domain. Then $af\mid f^2$ for all $a\in K^{\ast}$. Note that $af$ and $bf$ are irreducible in both $R$ and $T$, and that they are associates in $R$ if and only if $a$ and $b$ lie in the same coset in $K^{\ast}/M^{\ast}$. Hence $K^{\ast}/M^{\ast}$ is finite, for otherwise $f^2$ would have infinitely many nonassociate irreducible divisors in $R$. Let $y\in T$. By multiplying by a suitable $a\in K^{\ast}$, we may assume that $y\in R$. Let $y_1, y_2, \dots, y_n$ be the distinct nonassociate irreducible divisors of $y$ in $R$.
It is easily verified that any irreducible divisor of $y$ in $T$ is associated to one of the $y_i$'s. Thus $T$ is also an idf-domain. Conversely, suppose that $T$ is an idf-domain and that $K^{\ast}/M^{\ast}$ is finite. Let $z\in R$. Let $z_1, z_2, \dots, z_r$ be a complete set of nonassociate irreducible divisors of $z$ in $T$, which we may assume are all in $R$, and let $a_1, a_2, \dots, a_s$ be a set of coset representatives of $K^{\ast}/M^{\ast}$. Then any irreducible divisor of $z$ in $R$ is an associate of some $a_iz_j$. Hence $R$ is an idf-domain. \medskip (b) Since $XL[X]$ has no irreducible elements, an irreducible element in $T$ (resp., in $R$) has the form $a+f$ for some $a\in K^{\ast}$ (resp., $a\in M^{\ast}$) and $f\in XL[X]$. Hence, up to associates, each has the form $1+f$ for some $f\in XL[X]$. It is then easily verified that $\{1+f_1, 1+f_2, \dots, 1+f_n\}$ is a complete set of nonassociate irreducible divisors of a given element with respect to $R$ if and only if it is a complete set of nonassociate irreducible divisors with respect to $T$. \end{proof} \begin{pr} \label{pr6} Let $T$ be a quasilocal integral domain of the form $K+XL[X]$, where $K\subset L$ are fields. Let $D$ be a subring of $K$ and $R=D+XL[X]$. If $D$ is not a field, then $R$ is an idf-domain if and only if $D$ has only a finite number of nonassociate irreducible elements. \end{pr} \begin{proof} Let $d$ be a nonzero nonunit of $D$. Then $f=d(f/d)$ shows that no element of $XL[X]$ is irreducible and that $d$ divides each element of $XL[X]$. Also, $y=d+f=d(1+f/d)$ with $1+f/d\in R^{\ast}$ (since $T$ is quasilocal) shows that $y$ is irreducible in $R$ if and only if $d$ is irreducible in $D$. Thus $R$ is an idf-domain if and only if $D$ has only a finite number of nonassociate irreducible elements. \end{proof} \noindent {\bf Question 2.} If $R$ is an idf-domain, is $R[X]$ an idf-domain? \medskip Proposition \ref{pr7} concerns the FFD property of polynomial composites.
\begin{pr} \label{pr7} Let $T=K+XL[X]$, where $K\subset L$ are fields. Let $D$ be a subring of $K$ and $R=D+XL[X]$. Then $R$ is a FFD if and only if $T$ is a FFD, $D$ is a field, and $K^{\ast}/D^{\ast}$ is finite. \end{pr} \begin{proof} The proof is similar to that of (\cite{0}, Proposition 5.2). \end{proof} Recall that an integral domain $D$ is called an S-domain if for each prime ideal $P$ of $D$ with $ht P=1$, $ht P[X]=1$ in $D[X]$. \begin{lm} \label{l2} For an integral domain $D$, the following statements are equivalent. \begin{itemize} \item[(a) ] $D$ is an S-domain. \item[(b) ] For each prime ideal $P$ of $D$ with $ht P=1$, $D_P$ is an S-domain. \item[(c) ] For each prime ideal $P$ of $D$ with $ht P=1$, $\overline{D_P}$ is a Pr\"ufer domain. \end{itemize} \end{lm} \begin{proof} \cite{1}, Lemma 3.1. \end{proof} \begin{lm} \label{l1} For any integral domain $D$, $D[X]$ is an S-domain. \end{lm} \begin{proof} \cite{1}, Theorem 3.2. \end{proof} \begin{pr} \label{pr8} Let $D$ be an integral domain and $S$ a multiplicatively closed subset of $D$. Then $D+XD_S[X]$ is an S-domain. \end{pr} \begin{proof} Let $R=D+XD_S[X]$ and let $P$ be a height-one prime ideal of $R$. First suppose that $P\cap S\neq\emptyset$. Then $P\supseteq XD_S[X]$, for if $s\in P\cap S$, then $XD_S[X]=sXD_S[X]\subseteq P$. But since $ht P=1$, $P=XD_S[X]$, and then $P\cap S=\emptyset$, a contradiction. Thus we must have $P\cap S=\emptyset$. Then $P_S$ is a height-one prime ideal in $R_S=D_S[X]$. By Lemma \ref{l1}, $R_S$ is an S-domain. Hence $R_P=(R_S)_{P_S}$ is also an S-domain by Lemma \ref{l2} (a)$\Rightarrow$(b). Thus $R$ is an S-domain by Lemma \ref{l2} (b)$\Rightarrow$(a). \end{proof} \medskip Recall that a commutative ring $R$ is called a Hilbert ring if every prime ideal of $R$ is an intersection of maximal ideals of $R$. In \cite{26} it was shown that if $D\subseteq K$, where $K$ is a field, then $D+XK[X]$ is a Hilbert domain if and only if $D$ is a Hilbert domain.
Thus if $D$ is a PID that is not a field and $K$ is the quotient field of $D$, then $D+XK[X]$ is a two-dimensional, non-Noetherian, B\'ezout-Hilbert domain in which every maximal ideal is principal. \begin{tw} \label{tw9} Let $D$ be an integral domain and $S$ a multiplicatively closed subset of $D$ with the property that for a prime $P$ of $D$ with $P\cap S\neq\emptyset$, we have $Q\cap S\neq\emptyset$ for each prime $0\neq Q\subseteq P$. Then $R=D+XD_S[X]$ is a Hilbert domain if and only if $D$ and $D_S$ are Hilbert domains. \end{tw} \begin{proof} ($\Rightarrow$) Suppose that $R$ is a Hilbert domain. Then $D\cong R/XD_S[X]$ is also a Hilbert domain. Suppose that $D_S$ is not a Hilbert domain. Let $Q$ be a nonzero prime ideal of $D$ with $Q\cap S=\emptyset$. Since $D$ is a Hilbert domain, $Q=\bigcap_{\alpha} M_{\alpha}$, where $\{M_{\alpha}\}$ is the set of maximal ideals of $D$ containing $Q$. Since $Q\cap S=\emptyset$, by the hypothesis on $S$ each $M_{\alpha}\cap S=\emptyset$. Hence $Q_S=\bigcap M_{\alpha S}$ is an intersection of maximal ideals of $D_S$. So every nonzero prime ideal of $D_S$ is an intersection of maximal ideals. Since $D_S$ is not a Hilbert domain, the zero ideal is then not an intersection of maximal ideals, and hence there is a nonzero element $u\in D$ such that $u$ is in every nonzero prime ideal of $D_S$. Consider $u+X\in R$. Let $P$ be a prime ideal of $R$ minimal over $(u+X)$ with $P\cap D=0$. (Such a prime $P$ exists since $(u+X)\cap(D\setminus\{0\})=\emptyset$.) If $Q$ is a prime ideal of $R$ with $P\subsetneq Q$, then $Q\cap D\neq 0$, for otherwise in $D_S[X]$, $0\neq P_S\subsetneq Q_S$ would both contract to $0$. Now if $Q\cap S\neq\emptyset$, then $X\in XD_S[X]\subseteq Q$, while if $Q\cap S=\emptyset$, then $u\in(Q_S\cap D_S)\cap D\subseteq Q$. So every prime ideal of $R$ properly containing $P$ contains both $u$ and $X$. Hence $P$ is not the intersection of the maximal ideals containing it, contradicting the fact that $R$ is a Hilbert domain. So $D_S$ must also be a Hilbert domain.
\medskip ($\Leftarrow$) Let $Q$ be a prime ideal of $R$. Suppose that $Q\cap S\neq\emptyset$. Then $XD_S[X]\subseteq Q$, so $Q=Q\cap D+XD_S[X]$. Since $D$ is a Hilbert domain, $Q\cap D$ is an intersection of maximal ideals, hence so is $Q$. So we may suppose that $Q\cap S=\emptyset$. Then, since $D_S[X]$ is a Hilbert domain, $Q_S=\bigcap_{\alpha} M_{\alpha}$, where $\{M_{\alpha}\}$ is the set of maximal ideals of $D_S[X]$ containing $Q_S$. Then $Q=\bigcap_{\alpha}(M_{\alpha}\cap R)$. So it suffices to show that each $M_{\alpha}\cap R$ is a maximal ideal of $R$. So let $M$ be a maximal ideal of $D_S[X]$. Then $M=N_S$, where $N$ is a prime ideal of $D[X]$. Now $M$ maximal implies that $M\cap D_S$ is maximal, since $D_S$ is a Hilbert domain. If $M\cap D_S=0$, then $D_S$ is a field and hence $R$ is a Hilbert domain (\cite{26}, Theorem 5). So we may assume that $M\cap D_S\neq 0$. Then by the hypothesis on $S$, $(M\cap D_S)\cap D=N\cap D$ must also be maximal. Since $N\supsetneq (N\cap D)[X]$, $N$ must be a maximal ideal of $D[X]$. Hence $D[X]/N\subseteq R/(M\cap R)\subseteq D_S[X]/M=D_S[X]/N_S=D[X]/N$, since $D[X]/N$ is a field. Therefore $M\cap R$ is a maximal ideal. \end{proof} The next proposition says that, for a purely inseparable field extension $K\subset L$, every ring between $K[X]$ and $L[X]$ is a one-dimensional almost B\'ezout domain. \begin{pr} \label{pr11} Let $K\subset L$ be a pair of fields with $L$ purely inseparable over $K$ (that is, $char K=p>0$ and for each $l\in L$ there exists a natural number $n=n(l)$ with $l^{p^n}\in K$). Then every ring $R$ between $K[X]$ and $L[X]$ is a one-dimensional almost B\'ezout domain. \end{pr} \begin{proof} Since $K[X]\subset L[X]$ is an integral extension, $\dim R=\dim K[X]=1$. For each $f\in L[X]$, $f^{p^n}\in K[X]$ for $n$ large enough. Hence for $f, g\in R$, $f^{p^n}, g^{p^n}\in K[X]$ for some $n\in\mathbb{N}$. But $(f^{p^n}, g^{p^n})K[X]$ is principal. Hence $(f^{p^n}, g^{p^n})R$ is principal. \end{proof} Let $K$ be a field, $D$ a subring of $K$.
Every ring $R$ between $D[X]$ and $K[X]$ has a composite cover, i.e., a unique minimal overring of $R$ that is a composite. Recall that $I(B,A)=\{f(X)\in B[X]\mid f(A)\subseteq A\}$. \begin{pr} \label{pr12} (a) Let $R$ be a domain with quotient field $K$. Suppose that for each $0\neq r\in R$, $R/(r)$ is finite. Then the composite cover of $I(K,R)$ is $R+XK[X]$. \medskip (b) Let $A\subseteq B$ be rings, where $A$ is finite. Then the composite cover of $I(B,A)$ is $A+XB[X]$. \end{pr} \begin{proof} (a) Let $r$ be a nonzero nonunit of $R$ and let $R/(r)=\{r_1+(r), \dots, r_n+(r)\}$. Set $f(X)=\dfrac{1}{r}(X-r_1)\dots (X-r_n)\in K[X]$. Now for $a\in R$, $a+(r)=r_i+(r)$ for some $i$, so $a-r_i=sr$ for some $s\in R$. Hence $f(a)=\dfrac{1}{r}(sr)\prod_{j\neq i}(a-r_j)\in R$. So $f(X)=\dfrac{1}{r}X^n+\dots \in I(K,R)$ and hence $I(K,R)$ has composite cover $R+XK[X]$. \medskip (b) For each $b\in B$, $f(X)=b(\prod_{a\in A}(X-a))\in I(B,A)$. \end{proof} At the end of this section we record an exact sequence: $$0\rightarrow A+XB[X]\rightarrow B[X]\rightarrow B[X]/(A+XB[X])\rightarrow 0.$$ \section{Dedekind domains} In this section we consider polynomial composites as Dedekind domains. \begin{pr} \label{pr13} Let $A\subset B$ be a pair of integral domains and let $R=A+XB[X]$. Then $R$ is integrally closed if and only if $B$ is integrally closed and $A$ is integrally closed in $B$. \end{pr} \begin{proof} \cite{1}, Theorem 2.7. \end{proof} By Proposition \ref{pr13}, if $D$ is an integral domain with quotient field $K$ and $D\subset D_1\subset K$, then $D+XD_1[X]$ is integrally closed if and only if $D$ and $D_1$ are both integrally closed. \begin{tw} \label{Dedekind} Let $K\subset L$ be a finite field extension. Then $K+XL[X]$ is a Dedekind domain. \end{tw} \begin{proof} By Theorem 2.1 of \cite{mm1}, every nonzero prime ideal is maximal. By Proposition \ref{pr13}, $K+XL[X]$ is integrally closed. By Proposition 3.2 of \cite{mm3}, $K+XL[X]$ is a Noetherian domain. Hence $K+XL[X]$ is a Dedekind domain.
\end{proof} \begin{pr} \label{pr14} Let $K\subset L$ be a finite field extension and let $T=K+XL[X]$. \begin{itemize} \item[(a) ] If $P$ is a nonzero prime ideal of $T$ and $P'=\{x\in T_0; xP\subset T\}$, then $PP'=T$. \item[(b) ] Every nonzero ideal of $T$ has an unambiguous representation as a product of prime ideals. \item[(c) ] Every nonzero ideal of $T$ is invertible. \item[(d) ] If $I$ is a nonzero ideal of $T$, then $T/I$ is a principal ideal ring. \item[(e) ] $Cl(T)$ (the group of classes of invertible ideals) is isomorphic to $Pic(T)$ (the group of classes of invertible modules). \item[(f) ] If $M$ is a finitely generated torsion-free $T$-module, then $M\cong I_1\oplus I_2\oplus\dots \oplus I_k$, where $I_1$, $I_2$, $\dots$, $I_k$ are nonzero ideals of $T$ and $k$ is the rank of $M$. Moreover $$M\cong T^{k-1}\oplus I_1I_2\dots I_k.$$ \item[(g) ] If $M$ is a finitely generated $T$-module, then $$M\cong T^{k-1}\oplus I\oplus \bigoplus_{(P_i, n_i)} T/P_i^{n_i},$$ where $k=\dim_{T_0}(M\otimes_T T_0)$, $I\subset T$ is an ideal determined uniquely up to isomorphism, the $P_i$ are nonzero prime ideals of $T$, $n_i>0$, and the finite set of pairs $(P_i, n_i)$ is determined uniquely. \end{itemize} \end{pr} \begin{proof} By Theorem \ref{Dedekind}, $T=K+XL[X]$ is a Dedekind domain. \medskip The proofs of (a)--(g) are similar to the proofs in \cite{comm}, III, 3--5. \end{proof} Statements adapted from \cite{mm3} are presented below. These are characterizations of polynomial composites as Noetherian rings; it is easy to convert the Noetherian property into the Dedekind one. The proofs of the following are similar to those of the propositions in \cite{mm3}. \begin{pr} \label{01} Let $K\subset L$ be a field extension. Put $T=K+XL[X]$. Then $T$ is a Dedekind domain if and only if $[L\colon K]<\infty$. \end{pr} \begin{pr} \label{02} Let $K\subset L$ be a field extension such that $L^{G(L\mid K)}=K$. Put $T=K+XL[X]$.
$T$ is a Dedekind domain if and only if $K\subset L$ is an algebraic extension. \end{pr} \begin{pr} \label{04} Let $K\subset L$ be a field extension such that $K$ is a perfect field, and assume that $\varphi(L)=L$ holds for every $K$-isomorphism $\varphi\colon M\to M$ of every field $M$ such that $L\subset M$. Put $T=K+XL[X]$. $T$ is a Dedekind domain if and only if $K\subset L$ is a separable extension. \end{pr} \begin{pr} \label{06} Let $K\subset L$ be a field extension. Assume that if a map $\varphi\colon L\to a(K)$ is a $K$-embedding, then $\varphi (L)=L$. Put $T=K+XL[X]$. $T$ is a Dedekind domain if and only if $K\subset L$ is a normal extension. \end{pr} \begin{pr} \label{07} Let $K\subset L$ be a field extension such that $L^{G(L\mid K)}=K$. Put $T=K+XL[X]$. $T$ is a Dedekind domain if and only if $K\subset L$ is a normal extension. \end{pr} \begin{pr} \label{09} Let $T=K+XL[X]$ be Noetherian, where $K\subset L$ are fields. Assume that $|G(L\mid K)|=[L\colon K]$ and that $\varphi(L)=L$ holds for every $K$-isomorphism $\varphi\colon M\to M$ of every field $M$ such that $L\subset M$. $T$ is a Dedekind domain if and only if $K\subset L$ is a Galois extension. \end{pr} \begin{pr} \label{10} Let $T=K+XL[X]$, where $K\subset L$ are fields such that $K=L^{G(L\mid K)}$. $T$ is a Dedekind domain if and only if $K\subset L$ is a Galois extension. \end{pr}
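A minimal concrete instance of Theorem \ref{Dedekind} (our illustration): take $K=\mathbb{Q}$ and $L=\mathbb{Q}(\sqrt{2})$.

```latex
% Illustration: K = Q, L = Q(\sqrt{2}), so [L : K] = 2 < \infty, and
\[
  T=\mathbb{Q}+X\,\mathbb{Q}(\sqrt{2})[X]
\]
% is a Dedekind domain by Theorem \ref{Dedekind}. By contrast, for an
% infinite extension such as L = Q(2^{1/2}, 2^{1/4}, 2^{1/8}, \dots)
% over K = Q, Proposition \ref{01} shows that K + XL[X] is not Dedekind.
```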
\section{Introduction} According to the Galaxy Zoo project, spiral galaxies contribute around two thirds of all galaxies in the Local Universe (Lintott et al. 2011; Willett et al. 2013). While the spiral structure is very common and appealing, the mechanism underlying its origin is still not well understood. Several theories aim to explain the nature of spiral arms in disky galaxies; however, none of them is believed to be complete and universally applicable (for a review see Dobbs \& Baba 2014). In their seminal work Lin \& Shu (1964) proposed that the spiral arms are non-material, quasi-stationary density waves that rotate with a fixed pattern speed. This theory was later developed to a mature state (Bertin et al. 1989; Bertin \& Lin 1996); however, one of its main predictions, namely the long lifetime of the spiral pattern, was difficult to reproduce in numerical simulations and little observational evidence for it was available (Sellwood 2011). Numerical studies typically found that spiral arms appear to be transient and short-lived. Spiral arms of this kind are often referred to as `dynamic spirals' and seem to be triggered by swing-amplified perturbations or noise in the stellar disk (e.g. Sellwood \& Carlberg 1984; Fujii et al. 2011; Grand et al. 2012; Baba et al. 2013; D'Onghia et al. 2013). While these arms are dynamic and wind up fast, the recurrent mechanism of the perturbations can maintain the spiral structure in the galaxies for cosmological timescales (Fujii et al. 2011). Dynamic spirals in the simulations tend to have flocculent, multi-arm morphologies. Only recently Saha \& Elmegreen (2016) succeeded in creating long-lived ($\sim$ 5 Gyr) spiral wave modes. To accomplish that, they performed simulations of a galaxy with high values of the Toomre $Q$ parameter in the inner region, provided by the bulge, which was interpreted as a barrier reflecting the wave and ensuring its long survival. In a recent paper, Hart et al.
(2016) showed that the fraction of galaxies with the arm number $m=2$ is greater in regions of higher density, which indicates that they are of a different origin than those with $m>2$. It is well known that some of the grand-design, two-armed spirals originate from tidal interactions with other galaxies. This scenario is in fact the one most established observationally given the evidence of galaxies like M51 interacting with its nearby companion NGC 5195. A list of $\sim 20$ two-armed galaxies interacting with a companion of a different size was recently published by Gunthardt et al. (2016). From the theoretical point of view, the tidally induced spiral structure in interacting galaxies was first seen in the seminal work by Holmberg (1941). Later on, following the development of numerical calculations, more detailed studies of the interacting galaxies were performed (Toomre \& Toomre 1972; Eneev et al. 1973). Nowadays, most efforts focus on simulations of a normal-size galaxy interacting with a smaller companion. These studies include pure $N$-body approaches (Oh et al. 2008, 2015) as well as hydrodynamical simulations (Dobbs 2011; Struck et al. 2011; Pettitt et al. 2016). There have also been attempts aiming to reproduce particular observed systems like M51 (Salo \& Laurikainen 2000; Dobbs et al. 2010) or M81 (Yun 1999). From these works the following picture emerges: a flying-by companion induces a tidal bridge-tail structure in the main galaxy that later winds up to transform into grand-design spiral arms. These arms keep winding up and dissipating, which manifests itself in the decrease of the pitch angle and the strength of the arms in time. Such a process usually takes about 1 Gyr. The pattern speed of the arms tends to decrease with radius and follows or slightly exceeds the inner Lindblad resonance, which means that the arms are non-material kinematic density waves.
The magnitude of the tidal perturbation can be quantified by the dimensionless parameter $S$ defined by Elmegreen et al. (1991): \begin{equation} \label{defs} S=\bigg(\frac{M_{\mathrm{ptb}}}{M_{\mathrm{gal}}}\bigg)\bigg( \frac{R_{\mathrm{gal}}}{d}\bigg)^3\bigg(\frac{\Delta T}{T}\bigg), \end{equation} where $M_{\mathrm{ptb}}$ is the mass of the perturber, $M_{\mathrm{gal}}$, $R_{\mathrm{gal}}$ are the mass and the characteristic size of the perturbed galaxy and $d$ is the distance between both bodies at closest approach. $\Delta T$ is the interaction time defined as the time that the perturber needs to move over an angle of one radian around the progenitor and $T$ is the time for stars in the outer part of the disk of the progenitor to move one radian in their orbits, which can also be expressed as $T=(R_{\mathrm{gal}}^3/G M_{\mathrm{gal}})^{1/2}$. Elmegreen et al. (1991), Oh et al. (2013, 2015) and Pettitt et al. (2016) explored different values of $S$ and found that spiral arms can be triggered by a smaller companion with the parameter $S$ in the range $0.01<S<0.25$. Equation (\ref{defs}) shows that a very similar tidal perturbation can be obtained from a companion dwarf on a tight orbit or from a bigger body on an appropriately wider orbit. In this work we use $N$-body simulations to investigate a scenario in which the perturber is a Virgo-like cluster and spiral arms are induced in a Milky Way-like galaxy orbiting around it. In this rescaled configuration including a much larger perturber the range of values of $S$ we find (assuming $M_{\mathrm{ptb}}$ is the mass of the cluster enclosed within $d$) is very similar, $0.09<S<0.14$. Interactions between the cluster potential and the orbiting galaxies were previously discussed by Merritt (1984), where it was shown that a spherical galaxy may be tidally truncated by the cluster.
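As an aside, Eq.~(\ref{defs}) is straightforward to evaluate numerically. The sketch below (our illustration; the numerical inputs are placeholders, not the parameters used to obtain the quoted $S$ range) implements the definition together with $T=(R_{\mathrm{gal}}^3/G M_{\mathrm{gal}})^{1/2}$:

```python
# Illustrative evaluation of the Elmegreen et al. (1991) tidal strength
# parameter S, Eq. (1). All numerical inputs below are placeholders
# chosen for illustration only, not the values used in this work.

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def crossing_time(R_gal, M_gal):
    """T = (R_gal^3 / (G M_gal))^(1/2): time for stars in the outer
    disk to move one radian, in units of kpc / (km/s)."""
    return (R_gal**3 / (G * M_gal))**0.5

def tidal_strength(M_ptb, M_gal, R_gal, d, dT):
    """S = (M_ptb / M_gal) * (R_gal / d)^3 * (dT / T)."""
    T = crossing_time(R_gal, M_gal)
    return (M_ptb / M_gal) * (R_gal / d)**3 * (dT / T)

if __name__ == "__main__":
    # Placeholder check: an equal-mass perturber at d = R_gal acting
    # for dT = T gives S = 1 by construction.
    S = tidal_strength(M_ptb=1e11, M_gal=1e11, R_gal=30.0, d=30.0,
                       dT=crossing_time(30.0, 1e11))
    print(f"S = {S:.2f}")
```

Comparing the outputs for a small companion on a tight orbit and a massive body on a wide orbit makes the degeneracy noted above explicit.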
Later $N$-body (Byrd \& Valtonen 1990) and restricted three-body (Valluri 1993) calculations demonstrated that the cluster's tidal field can induce a transient two-armed spiral structure in disky galaxies (as well as a bar in the case of Byrd \& Valtonen 1990). Recently Bialas et al. (2015) considered tidal interactions between the cluster and infalling galaxies; however, the main focus of that work was to investigate the process of galaxy harassment (Moore et al. 1996, 1998) and its ability to transform disky galaxies into other morphological types. In this paper we focus on the formation and evolution of spiral arms that are tidally excited in a galaxy interacting with the cluster, using $N$-body simulations. The paper is organized as follows. In Section 2 we present the simulations used in this study. In Section 3 we discuss in detail the properties of the spiral arms in the galaxy on the most extended orbit, focusing on their shape, amplitude and velocity. Section 4 compares spiral arms induced on different orbits using some of the quantitative methods described in Section 3. In Section 5 we attempt to place the scenario in the observational context. Finally, Section 6 provides the discussion and summary of the most important findings of the paper. \section{The simulations} We use $N$-body simulations of the Milky Way-like galaxy orbiting a Virgo-like cluster to investigate the formation and evolution of tidally induced spiral arms. In this work we use the same simulations as were described in \L{}okas et al. (2016) and used to study tidally induced bars. Initial conditions for the simulations were generated with the procedures described in Widrow \& Dubinski (2005) and Widrow et al. (2008). The Virgo cluster was approximated as a Navarro-Frenk-White (NFW; Navarro et al.
1997) dark matter halo of $10^6$ particles with parameters estimated by McLaughlin (1999) and Comerford \& Natarajan (2007), namely the virial mass $M_{\mathrm{C}}=5.4 \times 10^{14}\;\mathrm{M}_{\odot}$ and the concentration $c=3.8$. The progenitor galaxy was modelled as a two-component system similar to the Milky Way. The two components were an NFW dark matter halo and an exponential stellar disk, each made of $10^6$ particles. The model was similar to the model MWb of Widrow \& Dubinski (2005). The dark matter halo had a virial mass $M_{\mathrm{H}}=7.7\times 10^{11}\;\mathrm{M}_{\odot}$ and concentration $c=27$, while the disk had a mass $M_{\mathrm{D}}=3.4\times10^{10}\;\mathrm{M}_{\odot}$, a scale-length $R_{\mathrm{D}}=2.82$ kpc and a thickness $z_{\mathrm{D}}=0.44$ kpc. Both components were smoothly cut off at large radii. The upper panel of Figure~\ref{rot} shows the initial rotation curve of the galaxy and the contributions from the two components. The initial conditions were such that the Toomre parameter was $Q>2.1$ at all radii (see the lower panel of Figure~\ref{rot}), preventing the formation of strong morphological structures when evolving the galaxy in isolation for a few Gyr. The progenitor galaxy was placed on four eccentric orbits in the Virgo cluster, with a typical apo- to pericenter distance ratio $D_{\mathrm{apo}}/D_{\mathrm{peri}}=5$ (Ghigna et al. 1998). All the orbits were coplanar and prograde with respect to the galaxy disk, with apo- and pericentric distances summarized in Table 1. The simulations we will refer to here as O1-O4 correspond to S1-S4, respectively, in \L{}okas et al. (2016). The orbital periods for simulations O1-O4 were 1.3, 1.9, 2.5 and 3.7 Gyr. 
\begin{table} \begin{center} \caption{Orbital parameters of the simulations} \begin{tabular}{lccl} \hline \hline Simulation & $D_{\mathrm{apo}}$ [Mpc] & $D_{\mathrm{peri}}$ [Mpc] & Line color \\ \hline O1 & \ 0.5 & \ 0.1 & \ \ red \\ O2 & \ \, 0.75 & \ \, 0.15 & \ \ green \\ O3 & \ 1.0 & \ 0.2 & \ \ cyan \\ O4 & \ 1.5 & \ 0.3 & \ \ blue \\ \hline \label{initial} \end{tabular} \end{center} \end{table} The evolution was followed for 10 Gyr with the GADGET-2 $N$-body code (Springel et al. 2001; Springel 2005) with outputs saved every 0.05 Gyr. The adopted softening scale for the halo of the Virgo cluster was $\epsilon_{\mathrm{C}}=14$ kpc while for the halo and disk of the progenitor $\epsilon_{\mathrm{H}}=0.7$ kpc and $\epsilon_{\mathrm{D}}=0.1$ kpc. \begin{figure} \begin{center} \includegraphics[width=225pt]{rot_toomre2R.eps} \end{center} \caption{ Upper panel: the initial rotation curve of the progenitor galaxy. Lower panel: the initial radial profile of the Toomre stability parameter $Q$.} \label{rot} \end{figure} \section{Formation and evolution of the spiral structure} The spiral arms form on each orbit in the simulations, however we find that the most persistent arms occur for the most extended orbit O4. Therefore, in this section we will focus on the case O4 and discuss the comparison between the arms forming on different orbits in Section 4. We choose this case because the longevity of the arms and the relatively weak bar (see \L{}okas et al. 2016) allow for more precise quantitative studies, but the general behavior described in this section applies to all orbits after some rescaling. First results concerning orbit O4 were already presented in Semczuk \& {\L}okas (2015). \begin{figure*} \begin{center} \includegraphics[width=500pt]{paper_maps2.eps} \caption{Face-on views of the surface density distribution of stars $\Sigma$ in the disk for orbit O4 at different times. 
The first and second pericenter passages occurred at 1.9 Gyr and 5.4 Gyr, respectively.} \label{snaps} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=500pt]{paper_phiR.eps} \caption{Face-on views of the perturbed density of stars $(\Sigma-\Sigma_0)/\Sigma_0$ in the $\phi$ - $\ln R$ plane for orbit O4 at different times, the same as in Figure~\ref{snaps}. The first and second pericenter passages occurred at 1.9 Gyr and 5.4 Gyr, respectively.} \label{phiR} \end{center} \end{figure*} \subsection{Overview} We find that the formation of spiral arms is triggered by the pericenter passages. During the pericenter passages, tidal forces of the cluster cause the stars from the galaxy disk to form tidal tails. However, most of the stars in the tails are still bound to the progenitor, hence the structure winds up towards the center of the galaxy to form spiral arms. Later on the arms keep winding up and dissipate, to be triggered again during the next pericenter passage. This dynamic and recurrent behavior is clearly seen in the panels of Figure~\ref{snaps}. From the face-on views of the surface density of the stars $\Sigma$ in Figure~\ref{snaps} we can also infer that the induced structure is, as expected, two-armed and of the grand-design type. The shape of grand-design spiral arms can often be approximated by a logarithmic spiral. In the plots of Figure~\ref{phiR} we show the time evolution of the perturbed density defined as $(\Sigma-\Sigma_0)/\Sigma_0$ (where $\Sigma_0$ is the initial face-on density distribution of the stars) in the $\phi$ - $\ln R$ plane, where $(\phi,\;R)$ are polar coordinates in the plane of the disk (see also e.g. Oh et al. 2008, 2015). If the spiral arms were perfect logarithmic spirals, the overdensities corresponding to the arms would have the shape of straight lines in the plots of Figure~\ref{phiR}. 
As we can see, these lines are not perfectly straight; however, as a first approximation we can treat the arms as logarithmic. Figure~\ref{phiR} also confirms the winding up of the arms and their transient and recurrent nature. The lower plots of Figure~\ref{phiR} (for $t>$ 5 Gyr) also reveal the formation of the bar in the form of vertical overdense regions at smaller radii. \subsection{Fourier analysis} As demonstrated by Figure~\ref{phiR}, the tidally induced spiral arms in our simulations can be approximated as logarithmic spirals. We use this fact to expand the surface distribution of stars in logarithmic spirals as discussed in e.g. Sellwood \& Athanassoula (1986) and Oh et al. (2008, 2015). The expansion is given by the formula \begin{equation} A(m,p)=\frac{1}{N_s}\sum_j \exp [i(m \phi_j+p \ln R_j)], \end{equation} where $N_s$ is the number of stars, $(\phi_j,\;R_j)$ are the polar coordinates of the $j$-th star, $m$ is the number of spiral arms (here we will only consider $m=2$) and $p$ is a parameter related to the pitch angle $\alpha$. We calculated the function $|A(2,p)|\equiv|A(p)|$ in the fixed ring 9 kpc $\leq$ $R$ $\leq$ 15 kpc, and then found the value $p_{\mathrm{max}}$ that maximizes this function to obtain the pitch angle using the relation $\tan \alpha= 2/p_{\mathrm{max}}$. We chose this range of radii for the ring to make sure that the bar does not influence the results. We justify this choice in Figure~\ref{Ap}, which demonstrates that $|A(p)|$ calculated in $R$ $\leq$ 9 kpc has a maximum around $p \simeq 0$. This means that the bar can be interpreted as spiral arms with the pitch angle $\alpha \simeq 90^{\circ}$ and would therefore contaminate our measurements. In Section 4, where we compare the results for different orbits, we pick the ring even further from the center because the bar is more extended there. 
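The expansion above and the relation $\tan \alpha = 2/p_{\mathrm{max}}$ can be implemented in a few lines. The sketch below (an illustrative Python implementation, not the code used for this paper; the star coordinates are assumed to be already restricted to the chosen ring) scans a grid of trial $p$ values and returns the pitch angle:

```python
import numpy as np

def log_spiral_amplitude(phi, R, p_grid, m=2):
    """|A(m,p)| of equation (2) for stars with polar coordinates (phi, R)."""
    ln_R = np.log(R)
    # one complex average per trial value of p
    return np.array([np.abs(np.mean(np.exp(1j * (m * phi + p * ln_R))))
                     for p in p_grid])

def pitch_angle(phi, R, p_grid, m=2):
    """Pitch angle (degrees) from the p maximizing |A(m,p)|, tan(alpha) = m/p_max."""
    A = log_spiral_amplitude(phi, R, p_grid, m)
    p_max = p_grid[np.argmax(A)]
    # arctan2 also handles the degenerate bar-like case p_max = 0 (alpha = 90 deg)
    return np.degrees(np.arctan2(m, p_max)), p_max
```

Applied to a synthetic two-armed logarithmic spiral, the maximum of $|A(2,p)|$ recovers the input winding parameter, which is a convenient check of the sign and normalization conventions.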
\begin{figure} \begin{center} \includegraphics[width=200pt]{circ_paper.eps} \end{center} \caption{Upper panel: example of $|A(p)|$ calculated in two regions, the inner and outer one. Lower panel: surface density of stars with dashed circles marking two regions for which $|A(p)|$ was calculated in the upper panel.} \label{Ap} \end{figure} The time evolution of $|A(p)|$ calculated in the chosen ring shortly after the first pericenter passage on orbit O4 is shown in Figure~\ref{ap_evo}. The value of $p_{\mathrm{max}}$ shifts rapidly toward higher values, which corresponds to the winding up of the arms and the decrease of the pitch angle $\alpha$. Note that the value of $|A(p_{\mathrm{max}})|$ also varies systematically with time. We define $|A(p_{\mathrm{max}})|$ as the parameter measuring the arm strength and plot its time evolution in Figure~\ref{str}. The time dependence of the pitch angle $\alpha$ is shown in Figure~\ref{pitch} labeled as method 1. \begin{figure} \begin{center} \includegraphics[width=200pt]{ap_evo.eps} \end{center} \caption{Time evolution of $|A(p)|$ calculated in the fixed ring. Note that the pericenter passage occurred at 1.9 Gyr.} \label{ap_evo} \end{figure} \begin{figure} \begin{center} \includegraphics[width=240pt]{str_maxSurf2.eps} \end{center} \caption{Time dependence of the arm strength $|A(p_{\mathrm{max}})|$ (blue line) calculated in the ring 9 kpc $\leq$ $R$ $\leq$ 15 kpc and the maximum arm surface density $\Sigma_{\mathrm{max}}$ (red line) calculated in the annuli of 10.2$\pm$0.3 kpc. Both measurements were made for orbit O4. Dashed vertical lines indicate pericenter passages.} \label{str} \end{figure} \begin{figure} \begin{center} \includegraphics[width=225pt]{pitch2.eps} \end{center} \caption{Time dependence of the pitch angle $\alpha$ for orbit O4, calculated in the ring 9 kpc $\leq$ $R$ $\leq$ 15 kpc with two different methods. 
Dashed vertical lines indicate pericenter passages.} \label{pitch} \end{figure} Right after the pericenter passage the pitch angle has a value $\alpha \simeq 30^{\circ}$ and afterwards it exponentially decreases to values below $10^{\circ}$. The same behavior occurs after the next pericenter. The decrease of the pitch angle confirms that the spiral arms wind up between the pericenters, as was already seen in Figures~\ref{snaps} and \ref{phiR}. Note that the pitch angles in the range of $10^{\circ}$ to $30^{\circ}$ are realistic values that are indeed measured in observed galaxies (Binney \& Tremaine 1987; Ma 2002). The arm strength $|A(p_{\mathrm{max}})|$ behaves similarly to the pitch angle: it has the highest values around the pericenter and then exponentially decreases until the next pericenter. This confirms that the arms are strongest right after they are formed and dissolve with time. However, there is clearly a difference between the evolution of $|A(p_{\mathrm{max}})|$ and $\alpha$: the peaks of the arm strength are shifted to $\sim0.5$ Gyr after the pericenters. This is due to the fact that the tidal features formed during the passage are strongest in the outer parts of the disk and they need time to wind up and migrate into the ring in which we do the measurements. \subsection{The pitch angle from surface density fits} In the previous subsection we applied one method to derive the pitch angle of the spiral arms. Here we use another approach (which is also based on the assumption that the arms can be described as logarithmic spirals) to confirm our findings. The method is very similar to the one presented in Grand et al. (2013) and it consists of fitting logarithmic spirals to the surface density distribution of the stars $\Sigma$. First, we find $\Sigma$ in polar coordinates ($\phi$, $R$) and then, at a given radius $R_j$, we look for local maxima $\phi_{\mathrm{max},\;j}$ corresponding to the two arms. 
We select $R_j$ from the same range 9 kpc $\leq$ $R$ $\leq$ 15 kpc as in the previous subsection so that the results of the two methods are comparable. Next, using the least squares method, we fit the logarithmic spiral \begin{equation} \phi=B \ln R + C \end{equation} to the two sets of points ($\phi_{\mathrm{max},\;j}, R_j$). The procedure is illustrated in Figure~\ref{scat} where we plot the points used for the fit, the fitted logarithmic spirals and a subsample of stars of the simulated galaxy. A few examples of the plots of $\Sigma(\phi)$ at fixed radii with marked maxima are presented in Figure~\ref{maxima}. \begin{figure} \begin{center} \includegraphics[width=220pt]{scatter.eps} \end{center} \caption{Face-on view of a random subsample of disk particles (gray). Red points are the selected maxima $\phi_{\mathrm{max},\;j}$ of the surface density at given radii $R_j$. Blue lines are the logarithmic spirals fitted to the red points (see subsection 3.3).} \label{scat} \end{figure} \begin{figure} \begin{center} \includegraphics[width=220pt]{panel_pitch.eps} \end{center} \caption{The dependence of the surface density of stars $\Sigma$ on the azimuthal angle $\phi$ at three different radii: 9 kpc, 12 kpc and 15 kpc. Dashed vertical lines indicate the maxima found numerically. Results are plotted for orbit O4 at $t=2.35$ Gyr. The maxima correspond to the red points at the same radii in Figure~\ref{scat}.} \label{maxima} \end{figure} The pitch angle $\alpha$ of each arm is given by $\tan \alpha=1/|B|$. The time dependence of the average pitch angle of the two arms is plotted in Figure~\ref{pitch} labeled as method 2. We find that the two methods are in very good agreement. We note that method 2 requires more parameters so when using these procedures to automatically deal with a large number of simulation outputs, method 1 seems more straightforward to apply. 
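The fit of Equation (3) reduces to linear least squares in the $(\ln R,\;\phi)$ plane. A minimal sketch follows (illustrative Python, not the authors' pipeline; the maxima $\phi_{\mathrm{max},\;j}$ are assumed to be already unwrapped along the arm, since $2\pi$ jumps would bias the fit):

```python
import numpy as np

def fit_log_spiral(phi_max, R):
    """Least-squares fit of phi = B ln R + C to one arm's density maxima.

    phi_max : azimuthal positions of the surface density maxima (unwrapped)
    R       : the corresponding radii
    Returns the pitch angle in degrees, tan(alpha) = 1/|B|, and (B, C).
    """
    B, C = np.polyfit(np.log(R), phi_max, 1)
    return np.degrees(np.arctan(1.0 / abs(B))), B, C
```

The average of the two per-arm pitch angles then gives the quantity labeled as method 2.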
\subsection{The maximum arm surface density} In subsection 3.2 we introduced $|A(p_{\mathrm{max}})|$ as an indicator of the strength of the spiral arms. Here we apply a different approach that was recently used in Few et al. (2016). This approach is very simple and consists of tracking the value of the maximum surface density $\Sigma_{\mathrm{max}}\equiv\Sigma(\phi_{\mathrm{max}})$ at a fixed radius as a proxy of the arm strength. The profiles of the surface density at different radii were already shown in Figure~\ref{maxima}. We choose as a measure of the arm strength the mean of the two values of the maxima at $10.2\pm0.3$ kpc (since it shows the least noise) and present the time evolution of this quantity in Figure~\ref{str}. For the other radii the behavior of $\Sigma_{\mathrm{max}}$ is very similar. However, the adopted distance from the center of the galaxy affects the values of the maxima due to the growing bar at smaller radii. In general, the evolution of $\Sigma_{\mathrm{max}}$ is very similar to the evolution of $|A(p_{\mathrm{max}})|$: the peaks occur at $\sim0.5$ Gyr after the pericenter passage and then the value exponentially decreases until the next pericenter. The agreement between the two methods of measuring the arm strength confirms our findings concerning the recurrent and transient evolution of the spiral structure. \subsection{The pattern speed} The pattern speed is an essential parameter for determining the nature of the spiral arms. According to the quasi-stationary density wave theory introduced by Lin \& Shu (1964) the spiral pattern should rotate like a rigid body with a fixed, constant pattern speed $\Omega_{\mathrm{p}}$. However, if the arms are kinematic density waves, their rotation should follow the inner Lindblad resonance, i.e. $\Omega_{\mathrm{p}}(R)=\Omega(R)-\kappa(R)/2$. 
Finally, if the spiral arms are material they should rotate in the same way as the stars in the disk, $\Omega_{\mathrm{p}}(R)=\Omega(R)$ (see e.g. Dobbs \& Baba 2014). To find the pattern speed of the arms in our case we use two methods out of many available in the literature, which are most often applied to simulations. The first method we apply was introduced by Oh et al. (2008, 2015). It consists of calculating the normalized cross-correlation of the perturbed surface density at two different times separated by $\Delta t$ \begin{equation} C(R, \phi, t)=\frac{1}{\Sigma_0 (R)^2} \int ^{2 \pi} _0 \delta \Sigma (R, \xi, t) \delta \Sigma (R, \xi+\phi, t+\Delta t) d \xi, \end{equation} where $\delta \Sigma=\Sigma-\Sigma_0$ and $\xi$ is a polar angular coordinate over which the expression is integrated. We choose $\Delta t$=0.15 Gyr (which is $3\times0.05$ Gyr, with 0.05 Gyr being the time step between our saved simulation outputs) because the arms formed on orbit O4 seem to be relatively slow. Once $C(R, \phi, t)$ is calculated, we obtain the pattern speed at a given radius by finding $\phi_{\mathrm{max}}$ that maximizes the cross-correlation. The pattern speed is given by the relation $\Omega_{\mathrm{p}}(R,t)=\phi_{\mathrm{max}}/\Delta t$. The contours of the cross-correlation indicating the locus of the maximum are plotted in Figure~\ref{pattern}. The second method we use was discussed e.g. in Dobbs (2011) and seems more straightforward. It consists of finding the maximum of the density (here we use the surface density $\Sigma$) of stars at a given radius in polar coordinates, at two different epochs. The pattern speed at a fixed radius is expressed by a simple formula \begin{equation} \Omega_{\mathrm{p}}(t)=\frac{\phi(\Sigma_{\mathrm{max}})\at[]{t+\Delta t} -\phi(\Sigma_{\mathrm{max}})\at[]{t}}{\Delta t}. \end{equation} Here for the same reason we also use $\Delta t$=0.15 Gyr. 
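Both estimators can be discretized directly on a uniform azimuthal grid. The sketch below (illustrative Python, not the authors' code; it treats a single radius and assumes $\delta\Sigma$ is sampled in $n$ equal bins of $\xi$) implements the cross-correlation of Equation (4) via circular shifts, together with the maximum-tracking estimate of Equation (5):

```python
import numpy as np

def pattern_speed_xcorr(dSigma_t, dSigma_t2, Sigma0, dt):
    """Pattern speed at one radius from the circular cross-correlation of
    the perturbed density at two epochs separated by dt (equation 4)."""
    n = dSigma_t.size
    dxi = 2 * np.pi / n
    # C(phi_k) = (1/Sigma0^2) * sum_xi dSigma(xi, t) dSigma(xi + phi_k, t + dt) dxi
    C = np.array([np.sum(dSigma_t * np.roll(dSigma_t2, -k)) * dxi
                  for k in range(n)]) / Sigma0**2
    phi_max = np.argmax(C) * dxi
    return phi_max / dt

def pattern_speed_max(Sigma_t, Sigma_t2, dt):
    """Pattern speed from the shift of the density maximum (equation 5)."""
    n = Sigma_t.size
    dxi = 2 * np.pi / n
    dphi = (np.argmax(Sigma_t2) - np.argmax(Sigma_t)) * dxi
    return (dphi % (2 * np.pi)) / dt
```

For an $m=2$ pattern the cross-correlation has two equivalent maxima separated by $\pi$, so in practice the physically meaningful shift (the smaller one for a short $\Delta t$) has to be selected.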
The pattern speeds calculated with this approach for both arms are plotted with red and blue lines in Figure~\ref{pattern} (note that the color coding does not follow the same arm in different plots). \begin{figure} \begin{center} \includegraphics[width=200pt]{pattern2newestxX.eps} \end{center} \caption{Contours of the cross-correlation $C(R,\phi,t)$ for orbit O4 after the first pericenter passage at $t=$1.9 Gyr. Contours are spaced by 10\% of the maximum value of $C(R, \phi, t)$. Red and blue lines indicate the pattern speed of each arm measured separately. Solid green lines mark $\Omega$ of the stars while dashed green lines correspond to the inner Lindblad resonance $\Omega - \kappa/2$.} \label{pattern} \end{figure} The comparison of the lines and contours in the plots of Figure~\ref{pattern} confirms that the two methods give consistent results. Both arms show similar radial dependence, which also overlaps with the contours of the cross-correlation. Although the two tidally induced arms are expected to have slightly different pattern speeds due to the asymmetry of the process (Dobbs 2011), we do not find any systematic offset between them, probably because of the relatively mild tidal forces. We note however that our measurements are a bit noisy and done some time after the arms are formed. Clearly the arms are not quasi-stationary density waves because the pattern speed decreases with radius. In addition, the pattern speed profile lies very close to the inner Lindblad resonance indicating that the arms are kinematic density waves. Over the 1 Gyr for which the measurements are shown in Figure~\ref{pattern}, the range of values of the pattern speed is approximately the same and only varies radially from 10 to 4 km s$^{-1}$ kpc$^{-1}$. 
The value of $\Omega_{\mathrm{p}}\simeq 6$ km s$^{-1}$ kpc$^{-1}$ in the outer parts of the disk soon after the first pericenter passage is close to the angular velocity of the progenitor on its orbit around the Virgo-like cluster, $\Omega_{\mathrm{orb}}\simeq 6.2$ km s$^{-1}$ kpc$^{-1}$. This concurrence confirms the tidal origin of the arms and was previously noted in the literature (Oh et al. 2008, 2015). \section{Comparison between the orbits} \subsection{General properties} As mentioned at the beginning of Section 3, we find that the general behavior of the tidally induced arms is qualitatively similar for all orbits considered in this work, but some dependence on the orbit is still present. To illustrate these differences we use the approach based on the expansion into logarithmic spirals (described in section 3.2) due to its simplicity; however, we also show some qualitative differences in plots similar to those in Figures~\ref{snaps} and \ref{phiR} for the different orbits at approximately similar evolutionary stages. {\L}okas et al. (2016) demonstrated that the strongest and most extended bar forms for the tightest orbit O1. To make sure that this strong bar does not influence the measurements concerning the spiral arms (see Figure~\ref{Ap}) and to maintain consistency we choose to compare the results for all orbits using the same ring of 12 kpc $\leq$ $R$ $\leq$ 17 kpc. At later times in the simulations the strong bar influences $|A(p)|$ even in this distant region; however, going even further away would not provide enough stellar particles to obtain smooth $|A(p)|$ functions. To avoid the effects of the growing bar we show the time dependence of the pitch angle $\alpha$ and the arm strength $|A(p_{\mathrm{max}})|$ in Figure~\ref{comp} for orbits O2-O4 only for the first 8 Gyr and for orbit O1 for the first 4 Gyr. 
\begin{figure} \begin{center} \includegraphics[width=230pt]{paper_comparison.eps} \end{center} \caption{Upper panel: the time evolution of the pitch angle $\alpha$ for all orbits measured in the ring 12 kpc $\leq$ $R$ $\leq$ 17 kpc. Lower panel: the time evolution of the arm strength $|A(p_{\mathrm{max}})|$ for all orbits measured in the same region.} \label{comp} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=500pt]{panel_paper.eps} \caption{Upper panels: face-on views of the surface density distribution of stars $\Sigma$ in the disk for orbits O1-O3 (columns) at $t= 0.3 T_{\mathrm{orb}}$ after the first pericenter passages. Lower panels: the perturbed density of stars $(\Sigma-\Sigma_0)/\Sigma_0$ in the $\phi$ - $\ln R$ plane for the same orbits at the same epochs.} \label{comp2} \end{center} \end{figure*} For all orbits right after the pericenter the pitch angle $\alpha$ (see the upper panel of Figure~\ref{comp}) has a value around $30^{\circ}$-$40^{\circ}$. After that it exponentially decreases to $\sim 5^{\circ}$ and repeats the cycle after the next pericenter passage. The timescale of this process depends on the orbit and the orbital period. The slope of the decrease also depends on the orbit to some extent, which is especially visible when comparing orbits O1-O3 with O4: for the most extended orbit the slope is less steep. This means that the most persistent spiral arms, or the ones that wind up most slowly, occur for the most extended orbit O4. However, the effect of the steepness of the slope is very weak and the durability of the O4 arms is mostly due to the long orbital period and relatively mild tidal forces. The time dependence of the arm strength $|A(p_{\mathrm{max}})|$ (see the lower panel of Figure~\ref{comp}) also confirms the recurrent and transient evolution of the spiral arms for all orbits. 
The differences between the slopes are not well visible, however there is a clear difference in the values of $|A(p_{\mathrm{max}})|$. The tighter the orbit, the stronger the arms are in terms of $|A(p_{\mathrm{max}})|$: it ranges from $\sim 0.85$ for O1 to $\sim 0.6$ for O4. This finding is consistent with the same dependence for the bar (\L{}okas et al. 2016). We may therefore conclude that tidally induced (or enhanced) morphological features are stronger for tighter orbits in terms of the Fourier coefficients. Figure~\ref{comp2} shows face-on stellar surface density distributions (upper panels) and the perturbed density distributions in the $\phi$ - $\ln R$ plane (lower panels) for orbits O1-O3 at $t=0.3 T_{\mathrm{orb}}$ (where $T_{\mathrm{orb}}$ is the orbital period) after their first pericenter passages. The spiral arms and the disk in general show some differences between the orbits at approximately the same evolutionary stage. First, we can see that for orbits O1 and O2 the spiral arms are peeling off from the tidal tails. This is more apparent for O1, while for O2 the effect is visible closer to the tips of the arms. Some time after the pericenter passage the tidal-spiral structure starts to separate: the particles that are still bound to the galaxy wind up and form the spiral arms, while less bound particles detach from the galaxy and form the tidal tails. The same phenomenon occurs for orbit O3, however its timescale is different and the quadrupole structure is seen later. More information about the spiral arms can be inferred from the lower panels of Figure~\ref{comp2}. The plots demonstrate that for the tighter orbits the perturbed density takes higher values in the inner parts. This means that for O1 the strong arm structure reaches deepest into the disk at this particular evolutionary stage. On the other hand, the spirals for the most extended orbit seem slightly wider and more wound up. 
\begin{figure} \begin{center} \includegraphics[width=200pt]{pattern_comparison0_3.eps} \end{center} \caption{Contours of the cross-correlation $C(R,\phi,t)$ for orbits O1-O3 at $t= 0.3 T_{\mathrm{orb}}$ after the first pericenter passages. Contours are spaced by 10\% of the maximum value of $C(R, \phi, t)$. Red and blue lines indicate the pattern speed of each arm measured separately. Solid green lines mark $\Omega$ of the stars while dashed green lines correspond to the inner Lindblad resonance $\Omega - \kappa/2$.} \label{pattern_comp} \end{figure} We also compared the pattern speed of spiral arms forming on orbits O1-O3 at the same epochs for which the density maps in Figure~\ref{comp2} were made. The results, obtained with the two methods described in section 3.5, are presented in Figure~\ref{pattern_comp}. We find that the ranges of the pattern speed for all orbits are very similar. All of them also seem to follow tightly the inner Lindblad resonance. For orbits O1-O2 the radial dependence seems to be a bit steeper than for O3. However, after analyzing the time evolution of $\Omega_{\mathrm{p}}(R)$ we find that this flatness in O3 is not very significant. The slope of the radial decrease of the pattern speed seems to be rather noisy and its changes are not correlated with any particular events during the orbital evolution. For O3 this decrease happens to be steeper before and after the epoch for which we presented the results here. We note that the plots of Figure~\ref{pattern_comp} also confirm the agreement between the two methods of deriving the pattern speed. \subsection{Radial displacement of the stars} Radial migration of stars in galaxies was first discussed by Sellwood \& Binney (2002) and can be described as a process of changing the orbital angular momentum of the stars without changing the eccentricity of their orbits. 
Several authors investigated the influence of non-axisymmetric structures like spiral arms on the radial migration of the stars (e.g. Sellwood \& Binney 2002; Vera-Ciro et al. 2014; Martinez-Medina et al. 2016). To verify how much the orbits of the stars are changed in our simulations we apply one of the methods discussed by Martinez-Medina et al. (2016). In Figure~\ref{rad_mig} we show the initial (at the first apocenter) distributions of the radii of the stars (colored lines) that later, at the second apocenter, were found in the radial bin of similar color (shaded regions). We present these plots for the two extreme orbits O1 and O4 in order to clearly see the influence of the tidal force (for O1 the tidal parameter $S=0.14$, for O4 $S=0.09$) and of the bar that is formed during the first pericenter for O1 but not for O4. The displacement of the peak of the distribution with respect to the center of the colored bin contains the information on whether a significant fraction of the stars migrated outwards or inwards. For both orbits in Figure~\ref{rad_mig} we find that the further away from the center of the galaxy the stars initially were, the further outwards they are shifted from their initial positions. This most probably excludes the bar as the driver of the radial migration, since for O1 after the first pericenter the bar size was below 5 kpc. For orbit O1, where a greater tidal force was acting on the disk ($S=0.14$), the displacement is larger and the distributions are not symmetric Gaussians. For the milder encounter ($S=0.09$) on orbit O4 the displacements are smaller and the distributions may be approximated by the normal distribution. The difference between O1 and O4 suggests that the tidal force is responsible for pulling the stars outwards, since the effect is greater for the greater force. 
One may still interpret the smaller shift of the radial distributions of the stars for O4 (if we assume that the tidal force is negligible in comparison with O1) as mainly due to radial migration caused by the spiral arms as discussed in Sellwood \& Binney (2002). However, we verified that in the case of O4 after the pericenter the orbits of the stars become significantly more eccentric and therefore we conclude that the radial shift might be due to the mixed effect of the mild tidal torquing and the scattering of the stars on the spiral arms (Elmegreen \& Struck 2013; 2016). \begin{figure} \begin{center} \includegraphics[width=220pt]{rad_mig4.eps} \end{center} \caption{Initial distribution of stellar radii (during the first apocenter, colored lines) that during the next apocenter are located within the corresponding radial bin (shaded regions of similar color) for two extreme orbits O1 and O4. $N$ is the initial number of stars at the given radius and $N_*$ is the total number of stars located within each radial bin at the next apocenter.} \label{rad_mig} \end{figure} \section{Comparison with observations} In this work we have shown that it is possible to induce the grand design spiral arms in the Milky Way-like galaxy only by the tidal interactions with a galaxy cluster. It remains to be investigated whether the presence of gas can significantly alter this picture. However, it was previously shown that pure $N$-body simulations can produce spiral arms via interactions with a satellite or a similar-sized galaxy (e.g. Toomre \& Toomre 1972; Yun 1999; Oh et al. 2008, 2015). In this study we have extended this list of the possible perturbers to include cluster-size objects and we have measured the general properties of the induced spiral pattern. The question now is whether our setup is realistic and whether this scenario is really taking place in the Local Universe. 
In order to attempt to answer this question we searched for grand-design spiral galaxies in the Virgo cluster, additionally imposing the condition that they do not show any signs of interaction with a satellite or another galaxy. Our candidates were selected from three extragalactic databases: NASA/IPAC Extragalactic Database (NED), HyperLeda (Makarov et al. 2014) and Galaxy Zoo (Lintott et al. 2011). We first performed the search for all galaxies located within a radius of 10$^{\circ}$ from the position of M87, selecting only those with velocities in the range of $-1000$ km s$^{-1} < v < 3000$ km s$^{-1}$. This criterion corresponds approximately to a $3\sigma$ cut in the velocity and cleans the sample of obvious interlopers. Then we selected only those galaxies for which reliable spiral classification was available. By these we mean galaxies which have been classified as spirals in at least two of these catalogues, giving, however, twice the weight to the classification provided by NED. Such a selection yields a sample of 201 spiral galaxies. \begin{table*} \centering \caption{Selected grand-design spiral galaxies from the Virgo cluster} \begin{tabular}{lllcl} \hline \hline Name & \ $\alpha$ {[}deg{]} & \ $\delta$ {[}deg{]} & Morphological type\textsuperscript{a} & Anemic\textsuperscript{b} \\ \hline NGC 4067 & 181.0481 & 10.8544 & SA(s)b\textsuperscript{c} & -\\ NGC 4208 (4212) & 183.914 & 13.9015 & SAc & no \\ NGC 4450 & 187.12346 & 17.08494 & SA(s)ab & yes\\ NGC 4535 & 188.58462 & \, 8.19775 & SAB(s)c & no\\ M58 (NGC 4579) & 189.43134 & 11.81819 & SAB(rs)b & debatable\\ M91 (NGC 4548) & 188.86022 & 14.49634 & SB(rs)b & yes\\ NGC 4580 & 189.45162 & \, 5.36852 & SAB(rs)a pec & no \\ IC 3267 & 186.02303 & \, 7.04128 & SA(s)cd & -\\ UGC 7133 & 182.33229 & 18.9975 & SABd & -\\ \hline \multicolumn{5}{l}{\textsuperscript{a}\footnotesize{de Vaucouleurs et al. 
(1991)}, \textsuperscript{b}\footnotesize{Koopmann \& Kenney (2004)}}\\ \multicolumn{5}{l}{\textsuperscript{c}\footnotesize{A bar is clearly seen in this galaxy in newer images so we would classify it as SB(s)b}} \end{tabular} \end{table*} \begin{figure*} \begin{center} \includegraphics[width=500pt]{panel_obs4by.eps} \caption{Upper panels: SDSS images of three galaxies selected from Table 2. Lower panels: surface density maps of the simulated galaxies modified to mimic the images of the corresponding real galaxies. The plots aim only to illustrate the general morphological similarities between the simulated and real objects, in particular the inner parts of the spiral arms and their connection with the bar. Inclinations of the galaxies were adopted from $^{\mathrm{a}}$Vollmer et al. (1999) and $^{\mathrm{b}}$Cayatte et al. (1990). $^{\star}$The inclination of NGC 4067 was estimated with the formula from Bottinelli et al. (1983) and the axis sizes from SIMBAD (Skrutskie et al. 2006).} \label{obs} \end{center} \end{figure*} Then we visually inspected the sample to look for grand-design, two-armed spiral galaxies. This `by eye' selection yielded 24 galaxies. Afterwards we searched the literature for any signatures of past interactions with a similar-sized galaxy or a satellite. We have excluded the objects that were classified in SIMBAD database (Wenger et al. 2000) as a Group of Galaxies, a Pair of Galaxies or Interacting Galaxies. After this brief research we obtained a list of 9 galaxies showing the grand-design spiral pattern and for which there is no evidence of their recent interactions with dwarfs or other galaxies. The identifiers and the basic information about these galaxies are summarized in Table 2. Note that our literature search was very basic and we welcome any comments concerning the possible signatures of interactions with other galaxies for the objects listed in Table 2. 
One may argue that the fact that we do not see any signs of interactions with other galaxies does not mean that there were none in the past. This is obviously true, but the probability that all 9 galaxies were perturbed by a satellite or a fly-by galaxy while leaving no visible evidence must be low. This small sample is intended only to show that our idealized scenario from the simulations is possible and that the origin of the spiral arms in these galaxies may be due to the interaction with the Virgo cluster. While we do not find any stellar streams pointing towards a satellite or any similar evidence of interaction with other galaxies near the objects listed in Table 2, there is some evidence from radio observations that they might have been interacting with the intracluster medium (ICM). In particular, M91 (also known as NGC 4548) shows perturbations in its gaseous content (Vollmer et al. 1999) and NGC 4535 shows asymmetry in the structure of its magnetic field (We\.zgowiec et al. 2007). These signatures point toward past ram-pressure stripping (RPS) caused by the ICM, which would only favor our scenario of interaction between the galaxies and the cluster. Still, pure ram-pressure induced morphologies would show characteristic, asymmetric, mainly one-armed structures (e.g., Kenney et al. 2014), in contrast with our tidally formed two-armed structures. The same applies to the galaxies classified as anemic (Koopmann \& Kenney 2004): these have a low star formation rate (SFR), which could have been decreased by RPS. We note however that recent simulations by Steinhauser et al. (2016) show that RPS caused by the cluster has only a small influence on the quenching. The selected 9 galaxies do not reside in any particular region of the Virgo cluster; their spatial distribution is more or less uniform. The majority of them have projected distances to M87 smaller than 1.5 Mpc, which is the largest apocenter in our simulations.
While some of them have greater projected distances than 1.5 Mpc, they still lie within the virial radius of the Virgo cluster ($\sim3$ Mpc), although we obviously cannot determine their orbits. We cannot say whether galaxies on orbits with still larger apocenters would produce tidally induced spirals. Note, however, that the Virgo cluster is not a spherically symmetric system and is known to possess a few substructures so the galaxies that are further away from M87 may have interacted with a closer massive subcluster. In our simulated galaxies we found that the pitch angle changes from initial $30^{\circ}$-$40^{\circ}$ down to $\sim$ 5$^{\circ}$. This covers all the typical values of the pitch angle (van den Bergh 1998) corresponding to different morphological types of the Hubble sequence. Just like Sundelius et al. (1987) we reproduced every Hubble type with different arm winding from Sc to Sa. Morphologies of our galaxies also resemble SDSS pictures of galaxies from the sample listed in Table 2. In Figure~\ref{obs} we compare SDSS images of three galaxies (upper panels) with appropriately inclined and rotated surface density maps of our simulated galaxies (lower panels). The images of the simulated galaxies in Figure~\ref{obs} have also been cut in the density below the threshold corresponding to the surface brightness of 25 mag arcsec$^{-2}$ assuming the mass-to-light ratio of $M/L=2$ solar units. This procedure was performed to hide the extensive tidal tail-bridge structures that may not be visible in the SDSS images due to the limited surface brightness range. Comparing the images in Figure~\ref{obs} we find good agreement between the shapes of the morphological structures in observations and simulations. Although all the maps from the simulations lack the bright core seen in the observations, we note that our simulated galaxy did not include a bulge initially and our goal was not to reproduce these particular galaxies. 
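The surface-brightness threshold used to trim the tidal tails can be translated into an approximate stellar mass surface density. The sketch below is our own side calculation: it assumes the $V$ band with $M_{\odot}\approx 4.83$ and the standard relation $\mu = M_{\odot} + 21.572 - 2.5\log_{10}\Sigma_L$ (the band choice is an assumption, since the text does not specify it).

```python
import math

def mass_surface_density(mu, m_over_l=2.0, M_sun=4.83):
    """Convert a surface brightness mu [mag/arcsec^2] to a stellar mass surface
    density [M_sun/pc^2], using 1 L_sun/pc^2 <-> M_sun + 21.572 mag/arcsec^2
    and a constant mass-to-light ratio."""
    L_per_pc2 = 10.0 ** (-0.4 * (mu - M_sun - 21.572))
    return m_over_l * L_per_pc2

# Threshold used in the text: 25 mag/arcsec^2 with M/L = 2 solar units
sigma_cut = mass_surface_density(25.0)  # roughly 7 M_sun/pc^2 under these assumptions
```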
In addition, in the observed images the outer parts of the spiral arms seem to be more tightly wound, especially for M91 and NGC 4067. Regardless of that, the shape and the thickness of the inner parts of the spiral arms, and their connection with the bar (particularly in the case of NGC 4067) are in very good agreement. This is even more remarkable given the fact that in the simulations we considered a general scenario and not the case of these three galaxies. \section{Discussion and Summary} \subsection{Discussion} The main findings of this paper concerning the nature of the spiral arms are in a good agreement with previous works considering somewhat different setups, yet focusing on the tidally induced spiral structure. Oh et al. (2008) performed 2D and later 3D (Oh et al. 2015) $N$-body simulations of a disky galaxy perturbed by a companion. Recently Pettitt et al. (2016) revisited this configuration including hydrodynamical simulations. In these papers it has been found that the pitch angle of the spiral arms peaks briefly after the closest approach of the companion and exponentially decreases from values $\lesssim40^{\circ}$ to $\sim 5^{\circ}$ in about 1 Gyr. We observe a similar behavior with the same range of values and timescales in our simulations over one orbital period. It is especially well visible for orbit O4 (Figure~\ref{pitch}) where 1 Gyr after the pericenter the pitch angle drops to $\sim 7^{\circ}$ and then, due to the exponential nature of the curve, the decrease is very small. The timescale of the winding of the spiral arms we find here seems greater than the wind-up time of arms in a galaxy orbiting a cluster inferred from the face-on snapshots published by Byrd \& Valtonen (1990) and Valluri (1993). The difference probably arises from the fact that in these early works the particle resolution was significantly lower than nowadays but may also arise from different initial conditions. Pettitt et al. 
(2016) found some monotonic changes in the time evolution of the pitch angle in different models. However, it is difficult to compare them with our results concerning the different winding rate between orbit O4 and orbits O1-O3 discussed in Section 5 of this paper, because these changes are very subtle and there are too many differences between the two sets of simulations (e.g. a different setup, the inclusion of the gas, varying the mass of the perturber and not only the orbit). Besides the similar time evolution of the pitch angle corresponding to the winding of the arms, the decrease of the arm strength found in the papers of Oh et al. (2008, 2015) and Pettitt et al. (2016) is also similar to our results when considered over one orbital period. The exponential decrease of the arm strength reflects the decay of the arms found in each of the simulations; however, it is difficult to compare the specific values because each paper applies a different approach to measure the arm strength. Another similarity is the radial dependence of the pattern speed that follows $\Omega-\kappa/2$ or slightly exceeds this level due to self-gravity (Oh et al. 2015). The resemblance also includes the small variability of this radial dependence. However, the spirals discussed in this paper, induced by the cluster-like halo, seem to have lower pattern speeds than those in the other papers, where the spiral arms are induced by companions. We note that this may also be a result of the specific properties of our progenitor galaxy. Despite the very similar picture concerning the spiral arms emerging from the papers by Oh et al. (2008, 2015), Pettitt et al. (2016) and this work, one may still question the tidal origin of the arms discussed in this paper. The reason for that may be the presence of the bar and its connection with the spiral arms. In the aforementioned papers the bar formation was suppressed by the inclusion of the bulge component in modelling the main galaxy.
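The exponential winding of the pitch angle described above can be illustrated with a toy model. This is not a fit to the simulation data: the asymptotic pitch angle and the requirement that the arms reach $\sim7^{\circ}$ 1 Gyr after pericenter (as reported for orbit O4) are assumptions chosen only for illustration.

```python
import math

# Illustrative exponential winding model (ours, not fitted to the simulations):
#   p(t) = p_inf + (p0 - p_inf) * exp(-t / tau)
p0, p_inf = 40.0, 5.0  # deg; assumed initial and asymptotic pitch angles

# Choose tau so the pitch angle reaches ~7 deg 1 Gyr after pericenter,
# mimicking the behavior reported for orbit O4.
tau = 1.0 / math.log((p0 - p_inf) / (7.0 - p_inf))  # ~0.35 Gyr

def pitch(t_gyr):
    """Pitch angle [deg] at time t_gyr after pericenter in the toy model."""
    return p_inf + (p0 - p_inf) * math.exp(-t_gyr / tau)
```

Because of the exponential form, most of the winding happens within the first Gyr, after which the decrease is very small, consistent with the behavior described above.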
In this paper, we used the simulations that were originally designed to investigate the bar formation. However, as demonstrated by figures 6 and 8 in {\L}okas et al. (2016), the bar for each orbit grows almost monotonically, even for the isolated case S5, while spirals appear and decay in tight correlation with the pericenter passages and are almost non-existent when the galaxy is evolved in isolation. Because of the fact that the oscillatory behavior in Figure~\ref{comp} does not correlate with the bar growth, while it does with the orbital motion, we can exclude the hypothesis that the spirals originate from the bar and confirm that they are tidally induced. We also note that while the bar is not the driver of the spiral structure and it may appear as an obstacle in the measurements of the properties of the spirals, the fact that it does form in our simulations provides a more complete picture of the evolution of galaxies in clusters. Indeed, a significant fraction of grand-design spirals in clusters, including those listed in Table~2, are barred. Results presented in this paper concern only the cases where the progenitor galaxy is on an exactly prograde orbit around a cluster. One may wonder whether these results would be applicable for other inclinations between the galaxy disk's angular momentum and galaxy orbital angular momentum. To clarify this issue to some extent we used additional simulations already at our disposal performed for orbits O2 and O3 where the initial inclination of the progenitor's disk was exactly retrograde, $i=180^{\circ}$. In this case no well-defined spiral arms form and the overall effect of tidal interactions is very mild. The dependence of the tidal effects on the disk inclination has been recently addressed by {\L}okas et al. (2015) in the case of dwarf galaxies orbiting a Milky-Way like host. 
The analysis of these scaled-down configurations leads to the expectation that our normal-size progenitor galaxy of the present paper would also form spiral arms for prograde inclinations from $i=0^{\circ}$ to $i=90^{\circ}$. However, the more inclined cases would generally produce a more complicated 3D spiral structure and a warped disk. On the other hand, for inclinations close to retrograde, from $i=90^{\circ}$ to $i=180^{\circ}$, no spiral arms or only very weak ones would be formed. \subsection{Summary} In this work we have discussed the scenario for the origin of the grand-design spiral arms in galaxies via tidal interactions with a galaxy cluster. We used $N$-body simulations of a Milky Way-like galaxy evolving inside a Virgo-like cluster on a few different orbits. The most important findings of this paper may be summarized as follows: \begin{itemize} \item Grand-design, two-armed, logarithmic spiral structure forms in galaxies on each orbit around the Virgo-like cluster. \item The formation of spiral arms is triggered during the pericenter passages. Later on the arms wind up and dissipate with time, to be triggered again during the next pericenter passage. This transient and dynamic behavior is reflected in the measurements of the pitch angle and the arm strength. \item The strongest arms form on the tightest orbit; however, the most extended orbit produces arms that wind up most slowly and are therefore most persistent. \item The pattern speed of the arms decreases with radius and follows the inner Lindblad resonance, indicating that the arms are kinematic density waves. \item Among the sample of 201 spiral galaxies in the Virgo cluster we find 24 clear grand-design spirals. Nine of these objects show no signatures of recent interactions with another galaxy that could have triggered their spiral structure. The morphologies of a few of these 9 galaxies resemble the morphologies of our simulated galaxies.
\end{itemize} \section*{Acknowledgments} This work was supported in part by the Polish National Science Centre under grant 2013/10/A/ST9/00023. We thank L. Widrow for providing procedures to generate $N$-body realizations for initial conditions. We are grateful to an anonymous referee for useful comments. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. In this work we also used the SIMBAD database, operated at CDS, Strasbourg, France. We acknowledge as well the usage of the HyperLeda database (http://leda.univ-lyon1.fr) and the use of Galaxy Zoo database. The galaxy images were provided by the Sloan Digital Sky Survey.
\section{Introduction} Recall that a group $G$ is said to be \emph{torsion} (or \emph{periodic}) if every element of $G$ has finite order. Obviously, every finite group is torsion. Infinite torsion groups can be constructed as direct products of finite groups; note, however, that these groups are not finitely generated. The following famous problem was posed by William Burnside in 1902. \begin{prob} Is every finitely generated torsion group finite? \end{prob} This question and its variations served as a catalyst for research in group theory throughout the 20th century. In 1964, Golod answered it in the negative \cite{Gol}. \begin{thm}[Golod] \label{main} There exists a finitely generated infinite torsion group. \end{thm} Golod's proof was based on the Golod-Shafarevich inequality giving a sufficient condition for certain graded algebras to be infinite dimensional. Since then, many alternative constructions of infinite finitely generated torsion groups have been found. Notable examples include free Burnside groups \cite{A}, groups acting on rooted trees \cite{Gri}, and inductive limits of hyperbolic groups \cite{Gro,Ols93}. Although most of these constructions are quite involved, the Grigorchuk example \cite{Gri} and Olshanskii's simplification of the original Golod's argument \cite{Ols95} are simple enough to be discussed in a standard group theory course. The goal of this paper is to provide yet another elementary proof based on the Nielsen-Schreier formula. The idea of our proof is by no means original. In one form or another, it appeared in \cite{LO,OO} and some other papers. However, the proofs in these papers were ``spoiled'' by technicalities caused by the desire to ensure certain additional properties. Below we provide a simplified proof along these lines. \section{Proof of Golod's theorem} Given two elements $x,y$ of a group $G$, we write $x^y$ for $y^{-1}xy$.
We denote by $\ll S\rr ^G$ the normal closure of a subset $S$ in $G$, i.e., the smallest normal subgroup of $G$ containing $S$. If $G$ is finitely presented, $\d (G)$ denotes the deficiency of $G$. That is, $\d(G)$ is the maximum of the difference between the number of generators and the number of relations over all finite presentations of $G$. \begin{lem}\label{sm} For every finite index subgroup $H$ of a finitely presented group $G$, we have \begin{equation}\label{eq:sm} \d (H)-1\ge (\d (G)-1)|G:H|. \end{equation} \end{lem} \begin{proof} Let $G=F/R$ be a finitely presented group, where $F=\langle x_1, \ldots , x_d\rangle $ is free of rank $d$, $R=\ll R_1, \ldots , R_r\rr ^F$, and $d-r=\d(G)$. Let $H$ be a finite index subgroup of $G$, and let $K$ be the full preimage of $H$ in $F$. By the Nielsen-Schreier formula, $K$ is a free group of rank $(d-1)j+1$, where $j=|F:K|=|G:H|$. It is straightforward to check that $R=\ll \{ R_i^t \mid i=1, \ldots, r,\, t\in T\} \rr ^K$, where $T$ is a left transversal for $K$ (i.e., a set of representatives of the left cosets of $K$ in $F$). Thus, $H=K/R$ has a presentation with $(d-1)j+1$ generators and $r|T|=rj$ relations, which implies (\ref{eq:sm}). \end{proof} Let $\mathcal D$ denote the class of all finitely presented groups that contain a finite index subgroup of deficiency at least $2$. Let $G\in \mathcal D$ and let $H\le G$ be a finite index subgroup of deficiency at least $2$. Passing to the intersection of all conjugates of $H$, we obtain a finite index normal subgroup $N\lhd G$ such that $N\le H$. Lemma \ref{sm} implies that $\d (N)\ge 2$. Thus, every $G\in \mathcal D$ contains a finite index \emph{normal} subgroup of deficiency at least $2$. For a group $G$, we denote by $\widehat G$ the quotient of $G$ by the intersection of all finite index subgroups of $G$. Basic linear algebra implies that every group of deficiency at least $2$ surjects onto $\mathbb Z$. Therefore, $|\widehat G|=\infty$ for every $G\in \mathcal D$.
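As a sanity check of Lemma \ref{sm}, consider the following worked example (ours, not part of the original argument), which shows that the inequality is sharp for free groups.

```latex
% Worked example (ours): take G = F_2, free of rank 2, so d = 2, r = 0 and
% \d(G) = 2. For a subgroup H of index j, the Nielsen-Schreier formula gives
% a free group of rank (2-1)j + 1 = j + 1, hence
\[
\d(H)-1 \;=\; j \;=\; (\d(G)-1)\,|G:H|,
\]
% so inequality (\ref{eq:sm}) holds with equality for free groups.
```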
\begin{prop}\label{quot} Let $G\in \mathcal D$. For every $g\in G$, there exists $m\in \mathbb Z$ such that $Q=G/\ll g^{m}\rr ^G\in \mathcal D$ and the image of $g$ in $\widehat Q$ has finite order. \end{prop} \begin{proof} The idea of the proof is borrowed from \cite{BP}. If the image of $g$ in $\widehat G$ has finite order, we can take $m=0$. Henceforth, we assume that the image of $g$ in $\widehat G$ has infinite order. Let $M$ be a finite index normal subgroup of $G$ such that $\d (M)\ge 2$. By our assumption, there exist finite index subgroups $N\lhd G$ such that $|\langle g\rangle N / N |$ is arbitrarily large; in particular, we can find a finite index subgroup $N\lhd G$ such that $N\le M$ and \begin{equation}\label{o(g)} |\langle g\rangle N / N |> |G:M|. \end{equation} Let $m= |\langle g\rangle N / N |$, $f=g^m$. Obviously, $f\in N$. Let $T$ be a right transversal of $\langle g\rangle N$ in $G$. For every $s\in G$, we have $s=g^knt$ for some $k\in \mathbb Z$, $n\in N$, $t\in T$, and $f^s=f^{nt}=(f^{t})^{n^t}$. Since $n^{t}\in N$, we obtain $\ll f\rr ^G=\ll \{ f^{t} \, |\, t\in T\}\rr^N$. Therefore, $$ \d\Big(N/\ll f\rr^G\Big) \ge \d(N) - |T| =\d(N) - |G:\langle g\rangle N| = \d(N)- \frac{|G/N|}m. $$ Combining this inequality with Lemma \ref{sm} and (\ref{o(g)}), we obtain $$ \d\Big(N/\ll f\rr^G\Big) -1 \ge (\d(M)-1)|M/N| - \frac{|G/N|}m = |M/N|\left(\d(M)-1 - \frac{|G/M|}m\right) > 0. $$ Therefore, $G/\ll f\rr^G\in \mathcal D$. \end{proof} \begin{proof}[Proof of Theorem \ref{main}] Let $M_0=G_0=F_2$ be the free group of rank $2$. Clearly, $G_0\in \mathcal D$. We enumerate all elements of $G_0=\{ 1=g_0, g_1, g_2, \ldots \} $ and construct an infinite torsion quotient of $G_0$ by the following inductive procedure.
Suppose that for some $k\ge 0$, we have already constructed a group $G_k$ and a subgroup $M_k\lhd G_k$ such that the following conditions hold: \begin{enumerate} \item[(a)] $G_k\in \mathcal D$; \item[(b)] the natural image of $g_k$ in $\widehat G_k$ has finite order; \item[(c)] $|G_k/M_k|\ge k$. \end{enumerate} Since $G_k\in \mathcal D$, $G_k$ contains subgroups of arbitrarily large finite index. In particular, we can find a subgroup $L_k\lhd G_k$ such that $L_k\le M_k$ and $\infty > |G_k/L_k|\ge k+1$. If the image of $g_{k+1}$ in $\widehat G_k$ has finite order, we let $G_{k+1}=G_k$ and $M_{k+1}=L_k$. Otherwise, let $g$ be a non-trivial element of $\langle g_{k+1}\rangle \cap L_k$. By Proposition \ref{quot}, there exists $m\in \mathbb Z$ such that $G_{k+1}= G_{k}/\ll g^m\rr ^{G_{k}}\in \mathcal D$ and the image of $g_{k+1}$ in $\widehat G_{k+1}$ has finite order. Let $M_{k+1}$ be the image of $L_k$ in $G_{k+1}$. Note that we have $G_{k+1}/ M_{k+1} \cong G_{k}/L_k$ since $g\in L_k$; therefore, $G_{k+1}/ M_{k+1}$ naturally surjects onto $G_k/M_k$. Thus we obtain the following commutative diagram \begin{equation}\label{seq} \begin{array}{ccccccc} G_0 & \longrightarrow & G_1 & \longrightarrow & G_2 & \longrightarrow &\ldots\\ \downarrow && \downarrow && \downarrow &&\\ G_0/M_0& \longleftarrow & G_1/M_1 & \longleftarrow & G_2/M_2 & \longleftarrow & \ldots,\\ \end{array} \end{equation} where all arrows are surjective. Let $G$ be the direct limit of the first row. That is, $G=G_0/\bigcup_{k\in \mathbb N}N_k$, where $N_k$ is the kernel of the homomorphism $G_0\to G_k$ obtained by composing the first $k$ maps in the first row of (\ref{seq}). By (b), the image of $g_k$ has finite order in $\widehat G_k$. Since $\widehat G_k$ surjects onto $\widehat G$ for all $k$, $\widehat G$ is a torsion group. On the other hand, $G$ surjects onto $G_k/M_k$ for all $k$. Combining this with (c), we obtain that $\widehat G$ is infinite. \end{proof}
\section{Introduction} Transfer operator based methods involving the Perron-Frobenius and Koopman operators have been successfully applied to the analysis and design of nonlinear dynamical systems in \cite{Dellnitz_Junge,Mezic2000,froyland_extracting,Junge_Osinga,Mezic_comparison,Dellnitztransport,mezic2005spectral,Mehta_comparsion_cdc,Vaidya_TAC,Vaidya_CLM_journal,raghunathan2014optimal,susuki2011nonlinear,mezic_koopmanism,mezic_koopman_stability,surana_observer}. The basic idea behind these methods is to shift the focus from the state space, where the system evolution is nonlinear, to the space of measures or functions, where the system evolution is linear. The linearity of the transfer operator framework offers several advantages for analysis and design problems involving nonlinear systems. More importantly, this framework allows us to carry over our intuition from linear systems to nonlinear systems. One of the challenges in the application of these methods is the computation of finite dimensional approximations of these infinite dimensional operators. \cite{dellnitz2002set} proposed set-oriented numerical methods for the finite dimensional approximation of the P-F transfer operator using knowledge of the system model. Data-driven, model-free methods for the finite dimensional approximation of the Koopman operator were proposed in \cite{Mezic2000}. Dynamic Mode Decomposition (DMD) (\cite{DMD_schmitt}) and Extended DMD (\cite{rowley2009spectral,EDMD_williams}) are two popular algorithms that have been proposed for approximating the spectrum (eigenvalues and eigenfunctions) of the Koopman operator. These algorithms rely on mapping the time-series data from the state space into the space of observables using finitely many basis functions from a dictionary set. A finite dimensional approximation of the Koopman operator is then obtained as the matrix that best describes the evolution of these basis functions.
The finite dimensional Koopman matrix is obtained as the solution of a least squares optimization problem. However, the existing approximation algorithms, DMD and EDMD, do not preserve some of the important properties of the Koopman and transfer operators. In particular, the Koopman operator is a positive operator, i.e., any positive function is mapped to a positive function by the Koopman operator \cite{Lasota}. Similarly, the P-F and Koopman operators are adjoint operators. Furthermore, the P-F operator is a special class of Markov operator \cite{Lasota}. In the recent work by \cite{klus2015numerical}, the adjoint property of these two operators was exploited to provide a data-driven approximation of both the Koopman and P-F operators. The Markov property of the P-F operator combined with the adjoint nature of the two operators has important implications for the finite dimensional approximation of these two operators. In this paper, we propose a new algorithm for the finite-dimensional approximation of these two operators that explicitly accounts for the positivity and Markov properties and ensures that these features are retained in the finite-dimensional approximation. We show that preserving these properties not only allows one to better approximate the steady-state dynamics as captured by the spectrum (eigenvalues and eigenfunctions) of these operators, but is also essential to recover the actual transient behavior of the system. We call the new algorithm for the finite dimensional approximation of the transfer operator, which preserves the properties of its infinite dimensional counterpart, Naturally Structured Dynamic Mode Decomposition (NSDMD). We show that the problem of finding the finite dimensional approximation of the Koopman operator using NSDMD is a least squares optimization problem with constraints and is convex. Using the adjoint property between the two transfer operators, we also construct the finite dimensional approximation of the P-F transfer operator.
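To make the constrained least-squares idea concrete, here is a minimal numerical sketch of imposing entrywise nonnegativity on the Koopman matrix by projected gradient descent. This is our own illustration of the positivity constraint only, not the full NSDMD formulation (in particular, the Markov row-sum constraint is not imposed here), and the function name and step-size choice are ours.

```python
import numpy as np

def positivity_constrained_K(G, A, n_iter=5000):
    """Minimize ||G K - A||_F^2 subject to K >= 0 (entrywise) by projected
    gradient descent. Sketch of the positivity constraint only; the full
    Markov (row-sum) constraint of NSDMD is not imposed."""
    K = np.zeros_like(A, dtype=float)
    step = 0.5 / np.linalg.norm(G.T @ G, 2)   # safe step: below 1/L, L = 2*||G^T G||
    for _ in range(n_iter):
        K -= step * 2.0 * G.T @ (G @ K - A)   # gradient of the quadratic objective
        np.maximum(K, 0.0, out=K)             # project onto the nonnegative orthant
    return K
```

For a diagonal $\bf G$ the problem decouples entrywise, so the constrained solution is simply the unconstrained least-squares solution clipped at zero, which provides an easy check of the sketch.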
The P-F transfer operator is used to compute the finite dimensional approximation of the eigenfunction with eigenvalue one of the P-F operator, capturing the steady-state invariant dynamics of the system. The structure-preserving property of our proposed NSDMD algorithm makes this possible. Furthermore, the DMD and EDMD algorithms do not lead to a stable finite dimensional Koopman matrix, since the largest eigenvalue of the Koopman matrix is not guaranteed to be one. Since the Koopman operator obtained using NSDMD preserves the Markov property, the largest eigenvalue is always one, leading to a stable finite-dimensional approximation. The organization of the paper is as follows. In Section \ref{section_operator}, we provide a brief overview of the infinite dimensional operators and discuss their properties. In Section \ref{section_numerics}, we present an overview of existing algorithms for the finite dimensional approximation of the operators, namely set-oriented numerics, DMD, and EDMD. We present the main result of this paper in the form of the novel NSDMD algorithm that preserves the properties of these operators from infinite dimension to finite dimension in Section \ref{section_NSDMD}. Simulation results are presented in Section \ref{section_simulation}, followed by conclusions in Section \ref{section_conclusion}. \section{Transfer operators and their Spectrum}\label{section_operator} Consider a discrete time dynamical system \begin{eqnarray} x_{t+1}=T(x_t)\label{system} \end{eqnarray} where $T:X\subset \mathbb{R}^N\to X$ is assumed to be an invertible, smooth diffeomorphism. Furthermore, we denote by ${\cal B}(X)$ the Borel-$\sigma$ algebra on $X$ and by ${\cal M}(X)$ the vector space of bounded complex valued measures on $X$. Associated with this discrete time dynamical system are two linear operators, namely the Koopman and Perron-Frobenius (P-F) operators. These two operators are defined as follows.
\begin{definition}[Perron-Frobenius Operator] $\mathbb{P}_T:{\cal M}(X)\to {\cal M}(X)$ is given by \[[\mathbb{P}\mu](A)=\int_{{\cal X} }\delta_{T(x)}(A)d\mu(x)=\mu(T^{-1}(A))\] where $\delta_{T(x)}(A)$ is the stochastic transition function, which measures the probability that the point $x$ reaches the set $A$ in one time step under the system mapping $T$. \end{definition} \begin{definition}[Invariant measures] Invariant measures are the fixed points of the P-F operator $\mathbb{P}_T$ that are additionally probability measures. Let $\bar \mu$ be an invariant measure; then $\bar \mu$ satisfies \[\mathbb{P}\bar \mu=\bar \mu\] \end{definition} Under the assumption that the state space $X$ is compact, it is known that the P-F operator admits at least one invariant measure. \begin{definition} [Koopman Operator] Given any $h\in\cal{F}$, $\mathbb{U}:{\cal F}\to {\cal F}$ is defined by \[[\mathbb{U} h](x)=h(T(x))\] \end{definition} \begin{properties}\label{property} The following properties of the Koopman and Perron-Frobenius operators can be stated. \begin{enumerate} \item [a).] For ${\cal F}=L_2(X,{\cal B}, \bar \mu)$ as the Hilbert space, it is easy to see that \begin{eqnarray*} &&\parallel \mathbb{U}h\parallel^2=\int_X |h(T(x))|^2d\bar \mu(x) \nonumber\\&=&\int_X | h(x)|^2 d\bar\mu(x)=\parallel h\parallel^2 \end{eqnarray*} where we used the fact that $\bar \mu$ is an invariant measure. This implies that the Koopman operator is unitary. \item [b).] For any $h\geq 0$, we have $[\mathbb{U}h](x)\geq 0$, and hence the Koopman operator is a positive operator. \item [c).] For an invertible system $T$, the P-F operator for the inverse system $T^{-1}:X\to X$ is given by $\mathbb{P}^*$ and $\mathbb{P}^*\mathbb{P}=\mathbb{P}\mathbb{P}^*=I$. Hence, the P-F operator is unitary. \item [d).]
If we define the P-F operator to act on the space of densities, i.e., $L_1(X)$, and the Koopman operator on the space of $L_\infty(X)$ functions, then it can be shown that the P-F and Koopman operators are dual to each other as follows \footnote{With some abuse of notation, we use the same notation for the P-F operator defined on the space of measures and on the space of densities.} \begin{eqnarray*} &&\left<\mathbb{U} f,g\right>=\int_X [\mathbb{U} f](x)g(x)dx\nonumber\\&=&\int_Xf(y)g(T^{-1}(y))\left|\frac{dT^{-1}}{dy}\right|dy=\left<f,\mathbb{P} g\right> \end{eqnarray*} where $f\in L_{\infty}(X)$ and $g\in L_1(X)$, and the P-F operator on the space of densities $L_1(X)$ is defined as follows \[[\mathbb{P}g](x)=g(T^{-1}(x))\left|\frac{dT^{-1}(x)}{dx}\right|\] \item [e).] For $g(x)\geq 0$, $[\mathbb{P}g](x)\geq 0$. Let $(X,{\cal B},\mu)$ be the measure space, where $\mu$ is a positive, but not necessarily invariant, measure of $T:X\to X$; then the P-F operator $\mathbb{P}:L_1(X,{\cal B},\mu)\to L_1(X,{\cal B},\mu)$ satisfies the following property. \item [f).] \[\int_X [\mathbb{P}g](x)d\mu(x)=\int_X g(x)d\mu(x)\]\label{Markov_property} \end{enumerate} \end{properties} The linearity of the P-F operator combined with properties \ref{property} (e) and \ref{property} (f) makes the P-F operator a particular case of a Markov operator. This Markov property of the P-F operator has significant consequences for its finite dimensional approximation. We will discuss this in Section \ref{section_numerics} on set-oriented numerical methods for the finite dimensional approximation of the P-F operator. Since $\mathbb{P}$ and $\mathbb{U}$ are unitary operators, their spectrum lies on the unit circle. Given the adjoint nature of the two operators, their spectra are related.
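In the finite dimensional setting discussed in the next section, the duality in property (d) reduces to matrix associativity, and the Markov property (f) to row-stochasticity. A minimal numeric sketch, with a random row-stochastic matrix standing in for the discretized P-F operator (the matrix itself is arbitrary and chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random row-stochastic matrix: a finite-dimensional P-F operator acting on
# row vectors (measures) from the right, mu' = mu @ P.
P = rng.random((5, 5))
P /= P.sum(axis=1, keepdims=True)

mu = rng.random(5)            # a (not necessarily normalized) measure
h = rng.random(5)             # an observable; Koopman action is h' = P @ h

lhs = mu @ (P @ h)            # <mu, U h>
rhs = (mu @ P) @ h            # <P mu, h>
assert np.isclose(lhs, rhs)   # duality = associativity in finite dimensions

# Positivity, the Markov property, and spectral radius one are manifest:
assert (P >= 0).all() and np.allclose(P.sum(axis=1), 1.0)
assert np.isclose(np.abs(np.linalg.eigvals(P)).max(), 1.0)
```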
To study the connection between the spectra of these two operators, we refer the interested reader to \cite{Mezic_comparison} and \cite{Mehta_comparsion_cdc} (Theorem 5 and Corollary 6) for results connecting the spectra of the Koopman and P-F operators in both the infinite dimensional and finite dimensional settings. \section{Set-oriented numerics and Dynamic mode decomposition}\label{section_numerics} \subsection{Set-oriented numerical methods}\label{section_setoriented} Set-oriented numerical methods were primarily developed for the finite dimensional approximation of the Perron-Frobenius operator for the case where the system dynamics are known \cite{dellnitz2002set, GAIO01}. However, these algorithms can be modified or extended to the case where the system information is available in the form of time series data. The basic idea behind set-oriented numerics is to partition the state space, $X$, into disjoint boxes $D_i$ such that $X=\cup_{i=1}^N D_i$. Consider a finite partition $X^{'}=\{D_1,\ldots, D_K\}$. Now, instead of a Borel $\sigma$-algebra, consider a $\sigma$-algebra of all possible subsets of $X$. A real-valued measure $\mu_j$ is defined by ascribing to each element $D_j$ a real number. This allows one to identify the associated measure space with a finite-dimensional real vector space $\mathbb{R}^K$. A given mapping $T : X \to X$ defines a stochastic transition function $\delta_{T(x)}(\cdot)$. This function can be used to obtain a coarser representation of the P-F operator, denoted by ${\bf P}'\in\mathbb{R}^{K\times K}$, as follows: for $\mu^{'}=(\mu_1^{'},\ldots, \mu_K^{'})$ we define a measure on $X$ as \[d\mu(x)=\sum_{k=1}^K \mu_k^{'}\chi_{D_k}(x)\frac{dm(x)}{m(D_k)}\] where $\chi_{D_k}(x)$ is the indicator function of $D_k$ and $dm$ is the Lebesgue measure.
The finite dimensional approximation of the P-F matrix, ${\bf P}'$, can now be obtained as follows: \begin{eqnarray}&&\nu_i'=[{\bf P}'\mu'](D_i)=\sum_{j=1}^K \int_{D_j}\delta_{T(x)}(D_i)\mu_j'\frac{dm(x)}{m(D_j)} \nonumber\\&=&\sum_{j=1}^K \mu_j'{\bf P}'_{ji}\end{eqnarray} where \[{\bf P}'_{ij}=\frac{m(T^{-1}(D_j)\cap D_i)}{m(D_i)}\] The resulting matrix ${\bf P}'$ is a Markov matrix and is row stochastic if we consider the state $\mu'$ to be a row vector multiplying ${\bf P}'$ from the left. The individual entries of this Markov matrix can be obtained by a Monte Carlo approach, running simulations over a short time interval starting from different initial conditions. Typically, each box $D_i$ is populated with $M$ uniformly distributed initial conditions. The entry ${\bf P}'_{ij}$ is then approximated by the fraction of initial conditions from box $D_i$ that land in box $D_j$ after one forward iteration of the mapping $T$. The Monte Carlo based approach can be extended to the computation of the P-F transfer operator from time series data. Let $\{x_0,T(x_0),\ldots, T^{N-1}(x_0)\}$ be the time series data set. The number of visits to box $i$ is then given by \[\sum_{k=0}^{N-1} \chi_i(T^k(x_0)) \] where $\chi_i$ is the indicator function of box $i$. The $(i,j)$ entry of the P-F matrix, ${\bf P}'_{ij}$, is then given by the fraction of these visits to box $i$ that end up in box $j$ after one iterate; since the last point of the time series has no successor, the sums run only to $N-2$: \[{\bf P}'_{ij}=\frac{1}{\sum_{k=0}^{N-2} \chi_i (T^{k}(x_0))}\sum_{k=0}^{N-2}\chi_i(T^{k}(x_0))\chi_j(T^{k+1}(x_0)).\] \subsection{Dynamic mode decomposition (DMD) and Extended DMD} The Dynamic Mode Decomposition (DMD) method was introduced in \cite{DMD_schmitt} for the dynamical analysis of fluid flow field data. In the context of this paper, DMD can be viewed as a computational algorithm for approximating the spectrum of the Koopman operator \cite{rowley2009spectral}.
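Before proceeding, the time-series estimator of ${\bf P}'$ from the preceding subsection can be sketched as follows. The logistic map trajectory, the number of boxes, and the function name are our own illustrative choices, not taken from the set-oriented literature.

```python
import numpy as np

def pf_matrix_from_timeseries(traj, n_boxes, lo=0.0, hi=1.0):
    """Estimate the row-stochastic P-F matrix P'_{ij}: the fraction of visits
    to box i that land in box j one time step later."""
    idx = np.clip(((np.asarray(traj) - lo) / (hi - lo) * n_boxes).astype(int),
                  0, n_boxes - 1)
    P = np.zeros((n_boxes, n_boxes))
    for i, j in zip(idx[:-1], idx[1:]):   # count observed one-step transitions
        P[i, j] += 1.0
    row = P.sum(axis=1, keepdims=True)
    # Normalize each visited row; rows of boxes never visited stay zero.
    return np.divide(P, row, out=np.zeros_like(P), where=row > 0)

# Example trajectory: the logistic map x -> 4x(1-x) on [0,1] (illustrative only).
x = np.empty(10000)
x[0] = 0.1234
for k in range(9999):
    x[k + 1] = 4.0 * x[k] * (1.0 - x[k])

P = pf_matrix_from_timeseries(x, n_boxes=20)
```

By construction every visited row of the estimated matrix sums to one, which is exactly the Markov property of the P-F operator discussed earlier.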
An extension of DMD, extended DMD (EDMD), does a better job of approximating the spectrum of the Koopman operator for both linear and nonlinear underlying systems. In the following, we briefly explain the EDMD algorithm and show how the solution of the DMD algorithm can be derived as a special case of EDMD. Consider snapshots of a data set obtained from simulating a discrete time dynamical system $z\to T(z)$ or from an experiment \begin{eqnarray} \overline X = [x_1,x_2,\ldots,x_M],&\overline Y = [y_1,y_2,\ldots,y_M] \label{data} \end{eqnarray} where $x_i\in X$ and $y_i\in X$. The two data sets are assumed to consist of consecutive snapshot pairs, i.e., $y_i=T(x_i)$. Now let $\mathcal{D}= \{\psi_1,\psi_2,\ldots,\psi_K\}$ be the set of dictionary functions or observables. The dictionary functions are assumed to satisfy $\psi_i\in L_2(X,{\cal B},\mu)={\cal G}$, where $\mu$ is some positive measure, not necessarily the invariant measure of $T$. Let ${\cal G}_{\cal D}$ denote the span of ${\cal D}$, so that ${\cal G}_{\cal D}\subset {\cal G}$. The choice of dictionary functions is crucial: the set should be rich enough to approximate the leading eigenfunctions of the Koopman operator. Define the vector-valued function $\mathbf{\Psi}:X\to \mathbb{C}^{K}$ \begin{equation} \mathbf{\Psi}({x}):=\begin{bmatrix}\psi_1(x) & \psi_2(x) & \cdots & \psi_K(x)\end{bmatrix} \end{equation} In this setting, $\mathbf{\Psi}$ is the mapping from physical space to feature space. Any functions $\phi,\hat{\phi}\in \mathcal{G}_{\cal D}$ can be written as \begin{eqnarray} \phi = \sum_{k=1}^K a_k\psi_k=\boldsymbol{\Psi^T a},\quad \hat{\phi} = \sum_{k=1}^K \hat{a}_k\psi_k=\boldsymbol{\Psi^T \hat{a}} \end{eqnarray} for some sets of coefficients $\boldsymbol{a},\boldsymbol{\hat{a}}\in \mathbb{C}^K$. 
Let \[ \hat{\phi}(x)=[\mathbb{U}\phi](x)+r,\] where $r\in\mathcal{G}$ is a residual function that appears because $\mathcal{G}_{\cal D}$ is not necessarily invariant under the action of the Koopman operator. To find the optimal mapping that minimizes this residual, let $\bf K$ be the finite dimensional approximation of the Koopman operator. The matrix $\bf K$ is obtained as the solution of the least-squares problem \begin{equation}\label{edmd_op} \min\limits_{\bf K}\parallel {\bf G}{\bf K}-{\bf A}\parallel_F \end{equation} \begin{eqnarray}\label{edmd1} &&{\bf G}=\frac{1}{M}\sum_{m=1}^M \boldsymbol{\Psi}({x}_m)^\top \boldsymbol{\Psi}({x}_m)\nonumber\\ &&{\bf A}=\frac{1}{M}\sum_{m=1}^M \boldsymbol{\Psi}({x}_m)^\top \boldsymbol{\Psi}({y}_m), \end{eqnarray} with ${\bf K},{\bf G},{\bf A}\in\mathbb{C}^{K\times K}$. The optimization problem (\ref{edmd_op}) can be solved explicitly to obtain the following solution for the matrix $\bf K$ \begin{eqnarray} {\bf K}_{EDMD}={\bf G}^\dagger {\bf A}\label{EDMD_formula} \end{eqnarray} where ${\bf G}^{\dagger}$ is the pseudoinverse of the matrix $\bf G$. Under the assumption that the leading Koopman eigenfunctions are nearly contained within $\mathcal{G}_{\mathcal{D}}$, the subspace spanned by the elements of $\mathcal{D}$, the eigenvalues of $\bf K$ are the EDMD approximations of the Koopman eigenvalues. The right eigenvectors of $\bf K$ generate approximations of the eigenfunctions via (\ref{EDMD_eigfunc_formula}). In particular, the approximation of a Koopman eigenfunction is given by \begin{equation}\label{EDMD_eigfunc_formula} \phi_j=\boldsymbol{\Psi} v_j \end{equation} where $v_j$ is the $j$-th right eigenvector of $\bf K$ and $\phi_j$ is the approximate Koopman eigenfunction associated with the $j$-th eigenvalue. 
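The EDMD recipe above maps directly onto a few lines of numpy. The following is a sketch rather than the authors' implementation; the linear map $T(x)=0.5x$ and the monomial dictionary $\{1,x,x^2\}$ are illustrative choices for which the dictionary spans a Koopman-invariant subspace, so the eigenvalues $\{1, 0.5, 0.25\}$ are recovered exactly:

```python
import numpy as np

def edmd(X, Y, psi):
    """EDMD approximation K = G^+ A of the Koopman operator.

    X, Y hold snapshot pairs with y_m = T(x_m); psi maps a state to the
    feature row vector Psi(x) = [psi_1(x), ..., psi_K(x)].
    """
    PX = np.array([psi(x) for x in X])      # M x K feature matrix for X
    PY = np.array([psi(y) for y in Y])      # M x K feature matrix for Y
    M = len(X)
    G = PX.T @ PX / M                       # Gram matrix G
    A = PX.T @ PY / M                       # cross matrix A
    return np.linalg.pinv(G) @ A            # K_EDMD = G^+ A

# Illustrative example: T(x) = 0.5 x with dictionary {1, x, x^2}
X = np.linspace(-1.0, 1.0, 20)
Y = 0.5 * X
K = edmd(X, Y, lambda x: np.array([1.0, x, x * x]))
eigvals = np.sort(np.linalg.eigvals(K).real)   # -> approximately [0.25, 0.5, 1.0]
```

Choosing the coordinate functions $\psi_i(x)=e_i^\top x$ as the dictionary instead reduces this to standard DMD, ${\bf K}_{DMD}=\overline Y\,\overline X^\dagger$.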
DMD is a particular case of EDMD: it corresponds to choosing the dictionary functions as the coordinate functions $\psi_i(x)=e_i^\top x$, where $e_i$ is the unit vector with $1$ at the $i^{th}$ position and zero elsewhere, so that $K$ equals the state dimension. With this choice of dictionary functions, it can be shown that the approximation of the Koopman operator using the DMD approach can be written as \[{\bf K}_{DMD}=\overline Y\;\overline X^{\dagger},\] where $\overline X$ and $\overline Y$ are the data sets defined in (\ref{data}). \section{Naturally structured dynamic mode decomposition}\label{section_NSDMD} In this section, we provide a new algorithm for the finite dimensional approximation of the Koopman and P-F operators that preserves some of the properties of these two operators. In particular, we develop an algorithm that preserves the positivity property of the Koopman operator. Furthermore, the adjoint nature of the Koopman and P-F operators is used to impose additional constraints on the entries of the Koopman matrix. These structural properties are not considered in the existing DMD and EDMD algorithms for the finite dimensional approximation of the Koopman operator. We show using examples that preserving these properties not only leads to a better approximation of the eigenfunctions and eigenvalues of the transfer operators, but is also essential to capture the correct transient behavior of the system. Capturing the true transient dynamics is of particular importance for applications of the transfer operator to data-driven control and estimation problems. In our proposed numerical algorithm for the finite dimensional approximation of the transfer operators from data, we start with the choice of dictionary functions ${\cal D}=\{\psi_1,\ldots,\psi_K\}$, where $\psi_i(x)\in {\cal G}=L_2(X,{\cal B},\mu)$. As already stated, the choice of dictionary functions is crucial and the set should be rich enough to approximate the Koopman eigenfunctions. 
Similarly, the data set generated by the dynamics should be rich enough to carry information about the inherent dynamics of the system. We believe that the proper choices of dictionary functions and data set are intimately connected. We make the following assumption on the choice of dictionary functions. \begin{assumption}\label{assumption_dic} We assume that the dictionary functions satisfy $\psi_i(x)\geq 0$ for $i=1,\ldots, K$ and that the inner product matrix $\Lambda$ of the dictionary functions, $\Lambda=\langle\boldsymbol{\Psi}(x),\boldsymbol{\Psi}(x)\rangle$ with $[\Lambda]_{ij}=\langle\psi_i,\psi_j\rangle$, is a symmetric positive definite matrix. \end{assumption} \begin{remark} The Gaussian radial basis function (RBF), given by $\exp\left(-\frac{\parallel x-x_i \parallel^2}{\sigma^2}\right)$, is a good choice of dictionary function satisfying the above assumption. \end{remark} Let ${\cal G}_{\cal D}$ be the span of these dictionary functions. Now consider any functions $\phi$ and $\hat \phi$ in ${\cal G}_{\cal D}$; we can express these functions as \begin{eqnarray} \phi = \sum_{k=1}^K a_k\psi_k=\boldsymbol{\Psi^T a},\quad \hat{\phi} = \sum_{k=1}^K \hat{a}_k\psi_k=\boldsymbol{\Psi^T \hat{a}} \end{eqnarray} As before, the functions $\phi$ and $\hat \phi$ are related as \[\hat \phi(x)= [\mathbb{U}\phi](x)+r\] where $r\in {\cal G}$ is the residual, which arises because ${\cal G}_{\cal D}$ is not necessarily invariant under the action of the Koopman operator. Extended DMD seeks the matrix ${\bf K}\in \mathbb{R}^{K\times K}$ that does the best job of mapping $\boldsymbol{a}$ to $\boldsymbol{\hat a}$. The matrix $\bf K$ is obtained as the solution of the least-squares problem outlined in Eqs. (\ref{EDMD_formula}) and (\ref{edmd1}). Now consider the case where $\phi(x)\geq 0$. Then under Assumption \ref{assumption_dic}, we know that $a_i\geq 0$. Using the positivity property of the Koopman operator, we know that $[\mathbb{U}\phi](x)\geq 0$. 
The vector $\boldsymbol{a}$ is mapped to $\hat {\boldsymbol a}$ by the finite dimensional matrix $\bf K$. To preserve the positivity property of the Koopman operator (i.e., property \ref{property}b), we require that the coefficients $\hat {a}_i$ are also nonnegative. This, in turn, implies that the mapping $\bf K$ should satisfy \begin{eqnarray}{\bf K}_{ij}\geq 0,\;\;{\rm for}\;\; i,j=1,\ldots, K. \label{positive} \end{eqnarray} Let $\bf P$ be the finite dimensional approximation of the P-F operator. Since the P-F operator is a Markov operator, its finite dimensional approximation constructed on dictionary functions satisfying Assumption \ref{assumption_dic} should inherit the Markov property. In particular, consider any density function, $\varphi$, expressed as a linear combination of dictionary functions \[\varphi=\sum_{k=1}^K \boldsymbol{b}_k \psi_k,\;\;\;\boldsymbol{b}_k\geq 0.\] We have \[[\mathbb {P}\varphi](x)=\hat \varphi(x)+r=\sum_{k=1}^K \boldsymbol{\hat{b}}_k \psi_k+r,\] where $r\in {\cal G}$ is the residual term, which arises because ${\cal G}_{\cal D}$ is not invariant under the action of the P-F operator. The finite dimensional approximation of the P-F operator, $\bf P$, maps the coefficient vector $\boldsymbol{b}$ to $\boldsymbol{\hat{b}}$, i.e., $\boldsymbol{\hat{b}}={\bf P}\boldsymbol{b}$. We are interested in approximating the P-F operator such that the Markov property \ref{property}(f) of the infinite dimensional P-F operator is preserved. Since $[\mathbb {P}\varphi](x)\geq 0$, we require $\boldsymbol{\hat{b}}_k\geq 0$ for all $k$. Furthermore, for preserving the Markov property we require that \begin{eqnarray}\boldsymbol{b}^\top {\bf 1}=\boldsymbol{\hat{b}}^\top {\bf1},\label{markov}\end{eqnarray} where $\bf 1$ is the vector of all ones. 
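This mass-conservation constraint holds for every coefficient vector $\boldsymbol{b}$ exactly when the columns of $\bf P$ sum to one (${\bf P}^\top{\bf 1}={\bf 1}$), which a short numerical check makes concrete (the random matrix below is purely illustrative):

```python
import numpy as np

# b^T 1 = bhat^T 1 for all b, with bhat = P b, is equivalent to P^T 1 = 1,
# i.e., every column of P summing to one. Check on a random column-stochastic P:
rng = np.random.default_rng(0)
P = rng.random((4, 4))
P /= P.sum(axis=0, keepdims=True)   # normalise columns to sum to one
b = rng.random(4)
bhat = P @ b                        # total mass b.sum() is preserved
```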
Based on the adjoint property of the Koopman and P-F operators, we have \[\langle\mathbb{U}\phi,\varphi\rangle=\langle\phi,\mathbb{P}\varphi\rangle\] Writing $\varphi$ and $\phi$ as linear combinations of the basis functions and using the definition of the inner product from Assumption \ref{assumption_dic}, we can approximate the adjoint relationship as follows: \begin{eqnarray} \nonumber\langle\mathbb{U}\phi,\varphi\rangle\cong (\bf{K}\boldsymbol{a})^\top\Lambda\boldsymbol{b}&,& \langle\phi,\mathbb{P}\varphi\rangle\cong \boldsymbol{a}^\top\Lambda{\bf P}\boldsymbol{b}\\ \boldsymbol{a}^\top\bf{K}^\top\Lambda\boldsymbol{b}&=&\boldsymbol{a}^\top\Lambda\bf{P}\boldsymbol{b} \end{eqnarray} Since the above holds for all $\boldsymbol{a}$ and $\boldsymbol{b}$, we have $\bf{K}^\top\Lambda=\Lambda\bf{P}$. Combining (\ref{positive}), (\ref{markov}) and the adjoint property of the P-F and Koopman operators (i.e., ${\bf P}^\top=\Lambda{\bf K}\Lambda^{-1}$), it follows that for the finite dimensional approximation of the transfer operator to preserve the positivity and Markov properties of its infinite dimensional counterpart, $\bf K$ should satisfy the following conditions \[{[\Lambda{\bf K}\Lambda^{-1}]}_{ij}\geq 0,\;\;\;\sum_{j=1}^K {[\Lambda{\bf K}\Lambda^{-1}]}_{ij}=1,\;i,j=1,\ldots, K.\] This leads to the following optimization based formulation for the computation of the matrix $\bf K$ \begin{eqnarray}\label{optimization_problem} \min\limits_{\bf K} & \parallel {\bf G}{\bf K}-{\bf A}\parallel_F\\\nonumber \text{subject to} & {\bf K}_{ij} \geq 0\\\nonumber & [{\Lambda {\bf K}\Lambda^{-1}}]_{ij}\geq 0\\\nonumber & \Lambda{\bf K}\Lambda^{-1}\mathbbm{1} = \mathbbm{1} \end{eqnarray} where $\bf G$ and $\bf A$ are defined as follows: \begin{eqnarray}\label{edmd2} &&{\bf G}=\frac{1}{M}\sum_{m=1}^M \boldsymbol{\Psi}({x}_m)^\top \boldsymbol{\Psi}({x}_m)\nonumber\\ &&{\bf A}=\frac{1}{M}\sum_{m=1}^M \boldsymbol{\Psi}({x}_m)^\top \boldsymbol{\Psi}({y}_m), \end{eqnarray} with ${\bf K},{\bf G},{\bf
A}\in\mathbb{C}^{K\times K}$ and the data set snapshots $\{x_m,y_m\}$ as defined in (\ref{data}). The optimization problem (\ref{optimization_problem}) is convex and can be solved using any standard optimization toolbox for convex problems. It is important to emphasize that the matrix $\bf K$ serves two purposes: a) it approximates the Koopman operator when multiplying a column vector from the right; b) through the similarity transform ${\bf P} =\Lambda{\bf K}\Lambda^{-1}$, it approximates the P-F operator acting on a row vector from the left, \[{\rm Koopman \;operator:} \;\;v_{t+1}={\bf K}v_t\] \[{\rm P-F \;operator:}\;\;u_{t+1}=u_t{\bf P}\] where ${\bf P} =\Lambda{\bf K}\Lambda^{-1}$, $v_t\in \mathbb{R}^K$ is a column vector, $u_t\in \mathbb{R}^K$ is a row vector, and $t$ is the time index. Since $\bf P$ is row stochastic, it is guaranteed to have at least one eigenvalue equal to one. Let $\bar u_1$ be the left eigenvector of $\bf P$ with eigenvalue one. Then the approximation of the invariant density of the dynamical system $T$, i.e., $\bar \varphi_1(x)$, can be obtained using the following formula \[\bar \varphi_1(x)=\boldsymbol\Psi(x)\bar u_1^\top.\] More generally, the eigenfunction with eigenvalue $\lambda$ can be obtained as $\bar \varphi_{\lambda}=\boldsymbol\Psi(x)\bar u_{\lambda}^\top$, where $\bar u_{\lambda}$ is the left eigenvector of the matrix $\bf P$ with eigenvalue $\lambda$. We will refer to these eigenfunctions obtained using the left eigenvectors of the $\bf P$ matrix as P-F eigenfunctions. Similarly, approximate eigenfunctions of the Koopman operator can be obtained using the right eigenvectors of the $\bf K$ matrix. Let $\bar v_\lambda$ be the right eigenvector of the $\bf K$ matrix with eigenvalue $\lambda$; then the approximate Koopman eigenfunction $\bar \vartheta_\lambda$ is obtained as follows: \[\bar \vartheta_\lambda(x)=\boldsymbol{\Psi}(x)\bar v_\lambda.\] We show that NSDMD preserves the stability property of the original system, and this is one of the main advantages of the proposed algorithm. 
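The post-processing step, forming ${\bf P}=\Lambda{\bf K}\Lambda^{-1}$ and extracting the left eigenvector with eigenvalue one, can be sketched in numpy as follows; the toy choice $\Lambda = I$ and the hand-built $\bf K$ are purely illustrative:

```python
import numpy as np

def invariant_coefficients(K, Lam):
    """Form P = Lam K Lam^{-1} and return (P, u1), where u1 is the left
    eigenvector of P with eigenvalue closest to one, normalised to sum to one.
    The invariant density is then approximated by phi1(x) = Psi(x) @ u1."""
    P = Lam @ K @ np.linalg.inv(Lam)
    w, V = np.linalg.eig(P.T)               # left eigenvectors of P
    u1 = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    return P, u1 / u1.sum()

# Toy illustration: with Lam = I, P equals K, so pick K row stochastic with
# known stationary row vector (5/6, 1/6).
K = np.array([[0.9, 0.1],
              [0.5, 0.5]])
P, u1 = invariant_coefficients(K, np.eye(2))
```

The returned `u1` satisfies `u1 @ P == u1`, i.e., it is the fixed point of the row-vector iteration $u_{t+1}=u_t{\bf P}$.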
In particular, a stability certificate in the form of a Lyapunov measure can be computed using the $\bf K$ matrix. The Lyapunov measure was introduced in \cite{Vaidya_TAC} for almost everywhere stability verification of a general attractor set in nonlinear dynamical systems, and is computed using a transfer operator-based framework. \cite{Vaidya_TAC} utilized set-oriented numerical methods for the finite dimensional approximation of the P-F operator from the system dynamics. A data-driven approach for verifying the stability of an attractor set will instead make use of the matrix $\bf K$ for computing the Lyapunov measure. The procedure for calculating the Lyapunov measure remains the same; the only change is that instead of the P-F matrix constructed using set-oriented numerical methods, one uses the $\bf K$ matrix built from time series data. In the simulation section, we present results for the computation of this stability certificate. Different optimization problems can be formulated based on the main optimization formulation in Eq. (\ref{optimization_problem}). These formulations preserve one or all of the properties of the two operators. In particular, we have the following cases.\\ \textbf{Case I}: With the positivity constraint on ${\bf K}$ only \begin{eqnarray}\label{optproblem1} \min\limits_{\bf K} & \parallel {\bf G}{\bf K}-{\bf A}\parallel_F\\\nonumber \text{subject to} & {\bf K}_{ij} \geq 0\\\nonumber \end{eqnarray} \textbf{Case II}: With the positivity and Markov constraints on ${\bf P}$ only \begin{eqnarray}\label{optproblem2} \min\limits_{\bf K} & \parallel {\bf G}{\bf K}-{\bf A}\parallel_F\\\nonumber \text{subject to} & [{\Lambda {\bf K}\Lambda^{-1}}]_{ij}\geq 0\\\nonumber & \Lambda{\bf K}\Lambda^{-1}\mathbbm{1} = \mathbbm{1} \end{eqnarray} Both optimization formulations (\ref{optproblem1}) and (\ref{optproblem2}) are convex. 
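The paper solves these programs with an off-the-shelf convex solver (GUROBI); as a self-contained stand-in, Case I can also be handled with projected gradient descent, since projecting onto the constraint set ${\bf K}_{ij}\geq 0$ is just entrywise clipping. The step size $1/\lambda_{\max}({\bf G}^\top{\bf G})$ and the reuse of the linear-map example with dictionary $\{1,x,x^2\}$ are illustrative assumptions:

```python
import numpy as np

def nsdmd_case1(G, A, iters=5000):
    """Case I: min_K ||G K - A||_F subject to K_ij >= 0, solved by projected
    gradient descent; projection onto the nonnegative orthant is clipping."""
    K = np.zeros_like(A)
    step = 1.0 / np.linalg.norm(G, 2) ** 2   # 1 / Lipschitz const. of gradient
    for _ in range(iters):
        K = np.maximum(K - step * G.T @ (G @ K - A), 0.0)
    return K

# Illustrative data: linear map T(x) = 0.5 x with dictionary {1, x, x^2};
# the unconstrained optimum diag(1, 0.5, 0.25) is already nonnegative, so
# the constrained solution coincides with it.
xs = np.linspace(-1.0, 1.0, 20)
PX = np.array([[1.0, x, x * x] for x in xs])
PY = np.array([[1.0, 0.5 * x, 0.25 * x * x] for x in xs])
G, A = PX.T @ PX / 20, PX.T @ PY / 20
K = nsdmd_case1(G, A)
```

A dedicated convex solver remains preferable for Case II and Case III, where the coupled constraints on $\Lambda{\bf K}\Lambda^{-1}$ make the projection step nontrivial.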
{\bf Case III}: This case combines Case I and Case II; the corresponding optimization formulation is given in Eq. (\ref{optimization_problem}). \section{Simulation results}\label{section_simulation} The simulation results in this section are obtained by solving the optimization problems with the GUROBI solver called from MATLAB.\\ {\underline{\it 2D system}}: For this example we use the optimization formulation of {\bf Case I}. A simple 2D nonlinear system is considered first. The differential equations of the system are given as follows, \begin{eqnarray}\nonumber \dot x &=& x-x^3+y\\ \dot y &=& 2x-y\label{2d_system} \end{eqnarray} This continuous time system has two stable equilibrium points, located at $(\pm\sqrt{3},\pm2\sqrt{3})$, and one saddle point at the origin. To generate time-series data over $T=10$ time units, $1000$ initial conditions are randomly chosen from $[-5,5]\times[-5,5]$ and propagated using the ode23t solver in MATLAB, with a sampling time of $\Delta t=0.1$. The naturally structured dynamic mode decomposition (NSDMD) algorithm is then implemented with the Gurobi solver. The following simulation results are obtained with 500 dictionary functions and $\sigma=0.45$. In Fig. \ref{2D_koopman1} and Fig. \ref{2D_koopman2}, we plot the Koopman eigenfunctions associated with the first two dominant eigenvalues obtained using the NSDMD algorithm. The eigenfunction with eigenvalue one clearly separates the two domains of attraction, while the separatrix between them is captured by the eigenfunction with the second dominant eigenvalue. \begin{figure}[h!] \centering \includegraphics[width=.9\linewidth]{2D_koopman1} \caption{\small {CASE-I: Koopman eigenfunction for eigenvalue $1$ for system (\ref{2d_system}) using NSDMD}}\label{2D_koopman1} \end{figure} \begin{figure}[h!] 
\centering \includegraphics[width=.9\linewidth]{2D_koopman2} \caption{\small {CASE-I: Koopman eigenfunction for eigenvalue $0.97$ for system (\ref{2d_system}) using NSDMD}}\label{2D_koopman2} \end{figure} {\underline{\it Duffing Oscillator}}: The simulation results for this example are obtained using the formulation of {\bf Case I}. The Duffing oscillator is given by the following differential equation. \begin{eqnarray} \ddot x=-0.5 \dot x-(x^2-1)x \end{eqnarray} The time step for the continuous time system is chosen to be $\Delta t=0.25$ with a total time period of $T=2.5$ and $1000$ randomly chosen initial conditions. We solve the differential equation in MATLAB with the $ode45$ solver. We use $500$ Gaussian radial basis functions to form the dictionary set, with $\sigma=0.1$. In Fig. \ref{duffing_koopman1} and Fig. \ref{duffing_koopman2}, we plot the first two dominant eigenfunctions of the Koopman operator obtained using the NSDMD algorithm. Similar to the first example, we notice that the first two dominant Koopman eigenfunctions carry information about the domains of attraction of the two equilibrium points. \begin{figure}[h!] \centering \includegraphics[width=.9\linewidth]{duffing_koopman1} \caption{\small {CASE-I: Koopman eigenfunction for eigenvalue $1$ for Duffing oscillator using NSDMD}}\label{duffing_koopman1} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=.9\linewidth]{duffing_koopman2} \caption{\small {CASE-I: Koopman eigenfunction for eigenvalue $0.93$ for Duffing oscillator using NSDMD}}\label{duffing_koopman2} \end{figure} {\underline{\it Henon Map}}: Consider the following discrete-time system for the Henon map \begin{eqnarray} x_{t+1}&=&1-ax_t^2+y_t\nonumber\\ y_{t+1}&=&b x_t \end{eqnarray} with $a=1.4$ and $b=0.3$. Time series data starting from a single initial condition over $5000$ time steps is generated. The dictionary set is constructed using $500$ Gaussian radial basis functions. 
The $k$-means clustering method is used for selecting the centers of these Gaussian radial basis functions over the data set, with $\sigma=0.005$. In Fig. \ref{henon_pf1} we show the eigenfunction with eigenvalue one of the matrix $\bf P$, capturing the chaotic attractor of the Henon map. \begin{figure}[h!] \includegraphics[width=.9\linewidth]{henon_pf1} \centering \caption{\small {CASE-II: P-F eigenfunction for eigenvalue $1$ for Henon map using NSDMD}} \label{henon_pf1} \end{figure} {\underline{\it Van der Pol Oscillator}}: The next set of simulation results is obtained for the Van der Pol oscillator. \begin{eqnarray} \ddot x=(1-x^2)\dot x-x. \end{eqnarray} Time-domain simulations are performed using a discretization time-step of $\Delta t=0.1$ over a total time period of $T=10$. The differential equation is solved in MATLAB with the $ode45$ solver. Simulation results from $100$ different randomly chosen initial conditions are generated. For the dictionary set we choose $500$ dictionary functions, with their centers determined using the $k$-means clustering algorithm and $\sigma=0.1$. In Fig. \ref{vanderpol_pf1}, we show the P-F eigenfunction corresponding to eigenvalue one of the ${\bf P}$ matrix obtained using the NSDMD algorithm, capturing the limit cycle dynamics of the Van der Pol oscillator. \begin{figure}[h!] \includegraphics[width=.9\linewidth]{vanderpol_pf1} \centering \caption{\small {CASE-II: P-F eigenfunction for eigenvalue $1$ for Van der Pol oscillator using NSDMD}} \label{vanderpol_pf1} \end{figure} {\underline{\it Lorenz attractor}}: The simulation results for this example are obtained using the optimization formulation in {\bf Case III}. \begin{eqnarray} \dot{x} &=& a (y - x), \\\nonumber \dot{y} &=& x (b - z) - y, \\\nonumber \dot{z} &=& x y - c z. \end{eqnarray} where $a=10$, $b=28$ and $c=8/3$, the classical parameter values for which the system exhibits the chaotic Lorenz attractor. Time-domain simulations are performed using a discretization time-step of $\Delta t=0.02$ over a total time period of $T=100$. 
The differential equation is solved in MATLAB with the $ode45$ solver. Simulation results are generated from the single initial condition $(1,1,1)$. For the dictionary set we choose $500$ dictionary functions, with their centers determined using the $k$-means clustering algorithm and $\sigma=0.5$. In Fig. \ref{lorenz_pf11} and Fig. \ref{lorenz_pf12}, we show the first two dominant P-F eigenfunctions, corresponding to eigenvalues one and $0.9762$, of the ${\bf P}$ matrix obtained using the NSDMD algorithm. \begin{figure}[h!] \includegraphics[width=.9\linewidth]{lorenz_pf1} \centering \caption{\small {CASE-III: P-F eigenfunction for eigenvalue $1$ for Lorenz attractor using NSDMD}} \label{lorenz_pf11} \end{figure} \begin{figure}[h!] \includegraphics[width=.9\linewidth]{lorenz_pf2} \centering \caption{\small {CASE-III: P-F eigenfunction for eigenvalue $0.9762$ for Lorenz attractor using NSDMD}} \label{lorenz_pf12} \end{figure} \subsection{Stability Certificate: Lyapunov Measure from Data} We note that the finite dimensional Koopman matrix obtained using the DMD or EDMD algorithms is not guaranteed to be stable. For example, for the Van der Pol oscillator, the largest eigenvalue of the $\bf K$ matrix obtained using EDMD is $\lambda=1.001$, and the matrix is hence unstable. In contrast, the matrix $\bf K$ obtained using the NSDMD algorithm is guaranteed to be stable. In fact, a stability certificate in the form of a Lyapunov measure can be computed using the procedure outlined in \cite{Vaidya_TAC}. This stability certificate provides information about the relative amount of time system trajectories spend in different regions of the state space before being absorbed into the attractor set. In Fig. \ref{lya}, we show the plot of the Lyapunov measure for the Van der Pol oscillator example. \begin{figure} \includegraphics[width=.8\linewidth]{Lya_Meas} \centering \vspace{-0.1in} \caption{\small{Lyapunov Measure for Van der Pol Oscillator}} \label{lya} 
\end{figure} \section{Conclusions}\label{section_conclusion} We have provided a new algorithm for computing Koopman and P-F eigenfunctions from time series data. The proposed algorithm ensures that important properties of the infinite dimensional transfer operators, such as positivity and the Markov property, are preserved in the finite-dimensional approximation. We show via simulation examples that the proposed algorithm provides a better approximation of the steady-state dynamics in terms of the eigenfunctions and eigenvalues of the transfer operators. Furthermore, we demonstrate that preserving the positivity property in the finite dimensional approximation is essential to capture the true transient dynamics of the operators. \bibliographystyle{ieeetr}
\section*{Highlights} \begin{itemize} \item Privacy-preserving multi-agent reinforcement learning is used to coordinate residential energy \item Learning from optimisations improves coordination scalability in stochastic environments \item Marginal reward signals further enhance cooperation relative to previous approaches \item The curse of dimensionality is mitigated by the use of fixed-size Q-tables \item Case studies with large real-life datasets yield 33.7\% local and global cost reductions \end{itemize} \begin{multicols}{2} \section{Introduction} This paper addresses the scalability issue of distributed domestic energy flexibility coordination in a cost-efficient and privacy-preserving manner. A novel class of coordination strategies using optimisation-based multi-agent reinforcement learning (MARL\footnote{A full nomenclature is available in \Cref{app:nomenclature}}) with fixed Q-table size is proposed for household-level decision-making, tackling the challenge of scalability for simultaneously learning independent agents under partial observability in a stochastic environment \citep{Matignon2012}. Multiple versions of the novel strategy are assessed to maximise the statistical expectation of system-wide benefits, including local battery costs, grid costs and greenhouse gas emissions. Widespread electrification of primary energy provision and decarbonisation of the power sector are two vital prerequisites for limiting anthropogenic global warming to 1.5$^\circ$C above pre-industrial levels. To reduce risks of climate-related impacts on health, livelihood, security and economic growth, intermittent renewable power supplies could be required to supply 70\% to 85\% of electricity by 2050 \citep{IPCC2015}. However, this poses the challenges of the intermittency and limited controllability of resources \citep{Bose2019}. Therefore, a robust, decarbonised power system will rely on two structural features: decentralisation and demand response (DR) \citep{Leautier2019}. 
The coordination of distributed flexible energy resources can help reduce costs for transmission, storage, peaking plants and capacity reserves, improve grid stability, align demand with decarbonised energy provision, promote energy independence and security, and lower household energy bills \citep{Vazquez-Canteli2019, Pumphrey2020}. Residential sites constitute a significant share of potential DR, representing for example 38.5\% of the 2019 UK electricity demand, and 56.4\% of energy consumption if including transport and heat, which are both undergoing electrification \citep{BEIS2021}. Increasing ownership of EVs and PV panels has been facilitated by regulatory changes, with many countries committing to internal combustion car phase-outs in the near future, and by plummeting costs, with an 82\% and 87\% levelised cost drop between 2010 and 2019 for EVs and PV panels \citep{Agency2018,BloomberNEF2019}. This potential is so far underexploited, as DR primarily focuses on larger well-known industrial and commercial actors that require less coordination and data management \citep{CharlesRiverAssociates2017}, with most customers still limited to trade with utility companies \citep{Chen2019}. The primary hurdles to unlocking residential flexibility are the high capital cost of communication and control infrastructure as the domestic potential is highly fragmented \citep{Leautier2019}, concerns about privacy and hindrance of activities \citep{Bugden2019,Pumphrey2020}, and computational challenges for real-time control at scale \citep{Moret2019}. Traditionally, convex optimisation would be used to maximise global coordination objectives in convex problems with variables known ahead of time. Techniques such as least-squares and linear programming have been well-studied for over a century \citep{Boyd2009}. However, residential energy coordination presents challenges to its application. 
Firstly, centralised optimisations are hindered by privacy, acceptance and communication constraints, and present exponential time complexity at the scale of millions of homes \citep{Dasgupta2016}. Secondly, standard optimisation methods cannot be used without full knowledge of the system's inputs and dynamics \citep{Recht2018}. In residential energy, agents only have partial observability of the system, due both to the stochasticity and uncertainty of environment variables such as individual residential consumption and generation profiles, and to the privacy and infrastructure cost constraints that hinder communication between agents during implementation \citep{FrancoisLavet2017}. Not relying on shared information may also improve the robustness of solutions to the failure of other agents, communication delays and unreliable information, and improve adaptability to changing environments \citep{Sen1994}. Finally, the real-life complex electricity grid environment may not be amenable to a convex model representation. Due to the heterogeneity of users and behaviours, which require different parameters and models, the large-scale use of model-based controllers is cumbersome \citep{Ruelens2017}. A model-free approach instead avoids modelling non-trivial interactions of parameters, including private information \citep{Dasgupta2016}. 
Given these challenges to residential energy flexibility coordination, and the specific constraints of the problem at play which render traditional approaches unsuitable, we seek to develop a novel coordination mechanism which satisfies the following criteria, as tested in real-life scenarios: \begin{itemize} \item Computational scalability: minimal and constant computation burden during implementation as the system size increases; \item Performance scalability: no drop in coordination performance as the system size increases, measured in savings obtained per hour and per agent; \item Acceptability: local control of appliances, no communication of personal data, thermal discomfort, or hindrance/delay of activities. \end{itemize} The rest of this paper is organised as follows. In \Cref{sec:gapanalysis} we motivate the novel MARL approach with a literature review and a gap analysis. In \Cref{system}, a system model is presented that includes household-level modelling of EVs, space heating, flexible loads and PV generation. \Cref{RLSection} lays out the MARL methodology, with various methodological options for independent agents to learn to cooperate. In \Cref{data}, the input data used to populate the model is presented. In \Cref{results}, the performance of different MARL strategies is compared to lower and upper bounds in case studies. Finally, we conclude in \Cref{conclusion}. \section{MARL-based energy coordination: literature review and gap analysis}\label{sec:gapanalysis} Reinforcement learning (RL) can overcome the constraints faced by centralised convex optimisation for residential energy coordination, by allowing for decentralised and model-free decision-making based on partial knowledge. RL is an artificial intelligence (AI) framework for goal-oriented agents\footnote{Here agents are independent computer systems acting on behalf of prosumers \citep{Wooldridge2002}. 
Prosumers are proactive consumers with distributed energy resources actively managing their consumption, production and storage of energy \citep{Morstyn2018_Federated}. } to learn sequential decision-making by interacting with an uncertain environment \citep{Sutton1998}. As an increasing wealth of data is collected in local electricity systems, RL is of growing interest for the real-time coordination of distributed energy resources (DERs) \citep{Antonopoulos2020,Vazquez-Canteli2019}. Instead of optimising based on inherently uncertain data, RL more realistically searches for statistically optimal sequential decisions given partial observation and uncertainty, with no \emph{a priori} knowledge \citep{Recht2018}. Approximate learning methods may be more computationally efficient in exploring high-dimensional state spaces, and therefore more scalable than exact global optimisation with its exponential time complexity \citep{Schellenberg2020, Dasgupta2016}. As classified in \citep{CharbonnierReview}, numerous RL-based coordination methods have been proposed in the literature for residential energy coordination, though with remaining limitations in terms of scalability and privacy protection. On the one hand, in RL-based direct control strategies, a central controller directly controls individual units, and households directly forfeit their data and control to a central RL-based scheduler \citep{ONeill2010}. While most existing AI-based DR research thus assumes fully observable tasks \citep{Antonopoulos2020}, direct controllability of resources from different owners, with different objectives and resources and subject to privacy, comfort and security concerns, is challenging \citep{Darby2020}. Moreover, centralised policies do not scale due to the curse of dimensionality, as the state and action spaces grow exponentially with the system size \citep{Powell2011}. 
On the other hand, RL-based indirect control strategies consider decision-making at the prosumer level, entering the realm of MARL. This can be achieved using different communication structures, with either centralised, bilateral, or no sharing of personal information, as presented below. Firstly, agents may share information with a central entity, which in turn broadcasts signals based on a complete picture of the coordination problem. For example, the central entity may send unidirectional price signals to customers based on information such as prosumers' costs, constraints and day-ahead forecasts. RL can inform both the dynamic price signal \citep{Lu2019, Kim2016} and the prosumer response to price signals \citep{Kim2016,Babar2018}. The central entity may also collect competitive bids, set trades and match prosumers centrally, where RL algorithms are used to refine individual bidding strategies \citep{Vaya2014, Ye2020, Dauer2013,Sun2015,Kim2020} or to dictate the auction market clearing \citep{Chen2019,Claessens2013}. Units may also use RL to cooperate towards common objectives with the mediation of a central entity that redistributes centralised personal information \citep{Zhang2017,Dusparic2015,Dusparic2013,Hurtado2018}. However, information centralisation raises cost, security, privacy and computational scalability issues. Biased information may lead to inefficient or even infeasible decisions \citep{Morstyn2020_P2P}. Secondly, RL-based coordination has been proposed where prosumers only communicate information bilaterally, without a central authority. For example, in \citep{Taylor2014} agents use transfer learning with distributed W-learning to achieve local and system objectives. Bilateral peer-to-peer communication offers autonomy and expression of individual preferences, though with remaining risks around privacy and bounded rationality \citep{Herbert1982}.
There is greater robustness to communication failures compared to situations with a single point of failure. However, as the system size increases, the number of communication iterations until algorithmic convergence increases, requiring adequate computational resources and limited communication network latency for feasibility \citep{Guerrero2020}. The safe implementation of distributed transactions to ensure data protection is an ongoing subject of research \citep{CharbonnierReview}. Finally, in RL-based implicit coordination strategies, prosumers rely solely on local information to make decisions. For example, in \citep{Cao2019, Yang2019}, competitive agents in isolation maximise their profits in RL-based energy arbitrage, though they do not consider the impacts of individual actions on the rest of the system, with potential negative impacts for the grid. A concern is that if all loads receive the same incentive, the natural diversity on which the grid relies may be diminished \citep{Crozier2018_Mitigating}, and the peak merely displaced, with overloads on upstream transformers. Implicit cooperation, which keeps personal information at the local level while encouraging cooperation towards global objectives, has thus far been under-researched beyond frequency control. In \citep{Rozada2020}, agents learn the optimal way of acting and interacting with the environment to restore frequency using local information only. This is a promising approach for decentralised control. However, its applicability to more complex scenarios, such as residential electric vehicle and smart heating load scheduling problems, has not been considered. Moreover, convergence slows down as the number of agents increases, and scalability beyond 8 agents has not been investigated.
Indeed, fundamental challenges to the coordination of simultaneously learning independent agents at scale under partial observability in a stochastic environment have been identified when using traditional RL algorithms [1]: independent learners may reach individual policy equilibria that are incompatible with a global Pareto optimum; the non-stationarity of the environment due to other concurrently learning agents affects convergence; and the stochasticity of the environment prevents agents from discriminating between their own contribution to global rewards and noise from other agents or the environment. Novel methods are therefore needed to develop this approach. We seek to bridge this gap, using implicit coordination to unlock the so-far largely untapped value from residential energy flexibility to provide both individual and system benefits. We propose a new class of MARL-based implicit cooperation strategies for residential DR, to make the best use of the flexibility offered by increasingly accessible assets such as photovoltaic (PV) panels, electric vehicle (EV) batteries, smart heating and flexible loads. Agents learn RL policies using a data-based, model-free statistical approach by exploring a shared environment and interacting with decentralised partially observable Markov decision processes (Dec-POMDPs), either through random exploration or learning from convex optimisation results. In the first rehearsal phase \citep{Kraemer2016}, with full understanding of the system, they learn to cooperate to reach system-wide benefits by assessing the global impact of their individual actions, searching for trade-offs between local, grid and social objectives. The pre-learned policies are then used to make decisions under uncertainty given limited local information only. This approach satisfies the computational scalability, performance scalability and acceptability criteria set out in this paper.
Firstly, the real-time control method is computationally scalable thanks to fixed-size Q-tables, which avoid the curse of dimensionality, and only minimal, constant local computation is required to implement the pre-learned policies. No further communication is required during implementation. This increases robustness to communication issues and data inaccuracy relative to approaches relying on centralised or bilateral communication, and cuts the costs of household computation and two-way communication infrastructure. Secondly, we address the outstanding MARL coordination performance scalability issue for agents with partial observability in a stochastic environment seeking to maximise rewards which also depend on other concurrently learning agents \citep{Busoniu2008,Matignon2012}. The case studies in this paper show that allowing agents to learn from omniscient, stable, and consistent optimisation solutions can successfully act as an equilibrium-selection mechanism, while the use of marginal rewards improves learnability\footnote{\say{the sensitivity of an agent's utility to its own actions as opposed to actions of others, which is often low in fully cooperative Markov games} \citep{Matignon2012}} by isolating individual contributions to global rewards. This novel methodological combination offers significant improvements on MARL scalability and convergence issues, with high coordination performance maintained as the number of agents increases, where that of standard MARL drops at scale. Finally, this method tackles acceptability issues, with no interference in personal comfort nor communication of personal data.
The specific novel contributions of this paper are (a) a novel class of decentralised flexibility coordination strategies, MARL-based implicit cooperation, with no communication and fixed-size Q-tables to mitigate the curse of dimensionality; (b) a novel MARL exploration strategy for agents under partial observability to learn from omniscient, convex optimisations prior to implementation for convergence to robust cooperation at scale; and (c) the design and testing with large banks of real-world data of combinations of reward definitions, exploration strategies and multi-agent learning frameworks for assessing individual impacts on global energy, grid and storage costs. Methodologies are identified which outperform a baseline with increasing numbers of agents despite uncertainty. \section{Local system description}\label{system} \begin{figure*}[!t] \begin{center} \includegraphics[width=0.7\linewidth]{energybalance.pdf} \end{center} \caption{Local system model. Red dotted lines denote energy balances.} \label{fig:EnergyBalance} \end{figure*} In this section, the variables, objective function and constraints of the problem are described. This sets the frame for the application of the RL algorithms presented in \Cref{RLSection}. \subsection{Variables}\label{variables} We consider a set of time steps $t \in \mathcal{T} = \{t_0,...,t_\textrm{end}\}$ and a set of prosumers $i \in \mathcal{P} = \{1,...,n\}$. Decision variables are \emph{italicised} and input data are written in roman. Energy units are used unless specified otherwise. Participants have an EV, a PV panel, electric space heating and generic flexible loads. 
The EV at-home availability $\upmu_i^t$ (1 if available, 0 otherwise), EV demand for required trips $\textrm{d}_{\textrm{EV},i}^t$, household electric demand $\textrm{d}_i^{t}$, PV production $\textrm{p}_{\textrm{PV},i}^t$, external temperature $\textrm{T}_{\textrm{e}}^t$ and solar heat flow rate $\upphi^t$ are specified as inputs for $t \in \mathcal{T}$ and $i \in \mathcal{P}$. The local decisions by prosumers are the energy flows in and out of the battery $b_{\textrm{in},i}^t$ and $b_{\textrm{out},i}^t$, the electric heating consumption $h_i^t$ and the prosumer consumption $c_i^t$. These have both local and system impacts (\Cref{fig:EnergyBalance}). Local impacts include battery energy levels $E_i^t$, losses $\epsilon_{\textrm{ch},i}^t$ and $\epsilon_{\textrm{dis},i}^t$, prosumer import $p_i^t$, building mass temperature $T_{\textrm{m},i}^t$ and indoor air temperature $T_{\textrm{air},i}^t$. System impacts arise through the costs of total grid import $g^t$ and distribution network trading. Distribution network losses and reactive power flows are not included. \subsection{Objective function}\label{objfunc} Prosumers cooperate to minimise system costs consisting of grid ($c_\textrm{g}^t$), distribution ($c_\textrm{d}^t$) and storage ($c_\textrm{s}^t$) costs. This objective function will be maximised both in convex optimisations off-line -- to provide an upper bound for the achievable objective function, and in some cases to provide information to the learners during the simulated learning phase -- and in the learning of MARL policies for decentralised online implementation. 
\begin{equation} \max F = \sum_{\forall t \in \mathcal{T}}{\hat{F}_t} = \sum_{\forall t \in \mathcal{T}}{- (c_\textrm{g}^t + c_\textrm{d}^t + c_\textrm{s}^t )} \end{equation} \begin{equation} c_\textrm{g}^t = \textrm{C}_\textrm{g}^t \left( g^t + \epsilon_g \right) \end{equation} where losses incurred by imports and exports from and to the main grid are approximated as \begin{equation} \epsilon_g = \frac{\textrm{R}}{\textrm{V}^2}\left(g^t\right)^2 \end{equation} The grid cost coefficient $\textrm{C}_\textrm{g}^t$ is the sum of the grid electricity price and the product of the carbon intensity of the generation mix at time $t$ and the Social Cost of Carbon, which reflects the long-term societal cost of emitting greenhouse gases \citep{ParryM}. The impacts of local decisions on upstream energy prices are neglected. Grid losses are approximated using the nominal root mean square grid voltage $\textrm{V}$ and the average resistance between the main grid and the distribution network $\textrm{R}$ \citep{Multiclass}, based on the assumption of small network voltage drops and relatively low reactive power flows \citep{Coffrin2012}. The second-order dependency disincentivises large power imports and exports, which helps ensure that interactions between transmission and distribution networks do not reduce system stability. \begin{equation} c_\textrm{d}^t = \textrm{C}_\textrm{d}\sum_{i \in \mathcal{P}}{\max\left(- p_i^t,0\right)} \end{equation} \noindent Distribution costs $c_\textrm{d}^t$ are proportional to the distribution charge $\textrm{C}_\textrm{d}$ on exports. The resulting price spread between individual imports and exports decreases the risk of network constraint violations by incentivising the use of local flexibility first \citep{Morstyn2020_IntegratingP2P}. Distribution network losses due to power flows between prosumers are neglected, so there is no second-order dependency.
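The grid and distribution cost terms above can be sketched numerically as follows. The parameter values below (grid cost coefficient, distribution charge, $\textrm{R}$, $\textrm{V}$) are illustrative assumptions, not the calibrated inputs used in the case studies.

```python
# Sketch of the per-time-step grid and distribution cost terms.
# All parameter values are illustrative assumptions.

def grid_cost(g_t, C_g=0.2, R=0.1, V=240.0):
    """c_g^t = C_g^t * (g^t + eps_g), with grid losses
    approximated as eps_g = (R / V^2) * (g^t)^2."""
    eps_g = (R / V**2) * g_t**2
    return C_g * (g_t + eps_g)

def distribution_cost(p, C_d=0.05):
    """c_d^t = C_d * sum_i max(-p_i^t, 0): a distribution charge
    levied on prosumer exports only."""
    return C_d * sum(max(-p_i, 0.0) for p_i in p)
```

The quadratic loss term makes large imports and exports disproportionately costly, which is the disincentive against large power exchanges discussed above.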
\begin{equation} c_\textrm{s}^t = \textrm{C}_\textrm{s}\sum_{i \in \mathcal{P}}{\left(b_{\textrm{in},i}^t + b_{\textrm{out},i}^t\right)} \end{equation} \noindent Storage battery depreciation costs $c_\textrm{s}^t$ are taken as proportional to throughput via the depreciation coefficient $\textrm{C}_\textrm{s}$, assuming a uniform energy throughput degradation rate \citep{DufoLopez2014}. \subsection{Constraints} Let $\textrm{E}_0$, $\underline{\textrm{E}}$ and $\overline{\textrm{E}}$ be the initial, minimum and maximum battery energy levels, $\upeta_\textrm{ch}$ and $\upeta_\textrm{dis}$ the charge and discharge efficiencies, and $\overline{\textrm{b}_\textrm{in}}$ the maximum charge per time step. Demand $\textrm{d}_{i,k}^{t_\textrm{D}}$ is met by the sum of loads consumed $\hat{c}_{i,k,t_\textrm{C},t_\textrm{D}}$ at time $t_\textrm{C}$ by prosumer $i$ for load of type $k$ (fixed or flexible) demanded at $t_\textrm{D}$. The flexibility boolean $\textrm{f}_{i,k,t_\textrm{C},t_\textrm{D}}$ indicates if time $t_\textrm{C}$ lies within the acceptable range to meet $\textrm{d}_{i,k}^{t_\textrm{D}}$. A Crank--Nicolson scheme \citep{ISO2007} is employed to model heating, with $\upkappa$ a $2 \times 5$ matrix of temperature coefficients, and $\underline{\textrm{T}}_i^t$ and $\overline{\textrm{T}}_i^t$ lower and upper temperature bounds.
System constraints for steps $\forall \ t \in \mathcal{T}$ and prosumers $\forall \ i \in \mathcal{P}$ are: \begin{itemize} \item Prosumer and substation energy balance (see \Cref{fig:EnergyBalance}) \begin{equation} p_i^t = c_i^t + h_i^t + \frac{b_{\textrm{in},i}^t}{\upeta_\textrm{ch}} - {\upeta_\textrm{dis}} b_{\textrm{out},i}^t - \textrm{p}_{\textrm{PV},i}^t \end{equation} \begin{equation} \sum_{i \in \mathcal{P}}{p_i^t} = g^t \end{equation} \item Battery energy balance \begin{equation} E_i^{t+1} = E_i^t + b_{\textrm{in},i}^t - b_{\textrm{out},i}^t - \textrm{d}_{\textrm{EV},i}^t \end{equation} \item Battery charge and discharge constraints \begin{equation} \textrm{E}_0 = E_i^{t_0} = E_i^{t_\textrm{end}} + b_{\textrm{in},i}^{t_{\textrm{end}}} - b_{\textrm{out},i}^{t_{\textrm{end}}} - \textrm{d}_{\textrm{EV},i}^{t_{\textrm{end}}} \end{equation} \begin{equation} \upmu_i^t\underline{\textrm{E}}_i \leq E_i^t \leq \overline{\textrm{E}}_i \end{equation} \begin{equation} b_{\textrm{in},i}^t \leq \upmu_i^t \overline{\textrm{b}_\textrm{in}} \end{equation} \begin{equation} b_{\textrm{out},i}^t \leq \upmu_i^t \overline{\textrm{E}}_i \end{equation} \item Consumption flexibility --- the demand of type $k$ at time $t_\textrm{D}$ by prosumer $i$ must be met by the sum of partial consumptions $\hat{c}_{i,k,t_\textrm{C},t_\textrm{D}}$ at times $t_\textrm{C}...t_\textrm{C}+\textrm{n}_\textrm{flex}$ within the time frame $\textrm{n}_\textrm{flex}$ specified by the flexibility of each type of demand in matrix $\textrm{f}_{i,k,t_\textrm{C},t_\textrm{D}}$ \begin{equation}\label{eq:demandmet} \sum_{t_\textrm{C}\in\mathcal{T}}{\hat{c}_{i,k,t_\textrm{C},t_\textrm{D}} \textrm{f}_{i,k,t_\textrm{C},t_\textrm{D}}} = \textrm{d}_{i,k}^{t_\textrm{D}} \end{equation} \item Consumption --- the total consumption at time $t_\textrm{C}$ is the sum of all partial consumptions $\hat{c}_{i,k,t_\textrm{C},t_\textrm{D}}$ meeting parts of demands from current and previous time steps $t_\textrm{D}$: 
\begin{equation}\label{eq:totalcons} \sum_{t_\textrm{D}\in\mathcal{N}}{\hat{c}_{i,k,t_\textrm{C},t_\textrm{D}}}= c_{i,k}^{t_\textrm{C}} \end{equation} \item Heating --- the workings to obtain this equation are included in \Cref{app:heating}: \begin{equation}\label{eq:main_heating} \begin{bmatrix} T_{\textrm{m},i}^{t+1}\\ T_{\textrm{air},i}^{t+1} \end{bmatrix} = \upkappa \begin{bmatrix} 1, T_{\textrm{m},i}^{t}, \textrm{T}_{\textrm{e}}^t, \upphi^t, h_i^t \end{bmatrix}^\intercal \end{equation} \begin{equation} \underline{\textrm{T}}_i^t \leq T_{\textrm{air},i}^t \leq \overline{\textrm{T}}_i^t \end{equation} \item Non-negativity constraints \begin{equation} c_i^t, h_i^t,E_i^t, b_{\textrm{in},i}^t, b_{\textrm{out},i}^t, \hat{c}_{i,l,t_\textrm{C},t_\textrm{D}} \geq 0 \end{equation} \end{itemize} While the proposed framework could accommodate the use of idiosyncratic satisfaction functions to perform trade-offs between flexibility use and users' comfort, no such trade-offs are considered in this paper, with comfort requirements for temperature and EV usage always being met. Field evaluations have shown that programmes that do not maintain thermal comfort are consistently overridden, increasing overall energy use and costs \citep{Sachs2012}, while interference in consumption patterns and temperature set-points cause dissatisfaction \citep{Vazquez-Canteli2019}. Meeting fixed domestic loads, ensuring sufficient charge for EV trips, and maintaining comfortable temperatures are therefore set constraints. \section{Reinforcement learning methodology}\label{RLSection} The MARL approach is now presented in which independent prosumers learn to make individual decisions which together maximise the statistical expectation of the objective function in \Cref{system}. 
At time step $t \in \mathcal{T}$, each agent is in a state $s_i^t \in \mathcal{S}$ corresponding to accessible observations (here the time-varying grid cost), and selects an action $a_i^t \in \mathcal{A}$ as defined in \Cref{sec:agentdecision}. This action dictates the decision variables in \Cref{variables}: $b_{\textrm{in},i}^t$, $b_{\textrm{out},i}^t$, $h_i^t$ and $c_i^t$. The environment then produces a reward $r^t \in \mathcal{R}$ which corresponds to the share $\hat{F}_t$ of the system objective function presented in \Cref{objfunc}, and agents transition to a state $s_i^{t+1}$. Agents learn individual policies $\pi_i$ by interacting with the environment using individual, decentralised fixed-size Q-tables. We first introduce the Q-learning methodology. Then, the mapping between the RL agent action and the decision variables in \Cref{variables} is presented. Finally, we propose variations on the learning method, with different experience sources, multi-agent structures and reward definitions. \subsection{Q-Learning} While any reinforcement learning methodology could be used with the framework proposed in this paper, here we focus on Q-learning, a model-free, off-policy RL methodology. Its simplicity and proof of convergence make it suited to developing novel learning methodologies in newly defined environments \citep{Vazquez-Canteli2019}. State-action values $Q(s,a)$ represent the expected value of all future rewards $r_t$ $\forall \ t \in \mathcal{T}$ when taking action $a$ in state $s$ according to policy $\pi$: \begin{equation} Q(s,a) \triangleq E^{\pi}{[r_{t} + \gamma r_{t+1} + \gamma^2r_{t+2}...|s_t = s, a_t = a ]} \end{equation} \noindent where $\gamma$ is the discount factor setting the relative importance of future rewards.
Estimates are refined incrementally as \begin{equation} \hat{Q}(s,a)\leftarrow \hat{Q}(s,a) + \alpha\delta \end{equation} where $\delta$ is the temporal-difference error, \begin{equation} \delta = \left(r_t +\gamma \hat{V}(s^\textrm{next})-\hat{Q}(s,a)\right) \end{equation} $\hat{V}$ is the state-value function estimate, \begin{equation} \hat{V}(s) = \max_{a^* \in \mathcal{A}(s)}{\hat{Q}(s,a^*)} \end{equation} and $\alpha$ is the learning rate. In this work we use hysteretic learners, i.e. chiefly optimistic learners that use an increase rate greater than the decrease rate in order to reduce oscillations in the learned policy caused by actions chosen by other agents \citep{Matignon2012, Matignon2007}. For $\beta < 1$: \begin{equation} \alpha = \begin{cases} \alpha_0 & \text{if $\delta > 0$}\\ \alpha_0\beta & \text{otherwise}\\ \end{cases} \end{equation} Agents follow an $\epsilon$-greedy policy to balance exploration of different state-action pairs and knowledge exploitation. The greedy action with the highest estimated reward is selected with probability $1-\epsilon$, and a random action otherwise. \begin{equation} \label{eqn:greedy} a^* = \begin{cases} \argmax_{\ a^* \in \mathcal{A}}\hat{Q}(s,a^*) & \text{if $x \sim U(0,1) > \epsilon$}\\ a \sim p(a) = \frac{1}{|\mathcal{A}|} \ \forall \ a \ \in \mathcal{A}& \text{otherwise}\\ \end{cases} \end{equation} Henceforth, we refer to the estimates $\hat{Q}$ and $\hat{V}$ as $Q$ and $V$ to lighten notation. \subsection{Agent state} The agent state is defined by the time-dependent grid cost coefficient $\textrm{C}_\textrm{g}^t$, i.e. the sum of the grid electricity price and the product of the carbon intensity of the generation mix at time $t$ and the social cost of carbon. To convert the RL policy action into local decisions, the agent also requires information on their current PV generation, battery level, flexible loads and indoor air temperature, as described below in \Cref{sec:agentdecision}.
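The hysteretic Q-update and $\epsilon$-greedy selection above can be sketched as follows, using a plain dictionary as a fixed-size Q-table; the state and action encodings are illustrative.

```python
import random

def hysteretic_update(Q, s, a, r, s_next, actions,
                      alpha0=0.1, beta=0.5, gamma=0.9):
    """Hysteretic Q-learning: the temporal-difference error is applied
    with rate alpha0 when positive and alpha0 * beta (beta < 1) when
    negative, damping oscillations caused by other learning agents."""
    V_next = max(Q[(s_next, a_)] for a_ in actions)  # state-value estimate
    delta = r + gamma * V_next - Q[(s, a)]           # TD error
    alpha = alpha0 if delta > 0 else alpha0 * beta   # hysteretic learning rate
    Q[(s, a)] += alpha * delta

def epsilon_greedy(Q, s, actions, eps=0.1, rng=random):
    """Select the greedy action with probability 1 - eps,
    a uniformly random action otherwise."""
    if rng.random() > eps:
        return max(actions, key=lambda a_: Q[(s, a_)])
    return rng.choice(actions)
```

With `beta = 1` this reduces to standard Q-learning; values of `beta` below one make the learner chiefly optimistic, as described above.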
\subsection{Agent action}\label{sec:agentdecision} \begin{figure*}[!t] \begin{center} \includegraphics[width=0.7\textwidth]{inkscape_mu_4.pdf} \end{center} \caption{Decision variable $\psi$. Sections 1-5 denote the trade-off regimes described in \Cref{sec:agentdecision}. At each step, the fixed requirements for loads, heat and upcoming EV trips are first met. The $\psi$ decision then applies to the remaining flexibility, from maximal energy exports (full use of flexibility) at $\psi = 0$, to maximal energy imports (no use of flexibility) at $\psi = 1$. $\textrm{d}_\textrm{tot}$ and $\textrm{d}_\textrm{fixed}$ are the sum of household and heating loads with and without their flexible component. If fixed loads cannot be fully met by PV energy, the residual is met by storage and imports (2). If there is additional PV energy after meeting all loads, it can be stored or exported (4).} \label{fig:mu} \end{figure*} Large action spaces compound the curse of dimensionality in Q-learning and waste exploration resources \citep{Powell2011}. At each time step, the decision variables in \Cref{system} controlling the flows in and out of the battery, $b_{\textrm{in},i}^t$ and $b_{\textrm{out},i}^t$, the electric heating consumption $h_i^t$ and the prosumer consumption $c_i^t$ for household $i$, are therefore synthesised into a single variable $\psi \in [0,1]$ controlling the use of available local flexibility. \Cref{fig:mu} shows how consumption (for domestic loads and heat), imports and storage change with $\psi$. At each step, the fixed requirements for loads, heat and upcoming EV trips are first met. The $\psi$ decision then applies to the remaining flexibility. In conditions deemed optimal for energy exports, $\psi = 0$: all initial storage and residual PV generation is exported and flexible loads are delayed.
At the other end, a \emph{passive} agent does not utilise its flexibility and uses the \emph{default} action $\psi = 1$, maximising imports, with EVs charged when plugged in and no flexible loads delayed. Intermediate import trade-offs are mapped in \Cref{fig:mu}: \begin{enumerate} \item From exporting all to none of the initial storage $E_i^t$ \item From meeting fixed loads $\textrm{d}_{i,\textrm{fixed}}^t$ with the energy stored to importing the required amount \item From no to maximum flexible consumption $\textrm{d}_{i,\textrm{tot}}^t$ \item From exporting to storing PV energy $\textrm{p}_{\textrm{PV},i}^t$ remaining after meeting loads \item From importing no additional energy to filling up the battery to capacity $\overline{\textrm{E}}_i$ \end{enumerate} Costlier actions, which incur battery depreciation, losses and export costs, lie towards either extreme of $\psi$ and are only used in highly beneficial situations (convex local costs function in the lower plot of \Cref{fig:mu}). Ranking actions consistently ensures agents do not waste resources trialling sub-optimal combinations of decisions. For example, it is more cost-efficient to first absorb energy imports by consuming flexible loads, and only use the battery (incurring costs) if imports are large. Note that although this action space is continuous, it can be discretised into intervals for implementation in Q-learning. \subsection{Variations of the learning method}\label{methodologies} Different experience sources, reward definitions and MARL structures are proposed within the MARL approach. The performance of these combinations of algorithmic possibilities will be assessed in \Cref{results} to inform effective model design.\\ \subsubsection{Experience sources} In data-driven strategies, the learning is determined by the collected experience. \begin{itemize} \item \textbf{Environment exploration}. Traditionally, agents collect experience by interacting with an environment \citep{Sutton1998}.
\item \textbf{Optimisations}. A novel approach collects experience from optimisations. Learning from entities with more knowledge, or which use knowledge more effectively than randomly exploring agents, has previously been proposed, as with agents \say{mimicking} humans playing video games \citep{Grandmaster}. Similarly, agents learn from convex \say{omniscient} optimisations on historical data with perfect knowledge of current and future variables. This experience is then used for stable coordination between prosumers at scale under partial observability and decentralised control. Note in this case that, although the MARL learning and implementation are model-free, a model of the system is used to run the convex optimisation and produce experience to learn from. A standard convex optimiser uses the same data that would be used to populate the environment explorations, but solves over the whole day-horizon with perfect knowledge of all variables using the problem description in \Cref{system}. Then, at each time step, the system variables are translated into equivalent RL $\{s_t,a_t,r_t, s_{t+1}\}$ tuples for each agent, which are used to update the policies in the same way as for standard Q-learning as presented below.\\ \end{itemize} \subsubsection{MARL structures} Both the centralised and decentralised structures proposed use fixed-size $|\mathcal{S}| \times |\mathcal{A}|$ Q-tables corresponding to individual state-action pairs. The size of a global Q-table referencing all possible combinations of states and actions would grow exponentially with the number of agents, limiting scalability due to memory limitations and exploration time requirements. Moreover, as the strategies proposed in this paper are privacy-preserving, only local state-action pairs are used for individual action selection, so the additional detail of a global Q-table would be wasted. \begin{itemize} \item \textbf{Distributed learning}. Each agent $i$ learns its $Q_i$ table with its own experience.
No information is shared between agents. \item \textbf{Centralised learning}. A single table $Q_\textrm{c}$ uses experience from all agents during pre-learning. All agents use the centrally learned policy for decentralised implementation. \end{itemize} \subsubsection{Reward definitions} The reward definition is central to learning as its maximisation forms the basis for incrementally altering the policy \citep{Sutton1998}. Assessing the impact of individual actions on global rewards accurately is key to the effective coordination of a large number of prosumers. In the following, the Q-tables $Q^0$, $Q^\textrm{diff}$, $Q^\textrm{A}$ and $Q^\textrm{count}$ may be either agent-specific $Q_i$ or centralised $Q_\textrm{c}$, depending on the MARL structure. We propose four variations of the Q-table update rule for each collected experience tuple $(s_i^t, a_i^t, r^t,s_i^{t+1})$. \begin{equation} Q(s_i^t, a_i^t)\leftarrow Q(s_i^t,a_i^t) + \alpha \delta \end{equation} \begin{itemize} \item \textbf{Total reward}. The instantaneous total system reward $r^t = \hat{F}_t$ is used to update the Q-table $Q^0$. \begin{equation} \delta = r^t + \gamma V^0(s_i^{t+1}) - Q^0(s_i^t, a_i^t) \end{equation} \item \textbf{Marginal reward}. The difference in total instant rewards $r^t$ between that if agent $i$ selects the greedy action and that if it selects the default action is used to update $Q^\textrm{diff}$ \citep{Wolpert2002}. The default action $a_\textrm{default}$ corresponds to $\psi = 1$, where no flexibility is used. The default reward $r^t_{a_{i}=a_\textrm{default}}$, where all agents perform their greedy action apart from agent $i$ which performs the default action, is obtained by an additional simulation. \begin{equation} \delta = \left(r^t - r^t_{a_{i}=a_\textrm{default}}\right) + \gamma V^\textrm{diff}(s_i^{t+1}) - Q^\textrm{diff}(s_i^t, a_i^t) \end{equation} \item \textbf{Advantage reward}.
The difference between the learned $Q^0$ values when $i$ performs the greedy and the default action is used. This corresponds to the estimated increase in rewards not just instantaneously but over all future states, analogously to \citep{Foerster2018}. No additional simulations are required as the $Q^0$ values are refined over the normal course of explorations. \begin{equation} \delta = \left(Q^0(s_i^t, a_i^t) - Q^0(s_i^t, a_{a_i=a_\textrm{default}})\right) - Q^\textrm{A}(s_i^t, a_i^t) \end{equation} \item \textbf{Count}. The Q-table stores the number of times each state-action pair is selected by the optimiser. \begin{equation} \alpha\delta = 1 \end{equation} \end{itemize} \section{Input Data}\label{data} \begin{table*}[!t] \begin{tabularx}{\textwidth}{ c|m{5cm}|m{5cm}} \toprule & Normalised profile & Scaling factor \\ \hline PV \rule[-10pt]{0pt}{20pt} & Randomly selected from current month bank $b_{t+1}=(m)$ & \multirow{2}{=}{\setlength\parskip{\baselineskip} Computed as $\lambda_{t+1} = \lambda_{t} + x$, where $x \sim \Gamma\left(\alpha(b_{t},b_{t+1}),\beta(b_{t},b_{t+1})\right)$} \rule[-10pt]{0pt}{20pt} \\ \cline{1-2} Load \rule[-5pt]{0pt}{30pt} & \multirow{2}{=}{\setlength\parskip{\baselineskip} Cluster selected based on transition probability $p(k_{t+1} | k_t, w_t, w_{t+1})$ \newline Normalised profile randomly selected from bank $b_{t+1} = (k_{t+1}, w_{t+1})$} \rule[-5pt]{0pt}{30pt} & \\ \cline{1-1} \cline{3-3} EV \rule[-5pt]{0pt}{30pt} & & Random variable from discrete distribution $p(\lambda_{t+1}|\lambda_t, b_t, b_{t+1})$ \rule[-10pt]{0pt}{20pt} \\ \bottomrule \end{tabularx} \caption{Markov chain mechanism for selecting behaviour clusters, profiles and scaling factors for input data in subsequent days} \label{tab:loadnextday} \end{table*} This section presents the data that is fed into the model presented in \Cref{system}.
Interaction with this data will shape the policies learned through RL \citep{Sutton1998} and should reflect resource intermittency and uncertainty to maximise the expectation of rewards in a robust way without over-fitting. EV demand $\textrm{d}_{\textrm{EV},i}^t$ and availability $\upmu_i^t$, PV production $\textrm{p}_{\textrm{PV},i}^t$ and electricity consumption $\textrm{d}_i^{t}$ are drawn from large representative datasets. \subsection{Data selection and pre-processing} Load and PV generation profiles are obtained from the Customer Led Network Revolution (CLNR), a UK-based smart grid demonstration project \citep{TC1a,TC5}, and mobility data from the English National Travel Survey (NTS) \citep{DepartmentforTransport2019}. The NTS does not focus on EVs only and offers a less biased view into the general population's travel patterns than small-scale EV trial data, both because less data is available for EVs than for generic cars and because self-selected early EV trial participants may not be representative of patterns once EVs become widely adopted. It is implicitly assumed that electrification will not affect transport patterns \citep{Crozier2018}. NTS data from 82,455 households from 2002 to 2017 results in 1,272,834 full days of travel profiles. Load and PV data from 11,907 customers between 2011 and 2014 yields 620,702 and 22,670 full days of data, respectively. Profiles are converted to hourly resolution, and single missing points are replaced with the value at the same time the day or week before or after, selecting the candidate with the lowest sum of squared differences between its previous and subsequent points and those around the gap. Tested with available data, this yields absolute errors with mean 0.13 and 0.08 kWh and 99th percentile 1.09 and 0.81 kWh for PV and load data. PV sources have nominal capacities between 1.35 and 2.02 kWp. The at-home availability of the vehicles is inferred from the recorded journeys' origin and destination.
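The single-missing-point replacement described above can be sketched as follows, assuming hourly series in which `None` marks the gap; the day/week offsets and tie-breaking order are illustrative.

```python
def impute_single_gap(series, t, offsets=(-168, -24, 24, 168)):
    """Fill series[t] with the value at the same time the day (24 h) or
    week (168 h) before or after, picking the candidate whose neighbouring
    points best match those around the gap (lowest sum of squared
    differences). Returns None if no valid candidate exists."""
    best, best_err = None, float("inf")
    for off in offsets:
        j = t + off
        if 0 < j < len(series) - 1 and \
                None not in (series[j - 1], series[j], series[j + 1]):
            err = ((series[j - 1] - series[t - 1]) ** 2
                   + (series[j + 1] - series[t + 1]) ** 2)
            if err < best_err:
                best, best_err = series[j], err
    return best
```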
EV energy consumption profiles are obtained using representative consumption factors from a tank-to-wheel model proposed in \citep{Crozier2018}, dependent on travel speed and type (rural, urban, motorway). \subsection{Markov chain} \begin{figure*}[!b] \begin{center} \includegraphics[width=0.7\textwidth]{f_load_EV_omni.pdf} \end{center} \caption{Scaling factors for normalised profiles (i.e. total daily loads in kWh) in subsequent days. Linear correlation can be observed for the load profiles, while more complex patterns are exhibited for EV consumption. $\rho$ is the Pearson correlation coefficient.} \label{fig:Corr} \end{figure*} During learning, agents continuously receive experience to learn from. However, numerous subsequent days of data are not available for single agents. We design a Markov chain mechanism to feed consistent profiles for successive days, using both consistent scaling factors and behaviour clusters. Daily profiles for load and travel are normalised such that $\sum_{t=0..24}{x^t}=1$, and clustered using K-means, minimising the within-cluster sum-of-squares \citep{Lloyd1982}, into four clusters each for weekday and weekend data (with one for no travel). The features used for load profile clustering are normalised peak magnitude and time and normalised values over critical time windows, and those for travel are normalised values between 6 am and 10 pm. PV profiles are grouped per month. Probabilistic Markov chain transition rules are shown in \Cref{tab:loadnextday}. Transition probabilities for clusters $k$ and scaling factors $\lambda$ are obtained from available transitions between subsequent days in the datasets for each day type $w$ (weekday or weekend day).
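The day-to-day transition mechanism of \Cref{tab:loadnextday} can be sketched as follows (illustrative Python; the transition matrix, gamma parameters and bank structure are toy placeholders, not values fitted from the CLNR/NTS data):

```python
import numpy as np

rng = np.random.default_rng(0)

clusters = [0, 1, 2, 3]
# Toy p(k_{t+1} | k_t) for a fixed (weekday -> weekday) transition; the real
# probabilities p(k_{t+1} | k_t, w_t, w_{t+1}) are estimated from the datasets.
P_load = np.full((4, 4), 0.1) + np.eye(4) * 0.6

def next_day(k_t, lam_t, bank):
    """Draw the next-day behaviour cluster, normalised profile and scaling factor."""
    k_next = rng.choice(clusters, p=P_load[k_t])              # cluster transition
    profile = bank[k_next][rng.integers(len(bank[k_next]))]   # profile from bank
    # Scaling factor lam_{t+1} = lam_t + x with x a zero-mean gamma residual
    # (toy shape/rate here; the paper fits alpha, beta per bank transition).
    alpha, beta = 2.0, 2.0
    x = rng.gamma(alpha, 1.0 / beta) - alpha / beta
    lam_next = max(lam_t + x, 0.0)
    return k_next, lam_next * profile, lam_next

# Usage: banks of normalised 24-hour profiles (each row sums to one)
bank = {k: np.ones((5, 24)) / 24.0 for k in clusters}
k1, scaled, lam1 = next_day(0, 10.0, bank)
```

Because profiles are normalised, the scaled profile integrates to the drawn daily total `lam_next`.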
\Cref{fig:Corr} shows that subsequent PV and load scaling factors are strongly linearly correlated, with the residuals from perfect correlation following zero-mean gamma distributions, whereas EV load scaling factors follow more complex patterns, so transition probabilities are computed between 50 discrete intervals. \section{Case study results and discussion}\label{results} This section compares the performance of the residential flexibility coordination strategies presented in \Cref{RLSection} to baseline and upper bound scenarios for increasing numbers of prosumers. The performance of traditionally used MARL strategies drops at scale, while that of the novel optimisation-based methodology using marginal rewards is maintained. \subsection{Set-up} The MARL algorithm is trained in off-line simulations using historical data prior to online implementation. This means agents do not trial unsuccessful actions with real-life impacts during learning. Moreover, the computational burden is borne prior to implementation, while prosumers only apply pre-learned policies, avoiding the computational challenges of large-scale real-time control. The learning occurs over 50 epochs consisting of an exploration, an update and an evaluation phase. First, the environment is explored over two training episodes of duration $|\mathcal{T}| = 24$ hours. Learning in batches of multiple episodes helps stabilise learning in the stochastic environment. Then, Q-tables are updated based on the rules presented in \Cref{methodologies}. Finally, an evaluation is performed using a deterministic greedy policy on new evaluation data. Ten repetitions are performed such that the learning may be assessed over different trajectories. The Social Cost of Carbon is set at 70 £/tCO$_2$, consistent with the UK 2030 target \citep{Hirst2018}.
Weather \citep{WeatherWunderground2020}, electricity time-of-use prices \citep{OctopusEnergy2019} and grid carbon intensity \citep{NationalGridESO2020} are from January 2020, specified for London, UK where relevant. The low solar heat gains in January are neglected \citep{Brown2020}. Other relevant parameters for the case studies are listed in \Cref{app:inputs}. On an Intel(R) Core(TM) i7-9800X CPU @ 3.80GHz, the computation time for a learning trajectory is $2^\prime45^{\prime\prime}$ for one agent and $97^\prime5^{\prime\prime}$ for 30 agents, including evaluation points. The policy can then be directly applied at the household level during operation. Case study results using different experience sources, reward definitions and MARL structures are presented in \Cref{fig:results}. Acronyms for each strategy are tabulated in the legend. Positive values denote savings relative to a baseline scenario where all agents are passive, i.e. not using their flexibility, with EVs charged immediately and no flexible loads delayed. As the Q-learning policies are first initialised with zero values, completely random actions are chosen in the first epoch of learning, which provides rewards far below the baseline. As agents collect experience and update their policies at each epoch, improved policies are learned, some of which are able to outperform the baseline. An upper bound is provided by results from \say{omniscient} convex optimisations, which are however not achievable in practice for three main reasons. Firstly, they use perfect knowledge of all the environment variables in the present and future, despite uncertainty in renewable generation, the grid mix, and customer behaviour. Optimisation with inaccurate data would lead to suboptimal results. Secondly, prosumers may not be willing to yield their data and direct control to an external entity.
Finally, central optimisations become computationally expensive for real-time control of large numbers of prosumers. \subsection{Results} Results presented in \Cref{fig:results} show that only the algorithms learning from optimisations maintain stable coordination performance at scale, while the performance of traditionally used MARL algorithms drops in this context of stochasticity and partial observability. The optimisation-based algorithm which uses marginal rewards (MO) performed best. We further elaborate on the results in the subsections below. \begin{figure*}[t!] \begin{center} \includegraphics[width=\textwidth]{results_vs_nag_20211210.pdf} \end{center} \caption{The left-hand side plot shows the five-epoch moving average of evaluation rewards relative to baseline rewards for a single prosumer. The right-hand side plot shows the mean of the final 10 evaluations against the number of prosumers. Lines show median values and shaded areas the 25th and 75th percentiles over the 10 repetitions. The best-performing MARL structure is displayed for each exploration source and reward definition pair. The performance of the baseline MARL algorithm (TE, orange) drops as the number of concurrently learning agents in the stochastic environment increases; the best-performing alternative algorithm proposed (MO, purple) maintains high performance at scale.} \label{fig:results} \end{figure*} \subsubsection{Environment exploration-based learning} The centralised MARL structure is favoured for environment exploration-based learning (continuous lines in \Cref{fig:results}). A single policy uses experience collected by all agents, rather than each agent learning from their own experience only. \Cref{fig:results} shows that environment exploration-based MARL using total rewards (TE, orange), the baseline MARL framework, exhibits high performance for a single agent. However, savings drop as the number of cooperating agents increases, falling to around zero beyond ten agents.
Independent learners struggle to isolate the contribution of their own actions to total rewards from the stochasticity of the environment, compounded by the random explorations of other simultaneously learning agents and the non-stationarity of their on-policy behaviour \citep{Matignon2012}. Using advantage rewards (AE, grey), based on estimates of the long-term value of actions relative to that of the baseline action, yields superior results beyond two agents. However, as AE uses the total reward $Q^0$-table as an intermediary step, results similarly drop for increasing numbers of agents. Using marginal rewards (ME, dark green), the value of each agent's action relative to the baseline action is singled out immediately by an additional simulation and used as a reward at each time step. This improves the performance relative to TE and AE for five or more agents, though still with declining performance as the number of agents increases. \subsubsection{Optimisation-based learning} Optimisation-based learning generally favours the distributed MARL structure, with agents able to converge to distinct compatible policies (dashed lines in \Cref{fig:results}). Comparing trajectories in \Cref{fig:results}, learning from the total rewards obtained by an optimiser (TO, light blue) yields lower savings than when using environment explorations (TE). The learned policies yield negative savings, i.e. would provide worse outcomes than inflexible agents. The omniscient optimiser takes precise, extreme decisions thanks to its perfect knowledge of all current and future system variables, importing at very high $\psi$ values when it is optimal to do so. RL algorithms on the other hand are used under partial observability, aiming for actions that statistically perform well under uncertainty. Agents independently picking TO-based decisive actions in a stochastic environment do not yield optimal outcomes.
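The marginal-reward signal used by ME (and MO) can be written as a standard tabular update fed with the difference between two simulations; a minimal sketch (Python; the reward values and state/action names are illustrative placeholders, not the paper's implementation):

```python
def marginal_reward(r_joint, r_baseline):
    """Marginal reward for one agent: system reward under the joint action minus
    the reward of an additional simulation in which this agent takes the default
    action while all other agents keep their actions."""
    return r_joint - r_baseline

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Tabular TD(0) update, here applied with the marginal reward as signal."""
    td_target = r + gamma * max(Q[s_next].values())
    Q[s][a] += alpha * (td_target - Q[s][a])
    return Q[s][a]

# Usage: the agent's flexible action raised the system reward from 3.0 to 5.0
Q = {s: {"flex": 0.0, "default": 0.0} for s in (0, 1)}
r = marginal_reward(5.0, 3.0)   # agent's isolated contribution: 2.0
q_update(Q, 0, "flex", r, 1)
```

The extra baseline simulation is what singles out each agent's contribution from the stochasticity of the environment and the other agents' behaviour.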
Assessing the long-term advantage of actions from optimisations (AO, dark blue) follows a similar trend, whilst providing marginally superior savings relative to TO. Optimisation-based learning using marginal rewards (MO, purple) offers the highest savings as the additional baseline simulations are best able to isolate the contribution of individual actions from variations caused by both the environment and other agents. When increasing the number of agents, the strategy is able to learn from optimal, stable, consistently behaving agents. Savings of 6.18p per agent per hour, or £45.11 per agent per month, are obtained on average for 30 agents, corresponding to a 33.7\% reduction from baseline costs. 65.9\% of savings stem from reduced battery depreciation, 20.32\% from distribution grid congestion, 11.1\% from grid energy, and 2.7\% from greenhouse gas emissions. The count-based strategy learning from optimisations (CO, light green) seeks to reproduce, for local decision-making under partial observability, the state-action patterns of the omniscient optimiser, which has perfect knowledge of system variables and perfect control of agents. It yields lower savings than MO, though with stable performance at scale. Savings of £21.09 per agent per month on average for 30 agents are obtained. The battery and distribution grid costs increase by an equivalent of 6.0\% and 7.7\% of total savings respectively, while grid energy and greenhouse gas emissions cost reductions represent 59.7\% and 54.0\% of total savings. Both the MO and CO strategies exhibit stable performance at scale, though converging to different types of policy. The MO policy saves more by smoothing out the charging and distribution grid utilisation profiles despite smaller savings in imports and emissions costs, while CO derives a larger advantage from the grid price differentials in grid imports, though with higher battery and distribution grid costs.
The weight applied to each of those competing objectives in the objective function directly impacts the policies that are learned. Examples of how the individual home energy management system decision variables (heating, energy consumption, battery charging) vary based on the controller are illustrated in \Cref{app:example_case_study}. Overall, the new class of optimisation-based learning performs significantly better across different numbers of prosumers, with higher savings and lower inter-quartile range than environment-based learning at scale. This superior performance requires computation to run optimisations on historical data and to perform baseline simulations to compute marginal rewards, though computational time for pre-learning is not strictly a limiting factor as it is performed off-line ahead of implementation. A fundamental challenge in MARL has been the trade-off between fully centralised value functions, which are impractical for more than a handful of agents, and, in a more straightforward approach, independent learning of individual action-value functions by each agent in independent Q-learning (IQL) \citep{Tan1993}. However, an ongoing issue with this approach has been that of convergence at scale, as agents do not have explicit representations of interactions between agents, and each agent's learning is confounded by the learning and exploration of others \citep{Rashid2020}. As shown in \Cref{fig:results}, the Pareto selection, non-stationarity and stochasticity issues presented in \Cref{sec:gapanalysis} have prevented environment exploration-based learners from achieving successful MARL cooperation at scale for agents under partial observability in a stochastic environment.
This case study of coordinated residential energy management shows that the novel combination of marginal rewards, which help agents isolate their marginal contribution to total rewards, and the learning from results of convex optimisations, where agents learn successful policy equilibria from omniscient, stable, and consistent solutions, offers significant improvements on these scalability and convergence issues. \section{Conclusion}\label{conclusion} In this paper, a novel class of strategies has addressed the scalability issue of residential energy flexibility coordination in a cost-efficient and privacy-preserving manner. The combination of off-line optimisations with multi-agent reinforcement learning provides high, stable coordination performance at scale. We identified in the literature that the concept of RL-based implicit energy coordination, where energy prosumers cooperate towards global objectives based on local information only, had been under-researched beyond frequency droop control with limited numbers of agents. The scalability of such methods was identified as a key gap that we have sought to bridge. The novel coordination mechanism proposed in this paper thus satisfies the criteria for successful residential energy coordination set out in the introduction, as tested with large banks of real data in the case studies: \begin{itemize} \item Computational scalability: The scalability of traditional learning algorithms is significantly improved thanks to fixed-size Q-tables to avoid the curse of dimensionality, so that policies can be learned for larger numbers of agents. The proposed method does not require expensive communication and control appliances at the prosumer level, as pre-learned policies are directly applied with no further communication and no exponential-time real-time optimisations needed. This is a crucial benefit for applications with physical limitations in hardware availability and processing time.
\item Performance scalability: The coordination performance remains high for increasing numbers of prosumers despite the challenges of partial observability, environment stochasticity and the concurrent learning of agents, thanks to learning from the results of global omniscient optimisations on historical data, and to reward signals that isolate individual contributions to global rewards. A significant value of £45.11 per agent per month was obtained in the presented case study for 30 agents, thanks to savings in energy, prosumer storage and societal greenhouse gas emissions-related costs. Those savings do not drop with increasing numbers of agents, unlike with standard MARL approaches. \item Acceptability: The approach does not rely on sharing of personal data, thermal discomfort, or hindrance/delay of activities, and the appliances are controlled locally. This cost-efficient and privacy-preserving implicit coordination approach could help integrate distributed energy resources such as residential energy, otherwise excluded from energy systems' flexibility management. \end{itemize} Important future work is a more detailed assessment of the impacts of the coordination strategies on power flows, as well as an evaluation of the generalisation and adaptability potential of policies when used by other households or if household characteristics change over time. Moreover, while all agents readily reduce individual costs through participation in the framework, further game-theoretic tools could be used to design a post-operation reward scheme. \section*{Acknowledgement} This work was supported by the Saven European Scholarship and by the UK Research and Innovation and the Engineering and Physical Sciences Research Council (award references EP/S000887/1, EP/S031901/1, and EP/T028564/1).
\section{Introduction and main results} Throughout this paper, for every metric space $(E,d)$ and every function $f:E \to \mathbb{R},$ we will denote the Lipschitz constant of $f$ on $E$ by $\lip(f,E),$ that is, $$ \lip(f,E):= \inf \lbrace L >0 \: : \: |f(x)-f(y)| \leq L d(x,y) \quad \text{for all} \quad x,y\in E \rbrace. $$ Also, if $\lambda \geq 0,$ we will say that $f:E \to \mathbb{R}$ is $\lambda$-Lipschitz on $E$ whenever $|f(x)-f(y)| \leq \lambda d(x,y)$ for every $x,y\in E.$ We will denote by $B(x_0,r)$ the closed ball centered at $x_0$ and with radius $r>0$ with respect to the metric on $E.$ Finally, for any Banach space $X$ with norm $\| \cdot \|,$ the dual norm on $X^*$ will be denoted by $\| \cdot \|_*.$ \medskip In this paper we deal with the following problem. \begin{problem}\label{mainproblem} Let $X$ be a Banach space, let $u_0: \overline{\Omega} \to \mathbb{R}$ be a Lipschitz function defined on the closure of an open subset $\Omega$ of $X$ and let $k \in \mathbb{N} \cup \lbrace \infty \rbrace.$ Given $\varepsilon >0,$ does there exist a function $v: \overline{\Omega} \to \mathbb{R}$ of class $C^k(\Omega)$ with $\lip(v, \overline{\Omega}) \leq \lip(u_0, \overline{\Omega}), \: v=u_0 $ on $\partial \Omega$ and $|u_0-v| \leq \varepsilon$ on $\overline{\Omega}$ ? \end{problem} In finite dimensional spaces, the integral convolution with mollifiers provides uniform approximation by $C^\infty$ functions preserving the Lipschitz constant of the function to be approximated. 
However, this approximation does not necessarily preserve the value of $u_0$ on $\partial \Omega.$ On the other hand, an approximation theorem for locally Lipschitz functions defined on open subsets of $\mathbb{R}^n$ was proved in \cite[Theorem 2.2]{CzarneckiRifford}, which implies that for any continuous function $\delta: \Omega \to (0,+ \infty)$ and any locally Lipschitz function $u_0,$ there exists a function $v$ of class $C^\infty$ satisfying (among other properties) that $$ |u_0(x)-v(x)| \leq \delta(x) \quad \text{and} \quad | Dv(x)| \leq \lip( u_0 , B(x, \delta(x)) \cap \Omega ) + \delta(x), \quad x\in \Omega. $$ Using the above result with $\delta(x)= \min \lbrace \varepsilon, \dist(x,\partial \Omega) \rbrace,$ we get a smooth Lipschitz approximation $v$ of $u_0$ that extends continuously to $\overline{\Omega}$ by setting $v=u_0$ on $\partial \Omega.$ The function $v$ has Lipschitz constant arbitrarily close to $\lip(u_0, \overline{\Omega}),$ but bigger than $\lip(u_0, \overline{\Omega})$ in general. Thus, this does not yield any answer to Problem \ref{mainproblem}. \medskip In the infinite dimensional case, it was proved in \cite[Theorem 1]{AFLR} that any Lipschitz function defined on an open subset $\Omega$ of a separable Hilbert space (or even a separable infinite dimensional Riemannian manifold) can be approximated in the $C^0$-fine topology by $C^\infty$ functions whose Lipschitz constant can be taken to be arbitrarily close to the Lipschitz constant of $u_0,$ i.e., for any given continuous function $\delta : \Omega \to (0,+ \infty)$ and $r>0,$ there exists $v$ of class $C^\infty$ such that $$ |u_0(x)-v(x)| \leq \delta(x), \quad x\in \Omega \quad \text{and} \quad \lip(v, \Omega) \leq \lip(u_0, \overline{\Omega})+r. $$ \medskip One can find in \cite{AFK, HajekJohanis, MJSLSG} some results on approximation of Lipschitz functions by $C^k$-smooth Lipschitz functions in more general Banach spaces.
In these results, the approximating function preserves the Lipschitz constant of the original function up to a factor $C_0\geq 1,$ which only depends on the space and is bigger than $1$ in general. \medskip In this paper we show that the answer to Problem \ref{mainproblem} depends on the relation between $\lip(u_0, \partial \Omega)$ and $\lip(u_0, \overline{\Omega}).$ Let us now state our main results in this direction. \begin{theorem}\label{secondmaintheorem} Let $X$ be a finite dimensional normed space, or a separable Hilbert space or the space $c_0(\Gamma),$ for an arbitrary set of indices $\Gamma.$ Let $\Omega$ be an open subset of $X$ and let $u_0 : \overline{\Omega} \to \mathbb{R}$ be a Lipschitz function such that $\lip(u_0, \partial \Omega) < \lip(u_0, \overline{\Omega}).$ Given $\varepsilon >0,$ there exists a function $v: \overline{\Omega} \to \mathbb{R}$ such that $v$ is of class $C^\infty(\Omega), \: v$ is Lipschitz on $\overline{\Omega}$ with $\lip(v, \overline{\Omega})\leq \lip(u_0, \overline{\Omega}), \: v=u_0$ on $\partial \Omega$ and $|u_0-v| \leq \varepsilon$ on $\overline{\Omega}.$ \end{theorem} For non-separable Hilbert spaces, we have the following. \begin{theorem}\label{maintheoremnonseparablehilbert} Let $X$ be a Hilbert space. 
Let $\Omega$ be an open subset of $X$ and let $u_0 : \overline{\Omega} \to \mathbb{R}$ be a Lipschitz function such that $\lip(u_0, \partial \Omega) < \lip(u_0, \overline{\Omega}).$ Given $\varepsilon >0,$ there exists a function $v: \overline{\Omega} \to \mathbb{R}$ such that $v$ is of class $C^1(\Omega), \: v$ is Lipschitz on $\overline{\Omega}$ with $\lip(v, \overline{\Omega}) \leq \lip(u_0, \overline{\Omega}), \: v=u_0$ on $\partial \Omega$ and $|u_0-v| \leq \varepsilon$ on $\overline{\Omega}.$ \end{theorem} Theorems \ref{secondmaintheorem} and \ref{maintheoremnonseparablehilbert} give a positive answer to Problem \ref{mainproblem} for the $C^1(\Omega)$ or $C^\infty(\Omega)$ class, when $\lip(u_0, \partial \Omega) < \lip(u_0, \overline{\Omega}),$ in certain Banach spaces. These theorems will be proved by combining approximation techniques in the pertinent space with the following result. \begin{theorem}\label{generaltheorem} Let $k\in \mathbb{N} \cup \lbrace \infty \rbrace$ and let $X$ be a Banach space with the property that for every Lipschitz function $f: X \to \mathbb{R}$ and every $\eta>0,$ there exists a function $g :X \to \mathbb{R}$ of class $C^k(X)$ such that $|f-g| \leq \eta$ on $X$ and $\lip(g, B(x_0,r)) \leq \lip(f, B(x_0, r+\eta) ) + \eta$ for every ball $B(x_0,r) \subset X.$ Then, if $\Omega$ is an open subset of $X, \: u_0 : \overline{\Omega} \to \mathbb{R}$ is a Lipschitz function such that $\lip(u_0, \partial \Omega) < \lip(u_0, \overline{\Omega})$ and $\varepsilon >0,$ there exists a function $v: \overline{\Omega} \to \mathbb{R}$ such that $v$ is of class $C^k(\Omega), \: v$ is Lipschitz on $\overline{\Omega}$ with $\lip(v, \overline{\Omega}) \leq \lip(u_0, \overline{\Omega}), \: v=u_0$ on $\partial \Omega$ and $|u_0-v| \leq \varepsilon$ on $\overline{\Omega}.$ \end{theorem} In Section \ref{sectionexample}, we will see an example on $\mathbb{R}^2$ with the $\ell_1$ norm showing that Problem \ref{mainproblem} has a negative answer (even for
the class of functions which are merely differentiable on $\Omega$) if we allow $\lip(u_0, \partial \Omega) = \lip(u_0, \overline{\Omega}).$ Therefore, one can say that Theorem \ref{secondmaintheorem} is optimal (in the sense of Problem \ref{mainproblem}), at least in the setting of finite dimensional normed spaces. \medskip We now consider a subproblem of Problem \ref{mainproblem} when $X$ is a finite dimensional normed space. \begin{problem}\label{mainproblemalmostclassicalsolutions} Let $(X, \| \cdot \|)$ be a finite dimensional normed space with $\dim(X) \geq 2$ and let $u_0: \overline{\Omega} \to \mathbb{R}$ be a $1$-Lipschitz function defined on the closure of an open subset $\Omega$ of $X.$ Given $\varepsilon >0,$ does there exist a $1$-Lipschitz function $w: \overline{\Omega} \to \mathbb{R}$ such that $w$ is differentiable on $\Omega$ with $\| Dw\|_*=1$ almost everywhere on $\Omega, \: w=u_0 $ on $\partial \Omega$ and $|u_0-w| \leq \varepsilon$ on $\overline{\Omega}$ ? \end{problem} Observe that if $w=u_0$ on $\partial \Omega$ and $\lip(u_0,\partial \Omega) <1,$ then the Mean Value Theorem yields the existence of $x\in \Omega$ such that $\| Dw(x)\|_* <1.$ Therefore the function $w$ (if it exists) has no continuous derivative in this case. 
\medskip The following theorem gives a positive answer to Problem \ref{mainproblemalmostclassicalsolutions} when $\lip(u_0, \partial \Omega) <1.$ \begin{theorem}\label{maintheoremarbitrarynorm} Let $\Omega$ be an open subset of a finite dimensional normed space $(X, \| \cdot \|)$ with $ \dim(X) \geq 2.$ Let $u_0: \overline{\Omega} \to \mathbb{R}$ be a $1$-Lipschitz function such that $\lip(u_0, \partial \Omega) <1.$ Given $\varepsilon >0,$ there exists a differentiable $1$-Lipschitz function $w: \overline{\Omega} \to \mathbb{R}$ such that $\| D w\|_*=1 $ almost everywhere on $\Omega, \: w=u_0$ on $\partial \Omega$ and $|u_0-w| \leq \varepsilon$ on $\overline{\Omega}.$ \end{theorem} In Section \ref{sectionexample}, we prove, using the theory of almost minimizing Lipschitz extensions, that if $\Omega$ is an open subset in a $2$-dimensional euclidean space and if $u_0: \partial \Omega \to \mathbb{R}$ is a $1$-Lipschitz function, then there exists a differentiable $1$-Lipschitz function $w: \overline{\Omega} \to \mathbb{R}$ such that $\| D w\|_*=1 $ almost everywhere on $\Omega$ and $\: w=u_0$ on $\partial \Omega$. However, Example \ref{counterexamplel1} in Section \ref{sectionexample} shows that the above theorem is optimal in the sense of Problem \ref{mainproblemalmostclassicalsolutions}. Observe that Theorem \ref{maintheoremarbitrarynorm} covers the case of homogeneous Dirichlet conditions. Also, we notice that the above theorem does not hold when $X= \mathbb{R}.$ Indeed, if $u_0: [0,1] \to \mathbb{R}$ is $1$-Lipschitz and differentiable on $(0,1),$ with $|u_0(1)-u_0(0)| <1,$ then a result of A. Denjoy \cite{Denjoy} tells us that either $\lbrace x \: : \: |u_0'(x)| <1 \rbrace$ is empty or else it has positive Lebesgue measure. But this subset is nonempty by the Mean Value Theorem. \medskip The contents of the paper are as follows. 
In Section \ref{sectionapproximationmetricspaces}, we show that in general metric spaces, one can approximate a Lipschitz function $u_0$ by a function which coincides with $u_0$ on a given subset and has, on bounded subsets, better Lipschitz constants. In Section \ref{sectionsmoothapproximation}, we will give the proof of Theorems \ref{generaltheorem}, \ref{secondmaintheorem} and \ref{maintheoremnonseparablehilbert} with the decisive help of the above result. In Section \ref{sectionapproximationalmostclassical}, we use Theorem \ref{secondmaintheorem} and the results in \cite{DevilleJaramillo} to prove Theorem \ref{maintheoremarbitrarynorm}. Finally, in Section \ref{sectionexample}, we consider the case $\lip(u_0, \partial \Omega) = \lip(u_0, \overline{\Omega}):$ although a partial positive result in the euclidean setting can be obtained, we show that Problem \ref{mainproblem} does not always have a positive answer in this limiting case. \section{Approximation by functions with smaller Lipschitz constants}\label{sectionapproximationmetricspaces} Throughout this section, all the sets involved are considered to be subsets of a metric space $(X,d)$ and all the Lipschitz constants are taken with respect to the distance $d.$ The following result will be very useful in Section \ref{sectionsmoothapproximation} and it is interesting in itself. \begin{theorem}\label{theoremglobalapproximation} Let $E$ and $F$ be two nonempty closed sets such that $F \subset E,$ let $u_0: E \to \mathbb{R}$ be a $K$-Lipschitz function such that $\lambda_0:=\lip(u_0,F) <K.$ Given $\varepsilon >0,$ there exists a function $u: E \to \mathbb{R}$ such that $|u-u_0| \leq \varepsilon$ on $E, \: u=u_0$ on $F$ and $u$ has the property that $\lip(u,B) <K$ for every bounded subset $B$ of $E.$ \end{theorem} A crucial step for proving the above theorem is the following lemma. 
For any two nonempty subsets $A$ and $B$ of $X$ and for any $x\in X$, we will denote $$ \dist(x,B):= \inf \lbrace d(x,y) \: : \: y\in B \rbrace, $$ $$ \dist(A,B):= \inf \lbrace d(x,y) \: : x\in A,\: y\in B \rbrace\quad\text{and}\quad \diam(A):= \sup \lbrace d(x,y) \: : \: x,y\in A \rbrace. $$ \begin{lemma}\label{lemmalocalapproximation} Let $E$ and $F$ be two nonempty closed subsets such that $F \subset E$ and $E \setminus F$ is bounded. Let $u_0: E \to \mathbb{R}$ be a $1$-Lipschitz function, let $u_\mu : F \to \mathbb{R}$ be $\mu$-Lipschitz, with $\mu < 1,$ let $\delta \geq 0$ and assume that $| u_\mu -u_0| \leq \delta$ on $F.$ For every $ \mu < \lambda < 1,$ there exists a function $u_\lambda: E \to \mathbb{R}$ such that $u_\lambda$ is $\lambda$-Lipschitz on $E$ with $u_\lambda = u_\mu$ on $F$ and $| u_0- u_\lambda |\leq \delta + \varepsilon( \lambda, \mu, E, F)$ on $E;$ where $$ \varepsilon( \lambda, \mu, E, F) = \frac{1-\lambda}{\lambda-\mu}(\lambda + \mu) \left( \diam( \overline{E\setminus F}) + \dist( \overline{E\setminus F}, F ) \right)>0 $$ and $\varepsilon( \lambda, \mu, E, F) =0$ whenever $E \setminus F= \emptyset.$ \end{lemma} \begin{proof} In the case when $E \setminus F = \emptyset,$ we have that $E=F$ and then it is enough to take $u_\lambda = u_\mu.$ From now on, we assume that $E \setminus F\ne\emptyset$, we fix $\mu <\lambda < 1$, and we denote $\varepsilon_\lambda = \varepsilon( \lambda, \mu, E, F)$. We now define the strategy of proof of the lemma. We first show that the family $$ \mathcal{C}_\lambda := \lbrace u : E \to \mathbb{R} \: : \: u \:\:\text{is} \: \: \lambda\text{-Lipschitz on} \:\: E, \: u\leq u_0 + \delta+ \varepsilon_\lambda \:\: \text{on} \:\: E, \: u=u_\mu \:\: \text{on} \:\: F \rbrace $$ is nonempty, and then we define the function $u_\lambda$ by: \begin{equation} \label{definitionulambda} u_\lambda (x) := \sup\lbrace u(x) \: : \: u \in \mathcal{C}_\lambda \rbrace, \quad x\in E. 
\end{equation} In order to prove that the function $u_\lambda$ is the required solution, it will be enough to check that $u_\lambda\in\mathcal{C}_\lambda$ and that $u_0 \leq u_\lambda + \delta + \varepsilon_\lambda$ on $E.$ \item[] $\textbf{1.}$ We now prove that the family $\mathcal{C}_\lambda$ is nonempty. Consider the function $$ v(x)= \sup_{y\in F} \lbrace u_\mu(y)-\lambda d(x,y) \rbrace, \quad x\in E, $$ and let us see that $v\in \mathcal{C}_\lambda.$ Since $u_\mu$ is $\lambda$-Lipschitz (in fact, $\mu$-Lipschitz) on $F,$ it follows from standard calculations concerning the sup convolution of Lipschitz functions that $v$ is a well-defined $\lambda$-Lipschitz function on $E$ with $v=u_\mu$ on $F.$ Now, given $x \in E\setminus F$ and $y \in F,$ let us see that $u_\mu(y)-\lambda d(x,y) \leq u_0(x) + \delta+ \varepsilon_\lambda.$ For every $\eta>0,$ we can find a point $z_\eta\in F$ with \begin{equation}\label{pointminimizingdistance} \dist(x,F ) + \eta \geq d(x,z_\eta). \end{equation} In the case when $u_\mu(y)-\lambda d(x,y) < u_\mu(z_\eta)-\lambda d(x,z_\eta),$ by the assumption that $|u_\mu-u_0| \leq \delta$ on $F$ together with \eqref{pointminimizingdistance} and the fact that $(1-\lambda)\dist(x,F)\le\varepsilon_\lambda$, we have that \begin{align*} u_\mu(y)-\lambda d(x,y) & < u_\mu(z_\eta)-\lambda d(x,z_\eta) \leq u_0(z_\eta) +\delta - \lambda d(x,z_\eta) \leq u_0(x)+ \delta + (1-\lambda) d(x,z_\eta) \\ & \leq u_0(x)+ \delta + (1-\lambda) \left( \dist(x,F ) + \eta \right) \leq u_0(x)+ \delta +\varepsilon_\lambda + (1-\lambda) \eta.
\end{align*} Suppose now that $u_\mu(y)-\lambda d(x,y) \geq u_\mu(z_\eta)-\lambda d(x,z_\eta).$ Then the fact that $u_\mu$ is $\mu$-Lipschitz on $F$ yields \begin{align*} u_\mu(y)-\lambda d(x,y) & \geq u_\mu(z_\eta)-\lambda d(x,z_\eta) \geq u_\mu(y)-\mu d(y,z_\eta)-\lambda d(x,z_\eta) \\ & \geq u_\mu(y) -\mu d(x,y)- \mu d(x,z_\eta)-\lambda d(x,z_\eta), \end{align*} which in turn implies \begin{equation}\label{comparabledistance} (\lambda- \mu) d(x,y) \leq (\lambda+ \mu) d(x,z_\eta). \end{equation} Using first that $u_0$ is $1$-Lipschitz on $E$ and then \eqref{comparabledistance} and \eqref{pointminimizingdistance}, we obtain \begin{align*} u_\mu(y)-\lambda d(x,y) & \leq u_0(y)+ \delta -\lambda d(x,y) \leq u_0(x)+ \delta + (1-\lambda ) d(x,y) \\ & \leq u_0(x)+ \delta+ \frac{1-\lambda}{\lambda-\mu}(\lambda + \mu) d(x,z_\eta) \leq u_0(x)+ \delta+ \frac{1-\lambda}{\lambda-\mu}(\lambda + \mu) \left( \dist(x,F)+ \eta \right) \\ & \leq u_0(x)+ \delta+ \varepsilon_\lambda + \frac{1-\lambda}{\lambda-\mu}(\lambda + \mu) \: \eta. \end{align*} Hence, in both cases, we have that $$ u_\mu(y)-\lambda d(x,y) \leq u_0(x)+ \delta+ \varepsilon_\lambda + \frac{1-\lambda}{\lambda-\mu}(\lambda + \mu) \: \eta, $$ and letting $\eta \to 0^+,$ it follows that $v(x) \leq u_0(x)+ \delta+ \varepsilon_\lambda$ for every $x\in E \setminus F.$ This proves the inequality $v \leq u_0+ \delta+ \varepsilon_\lambda$ on $E,$ which shows that $v\in \mathcal{C}_\lambda.$ \medskip \item[] $\textbf{2.}$ The function $u_\lambda$ belongs to $\mathcal{C}_\lambda$ because a supremum of $\lambda$-Lipschitz functions is a $\lambda$-Lipschitz function, and because inequalities and equalities are preserved by taking supremum. 
Before proving the inequality $u_0 \leq u_\lambda + \delta + \varepsilon_\lambda$ on $E$, we first show that $u_\lambda$ coincides with the function $$ v_\lambda(x):= \inf_{y \in F \cup S_\lambda} \lbrace u_\lambda(y) + \lambda d(x,y) \rbrace, \quad x\in E; $$ where $$ S_\lambda= \left\lbrace x\in E \: : \: u_\lambda(x) \geq u_0(x)+\delta+ \frac{\varepsilon_\lambda}{2} \right\rbrace. $$ Observe that, since $u_\mu \leq u_0+ \delta$ on $F,$ the sets $S_\lambda$ and $F$ are disjoint. Since $u_\lambda$ is $\lambda$-Lipschitz on $E$ (and, in particular, on $F \cup S_\lambda$), the function $v_\lambda$ is the greatest $\lambda$-Lipschitz extension of $u_\lambda$ from the set $F \cup S_\lambda.$ Thus $v_\lambda = u_\lambda$ on $F \cup S_\lambda$ and $u_\lambda \leq v_\lambda$ on $E.$ Hence, by \eqref{definitionulambda}, we will have that $v_\lambda = u_\lambda$ as soon as we see that $v_\lambda \leq u_0 + \delta+ \varepsilon_\lambda$ on $E.$ Let us define $$ G_\lambda = \lbrace x\in E \setminus \left( F \cup S_\lambda \right) \: : \: v_\lambda(x) \geq u_0(x) + \delta + \varepsilon_\lambda \rbrace. $$ \begin{claim}\label{subsetemptyclaim} $G_\lambda = \emptyset.$ \end{claim} Assume that $G_\lambda \neq \emptyset.$ Since $E\setminus F$ is bounded, $v_\lambda-u_0$ is bounded on $G_\lambda$ and we can define $$ a:= \sup_{G_\lambda} \lbrace v_\lambda - u_0 \rbrace. $$ It is obvious that $a \geq \delta + \varepsilon_\lambda.$ We can pick a point $y \in G_\lambda$ such that \begin{equation}\label{approximationsupremumglambda} v_\lambda(y)-u_0(y) \geq a - \frac{\varepsilon_\lambda}{2}. \end{equation} We next define the function $$ w_\lambda : = \max \lbrace u_\lambda , v_\lambda -a + \delta + \varepsilon_\lambda \rbrace : E \to \mathbb{R}. $$ The function $w_\lambda$ is $\lambda$-Lipschitz on $E$ and satisfies the following. 
\item[] $(i)$ On the set $ F \cup S_\lambda,$ we have $v_\lambda= u_\lambda.$ Since $a \geq \delta + \varepsilon_\lambda,$ we have that $w_\lambda = u_\lambda$ on $F \cup S_\lambda.$ In particular $w_\lambda = u_\mu$ on $F.$ \item[] $(ii)$ On $G_\lambda,$ we have, by the definition of $a,$ that $ v_\lambda -a \leq u_0. $ Since we always have $u_\lambda \leq u_0 + \delta + \varepsilon_\lambda,$ the function $w_\lambda$ satisfies $w_\lambda \leq u_0 + \delta + \varepsilon_\lambda$ on $G_\lambda.$ \item[] $(iii)$ If $x\in E \setminus (G_\lambda \cup F \cup S_\lambda),$ then $$ v_\lambda (x)-a < u_0(x)+ \delta + \varepsilon_\lambda - a \leq u_0(x), $$ which, together with $u_\lambda \leq u_0 + \delta+ \varepsilon_\lambda$ on $E,$ implies $w_\lambda(x) \leq u_0(x) + \delta + \varepsilon_\lambda.$ \medskip From the remarks $(i), (ii)$ and $(iii)$ above we obtain that $w_\lambda \leq u_0 + \delta+ \varepsilon_\lambda$ on $E$ with $w_\lambda = u_\mu$ on $F.$ By \eqref{definitionulambda} we must have $w_\lambda \leq u_\lambda$ on $E.$ But for the point $y \in G_\lambda$ chosen in \eqref{approximationsupremumglambda}, it follows that $$ u_\lambda(y) \geq w_\lambda(y) \geq v_\lambda(y)-a+ \delta + \varepsilon_\lambda \geq u_0(y)+ \delta + \frac{\varepsilon_\lambda}{2} . $$ It turns out that $y$ belongs to $S_\lambda,$ which is a contradiction since $G_\lambda$ and $S_\lambda$ are disjoint subsets. This proves Claim \ref{subsetemptyclaim}. \medskip Finally, because $G_\lambda = \emptyset,$ it is clear that $v_\lambda \leq u_0 + \delta+ \varepsilon_\lambda$ on $E$ and therefore \begin{equation}\label{ulambdaequalinfimalconvolution} u_\lambda (x) = v_\lambda (x) = \inf_{y\in F \cup S_\lambda} \lbrace u_\lambda(y) + \lambda d(x,y) \rbrace, \quad x\in E. 
\end{equation} \medskip \item[] $\textbf{3.}$ We now show that $u_0(x) \leq u_\lambda(x) + \delta + \varepsilon_\lambda$ for every $x\in E.$ Since $u_0 \leq u_\mu + \delta =u_\lambda+ \delta$ on $F,$ we only need to consider the situation when $x\in E \setminus F.$ Let us fix $\eta >0.$ We can find a point $z_\eta \in F$ with \begin{equation}\label{pointminimizingdistance2} \dist(x,F) + \eta \geq d(x,z_\eta). \end{equation} Moreover, by \eqref{ulambdaequalinfimalconvolution}, it is clear that there exists $y_\eta \in F \cup S_\lambda$ such that \begin{equation}\label{sequenceynminimizing} u_\lambda(y_\eta) + \lambda d(x,y_\eta) \leq \min \left\lbrace u_\lambda(z_\eta) + \lambda d(x,z_\eta), u_\lambda(x)+\eta \right\rbrace. \end{equation} Suppose first that $y_\eta \in S_\lambda.$ In particular $y_\eta \in E \setminus F$ and $u_\lambda(y_\eta)\geq u_0(y_\eta)+ \delta+ \frac{\varepsilon_\lambda}{2}.$ Using that $u_0$ is $1$-Lipschitz together with \eqref{sequenceynminimizing} we obtain \begin{align*} u_0(x) & \leq u_0(y_\eta) + d(x,y_\eta) = u_0(y_\eta) + \lambda d(x,y_\eta) + (1- \lambda) d(x,y_\eta) \\ & \leq u_\lambda(y_\eta)-\delta- \frac{\varepsilon_\lambda}{2} + \lambda d(x,y_\eta)+ (1- \lambda) d(x,y_\eta) \\ & \leq u_\lambda(x)+\eta -\delta- \frac{\varepsilon_\lambda}{2}+ (1-\lambda) \diam( \overline{E \setminus F}) \leq u_\lambda(x) + \delta +\varepsilon_\lambda + \eta. 
\end{align*} Suppose now that $y_\eta \in F.$ Using \eqref{sequenceynminimizing} and the fact that $u_\lambda$ is $\mu$-Lipschitz on $F,$ we can write \begin{align*} u_\lambda(z_\eta) + \lambda d(x,z_\eta) & \geq u_\lambda (y_\eta) + \lambda d(x,y_\eta) \geq u_\lambda(z_\eta)- \mu d(y_\eta,z_\eta)+\lambda d(x,y_\eta) \\ & \geq u_\lambda(z_\eta)- \mu d(x,z_\eta)+(\lambda - \mu) d(x,y_\eta) , \end{align*} which implies, taking into account \eqref{pointminimizingdistance2}, \begin{equation}\label{comparabledistances2} d(x,y_\eta) \leq \frac{\lambda + \mu}{\lambda-\mu} d(x,z_\eta) \leq \frac{\lambda + \mu}{\lambda-\mu} (\dist(x,F)+\eta) \leq \frac{\varepsilon_\lambda}{1-\lambda} + \frac{\lambda + \mu}{\lambda-\mu} \: \eta. \end{equation} Bearing in mind that $u_\lambda + \delta = u_\mu+ \delta \geq u_0$ on $F$ and using \eqref{sequenceynminimizing} and \eqref{comparabledistances2} we obtain \begin{align*} & u_0(x) \leq u_0(y_\eta) + \lambda d(x,y_\eta) + (1- \lambda) d(x,y_\eta) \\ & \leq u_\lambda(y_\eta) + \delta + \lambda d(x,y_\eta) + (1- \lambda) d(x,y_\eta) \leq u_\lambda(x) + \eta + \delta+ \varepsilon_\lambda + (1-\lambda) \frac{\lambda + \mu}{\lambda-\mu} \: \eta. \end{align*} We have thus shown the inequality $$ u_0(x) \leq u_\lambda(x) + \delta + \varepsilon_\lambda + \eta + (1-\lambda) \frac{\lambda + \mu}{\lambda-\mu} \: \eta \quad \text{on} \quad E. 
$$ Letting $\eta \to 0^+,$ we conclude that $u_0(x) \leq u_\lambda(x) + \delta + \varepsilon_\lambda$ for every $x\in E.$ \end{proof} \medskip \begin{proof}[Proof of Theorem \ref{theoremglobalapproximation}] Without loss of generality we may and do assume that $K=1.$ Let us fix a point $p\in F$ and set $E_n = \left( E \cap B(p,n) \right) \cup F$ and $F_n=E_{n-1}$ for every $n \geq 1,$ where $F_1 = E_0 = F.$ It is clear that we can construct an increasing sequence of numbers $\lbrace \lambda_n \rbrace_{n \geq 1}$ with $\lambda_0 < \lambda_1$ and $\lambda_n <1$ for every $n \geq 1$ such that \begin{equation}\label{inequalitychoicesequence} \frac{1-\lambda_n}{\lambda_n-\lambda_{n-1}}(\lambda_n + \lambda_{n-1}) \left( \diam( \overline{E_n\setminus F_n}) + \dist( \overline{E_n\setminus F_n}, F_n ) \right) \leq \frac{\varepsilon}{2^n} \end{equation} for every $n \geq 1$ such that $E_n \setminus F_n \neq \emptyset;$ such a choice is possible because, for fixed $\lambda_{n-1},$ the left-hand side of \eqref{inequalitychoicesequence} tends to $0$ as $\lambda_n \to 1^-.$ Let us construct by induction a sequence of functions $\lbrace u_n \rbrace_{n \geq 1}$ such that each $u_n : E_n \to \mathbb{R}$ is $\lambda_n$-Lipschitz on $E_n$ and satisfies $u_n=u_{n-1}$ on $E_{n-1}$ and $|u_n-u_0| \leq \varepsilon$ on $E_n$ for every $n \geq 1.$ \medskip Since $u_0|_F$ is $\lambda_0$-Lipschitz, we can apply Lemma \ref{lemmalocalapproximation} with $F_1 \subset E_1, \: \delta=0, \: u_0: E_1 \to \mathbb{R}, \: \mu = \lambda_0, \: u_\mu =u_0|_{F_1}$ in order to obtain a $\lambda_1$-Lipschitz function $u_1: E_1 \to \mathbb{R}$ such that $u_1=u_\mu =u_0$ on $F_1$ and $ |u_1-u_0| \leq \frac{\varepsilon}{2}$ on $ E_1,$ thanks to \eqref{inequalitychoicesequence}. 
Observe that $u_1=u_0$ on $F.$ Now assume that we have constructed functions $u_1, \ldots, u_n$ respectively defined on $E_1, \ldots, E_n$ such that each $u_k$ is $\lambda_k$-Lipschitz on $E_k,$ with $u_k=u_{k-1}$ on $E_{k-1}=F_k$ and $$ | u_k- u_0| \leq \frac{\varepsilon}{2}+ \cdots + \frac{\varepsilon}{2^k} \quad \text{on} \quad E_k, $$ for every $1 \leq k \leq n.$ Then we apply Lemma \ref{lemmalocalapproximation} with $\delta = \varepsilon/2+ \cdots + \varepsilon/2^n, \: E_n=F_{n+1} \subset E_{n+1}, \: \mu = \lambda_n, \: u_\mu = u_n: E_n \to \mathbb{R}$ and $u_0: E_{n+1} \to \mathbb{R}$ to obtain a $\lambda_{n+1}$-Lipschitz function $u_{n+1}: E_{n+1} \to \mathbb{R}$ such that $u_{n+1}=u_n$ on $E_n$ and, thanks to \eqref{inequalitychoicesequence}, $$ | u_{n+1}- u_0| \leq \frac{\varepsilon}{2}+ \cdots + \frac{\varepsilon}{2^{n+1}} \quad \text{on} \quad E_{n+1}. $$ This proves the induction. We now define the function $u: E \to \mathbb{R}$ as follows: given $x\in E,$ we take a positive integer $n$ with $x\in E_n$ and set $u(x):=u_n(x).$ Since $E = \bigcup_{n \geq 1} E_n$ and each $u_n$ coincides with $u_{n-1}$ on $E_{n-1},$ the function $u$ is well defined. Because $u=u_n$ on each $E_n,$ we have that $$ |u-u_0| = |u_n-u_0| \leq \varepsilon \quad \text{on} \quad E_n, $$ which implies that $|u-u_0| \leq \varepsilon$ on $E.$ Also, note that $u=u_0$ on $F$ because $u=u_1$ on $E_1$ and $u_1=u_0$ on $F \subset E_1.$ Finally, given a bounded subset $B$ of $E,$ we can find some $n \in \mathbb{N}$ with $B \subset E_n.$ This implies that $u=u_n$ on $B,$ where $u_n$ is $\lambda_n$-Lipschitz and $\lambda_n <1.$ \end{proof} \section{Approximation by smooth Lipschitz functions: Proof of Theorem \ref{generaltheorem}}\label{sectionsmoothapproximation} This section contains the proofs of Theorems \ref{generaltheorem}, \ref{secondmaintheorem} and \ref{maintheoremnonseparablehilbert}. 
Let us start with the proof of Theorem \ref{generaltheorem}, so let us assume from now on that $X$ is a Banach space satisfying the hypothesis of Theorem \ref{generaltheorem} for some $k\in \mathbb{N} \cup \lbrace \infty \rbrace.$ We will need to use the following two claims. \begin{claim}\label{c0fineapproximation} Let $\Omega \subset X$ be an open subset and let $u : \Omega \to \mathbb{R}$ be a Lipschitz function. For every continuous function $\varepsilon : \Omega \to (0,+ \infty)$ there exists $v : \Omega \to \mathbb{R}$ of class $C^k(\Omega)$ such that \item[] $(1)$ $|u(x)-v(x)| \leq \varepsilon(x)$ for all $x\in \Omega.$ \item[] $(2)$ $\| D v(x) \|_* \leq \lip( u , B(x, \varepsilon(x)) \cap \Omega ) + \varepsilon(x)$ for all $x\in \Omega.$ \end{claim} \begin{proof} By replacing $\varepsilon$ with $\min\lbrace \varepsilon, \frac{1}{2} \dist( \cdot, \partial \Omega) \rbrace,$ we may and do assume that $\varepsilon \leq \frac{1}{2} \dist( \cdot, \partial \Omega)$ on $\Omega,$ which implies that $B(x, \varepsilon(x))$ is contained in $\Omega$ for every $x\in \Omega.$ By continuity of $\varepsilon,$ for each $p \in \Omega,$ there exists $0< \delta_p \leq \varepsilon(p)/4$ such that $\varepsilon(x) \geq \varepsilon(p)/2$ for all $x\in B(p, \delta_p).$ The assumption on $X$ implies in particular that there exists a constant $C_0 \geq 1$ such that, for every Lipschitz function $f: X \to \mathbb{R}$ and every $\eta >0,$ there exists a $C^k$ Lipschitz function $g: X \to \mathbb{R}$ such that $|f-g| \leq \eta$ on $X$ and $\lip(g,X) \leq C_0 \lip(f,X).$ Then, as a consequence of \cite[Lemma 3.6]{MJSLSG}, there exists a partition of unity $\lbrace \varphi_{n,p} \rbrace_{(n,p)\in \mathbb{N} \times \Omega}$ of class $C^k(\Omega)$ and Lipschitz such that $\sop(\varphi_{n,p}) \subset B(p,\delta_p)$ for every $(n,p) \in \mathbb{N} \times \Omega,$ and for every $x\in \Omega,$ there exists an open neighbourhood $U_x$ of $x$ and a positive integer $n_x$ such that 
\begin{align}\label{locallyfiniteness} & \text{If} \quad n >n_x, \quad \text{then} \quad U_x \cap \sop(\varphi_{n,p})= \emptyset \quad \text{for every} \: \: p\in \Omega. \\ & \text{If} \quad n \leq n_x, \quad \text{then} \quad U_x \cap \sop(\varphi_{n,p}) \neq \emptyset \quad \text{for at most one} \: \: p\in \Omega. \nonumber \end{align} We can assume that $u$ is extended to all of $X$ with the same Lipschitz constant. Using the assumption on $X,$ we can find a family of $C^k(X)$ Lipschitz functions $\lbrace v_{n,p} \rbrace_{(n,p)\in \mathbb{N} \times \Omega}$ such that, for every $(n,p) \in \mathbb{N} \times \Omega,$ \begin{equation}\label{approximationgivenbylemma} | u-v_{n,p}| \leq \frac{\varepsilon(p)}{(1+\lip(\varphi_{n,p}))2^{n+2}} \quad \text{on} \quad X \quad \text{and} \end{equation} \begin{equation}\label{preservinglocallipschitzconstant5} \lip(v_{n,p}, B(x_0,r)) \leq \lip(u, B(x_0, r+\delta_p)) + \delta_p \leq \lip(u, B(x_0, r+ \delta_p)) + \frac{\varepsilon(p)}{4} \end{equation} for every ball $B(x_0,r)$ contained in $\Omega.$ We define the approximation $v: \Omega \to \mathbb{R}$ by $$ v(x)= \sum_{(n,p) \in \mathbb{N} \times \Omega } v_{n,p}(x) \varphi_{n,p}(x), \quad x\in \Omega. $$ By the properties of the partition $\lbrace \varphi_{n,p} \rbrace_{(n,p)\in \mathbb{N} \times \Omega},$ the function $v$ is well defined and is of class $C^k(\Omega).$ Given $x\in \Omega,$ \eqref{approximationgivenbylemma} implies \begin{align*} |u(x)-v(x)| & \leq \sum_{ \lbrace (n,p) \: : \: B(p,\delta_p) \ni x \rbrace } | u(x)- v_{n,p}(x)| \: \varphi_{n,p}(x) \leq \sum_{ \lbrace (n,p) \: : \: B(p,\delta_p) \ni x \rbrace } \frac{\varepsilon(p)}{2} \: \varphi_{n,p}(x) \\ & \leq \sum_{ \lbrace (n,p) \: : \: B(p,\delta_p) \ni x \rbrace } \varepsilon(x) \: \varphi_{n,p}(x) = \varepsilon(x). \end{align*} This proves part $(1)$ of our claim. 
Now, let us estimate $\|Dv(x)\|_*.$ Since $\sum_{(n,p)} \varphi_{n,p}=1,$ we have that $\sum_{(n,p)} D \varphi_{n,p}=0$ on $ \Omega.$ Then, taking into account that $\sop(\varphi_{n,p}) \subset B(p, \delta_p)$ for every $(n,p) \in \mathbb{N} \times \Omega,$ we can write $$ Dv(x)= \sum_{ \lbrace (n,p) \: : \: B(p,\delta_p) \ni x \rbrace } D v_{n,p} (x) \varphi_{n,p} (x) + \sum_{ \lbrace (n,p) \: : \: B(p,\delta_p) \ni x \rbrace } (v_{n,p}(x)-u(x)) D \varphi_{n,p}(x). $$ Hence, \eqref{approximationgivenbylemma} together with \eqref{locallyfiniteness} lead us to \begin{align*} \| Dv(x)\|_* & \leq \sum_{ \lbrace (n,p) \: : \: B(p,\delta_p) \ni x \rbrace } \|D v_{n,p} (x)\|_* \: \varphi_{n,p} (x) + \sum_{ \lbrace (n,p) \: : \: \varphi_{n,p}(x) \neq 0 \rbrace } \frac{\varepsilon(p)}{(1+\lip(\varphi_{n,p}))2^{n+2}} \|D \varphi_{n,p}(x)\|_* \\ & \leq \sum_{ \lbrace (n,p) \: : \: B(p,\delta_p) \ni x \rbrace } \|D v_{n,p} (x)\|_* \: \varphi_{n,p} (x) +\frac{\varepsilon(x)}{2} . \end{align*} Note that if $p \in \Omega$ is such that $x\in B(p,\delta_p),$ then $\varepsilon(x) \geq \varepsilon(p)/2 \geq 2 \delta_p$ and we can write, by virtue of \eqref{preservinglocallipschitzconstant5}, that $$ \|Dv_{n,p}(x)\|_* \leq \lip( v_{n,p}, B(x, \varepsilon(x)-\delta_p)) \leq \lip( u, B(x, \varepsilon(x))) + \frac{\varepsilon(p)}{4} \leq \lip( u, B(x, \varepsilon(x))) + \frac{\varepsilon(x)}{2}. $$ Therefore, we obtain $$ \| Dv(x)\|_* \leq \sum_{ \lbrace (n,p) \: : \: B(p,\delta_p) \ni x \rbrace } \left( \lip( u, B(x, \varepsilon(x))) + \frac{\varepsilon(x)}{2} \right) \varphi_{n,p}(x) +\frac{\varepsilon(x)}{2} = \lip( u, B(x, \varepsilon(x))) + \varepsilon(x). 
$$ This completes the proof of statement $(2).$ \end{proof} \begin{claim}\label{propositionapproximationglobally1lipschitz} Let $\Omega \subset X$ be an open subset and let $u: \Omega \to \mathbb{R}$ be a $K$-Lipschitz function with the property that $\lip(u,B) <K$ for every bounded subset $B$ of $\Omega.$ Then, given a continuous function $\varepsilon: \Omega \to (0,+ \infty),$ there exists $v: \Omega \to \mathbb{R}$ of class $C^k(\Omega)$ such that \item[] $(1)$ $|u(x)-v(x)| \leq \varepsilon(x)$ for every $x\in \Omega.$ \item[] $(2)$ $\| D v (x)\|_* <K$ for all $x\in \Omega.$ \end{claim} \begin{proof} Let us define $L(r) = \lip(u, B(0, r+1) \cap \Omega)$ for every $r \geq 0.$ The function given by $\delta(r)= \frac{K-L(r)}{2},$ for every $r \geq 0,$ is positive and nonincreasing. The function $\tilde{\delta} : [0,+ \infty) \to \mathbb{R}$ given by $$ \tilde{\delta}(t) = \int_{t}^{t+1} \delta(s) ds, \quad t\geq 0, $$ is continuous and satisfies $\tilde{\delta} \left( [0,+ \infty) \right) \subset (0,K)$ and $\tilde{\delta} \leq \delta$ on $[0,+\infty).$ Let us define the mapping $ \rho : \Omega \to (0,+\infty)$ by $\rho(x)= \tilde{\delta}( \| x\|)$ for every $x\in \Omega.$ Then $\rho$ is continuous and we can replace $\varepsilon$ by $ \min \lbrace 1, \varepsilon,\rho, \frac{1}{2}\dist(\cdot, \partial \Omega) \rbrace$ on $\Omega.$ In particular, this implies that $B(x,\varepsilon(x)) \subset \Omega$ for every $x\in \Omega.$ We thus have from Claim \ref{c0fineapproximation} that there exists $v \in C^k(\Omega)$ such that $$ |u(x)-v(x)| \leq \varepsilon(x), \quad x\in \Omega, $$ and $$ \| Dv(x)\|_* \leq \lip(u, B(x, \varepsilon(x)) ) + \varepsilon(x), \quad x\in \Omega. 
$$ Since $\varepsilon \leq 1,$ the ball $B(x, \varepsilon(x))$ is contained in $B(0, \| x\| + 1) \cap \Omega.$ Hence, the last inequality leads us to $$ \| Dv(x)\|_* \leq L( \|x\| ) + \varepsilon(x) \leq L( \|x\| ) + \rho(x) \leq \frac{K+L( \|x\|)}{2} $$ for every $x\in \Omega.$ This shows that $\| Dv(x)\|_*<K$ on $\Omega.$ \end{proof} \medskip We are now ready to prove Theorem \ref{generaltheorem}. \begin{proof}[Proof of Theorem \ref{generaltheorem}] Assume that $X$ satisfies the hypothesis of Theorem \ref{generaltheorem} for some $k\in \mathbb{N} \cup \lbrace \infty \rbrace.$ Let us denote by $\lambda_0$ and $K$ the Lipschitz constants $\lip(u_0, \partial \Omega)$ and $\lip(u_0, \overline{\Omega})$ of $u_0$ on $\partial \Omega$ and $\overline{\Omega}$ respectively. By Theorem \ref{theoremglobalapproximation}, there exists a function $u : \overline{\Omega} \to \mathbb{R}$ with \begin{equation}\label{estimationauxiliarfunction1} |u_0-u| \leq \varepsilon/2 \quad \text{on} \quad \overline{\Omega} , \quad u=u_0 \quad \text{on} \quad \partial \Omega, \end{equation} and the Lipschitz constant of $u$ on every bounded subset of $\overline{\Omega}$ is strictly smaller than $K.$ Now, applying Claim \ref{propositionapproximationglobally1lipschitz} for $u,$ we can find a function $v: \Omega \to \mathbb{R}$ of class $C^k(\Omega)$ such that \begin{equation}\label{estimationauxiliarfunction2} |u(x)-v(x)| \leq \min \left\lbrace \frac{\varepsilon}{2}, \dist( x, \partial \Omega) \right\rbrace \quad \text{and} \quad \| Dv(x)\|_* <K \quad \text{for all} \quad x\in \Omega. \end{equation} If we extend $v$ to the boundary $\partial \Omega$ of $\Omega$ by setting $v= u$ on $\partial \Omega$ and we use the inequality \eqref{estimationauxiliarfunction2}, we obtain, for every $x\in \partial \Omega, \: y\in \Omega,$ that $$ |v(x)-v(y)| \leq |u(x)-u(y)| + |u(y)-v(y)| \leq K \|x-y\| + \dist(y, \partial \Omega) \leq (1+K) \|x-y\|. 
$$ This proves that the function $v$ is continuous on $\overline{\Omega}.$ Therefore, the fact that $v$ is $K$-Lipschitz on $\overline{\Omega}$ is a consequence of the following well-known fact. \begin{fact}\label{factlipschitzconstant} {\em If $w: \overline{\Omega} \to \mathbb{R}$ is continuous on $\overline{\Omega}$, is differentiable on $\Omega,$ is $K$-Lipschitz on $\partial \Omega$ and satisfies $\| Dw(x)\|_* \leq K$ for every $x\in \Omega,$ then $w$ is $K$-Lipschitz on $\overline{\Omega}.$ } \end{fact} It only remains to see that $v$ is $\varepsilon$-close to $u_0.$ Indeed, by using \eqref{estimationauxiliarfunction1} and \eqref{estimationauxiliarfunction2} we obtain $$ |u_0-v| \leq |u_0-u| + |u-v| \leq \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon \quad \text{on} \quad \overline{\Omega}. $$ \end{proof} \subsection{Finite dimensional and Hilbert spaces} We are now going to prove that if $X$ is a finite dimensional space or a Hilbert space, then $X$ satisfies the assumption of Theorem \ref{generaltheorem} with $k=\infty$ in the separable case and with $k=1$ in the non-separable case. \begin{lemma}\label{lemmapartialsmoothapproximation} Let $X$ be a separable Hilbert space or a finite dimensional normed space. Given a $K$-Lipschitz function $f: X \to \mathbb{R}$ and $\varepsilon >0,$ there exists a function $g$ of class $C^\infty(X)$ such that $|g-f| \leq \varepsilon$ on $X$ and $\lip(g, B(x_0,r)) \leq \lip(f, B(x_0,r+ \varepsilon))+ \varepsilon$ for every ball $B(x_0,r) \subset X.$ On the other hand, if $X$ is a non-separable Hilbert space, the statement holds replacing $C^\infty$ smoothness with $C^1.$ \end{lemma} \begin{proof} Let us first consider the case when $X=\mathbb{R}^d$ is endowed with an arbitrary norm. 
If $f: \mathbb{R}^d \to \mathbb{R}$ is Lipschitz and we consider a function $\theta_\delta : \mathbb{R}^d \to \mathbb{R}$ of class $C^\infty(\mathbb{R}^d)$ with $\sop(\theta_\delta) \subseteq B(0,\delta)$ and $\int_{\mathbb{R}^d} \theta_\delta = 1,$ it is well known that the integral convolution $ f_\delta = f* \theta_\delta$ is a Lipschitz function of class $C^\infty$ such that $$ \lip(f_\delta, S) \leq \lip(f, S+ B(0,\delta)) \quad \text{for every subset} \quad S \subset \mathbb{R}^d. $$ In addition, $f_\delta \to f$ uniformly on $\mathbb{R}^d$ as $\delta \to 0^+.$ This proves the lemma in the finite dimensional case. \medskip Now, let $X$ be a Hilbert space and let us denote by $\| \cdot \|$ the norm on $X.$ If $g: X \to \mathbb{R}$ is a $K$-Lipschitz function, then the functions defined by $$ g_\lambda(x)= \inf_{y\in X} \lbrace g(y) + \tfrac{1}{2\lambda}\| x-y\|^2 \rbrace, \quad g^\mu(x)= \sup_{y\in X} \lbrace g(y) - \tfrac{1}{2\mu}\| x-y\|^2 \rbrace $$ for all $x\in X$ and $ \lambda, \mu >0,$ are $K$-Lipschitz as well. Also, it is easy to see that the infimum/supremum defining $g_\lambda(x)$ and $g^\mu(x)$ can be restricted to the ball $B(x, 2 \lambda K)$ and $B(x,2 \mu K)$ respectively. Let us now prove the following relation between the local Lipschitz constants of $g$ and $g_\lambda:$ \begin{equation}\label{preservinglocallipschitzconstant1} \lip(g_\lambda , B(x_0,r)) \leq \lip(g, B(x_0,r+2\lambda K)) \quad \text{for every ball} \quad B(x_0,r) \subset X. \end{equation} Indeed, let us fix a ball $B(x_0,r),$ two points $x,x'\in B(x_0,r)$ and $\varepsilon >0.$ We can find $y \in B(x', 2 \lambda K)$ such that $$ g(y)+\tfrac{1}{2\lambda}\| x'-y\|^2 \leq g_\lambda(x') + \varepsilon. 
$$ The points $y$ and $x-x'+y$ belong to $B(x_0,r+ 2\lambda K)$ and then we can write \begin{align*} g_\lambda(x)-g_\lambda(x') & \leq g(x-x'+y) + \tfrac{1}{2\lambda}\| x-(x-x'+y)\|^2-g(y)\\ & \quad - \tfrac{1}{2\lambda} \| x'-y\|^2+ \varepsilon \leq \lip(g, B(x_0,r+2\lambda K)) \| x-x'\| + \varepsilon, \end{align*} which easily implies \eqref{preservinglocallipschitzconstant1}. Similarly, we show that \begin{equation}\label{preservinglocallipschitzconstant2} \lip(g^\mu , B(x_0,r)) \leq \lip(g, B(x_0,r+2\mu K)) \quad \text{for every ball} \quad B(x_0,r) \subset X. \end{equation} Now, we consider the Lasry-Lions sup-inf convolution formula for $g,$ that is $$ g_\lambda^\mu(x)= \sup_{z\in X} \inf_{y\in X} \lbrace g(y)+ \tfrac{1}{2\lambda} \| z-y\|^2- \tfrac{1}{2 \mu} \| x-z\|^2 \rbrace $$ for all $x\in X$ and $0 < \mu < \lambda.$ By the preceding remarks, the function $g_\lambda^\mu$ is $K$-Lipschitz and satisfies that \begin{equation}\label{preservinglocallipschitzconstant3} \lip(g_\lambda^\mu , B(x_0,r)) \leq \lip(g, B(x_0,r+2(\lambda + \mu) K)) \quad \text{for every ball } B(x_0,r) \subset X. \end{equation} Moreover, it is proved in \cite{LasryLions, ATAZ} that $g_\lambda^\mu$ is of class $C^1(X)$ and $g_\lambda^\mu$ converges uniformly to $g$ as $0 < \mu < \lambda \to 0.$ Now, given our $K$-Lipschitz function $f: X \to \mathbb{R}$ and $\varepsilon >0,$ we can find $0 < \mu < \lambda$ small enough so that the function $f_\lambda^\mu$ is $K$-Lipschitz and of class $C^1(X), \: | f_\lambda^\mu-f| \leq \varepsilon/2$ on $X$ and, by virtue of \eqref{preservinglocallipschitzconstant3}, \begin{equation}\label{preservinglocallipschitzconstant4} \lip(f_\lambda^\mu, B(x_0,r)) \leq \lip(f, B(x_0,r+ \varepsilon))\quad \text{for every ball } B(x_0,r) \subset X. 
\end{equation} If we further assume that $X$ is separable, then we can use \cite[Theorem 1]{Moulis} in order to obtain a function $g \in C^\infty(X)$ such that $$ | f_\lambda^\mu - g| \leq \frac{\varepsilon}{2} \quad \text{and} \quad \| D f_\lambda^\mu - Dg\|_* \leq \varepsilon \quad \text{on} \quad X, $$ where $\| \cdot \|_*$ denotes the dual norm of $\| \cdot \|.$ From the first inequality we see that $| f- g| \leq \varepsilon$ on $X.$ The second one together with \eqref{preservinglocallipschitzconstant4} shows that $$ \lip(g,B(x_0,r)) \leq \lip(f_\lambda^\mu, B(x_0,r))+ \varepsilon \leq \lip(f, B(x_0,r+ \varepsilon))+ \varepsilon $$ for every ball $B(x_0,r)$ of $X.$ \end{proof} Combining Lemma \ref{lemmapartialsmoothapproximation} with Theorem \ref{generaltheorem}, we obtain Theorem \ref{maintheoremnonseparablehilbert}, as well as Theorem \ref{secondmaintheorem} in the case when $X$ is a separable Hilbert space or a finite dimensional space. \begin{remark} \em{In the case when the function to be approximated vanishes on the boundary, the proof of Theorem \ref{secondmaintheorem} for finite dimensional spaces can be greatly simplified as we do not need to use Theorem \ref{theoremglobalapproximation}. Indeed, if $\mathbb{R}^n$ is endowed with an arbitrary norm and $u_0: \overline{\Omega} \to \mathbb{R}$ is a Lipschitz function with $u_0=0$ on $\partial \Omega,$ given $\varepsilon >0,$ we define the function $\varphi_\varepsilon : \mathbb{R} \to \mathbb{R}$ by \begin{equation}\label{equationfunctioncomposition} \varphi_\varepsilon(t) = \left\lbrace \begin{array}{ccl} t+\frac{\varepsilon}{2} & \mbox{if } & t \leq - \frac{\varepsilon}{2}, \\ 0 & \mbox{if }& -\frac{\varepsilon}{2}\leq t\leq \frac{\varepsilon}{2}, \\ t-\frac{\varepsilon}{2} & \mbox{if }& t \geq \frac{\varepsilon}{2}. \end{array} \right. \end{equation} We can assume that $u_0$ is extended to all of $\mathbb{R}^n$ by putting $u_0=0$ on $\mathbb{R}^n \setminus \overline{\Omega},$ preserving the Lipschitz constant. 
The function $u = \varphi_\varepsilon \circ u_0$ defined on $\mathbb{R}^n$ is Lipschitz because so are $u_0$ and $\varphi_\varepsilon$, and $\lip(u,\mathbb{R}^n) \leq \lip(u_0, \mathbb{R}^n).$ Also, since $| \varphi_\varepsilon(t)-t| \leq \varepsilon/2$ for every $t\in \mathbb{R},$ it is clear that $$ | u(x)-u_0(x)| = | \varphi_\varepsilon(u_0(x))- u_0(x)| \leq \frac{\varepsilon}{2} \quad \text{for all} \quad x\in \mathbb{R}^n. $$ Now we define $$ v(x)=(u * \theta_\delta)(x)= \int_{\mathbb{R}^n} u(y) \theta_\delta(x-y) dy, \quad x\in \mathbb{R}^n, $$ where $\theta_\delta : \mathbb{R}^n \to \mathbb{R}$ is a $C^\infty(\mathbb{R}^n)$ function such that $\theta_\delta \geq 0, \: \int_{\mathbb{R}^n}\theta_\delta =1$ and $\sop(\theta_\delta) \subseteq B(0,\delta).$ Using the preceding remarks together with the well-known properties of the integral convolution of Lipschitz functions with mollifiers, it is straightforward to check that, for $\delta>0$ small enough, $v$ is the desired approximating function, i.e., $v$ is of class $C^\infty(\mathbb{R}^n)$ with $v=0$ on $\partial \Omega, \: \lip(v, \mathbb{R}^n) \leq \lip(u_0, \mathbb{R}^n)$ and $|u_0-v| \leq \varepsilon$ on $\overline{\Omega}.$ } \end{remark} \subsection{The space $c_0(\Gamma)$} Let us now prove that the space $X= c_0(\Gamma)$ satisfies the hypothesis of Theorem \ref{generaltheorem} with $k=\infty.$ In order to do this, we will use the construction given in \cite[Theorem 1]{HajekJohanis2} and we will observe that the local Lipschitz constants are preserved. 
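The construction rests on an elementary finiteness property of $c_0(\Gamma)$, which we record here for the reader's convenience (this observation is standard and is not part of the cited construction):

```latex
% For x=(x_\gamma)_{\gamma\in\Gamma}\in c_0(\Gamma) and \eta>0, only finitely
% many coordinates of x exceed \eta in modulus:
\[
\# \lbrace \gamma\in\Gamma \: : \: |x_\gamma| > \eta \rbrace < \infty .
\]
% Since the truncation \varphi_{2\eta} of \eqref{equationfunctioncomposition}
% vanishes on [-\eta,\eta], applying it coordinatewise kills every small
% coordinate, so the resulting vector has finite support:
\[
\lbrace \gamma\in\Gamma \: : \: \varphi_{2\eta}(x_\gamma) \neq 0 \rbrace
\subseteq \lbrace \gamma\in\Gamma \: : \: |x_\gamma| > \eta \rbrace .
\]
```

This is the mechanism that makes the composition with the coordinatewise truncation in the proof below depend, near each point, on finitely many coordinates only.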
\begin{lemma}\label{lemmapartialc0} If $\Gamma$ is an arbitrary set, $X=c_0(\Gamma)$ and $f: X \to \mathbb{R}$ is a Lipschitz function, then, for every $\varepsilon >0,$ there exists a function $g: X \to \mathbb{R}$ of class $C^\infty(X)$ such that $|f-g| \leq \varepsilon$ on $X$ and $\lip(g,B(x_0,r)) \leq \lip( f, B(x_0, r+ \varepsilon))$ for every ball $B(x_0,r) \subset X.$ \end{lemma} \begin{proof} If $K$ denotes the Lipschitz constant of $f,$ let us consider $0 < \eta <\frac{\varepsilon}{2(1+K)}.$ Let us define the function $\phi: X \to X$ by $\phi(x) = \left( \varphi_{2\eta}(x_\gamma) \right)_{\gamma \in \Gamma}$ for every $x=(x_\gamma)_{\gamma \in \Gamma} \in X,$ where $\varphi_{2\eta}$ is defined in \eqref{equationfunctioncomposition}. Thus $\phi$ is $1$-Lipschitz and satisfies $\| \phi(x)-x\| \leq \eta$ for every $x\in X.$ By composing $f$ with $\phi$ we obtain a function $h = f \circ \phi$ satisfying $|f-h| \leq \frac{\varepsilon}{2}$ and with the property that, for every $x\in X,$ there exists a finite subset $F$ of $\Gamma$ such that whenever $y,y'\in B(x, \frac{\eta}{2})$ and $P_F(y)=P_F(y')$ (here $P_F(z) = \sum_{\gamma \in F} e^*_\gamma(z) e_\gamma$ for every $z\in X$) we have $h(y)=h(y').$ Moreover, we observe that if $x,y\in B(x_0,r) \subset X,$ then $\phi(x), \phi(y) \in B(x_0, r+\eta)$ and therefore $$ |h(x)-h(y)| \leq \lip( f, B(x_0,r+\eta) ) \| \phi(x)-\phi(y)\| \leq \lip( f, B(x_0,r+\eta) ) \| x-y\|; $$ which shows that $\lip(h,B(x_0,r)) \leq \lip( f, B(x_0,r+\eta) ).$ Now we use the construction of \cite[Lemma 6]{HajekJohanis2} to obtain the desired approximation $g:$ let us define $g $ as the limit of the net $\lbrace g_F \rbrace_{F\in \Gamma^{<\omega}},$ where each $g_F$ is defined by $$ g_F(x) =\int_{\mathbb{R}^{|F|}} h \Big ( x- \sum_{\gamma\in F} t_\gamma e_\gamma \Big ) \prod_{\gamma \in F} \theta(t_\gamma) d \lambda_{|F|}(t), \quad x\in X; $$ and $\theta$ is an even $C^\infty$ smooth non-negative function on $\mathbb{R}$ such that 
$\int_{\mathbb{R}} \theta=1$ and $\sop(\theta) \subset [ - c \varepsilon, c \varepsilon],$ for a suitable small constant $c>0.$ It turns out that $g$ is of class $C^\infty(X)$ with $|g-h| \leq \frac{\varepsilon}{2}$ on $X$ and with the property that, for every $x\in X,$ there exists a finite subset $F_x$ of $\Gamma$ such that $g(x)=g_H(x)$ for every finite subset $H$ of $\Gamma$ containing $F_x.$ See \cite[Lemma 6]{HajekJohanis2} for details. In addition, we notice that if $x,y\in B(x_0,r),$ and we consider finite subsets $F_x$ and $F_y$ of $\Gamma$ with the above property, then for the set $H=F_x \cup F_y,$ we have that \begin{align*} | g(x)& -g(y)| = |g_H(x)-g_H(y)| \leq \int_{\mathbb{R}^{|H|}} \bigg | h \Big ( x- \sum_{\gamma\in H} t_\gamma e_\gamma \Big ) - h \Big ( y- \sum_{\gamma\in H} t_\gamma e_\gamma \Big ) \bigg | \prod_{\gamma \in H} \theta(t_\gamma) d \lambda_{|H|}(t) \\ & \leq \lip( h, B(x_0, r+ c\varepsilon)) \| x-y\| \int_{\sop(\theta)^{|H|}} \prod_{\gamma \in H} \theta(t_\gamma) d \lambda_{|H|}(t)= \lip( h, B(x_0, r+ c\varepsilon)) \| x-y\|. \end{align*} This shows that $$ \lip(g,B(x_0,r)) \leq \lip( h, B(x_0, r+ c\varepsilon)) \leq \lip( f, B(x_0,r+c\varepsilon + \eta) ), $$ for every ball $B(x_0,r) \subset X.$ This proves the lemma. \end{proof} Combining Lemma \ref{lemmapartialc0} with Theorem \ref{generaltheorem}, we obtain Theorem \ref{secondmaintheorem} in the case $X=c_0(\Gamma).$ \section{Approximation by almost classical solutions of the Eikonal equation}\label{sectionapproximationalmostclassical} Throughout this section $X$ will denote a finite dimensional normed space with $\dim(X) \geq 2.$ At the end of the section we will complete the proof of Theorem \ref{maintheoremarbitrarynorm}. \medskip We need to recall the notion of \textit{almost classical solutions} of stationary Hamilton-Jacobi equations with Dirichlet boundary condition. 
This concept was introduced in \cite{DevilleMatheron} for the Eikonal equation and was generalized in \cite{DevilleJaramillo} as follows. \begin{definition} Let $\Omega$ be an open subset of $X$ and let $F : \mathbb{R} \times \Omega \times X^* \to \mathbb{R}$ and $u_0: \partial \Omega \to \mathbb{R}$ be continuous. A continuous function $u: \overline{\Omega} \to \mathbb{R}$ is an almost classical solution of the equation $F( u(x), x, Du(x))=0$ with Dirichlet condition $u=u_0$ on $\partial \Omega$ if: \begin{itemize} \item[$(i)$] $u=u_0$ on $\partial \Omega.$ \item[$(ii)$] $u$ is differentiable on $\Omega$ and $F( u(x), x, Du(x))\leq 0$ for all $x\in \Omega.$ \item[$(iii)$] $F( u(x), x, Du(x))=0$ for almost every $x\in \Omega.$ \end{itemize} \end{definition} In \cite[Theorem 4.1]{DevilleMatheron}, the existence of almost classical solutions of the Eikonal equation with homogeneous boundary data was proved, that is, $|D v| =1$ and $v=0$ on $\partial \Omega.$ This result was generalized in \cite{DevilleJaramillo} for an arbitrary function $F$ under certain conditions on $F.$ See \cite[Theorem 3.1]{DevilleJaramillo} or Proposition \ref{propositionexistencealmostclassicalsolution} below. \medskip We start by proving a slight refinement of \cite[Theorem 3.1]{DevilleJaramillo} for the existence of almost classical solutions, in which these solutions can be taken with arbitrarily small supremum norm. \begin{proposition}\label{propositionexistencealmostclassicalsolution} Let $\Omega \subset X$ be an open subset and let $F: \mathbb{R} \times \Omega \times X^* \to \mathbb{R}$ be a continuous mapping.
Assume that \begin{itemize} \item[(A)] $F(0,x,0) \leq 0$ for every $x\in \Omega.$ \item[(B)] For every compact subset $K$ of $\Omega$ there exist constants $\alpha_K, M_K >0$ such that for all $x\in K, \: r\in [0, \alpha_K]$ and $ x^* \in X^*$ with $\| x^* \|_* \geq M_K$ we have $F(r,x,x^*)>0.$ \end{itemize} Then, given $\varepsilon >0,$ there exists a function $u \geq 0$ on $\overline{\Omega}$ such that $| u |\leq \varepsilon$ on $\overline{\Omega}$ and $u$ is an almost classical solution of the equation $F(u(x),x,Du(x))=0$ on $\Omega$ with Dirichlet condition $u=0$ on $\partial \Omega.$ Moreover, the extension $\tilde{u}$ of $u$ defined by $\tilde{u}=0$ on $X \setminus \Omega$ is differentiable on $X.$ \end{proposition} \begin{proof} Although \cite[Theorem 3.1]{DevilleJaramillo} was originally stated when $X=\mathbb{R}^n$ is endowed with the euclidean norm, we can easily rewrite its statement (and its proof) for general finite dimensional normed spaces by using the following proposition, which is an easy consequence of \cite[Corollary 3.6]{DevilleMatheron}. \begin{proposition}\label{propositionmonotonicbidualnorm} Suppose that $B$ is a closed ball of $X^*.$ There exists a mapping $t : B \to S_{X^{**}}$ such that if $(\sigma_n)_n \subset B$ is a sequence with $t(\sigma_n)(\sigma_{n+1}-\sigma_n) \geq 0$ for every $n,$ then $(\sigma_n)_n$ converges. \end{proposition} In \cite[Theorem 3.1]{DevilleJaramillo}, $\Omega$ is decomposed as $\Omega = \bigcup_{j \geq 1} C_j,$ where $\lbrace C_j\rbrace_{j \geq 1}$ is a locally finite family of closed cubes and the function $u$ satisfies $u=0$ on $\bigcup_{j \geq 1} \partial C_j$ (because $u$ is the sum of a series of functions all vanishing on this union).
Moreover, it is possible to choose the covering $\lbrace C_j\rbrace_{j \geq 1}$ so that $\diam(C_j) \leq \varepsilon$ for every $j \geq 1,$ and then, the Mean Value Theorem yields that $|u| \leq \varepsilon$ on $\Omega.$ \end{proof} \begin{proof}[Proof of Theorem \ref{maintheoremarbitrarynorm}] Given a $1$-Lipschitz function $u_0: \overline{\Omega} \to \mathbb{R}$ such that $u_0$ is $\lambda_0$-Lipschitz on $\partial \Omega $ for some $\lambda_0<1$ and given $\varepsilon >0,$ we can find, thanks to Theorem \ref{secondmaintheorem}, a $1$-Lipschitz function $v : \overline{\Omega} \to \mathbb{R}$ of class $C^\infty(\Omega)$ such that \begin{equation}\label{estimationauxiliarfunction3} |u_0-v| \leq \frac{\varepsilon}{2} \quad \text{on} \quad \overline{\Omega}, \quad v=u_0 \quad \text{on} \quad \partial \Omega. \end{equation} Let us define $F: \Omega \times X^* \to \mathbb{R}$ by $F(x, x^*) = \| x^* + D v (x) \|_*-1,$ for every $(x, x^*) \in \Omega \times X^* .$ Because $v$ is $1$-Lipschitz on $\overline{\Omega},$ we have $F(x,0) \leq 0$ for every $x\in \Omega$, which means that the function identically $0$ is a subsolution to the problem \begin{equation} \left\lbrace \begin{array}{ccl}\label{particularHJequation} F(x,D u(x))=0 & \text{on } \Omega , \\ u = 0 & \text{on } \partial \Omega, \end{array} \right. \end{equation} Also, observe that, whenever $\| x^*\|_*\ge 3$, we have, for all $x\in\Omega$, $F(x, x^*) \ge 1$. 
Hence, Proposition \ref{propositionexistencealmostclassicalsolution} provides an almost classical solution $u$ to problem \eqref{particularHJequation} such that $| u| \leq \varepsilon/2$ on $\overline{\Omega}.$ Let us define $w = u+ v$ on $\overline{\Omega}.$ Then $w$ is continuous on $\overline{\Omega}$ and differentiable on $\Omega$ with $\| Dw(x)\|_* = \| D u(x)+ Dv(x)\|_* \leq 1$ for every $x\in \Omega$ and $\| Dw(x)\|_*=1$ for almost every $x\in \Omega.$ Also, $w$ satisfies that $w=v=u_0$ on $\partial \Omega$ and $|w-v| \leq \varepsilon/2$ on $\overline{\Omega}.$ Using Fact \ref{factlipschitzconstant}, we obtain that $w$ is in fact $1$-Lipschitz on $\overline{\Omega}.$ Finally note that $$ |u_0-w| \leq |v-w| + |u_0-v| \leq \frac{\varepsilon}{2} + \frac{\varepsilon}{2} \leq \varepsilon \quad \text{on} \quad \overline{\Omega}. $$ This completes the proof of Theorem \ref{maintheoremarbitrarynorm}. \end{proof} \section{The limiting case}\label{sectionexample} In this section we are concerned with the construction of functions $u_0$ with prescribed values on the boundary of $\Omega$ such that $u_0$ is differentiable on $\Omega$ and $\lip(u_0, \partial \Omega) = \lip(u_0, \Omega).$ \begin{proposition}\label{propositiontwodimensions} If $\Omega \subset \mathbb{R}^2$ is open and $u_0: \partial \Omega \to \mathbb{R}$ is $1$-Lipschitz for the usual euclidean distance, then there exists a differentiable $1$-Lipschitz function $w: \overline{\Omega} \to \mathbb{R}$ such that $| \nabla w| =1$ almost everywhere on $\Omega$ and $w=u_0$ on $\partial \Omega,$ i.e., there exist almost classical solutions of the Eikonal equation with boundary value $u_0.$ \end{proposition} \begin{proof} We know by O.
Savin's results in \cite{Savin} that the \textit{Absolutely Minimizing Lipschitz Extension} (AMLE for short) of $u_0$ to $\overline{\Omega}$ is of class $C^1(\Omega).$ In particular, there exists a $1$-Lipschitz extension $v: \overline{\Omega} \to \mathbb{R}$ of $u_0$ such that $v \in C^1(\Omega).$ If we consider the problem \begin{equation}\label{auxiliarhomogeneousequation} \left\lbrace \begin{array}{ccl} | \nabla u + \nabla v |=1 & \text{on } \Omega , \\ u = 0 & \text{on } \partial \Omega, \end{array} \right. \end{equation} and define $F: \Omega \times \mathbb{R}^2 \to \mathbb{R}$ by $F(x,p) = | p + \nabla v(x)|, \: x\in \Omega,\: p \in \mathbb{R}^2,$ we have that $F$ is a continuous function which is easily checked to satisfy the hypothesis of \cite[Theorem 3.1]{DevilleJaramillo} (see Proposition \ref{propositionexistencealmostclassicalsolution} in Section \ref{sectionapproximationalmostclassical}) for the existence of an almost classical solution to the problem \eqref{auxiliarhomogeneousequation}. If we denote by $u$ this solution and we set $w = u+v$ on $\overline{\Omega},$ it is clear that $w$ is the desired function. \end{proof} We notice that the proof of Proposition \ref{propositiontwodimensions} cannot be adapted for dimension $n \geq 3,$ because it is unknown whether or not the AMLE of $u_0$ is of class $C^1.$ We only know from the results in \cite{EvansSmart}, that these AMLE are differentiable everywhere. 
\begin{example}\label{counterexamplel1} {\em Consider the $\ell_1$ norm on $\mathbb{R}^2$ and define $\Omega = \lbrace (x,y) \in \mathbb{R}^2 \: : \: x^2+y^2<1 \rbrace$ and the function $u_0(x,y)= |x|-|y|$ on the boundary $\partial \Omega$ of $\Omega.$ The function $u_0$ is $1$-Lipschitz and no $1$-Lipschitz extension of $u_0$ to $\overline{\Omega}$ is differentiable at $(0,0).$} \end{example} \begin{proof} Given $(x,y), (x',y') \in \partial \Omega,$ we can easily write $$ | u_0(x,y)-u_0(x',y')| = \big | |x|-|x'| + |y'|-|y| \big | \leq |x-x'| + |y-y'| = \| (x,y)-(x',y')\|_1, $$ where the above inequalities are sharp. Thus, $u_0$ is a $1$-Lipschitz function on $\partial \Omega.$ Now, let $u : \overline{\Omega} \to \mathbb{R}$ be a $1$-Lipschitz extension of $u_0.$ We have that $u(0,0)\leq 0$ since $ u(0,0)+1 = u(0,0)-u(0,1) \leq 1.$ On the other hand, for every $x\in [-1,1],$ we can write \begin{align*} u(x,0) & \geq u(\sign(x),0) - \|(\sign(x),0)-(x,0)\|_1 = 1 -(1-|x|) = |x|\\ u(x,0) & \leq u(0,0)+ \|(x,0)-(0,0)\|_1 \leq |x| ; \end{align*} which implies that $u(x,0)=|x|$ for every $x\in [-1,1].$ Therefore $u$ is not differentiable at $(0,0).$ \end{proof} The above example shows in particular that if $u_0$ is extended to a $1$-Lipschitz function on $\overline{\Omega}$ and $\varepsilon>0,$ there is no $1$-Lipschitz function $v$ on $\overline{\Omega}$ which is differentiable on $\Omega, \: v=u_0$ on $\partial \Omega$ and $|u_0-v| \leq \varepsilon$ on $\overline{\Omega}.$ Thus Problem \ref{mainproblem} has a negative solution in the limiting case $\lip(u_0, \partial \Omega) = \lip(u_0,\overline{\Omega}).$ An example with the same properties can be obtained with the $\ell_\infty$ norm by means of the isometry $T : (\mathbb{R}^2, \| \cdot \|_1) \to (\mathbb{R}^2, \| \cdot \|_\infty), \: T(x,y)=(x+y,x-y).$ \section*{Acknowledgements} The second author wishes to thank the Institut de Math\'{e}matiques de Bordeaux, where this work was carried out.
\section{Introduction} IP geolocation is a longstanding problem in computer networking, with both active academic research and a wide array of commercial solutions and applications. IP geolocation is used for a variety of purposes, including mapping clients to nearby content delivery network (CDN) replicas, personalization of search results and advertising, and customization of content (e.g., weather, language localization). In a legal context, IP geolocation is used for digital rights management (e.g., geographic licensing restrictions), compliance with the laws and regulations of a region or country (e.g., gambling, sales taxes, privacy regulations), and to assist with law enforcement (e.g., determining jurisdictions or collecting evidence). In security contexts, government and commercial entities use it for counter-terrorism, attack attribution, monitoring access to private networks, and detecting potential fraud. It facilitates operations and site reliability (e.g., monitoring packet loss from a location), and informs infrastructure investments by both industry and policymakers~\cite{2010_gill_geolocation_circumvention,2006_gueye_constraint_based_geolocation,2018_ovidiu_geolocation_reverse_dns,2011_poese_uhlig_ip_geolocation_unreliable,2013_li_geolocation_moderately_connected,2011_wang_street_level_geolocation}. Computer science researchers also use IP geolocation to study the properties and evolution of the network itself, such as the structure and graph parameters of networks~\cite{2005_freedman_feamster_geographic_locality_ip_prefixes,2002_spring_rocketfuel_geo}. Increasingly, IP geolocation is being used to address various problems in {\em policy and social science} that entail drawing inferences about various demographics and geographies based on inferred locations of IP addresses.
Social scientists have noted the potential to use ``big data'' as a lens on human behaviors and interactions~\cite{2009_lazer_computational_social_science}, and as modern society is increasingly mediated through the Internet, many of our interactions are associated with IP addresses. Server logs and speed measurements, for instance, show who accesses resources and the quality of their connections. This allows aggregate statistics or time trends. But associating these behaviors and network conditions with human populations ultimately requires a way to map IP addresses to physical locations. A natural approach would be to use IP geolocation with census tract-scale precision to link IP addresses to physical locations. In this paper, we evaluate whether free and paid IP geolocation databases can achieve this level of accuracy, and we explore the determinants of IP geolocation accuracy. The accuracy of IP geolocation databases has practical implications for the answers to a wide range of social and public policy questions. One area of particular timeliness is that of the so-called ``digital divide.'' Calls for digital equity and inclusion, already urgent, have reached a fever pitch during the COVID-19 pandemic. Prominent studies of broadband performance from Microsoft and M-Lab rely on IP geolocation to associate Internet throughput and latencies with zip codes~\cite{2021_microsoft_broadband,2021_mlab_visualization}. \citeauthor{2019_ganelin_chuang_ip_mooc_regressive} studied whether or not geolocation databases could reliably indicate socioeconomic status of MOOC registrants with known physical addresses. That study ultimately concluded, as will we, that answering such questions based on existing IP geolocation databases is premature~\cite{2019_ganelin_chuang_ip_mooc_regressive}. We revisit this problem now, due both to its practical implications, and thanks to the availability of two highly-accurate and large-scale ground-truth datasets of GPS-located IP addresses.
These datasets, from Unacast and Ookla\textsuperscript{\textregistered}{} Speedtest Intelligence\textsuperscript{\textregistered}{}, afford us a view of consumer behaviors on both fixed-line and mobile networks that is markedly different from the geolocation targets used in past work. Table~\ref{tab:findings} presents our main findings. The rest of this paper is organized as follows. Section~\ref{sec:related} discusses related work in IP geolocation, both in research and in commercial product offerings. Section~\ref{sec:data} describes the datasets that we use for the analysis in this paper. Section~\ref{sec:quality} evaluates the quality of the datasets that we are using, in particular exploring the suitability of using GPS data as a ``ground truth'' for evaluating IP geolocation databases. Section~\ref{sec:results} presents the results of our study, including findings about the circumstances under which IP geolocation is more or less accurate. In Section~\ref{sec:discussion}, we interpret and extend our results in the context of research on human populations and privacy. We conclude in Section~\ref{sec:conclusion}. \input{tabs/main_findings.tex} \section{Related Work}\label{sec:related} Past work on IP geolocation generally takes three approaches, as outlined by \citeauthor{2001_padmanabhan_geoping_geocluster}~\cite{2001_padmanabhan_geoping_geocluster}. Their IP geolocation work, IP2Geo, compared the complementary strengths of active latency measurements (GeoPing), active traceroutes paired with DNS hints (GeoTrack), and static databases of outside information (GeoCluster). Each of these approaches has evolved. Padmanabhan and Subramanian concluded that database-driven methods held the greatest promise. Commercial products have accordingly built databases with proprietary methods that include registry information, outside data, and active methods. On the other hand, academic work has tended to focus on active and DNS-based measurements.
\paragraph{IP geolocation methods.} Starting with DNS, \citeauthor{2002_spring_rocketfuel_geo} developed techniques in their Rocketfuel project to map infrastructure (i.e., routers) to physical locations. A significant contribution was to optimize traceroute targets to minimize redundancy and ensure that each path will traverse its target ISP~\cite{2002_spring_rocketfuel_geo}, although their use of the DNS to geolocate routers was pioneering at the time. Their subsequent approach to DNS hint identification was largely manual~-- ``browsing through the list of router names''~-- but the resultant \texttt{undns} tool has proven influential and enduring. \citeauthor{2005_freedman_feamster_geographic_locality_ip_prefixes} extended \texttt{undns}' coverage~\cite{2005_freedman_feamster_geographic_locality_ip_prefixes}. These projects were driven by questions about properties of the network, specifically the topology of large ISPs and the efficiency of block assignments in BGP routing tables. More recently, \citeauthor{2018_ovidiu_geolocation_reverse_dns}~\cite{2018_ovidiu_geolocation_reverse_dns} attempted to enumerate all possible DNS city name hints and finalize location decisions with machine learning. Like IP2Geo, the authors relied on a large dataset from Microsoft for their ground truth, although the ground truth data was from Bing instead of Hotmail. In the latency-based space, \citeauthor{2006_gueye_constraint_based_geolocation}~\cite{2006_gueye_constraint_based_geolocation} and \citeauthor{2006_katz-basett_geolocation_delay_and_topology}~\cite{2006_katz-basett_geolocation_delay_and_topology} introduced constraint-based geolocation (CBG) and topology-based geolocation (TBG). CBG is essentially the intersection of several latency-derived distance buffers, while TBG also localizes intermediate hosts so that targets can be constrained by their relation to passive landmarks rather than just active probes.
Subsequently, Octant incorporated both positive \emph{and negative} constraints (the IP address is \emph{not} within a certain radius)~\cite{2007_wong_octang_geolocation_negative}. In addition to this ``geometric'' approach are several statistical strategies. \citeauthor{2010_erikkson_barford_learning_based_geolocation} developed first a Bayesian approach and then a likelihood-driven choice among possibilities within the CBG-derived regions~\cite{2010_erikkson_barford_learning_based_geolocation,2012_eriksson_barford_posit_lightweight_geolocation}. Other work presents strategies using kernel density and maximum likelihood estimation~\cite{2009_youn_kde_geolocation,2010_arif_mle_geolocation}. It is also possible to constrain location from the covariance matrix of latency measurements with locations. Notable in \citeauthor{2010_erikkson_barford_learning_based_geolocation}'s Bayesian work is the insight that outside information can help constrain or inform geolocation. They used population as a measure of places' importance, as have later researchers~\cite{2018_ovidiu_geolocation_reverse_dns}. Other forms of information help as well. In trace-based work reminiscent of TBG, \citeauthor{2011_wang_street_level_geolocation} performed extensive webscraping and analysis to identify and confirm businesses with locally-hosted sites that they could ``enlist'' as passive landmarks. They used those landmarks to identify the locations of routers near the geolocation target~\cite{2011_wang_street_level_geolocation}. Scalability has long been a limitation of active measurements. Since locations are most constrained by the closest locations, \citeauthor{2012_zi_geolocations_of_millions_of_addresses} developed methods to prioritize measurements from nearby hosts, effectively by localizing avatars from subnets~\cite{2012_zi_geolocations_of_millions_of_addresses}.
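For concreteness, the geometric core of CBG can be sketched in a few lines of Python: each landmark's round-trip time is converted to an upper bound on great-circle distance using a rule-of-thumb propagation speed (roughly two-thirds of the speed of light in vacuum), and a candidate location is feasible only if it falls inside every landmark's disk. The propagation constant, landmark coordinates, and RTTs below are illustrative assumptions, not values taken from the systems cited above.

```python
from math import radians, sin, cos, asin, sqrt

# Rule-of-thumb propagation speed in fiber: ~2/3 the speed of light,
# i.e. roughly 200 km per millisecond of one-way delay.
SPEED_KM_PER_MS = 200.0

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def cbg_feasible(candidate, landmarks):
    """A candidate location is feasible if it lies inside every landmark's
    RTT-derived disk; CBG's region is the intersection of those disks."""
    return all(
        haversine_km(candidate, loc) <= (rtt_ms / 2.0) * SPEED_KM_PER_MS
        for loc, rtt_ms in landmarks
    )
```

A single landmark in Chicago observing a 1 ms RTT constrains the target to within about 100 km, which admits nearby suburbs but rules out New York; a real deployment intersects many such disks.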
Alternatively, \citeauthor{2013_li_geolocation_moderately_connected} ``flip'' the standard infrastructure of active geolocation with GeoGet: the targets to be localized measure the latency themselves, through JavaScript, rather than generating pings through an API~\cite{2013_li_geolocation_moderately_connected}. This reduces the number of servers and traffic required, and it is also helpful since clients' devices or networks may fail to respond to pings or complete traceroutes. \paragraph{Evaluating commercial services.} These advances notwithstanding, commercial geolocation tends to be implemented through databases, which are inexpensive to distribute and can aggregate historical observations across many sources. The leading services---MaxMind, IP2Location, Akamai, or NetAcuity---all use proprietary methods. A number of papers assess the performance of these databases, comparing them against the preceding active methods~\cite{2007_imprecision_of_block_geolocation}, points-of-presence paired with routing tables from a large ISP~\cite{2011_poese_uhlig_ip_geolocation_unreliable}, DNS lookups paired with ground-truth rules from domain operators~\cite{2017_gharaibeh_router_geolocation}, RIPE Atlas built-in measurements, PlanetLab nodes, or one another, sometimes with a majority logic applied. The databases are themselves often taken as the ground truth for latency-based measurements, again often with a sort of majority logic. \citeauthor{2011_shavitt_geolocation_study} employ that strategy in evaluating the databases themselves, but also focus on \emph{consistency} among addresses determined to share a point-of-presence, based on an earlier algorithm~\cite{2011_shavitt_geolocation_study,2012_feldman_pop_geolocation}.
Similarly, \citeauthor{2011_huffaker_geocompare} assess the agreement of country determinations and distances from a centroid, from majority votes (supplemented by PlanetLab ground-truth and limited round-trip time measurements)~\cite{2011_huffaker_geocompare}. On the whole, both the formal literature and ``popular wisdom'' paint a fairly pessimistic picture of geolocation performance. Research studies from about ten years ago assessed median accuracy of these services at 25~km in Western Europe and 100~km in the United States. On the commercial side, \citeauthor{2011_poese_uhlig_ip_geolocation_unreliable} quote median accuracies between tens and hundreds of kilometers for MaxMind and IP2Location~\cite{2011_poese_uhlig_ip_geolocation_unreliable}. Other early works present distributions with ranges between hundreds or thousands of kilometers~\cite{2011_shavitt_geolocation_study}. \citeauthor{2017_gharaibeh_router_geolocation} present results for routers in particular, with median accuracies between 10~km for NetAcuity and 1,000~km for IP2Location, on either extreme of the free and paid versions of MaxMind. More recently, \citeauthor{2018_ovidiu_geolocation_reverse_dns} presented medians between 10 and 30~km, depending on the sample and service~\cite{2018_ovidiu_geolocation_reverse_dns}. They present results in 10~km bins and do not differentiate performance at the very bottom of the range. \paragraph{Studies of how Internet infrastructure affects geolocation accuracy.} A persistent though somewhat more subtle current of the literature has explored the physical structure of the Internet and its relation to geolocation accuracy. \citeauthor{2001_padmanabhan_geoping_geocluster} anticipated the interplay between network infrastructure and geolocation accuracy in 2001~\cite{2001_padmanabhan_geoping_geocluster}.
They noted the impact of the geographical concentration of AOL's login nodes on accuracy, and showed that clusters of addresses that were physically larger were associated with poorer performance for the GeoCluster (database) method. This point was echoed in 2007 by \citeauthor{2007_imprecision_of_block_geolocation}~\cite{2007_imprecision_of_block_geolocation}. Similarly, \citeauthor{2005_freedman_feamster_geographic_locality_ip_prefixes} measured the physical scale of autonomous systems. Later, \citeauthor{2016_gharaibeh_geo_ip_colocality} probed the common assumption of databases that /24 subnets are co-located~\cite{2007_imprecision_of_block_geolocation}. Those papers show that systems, subnets, and IP prefixes advertised by the Border Gateway Protocol (BGP) can span large physical distances. In this paper, we seek to extend this work, aiming to identify the circumstances when they are large or small. \citeauthor{2011_huffaker_geocompare} characterized accuracy according to carriers' network role; we extend that line of inquiry in this research, exploring how accuracy varies between commercial ISPs, large companies, and universities. We categorize addresses by ``Doing Business As'' names reported in IP address registries; to our knowledge, such a characterization is unprecedented, at least in the current era where mobile devices are significantly more prevalent than they were a decade ago. In addition to work on IP address \emph{locations}, our data also shed light on the persistence of dynamically assigned IP addresses, itself an active area of analysis. Recent works using RIPE Atlas probes~\cite{2020_komosny_ip_address_survival} and browser extensions~\cite{2020_mishra_ip_address_retention} have found retention times on the order of days, but we observe a significantly slower rate of churn than observed in related contemporaneous work, suggesting that questions about the persistence of IP address assignment continue to deserve attention.
\paragraph{How this paper extends past work.} Past work that evaluates IP geolocation accuracy has tended to rely either on active measurements of somewhat coarse precision, or on a fairly consistent set of (unrepresentative) benchmarks: specifically, PlanetLab sites and university clusters. The dataset we rely on for this paper of course has its own peculiarities---it is a non-random sample of mobile devices---but this view from the access network, including mobile devices, is critical and distinctive from past studies. It is a large sample, indicative of realistic consumer geolocation targets in major cities in the United States. The Global Positioning System~(GPS) has long served as a counterpoint to IP geolocation, both as a benchmark of accuracy and as an analog in multilateration. Historically, its deployment and use for Internet measurement felt impossibly far off~\cite{2006_katz-basett_geolocation_delay_and_topology,2010_erikkson_barford_learning_based_geolocation,2012_eriksson_barford_posit_lightweight_geolocation}, but the future has now arrived. This paper complements and extends previous work as a result of its large sample of consumer smartphone locations on diverse networks. The primary dataset was provided by Unacast; we confirm our basic findings with a smaller, Chicago-only sample of GPS-located Speedtest\textsuperscript{\textregistered}{} data from Ookla\textsuperscript{\textregistered}{}. Similar datasets are readily available for commercial applications and academic research. We exploit this sample to understand how IP geolocation accuracy varies by geography, carrier, mode of access, and other factors. In contrast to previous work, which has tended to question the overall reliability of geolocation even at country-level accuracy, we show that it works fairly well in predictable and well-defined contexts.
Nevertheless, the imperfect accuracy and context-specific performance still currently constrain the applicability of IP geolocation for studying Internet access by human populations. \section{The Data}\label{sec:data} This paper relies on two commercial datasets with GPS-tagged IP addresses to analyze the geography of consumer IP addresses. We also evaluate and analyze the performance of databases for IP geolocation from two popular, commercial services: IP2Location and MaxMind from the same time periods. Table~\ref{tab:data} outlines the datasets that we use in our analysis, and Figure~\ref{fig:data} explains how these datasets are joined and augmented in our analysis. \input{tabs/data.tex} \begin{figure} \centering \includegraphics[width=0.8\linewidth]{unacast_data_augmentation.pdf} \caption{Simplified illustration of the data augmentation process, for Unacast data. The fundamental data consist of device identifiers, times, locations, and IP addresses. Clusters (see text) are also labelled by type, for instance, \texttt{TRAVEL} or \texttt{LONG\_AREA\_DWELL}. The time and duration are used to construct a flag for night-time clusters. The IP address is used with the ARIN whois resource to construct Doing Business As (DBA) names, and database-defined locations are retrieved from up to four databases by MaxMind and IP2Location. Vincenty distances are calculated between database and GPS locations. } \label{fig:data} \end{figure} The GPS data were delivered anonymized and remain so. The data were collected in accordance with local laws and opt-out policies~(GDPR), and analyzed with approval from our university's Institutional Review Board~(IRB). The IRB approved analysis of reconstructed ``home locations'' for earlier work, but emphasized the sensitivity of doing so. For that reason, we avoided geographic analysis of individual devices in this project, and proxied ``residence'' simply as activities recorded at night.
\subsection{Unacast GPS Smartphone Locations} The primary dataset used for the analysis is from Unacast, a location intelligence firm. This dataset contains GPS locations reported by mobile devices, along with timestamps and unique, anonymous identifiers. Unacast aggregates multiple location data streams from other firms; they perform extensive data validation, de-duplication, and processing on those streams. The exact applications that generate locations are not provided. The share of data reporting IANA reserved or private addresses is low, at 0.5\%, and the share of addresses associated with foreign Internet registries totals just 0.2\% (mostly RIPE, breakdown shown in the Appendix). The traffic observed in the Unacast dataset is overwhelmingly IPv4, at 99.6\%. We use data contained within a 40~mile radius of three major cities in the United States: New York, Chicago, and Philadelphia. This large buffer includes both urban and rural populations. Two samples were provided: the first is from August--October 2020. A second, shorter period from April 2021 is aligned with licenses for paid geolocation databases to allow us to evaluate the accuracy of those services. As discussed below, the IP address from which a physical location is reported is recorded for about half of clusters in the 2020 sample, although this falls to just 15\% in the 2021 sample. Data are used only when they contain an IP address, so the full dataset thus offers IP addresses recorded at over 248 million locations. The median reported GPS location accuracy is 17~meters on the 2020 sample, and 11~meters on the 2021 sample. A small fraction of data (1.7\%) are recorded with only four decimal points of coordinate precision, corresponding to about 10~m; we exclude them from subsequent analyses, along with any with estimated device accuracy greater than 50~m. We also exclude the small fraction of addresses from reserved IP ranges and foreign NICs.
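The sample-selection rules above can be expressed as a simple filter. The function below is an illustrative sketch of our reading of those rules, not the actual pipeline; the names and the string-based precision check are our own, and coordinates are kept as strings so that four-decimal truncation remains detectable.

```python
import ipaddress

def decimal_places(coord: str) -> int:
    """Digits after the decimal point in a coordinate as originally reported."""
    return len(coord.split(".")[1]) if "." in coord else 0

def keep_cluster(lat: str, lon: str, accuracy_m: float, ip: str) -> bool:
    """Apply the exclusions described in the text (illustrative sketch)."""
    if accuracy_m > 50:
        return False          # estimated device accuracy worse than 50 m
    if decimal_places(lat) <= 4 or decimal_places(lon) <= 4:
        return False          # ~10 m coordinate truncation (4 decimals)
    addr = ipaddress.ip_address(ip)
    if addr.is_private or addr.is_reserved or addr.is_loopback:
        return False          # IANA special-use / reserved ranges
    return True
```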
Each line of data represents a \emph{cluster} of location reports, called \emph{bumps}. Clusters are built by combining bumps from an individual device that are close in both time and space, using Unacast's proprietary algorithm. That algorithm uses machine learning to account for variation in physical scale among locations: a mall is larger than a coffee shop or a home. Clusters are labelled according to their durations, which are also reported. Locations recorded during movement are labelled as \texttt{TRAVEL}. See the Appendix for a listing of cluster frequencies. This clustering reduces the data volume by a factor of 20 while retaining most of the information. Just as important, the raw, un-clustered data often cannot be relicensed. The clustering entails some subtlety: a single physical location and IP address is reported per cluster, and thus the centroid of a \texttt{TRAVEL} cluster may not exactly coincide with the moment that the reported IP address was used. Indeed, the physical location of a consumer IP address is often not fixed; for instance, consumers can roam freely through their home while connected to their WiFi. In practice, individual IP addresses are recorded at many physical locations---and these locations may be close or distant from each other. As a means of selecting residential IP addresses, we flag clusters generated at night. Night-time clusters are those for which the period between the first and last bumps extends into the hours between midnight and 6am of any day. These clusters represent just 4.7\% of clusters but 26\% of bumps. Only 18\% of devices have at least one night-time cluster, but those devices generate the vast majority of the data: 80\% of clusters and 88\% of bumps. In short, weighted by data volume, most devices have observations at times when they can reasonably be assumed to be at home.
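The night-time flag reduces to an interval-overlap test: a cluster is flagged if the span between its first and last bumps intersects any local midnight-to-6am window. The following sketch restates that rule under the assumption of naive local timestamps; it is our reading of the definition, not Unacast's implementation.

```python
from datetime import datetime, timedelta

def is_nighttime_cluster(start: datetime, end: datetime) -> bool:
    """True if [start, end] overlaps any midnight-6am window (illustrative)."""
    day = start.replace(hour=0, minute=0, second=0, microsecond=0)
    while day <= end:
        night_start, night_end = day, day + timedelta(hours=6)
        # standard half-open interval overlap test
        if start < night_end and end > night_start:
            return True
        day += timedelta(days=1)
    return False
```

A cluster spanning 23:30 to 00:30 is flagged because it crosses midnight, while a mid-morning cluster is not.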
For the set of devices with night-time clusters, the ratio of devices to the population of the study region is about one device for every 20 people. To investigate the determinants of geolocation accuracy, we also identify ISPs. Each address is associated with its /24 subnet, whose organization is retrieved from the ARIN whois registry, on September 1, 2020, or April 25, 2021. If the prefix size exceeds 24 on IPv4 or 48 on IPv6, we follow the link to the ``parent'' network. This strategy is philosophically similar to an ASN lookup, and differs in practice primarily in superior coverage of the Department of Defense NIC, wireless carriers, and foreign NICs. The ASN lookup also ``fractures'' organizations like small city governments or businesses from their providers. We associate large and common organizations with standardized ``Doing Business As'' (DBA) names, taking particular care to capture the major ISPs in each market (Comcast, Charter, etc.). We separate AT\&T's and Verizon's mobile broadband from their fixed offerings based on the words ``Mobility'' or ``Wireless'' in the organization name. This may not be a perfect division: ``Verizon Business'' and ``AT\&T Services'' may include mobile offerings, but examining the ASN tables suggests this is not their primary use. It is worth noting that the sample is dominated by locations recorded while connected through mobile providers: there are ten times as many addresses on AT\&T mobile as on AT\&T fixed-line services, and more than five times as many on Verizon mobile as on Verizon fixed-line. However, as we will separate addresses by ISP, this sample volume effect is largely ``partitioned out.'' Ultimately, each address is associated with a single DBA name for analysis. These procedures also identify large companies and institutions, in particular, universities. We flag addresses from universities with at least ten thousand students, and Fortune 100 companies.
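The address-handling steps described here (dropping special-use addresses, then collapsing each address to its /24 on IPv4 or /48 on IPv6 before the organization lookup) can be sketched with Python's standard library; the whois query itself is an external service and is omitted:

```python
import ipaddress

def subnet_key(addr: str) -> str:
    """Collapse an address to the prefix used for the organization
    lookup: /24 for IPv4, /48 for IPv6 (a sketch of the aggregation
    step, not the paper's exact code)."""
    ip = ipaddress.ip_address(addr)
    prefix = 24 if ip.version == 4 else 48
    return str(ipaddress.ip_network(f"{addr}/{prefix}", strict=False))

def is_special(addr: str) -> bool:
    """Reserved/special-use addresses are excluded before lookup."""
    ip = ipaddress.ip_address(addr)
    return ip.is_private or ip.is_reserved or ip.is_loopback or ip.is_link_local
```

For example, \texttt{subnet\_key("67.176.158.23")} yields \texttt{"67.176.158.0/24"}, while private addresses such as \texttt{192.168.1.5} are filtered out.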
University clusters are ``classic'' targets for academic work on geolocation, since they have meaningful and well-known locations, but they are not representative of the consumer space. We exclude ISPs, including Google, from the Fortune 100 set. We tabulate IANA special use and non-ARIN addresses, as checks on the underlying data, but exclude these from subsequent analysis. \subsection{Geolocated Ookla Speedtest Data} In addition to the data from Unacast, we have obtained Speedtest data from Ookla. The data are for tests performed on smartphones, again with locations from GPS. This dataset is substantially smaller, and is limited in geographic extent to the counties surrounding Chicago. We appeal to these data as a cross-check of the Unacast data that, though more voluminous, were not designed for this work. We have received over 4 million individual Speedtest measurements for 2020, though only 270 thousand match the period of the study (August 2020). Unlike Unacast data, each location comes from a single moment in time (it is not a cluster). On the other hand, the Speedtest data include only the first three bytes of the IP address, due to privacy restrictions. We rely on Ookla's coding of Internet Service Providers. \subsection{Geolocation Databases and Distances} We obtain the free versions of the MaxMind and IP2Location databases, for August 1, 2020. We also acquire both the free and paid versions of these databases, from April 26, 2021. The NetAcuity and Akamai geolocation services, which are much more expensive, are not included in this work. Using these databases, we geolocate IP addresses from the GPS sample. Per the license, this is done only for the months of GPS data matching the databases (August 2020 and April 2021). We then measure the Vincenty distance (on the ellipsoid of Earth) from each IP-geolocated point to the location recorded by the GPS-enabled device.
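As a stand-in for this distance computation, the spherical great-circle (haversine) formula is sketched below; the study itself uses the Vincenty distance on the ellipsoid, which differs from the spherical value by at most roughly 0.5\%:

```python
import math

EARTH_RADIUS_M = 6371008.8  # mean Earth radius, meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points.
    Spherical approximation of the paper's Vincenty (ellipsoidal)
    distance; the two agree to within roughly 0.5%."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
```

For the kilometer-scale errors tabulated below, the spherical/ellipsoidal difference is negligible relative to the geolocation error itself.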
For most of what follows, we take the centroids of the GPS clusters as the ``ground truth'' and call the entire distance the ``accuracy'' or ``error.'' Since the database providers acknowledge their limited resolution and in certain cases quantify it accurately, this language is perhaps unfair: it is different for a database to acknowledge a location as unknown or indeterminate (as in reserved, private addresses) than to be ``wrong'' about the location. Moreover, the GPS data themselves do have some limitations, noted below. Semantics aside, the balance of this work tabulates distances with respect to the ground truth and seeks to explain their heterogeneity. \section{Evaluating Data Quality}\label{sec:quality} Before coming to questions about the properties of consumer IP addresses, we analyze the quality of our data. We first explore the consistency of the GPS-based location data we obtain from Unacast and Ookla by comparing the data against each other, with respect to geolocation databases. \subsection{Are GPS data a credible ground truth of IP address locations?} \label{ssec:groundtruth} The accuracy of IP geolocation is central to Unacast's core business, and the company dedicates enormous resources to validating and maintaining their incoming data streams. While GPS data from smartphones is generally understood to be accurate, datasets from smartphone-based services do often incorporate additional data to assist with locating devices in circumstances where GPS does not work (e.g., indoors). Thus, while we expect these GPS-based datasets to be reasonably accurate in general, it behooves us to explore the quality of these datasets before proceeding with other questions. Since we aim to use these datasets as ``ground truth'', this analysis may seem a bit circular. Our strategy is to compare the \emph{consistency} of IP geolocation results for different GPS contexts and across wholly independent GPS samples (Unacast and Ookla).
Of course, this analysis does not exclude the possibility of systematic errors arising in \emph{both} GPS datasets, or across all datasets, but given the lack of further ground truths, we are left with consistency checks. \paragraph{Evaluating cluster types.} The correspondence between a cluster's GPS coordinates and the physical location of its IP address may not be perfect. For example, we expect that the clustering procedures could affect the ``compatibility'' of the IP address and GPS location. Further, if a GPS location is recorded when no network is available, it may be subsequently \emph{reported} at a different physical location where an IP address can be obtained. We would expect these effects to be most severe for \texttt{TRAVEL} clusters, as previously discussed. The flip side of this argument is that navigation applications are more likely to be active during \texttt{TRAVEL}. These apps record location more frequently, which could \emph{improve} accuracy. To evaluate the effects of imperfect knowledge of locations, stemming from these effects, we contrast \texttt{TRAVEL} clusters with others. We will show below that geolocation performance differs by network. Obviously, it is easier to ``travel'' when connected to a mobile than a fixed-line network. We therefore focus this analysis on a single, mobile network: AT\&T Mobility. We do observe that accuracy is worse for travel than non-travel data, but the difference at the median is only about 2.5\%, for either IP2Location or MaxMind. As can be seen in the Appendix, the cumulative distribution functions for travel and non-travel clusters are fairly similar across their entire domain.
\begin{figure} \centering \includegraphics[width=0.8\linewidth]{cdf_chicago_ookla.pdf} \caption[Geolocation error of GPS location targets in Chicago, on both Unacast and Ookla Speedtest Intelligence\textsuperscript{\textregistered}{} data, using the free versions of the MaxMind and IP2Location databases for August 2020.]{Geolocation error of GPS location targets in Chicago, on both Unacast and Ookla Speedtest Intelligence\textsuperscript{\textregistered}{} data,\footnotemark{} using the free versions of the MaxMind and IP2Location databases for August 2020.} \label{fig:ookla} \end{figure} \footnotetext{Based on the authors' analysis of Ookla\textsuperscript{\textregistered}{} Speedtest Intelligence\textsuperscript{\textregistered}{} data for August 2020 in Chicago. Ookla trademarks used under license and reprinted with permission.} \paragraph{Analysis of independent samples.} To further validate the GPS data, we contrast data from Unacast with Ookla, for fixed-line broadband ISPs, in Chicago and August 2020, where both datasets are available and aligned with the free versions of the geolocation databases. Figure~\ref{fig:ookla} shows these results. MaxMind performs somewhat better on Comcast addresses from the Ookla dataset than the Unacast data, and somewhat worse on AT\&T; RCN and WOW! are very consistent. Discrepancies are somewhat larger on IP2Location, as is the difference in performance between the two databases. One notable feature in the 2020 Unacast dataset is a small but non-negligible share of the data with IP geolocation ``error'' \emph{very} close to zero. Depending on the ISP, that share is 4--5\% of the fixed-line locations on MaxMind and 1--2\% of those on IP2Location. On close inspection, these appear to be locations reported by applications \emph{relying on the IP geolocation services themselves}, rather than true GPS coordinates.
For example, these ultra-``accurate'' locations are not at residences, as one might expect for fixed-line ISPs, but in parks, as is MaxMind's practice for default locations~\cite{2020_mishra_ip_address_retention,2020_komosny_ip_address_survival}. The share of ``too-close'' locations is smaller on the 2021 clusters; however, the IP address field is populated for a lower share of those data. In any case, the basic features of Figure~\ref{fig:ookla} are consistent in the completely separate sample from Ookla, which does not exhibit this feature. \subsection{Which database provides the lowest error in location?} \label{ssec:db_error} The practical question is which database to use, and how well it should be expected to perform. This analysis, uniquely, is performed using the April 2021 sample from Unacast, for which the paid geolocation databases were licensed. Since Section~\ref{ssec:reliable} will show that geolocation on mobile broadband is very poor, this analysis focuses on fixed-line broadband. The short answer is that MaxMind's paid database, GeoIP2, provides the best accuracy, in terms of geolocation error at every quantile. The traditional way of reporting this is the median error, which is 2.62~km in New York City, 3.31~km in Chicago, and 4.02~km in Philadelphia. Other quantiles and the other three databases are shown in Table~\ref{tab:distance_quantiles}. Figure~\ref{fig:cdf_by_city} shows the distribution of distances by city and database. We use ``city'' to refer to the city itself along with the 40-mile buffer around it. Because the distance from Staten Island to North Philadelphia is only 46 miles, some data are included in the curves for both New York and Philadelphia. \input{tabs/quantiles_by_db.tex} \begin{figure} \centering \includegraphics[width=0.8\linewidth]{cdf_by_city.pdf} \caption{Cumulative distribution function by geolocation database and city. Colors reference databases, and line styles denote paid and free versions.
\label{fig:cdf_by_city}} \end{figure} Although the paid databases are more accurate in each city and at every quantile, the relative improvements in accuracy are modest. An important limitation of this particular study is our focus on urban areas in the United States. In particular, we do not test the accuracy of these databases outside of major metro areas, and global or national performance may of course be different. Nonetheless, it would be possible to perform the analysis we have presented in this section for other datasets, if and when they are made available. \section{The Geography of Consumer Subnets}\label{sec:results} We now turn from an initial assessment of the dataset and databases, to measurements of the geography of the underlying networks. \subsection{Under what circumstances are IP geolocation databases accurate?} \label{ssec:reliable} The basic results of Section~\ref{ssec:db_error} mask extreme but unsurprising heterogeneity. Figure~\ref{fig:cdf_by_city} already shows that geolocation performs better in New York than Chicago, and better in Chicago than Philadelphia. But the largest source of heterogeneity stems from providers, which deploy different physical infrastructures (and serve different cities). This section relies entirely on the free databases. \begin{figure} \centering \includegraphics[width=\linewidth]{cdf_cities_dbs.pdf} \caption{Geolocation performance by city, database provider, and ISP. Free versions of the database are used in each case. ISPs are shown by their ``brand'' colors, according to the whois database, which leaves the Sprint and T-Mobile networks distinguishable. Fixed-line networks are denoted by solid lines while mobile networks are shown by dashed lines. \label{fig:cdf_cities_dbs}} \end{figure} \paragraph{Fixed-line and mobile networks.} Figure~\ref{fig:cdf_cities_dbs} shows accuracies observed in New York, Chicago, and Philadelphia for major broadband carriers in each market.
In the best cases, such as either RCN or Comcast on MaxMind in Chicago, the median error is less than 5~km. In each city/database pair, the accuracy is good for fixed broadband and poor for any mobile broadband. In Chicago, MaxMind is more accurate on fixed-line (AT\&T, RCN, WOW, and Comcast) than on mobile (AT\&T Mobile, T-Mobile, Sprint, Verizon Mobile) carriers. (IP2Location performs poorly with RCN.) Similarly in New York, Charter, Cablevision, Comcast and Verizon are better localized than AT\&T Mobile, Sprint, T-Mobile, and Verizon Mobile; and in Philadelphia, geolocation is more accurate on Comcast than Verizon, T-Mobile, AT\&T Mobile, or Verizon Mobile. Quantitatively, the share of Comcast addresses in New York that MaxMind's free service locates within 10~km of the GPS location is 66\%. At the other extreme, 87\% of T-Mobile addresses from the New York region are assigned to just two distinct locations representing New York itself and Newark; 98\% are assigned either to those two, or to one of six other locations in Philadelphia (3), Providence, Boston, and Washington. As a result, only 18\% of devices are assigned within 10~km of their true location. In fairness, it must be emphasized that MaxMind does not \emph{claim} to assign these devices within 10~km: almost all of the T-Mobile addresses assigned to the New York and Newark locations are in the 200~km accuracy class. This basic dichotomy between mobile and fixed broadband is apparent even within ISPs. AT\&T offers both services in Chicago, and the CDFs for its fixed-line and mobile services are widely separated. The individual subnets with the largest geolocation errors all belong to the AT\&T Mobility organization. In New York and Philadelphia, AT\&T only operates mobile networks, and this is reflected in those cumulative distributions. The observation that mobile and fixed-line networks differ may appear obvious once stated, but it need not have been true.
Mobile carriers could have constructed networks so that individual antennas held a fixed set of IP addresses. That does not appear to be what they did. \paragraph{Universities, businesses, and consumer networks.} Before continuing, we also contrast geolocation performance on consumer fixed broadband with large universities and companies. We include universities with at least ten thousand students, and Fortune 100 companies other than ISPs. Again, we note that we are implicitly studying the WiFi access points that these institutions operate and which their employees, students, and clients connect to via mobile devices, rather than wired connections or fixed infrastructures of servers. Universities are a classic target in the academic literature on geolocation, but Figure~\ref{fig:f100_edu} shows that they are in general more-accurately geolocated than either consumer ISPs or companies. This is not surprising: they have large, physically-concentrated networks, with registration addresses clearly spelled out in ARIN records. In most cases, median geolocation error on MaxMind (free) is less than 2~km, though a few institutions~-- DePaul in Chicago and the City University of New York~-- are mislocated by upwards of 10~km. Note that the nominal sample period is August 2020, when students~-- and indeed many staff and faculty~-- were not on campus, due to both summer vacation and the coronavirus pandemic. \begin{figure} \centering \includegraphics[width=\linewidth]{cdf_cities_dbs_cats.pdf} \caption{Geolocation performance on consumer ISPs, contrasted with large universities and Fortune 100 companies. \label{fig:f100_edu}} \end{figure} Figures \ref{fig:cdf_by_city}--\ref{fig:f100_edu} suggest that for a substantial share of IP addresses, IP geolocation is quite accurate. However, this does not do us much good unless those locations can be identified in advance. It is already clear that the picture is rosier with fixed broadband.
Those data can be easily identified, either via a \texttt{whois} look-up or (in some cases) through the geolocation databases themselves. But the mobile/fixed divide is not the only lever. MaxMind is able to perform better on RCN than on Comcast in Chicago, and better on Charter or Cablevision than Comcast in New York. How are we to identify localizable blocks of addresses? We highlight two additional methods. MaxMind's database provides an ``accuracy'' field that successfully identifies the precision of entries. Figure~\ref{fig:cdf_mm_by_accuracy} shows the CDF for successive bins of claimed accuracy on the free database. In the most precise bin, accuracy of ``1~km,'' the median device in Chicago is geolocated just 2.0~km from the GPS-based location. The ``error'' with respect to the ground truth degrades in line with quoted accuracy, though there is enormous spread in the least-precise, 500~km bin. It is thus \emph{possible} to identify accurately-located addresses~-- MaxMind does it. But this leaves an open question: \emph{why} are those addresses well or ill-located? That brings us to the second method. Our hypothesis is that if /24 subnets are geographically localized~-- small~-- then addresses within them are more-likely to be accurately geolocated. If they are large, then precise locations would require finer address-level data. The question can then be re-posed: how large are subnets, and is their size in fact correlated with geolocation accuracy? \begin{figure} \centering \includegraphics[width=0.9\linewidth]{cdf_mm_by_accuracy.pdf} \caption{Cumulative distribution of geolocation accuracy on the MaxMind database, by quoted accuracy bin. \label{fig:cdf_mm_by_accuracy}} \end{figure} \subsection{What is the geographic scale of /24 subnets?} \label{ssec:size} What are the physical and network properties of accurately-located subnets?
We focus this analysis on a single, fixed network~-- Comcast~-- and require that subnets have at least 10 devices and 10 distinct IP addresses. There are over twenty thousand such subnets across the three cities. \paragraph{Constructing a physical scale.} To quantify whether or not a subnet is localized, we define a characteristic physical scale. Many subnets have some outliers, perhaps with locations reported after the fact. To mitigate the impact of these outliers, we must first identify them. We compute the medioid of locations in the subnet, defined in this case simply as the median of the $x$ and $y$ coordinates in a projected (flat) geometry (EPSG 2163). We then measure individual locations' distances from that medioid. We select a configurable fraction $f$ of the data that is ``closest'' by that measure. For that subset of the data, we calculate the convex hull. If $f = 1$, then the convex hull covers all locations recorded on the subnet; if $f = 1/2$, it covers the half of points closest to the medioid. Finally, we take the area of the convex hull, and ``convert'' this area to a distance by taking its square root. That square root defines the length scale of the subnet. Figure~\ref{fig:convex_hull} illustrates this procedure for two subnets. (To preserve anonymity, random noise has been added to the individual points in the illustration.) \begin{figure} \centering \includegraphics[width=0.85\linewidth]{convex_hull.png} \caption{Illustration of the procedure defining subnet scales, for one dispersed and one well-localized subnet in Chicago. Convex hulls wrap around $f = 0.9$ of the points within the subnet. The ``scale'' is the square root of this area. The linear scale on the right-hand side (67.176.158.0/24) is a factor of 8 larger than on the left-hand side. Gaussian noise has been added to the locations for illustrative purposes only.
\label{fig:convex_hull}} \end{figure} Figure~\ref{fig:subnet_scale} shows this distance scale for subnets with at least 10 devices and addresses, for several choices of $f$. By construction, the scale is smaller or larger when outliers are more or less suppressed, respectively. Setting $f = 0.5$ results in a median subnet scale of 4.3~km, and $f = 0.9$ leads to a scale of 9.9~km. However, the proportion of subnets with scales exceeding 10~km is small for any choice of $f < 0.9$. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{subnet_scale.pdf} \caption{Cumulative distribution of subnets' distance scale as derived from the convex hull of locations, as described in the text. \label{fig:subnet_scale}} \end{figure} \paragraph{The relationship of physical scale and accuracy.} Armed with this scale, we return to the earlier question: when can subnets be accurately located? Discarding locations with geolocation error over 100~km, the correlation between $f = 0.75$ subnet scale and mean geolocation error is 0.69 for MaxMind Free (GeoLite) but only 0.30 on IP2Location (which has worse overall performance). We thus confirm the hypothesis that localization and localizability are related, though strictly speaking, this analysis is not causal. Still, this analysis has deferred but not \emph{answered} the question; it suggests that geolocation fails on fixed-line addresses when subnets are large, which raises in turn the issue of why large subnets exist at all. Comcast uses both large and small subnets. Are large ones used differently? Taking a hint from the results on mobile broadband, we hypothesize that the small subnets are nearly static whereas large ones provide a reserve of ``ephemeral'' addresses~-- perhaps for devices waiting for a static one. A client assigned to an ``ephemeral'' address would be unlikely to fall on that same address again, whereas a ``sticky'' address granted to a home network would be used repeatedly.
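The scale construction of Section~\ref{ssec:size} can be sketched end-to-end: a coordinate-wise-median medioid, retention of the fraction $f$ of points nearest it, a convex hull (Andrew's monotone chain), and the square root of the shoelace area. The hull routine and tie-breaking details below are our own illustrative choices, and coordinates are assumed already projected to a flat geometry in meters:

```python
import math

def _cross(o, a, b):
    # 2D cross product of vectors o->a and o->b.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def _hull(points):
    """Andrew's monotone-chain convex hull (vertices in order)."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return pts
    lower, upper = [], []
    for chain, seq in ((lower, pts), (upper, reversed(pts))):
        for p in seq:
            while len(chain) >= 2 and _cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
    return lower[:-1] + upper[:-1]

def subnet_scale(points, f=0.9):
    """Medioid = coordinate-wise median; keep the fraction f of points
    nearest it; scale = sqrt(area of their convex hull)."""
    xs, ys = sorted(p[0] for p in points), sorted(p[1] for p in points)
    mx, my = xs[len(xs) // 2], ys[len(ys) // 2]
    kept = sorted(points, key=lambda p: math.hypot(p[0] - mx, p[1] - my))
    hull = _hull(kept[:max(3, round(f * len(points)))])
    # Shoelace formula for the polygon area (0 for degenerate hulls).
    area = abs(sum(hull[i][0] * hull[(i + 1) % len(hull)][1]
                   - hull[(i + 1) % len(hull)][0] * hull[i][1]
                   for i in range(len(hull)))) / 2
    return math.sqrt(area)
```

With $f = 1$ the hull covers every recorded point; shrinking $f$ trims outliers before the area is taken, which is why the resulting scale falls as $f$ decreases.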
The relevant variable is thus the number of times that a single client is observed at each IP address (weighted by visits). Figure~\ref{fig:address_visits} confirms the hypothesis: for subnets with scale greater than 20~km ($f = 0.75$), nearly half of visitors to an IP address visit exactly once. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{scale_nvisits.pdf} \caption{The number of times a single device visits a single IP address on the subnet (weighted by visits). On subnets with scale greater than 20~km ($f = 0.75$), over half of device/IP pairs are unique. \label{fig:address_visits}} \end{figure} \subsection{How persistent are the physical locations of /24 subnets?} \label{ssec:movement} Geolocation providers are quick to point out that databases evolve continuously. Clearly, the physical infrastructure of the Internet evolves over time, but how quickly do subnets actually move? Because mobile networks' subnets are already physically very large, and addresses on them are not accurately located, we focus this analysis on fixed-line broadband. \paragraph{The movement of subnets.} Figure~\ref{fig:movement_by_month} presents the physical distance between the medioids of individual /24 subnets, as constructed in August and October 2020. As in Section~\ref{ssec:size}, the medioid is the median of the $x$ and $y$ coordinates. To enter into this figure, subnets must have at least ten unique devices and ten unique addresses in each month. We consider only fixed-line broadband carriers, for this exercise. On each network considered, the median subnet moves less than a kilometer. There is some inherent variability in our construction of the medioid as the ``location'' of the subnet in each period, and the Figure shows the difference of these two ``noisy'' measurements. We thus suspect that this overstates movement.
In short, we conclude that on this time scale, subnet locations are quite stable. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{movement_by_month.pdf} \caption{Distance moved by the medioids of /24 subnets on fixed-line networks, over a two-month period from August to October 2020. \label{fig:movement_by_month}} \end{figure} \paragraph{Is the sample biased?} A substantial threat to this analysis is sample composition: by requiring 10 devices and 10 addresses, the subnet \emph{must} be observed in New York, Chicago, or Philadelphia in both months, to enter the sample at all. However, it does not seem to be the case that subnets are moving out of sample. Of the subnets satisfying the cuts in August, 92\% also pass them in October (vice versa, 96\%). If we raise the thresholds to enter the sample, requiring 20 devices and 20 addresses, 95\% of subnets passing these cuts in August also show up with at least 10 devices in October (vice versa, 98\%). Raising the thresholds yet further to 50 devices and 50 addresses, the persistence from August to October exceeds 99\% (vice versa, 98\%). \subsection{How long does a consumer connection retain an IP address?} \label{ssec:churn} The analyses above show that IP addresses identify physical locations at the level of 2~km, under the best circumstances. On its own, the IP address clearly does not identify individuals. Of course, physical locations~-- geographic coordinates~-- are not the only way in which IP addresses identify people. Linked to log-ins or other online behaviors, IP addresses can be used to track users over time even without cookies or fingerprinting (or as a component of a fingerprint). If the IP address is static for a long time, it is easier to link online behaviors. A critical concern is thus \emph{how long} fixed-line IP addresses remain with a single household.
\paragraph{Defining churn.} We define \emph{churn} as the likelihood of a device returning to the same IP address on an ISP, after a delay of $d$ days. The denominator includes every pair of night-time connections by a single device to one ISP, $d$ days apart. We select night-time activity, to focus on periods when devices can be reasonably assumed ``at home.'' The numerator is the number of those pairs for which the two nights' connections are on the same IP address. Stated less formally: if we see a device on Monday night ($d = 0$) and again on the same ISP Tuesday night ($d = 1$), what are the chances that it will be on the same IP address? What about next Monday ($d = 7$)? Since the sample selection is somewhat peculiar~-- devices are necessarily recorded on fixed-line broadband on multiple nights~-- one should take some care in interpreting these results. This consideration is particularly acute at the maximum of the range, since there are fewer opportunities for a device to be observed 80 days apart (just 10), let alone 90 (just 1). This perhaps explains the drop-off on the right-hand side. \begin{figure} \centering \includegraphics[width=\linewidth]{ip_stability} \caption{Persistence of IP addresses. The Figure shows the share of night-time clusters on a single ISP and device, separated by $d$ days, for which the IP addresses are equal on both clusters. Note that for visual clarity, the $y$ axis begins at 0.5 instead of 0. \label{fig:ip_stability}} \end{figure} \paragraph{Rates of change, over two months.} Figure~\ref{fig:ip_stability} shows the persistence of IP addresses on fixed-line broadband ISPs. It is clear that devices ``leave'' individual IP addresses gradually, but at different rates on different ISPs. After one month, more than 90\% of devices observed reconnecting to AT\&T, RCN, and Cablevision do so on the same IP address.
After two months, more than three-quarters of devices return to the same IP address, for all major ISPs in the three cities shown. \section{Can IP geolocation databases be used to study Internet access?} \label{sec:discussion} At this stage, we would usually turn to a general discussion of findings. Here, we focus our discussion and extend our results, according to the question that originally motivated our work: assessing the potential for using IP-referenced data in \emph{social science} research on Internet access. \emph{Where} and \emph{for what demographic groups} is geolocation accurate? Can \emph{IP geography} enable \emph{Internet demography}? To make this query concrete, imagine a study of the ``homework gap''~-- (in)equity in access to digital resources for education~-- based solely on server logs from a site like Wikipedia. If we observe frequencies of use by IP subnet \emph{alone}, can we infer what groups do and do not access the site? \paragraph{General considerations.} This question is non-trivial, since it confronts the correlations of population density and demographics with geolocation accuracy, along with the spatial patterns of connection modality (mobile vs fixed). Cities have smaller subnets simply because they have a higher density of people and devices. They also tend to have larger minority populations. This alone leads to a correlation between geolocation accuracy and demographics or disadvantage. For Chicago and its buffer, the correlation between tract median geolocation error on MaxMind (free) and population density is $-0.09$ ($p < 0.0001$); in turn, population density is correlated with log median household income ($r = -0.18$, $p < 10^{-10}$). Both of these are small but significant. The flip side of better accuracy at higher density is that distance precision \emph{has} to improve in dense environments, to associate activity with the right population. It is easier to ``jump'' over many people when they are close together.
Accuracy also varies \emph{within} the city, due to heterogeneity in the fraction of people on mobile vs fixed broadband. There are two reasons for this. People use mobile devices (1) when they are on the go, or (2) because they do not have access to a fixed broadband connection at home. That means that devices in the present sample observed in city centers appear to have ``inaccurate'' IP geolocation, simply because the device users are more-likely on mobile on the way to or at work. On the other hand, populations without fixed broadband access are unlikely to be accurately IP geolocated, even in their home neighborhood. As a final consideration before proceeding, one must not confound ``unknown'' addresses with ``mis-located'' ones. For example, if a default database location for T-Mobile addresses sits in a particular neighborhood, that neighborhood will appear to have ``accurate'' geolocation, even though the locations are not known any better than elsewhere. Performance will appear to ``degrade'' radially, with distance from the default location. Since the default locations are usually in or near cities, that would (\emph{ceteris paribus}) give a false impression that IP addresses in cities (or near the center of the United States, for instance) are accurately-located. \paragraph{Differences in access modality by demographic group.} Returning to the data, Figure~\ref{fig:chicago_mobile} presents the proportion of the night-time clusters in each tract of Chicago that are on fixed and mobile broadband. Note that the data are inherently mobile devices with GPS chips; this does not include laptops, for instance. This classifies AT\&T, Comcast, WOW, and RCN as fixed-line providers, and T-Mobile, Sprint, Verizon, and AT\&T Mobile as mobile. For those familiar with Chicago, the results are no surprise: the proportion of night-time pings on mobile networks is lower on the wealthier North Side of the city than on the West or South Sides.
Indeed, our eyes do not deceive us: the tract-level correlation between this constructed variable and the share of households with a broadband contract as reported to the Census is $-0.25$. The correlation with neighborhood proportion Hispanic is $0.23$ (both $p < 10^{-10}$). In other words, connection type is correlated with demographic factors and broadband adoption. This would be reflected in geolocation accuracy. In practice, this means that constraining an analysis to accurately located, fixed-line IP addresses would tend to exclude vulnerable populations from the analysis. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{chicago_mobile.pdf} \caption{Proportion of night-time clusters in Chicago recorded on mobile networks. \label{fig:chicago_mobile}} \end{figure} \begin{figure*} \centering \includegraphics[width=\linewidth]{race_ethn_cdf.pdf} \caption{Cumulative distribution of geolocation error for tracts with white, Black, and Hispanic super-majorities. The first panel presents all data, while the second through fourth restrict to Chicago, Chicago at night, and Chicago at night on Comcast. \label{fig:race_ethn_cdf}} \end{figure*} \paragraph{The influences of density, demographics, and modality on IP geolocation accuracy.} Figure~\ref{fig:race_ethn_cdf} offers an alternative view of this effect, disentangling the countervailing forces of density, demographics, and access modality. It displays the CDF of device geolocation accuracy for hyper-segregated neighborhoods of Chicago~-- ones where two-thirds of residents are white (only), Black (alone or in combination with other races), or Hispanic (of any race). Moving from left to right, we begin from the full dataset and layer the cumulative requirements of devices in Chicago proper (not the 40 mile buffer), at night (that is, likely at home), and on Comcast (i.e., on a single, fixed broadband network).
The first plot shows an enormous difference between geolocation in ``white" tracts and the other segregated tracts: geolocation performs much worse in the ``white" tracts. This effect appears to have more to do with density than race: it reverses when focusing on the City of Chicago, and when zeroing in on a single network, the performance lines up quite closely. The exception is at the very high end (above 10~km and 90\% of the CDF), where there is apparently an error for a set of white tracts. About 80\% of points are within 5~km of the true location, for all three categories of neighborhood. \paragraph{Attenuation bias, from reliance on mis-attributed IP addresses.} The analyses of device modalities above suggest that IP geolocation databases' ability to attribute online behaviors to populations will tend to fail more often for disadvantaged groups. Still, if we were to persist, what errors might we expect to ``accrue," by moving an observation from its GPS-based location to the IP-based location? In essence, this question pits the scale of geolocation accuracy against the physical scale of demographic segregation. If IP geolocation moves a point among communities with similar demographics, the error does not directly bias results. This analysis is limited to fixed-broadband data from Comcast, where geolocation has a chance of succeeding. Figure~\ref{fig:log_mhi_quantiles} presents the log median household income as it would be imputed from a MaxMind look-up, against the true median household income of the neighborhood (Census tract). This results in an unsurprising regression to the mean: as is the usual case with measurement error, the slope is simply attenuated. This suggests that even for fixed broadband, efforts to use IP address alone to ``link" online behaviors with human populations are inadvisable at this physical scale. They will in general yield estimates whose magnitudes are biased down.
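This attenuation can be checked with a small simulation, independent of our data: when a regressor is observed with independent error, the OLS slope shrinks by the reliability ratio $\lambda = \sigma_x^2/(\sigma_x^2 + \sigma_e^2)$. A minimal sketch with synthetic numbers:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# True neighborhood log income, and an outcome that depends on it.
x_true = rng.normal(0.0, 1.0, n)
y = 2.0 * x_true + rng.normal(0.0, 1.0, n)

# "Imputed" neighborhood value: the true value plus independent
# geolocation noise of equal variance (reliability lambda = 0.5).
x_imputed = x_true + rng.normal(0.0, 1.0, n)

slope_true = np.polyfit(x_true, y, 1)[0]
slope_imputed = np.polyfit(x_imputed, y, 1)[0]
print(slope_true, slope_imputed)  # roughly 2.0 vs 1.0: attenuated toward zero
```

With equal signal and error variance ($\lambda = 0.5$), the regression on the imputed value recovers only about half of the true coefficient, which is the same mechanism behind the flattening visible in the quantile plot.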
In other words, measurements of ``who uses what" that rely on IP geolocation will tend to \emph{understate} differential access. This is consistent with the findings of \citeauthor{2019_ganelin_chuang_ip_mooc_regressive}~\cite{2019_ganelin_chuang_ip_mooc_regressive} on MOOC registrants and their geolocated IP addresses. \begin{figure} \centering \includegraphics[width=\linewidth]{log_mhi_quantiles.pdf} \caption{Quantiles of neighborhood log median household income as ``imputed'' from MaxMind geolocation ($y$) as a function of the true neighborhood value ($x$). \label{fig:log_mhi_quantiles}} \end{figure} \balance \section{Conclusion}\label{sec:conclusion} Using a large sample of GPS-based smartphone locations, this paper has quantified the performance of commercial geolocation databases in large American cities. The precision of this analysis far outstrips past work. The analysis has demonstrated significant heterogeneity in geolocation accuracy. The median error for MaxMind's free service is well less than 10~km on fixed commercial broadband networks and at Universities. On mobile networks, IP geolocation is not accurate below the city level. This analysis has also sought to explain \emph{why} some addresses are accurately located whereas others are not. The physical size of subnets is strongly correlated with accuracy. Large subnets appear to be used for ``ephemeral" address assignment, which clients do not use repeatedly. Finally, we have contextualized these findings for applications to research on human populations. Both the present data and existing surveys show that disadvantaged populations are less likely to use a fixed broadband subscription at home. Online behaviors cannot be accurately associated with these groups, and dropping mobile devices altogether will tend to remove them from analyses.
Focusing on the context where geolocation does work -- fixed-line broadband -- the accuracy still appears inadequate for associating online activities with real-world geographies and demographics. From a privacy perspective, a single IP address does not identify an individual, but it both localizes an individual and provides an ``index" through time that may be used to aggregate other indirect identifiers. We have shown that the time for IP reassignment of fixed-line broadband consumers varies by ISP, but is typically on the order of months. \newpage \bibliographystyle{ACM-Reference-Format} \balance
\section{Introduction} Building a good speech recogniser typically requires a large amount of annotated data from a specific language. Obtaining high-quality labelled data is a costly and time-intensive process, and for many languages this remains a big issue. However, even in a highly resourced language like English, recent work has shown impressive results in Automatic Speech Recognition (ASR) by pre-training on unlabelled data and transferring that knowledge to regular speech recognition models \cite{baevski2020wav2vec,xu2020selftraining,zhang2020pushing} or even completely unsupervised speech recognition \cite{baevski2021unsupervised}. This paradigm shift towards unlabelled data is of great significance as untranscribed recordings of speech are much easier to acquire. Self-supervised learning is a clever way to learn general information from data without requiring any labels. Recently many successful methods have emerged for self-supervised representation learning from speech. The general idea is to implicitly learn the global structure and local characteristics that are inherently present in speech. Depending on the task, both local information, such as the pronunciation of a specific phoneme, and more global information, such as speaker traits and recording properties, can be useful. By pre-training a network with a well-chosen objective function, these relevant attributes about the input speech can be captured and summarised in rich feature vectors. This improves several downstream tasks like speech recognition and typically reduces the amount of required data, since the principal characteristics are already extracted and more easily accessible. Moreover, the structure of a speech waveform is to some extent general and language-independent, which explains the improvements with these features in low-resource languages \cite{conneau2020unsupervised, riviere2020unsupervised}. 
The objective function used in self-supervised learning techniques is the driving force behind the extraction of powerful speech representations. In fact, the self-supervised objective has more impact on the learned representation than architectural differences between methods \cite{icassp2021_chung}. In Autoregressive Predictive Coding (APC) \cite{chung2019unsupervised, chung2020generative, chung2020vectorquantized, chung2020improved}, the objective is to predict a frame a few steps ahead, given the information up to that point. Another branch of research focuses on predicting the current frame given past and future context, by reconstructing several masked frames \cite{Liu_2020, chi2021audio}, similar to the Masked Language Modeling (MLM) approach in Natural Language Processing \cite{devlin2019bert}. Finally, Contrastive Predictive Coding (CPC) \cite{oord2019representation} is a popular technique in representation learning, where the objective is to predict the future in the latent space and a contrastive loss is applied to maximise mutual information. CPC has been successfully applied to speech recognition \cite{baevski2020wav2vec, schneider2019wav2vec, baevski2020vqwav2vec, baevski2020effectiveness} and has been shown to learn robust and cross-lingual speech representations \cite{conneau2020unsupervised, riviere2020unsupervised, kawakami2020learning}. We refer to the literature for other related work in self-supervised and unsupervised representation learning \cite{Chorowski_2019, khurana2020convolutional, liu2020tera, liu2020nonautoregressive, ravanelli2020multitask, pascual2019learning, jiang2020speech, Ling_2020}. Following the widespread improvements in ASR as a result of self-supervised pre-training, this paper will focus on Flemish Dutch, a medium-resourced language. Flemish is the language spoken in Flanders, the Dutch-speaking part of Belgium.
It is closely related to the Dutch variant spoken in The Netherlands, but there are still many noticeable differences \cite{Velde2010WillDB}. A few seconds of speech suffice to distinguish the two variants. Although the region is geographically small, Flemish Dutch is diverse, with several dialects roughly corresponding to the five provinces in Flanders, though natives will observe even finer detail. Furthermore, Dutch belongs to the family of West-Germanic languages, like English and German, which makes it a very interesting language to examine whether pre-training on English leads to strong improvements in Dutch. While there is some overlap in phones, there are also several vowels and diphthongs that do not occur in English. In this work, we compare several popular self-supervised pre-training methods when applied to Flemish. First of all, we look at the applicability of off-the-shelf models that are pre-trained on English and assess the transferability to Flemish. This would be convenient for several research domains and technological applications. Additionally, it would eliminate the need for large computational resources necessary to pre-train these models, which scales with the model size (e.g. the high-capacity wav2vec 2.0 model \cite{baevski2020wav2vec}) and for many models also with the amount of data. Second, we examine the importance of matching the pre-training language to the target language as opposed to the amount of data used in pre-training. To this end, we compare pre-trained models in English and Netherlands Dutch to models trained on Flemish Dutch. Recent work \cite{conneau2020unsupervised} has shown that low-resource languages can greatly benefit from higher-resource languages when they are more similar due to positive transfer, but cross-lingual representation learning degrades the performance on high-resource languages due to interference.
Furthermore, self-supervised pre-training has been shown to improve robustness and reduce the degradation on out-of-domain data \cite{robustW2V2, ma2021probing}. We show that simply augmenting the finetuning data leads to a strong speech recognition improvement in noisy and reverberated environments. Finally, we investigate ASR improvements with the recent wav2vec 2.0 model \cite{baevski2020wav2vec, conneau2020unsupervised} and study several pre-training and finetuning scenarios with an increasing amount of data, yielding substantial reduction of Word Error Rates (WER) compared to the baselines. We evaluate the models in terms of linear phone separability by reporting the classification accuracy of an external linear classifier. The classifier is trained to predict Flemish Dutch phones from the features extracted from each model. For the ASR experiments, we report the results of an HMM-DNN hybrid model \cite{Povey_ASRU2011} where the DNN is trained with the learned features. \section{Models} The procedure consists of three separate phases: 1) pre-training a model on data without labels, 2) optionally finetuning the model on a labelled set with transcripts, 3) extracting the learned features to perform a downstream evaluation task. \subsection{Self-Supervised Pre-training} We start with a short description of the investigated pre-training techniques and refer to the corresponding papers for more details. Table~\ref{tab:overview} gives an overview of all models.
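Several of the models described below (CPC, wav2vec, wav2vec 2.0) share a contrastive InfoNCE-style objective: identify the true future latent among distractors, scored for instance by a dot product between a projected context vector and each candidate. A minimal numpy sketch of that loss, for illustration only (not any of the authors' implementations):

```python
import numpy as np

def infonce_loss(context, candidates, positive_idx, W):
    """InfoNCE: cross-entropy of identifying the true future latent.

    context:      (d_c,) aggregator output at time t
    candidates:   (k, d_z) one true future latent plus k-1 distractors
    positive_idx: index of the true future latent in `candidates`
    W:            (d_z, d_c) step-specific projection matrix
    """
    scores = candidates @ (W @ context)            # (k,) similarity scores
    scores = scores - scores.max()                 # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum())
    return float(-log_probs[positive_idx])

rng = np.random.default_rng(0)
d_c, d_z, k = 8, 4, 10
W = rng.normal(size=(d_z, d_c))
context = rng.normal(size=d_c)
candidates = rng.normal(size=(k, d_z))
candidates[3] = W @ context                        # make index 3 the aligned "future"
loss = infonce_loss(context, candidates, positive_idx=3, W=W)
print(loss)  # near zero: the aligned candidate scores far above the distractors
```

In CPC a separate projection $W$ is learned per prediction step, and minimising this loss over many time steps is what drives the encoder toward predictive representations.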
\begin{table*}[htbp] \caption{Shallow overview and comparison of all pre-training techniques.} \label{tab:overview} \centering \small \begin{tabularx}{\textwidth}{| l || X | X | X | X | p{2.3cm} |} \toprule \textbf{Model} & \textbf{Feature encoder} & \textbf{Aggregator} & \textbf{Objective} & \textbf{Output dimension} & \textbf{\# Parameters}\\ \midrule \textbf{APC} & Filterbank & GRU & Reconstruct future frame & 512 & 4.1M \\ \hline \textbf{Mockingjay} & Filterbank & Bidirectional Transformer & Reconstruct masked frame & 768 & 21.3M \\ \hline \textbf{CPC} & CNN & LSTM & Identify future feature & 256 & 1.8M \\ \hline \textbf{wav2vec} & CNN & CNN & Identify future feature & 512 & 32.5M \\ \hline \textbf{wav2vec 2.0} & CNN & Transformer & Identify quantised future feature & 768 (base), 1024 (large) & 95.0M (base), 317.3M (large) \\ \bottomrule \end{tabularx} \end{table*} \subsubsection{APC} In APC, autoregressive models encode the temporal information in the past sequence of frames, for example with Gated Recurrent Units (GRU). A future frame, $n$ steps ahead of the current frame, is linearly predicted from the autoregressive outputs. The model is then trained with an L1 reconstruction loss on the predicted frame. We use a model with 3 GRU layers and predict 5 steps ahead \cite{chung2019unsupervised, chung2020vectorquantized}. The outputs of the last GRU layer are extracted as features for the downstream task. \subsubsection{Mockingjay} While APC conditions its prediction on past context only, Mockingjay leverages both past and future context to predict a frame that has been masked out. The encoder is a deep bidirectional Transformer \cite{vaswani2017attention} that learns contextualised representations, which are extracted from the last layer. These representations are linearly mapped to predict the masked frames, and the model is trained with a reconstruction loss between the predicted and true frames. 
We use the base model with 3 Transformer blocks in the encoder \cite{Liu_2020, S3PRL}. \subsubsection{CPC} CPC directly applies a stack of strided convolutional layers to the raw waveform to encode the sequence in the latent space. An autoregressive model (the aggregator) then looks at the representations of the past sequence and its output is mapped to predict the latent representations for several steps in the future. The loss is not reconstructive, but contrastive: given the aggregator output, the model has to distinguish the correct sample from a set of distractors drawn from windows more distant in time or from different sequences. We use the modified CPC approach \cite{riviere2020unsupervised} where the encoder consists of 5 CNN layers, the autoregressive model is an LSTM and the prediction network is a 1-layer Transformer network. The model predicts 1 to 12 steps in the future, with a separate projection layer for every step, and is trained with 10 distractors. The outputs of the autoregressive model are the extracted features. \subsubsection{wav2vec} Wav2vec is built on CPC but uses a fully convolutional model. The autoregressive model is replaced by a context network consisting of 12 convolutional layers. Two additional linear transformations increase the capacity of the encoder (this architecture is called \textit{wav2vec large} in the corresponding paper \cite{schneider2019wav2vec}). The outputs of the context network are the feature vectors \cite{ott2019fairseq}. \subsubsection{wav2vec 2.0} Wav2vec 2.0 combines ideas from wav2vec \cite{schneider2019wav2vec}, vq-wav2vec \cite{baevski2020vqwav2vec} and MLM. The encoder computes latent speech representations from the raw waveform with 7 temporal convolution blocks. A certain proportion of the latent features is masked before being fed to the aggregator, which is a Transformer network. At the same time, a quantisation module maps the latent feature vectors to discretised versions.
The final training objective is then to distinguish the true quantised representation for a masked time step, given the aggregator output \cite{baevski2020wav2vec}. We differentiate between the base and large architecture of the model, which contain 12 and 24 Transformer blocks in the aggregator, respectively. The contextual features at the output of the aggregator are extracted for downstream tasks \cite{ott2019fairseq}. We duplicate them in time to mimic a stride of 10ms instead of 20ms. The wav2vec 2.0 model can be finetuned on a labelled set. To this end, an extra linear layer is added on top of the context network and a CTC loss is applied with the transcription characters as targets. The encoder is frozen during finetuning. Finetuning is done after the pre-training is completed. Finally, XLSR-53 is a large wav2vec 2.0 model pre-trained on 53 languages simultaneously \cite{conneau2020unsupervised}. The authors have shown that the quantised speech representations can express connections between languages when trained in a multilingual setup. Due to limited resources, we pre-train wav2vec 2.0 base models for 100k updates and finetune for 500k updates, and we do not pre-train our own wav2vec 2.0 large models but only finetune existing pre-trained models. \subsection{Downstream Feature Evaluation} \subsubsection{Phone Classification} \label{sec:phoneclass} We train an external phone classifier consisting of just one linear layer and a softmax layer \cite{S3PRL}, with the features extracted from the pre-trained models as input. All pre-trained features are compared to the baseline of 80-dimensional log-mel filterbank features, including second order delta features and mean-variance normalisation. For every utterance, there is a phone label every 10ms, corresponding to the stride of the input features. The classifier is trained with a cross-entropy loss.
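A minimal sketch of this probing setup (softmax regression over frame-level features, trained with cross-entropy); the features and phone labels below are synthetic placeholders for the extracted representations and forced-alignment labels:

```python
import numpy as np

def train_linear_probe(feats, labels, n_classes, lr=0.1, epochs=300):
    """One linear layer + softmax, trained with full-batch cross-entropy."""
    n, d = feats.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = feats @ W + b
        logits -= logits.max(axis=1, keepdims=True)      # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - onehot) / n                      # d(cross-entropy)/d(logits)
        W -= lr * (feats.T @ grad)
        b -= lr * grad.sum(axis=0)
    return W, b

def frame_accuracy(feats, labels, W, b):
    """Fraction of frames assigned the correct phone label."""
    return float((np.argmax(feats @ W + b, axis=1) == labels).mean())

# Synthetic stand-in: 3 "phones", features clustered around class means.
rng = np.random.default_rng(1)
labels = rng.integers(0, 3, size=1500)
means = rng.normal(0.0, 1.0, size=(3, 16))
feats = means[labels] + rng.normal(0.0, 0.3, size=(1500, 16))

W, b = train_linear_probe(feats, labels, n_classes=3)
print(frame_accuracy(feats, labels, W, b))  # well above the 1/3 chance level
```

Because the probe is linear, its accuracy directly measures how linearly separable the phone classes are in the feature space, which is the quantity reported in the tables below.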
We report the accuracy of the classifier at predicting the correct phone label for every 10ms window, instead of using the most voted phone during its entire duration, because the learned representations should contain phonetic information even at the start of a phone. For English experiments we use the phone labels from \cite{oord2019representation}, which have been generated by forced alignment with Kaldi \cite{Povey_ASRU2011} using pre-trained models on LibriSpeech, and mapped to 41 classes. For Flemish experiments we use the phone labels provided in the Corpus Gesproken Nederlands (Section~\ref{sec:CGN}). The phone sequences have been computed by forced alignment on the manually checked orthographic transcripts with SPRAAK \cite{demuynck_laureys}, and have been partly manually checked as well. There are 49 distinct phone classes \cite{CGN_Oostdijk}. \subsubsection{ASR} \label{sec:asrmet} We train a baseline HMM-DNN model with Kaldi \cite{Povey_ASRU2011} on MFCC features. The HMM-GMM stage models triphones and includes LDA, MLLT and fMLLR transformations. It is trained on MFCC features to compute alignments and build a phonetic tree with one state per phone. For the pre-trained models, we reuse the alignments and tree from the MFCC model and only train the DNN model with the extracted features as input. We make a distinction between a large DNN model containing 14 TDNN-F layers \cite{Povey2018} (similar to the Switchboard recipe) and a small DNN model with only 3 TDNN-F layers. We leave out iVector extraction and speed perturbation, and remove the delta layers for pre-trained features. We decode with a pruned trigram language model and use a lexicon of 100k words. We report Word Error Rates based on the Levenshtein distance, but make a correction for inconsistencies in compounding (which occur frequently in Dutch).
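The underlying WER metric is the word-level Levenshtein (edit) distance between hypothesis and reference, divided by the reference length. A minimal sketch, without the compound-correction step mentioned above:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic programming over prefix pairs (substitution/insertion/deletion).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("de kat zit op de mat", "de kat zat op mat"))  # 2 edits / 6 reference words
```

The compound correction would additionally merge or split hypothesis words before scoring, so that, e.g., a compound written as two words is not counted as two errors.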
\section{Data} \subsection{Flemish Dutch datasets} \subsubsection{Labelled data} \label{sec:CGN} Corpus Gesproken Nederlands (CGN) \cite{CGN_Oostdijk} - also called Spoken Dutch Corpus - is a manually annotated speech database of around 900 hours of Dutch, of which 270 hours correspond to Flemish Dutch. CGN contains both phonetical and word-level transcriptions and segmentations. The labelled data can be used for finetuning, for ASR model training and for the proposed evaluation procedures. We make the distinction between three training sets of data, based on the type of speech. \textbf{VL-train-clean} This set contains 35h of prepared, read speech by professional readers. This corresponds to component O of CGN. \textbf{VL-train-other} This set contains several types of speech, including read speech (\textit{VL-train-clean}), news reports, interviews, lectures, sports commentary, etc. This set holds 145h of data from components B,F,G,H,I,J,K,L,M,N,O of CGN. \textbf{VL-train-all} This set contains all components from the CGN database and corresponds to 270 hours of speech. The difference with \textit{VL-train-other} is the inclusion of narrowband telephone speech (8kHz resampled to 16kHz) and spontaneous conversational speech, which correspond to respectively components C,D and component A of CGN. In a similar way, we make a distinction between \textbf{VL-test-clean} (4h) and \textbf{VL-test-other} (15h, including the 4h from \textit{VL-test-clean}). There is no overlap in speakers with the train sets. For phone classification experiments in English, we use the \textit{train-clean-100} set of LibriSpeech \cite{librispeech} and use the train-test split and phone labels from \cite{oord2019representation}. \subsubsection{Unlabelled data} We have created a dataset of 450h of unlabelled data for unsupervised experiments in Flemish Dutch, by extracting audio from online available resources. We refer to this set as \textbf{VL-unsup}. 
This set consists of 200h of data from recordings in the Flemish parliament, 100h of audio from broadcast TV news and 150h of audio from TV talkshows. For pre-training, we use this set and the labelled sets without the transcriptions. \subsection{Pre-trained models} \label{sec:prm} For some experiments, we use off-the-shelf pre-trained models for APC, Mockingjay, CPC, wav2vec and wav2vec 2.0 \cite{S3PRL, ott2019fairseq}. These models have been pre-trained on English audiobooks from LibriSpeech (\textit{LS-960}) \cite{librispeech}, LibriLight (\textit{LL-60k}) \cite{librilight} or both, i.e. LibriVox (\textit{LV-60k}) \cite{Pratap_2020}. The XLSR-53 model is trained on 56k hours of data from 53 different languages. The XLSR data originates from CommonVoice \cite{ardila2020common}, Multilingual LibriSpeech \cite{Pratap_2020}, and BABEL \cite{babel}. It includes around 1.6k hours of Dutch \cite{conneau2020unsupervised} of which we reckon only a very small part is Flemish Dutch (a few hours in CommonVoice). We also use a wav2vec 2.0 model pre-trained on the Dutch part of VoxPopuli (\textit{VP-NL-4.5k}) \cite{voxpopuli}, which contains 4.5k hours of Netherlands Dutch speech recordings from the European parliament. \section{Discussion} \subsection{Phone classification} \subsubsection{Applicability of off-the-shelf models to Flemish Dutch} First, we perform phone classification as explained in Section~\ref{sec:phoneclass}. For experiments in English, we train and test the classifier on a train-test split of LibriSpeech \textit{train-clean-100}. For experiments in Flemish, we either train a classifier on \textit{VL-train-clean} and test on \textit{VL-test-clean}, or train on \textit{VL-train-other} and evaluate on \textit{VL-test-other}. Table~\ref{tab:prep} shows the phone classification accuracies with features extracted from English pre-trained models that are available online (see Section~\ref{sec:prm}).
\begin{table}[hbt] \centering \caption{Linear phone classification accuracy (\%) with features extracted from off-the-shelf models pre-trained on English. We evaluate classification on English and Flemish.} \footnotesize \begin{tabular}{l|c|c|c} \toprule \multirow{2}{*}{\textbf{Model}} & \textbf{English} & \multicolumn{2}{c}{\textbf{Flemish}} \\ & \textit{LS-tc100} & \textit{VL-test-clean} & \textit{VL-test-other} \\ \midrule Baseline & 48.0 & 48.5 & 39.3 \\ APC & 72.7 & 71.4 & 60.1 \\ Mockingjay & 68.1 & 71.4 & 59.1 \\ CPC & 71.3 & 71.7 & 60.5 \\ wav2vec & \textbf{78.4} & \textbf{73.3} & \textbf{62.4} \\ wav2vec 2.0 (base) & 75.1 & 71.7 & 58.8 \\ \bottomrule \end{tabular} \label{tab:prep} \end{table} \noindent The relative improvements with respect to the baseline as a result of pre-training are consistent across both languages. The accuracy on \textit{VL-test-clean} is of a similar magnitude as the accuracy on \textit{LS-tc100}, which can be explained by the fact that both sets contain rather easy, clean speech. On \textit{VL-test-clean} and \textit{VL-test-other}, we see absolute accuracy improvements of more than 20\%. This shows that the pre-training techniques improve linear phone separability, even when the target language differs from the pre-training language. \subsubsection{Language Matching} Second, we examine the effect of matching the domain (i.e. the language, but also the type of speech) of the pre-training speech to the target speech. We pre-train models on Flemish Dutch data, compare them to other pre-trained models, and investigate the effect of finetuning wav2vec 2.0 on a Flemish subset. Table~\ref{tab:pca} reports phone classification accuracies for pre-training and finetuning on several datasets. 
\begin{table}[ht] \centering \caption{Phone classification accuracy (PCA) percentage when training a classifier on \textit{VL-train-clean} and testing on \textit{VL-test-clean} ('clean'), and when training a classifier on \textit{VL-train-other} and testing on \textit{VL-test-other} ('other').} \footnotesize \begin{tabularx}{\columnwidth}{X|X|X|p{0.5cm}|p{0.5cm}} \toprule \multirow{2}{\hsize}{\textbf{Model}} & \multirow{2}{\hsize}{\textbf{Pre-training}} & \multirow{2}{\hsize}{\textbf{Finetuning}} & \multicolumn{2}{c}{\textbf{PCA}} \\ & & & clean & other \\ \midrule Baseline (Filterbank) & -- & -- & 48.5 & 39.3 \\ \hline \multirow{5}{\hsize}{APC} & LS-960 & -- & 71.4 & 60.1 \\ \cline{2-5} & VL-train-clean & -- & 66.9 & 54.8 \\ \cline{2-5} & VL-train-other & -- & 67.6 & 57.1 \\ \cline{2-5} & VL-unsup & -- & \textbf{73.3} & \textbf{63.3} \\ \cline{2-5} & VL-train-all + VL-unsup & -- & \textbf{73.3} & 63.0 \\ \hline \multirow{2}{\hsize}{Mockingjay} & LS-960 & -- & 71.4 & 59.1 \\ \cline{2-5} & VL-train-clean & -- & 65.2 & 53.5 \\ \hline \multirow{4}{\hsize}{CPC} & LL-60k & -- & 71.7 & \textbf{60.5} \\ \cline{2-5} & VL-train-clean & -- & \textbf{72.6} & 55.6 \\ \cline{2-5} & VL-train-other & -- & 69.3 & 59.5 \\ \cline{2-5} & VL-unsup & -- & 66.8 & 57.4 \\ \cline{2-5} & VL-train-all + VL-unsup & -- & 67.5 & 58.1 \\ \hline wav2vec & LS-960 & -- & 73.3 & 62.4 \\ \hline \multirow{5}{\hsize}{wav2vec 2.0 base} & LS-960 & -- & 71.7 & 58.8 \\ \cline{2-5} & VP-NL-4.5k & -- & 64.5 & 50.0 \\ \cline{2-5} & VL-train-other & -- & 47.4 & 36.1 \\ \cline{2-5} & VL-train-all + VL-unsup & -- & 54.7 & 43.9 \\ \cline{2-5} & LS-960 & VL-train-other & \textbf{83.6} & \textbf{76.2} \\ \cline{2-5} & VP-NL-4.5k & VL-train-other & \textbf{83.6} & 76.1 \\ \cline{2-5} & VL-train-other & VL-train-other & 81.3 & 74.1 \\ \cline{2-5} & VL-train-all + VL-unsup & VL-train-other & 82.2 & 75.0 \\ \hline \multirow{7}{\hsize}{wav2vec 2.0 large} & LS-960 & -- & 55.9 & 45.2 \\ \cline{2-5} & LV-60k & -- & 24.6 & 
14.3 \\ \cline{2-5} & XLSR-53 & -- & 34.4 & 21.8 \\ \cline{2-5} & VP-NL-4.5k & -- & 58.2 & 45.7 \\ \cline{2-5} & LS-960 & VL-train-other & 81.2 & 73.4 \\ \cline{2-5} & LV-60k & VL-train-other & 85.0 & 76.6 \\ \cline{2-5} & XLSR-53 & VL-train-other & \textbf{86.4} & \textbf{79.1} \\ \cline{2-5} & VP-NL-4.5k & VL-train-other & 84.12 & 76.3 \\ \bottomrule \end{tabularx} \label{tab:pca} \end{table} For APC, we notice an improvement over the English pre-trained model when we match the training and target language and use a sufficient amount of data (but still less than LibriSpeech). For CPC, we experienced convergence difficulties and a high sensitivity to the number of training cases. We see an improvement over the pre-trained model on \textit{VL-test-clean} when only training on \textit{VL-train-clean}. This might suggest that domain matching is important for CPC. The LibriLight pre-trained model is trained on much more data (60k hours), which can explain the strong performance on \textit{VL-test-other}. For wav2vec 2.0, the base models trained on Flemish data and the pre-trained model on VoxPopuli perform worse than the LibriSpeech model, despite matching language (VL) or using more data in a related language (VP). The former is most likely explained by sub-optimal training; the latter could be explained by the fact that the VoxPopuli parliament recordings reflect acoustic conditions different from certain clean-speech components of CGN. Finetuning on Flemish leads to very high phone classification accuracies for all models. \noindent For the large high-capacity wav2vec 2.0 models, we note low accuracies without finetuning. Other works corroborate this finding and ascribe it to the problem-agnostic pre-training, and have shown that certain audio features are more easily accessible from the middle layers in very deep transformer models than from the output layer \cite{baevski2021unsupervised, ma2021probing}.
Finetuning (with graphemic transcripts) alleviates this discrepancy and gives the large models an improved accuracy over the base models. It seems that the large model also benefits more from a large amount of pre-training data, as the LibriVox and XLSR models show. The XLSR-53 model reports the highest score. It is trained on a similar amount of data as the LibriVox model, but the training data contains Dutch, German and English. We also note that (Flemish) Dutch voiced stops are prevoiced, in contrast to the aspirated voiceless stops of English; this variation is better covered in the XLSR-53 training set. \subsection{ASR Results} \subsubsection{Clean ASR} Table~\ref{tab:asr} reports WER results on \textit{VL-test-other} of HMM-DNN ASR experiments with a large DNN and features from different models, as described in Section~\ref{sec:asrmet}. Every ASR model (including the baseline model) is trained on \textit{VL-train-other}. \begin{table}[ht] \centering \caption{ASR experiments with large DNN ASR model, reporting WER on \textit{VL-test-other}.} \footnotesize \begin{tabularx}{\columnwidth}{X|X|X|p{0.65cm}} \toprule \textbf{Model} & \textbf{Pre-training} & \textbf{Finetuning} & \textbf{WER} \\ \midrule Baseline (MFCC) & -- & -- & 15.10 \\ \hline \multirow{3}{*}{APC} & LS-960 & -- & 16.02 \\ & \multirow{2}{\hsize}{VL-train-all + VL-unsup} & \multirow{2}{*}{--} & \multirow{2}{*}{16.20} \\ & & & \\ \hline CPC & LS-960 & -- & 15.03 \\ \hline wav2vec & LS-960 & -- & 14.89 \\ \hline \multirow{3}{\hsize}{wav2vec 2.0 base} & LS-960 & -- & 13.84 \\ \cline{2-4} & VL-train-other & -- & 14.44 \\ \cline{2-4} & VL-train-all + VL-unsup & -- & 13.52 \\ \cline{2-4} & LS-960 & VL-train-other & \textbf{11.42} \\ \cline{2-4} & VL-train-other & VL-train-other & 13.41 \\ \cline{2-4} & VL-train-all + VL-unsup & VL-train-other & 11.76 \\ \hline \multirow{6}{\hsize}{wav2vec 2.0 large} & LS-960 & -- & 14.33 \\ \cline{2-4} & LV-60k & -- & 14.72 \\ \cline{2-4} & VP-NL-4.5k & -- & 16.32 \\
\cline{2-4} & XLSR-53 & -- & 13.40 \\ \cline{2-4} & LS-960 & VL-train-other & 10.87 \\ \cline{2-4} & LV-60k & VL-train-other & 12.65 \\ \cline{2-4} & XLSR-53 & VL-train-other & \textbf{10.61} \\ \bottomrule \end{tabularx} \label{tab:asr} \end{table} \noindent The improvements compared to the baseline with MFCC are small, if any, except for wav2vec 2.0. In contrast to the phone classification experiments, we see an improvement over the LibriSpeech model (960h) when we pre-train the base model on a comparable amount of Flemish (720h) without finetuning, which supports the idea that the combination of a matching pre-training language and a large amount of data is key. \noindent The poor performance of the large VoxPopuli model can possibly be explained by a poor transfer from Netherlands Dutch to Flemish and different acoustic conditions compared to the test set. Also, contrary to phone classification, the large wav2vec 2.0 model pre-trained on LibriSpeech outperforms the model pre-trained on LibriVox. As in the phone classification experiments, the XLSR-53 model, which has large variability in its training data, yields a significant WER improvement, manifesting a positive cross-lingual transfer to Flemish, and the best results are obtained when finetuning a model on an annotated Flemish subset. We obtain almost 30\% relative WER improvement when finetuning XLSR-53 on Flemish, compared to the baseline. The difference with the LibriSpeech model is, however, small. We postulate that a large wav2vec 2.0 model trained on more Flemish data would equal or improve this result. \subsubsection{Effect of amount of pre-training and finetuning data} We quantitatively examine the effect of an increasing amount of data used for pre-training wav2vec 2.0 base models (unlabelled) or finetuning XLSR-53 (labelled). We shuffle all available data from all sets. We also evaluate finetuning on different types of speech (from different sets).
For the unlabelled dataset, this distinction is not trivial. Table~\ref{tab:asr2} shows the results. The ASR model is always trained on \textit{VL-train-other}. \begin{table}[ht] \caption{WER with DNN ASR model as a function of the amount of Flemish data used for pre-training or finetuning.} \label{tab:asr2} \begin{subtable}{\columnwidth} \centering \footnotesize {\begin{tabularx}{\columnwidth}{X|X|X|X|X|X|X|X|X} \toprule 10h & 30h & 50h & 100h & 150h & 250h & 350h & 500h & 700h \\ \hline 31.87 & 20.85 & 16.76 & 15.55 & 15.74 & 14.76 & 14.34 & 14.73 & 13.52 \\ \bottomrule \end{tabularx}} \caption{Unlabelled data for pre-training a base wav2vec 2.0 model (no finetuning), large ASR DNN.} \label{pt1} \end{subtable}% \begin{subtable}{\columnwidth} \centering \footnotesize {\begin{tabularx}{\columnwidth}{X|X|X|X|X|X|X|X|X} \toprule 0h & 1h & 10h & 20h & 30h & 50h & 90h & 150h & 250h \\ \hline 27.75 & 13.84 & 12.08 & 11.32 & 11.19 & 10.71 & 10.61 & 10.53 & 10.50 \\ \bottomrule \end{tabularx}} \caption{Labelled data for finetuning XLSR-53, small ASR DNN.} \label{tab:ft1} \end{subtable}% \begin{subtable}{\columnwidth} \centering \footnotesize {\begin{tabularx}{\columnwidth}{p{0.8cm}|p{1.3cm}|p{1.4cm}|X|X} \toprule VL set & No FT (0h) & Clean (29h) & Other (128h) & All (248h) \\ \hline WER & 13.40 & 12.35 & 10.61 & 10.58 \\ \bottomrule \end{tabularx}} \caption{Different sets of CGN for finetuning XLSR-53, large ASR DNN.} \label{tab:ft2} \end{subtable} \end{table} \noindent For pre-training, it is necessary to have a considerable amount of data to improve upon the baseline, and more Flemish data gives improvements. It seems that the learned audio representations include acoustic details aside from more abstract phoneme qualities, as the WER on 150h of shuffled data is higher than when pre-training on an equal amount of matched data (\textit{VL-train-other} in Table~\ref{tab:asr}). This might suggest a high dependency on acoustic conditions.
For finetuning, the data should match the type or conditions of the test set for optimal results, and the improvements saturate with more data. Note that a small DNN suffices after finetuning, but a large DNN is required when the XLSR model is not finetuned (first column of Table~\ref{tab:ft1}). This is in line with the poor phone classification results of the large models without finetuning. \subsubsection{ASR in noisy environments} We investigate the robustness of wav2vec 2.0 to noisy and reverberated speech by replicating the \textit{VL-test-other} set in 4 different scenarios: filtered with RIRs (\textit{rev}), with added noise at a certain SNR (\textit{noise1}: 5-20dB, \textit{noise2}: 0-15dB) and both (\textit{rev} + \textit{noise3}: 5-15dB). We use noises from multiple sources (NTT Noise-DB, CHIME2, NoiseX, DEMAND, Humming) and RIRs from the Aachen Impulse Response Database. In Table~\ref{tab:noisy} we compare wav2vec 2.0 large models pre-trained on LibriVox: without finetuning, finetuned on \textit{VL-train-other} ('clean'), and finetuned on a fourfold augmented \textit{VL-train-other} ('aug'), obtained by adding noise and reverberation.
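The augmented test sets above mix noise at a prescribed SNR. The following is a minimal sketch of SNR-controlled mixing using the standard power-ratio scaling rule; the function name and the synthetic signals are our placeholders, not the actual augmentation pipeline used in the experiments:

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`, then add it."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Required noise power is p_speech / 10^(snr_db / 10); take the amplitude scale.
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)   # placeholder for one second of 16 kHz speech
noise = rng.standard_normal(16000)    # placeholder noise segment
noisy = mix_at_snr(speech, noise, snr_db=5.0)
```

Reverberation would additionally convolve the clean signal with an RIR before mixing.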
\begin{table}[ht] \centering \caption{WER with large DNN ASR on augmented \textit{VL-test-other} with LibriVox pre-trained large wav2vec 2.0 models.} \footnotesize \begin{tabularx}{\columnwidth}{X|X|X|X|X|X|X} \toprule \multirow{2}{\hsize}{\textbf{Model}} & \multirow{2}{\hsize}{\textbf{FT}} & \multicolumn{5}{c}{\textbf{WER}} \\ & & \textit{clean} & \textit{rev} & \textit{noise1} & \textit{noise2} & \textit{rev + noise3} \\ \midrule MFCC & -- & 15.10 & 28.36 & 20.58 & 26.26 & 39.21 \\ \hline w2v2 & -- & 14.71 & 27.12 & 19.96 & 25.28 & 39.19 \\ \hline w2v2 & clean & 12.43 & 22.60 & 16.31 & 20.61 & 33.32 \\ \hline w2v2 & aug & 12.13 & 18.08 & 14.64 & 17.39 & 24.43 \\ \bottomrule \end{tabularx} \label{tab:noisy} \end{table} \noindent Finetuning on augmented data gives strong improvements over finetuning on clean data in the reverberated and noisy settings, with 3-9\% absolute WER reduction. There is even a slight improvement in the clean setting as well, probably because of the larger amount of finetuning data. \section{Conclusion} Pre-trained features on English speech transfer well to Flemish Dutch in terms of improving linear phone separability. These self-supervised pre-trained models are readily available and easy to use. Matching the pre-training and target language further improves results, but either matching the type of speech or using a larger amount of data is necessary. The recently proposed wav2vec 2.0 model appears superior, especially when finetuned on data from the target language. Finally, we obtain the best results with the large multilingually trained XLSR-53 model and see nearly 30\% improvement in WER by finetuning the XLSR-53 model on Flemish, compared to the baseline. We show the importance of matching pre-training and target language and acoustic conditions. \section{Acknowledgements} This research received funding from the Flemish Government under the "Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen" programme.
\pagebreak \bibliographystyle{IEEEbib}
\section{Introduction} A celebrated theorem of Savitch \cite{Savitch70} states that $NSPACE(S)$ $\subseteq$ $DSPACE(S^2)$. In particular, Savitch gave a deterministic algorithm to solve {\sc{ST-Connectivity}}\ (an ${\bf{NL}}$-complete problem) using $O({\log}^2{n})$ space, implying ${\bf{NL}}$\ $\subseteq$ $DSPACE({\log}^2{n})$. Savitch's algorithm runs in time $2^{O({\log}^2{n})}$. It has been a longstanding open problem to improve Savitch's theorem, i.e., to prove (i) ${\bf{NL}}$\ $\subseteq {DSPACE}(o({\log}^{2}{n}))$ or (ii) ${\bf{NL}}$\ $\subseteq$ ${\bf{{SC}^{2}}}$, i.e., {\sc{ST-Connectivity}}\ can be solved by a deterministic algorithm in polynomial time and $O({\log}^2{n})$ space. While Savitch's theorem itself has not been improved in the last four decades, studying the space complexity of several special cases of {\sc{ST-Connectivity}}\ has provided new insights into the space-bounded complexity classes. Allender's survey \cite{allender-stconn-survey} gives an update of progress related to several special cases of {\sc{ST-Connectivity}}. Recently, {\sc{ST-Connectivity}}\ in planar DAGs with $O({\log}{n})$ sources was shown to be in ${\bf{L}}$\ \cite{planar-few-sources}. Stolee and Vinodchandran proved that {\sc{ST-Connectivity}}\ in DAGs with $2^{O(\sqrt{{\log}n})}$ sources embedded on surfaces of genus $2^{O(\sqrt{{\log}n})}$ is in ${\bf{L}}$\ \cite{reach-surface-embedded}. All the connectivity problems considered in the literature so far are essentially special cases of {\sc{ST-Connectivity}}. In the first half of this paper, we introduce a new family of graph connectivity problems, which we call {\em{graph realizability problems}}. All of our graph realizability problems are generalizations of {\sc{Undirected ST-Connectivity}}. {\sc{ST-Realizability}}, the most general graph realizability problem, is ${\bf{LogCFL}}$-complete. We define the corresponding complexity classes that lie between ${\bf{L}}$\ and ${\bf{LogCFL}}$\ and study their relationships.
As special cases of our graph realizability problems we define two natural problems, {\sc{Balanced ST-Connectivity}}\ and {\sc{Positive Balanced ST-Connectivity}}, that lie between ${\bf{L}}$\ and ${\bf{NL}}$. In the second half of this paper, we study the space complexity of ${\bf{SGSLogCFL}}$\ (see \secref{subsec:sgslogcfl} for definition). We define generalizations of graph squaring and transitive closure, present efficient parallel algorithms for ${\bf{SGSLogCFL}}$\ and use the techniques of Trifonov \cite{trifonov-logloglog} to show that ${\bf{SGSLogCFL}}$\ is contained in $DSPACE({\log}{n}{\log}{\log}{n})$. This implies that {\sc{Balanced ST-Connectivity}}, a natural graph connectivity problem which lies between ${\bf{L}}$\ and ${\bf{NL}}$, is contained in $DSPACE({\log}{n}{\log}{\log}{n})$. \subsection{Preliminaries, Related Work and Our Results} \noindent {\bf{Auxiliary Pushdown Automata}} : A language is accepted by a non-deterministic pushdown automaton (PDA) if and only if it is a context-free language. Deterministic context-free languages are those accepted by the deterministic PDAs. ${\bf{LogCFL}}$\ is the set of all languages that are log-space reducible to a context-free language. Similarly, ${\bf{LogDCFL}}$\ is the set of all languages that are log-space reducible to a deterministic context-free language. There are many equivalent characterizations of ${\bf{LogCFL}}$. Sudborough \cite{Sudborough78} gave the machine class equivalence. Ruzzo \cite{Ruzzo80} gave an alternating Turing machine (ATM) class equivalent to ${\bf{LogCFL}}$. Venkateswaran \cite{Venkateswaran91} gave a circuit characterization and showed that ${\bf{LogCFL}}$\ = ${\bf{{SAC}^{1}}}$. For a survey of parallel complexity classes and ${\bf{LogCFL}}$\ see Limaye's thesis \cite{nutan-logcfl}. An Auxiliary Pushdown Automaton (NAuxPDA or simply AuxPDA), introduced by Cook \cite{cook-auxpda}, is a two-way PDA augmented with an $S(n)$-space bounded work tape. 
If a deterministic two-way PDA is augmented with an $S(n)$-space bounded work tape then we get a Deterministic Auxiliary Pushdown Automaton (DAuxPDA). We present the formal definitions in the {\bf{appendix}} (see Section \ref{sec:symmauxpda}). Let {\em{NAuxPDA-SpaceTime}} ($S(n)$,$T(n)$) be the class of languages accepted by an AuxPDA with $S(n)$-space bounded work tapes and the running time bounded by $T(n)$. Let the corresponding deterministic class be {\em{DAuxPDA-SpaceTime}} ($S(n)$,$T(n)$). It is easy to see that ${\bf{NL}}$ $\ \subseteq\ $ {\em{NAuxPDA-SpaceTime}} ($O({\log}n)$, $poly(n)$). Sudborough showed that {\em{NAuxPDA-SpaceTime}} ($O({\log}n)$, $poly(n)$) = ${\bf{LogCFL}}$\ and {\em{DAuxPDA-SpaceTime}} ($O({\log}n)$,$poly(n)$) = ${\bf{LogDCFL}}$\ \cite{Sudborough78}. Using ATM simulations, Ruzzo showed that ${\bf{LogCFL}}$\ $\subseteq$ ${\bf{{NC}^{2}}}$\ \cite{Ruzzo80}. Simpler proofs of {\em{DAuxPDA-SpaceTime}} ($O({\log}n)$,$poly(n)$) = ${\bf{LogDCFL}}$\ and ${\bf{LogCFL}}$\ = ${\bf{{SAC}^{1}}}$\ are given in \cite{circuits-cfls}. Many proof techniques and results obtained in the context of ${\bf{NL}}$\ are generalized to obtain the corresponding results for ${\bf{LogCFL}}$. For example : (i) Borodin \cite{borodin-nl-nc2} proved that ${\bf{NL}}$\ $\subseteq$ ${\bf{{NC}^{2}}}$. Ruzzo \cite{Ruzzo80} introduced tree-size-bounded alternating Turing machines, gave a new characterization of ${\bf{LogCFL}}$, and proved that ${\bf{LogCFL}}$\ $\subseteq$ ${\bf{{NC}^{2}}}$. (ii) Immerman \cite{Immerman88} and Szelepcs\'{e}nyi \cite{Szelepcsenyi87} proved that ${\bf{NL}}$\ = ${\bf{co}}$-${\bf{NL}}$. Borodin et al. \cite{BCDRT89} generalized their inductive counting technique and proved that ${\bf{LogCFL}}$\ = ${\bf{co}}$-${\bf{LogCFL}}$. In fact, they proved a stronger result showing that ${\bf{SAC}}^i$ is closed under complementation for $i>0$. (iii) Wigderson \cite{Wigderson-parityNL} proved that ${\bf{NL}}$\ $\leq_r$\ ${\bf{{\oplus}NL}}$.
G{\'a}l and Wigderson \cite{GalWigderson96} proved that ${\bf{LogCFL}}$\ $\leq_r$\ ${\bf{{\oplus}LogCFL}}$. (iv) Nisan \cite{nisan-rlsc} proved that ${\bf{BPL}}$\ $\subseteq$ ${\bf{{SC}^{2}}}$. Venkateswaran \cite{venkat-auxpda, venkat-auxpda-techreport} proved that ${\bf{BPLogCFL}}$\ $\subseteq$ ${\bf{{SC}^{2}}}$\ and ${\bf{BPLogCFL}}$\ $\subseteq$ ${\bf{{NC}^{2}}}$. Here ${\bf{BPLogCFL}}$\ (resp. ${\bf{RLogCFL}}$\ and ${\bf{ZPLogCFL}}$) is the bounded error (resp. one-sided error and zero error) probabilistic version of ${\bf{LogCFL}}$. All the above results are elegant and non-trivial generalizations of the corresponding results in the logspace setting. Throughout this paper, we consider $O({\log}n)$-space bounded and polynomial-time bounded AuxPDAs. The {\em{surface configuration}} (introduced by Cook \cite{cook-auxpda}) of an AuxPDA, on an input $w$, consists of the state, contents and head positions of the work tapes, the head position of the input tape and the topmost symbol of the stack i.e., the rightmost symbol of the pushdown tape. Note that for an $S(n)$-space bounded AuxPDA, its surface configurations take only $O(S(n))$ space. In the rest of the paper, we will refer to surface configurations as configurations. For an input $w$, a pair of configurations $(C_1,C_2)$ is {\em{realizable}} if the AuxPDA can move from $C_1$ to $C_2$ ending with its stack at the same height as in $C_1$, and without popping its stack below its level in $C_2$ for any of the intermediate configurations. An AuxPDA $M$ accepts an input $w$ iff there is a realizable pair $(I,A)$, where $I$ is the initial configuration and $A$ is the unique accepting configuration. \\ \noindent {\bf{Realizable Paths}} : {\sc{ST-Connectivity}}\ (resp. {\sc{Undirected ST-Connectivity}}) is the problem of determining whether there exists a path between two distinguished vertices $s$ and $t$ in a directed (resp. undirected) graph. 
These two graph connectivity problems played a central role in understanding the complexity classes ${\bf{L}}$, ${\bf{SL}}$\ and ${\bf{NL}}$\ \cite{AKLLR, SymmLogspace, BCDRT89, ustconn-log3by2, KWspan, SL=coSL, sakszhou3by2, ustconn-log4by3, zigzag, trifonov-logloglog, SL=L}. In \secref{sec:paths}, we introduce a new graph connectivity problem, which we call {\sc{ST-Realizability}}\ and prove that {\sc{ST-Realizability}}\ is complete for ${\bf{LogCFL}}$. {\sc{ST-Realizability}}\ is a generalization of {\sc{ST-Connectivity}}, which is ${\bf{NL}}$-complete. Our definition of {\sc{ST-Realizability}}\ is motivated by (i) Hardest CFL \cite{hardestCFL, Sudborough78, harrison-book}, (ii) Labeled Acyclic GAP, which is ${\bf{LogCFL}}$-complete \cite{greenlaw-book} (iii) CFL-reachability, which is ${\bf{P}}$-complete \cite{CFL-reachability, CFL-Pcomplete-PODS, CFL-Pcomplete-reps, CFL-Pcomplete-FOCS} and (iv) the insights from Niedermeier and Rossmanith's parsimonious simulation of ${\bf{LogCFL}}$\ by ${\bf{{SAC}^{1}}}$\ circuits \cite{NiedermeierR95}. Unlike {\sc{ST-Connectivity}}, using breadth-first search or depth-first search and keeping track of ``visited" vertices does not result in a polynomial time algorithm for {\sc{ST-Realizability}}. In \secref{sec:transitive}, we generalize the notions of transitive closure and graph squaring. Using these generalizations we present a natural polynomial time algorithm to compute the generalized transitive closure, thus solving {\sc{ST-Realizability}}. \\ \noindent {\bf{Symmetric AuxPDAs}} : In \secref{sec:ustconn}, we define {\sc{Undirected ST-Realizability}}, a ``symmetric" version of {\sc{ST-Realizability}}. To study the space complexity of {\sc{Undirected ST-Realizability}}\ we define {\em{symmetric}} auxiliary pushdown automata, a natural generalization of symmetric Turing machines introduced by Lewis and Papadimitriou \cite{SymmLogspace}.
We introduce a new complexity class called ${\bf{SLogCFL}}$, a generalization of ${\bf{SL}}$\ and show that ${\bf{LogDCFL}}$\ $\subseteq$ ${\bf{SLogCFL}}$\ $\subseteq$ ${\bf{LogCFL}}$. \\ \noindent {\bf{Graph Realizability Problems}} : In \secref{sec:realproblems}, we study several variants of {\sc{ST-Realizability}}\ and the corresponding complexity classes. All of these complexity classes lie between ${\bf{L}}$\ and ${\bf{LogCFL}}$. In particular, {\sc{Balanced ST-Connectivity}}\ and {\sc{Positive Balanced ST-Connectivity}}\ are natural graph connectivity problems that lie between ${\bf{L}}$\ and ${\bf{NL}}$. Figure \ref{fig:real-classes} summarizes the relationship among the newly defined classes. \\ \noindent {\bf{Space Efficient Algorithms}} : The ${\bf{L}}$\ vs ${\bf{SL}}$\ question (i.e., is there a log space algorithm for solving {\sc{Undirected ST-Connectivity}}) motivated an exciting series of new concepts and techniques. Prior to the work of Lewis and Papadimitriou \cite{SymmLogspace}, Aleliunas et al. \cite{AKLLR} proved that {\sc{Undirected ST-Connectivity}}\ $\in$ ${\bf{RL}}$, implying ${\bf{SL}}$\ $\subseteq$ ${\bf{RL}}$. Nisan, Szemer{\'e}di and Wigderson \cite{ustconn-log3by2} showed that {\sc{Undirected ST-Connectivity}}\ can be solved deterministically in space $O({\log}^{\frac{3}{2}}{n})$. This result was later subsumed by a beautiful result of Saks and Zhou, showing that ${BP}_{H}{SPACE}({S}) \subseteq {DSPACE}({S}^{3/2})$ \cite{sakszhou3by2}. Armoni et al. \cite{ustconn-log4by3} showed that {\sc{Undirected ST-Connectivity}}\ $\in DSPACE({\log}^{\frac{4}{3}}{n})$. Trifonov \cite{trifonov-logloglog} gave an $O({\log}{n}{\log}{\log}{n})$-space deterministic algorithm for {\sc{Undirected ST-Connectivity}}. Independently at the same time, using completely different techniques, Reingold \cite{SL=L} settled the space complexity of {\sc{Undirected ST-Connectivity}}\ and proved that ${\bf{SL}}$\ = ${\bf{L}}$.
The zig-zag graph product, introduced by Reingold, Vadhan and Wigderson \cite{zigzag-journal}, played a crucial role in Reingold's algorithm. Our space-efficient algorithm for ${\bf{SGSLogCFL}}$\ (see \secref{sec:sgslogcfl-logloglog}) is based on Trifonov's technique \cite{trifonov-logloglog}, which is based on Chong-Lam's parallel algorithm \cite{parallel-ustconn-lognloglogn} solving {\sc{Undirected ST-Connectivity}}\ in $O({\log}{n}{\log}{\log}{n})$ time on EREW PRAM. This necessitates the development of such a parallel algorithm for ${\bf{SGSLogCFL}}$. \\ \noindent {\bf{Parallel Algorithms}} : Hirschberg, Chandra and Sarwate \cite{parallel-ustconn-log2n} presented an $O({\log}^2{n})$ time parallel algorithm using $n^2/{\log}{n}$ processors on a CREW PRAM to find connected components of an undirected graph. Their algorithm remained the best known for almost a decade. In a breakthrough work, Johnson and Metaxas \cite{parallel-ustconn-log3by2n} presented a CREW algorithm running in $O({\log}^{\frac{3}{2}}{n})$ time using $n+m$ processors. Subsequently they improved their algorithm to run on an EREW PRAM with the same time complexity and number of processors \cite{parallel-ustconn-log3by2n-erew}. Chong and Lam \cite{parallel-ustconn-lognloglogn} presented an $O({\log}{n}{\log}{\log}{n})$ time deterministic EREW PRAM algorithm with $O(m + n)$ processors. Chong, Han, and Lam \cite{parallel-ustconn-logn} showed that the problem can be solved on the EREW PRAM in $O({\log}{n})$ time with $O(m + n)$ processors. In \secref{sec:parallel-sgslogcfl}, we generalize the algorithms of \cite{parallel-ustconn-log2n}, \cite{parallel-ustconn-log3by2n} and \cite{parallel-ustconn-lognloglogn} and design the corresponding parallel algorithms for ${\bf{SGSLogCFL}}$. In \secref{sec:sgslogcfl-logloglog}, we use these algorithms to prove that ${\bf{SGSLogCFL}}$\ is contained in $DSPACE({\log}{n}{\log}{\log}{n})$.
\section{Realizable Paths}\label{sec:paths} \subsection{{\sc{ST-Realizability}}} We are given a directed graph $\mathcal{G}(V,E)$, a vertex labeling function $L_{V}:V{\rightarrow}\{\alpha_1,\alpha_2,\dots,\alpha_k\}$ and an edge labeling function $L_{E}:E{\rightarrow}\{push,pop,\epsilon\}$. The ordered pair $(s,t)$, where $s,t \in V$, is said to be {\bf{realizable}} if the following two conditions hold : \begin{itemize} \item{There is a directed path (say $P$) from $s$ to $t$.} \item{The concatenation of the vertex and edge labels along the path $P$ is a {\em{realizable}} string (see \defref{defn:realstring}).} \end{itemize} \begin{definition}\label{defn:realstring} Let $\mathcal{A} = \{push,pop,\epsilon,\alpha_1,\alpha_2,\dots,\alpha_k\}$ be the alphabet. A {\bf{realizable string}} is a nonempty string of symbols from $\mathcal{A}$, defined in the following recursive manner : \begin{itemize} \item{for all $1 \leq i \leq k$, ``$\alpha_i$" is a realizable string.} \item{for all $1 \leq i \leq k$, ``${\alpha_i}\ \epsilon\ {\alpha_i}$" is a realizable string.} \item{if $S$ is a realizable string then so is ``${\alpha_i}\ push\ S\ pop\ {\alpha_i}$", for all $1 \leq i \leq k$.} \item{for all $1 \leq i \leq k$, if ``${\alpha_i}\ S_1\ {\alpha_i}$" and ``${\alpha_i}\ S_2\ {\alpha_i}$" are realizable strings then so is ``${\alpha_i}\ S_1\ {\alpha_i}\ S_2\ {\alpha_i}$".} \end{itemize} \end{definition} \begin{framed} \noindent {\sc{ST-Realizability}}\ : Given a directed graph $\mathcal{G}(V,E)$ with vertices labeled from $\{\alpha_1,\alpha_2,\dots,\alpha_k\}$ and edges labeled from $\{push,pop,\epsilon\}$ and two distinguished nodes $s$ and $t$, decide if there is a realizable path from $s$ to $t$ in $\mathcal{G}$. \end{framed} We use the notation $(u{\leadsto}v)$ to denote that there is a realizable path from $u$ to $v$.
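The recursion in \defref{defn:realstring} can be checked with a single stack: an $\epsilon$-edge must preserve the current vertex label, a {\em{pop}} must return to the label saved by the matching {\em{push}}, and the stack must be empty at the end. A minimal sketch of such a checker (the token encoding and function name are ours):

```python
def is_realizable(tokens):
    """Check whether an alternating sequence
    [label, edge, label, edge, ..., label] is a realizable string.

    Labels are arbitrary hashable values; edges are "push", "pop" or "eps".
    """
    if not tokens or len(tokens) % 2 == 0:
        return False
    current = tokens[0]
    stack = []
    for i in range(1, len(tokens), 2):
        edge, nxt = tokens[i], tokens[i + 1]
        if edge == "eps":
            if nxt != current:          # eps-edges keep the current label unchanged
                return False
        elif edge == "push":
            stack.append(current)       # remember the label to return to
            current = nxt
        elif edge == "pop":
            if not stack or stack.pop() != nxt:
                return False
            current = nxt
        else:
            return False
    return not stack                    # stack height must return to the start

# "a push b eps b pop a" corresponds to the third rule applied to "b eps b".
print(is_realizable(["a", "push", "b", "eps", "b", "pop", "a"]))  # True
print(is_realizable(["a", "push", "b", "pop", "b"]))              # False
```

The second example fails because the {\em{pop}} does not return to the label that was pushed.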
If all the vertices of $\mathcal{G}$ are labeled $\alpha_1$ (i.e., $k=1$) and all the edges are labeled $\epsilon$, we get an instance of {\sc{ST-Connectivity}}. Hence, {\sc{ST-Realizability}}\ is a generalization of {\sc{ST-Connectivity}}. \begin{theorem}\label{thm:streal} {\sc{ST-Realizability}}\ is ${\bf{LogCFL}}$-complete. \end{theorem} \begin{corollary}\label{cor:strealnoepsilon} {\sc{ST-Realizability}}\ with no $\epsilon$-edges is ${\bf{LogCFL}}$-complete. \end{corollary} \subsection{Graph Representation} We now discuss the representation of an instance of {\sc{ST-Realizability}}\ i.e., a directed graph $\mathcal{G}$ with the vertex and edge labels. Let this graph be $\mathcal{G}(V,E)$ with $|V|=n$. For simplicity we assume that there are no multi-edges. We represent $\mathcal{G}$ as a 4-tuple $\mathcal{G} = {\langle}\mathcal{L},\mathcal{P}_{push},\mathcal{P}_{pop},\mathcal{E}{\rangle}$, where $\mathcal{L}$ is an integer array of length $n$ representing the vertex labels, and $\mathcal{P}_{push}$, $\mathcal{P}_{pop}$ and $\mathcal{E}$ are $n{\times}n$ boolean matrices. $\mathcal{L}[u]$ represents the label of vertex $u$ i.e., $\mathcal{L}[u] = i$ iff the label of $u$ is $\alpha_i$. The $[u,v]^{th}$ entry of the matrix $\mathcal{P}_{push}$ (resp. $\mathcal{P}_{pop}$ and $\mathcal{E}$) is 1 if and only if the directed edge $(u,v)$ is labeled {\em{push}} (resp. {\em{pop}} and $\epsilon$). We may assume that $L_E(u,u) = \epsilon$ for all $u \in V$ i.e., $\mathcal{E}[u,u] = 1$ for all $u \in V$. \subsection{Gap Matrix}\label{subsec:gapmatrixdefn} \begin{definition} {\em{(Niedermeier and Rossmanith \cite{NiedermeierR95})}} : Let {\em{a,b,c,d}} be four configurations such that : {\em{a}} and {\em{b}} have same pushdown heights, {\em{c}} and {\em{d}} have same pushdown heights and there exists a computation path from {\em{a}} to {\em{c}} and one from {\em{d}} to {\em{b}}.
The level of the pushdown must not go below the level of {\em{a}} and {\em{b}} during the computation. We say that {\em{(a,b)}} is {\em{realizable with gap}} {\em{(c,d)}}. \end{definition} In the context of {\sc{ST-Realizability}}, we relax the above definition as shown below. This allows us to define a natural repeated squaring algorithm to solve {\sc{ST-Realizability}}. For the rest of this paper, we will use the following definition. \begin{framed} \noindent {\bf{Path with gap}} : A {\em{path with gap}} consists of four vertices $a,b,c,d$ such that (i) there is a computation path $P_1$ from $a$ to $c$ and $P_2$ from $d$ to $b$ (ii) the vertex labels of $a$ and $b$ are the same (iii) the vertex labels of $c$ and $d$ are the same (iv) let $P$ be the path formed by concatenating $P_1$ and $P_2$ i.e., identifying $c$ and $d$ (v) the concatenation of the vertex and edge labels along the path $P$ is a {\em{realizable}} string. We denote such a ``path with gap" by $(a{\leadsto}(c,d){\leadsto}b)$ and say that {\em{(a,b)}} is {\em{realizable with gap}} {\em{(c,d)}}. \end{framed} {\em{Pair-with-gap}} $(a{\leadsto}(c,d){\leadsto}b)$ is interpreted as if the two surface configurations $c$ and $d$ were the same, i.e., as if a realizable path from $c$ to $d$ would exist. To keep track of paths with gaps, we maintain a boolean {\em{gap matrix}} $\Upsilon$, indexed by 4-tuple of vertices $[a,(c,d),b]$ such that if $\Upsilon[a,(c,d),b]= 1$ then $(a{\leadsto}(c,d){\leadsto}b)$. We initialize the gap matrix $\Upsilon$ with the labels from the matrices $\mathcal{L}$, $\mathcal{P}_{push}$ and $\mathcal{P}_{pop}$ as follows.
\noindent \line(1,0){400} \\ \noindent ${\bf{Initialize Gap Matrix}}({\Upsilon})$ \\ \indent for all $a,b,c,d \in V$\ \ \ $\Upsilon[a,(c,d),b]=0$ \\ \indent for all $a,b,c,d \in V$ \\ \indent \indent {\bf{if}} $((\mathcal{P}_{push}[a,c]==1){\&}{\&}(\mathcal{P}_{pop}[d,b]==1){\&}{\&}(\mathcal{L}[a]==\mathcal{L}[b]){\&}{\&}(\mathcal{L}[c]==\mathcal{L}[d]))$ \\ \indent \indent \indent {\bf{then}} ${\Upsilon}[a,(c,d),b] = 1$ \\ \indent for all $a \in V$\ \ \ $\Upsilon[a,(a,a),a]=1$ \\ \indent for all $a,b \in V$\ \ \ $\Upsilon[a,(a,b),b]=1$ \\ \noindent \line(1,0){400} \\ All the required information from the matrices $\mathcal{L}$,$\mathcal{P}_{push}$ and $\mathcal{P}_{pop}$ is now present in the gap matrix ${\Upsilon}$. Note that we are implicitly removing the ``unnecessary" edges as follows. \\ \noindent {\bf{Removing unnecessary edges}} : If $s$ and $t$ are realizable in ${\mathcal{G}}$ along a path $P$ then the {\em{push}} and {\em{pop}} edges along $P$ have to ``match" i.e., every {\em{push}} label has a corresponding {\em{pop}} label. In other words, if there is a {\em{push}} edge $(a,c)$ such that the label of $a$ is $\alpha_i$ and the label of $c$ is $\alpha_j$ then there is a corresponding {\em{pop}} edge $(d,b)$ along the path $P$ such that the label of $d$ is $\alpha_j$ and the label of $b$ is $\alpha_i$. Hence, we can remove the unnecessary edges as follows : \begin{itemize} \item{Let $(u,v)$ be a {\em{push}} edge in $\mathcal{G}$ such that the label of $u$ is $\alpha_i$ and the label of $v$ is $\alpha_j$. If there is no {\em{pop}} edge in $\mathcal{G}$ (other than $(v,u)$) with the vertex labels $(\alpha_j,\alpha_i)$, then remove the edge $(u,v)$.} \item{Let $(u,v)$ be a {\em{pop}} edge in $\mathcal{G}$ such that the label of $u$ is $\alpha_i$ and the label of $v$ is $\alpha_j$. 
If there is no {\em{push}} edge in $\mathcal{G}$ (other than $(v,u)$) with the vertex labels $(\alpha_j,\alpha_i)$, then remove the edge $(u,v)$.} \end{itemize} We call $\mathcal{E}$ the {\em{standard}} matrix and $\Upsilon$ the {\em{gap}} matrix and assume that an instance of {\sc{ST-Realizability}}, $\mathcal{H}$, is represented by an $n \times n$ standard matrix $\mathcal{E}$ and an $n^2 \times n^2$ gap matrix ${\Upsilon}$ and denote this by $\mathcal{H} = {\langle}{\Upsilon},\mathcal{E}{\rangle}$. The rows and columns of ${\Upsilon}$ are indexed by pairs of vertices of $\mathcal{H}$. $\Upsilon[a,(c,d),b]$ corresponds to the $[(a,b),(c,d)]^{th}$ entry in the $n^2 \times n^2$ matrix. \section{{\sc{Undirected ST-Realizability}}\ and Symmetric AuxPDAs}\label{sec:ustconn} \subsection{{\sc{Undirected ST-Realizability}}} We are given an undirected graph $\mathcal{G}(V,E)$, a vertex labeling function $L_{V}:V{\rightarrow}\{\alpha_1,\alpha_2,\dots,\alpha_k\}$ and an edge labeling function $L_{E}:E{\rightarrow}\{push,pop,\epsilon\}$. Moreover, the edge labels are ``symmetric" i.e., they satisfy the following properties : (i) $L_E(u,v) = push$ if and only if $L_E(v,u) = pop$ and (ii) $L_E(u,v) = \epsilon$ if and only if $L_E(v,u) = \epsilon$. The pair $(s,t)$, where $s,t \in V$, is said to be {\em{realizable}} if there is an undirected path (say $P$) from $s$ to $t$ and the concatenation of the vertex and edge labels along the path $P$ is a {\em{realizable}} string. Since the edge labels are symmetric, $(s,t)$ is realizable if and only if $(t,s)$ is realizable. We denote this by $(s{\leftrightsquigarrow}t)$. \begin{framed} \noindent {\sc{Undirected ST-Realizability}}\ : Given an undirected graph $\mathcal{G}(V,E)$ with vertices labeled from $\{\alpha_1,\alpha_2,\dots,\alpha_k\}$ and {\em{symmetric}} edge labels from $\{push,pop,\epsilon\}$ and two distinguished nodes $s$ and $t$, decide if $s$ and $t$ are realizable in $\mathcal{G}$. 
\end{framed} If all the vertices of $\mathcal{G}$ are labeled $\alpha_1$ (i.e., $k=1$) and all the edges are labeled $\epsilon$, we get an instance of {\sc{Undirected ST-Connectivity}}. Hence, {\sc{Undirected ST-Realizability}}\ is a generalization of {\sc{Undirected ST-Connectivity}}. To study the space complexity of {\sc{Undirected ST-Realizability}}\ we introduce {\em{symmetric}} AuxPDAs in the following subsection. \subsection{Symmetric AuxPDAs} Intuitively, a {\em{symmetric}} AuxPDA is a nondeterministic multi-tape Turing machine which has an extra tape called pushdown tape, with an additional requirement that every move of the machine is ``reversible". In other words, the ``yields" relation between its (surface) configurations is symmetric. Such a machine is allowed to scan two symbols at a time on each of its tapes. We present the formal definitions, properties of symmetric AuxPDAs and the proofs of the following theorems in the {\bf{appendix}} (see \apref{sec:symmauxpda}). We define ${\bf{SLogCFL}}$\ to be the class of languages accepted by a log space bounded and polynomial time bounded symmetric AuxPDA. \begin{theorem}\label{thm:gen-SL-inclusion} ${\bf{LogDCFL}}$\ $\subseteq$ ${\bf{SLogCFL}}$\ $\subseteq$ ${\bf{LogCFL}}$. \end{theorem} \begin{theorem}\label{thm:ustreal} {\sc{Undirected ST-Realizability}}\ is ${\bf{SLogCFL}}$-complete. \end{theorem} \begin{corollary}\label{cor:ustrealnoepsilon} {\sc{Undirected ST-Realizability}}\ with no $\epsilon$-edges is ${\bf{SLogCFL}}$-complete. \end{corollary} \section{More Realizability Problems between ${\bf{L}}$\ and ${\bf{LogCFL}}$}\label{sec:realproblems} As noted earlier, an instance $\mathcal{G}$ of {\sc{ST-Realizability}}\ is represented by an $n \times n$ standard matrix $\mathcal{E}$ and an $n^2 \times n^2$ gap matrix ${\Upsilon}$. The vertices of $\mathcal{G}$ are labeled with $\{\alpha_1, \dots, \alpha_k\}$. 
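The gap-matrix initialization of \secref{subsec:gapmatrixdefn} over this representation translates directly into code. The following is a minimal sketch (NumPy boolean arrays; storing the $n^2 \times n^2$ gap matrix as a four-dimensional array with $\Upsilon[a,c,d,b]$ for the entry $\Upsilon[a,(c,d),b]$ is our choice of indexing):

```python
import numpy as np

def initialize_gap_matrix(labels, p_push, p_pop):
    """Build the gap matrix; entry [a, c, d, b] stands for Upsilon[a,(c,d),b]."""
    n = len(labels)
    upsilon = np.zeros((n, n, n, n), dtype=bool)
    # A push edge (a,c) and a pop edge (d,b) with matching labels form a
    # pair-with-gap; the quadruple loop mirrors the pseudocode directly.
    for a in range(n):
        for b in range(n):
            for c in range(n):
                for d in range(n):
                    if (p_push[a, c] and p_pop[d, b]
                            and labels[a] == labels[b]
                            and labels[c] == labels[d]):
                        upsilon[a, c, d, b] = True
    # Trivial entries: Upsilon[a,(a,b),b] = 1 (which subsumes Upsilon[a,(a,a),a]).
    for a in range(n):
        for b in range(n):
            upsilon[a, a, b, b] = True
    return upsilon

# Toy instance: two vertices, a push edge (0,1) and a pop edge (1,0).
labels = [1, 2]
p_push = np.array([[False, True], [False, False]])
p_pop = np.array([[False, False], [True, False]])
U = initialize_gap_matrix(labels, p_push, p_pop)
```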
In this section, we define more graph realizability problems based on the symmetry of the gap matrix ${\Upsilon}$ and the standard matrix $\mathcal{E}$, and on the number of distinct vertex labels (i.e., number of stack symbols, denoted by $k$). We define the corresponding complexity classes as the set of all languages that are logspace reducible to the corresponding graph realizability problem. Table 1 summarizes all the definitions. The prefix ${\bf{S}}$ is used to denote the symmetry of the standard matrix. The prefix ${\bf{SGS}}$ is used to denote the symmetry of the standard {\em{and}} gap matrices. A moment of thought would reveal that the case of symmetric gap matrix and asymmetric standard matrix does not make much sense. The prefix ${\bf{1}}$ is used to denote that there is only one vertex label. \begin{table}\label{defnstbl} \begin{tabular}{l*{6}{c}r} Complexity class & Number of stack symbols & Standard Matrix & Gap Matrix \\ \hline ${\bf{LogCFL}}$\ & $k \geq 2$ & asymmetric & asymmetric \\ ${\bf{SLogCFL}}$\ & $k \geq 2$ & symmetric & asymmetric \\ ${\bf{SGSLogCFL}}$\ & $k \geq 2$ & symmetric & symmetric \\ ${\bf{1LogCFL}}$\ & $k = 1$ & asymmetric & asymmetric \\ ${\bf{1SLogCFL}}$\ & $k = 1$ & symmetric & asymmetric \\ ${\bf{1SGSLogCFL}}$\ & $k = 1$ & symmetric & symmetric \\ \end{tabular} \caption{Graph realizability problems between ${\bf{L}}$\ and ${\bf{LogCFL}}$.} \end{table} \subsection{Realizability with Symmetric Gap}\label{subsec:sgslogcfl} We are given an undirected graph $\mathcal{G}(V,E)$, a vertex labeling function $L_{V}:V{\rightarrow}\{\alpha_1,\alpha_2,\dots,\alpha_k\}$ and an edge labeling function $L_{E}:E{\rightarrow}\{push,pop,\epsilon\}$. The edge labels are ``symmetric" as defined in \secref{sec:ustconn}.
The pair $(s,t)$, where $s,t \in V$, is said to be {\bf{realizable with symmetric gap}} if the following two conditions hold : \begin{itemize} \item{There is an undirected path (say $P$) from $s$ to $t$.} \item{The concatenation of the vertex and edge labels along the path $P$ is a {\em{realizable string with symmetric gap}} (see \defref{defn:symgapreal}).} \end{itemize} \begin{definition}\label{defn:symgapreal} Let $\mathcal{A} = \{push,pop,\epsilon,\alpha_1,\alpha_2,\dots,\alpha_k\}$ be the alphabet. A {\bf{realizable string with symmetric gap}} is a nonempty string over $\mathcal{A}$, defined in the following recursive manner : \begin{itemize} \item{for all $1 \leq i \leq k$, ``$\alpha_i$" is a realizable string.} \item{for all $1 \leq i \leq k$, ``${\alpha_i}\ \epsilon\ {\alpha_i}$" is a realizable string.} \item{if $S$ is a realizable string then so is ``${\alpha_i}\ push\ S\ pop\ {\alpha_i}$", for all $1 \leq i \leq k$.} \item{if $S$ is a realizable string then so is ``${\alpha_i}\ pop\ S\ push\ {\alpha_i}$", for all $1 \leq i \leq k$.} \item{for all $1 \leq i \leq k$, if ``${\alpha_i}\ S_1\ {\alpha_i}$" and ``${\alpha_i}\ S_2\ {\alpha_i}$" are realizable strings then so is ``${\alpha_i}\ S_1\ {\alpha_i}\ S_2\ {\alpha_i}$".} \end{itemize} \end{definition} Since the edge labels are symmetric, $(s,t)$ is realizable if and only if $(t,s)$ is realizable. We initialize the gap matrix as described in \secref{subsec:gapmatrixdefn}. By the definition of a {\em{realizable string with symmetric gap}}, $(a{\leadsto}(c,d){\leadsto}b)$ if and only if $(c{\leadsto}(a,b){\leadsto}d)$. Hence the corresponding $n^2 \times n^2$ gap matrix ${\Upsilon}$ is a symmetric matrix. We denote this symmetry by $(a{\leftrightsquigarrow}(c,d){\leftrightsquigarrow}b)$.
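For intuition, the recursive definition above yields a straightforward memoized membership test. The following sketch is illustrative only (it is not part of the construction; $\epsilon$ is written as \texttt{eps}, and a token sequence alternates vertex labels and edge labels):

```python
from functools import lru_cache

def realizable_with_symmetric_gap(tokens):
    """Membership test for the recursive definition of realizable strings
    with symmetric gap. `tokens` alternates vertex labels and edge labels
    'push'/'pop'/'eps', e.g. ['a', 'push', 'b', 'pop', 'a']."""
    EDGE = {'push', 'pop', 'eps'}

    @lru_cache(maxsize=None)
    def real(i, j):
        # tokens[i..j] must start and end with the same vertex label
        if tokens[i] in EDGE or tokens[i] != tokens[j]:
            return False
        if i == j:                                  # "alpha_i"
            return True
        if j == i + 2 and tokens[i + 1] == 'eps':   # "alpha_i eps alpha_i"
            return True
        # "alpha_i push S pop alpha_i" and the symmetric "pop S push" rule
        if j >= i + 4 and (tokens[i + 1], tokens[j - 1]) in {('push', 'pop'),
                                                             ('pop', 'push')}:
            if real(i + 2, j - 2):
                return True
        # concatenation rule: "alpha_i S1 alpha_i S2 alpha_i"
        return any(tokens[m] == tokens[i] and real(i, m) and real(m, j)
                   for m in range(i + 2, j - 1, 2))

    return len(tokens) % 2 == 1 and real(0, len(tokens) - 1)
```

For example, \texttt{['a', 'pop', 'b', 'push', 'a']} is realizable here, which is exactly what the $pop\ S\ push$ rule makes possible.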
\begin{framed} \noindent {\sc{Symmetric Gap Undirected ST-Realizability}}\ : Given an undirected graph $\mathcal{G}(V,E)$ with vertices labeled from $\{\alpha_1,\alpha_2,\dots,\alpha_k\}$ and {\em{symmetric}} edge labels from $\{push,pop,\epsilon\}$ and two distinguished nodes $s$ and $t$, decide if $s$ and $t$ are {\em{realizable with symmetric gap}} in $\mathcal{G}$. \end{framed} \begin{framed} \noindent ${\bf{SGSLogCFL}}$\ is the class of languages that are logspace reducible to {\sc{Symmetric Gap Undirected ST-Realizability}}. \end{framed} \subsection{Realizability with one stack symbol} The complexity classes ${\bf{1LogCFL}}$, ${\bf{1SLogCFL}}$\ and ${\bf{1SGSLogCFL}}$\ are obtained by restricting ${\bf{LogCFL}}$, ${\bf{SLogCFL}}$\ and ${\bf{SGSLogCFL}}$\ respectively to use only one stack symbol i.e., by insisting that $k = 1$ in the above definitions. Since the vertices are all labeled with one label, we may omit the vertex labels in the definitions. After omitting the vertex labels, the corresponding {\em{realizability}} can be defined using a context-free language as shown below. \subsubsection{${\bf{1LogCFL}}$}\label{one-logcfl} ${\bf{1LogCFL}}$\ is the class of languages that are logspace reducible to the following graph realizability problem. We are given a directed graph $\mathcal{G}(V,E)$, with edges labeled from $\{push,pop,\epsilon\}$. The ordered pair $(s,t)$, where $s,t \in V$, is said to be realizable if the following two conditions hold : \begin{itemize} \item{There is a directed path (say $P$) from $s$ to $t$.} \item{The concatenation of the edge labels on the path $P$ is a string produced by the following context-free grammar :\ \ $S \rightarrow S\ S$;\ $S \rightarrow push\ S\ pop$;\ $S \rightarrow \epsilon$;\ $S \rightarrow \emptyset$. Here $\emptyset$ denotes the empty string. 
} \end{itemize} \subsubsection{${\bf{1SLogCFL}}$}\label{one-slogcfl} We are given an undirected graph $\mathcal{G}(V,E)$, with the edges labeled from $\{push,pop,\epsilon\}$. Moreover, the edge labels are ``symmetric" as defined in \secref{sec:ustconn}. The pair $(s,t)$, where $s,t \in V$, is said to be {\em{realizable}} if there is an undirected path (say $P$) from $s$ to $t$ and the concatenation of the edge labels along the path $P$ is a string produced by the context-free grammar mentioned in \secref{one-logcfl}. Since the edge labels are symmetric, $(s,t)$ is realizable if and only if $(t,s)$ is realizable. ${\bf{1SLogCFL}}$\ is the class of languages that are logspace reducible to this undirected graph realizability problem. \subsubsection{${\bf{1SGSLogCFL}}$}\label{one-sgslogcfl} ${\bf{1SGSLogCFL}}$\ is the class of languages that are logspace reducible to the following graph realizability problem. We are given an undirected graph $\mathcal{G}(V,E)$, with the edges labeled from $\{push,pop,\epsilon\}$. The edge labels are ``symmetric" as defined in \secref{sec:ustconn}. The pair $(s,t)$, where $s,t \in V$, is said to be realizable if the following two conditions hold : \begin{itemize} \item{There is a simple undirected path (say $P$) from $s$ to $t$.} \item{The concatenation of the edge labels on the path $P$ is a string produced by the following context-free grammar :\ \ $S \rightarrow S\ S$;\ $S \rightarrow push\ S\ pop$;\ $S \rightarrow pop\ S\ push$;\ $S \rightarrow \epsilon$;\ $S \rightarrow \emptyset$. Here $\emptyset$ denotes the empty string. 
} \end{itemize} \subsection{Relationship among the Realizability Problems} By definition, we have the following inclusions : (i) ${\bf{SGSLogCFL}}$\ $\subseteq$ ${\bf{SLogCFL}}$\ $\subseteq$ ${\bf{LogCFL}}$, (ii) ${\bf{1SGSLogCFL}}$\ $\subseteq$ ${\bf{1SLogCFL}}$\ $\subseteq$ ${\bf{1LogCFL}}$, (iii) ${\bf{1LogCFL}}$\ $\subseteq$ ${\bf{LogCFL}}$, (iv) ${\bf{1SLogCFL}}$\ $\subseteq$ ${\bf{SLogCFL}}$\ and (v) ${\bf{1SGSLogCFL}}$\ $\subseteq$ ${\bf{SGSLogCFL}}$. Independently of our work, Allender and Lange \cite{allender-lange} defined symmetric AuxPDAs and proved that every language accepted by a nondeterministic auxiliary pushdown automaton in polynomial time can be accepted by a symmetric auxiliary pushdown automaton in polynomial time. Their definition of symmetric AuxPDAs is equivalent to ours \cite{allender-personal-comm}. Borodin et al. \cite{BCDRT89} proved that ${\bf{LogCFL}}$\ = ${\bf{co}}$-${\bf{LogCFL}}$. The following theorem and its corollary are immediate. \begin{theorem}\label{thm:all-lange-theorem} {\em{(Allender and Lange \cite{allender-lange})}}. ${\bf{SLogCFL}}$\ = ${\bf{LogCFL}}$. \end{theorem} \begin{corollary} ${\bf{SLogCFL}}$\ = ${\bf{co}}$-${\bf{SLogCFL}}$. \end{corollary} \subsection{Realizability Problems between ${\bf{L}}$\ and ${\bf{NL}}$}\label{subsec:balanced-defn} All the realizability problems defined above are generalizations of {\sc{Undirected ST-Connectivity}}. Hence, the corresponding complexity classes contain ${\bf{L}}$. We now prove that ${\bf{NL}}$\ = ${\bf{1LogCFL}}$. Hence, ${\bf{L}}$\ = ${\bf{SL}}$\ $\subseteq$ ${\bf{1SGSLogCFL}}$\ $\subseteq$ ${\bf{1SLogCFL}}$\ $\subseteq$ ${\bf{1LogCFL}}$\ = ${\bf{NL}}$. We introduce two natural graph connectivity problems characterizing ${\bf{1SGSLogCFL}}$\ and ${\bf{1SLogCFL}}$. \begin{theorem}\label{thm:nl-1logcfl} ${\bf{NL}}$\ = ${\bf{1LogCFL}}$.
\end{theorem} \begin{corollary} ${\bf{L}}$\ = ${\bf{SL}}$\ $\subseteq$ ${\bf{1SGSLogCFL}}$\ $\subseteq$ ${\bf{1SLogCFL}}$\ $\subseteq$ ${\bf{1LogCFL}}$\ = ${\bf{NL}}$. \end{corollary} Let $\mathcal{G}(V,E)$ be a directed graph. Let $\mathcal{G'}(V,E')$ be the underlying undirected graph of $\mathcal{G}$. Let $P$ be a path in $\mathcal{G'}$. Let $e = (u,v)$ be an edge along the path $P$. Edge $e$ is called a {\em{neutral}} edge if both $(u,v)$ and $(v,u)$ are in $E$. Edge $e$ is called a {\em{forward}} edge if $(u,v) \in E$ and $(v,u) \notin E$. Edge $e$ is called a {\em{backward}} edge if $(u,v) \notin E$ and $(v,u) \in E$. A path (say $P$) from $s \in V$ to $t \in V$ in $\mathcal{G'}(V,E')$ is called {\em{balanced}} if the number of forward edges along $P$ is equal to the number of backward edges along $P$. A balanced path might have any number of neutral edges. By definition, if there is a balanced path from $s$ to $t$ then there is a balanced path from $t$ to $s$. The path $P$ need not be a simple path. We are concerned with balanced paths of length at most $n$. See \secref{sec:bstconn} in the {\bf{appendix}} for more details and variants of balanced connectivity problems. \begin{framed} \noindent {\sc{Balanced ST-Connectivity}}\ : Given a directed graph $\mathcal{G}(V,E)$ and two distinguished nodes $s$ and $t$, decide if there is a {\em{balanced}} path (of length at most $n$) between $s$ and $t$. \end{framed} Let $P$ be a path from $s \in V$ to $t \in V$ in $\mathcal{G}(V,E)$. We say $v \in P$ if the vertex $v$ is on the path $P$. For $v \in P$ we denote by $P_v$ the subpath of $P$ starting from $s$ and ending at $v$. We say that $P$ is {\em{positive}} if the number of forward edges of $P_v$ is at least the number of backward edges of $P_v$, for all $v \in P$. In other words, the number of forward edges minus the number of backward edges of $P_v$ is nonnegative, for all $v \in P$. We say that $P$ is {\em{positive balanced}} if $P$ is positive and balanced.
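The edge classification and the two balance conditions can be made concrete with a short sketch (illustrative Python; a directed graph is represented simply by its set of arcs):

```python
def classify(path, arcs):
    """Classify each step of `path` (a vertex sequence in the underlying
    undirected graph) as 'neutral', 'forward' or 'backward' with respect
    to the arc set `arcs` of the directed graph G."""
    kinds = []
    for u, v in zip(path, path[1:]):
        uv, vu = (u, v) in arcs, (v, u) in arcs
        if uv and vu:
            kinds.append('neutral')
        elif uv:
            kinds.append('forward')
        elif vu:
            kinds.append('backward')
        else:
            raise ValueError(f"({u},{v}) is not an edge of G'")
    return kinds

def is_balanced(path, arcs):
    kinds = classify(path, arcs)
    return kinds.count('forward') == kinds.count('backward')

def is_positive_balanced(path, arcs):
    balance = 0
    for kind in classify(path, arcs):
        balance += (kind == 'forward') - (kind == 'backward')
        if balance < 0:   # some prefix has more backward than forward edges
            return False
    return balance == 0
```

The counter view mirrors the grammar characterizations used for the one-stack-symbol classes: a standard observation is that a push/pop string has equally many occurrences of the two symbols iff it is generated by the grammar with both $push\ S\ pop$ and $pop\ S\ push$ rules, and additionally has no prefix with more pops than pushes iff it is generated without the $pop\ S\ push$ rule.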
By definition, if there is a positive balanced path from $s$ to $t$ then there is a positive balanced path from $t$ to $s$. \begin{framed} \noindent {\sc{Positive Balanced ST-Connectivity}}\ : Given a directed graph $\mathcal{G}(V,E)$ and two distinguished nodes $s$ and $t$, decide if there is a {\em{positive balanced}} path (of length at most $n$) between $s$ and $t$. \end{framed} \begin{theorem}\label{thm:1sgslogcfl} {\sc{Balanced ST-Connectivity}}\ is ${\bf{1SGSLogCFL}}$-complete. \end{theorem} \begin{theorem}\label{thm:1slogcfl} {\sc{Positive Balanced ST-Connectivity}}\ is ${\bf{1SLogCFL}}$-complete. \end{theorem} Figure \ref{fig:real-classes} summarizes the relationship among the above defined classes. \begin{figure}[htp] \begin{center} \includegraphics[width=4in]{real-classes.jpg} \end{center} \caption{Relationship among the complexity classes. A directed edge from class ${\bf{A}}$ to class ${\bf{B}}$ shows that ${\bf{A}} \subseteq {\bf{B}}$. In addition, ${\bf{RL}}$\ $\subseteq$ ${\bf{RLogCFL}}$\ and ${\bf{BPL}}$\ $\subseteq$ ${\bf{BPLogCFL}}$. {\sc{Balanced ST-Connectivity}}\ is ${\bf{1SGSLogCFL}}$-complete and {\sc{Positive Balanced ST-Connectivity}}\ is ${\bf{1SLogCFL}}$-complete.} \label{fig:real-classes} \end{figure} \section{Transitive Closure}\label{sec:transitive} The definitions and theorems in this section apply to all the graph realizability problems defined above. We present the definitions and theorems for {\sc{ST-Realizability}}, the most general graph realizability problem. \begin{definition} Let $\mathcal{G} = {\langle}{\Upsilon},\mathcal{E}{\rangle}$ be an instance of {\sc{ST-Realizability}}. The {\bf{transitive closure}} of $\mathcal{G}$, denoted by $\mathcal{G}^* = {\langle}{\Upsilon}^*,\mathcal{E}^*{\rangle}$, is a pair of gap and standard matrices such that for all $a,b,c,d \in V$, \\ \indent (i) $\mathcal{E}^*[a][b] = 1$ iff $(a{\leadsto}b)$ and \\ \indent (ii) ${\Upsilon}^*[a,(c,d),b] = 1$ iff $(a,b)$ is realizable with gap $(c,d)$.
\end{definition} \subsection{Tensor Products}\label{subsec:tensor} We now present several tensor products acting on $\mathcal{E}$ and ${\Upsilon}$. The products $\otimes_1$ to $\otimes_5$ are introduced in \cite{venkat-auxpda}. We introduce $\otimes_6$ and $\otimes_7$. These products update the standard matrix $\mathcal{E}$ and the gap matrix $\Upsilon$ with new ``connectivity information" of $\mathcal{G}$. Let $\mathcal{E}$, $\mathcal{E}_1$, $\mathcal{E}_2$ represent standard matrices and ${\Upsilon}$, ${\Upsilon}_1$, ${\Upsilon}_2$ represent gap matrices. Let $a,b,c,d,z$ represent the vertices of $\mathcal{G}$. Matrices indexed by two (resp. four) indices are standard (resp. gap) matrices. When we are dealing with boolean matrices, all the summations (resp. multiplications) are interpreted as boolean $\vee$ (resp. boolean $\wedge$). \begin{enumerate} \item{ If $(a{\leadsto}z)$ and $(z{\leadsto}b)$ then $(a{\leadsto}b)$ : \begin{center} $(\mathcal{E}_1 \otimes_1 \mathcal{E}_2)[a,b] = \displaystyle\sum_z {\mathcal{E}_1}[a,z]{\cdot}{\mathcal{E}_2}[z,b]$. \end{center} } \item{ If $(a{\leadsto}(c,d){\leadsto}b)$ and $(c{\leadsto}d)$ then $(a{\leadsto}b)$ : \begin{center} $({\Upsilon} \otimes_2 {\mathcal{E}})[a,b] = \displaystyle\sum_{c,d} {\Upsilon}[a,(c,d),b]{\cdot}{\mathcal{E}}[c,d]$. \end{center} } \item{ If $(a{\leadsto}(c,d){\leadsto}b)$ and $(b{\leadsto}z)$ then $(a{\leadsto}(c,d){\leadsto}z)$ : \begin{center} $({\Upsilon} \otimes_3 {\mathcal{E}})[a,(c,d),z] = \displaystyle\sum_b {\Upsilon}[a,(c,d),b]{\cdot}{\mathcal{E}}[b,z]$. \end{center} } \item{ If $(z{\leadsto}a)$ and $(a{\leadsto}(c,d){\leadsto}b)$ then $(z{\leadsto}(c,d){\leadsto}b)$ : \begin{center} $({\mathcal{E}} \otimes_4 {\Upsilon})[z,(c,d),b] = \displaystyle\sum_a {\mathcal{E}}[z,a]{\cdot}{\Upsilon}[a,(c,d),b]$. 
\end{center} } \item{ If $(a{\leadsto}(c,d){\leadsto}b)$ and $(c{\leadsto}(e,f){\leadsto}d)$ then $(a{\leadsto}(e,f){\leadsto}b)$ : \begin{center} $({{\Upsilon}_1} \otimes_5 {{\Upsilon}_2})[a,(e,f),b] = \displaystyle\sum_{c,d} {{\Upsilon}_1}[a,(c,d),b]{\cdot}{{\Upsilon}_2}[c,(e,f),d]$. \end{center} } \item{ If $(a{\leadsto}(c,d){\leadsto}b)$ and $(z{\leadsto}d)$ then $(a{\leadsto}(c,z){\leadsto}b)$ : \begin{center} $({\Upsilon} \otimes_6 {\mathcal{E}})[a,(c,z),b] = \displaystyle\sum_{d} {\Upsilon}[a,(c,d),b]{\cdot}{\mathcal{E}}[z,d]$. \end{center} } \item{ If $(a{\leadsto}(c,d){\leadsto}b)$ and $(c{\leadsto}z)$ then $(a{\leadsto}(z,d){\leadsto}b)$ : \begin{center} $({\Upsilon} \otimes_7 {\mathcal{E}})[a,(z,d),b] = \displaystyle\sum_{c} {\Upsilon}[a,(c,d),b]{\cdot}{\mathcal{E}}[c,z]$. \end{center} } \begin{comment} \item{ If $(a{\leadsto}(c,c){\leadsto}b)$ then $(a{\leadsto}b)$ : \begin{center} $({\Upsilon} \otimes_7 {\Upsilon})[a,b] = \displaystyle\sum_{c} {\Upsilon}[a,(c,c),b]$. \end{center} } \end{comment} \end{enumerate} \subsection{Computing Transitive Closure}\label{subsec:ckt-square} Given $\mathcal{G} = {\langle}{\Upsilon},\mathcal{E}{\rangle}$ the following algorithm computes ${\bf{Square}}(\mathcal{G})$. This algorithm is based on a parsimonious simulation of ${\bf{LogCFL}}$\ by ${\bf{{SAC}^{1}}}$\ circuits given by Niedermeier and Rossmanith \cite{NiedermeierR95}. Implementation of ${\bf{Square}}({\langle}{\Upsilon},\mathcal{E}{\rangle})$ using the above mentioned tensor products is shown below. \thref{thm:ckt-square} implies a natural polynomial time algorithm to solve {\sc{ST-Realizability}}. 
\noindent \line(1,0){450} \\ \noindent ${\bf{Square}}({\langle}{\Upsilon},\mathcal{E}{\rangle})$ \\ \indent for all $a,b \in V$ update $\mathcal{E}$ as follows : \begin{eqnarray*} \mathcal{E}[a,b] & = & \displaystyle\sum_{c,e,f,g,d}{\Upsilon}[a,(c,d),b]{\cdot}{\Upsilon}[c,(e,f),g]{\cdot}\mathcal{E}[e,f]{\cdot}\mathcal{E}[g,d] \\ \end{eqnarray*} \indent for all $a,b,c,d \in V$ update ${\Upsilon}$ as follows : \begin{eqnarray*} {\Upsilon}[a,(c,d),b] & = & \displaystyle\sum_{c',e',f',g',d'}{\Upsilon}[a,(c',d'),b]{\cdot}{\Upsilon}[c',(e',f'),g']{\cdot}{\Upsilon}[e',(c,d),f']{\cdot}\mathcal{E}[g',d'] \\ & + & \displaystyle\sum_{c',e',f',g',d'}{\Upsilon}[a,(c',d'),b]{\cdot}{\Upsilon}[c',(e',f'),g']{\cdot}\mathcal{E}[e',f']{\cdot}{\Upsilon}[g',(c,d),d'] \\ \end{eqnarray*} \indent {\bf{return}} ${\langle}{\Upsilon},\mathcal{E}{\rangle}$ \\ \noindent \line(1,0){450} \\ \noindent \line(1,0){300} \\ \noindent ${\bf{Square}}({\langle}{\Upsilon},\mathcal{E}{\rangle})$ \\ \indent $\mathcal{E} = (\Upsilon \otimes_2((\Upsilon \otimes_2 \mathcal{E})\otimes_1 \mathcal{E}))$ \\ \indent $\Upsilon = (\Upsilon \otimes_5((\Upsilon \otimes_5 \Upsilon)\otimes_3 \mathcal{E})) + (\Upsilon \otimes_5((\Upsilon \otimes_2 \mathcal{E})\otimes_4 \Upsilon)) $\\ \indent {\bf{return}} ${\langle}{\Upsilon},\mathcal{E}{\rangle}$ \\ \noindent \line(1,0){300} \\ \begin{theorem}\label{thm:ckt-square} Let $\mathcal{G}$ be an instance of {\sc{ST-Realizability}}. $\mathcal{G}^* = {\langle}{\Upsilon}^*,\mathcal{E}^*{\rangle}$ can be computed using $O({\log}n)$ repeated applications of ${\bf{Square}}(\mathcal{G})$. \end{theorem} \subsection{Simple Squaring Operation} The following algorithm ${\bf{SimpleSquare}}$ is a more intuitive squaring operation. It plays a crucial role in the proofs of correctness of parallel and space efficient algorithms for ${\bf{SGSLogCFL}}$\ (see \secref{sec:parallel-sgslogcfl} and \secref{sec:sgslogcfl-logloglog}).
\noindent \line(1,0){200} \\ \noindent ${\bf{SimpleSquare}}({\langle}{\Upsilon},\mathcal{E}{\rangle})$ \\ \indent $\mathcal{E} = \mathcal{E} \otimes_1 \mathcal{E}$ \\ \indent $\mathcal{E} = \Upsilon \otimes_2 \mathcal{E}$ \\ \indent $\Upsilon = \Upsilon \otimes_3 \mathcal{E}$ \\ \indent $\Upsilon = \mathcal{E} \otimes_4 \Upsilon$ \\ \indent $\Upsilon = \Upsilon \otimes_5 \Upsilon$ \\ \indent $\Upsilon = \Upsilon \otimes_6 \mathcal{E}$ \\ \indent $\Upsilon = \Upsilon \otimes_7 \mathcal{E}$ \\ {\bf{return}} ${\langle}{\Upsilon},\mathcal{E}{\rangle}$ \\ \noindent \line(1,0){200} \begin{theorem}\label{thm:simple-square} Let $\mathcal{G}$ be an instance of {\sc{ST-Realizability}}. $\mathcal{G}^* = {\langle}{\Upsilon}^*,\mathcal{E}^*{\rangle}$ can be computed using $O({\log}n)$ repeated applications of ${\bf{SimpleSquare}}(\mathcal{G})$. \end{theorem} \section{Parallel algorithms for ${\bf{SGSLogCFL}}$}\label{sec:parallel-sgslogcfl} Let $\mathcal{G} = {\langle}{\Upsilon},\mathcal{E}{\rangle}$ be an instance of {\sc{Symmetric Gap Undirected ST-Realizability}}. Let the vertices of $\mathcal{G}$ be $V = \{1,2,\dots,n\}$. $\mathcal{G}$ is represented by an $n \times n$ standard matrix $\mathcal{E}$ and an $n^2 \times n^2$ gap matrix ${\Upsilon}$. In this section, we present parallel algorithms to compute $\mathcal{G}$'s transitive closure $\mathcal{G}^* = {\langle}{\Upsilon}^*,\mathcal{E}^*{\rangle}$. Let $V^2 = V \times V$ be the set of pairs of vertices. In the rest of this paper the term ``vertex" refers to elements from $V$ as well as $V^2$. Let $V^4 = V \times V \times V \times V$. $\mathcal{G}$ has two types of edges. The standard edges from $V^2$ are present in $\mathcal{E}$ and the gap edges from $V^4$ are present in ${\Upsilon}$. In the rest of this paper the term ``edge" refers to elements from $V^2$ as well as $V^4$. 
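For concreteness, the tensor products used by ${\bf{SimpleSquare}}$ can be sketched sequentially on boolean matrices and iterated to a fixpoint (by \thref{thm:simple-square}, $O({\log}n)$ passes suffice). The sketch below is illustrative only: new $1$s are OR-ed into the matrices, reflecting that the products only ever add connectivity facts, and the initialization of $\mathcal{E}$ and ${\Upsilon}$ from the labeled graph is taken as given.

```python
import copy

def zeros(n):
    """Empty instance: E[a][b] encodes (a ~> b); Y[a][c][d][b] encodes
    (a ~> (c,d) ~> b)."""
    E = [[0] * n for _ in range(n)]
    Y = [[[[0] * n for _ in range(n)] for _ in range(n)] for _ in range(n)]
    return Y, E

def simple_square(Y, E):
    """One pass of SimpleSquare; new 1s are OR-ed in."""
    R = range(len(E))
    for a in R:                                   # E = E ox1 E
        for b in R:
            E[a][b] |= any(E[a][z] and E[z][b] for z in R)
    for a in R:                                   # E = Y ox2 E
        for b in R:
            E[a][b] |= any(Y[a][c][d][b] and E[c][d] for c in R for d in R)
    N = copy.deepcopy(Y)                          # Y = Y ox3 E
    for a in R:
        for c in R:
            for d in R:
                for z in R:
                    N[a][c][d][z] |= any(Y[a][c][d][b] and E[b][z] for b in R)
    Y, N = N, copy.deepcopy(N)                    # Y = E ox4 Y
    for z in R:
        for c in R:
            for d in R:
                for b in R:
                    N[z][c][d][b] |= any(E[z][a] and Y[a][c][d][b] for a in R)
    Y, N = N, copy.deepcopy(N)                    # Y = Y ox5 Y
    for a in R:
        for e in R:
            for f in R:
                for b in R:
                    N[a][e][f][b] |= any(Y[a][c][d][b] and Y[c][e][f][d]
                                         for c in R for d in R)
    Y, N = N, copy.deepcopy(N)                    # Y = Y ox6 E
    for a in R:
        for c in R:
            for z in R:
                for b in R:
                    N[a][c][z][b] |= any(Y[a][c][d][b] and E[z][d] for d in R)
    Y, N = N, copy.deepcopy(N)                    # Y = Y ox7 E
    for a in R:
        for z in R:
            for d in R:
                for b in R:
                    N[a][z][d][b] |= any(Y[a][c][d][b] and E[c][z] for c in R)
    return N, E

def transitive_closure(Y, E):
    """Iterate SimpleSquare to a fixpoint."""
    while True:
        oldY, oldE = copy.deepcopy(Y), copy.deepcopy(E)
        Y, E = simple_square(Y, E)
        if Y == oldY and E == oldE:
            return Y, E
```

For instance, starting from $\mathcal{E}[4,5] = {\Upsilon}[0,(1,2),3] = {\Upsilon}[1,(4,5),2] = 1$, the closure derives $\mathcal{E}[1,2]$ by $\otimes_2$, the composed gap ${\Upsilon}[0,(4,5),3]$ by $\otimes_5$, and finally $\mathcal{E}[0,3]$.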
\begin{definition} A subset of vertices $S \subseteq V$ is a {\bf{standard component}} ($s$-component) of $\mathcal{G}$ iff for all $u,v \in S$ it holds that $(u{\leadsto}v)$ and $(v{\leadsto}u)$. \end{definition} \begin{definition} A subset $S \subseteq V^2$ is a {\bf{gap component}} ($g$-component) of $\mathcal{G}$ iff for all $(a,b),(c,d) \in S$ it holds that $(a{\leadsto}(c,d){\leadsto}b)$ and $(c{\leadsto}(a,b){\leadsto}d)$. \end{definition} In the rest of this paper the term ``component" refers to both standard and gap components. If there is ambiguity we will explicitly say $s$-component or $g$-component. A {\em{pseudotree}} $P = (C,D)$ is a maximal connected directed graph with $|C| = k$ vertices and $|D| = k$ arcs for some $k$, for which each vertex has outdegree one. Note that every pseudotree has exactly one simple directed cycle (which may be a self-loop). The number of arcs in the cycle of a pseudotree $P$ is its {\em{circumference}}. A {\em{rooted tree}} is a pseudotree whose cycle is a self-loop on some vertex $r$ called the {\em{root}}. A {\em{rooted star}} $R$ with root $r$, is a rooted tree whose arcs are of the form $(x,r)$ with $x \in R$. A {\em{pseudoforest}} is a collection of pseudotrees. \\ \noindent {\bf{Symmetric Squaring}} : We first present a simplified squaring algorithm when the input graph is an instance of {\sc{Symmetric Gap Undirected ST-Realizability}}. Here the matrices $\mathcal{E}$ and ${\Upsilon}$ are symmetric, i.e., $\mathcal{E}[a,b] = \mathcal{E}[b,a]$ and ${\Upsilon}[(a,b),(c,d)] = {\Upsilon}[(c,d),(a,b)]$. Moreover, ${\Upsilon}[(a,b),(c,d)] = {\Upsilon}[(a,b),(d,c)] = {\Upsilon}[(b,a),(c,d)] = {\Upsilon}[(b,a),(d,c)]$. Due to this symmetry, the products $\otimes_3$, $\otimes_4$, $\otimes_6$ and $\otimes_7$ are equivalent. \corref{cor:symmetric-square} follows from \thref{thm:simple-square}.
\noindent \line(1,0){200} \\ \noindent ${\bf{SymmetricSquare}}({\langle}{\Upsilon},\mathcal{E}{\rangle})$ \\ \indent $\mathcal{E} = \mathcal{E} \otimes_1 \mathcal{E}$ \\ \indent $\mathcal{E} = \Upsilon \otimes_2 \mathcal{E}$ \\ \indent $\Upsilon = \Upsilon \otimes_3 \mathcal{E}$ \\ \indent $\Upsilon = \Upsilon \otimes_5 \Upsilon$ \\ {\bf{return}} ${\langle}{\Upsilon},\mathcal{E}{\rangle}$ \\ \noindent \line(1,0){200} \begin{corollary}\label{cor:symmetric-square} Let $\mathcal{G}$ be an instance of {\sc{Symmetric Gap Undirected ST-Realizability}}. $\mathcal{G}^*$ can be computed using $O({\log}n)$ repeated applications of ${\bf{SymmetricSquare}}(\mathcal{G})$. \end{corollary} \subsection{An $O({\log}^2n)$ time parallel algorithm} \begin{algorithm}{{\bf{Connect}}($\mathcal{G} = {\langle}{\Upsilon},\mathcal{E}{\rangle}$)} \begin{algorithmic}[1] \STATE ${\mathcal{E}}^* \leftarrow {\mathcal{E}}$ \STATE ${\Upsilon}^* \leftarrow {\Upsilon}$ \STATE {\bf{for all}} $i$ {\bf{do}} $X_{\mathcal{E}}(i) = i$ \STATE {\bf{for all}} $(i,j)$ {\bf{do}} $X_{\Upsilon}(i,j) = (i,j)$ \vspace{0.1in} \FOR{$O({\log}{n})$ iterations} \vspace{0.1in} \STATE {\bf{for all}} $i$ {\bf{do}} $Temp_{\mathcal{E}}(i) \leftarrow StandardHook(i)$ \STATE {\bf{for all}} $i$ {\bf{do}} $Temp_{\mathcal{E}}(i) \leftarrow \mbox{min}_j\{Temp_{\mathcal{E}}(j)\ |\ X_{\mathcal{E}}(j) = i\ \mbox{and}\ Temp_{\mathcal{E}}(j) \neq i\}$ \STATE \indent if none then $Temp_{\mathcal{E}}(i) \leftarrow X_{\mathcal{E}}(i)$ \\ \vspace{0.1in} \STATE {\bf{for all}} $(i,j)$ {\bf{do}} $Temp_{\Upsilon}(i,j) \leftarrow GapHook(i,j)$ \STATE {\bf{for all}} $(i,j)$ {\bf{do}} $Temp_{\Upsilon}(i,j) \leftarrow \mbox{min}_{(k,l)}\{Temp_{\Upsilon}(k,l)\ |\ X_{\Upsilon}(k,l) = (i,j)\ \mbox{and}\ Temp_{\Upsilon}(k,l) \neq (i,j)\}$ \STATE \indent if none then $Temp_{\Upsilon}(i,j) \leftarrow X_{\Upsilon}(i,j)$ \vspace{0.1in} \STATE {\bf{for all}} $i$ {\bf{do}} $X_{\mathcal{E}}(i) \leftarrow Temp_{\mathcal{E}}(i)$ \STATE {\bf{for all}} $(i,j)$ {\bf{do}}
$X_{\Upsilon}(i,j) \leftarrow Temp_{\Upsilon}(i,j)$ \vspace{0.1in} \FOR{$O({\log}{n})$ iterations} \STATE {\bf{for all}} $i$ {\bf{do}} $Temp_{\mathcal{E}}(i) \leftarrow Temp_{\mathcal{E}}(Temp_{\mathcal{E}}(i))$ \STATE {\bf{for all}} $(i,j)$ {\bf{do}} $Temp_{\Upsilon}(i,j) \leftarrow Temp_{\Upsilon}(Temp_{\Upsilon}(i,j))$ \ENDFOR \vspace{0.1in} \STATE {\bf{for all}} $i$ {\bf{do}} $X_{\mathcal{E}}(i) \leftarrow \mbox{min}\{Temp_{\mathcal{E}}(i), X_{\mathcal{E}}(Temp_{\mathcal{E}}(i))\}$ \STATE {\bf{for all}} $(i,j)$ {\bf{do}} $X_{\Upsilon}(i,j) \leftarrow \mbox{min}\{Temp_{\Upsilon}(i,j), X_{\Upsilon}(Temp_{\Upsilon}(i,j))\}$ \vspace{0.1in} \STATE {\bf{for all}} $i,j$ {\bf{do}} {\bf{if}} $X_{\mathcal{E}}(i) = X_{\mathcal{E}}(j)$ {\bf{then}} ${\mathcal{E}}^*[i,j] \leftarrow 1$. \STATE {\bf{for all}} $i,j,k,l$ {\bf{do}} {\bf{if}} $X_{\Upsilon}(i,j) = X_{\Upsilon}(k,l)$ {\bf{then}} ${\Upsilon}^*[i,(k,l),j] \leftarrow 1$. \vspace{0.1in} \ENDFOR \vspace{0.1in} \RETURN $\mathcal{G}^* = {\langle}{\Upsilon}^*,\mathcal{E}^*{\rangle}$ \end{algorithmic} \end{algorithm} \begin{algorithm}{{\bf{StandardHook}}($i$)} \begin{algorithmic}[1] \STATE $S_1 \leftarrow \{X_{\mathcal{E}}(j)\ |\ \mathcal{E}^*[i,j] = 1\ \mbox{and}\ X_{\mathcal{E}}(j) \neq X_{\mathcal{E}}(i)\}$ \STATE $S_2 \leftarrow \{X_{\mathcal{E}}(j)\ |\ \Upsilon^*[i,(k,k),j] = 1\ \mbox{and}\ X_{\mathcal{E}}(j) \neq X_{\mathcal{E}}(i)\}$ \STATE $S = S_1 \cup S_2$ \IF {$S = \emptyset$} \RETURN $X_{\mathcal{E}}(i)$ \ELSE \RETURN min($S$) \ENDIF \end{algorithmic} \end{algorithm} \begin{algorithm}{{\bf{GapHook}}($i,j$)} \begin{algorithmic}[1] \STATE $S_1 \leftarrow \{X_{\Upsilon}(k,l)\ |\ \Upsilon^*[i,(k,l),j] = 1\ \mbox{and}\ X_{\Upsilon}(k,l) \neq X_{\Upsilon}(i,j)\}$ \STATE $S_2 \leftarrow \{X_{\Upsilon}(k,j)\ |\ \mathcal{E}^*[i,k] = 1\ \mbox{and}\ X_{\Upsilon}(k,j) \neq X_{\Upsilon}(i,j)\}$ \STATE $S = S_1 \cup S_2$ \IF {$S = \emptyset$} \RETURN $X_{\Upsilon}(i,j)$ \ELSE \RETURN min($S$) \ENDIF \end{algorithmic}
\end{algorithm} We will assume that there is one processor $P_i$ assigned to each vertex $i \in V$, one processor $P_{ij}$ assigned to each edge $(i,j) \in V^2$ and one processor $P_{ijkl}$ assigned to each {\em{gap edge}} $(i,j,k,l) \in V^4$. We use a vector $X_{\mathcal{E}}$ of length $n$ to specify the $s$-components of $\mathcal{G}$ as follows : if $V_c \subseteq V$ is any $s$-component, then for all $i \in V_c$, $X_{\mathcal{E}}(i)$ equals the least element of $V_c$. We use an $n \times n$ matrix $X_{\Upsilon}$ to specify the $g$-components of $\mathcal{G}$ as follows : if $W_c \subseteq V^2$ is any $g$-component, then for all $(i,j) \in W_c$, $X_{\Upsilon}(i,j)$ equals the lexicographically least element of $W_c$. The algorithm {\bf{Connect}} iteratively computes the vectors $X_{\mathcal{E}}$ and $X_{\Upsilon}$ from the input $\mathcal{G} = {\langle}{\Upsilon},\mathcal{E}{\rangle}$ and updates ${\Upsilon}^*$ and $\mathcal{E}^*$. It is based on a hook and contract algorithm \cite{parallel-ustconn-log2n} that works as follows. The algorithm deals with ``components", which are sets of ``vertices" {\em{found}} to belong to the same (standard or gap) component of $\mathcal{G}$. Each component is equipped with an edge-list, a linked list of edges that connect it to other components. Initially each element from $V$ is an $s$-component by itself. Their edge-lists correspond to the undirected edges of $\mathcal{E}$. These components will eventually grow and become the corresponding $s$-components. Initially each element from $V^2$ is a $g$-component by itself. Their edge-lists correspond to the undirected edges of $\Upsilon$. These components will eventually grow and become the corresponding $g$-components. 
The algorithm proceeds as follows : \\ \noindent {\bf{repeat}} until there are no edges left : \begin{enumerate} \item{Each component picks an edge pointing to a lexicographically minimum vertex from its edge-list leading to a neighboring component and hooks by pointing to it. If a component has an empty edge-list, it hooks to itself. The details of hooking are presented in {\bf{StandardHook}} and {\bf{GapHook}}. Note that both these hooking steps use the previously computed connectivity information from {\em{both}} ${\Upsilon}^*$ and $\mathcal{E}^*$. These hooking processes create clusters of components called pseudotrees. The $s$-components form pseudotrees on the vertex set $V$ and $g$-components form pseudotrees on the vertex set $V^2$.} \item{Each pseudotree is identified as a new component with one of its vertices as its representative. Each representative receives into its edge-list all the edges contained in the edge-lists of its pseudotree. At this stage the matrices $\mathcal{E}^*$ and ${\Upsilon}^*$ are updated with ``new" edges.} \item{Edges internal to components are removed implicitly.} \end{enumerate} During the first iteration the edges connecting each vertex to neighboring vertices are examined (steps 6-11), and sets of vertices which are known to be connected are identified (steps 14-17). In effect, each such set of vertices is merged into a ``supervertex"; the supervertices are specified by the vectors $X_{\mathcal{E}}$ and $X_{\Upsilon}$. For each $i$ in a supervertex, $X_{\mathcal{E}}(i)$ equals the smallest-numbered vertex in the supervertex. For each $(i,j)$ in a supervertex, $X_{\Upsilon}(i,j)$ equals the lexicographically first vertex in the supervertex. In succeeding iterations, the edges connecting each supervertex to neighboring supervertices are examined in steps 6-11, and sets of supervertices are merged in steps 14-17.
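As an illustration, the hook-and-contract loop can be simulated sequentially for the plain $s$-component case (ordinary undirected connectivity, with no gap edges). This sketch mirrors the hooking, pointer-jumping and relabeling steps but is not the CREW PRAM implementation:

```python
def hook_and_contract_components(n, edges):
    """Sequential simulation of the hook-and-contract loop for the plain
    s-component case (undirected connectivity, no gap edges).
    X[i] is the current representative ("supervertex") of vertex i."""
    X = list(range(n))
    while True:
        # Hook: every supervertex points to its smallest neighbouring one
        temp = list(range(n))
        for u, v in edges:
            a, b = X[u], X[v]
            if a != b:
                temp[a] = min(temp[a], b)
                temp[b] = min(temp[b], a)
        # Contract: pointer jumping collapses every tree onto its root
        while True:
            jumped = [temp[t] for t in temp]
            if jumped == temp:
                break
            temp = jumped
        new_X = [temp[x] for x in X]
        if new_X == X:      # no inter-component edges remain
            return X
        X = new_X
```

Hooking to the minimum neighbour only creates downward pointers, so the resulting pseudoforest is in fact a forest and the pointer jumping terminates; in the parallel setting, $O({\log}n)$ outer iterations suffice.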
The process continues until all the vertices in a (standard and gap) component have been merged into one gigantic supervertex. \begin{theorem}\label{thm:parallel-log2n} The algorithm {\bf{Connect}} finds $\mathcal{G}^* = {\langle}{\Upsilon}^*,\mathcal{E}^*{\rangle}$ in parallel time $O({\log}^2n)$ using $n^4$ processors in the CREW PRAM model. \end{theorem} The {\bf{Connect}} algorithm is a generalization of the parallel algorithm presented in \cite{parallel-ustconn-log2n}. We added two hooking procedures (one for growing $s$-components and one for growing $g$-components). Unlike \cite{parallel-ustconn-log2n}, the new edges found after the contraction step are added to the matrices ${\Upsilon}^*$ and $\mathcal{E}^*$ {\em{before}} starting the next hooking step. The algorithms of \cite{parallel-ustconn-log3by2n} and \cite{parallel-ustconn-lognloglogn} can similarly be generalized to compute $\mathcal{G}^* = {\langle}{\Upsilon}^*,\mathcal{E}^*{\rangle}$ in parallel time $O({\log}^{3/2}n)$ and $O({\log}{n}{\log}{\log}{n})$ respectively. The processor bounds in all these algorithms are polynomial in $n$, the number of vertices of $\mathcal{G}$. We now present an outline of the parallel algorithms of \cite{parallel-ustconn-log3by2n} and \cite{parallel-ustconn-lognloglogn} and the necessary modifications to apply them to ${\bf{SGSLogCFL}}$. We refer the reader to \cite{parallel-ustconn-log3by2n} and \cite{parallel-ustconn-lognloglogn} for low-level implementation details of these algorithms. \subsection{An $O({\log}^{3/2}n)$ time parallel algorithm} In the algorithm presented in the previous section the sizes of the components formed after the hooking phase may vary a lot. A slow-growing component may consist of as few as two vertices, whereas a fast-growing component may have as many as $n$ vertices for an $s$-component and $n^2$ vertices for a $g$-component.
As a result the contraction (steps 14-17) requires $\Theta({\log}n)$ time in order to allow the biggest component to contract to a single vertex. The algorithm must iterate ${\log}n$ times so that a slow-growing component, which may only double its size in each iteration, can eventually grow to its full size. A crucial observation of \cite{parallel-ustconn-log3by2n} is that slow-growing components need little time to contract and fast-growing components require fewer iterations to grow to their full size. Johnson and Metaxas \cite{parallel-ustconn-log3by2n} presented an algorithm in which components are scheduled to hook and contract according to their growth rate. Their algorithm schedules every component to grow by a factor of at least $2^{\sqrt{{\log}n}}$ in a phase of $O({\log}n)$ time. Hence, ${\sqrt{{\log}n}}$ phases suffice to find all connected components in the graph, for a total of $O({\log}^{3/2}n)$ time. Within a phase slow-growing components are scheduled to hook and contract in $o({\log}n)$ time repeatedly until they catch up with fast-growing components. Fast-growing components are left idle once they have achieved the intended size. \begin{itemize} \item{In the algorithm of \cite{parallel-ustconn-log2n} the vertices hook to a lexicographically minimum vertex. In Johnson-Metaxas algorithm vertices hook to the {\em{first edge}} in their edge-list. This creates pseudotrees of arbitrary circumference i.e., pseudotrees can have large cycles which are to be contracted properly in the contraction phase. Since exclusive writing is required, the usual pointer doubling technique will not terminate when applied to a cycle. Johnson and Metaxas \cite{parallel-ustconn-log3by2n} introduced {\em{cycle-reducing shortcutting}} technique to solve this problem. 
This technique (i) contracts a pseudotree into a rooted tree in time logarithmic in its circumference, (ii) contracts a rooted tree into a rooted star in time logarithmic in the length of its longest path.} \item{It is expensive to compute the set of edges of all the components in a pseudotree without concurrent writing. Potentially there are a large number of components that hook together in the first step and therefore a large number of components that are ready to give their edge-lists simultaneously to the new super-component's edge-list. Johnson and Metaxas \cite{parallel-ustconn-log3by2n} introduced {\em{edge-plugging scheme}} which achieves the objective in constant time, irrespective of whether the component is yet contracted to a rooted star.} \item{It is also expensive to have a component pick a mate. There may be a large number of edges internal to the component. The number of such edges grows every time components hook. These internal edges cannot be used to find a mate. Hence, a component may attempt to find a mate several times and will be unsuccessful if it picks an internal edge. Removing all the internal edges before picking an edge may also take a lot of time. Johnson and Metaxas \cite{parallel-ustconn-log3by2n} introduced a {\em{growth-control schedule}}. Components grow in size in a uniform way that controls their minimum sizes as long as continued growth is possible. The internal edges are identified and removed periodically to make hooking more efficient. The algorithm recognizes whether a component is growing too fast and therefore can be ignored.} \end{itemize} For implementation details of the above algorithm see \cite{parallel-ustconn-log3by2n}. As mentioned earlier, to get the corresponding parallel algorithm for ${\bf{SGSLogCFL}}$\ we add two hooking procedures (one for growing $s$-components and one for growing $g$-components). 
After each contraction step, the newly found edges are added to the matrices ${\Upsilon}^*$ and $\mathcal{E}^*$. \subsection{An $O({\log}{n}{\log}{\log}{n})$ time parallel algorithm} The Chong-Lam algorithm \cite{parallel-ustconn-lognloglogn} is also based on a hook-and-contract approach. The hooking process uses an ordering $<_d$ of the vertices such that $u <_d v$ iff the degree of $u$ is less than the degree of $v$, or the degrees are the same and $u$ precedes $v$ in the lexicographic ordering. Before every phase, every vertex of the current {\em{supergraph}} is either active, inactive or done. All active and inactive vertices have nonzero degree, the done vertices have zero degree, and there are no multiedges between active vertices; the inactive vertices are organized in a set of hooking trees. Initially all vertices with nonzero degree are active, and the rest are done. To choose their hooking edges, the active vertices of the graph perform the following steps in parallel: (i) if a vertex $v$ has a neighbor larger than itself according to $<_d$, then $v$ hooks to the {\em{largest}} such neighbor; (ii) if after the first step all neighbors of $v$ are hooked to it, then $v$ hooks to itself; otherwise, if after the first step a neighbor $u$ of $v$ is hooked to a vertex different from $v$, then $v$ hooks to $u$. This type of hooking scheme guarantees that any tree with a large degree must also contain a large number of vertices; the hooking schemes of \cite{parallel-ustconn-log2n, parallel-ustconn-log3by2n} suffer from creating pseudotrees with few vertices but a large degree. Some of the current hooking trees are contracted to a representative vertex in a contraction phase. The representative vertex is the only vertex in the tree which is hooked to itself. Whether a tree is contracted is determined by a parameter; this parameter depends on the phase and sets an upper bound on the sum of the degrees of the vertices of the trees which are contracted.
For every contracted tree, its representative becomes a new active vertex and the rest of its vertices become done. All multiedges between new active vertices are removed. The vertices of every uncontracted tree become inactive. The processing required by a hooking phase is performed in parallel time $O({\log}d)$, where $d$ is the degree of the active vertex, using pointer jumping. Checking the degree of a hooking tree during the contraction phase is done in parallel time $O({\log}c)$, where $c$ is the contraction parameter, by using pointer jumping and a constant-time edge-list plugging technique. \\ \noindent \line(1,0){200} \\ \noindent {{\bf{Connect}}($k$)} \\ \indent {\bf{MaxHook}}; \\ \indent {\bf{if}} $k > 0$ {\bf{then}} \\ \indent \indent {\bf{Connect}}($2^{2^k}$) \\ \indent \indent {\bf{Connect}}($k-1$) \\ \indent \indent {\bf{Connect}}($k-1$) \\ \indent {\bf{Contract}}($2^{2^{k+1}}$) \\ \noindent \line(1,0){200} \\ A call to {\bf{Connect}}(${\lceil}{{\log}{\log}n}{\rceil}$) contracts every connected component of the graph to a single vertex, and all the other vertices are organized in a set of rooted parent trees such that the root of the tree of a vertex $u$ is the vertex to which the connected component of $u$ is contracted. To generalize this algorithm to ${\bf{SGSLogCFL}}$, we make the following modifications: (i) we add two hooking procedures (one for growing $s$-components and one for growing $g$-components), and (ii) the new edges found after every call to {\em{Contract}} are added to the matrices ${\Upsilon}^*$ and $\mathcal{E}^*$ and the new degrees of the vertices are recomputed. The correctness of the algorithm follows by using \corref{cor:symmetric-square} in the correctness argument of \cite{parallel-ustconn-lognloglogn}, implying an $O({\log}{n}{\log}{\log}{n})$ time EREW parallel algorithm computing $\mathcal{G}^* = {\langle}{\Upsilon}^*,\mathcal{E}^*{\rangle}$.
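The hook-and-contract skeleton shared by all of these algorithms is easy to simulate sequentially. The following Python sketch is illustrative only: it uses minimum-label hooking (as in \cite{parallel-ustconn-log2n}) rather than the scheduled hooking of Johnson-Metaxas or the degree-ordered hooking of Chong-Lam, and it replaces the parallel pointer-jumping rounds with full path compression.

```python
def connected_components(n, edges):
    """Sequential simulation of the parallel hook-and-contract scheme.

    Each round performs (1) a hooking step, in which the root of a
    component hooks to the smallest-labelled neighbouring root (this
    ordering avoids the cycles that arbitrary hooking can create), and
    (2) a contraction step, which the parallel algorithms realize with
    O(log depth) rounds of pointer jumping; here we simply compress.
    """
    parent = list(range(n))

    def root(v):
        while parent[v] != v:
            v = parent[v]
        return v

    changed = True
    while changed:
        changed = False
        # Hooking step: merge neighbouring components at their roots.
        for u, v in edges:
            ru, rv = root(u), root(v)
            if ru != rv:
                parent[max(ru, rv)] = min(ru, rv)
                changed = True
        # Contraction step (sequential stand-in for pointer jumping).
        for v in range(n):
            parent[v] = root(v)
    return parent

# Two components, {0, 1, 2, 3} and {4, 5}:
labels = connected_components(6, [(0, 1), (1, 2), (2, 3), (4, 5)])
```

Each simulated round corresponds to one hook-and-contract phase; the point of the scheduling machinery described above is to organize these phases so that the total parallel time drops below the naive $O({\log}^2 n)$.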
\section{${\bf{SGSLogCFL}}$\ $\subseteq$ $DSPACE({\log}{n}{\log}{\log}{n})$}\label{sec:sgslogcfl-logloglog} Trifonov's algorithm \cite{trifonov-logloglog} is based on the $O({\log}{n}{\log}{\log}{n})$ time deterministic EREW PRAM algorithm with $O(m + n)$ processors of Chong and Lam \cite{parallel-ustconn-lognloglogn} outlined in the previous section. This parallel algorithm is first simulated sequentially in linear space. Using this sequential algorithm, a mathematical structure called a {\em{configuration}} is defined; a configuration corresponds to the state of the sequential algorithm at a certain point of its execution. An ordering on the edges incident to a vertex is fixed, and the hooking is done sequentially for all active vertices. Using the sequence of configurations, an $O({\log}^2{n})$ space algorithm is obtained which, instead of storing all of its current state, recomputes parts of it when needed. This algorithm works much like Savitch's algorithm \cite{Savitch70}. The max-degree hooking scheme of \cite{parallel-ustconn-lognloglogn} ensures that small trees have small neighborhoods. Using the exploration walks on trees defined by Kouck{\'y} \cite{koucky-uts}, the levels of recursion of \cite{parallel-ustconn-lognloglogn} are implemented so that they process small trees in $o({\log}n)$ space. These walks essentially play the role of the edge-list plugging and pointer jumping techniques employed by the Chong-Lam algorithm; they allow us to traverse the pseudotrees space-efficiently. The $O({\log}n)$ space per level is mainly due to storing vertices in the local variables of the functions, since each vertex takes $\Theta({\log}n)$ space. To overcome this bottleneck, the functions are redefined so that they never keep a vertex in their local variables. The vertex $v$ is removed from the argument list of the functions; instead of this argument, one current vertex is maintained in a global variable.
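The recompute-rather-than-store principle can be illustrated with the classic midpoint recursion behind Savitch's theorem. The sketch below is a generic illustration of that idea applied to plain reachability; it is not Trifonov's actual recursion on configurations.

```python
def reachable(adj, u, v, k):
    """Savitch-style test: is there a walk from u to v of length <= 2**k?

    Only the O(k)-deep recursion stack is kept; the two half-walks are
    recomputed on demand instead of being stored, trading time for space.
    """
    if k == 0:
        return u == v or v in adj[u]
    # Guess a midpoint w and verify both halves recursively.
    return any(reachable(adj, u, w, k - 1) and reachable(adj, w, v, k - 1)
               for w in adj)

# Path graph 0 - 1 - 2 - 3:
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
```

Each stack frame stores only a midpoint and a level, which is what brings the space for reachability down to $O({\log}^2 n)$; Trifonov's refinement then attacks the per-level cost itself.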
All functions are programmed to return some ``information'' about this vertex. A function which would otherwise return a vertex is defined so that after its execution the current vertex is its result. If needed, the calling function keeps enough information locally to restore the original current vertex. The crucial part of the optimization is to avoid storing vertices locally while still being able to move the current vertex temporarily, perform some operation at the new current vertex, and then return to the original current vertex. Instead, the comparison is performed bit by bit by going back and forth between the two vertices, using the reversibility of the moves along the edges and the exploration walks on the trees. Aside from the information stored for the way back, this takes only the $\Theta({\log}{\log}n)$ space necessary to store the index of a bit. In this way the bottleneck of $\Omega({\log}n)$ space per level is reduced to $O({\log}{\log}n)$, yielding the overall $O({\log}n{\log}{\log}n)$ space bound. The introduction of one global current vertex, with all functions returning information about this vertex, mimics the implementation and correctness of the Chong-Lam algorithm with minor modifications to the hooking scheme. The current vertex is an implicit argument to all functions describing a {\em{configuration}}. To generalize this algorithm to ${\bf{SGSLogCFL}}$, we make the following modifications: (i) we add two hooking procedures (one for growing $s$-components and one for growing $g$-components), (ii) the new edges found after every call to {\em{Contract}} are added to the matrices ${\Upsilon}^*$ and $\mathcal{E}^*$ and the new degrees of the vertices are recomputed, and (iii) the exploration walks and the bit-by-bit comparison are done on the hooking trees generated by the $s$-components and $g$-components. \begin{theorem}\label{thm:sgslogcfl-space-lognloglogn} Let $\mathcal{G} = {\langle}{\Upsilon},\mathcal{E}{\rangle}$ be an instance of {\sc{Symmetric Gap Undirected ST-Realizability}}.
$\mathcal{G}^* = {\langle}{\Upsilon}^*,\mathcal{E}^*{\rangle}$ can be computed deterministically in $O({\log}{n}{\log}{\log}{n})$ space, i.e., ${\bf{SGSLogCFL}}$\ $\in$ $DSPACE({\log}{n}{\log}{\log}{n})$. \end{theorem} \begin{corollary}\label{cor:bstconn-space-lognloglogn} {\sc{Balanced ST-Connectivity}}\ $\in$ $DSPACE({\log}{n}{\log}{\log}{n})$. \end{corollary} \section{Open Problems} In a recent work \cite{kintali-complement}, we proved that {\sc{Balanced ST-Connectivity}}, ${\bf{SGSLogCFL}}$\ and {\sc{Positive Balanced ST-Connectivity}}\ are all closed under complementation. Several interesting research directions arise from our work: \begin{itemize} \item{{\bf{Balanced Connectivity}}: {\sc{Balanced ST-Connectivity}}\ and {\sc{Positive Balanced ST-Connectivity}}\ are natural graph connectivity problems that lie between ${\bf{L}}$\ and ${\bf{NL}}$. Studying their space complexity is an interesting research direction towards improving the space complexity of {\sc{ST-Connectivity}}. In particular, it would be interesting to improve \thref{thm:sgslogcfl-space-lognloglogn}. Is ${\bf{SGSLogCFL}}$\ $\in$ ${\bf{L}}$? Less ambitiously, is ${\bf{SGSLogCFL}}$\ $\in$ ${\bf{{SC}^{2}}}$?} \item{An alternate proof of \thref{thm:sgslogcfl-space-lognloglogn} using the techniques of \cite{zigzag-journal, SL=L} or \cite{derand-squaring} seems to be a challenging task.} \item{${\bf{SLogCFL}}$\ vs ${\bf{LogDCFL}}$: In the logspace setting we have ${\bf{L}}$\ = ${\bf{SL}}$\ $\subseteq$ ${\bf{NL}}$. In the ${\bf{LogCFL}}$\ setting, we have ${\bf{LogDCFL}}$\ $\subseteq$ ${\bf{SLogCFL}}$\ = ${\bf{LogCFL}}$\ (see \thref{thm:all-lange-theorem}). By definition, we have ${\bf{NL}}$\ $\subseteq$ ${\bf{LogCFL}}$. It is known that ${\bf{LogDCFL}}$\ $\subseteq$ ${\bf{{SC}^{2}}}$\ \cite{Cook-logDCFL}. This motivates the study of the relationship between ${\bf{LogDCFL}}$\ and ${\bf{SLogCFL}}$.
It would be interesting to generalize the techniques of \cite{zigzag-journal, SL=L} to prove ${\bf{LogDCFL}}$\ = ${\bf{SLogCFL}}$. This would imply ${\bf{NL}}$\ $\subseteq$ ${\bf{{SC}^{2}}}$, i.e., {\sc{ST-Connectivity}}\ can be solved by a deterministic algorithm in polynomial time and $O({\log}^2{n})$ space.} \item{${\bf{SLogCFL}}$\ vs ${\bf{RLogCFL}}$: We have ${\bf{LogDCFL}}$\ $\subseteq$ ${\bf{SLogCFL}}$\ = ${\bf{LogCFL}}$\ and ${\bf{LogDCFL}}$\ $\subseteq$ \allowbreak ${\bf{RLogCFL}}$\ $\subseteq$ ${\bf{LogCFL}}$, implying ${\bf{RLogCFL}}$\ $\subseteq$ ${\bf{SLogCFL}}$. In the logspace setting, prior to Reingold's work, Aleliunas et al.\ \cite{AKLLR} proved that ${\bf{SL}}$\ $\subseteq$ ${\bf{RL}}$, using random walks. It would be interesting to generalize their techniques to prove ${\bf{SLogCFL}}$\ $\subseteq$ ${\bf{RLogCFL}}$. Since ${\bf{BPLogCFL}}$\ $\subseteq$ ${\bf{{SC}^{2}}}$\ \cite{venkat-auxpda}, a proof of ${\bf{SLogCFL}}$\ $\subseteq$ ${\bf{RLogCFL}}$\ would imply ${\bf{NL}}$\ $\subseteq$ ${\bf{{SC}^{2}}}$.} \item{Is there a circuit characterization of ${\bf{SGSLogCFL}}$? What is the relationship between (i) ${\bf{SGSLogCFL}}$\ and ${\bf{NL}}$, (ii) ${\bf{SGSLogCFL}}$\ and ${\bf{LogDCFL}}$, and (iii) ${\bf{SGSLogCFL}}$\ and ${\bf{DET}}$\footnote{${\bf{DET}}$ is the class of problems ${\bf{{NC}^{1}}}$\ Turing reducible to the determinant \cite{Cook85-DET}.}?} \item{Allender and Lange \cite{allender-lange} proved that ${\bf{SLogCFL}}$\ = ${\bf{LogCFL}}$. Is ${\bf{1SLogCFL}}$\ = ${\bf{1LogCFL}}$, i.e., is {\sc{Positive Balanced ST-Connectivity}}\ ${\bf{NL}}$-complete?} \end{itemize} \vspace{0.2in} \noindent {\large{\bf{Acknowledgements}}}: This project is partially funded by NSF grant CCF-0902717. I gratefully acknowledge helpful discussions with Eric Allender, Klaus-J{\"o}rn Lange, Nutan Limaye, Richard J. Lipton, H. Venkateswaran and Dieter van Melkebeek. \bibliographystyle{alpha}
\section{Introduction} In the words of \citet{Shapley1922}, NGC~2419 is `a globular cluster of uncommon interest'. At a distance of 83~kpc \citep{Ripepi2007}, it has long been recognised as one of the most remote globular clusters (GCs) in the Milky Way \citep{Baade1935,Racine1975}. Despite the difficulties associated with the great distance, NGC~2419 has a number of unique characteristics that motivate the effort required to undertake detailed studies. It is among the most luminous and massive GCs in the Milky Way and it is also very extended; these properties together imply a very long relaxation time. \citet{Baumgardt2018} found a total mass of $(9.8\pm1.4)\times10^5 M_\odot$ and a half-mass radius of $r_h = 24.2$~pc, corresponding to a half-mass relaxation time of $t_\mathrm{rh} = 55 \, \mathrm{Gyr}$. This is by far the longest among the GCs in the Milky Way. In many GCs, the present-day structural parameters have likely been significantly modified by dynamical effects such as mass segregation and orbital mixing \citep{Decressin2008,Vesperini2013,Dalessandro2014,Dalessandro2018}. Because of the long relaxation time, such effects are expected to be minimal in NGC~2419 and it therefore offers one of the best opportunities to constrain the initial structural properties of a globular cluster. The vast majority of GCs that have been studied in detail to date exhibit variations in the abundances of many of the light elements, with anti-correlated abundances of Na/O, C/N, and, less commonly, Al/Mg \citep{Carretta2009,Cohen2002,Shetrone1996,Sneden2004}. NGC~2419 is no exception, and some of its abundance variations are in fact more extreme than in most other GCs: about half of the stars in NGC~2419 have extremely depleted Mg abundances, reaching $\mathrm{[Mg/Fe]}$ values as low as $-1$, and these stars are also enriched in potassium \citep{Cohen2012,Mucciarelli2012a}. 
However, the overall metallicity spread appears to be very small (less than $\sim$0.1 dex), with a mean metallicity of $\mathrm{[Fe/H]} = -2.09$ \citep{Mucciarelli2012a,Frank2015}. Photometric studies reveal an extremely extended blue tail of the horizontal branch in NGC~2419 \citep{Ripepi2007,Dalessandro2008,Sandquist2008}, which suggests the presence of a population of strongly He-enriched stars \citep[$\mathrm{Y}\approx0.36$;][]{DiCriscienzo2011,DiCriscienzo2015}. \citet{DiCriscienzo2011} also noted a significant spread in the F435W-F814W ($\sim B\!-\!I$) colours of red giant branch (RGB) stars, and estimated that the colour spread was consistent with about 30\% of the stars being He-enriched. A spread in the colours of RGB stars was also found by \citet[][hereafter B2013]{Beccari2013} from ground-based $uVI$ photometry. According to B2013, stars with blue $u\!-\!V$ and $u\!-\!I$ colours tended to be more centrally concentrated within NGC~2419 than those with redder colours, and the authors argued that the blue colours were indicative of enhanced He. A more centrally concentrated distribution of stars with anomalous (i.e., non field-like) abundances of He and other elements would indeed be expected in some self-enrichment scenarios for the origin of multiple populations (MPs) in GCs \citep[e.g.][]{DErcole2008,Decressin2008,Bastian2013a}. However, B2013 also found that stars with anomalous (low) Mg abundances tended to have redder $u\!-\!V$ colours than those with normal Mg abundances, in apparent conflict with the expectation that elevated He content would be coupled with depleted Mg abundances. A complication when interpreting the $uVI$ colours of red giants is that the SDSS $u$-band is also sensitive to N abundance variations. An increased amount of N will suppress the flux in the $u$-band and lead to redder $u\!-\!V$ colours compared to a N-normal population. 
The net effect on the colours of an `enriched' population thus depends on the balance between the opposing effects of He- and N variations. This balance may change as a function of luminosity on the RGB. In this context, it is worth noting that the radial distributions were determined by B2013 for stars on the lower RGB, whereas the stars with spectroscopic Mg abundance measurements are found near the tip of the RGB. It is thus important to establish how the Mg abundance anomalies correlate with variations in other elemental abundances by independently constraining the He and N abundances of the different populations. The most common spectroscopic tracers of multiple populations in GCs are the Na/O \citep{Carretta2009} and C/N \citep{Cohen2002} anti-correlations. Variations in CNO abundances are detectable photometrically through their effects on the OH, CN, NH, and CH molecular absorption bands in the blue part of the spectra of cool stars \citep[e.g.][]{Sbordone2011,Carretta2011}, as demonstrated in spectacular fashion by the Hubble Space Telescope (HST) UV Legacy Survey of globular clusters \citep{Piotto2015,Milone2017}. However, the distance of NGC~2419 makes UV observations of RGB stars relatively time consuming. From ground-based Str{\"o}mgren $uvby$ photometry, \citet[][hereafter F2015]{Frank2015} found a significant spread in N abundance for RGB stars in the outer parts of NGC~2419, with an approximately equal mix of two populations with distinct N abundances being favoured over a single Gaussian distribution of N abundances. This is reminiscent of the bimodality observed in the $\mathrm{[Mg/Fe]}$ and $\mathrm{[K/Fe]}$ abundance ratios \citep{Mucciarelli2012a}. The Str{\"o}mgren colours were found by F2015 to be consistent with the Mg-normal stars being N-normal, and Mg-poor stars being N-rich. 
The inner regions of NGC~2419 are too crowded for accurate ground-based photometry of all but the brightest RGB stars, and the studies of B2013 and F2015 were both restricted to stars outside the central $\sim$50\arcsec\ (about one half-light radius) of the cluster. F2015 pointed out that the difference between the roughly equal fractions of N-normal and N-rich stars found by them in the outer regions of the cluster and the somewhat smaller fraction of He-rich stars in the centre \citep{DiCriscienzo2011} might suggest an inverse population gradient, in the sense that the enriched stars are somewhat less concentrated. While this would be at odds with theoretical expectations, it would not be completely unprecedented. In \citet{Larsen2015}, it was found that stars with enhanced N abundances were less concentrated than those with normal N abundances within the central regions of the GC M15, whereas an apparent reversal of this trend occurred at larger radii. However, this result has recently been challenged by \citet{Nardiello2018}, who found no significant difference in the radial distributions of the different populations in M15. The correspondence between the stellar populations identified in the central regions of NGC~2419 (via the extended HB and the spread in optical colours on the RGB) and the constraints on N abundance variations in the outer parts remains unclear. What is still missing is a more robust way of establishing the contributions of N abundance vs.\ He abundance variations to the colour variations in the central regions of the cluster. HST observations in the F275W filter used by \citet{Piotto2015} would require impractically long integration times, but a viable alternative is offered by the narrow-band F343N filter, which is sensitive to the NH absorption band near 3400~\AA\ \citep{Larsen2014a}.
Here we present new HST observations of NGC~2419 in the F343N and F336W filters, which we combine with existing archival data in several optical filters to constrain the properties of multiple populations in the inner regions of the cluster. Throughout this paper we assume a distance modulus of $(m-M)_0 = 19.60$~mag \citep{Ripepi2007} and a foreground extinction in the HST filters of $A_\mathrm{F336W} = 0.271$ mag, $A_\mathrm{F438W} = 0.220$ mag, $A_\mathrm{F555W} = 0.174$ mag, $A_\mathrm{F606W} = 0.151$ mag, $A_\mathrm{F814W} = 0.093$ mag, and $A_\mathrm{F850LP} = 0.073$ mag \citep[][via the NASA/IPAC Extragalactic Database, NED]{Schlafly2011}. Because of the low concentration of NGC~2419, the exact location of the centre is uncertain by several arcsec. The 2010 edition of the \citet{Harris1996} catalogue gives the J2000.0 centre coordinates as (RA, Decl) = (07h38m08.47s, $+38^\circ52^\prime56\farcs8$) whereas NED lists the coordinates as (07h38m07.9s, $+38^\circ52^\prime48\arcsec$). \citet{Dalessandro2008} used HST photometry to compute the location of the barycentre as (07h38m08.47s, $+38^\circ52^\prime55\arcsec$). From looking at our new HST images, the Harris coordinates seem a little too far north, and those given by the NED too far south. We thus adopted the coordinates from \citet{Dalessandro2008}. \section{Observations and data reduction} \subsection{The WFC3 filters and multiple populations} \begin{figure*} \centering \includegraphics[width=16cm]{fig1.pdf} \caption{Model spectra of RGB stars with P1 and P2-like composition. Both model spectra are computed for $T_\mathrm{eff}=5254$ K, $\log g = 2.76$, and $\mathrm{[Fe/H]}=-2.0$. Also shown are the transmission curves for the filters used in this paper. \label{fig:p1p2spec} } \end{figure*} The sensitivity of various photometric systems to multiple populations in GCs has been discussed in detail by previous studies \citep[e.g.][]{Carretta2011,Sbordone2011,Piotto2015}. 
The photometric signatures can be grouped into two broad categories: 1) atmospheric effects, and 2) effects on stellar structure. In the first case, the observed spectral energy distribution (SED) is modified by variations in the strengths of strong molecular absorption features (CN, CH, NH, OH), which are linked to variations in the light-element abundances (C, N, O). As long as the C+N+O sum is constant, these abundance variations are not expected to have any significant effect on the structure of the star itself. In the second case, the abundance variations (typically an increased He content) do affect the structure of the star, and a He-enriched isochrone will generally be shifted to higher effective temperatures (bluer colours). Figure~\ref{fig:p1p2spec} shows model spectra for two stars with properties similar to those found near the base of the RGB in NGC~2419 ($T_\mathrm{eff}=5254$ K, $\log g = 2.76$, and $\mathrm{[Fe/H]}=-2.0$). The model atmospheres and corresponding spectra were computed with the \texttt{ATLAS12} and \texttt{SYNTHE} codes \citep{Sbordone2004,Kurucz2005} for normal ($\alpha$-enhanced) halo-like composition (P1) and the CNONa2 mixture of \citet{Sbordone2011}, which is typical of enriched stars (P2) in GCs ($\Delta$(C, N, O, Na) = $-0.6, +1.44, -0.8, +0.8$ dex). Also included are the transmission curves for the WFC3 filters used in this paper. We see that the F343N filter samples the NH feature near 3400~\AA, which is much stronger in the N-rich P2 star. The broader F336W filter is also sensitive to variations in the OH bands bluewards of the NH feature, which are weaker in the P2 spectrum (due to the depleted O abundance). To the extent that stars in NGC~2419 follow the usual tendency for N and O to be anti-correlated, this enhances the ability of the F336W-F343N colour to distinguish between the different populations. 
We note also that the F438W filter includes the CH band near 4300~\AA\ (the Fraunhofer G feature), which is weaker in the P2 spectrum because of the C depletion. The F555W, F606W, F814W, and F850LP filters include no strong molecular bands and are largely insensitive to the atmospheric effects of multiple populations, but they are, of course, sensitive to variations in effective temperature that may be caused by variations in He content. \subsection{Observations} For the optical photometry we used archival data from observing programme GO-11903 (P.I.: J.\ Kalirai), which imaged the central parts of NGC~2419 with the Wide Field Camera 3 (WFC3) on board HST. These observations consist of pairs of un-dithered exposures in many filters. In this paper we use observations in F438W (exposure times of $2\times725$ s), F555W ($2\times580$ s), F606W (2$\times400$ s), F814W ($2\times650$ s), and F850LP ($2\times675$ s). The programme also includes exposures in several ultraviolet filters \citep[which were used by][]{DiCriscienzo2015}, but these are generally short and do not allow us to reach the required photometric accuracy for stars on the RGB. Additional observations in F336W and F343N were obtained in cycle 25 under programme GO-15078 (P.I.: S.\ Larsen). This filter combination was chosen specifically to measure variations in the strength of the NH band near 3400~\AA\ (Fig.~\ref{fig:p1p2spec}). The magnitude difference between the two filters is sensitive to the strength of the NH feature in a manner similar to, for example, the Str{\"o}mgren $\beta$ index for the H$\beta$ line \citep{Stromgren1966}. The GO-15078 observations consist of three visits, of which two visits were allocated to F343N imaging and one visit to F336W. Each visit had a duration of three orbits and within each visit, the observations were dithered according to the C6 $3\times2$ dither pattern described in \citet{Dahlen2010}. 
Hence, NGC~2419 was observed in F343N for six orbits which yielded 12 exposures with a total exposure time of 17152 s. In F336W, the six exposures obtained during three orbits had a total exposure time of 8576 s. The roll angles, centre coordinates, and dither patterns were identical for all three visits. Given that the two filters have very similar central wavelengths, we expect that most systematic effects (reddening, sensitivity, etc.) will cancel out when calculating the difference between the F336W and F343N magnitudes. \subsection{Photometry} For the analysis we used the `\texttt{*\_flc}' frames, which are corrected for charge-transfer inefficiencies by the instrument pipeline \citep{Ryan2016}. Photometry was carried out with \texttt{ALLFRAME} \citep{Stetson1994}, following the procedure described in \citet{Larsen2014a}. After cleaning the individual pipeline-reduced frames of cosmic rays and multiplying them by the appropriate pixel area maps, \texttt{ALLFRAME} was set up to carry out photometry on each individual frame. Point-spread functions (PSFs) were determined from about 50 isolated, bright stars distributed evenly across each detector. We followed the standard approach of detecting stars with the \texttt{find} task in \texttt{daophot} and carrying out a first round of aperture- and PSF-fitting photometry, then detecting additional stars on the star-subtracted images generated by \texttt{ALLFRAME} in the first pass, and using the merged star catalogues as input for a second pass of PSF-fitting photometry \citep{Stetson1987}. The PSFs used in the second pass were redetermined from images in which all stars except the PSF stars had been subtracted. The GO-15078 and GO-11903 data were reduced separately and the photometry catalogues were then merged. 
The photometry was calibrated to STMAG magnitudes using aperture photometry of the PSF stars and the 2017 photometric zero-points for an $r=10$ pixels aperture published on the WFC3 webpage\footnote{\url{http://www.stsci.edu/hst/wfc3/analysis/uvis_zpts/}}. These are: $z_\mathrm{F343N} = 22.770$ mag, $z_\mathrm{F336W} = 23.517$ mag, $z_\mathrm{F438W} = 24.236$ mag, $z_\mathrm{F555W} = 25.651$ mag, $z_\mathrm{F606W} = 26.154$ mag, $z_\mathrm{F814W} = 25.861$ mag, and $z_\mathrm{F850LP} = 24.885$ mag. Conveniently, the roll angles of the two datasets differ by less than 10 degrees (as indicated by the header keyword PA$\_$V3, which has a value of 276 deg for the GO-15078 observations and 269 deg for the GO-11903 observations) and the difference between the centre coordinates is only about $10\arcsec$. Hence, the overlap between the datasets is excellent. We used the \texttt{geomap} and \texttt{geoxytran} tasks in the \texttt{images.immatch} package in IRAF to define coordinate transformations between the GO-15078 and GO-11903 datasets. Because of the good overlap, most of the stars imaged on a given detector in GO-15078 were mapped onto the same detector in GO-11903, although a small fraction of CCD\#1 in GO-15078 was mapped onto CCD\#2 in GO-11903 and vice versa. We used about 100--150 stars on each CCD to define the transformations, which were fitted with 3rd order polynomials in the $x$ and $y$ coordinates. This yielded an r.m.s.\ scatter of about 0.05 pixels in the solutions. The pixel coordinates (measured in the GO-15078 frames) were further transformed to sky coordinates using the \texttt{wcs} package in \texttt{Astropy} \citep{AstropyCollaboration2018}. Small offsets to the HST coordinates (about 0\farcs216 in right ascension and 0\farcs061 in declination) were applied in order to match the Gaia astrometry, based on about 400 stars in common between our HST data and the Gaia DR2 catalogue \citep{GaiaCollaboration2016,GaiaCollaboration2018}. 
The dispersions around the mean offsets were about $0\farcs026$ and $0\farcs015$ in right ascension and declination, respectively, from which we estimate the remaining systematic uncertainty on the astrometric calibration to be about $10^{-3}$ arcsec. The first few entries of the photometric catalogue are listed in Table~\ref{tab:photometry}, and the full catalogue is available on-line. \begin{figure} \centering \includegraphics[width=\columnwidth]{fig2.png} \caption{($m_\mathrm{F438W}-m_\mathrm{F850LP}, m_\mathrm{F438W}$) colour-magnitude diagram. Red symbols indicate stars that are saturated in the F850LP filter. \label{fig:cmd_bl_b} } \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig3.pdf} \caption{Spatial distribution of RGB stars relative to the centre of NGC~2419. \label{fig:map} } \end{figure} Figure~\ref{fig:cmd_bl_b} shows the ($m_\mathrm{F438W}$ vs.\ $m_\mathrm{F438W-F850LP}$) colour-magnitude diagram (CMD). Overall, the CMD is very similar to that shown in Fig.~1 of \citet{DiCriscienzo2015}, apart from an offset due to our use of STMAG (instead of VEGAMAG) zero-points. The CMD clearly shows the main features identified in previous studies, including the horizontal branch (HB) and its extended blue tail, a population of blue stragglers (BS), as well as the relatively narrow RGB. The photometry reaches a couple of magnitudes below the main sequence turn-off (MSTO), although we will concentrate exclusively on the RGB in this paper. The red dashed line indicates the colour cut that we will use to separate RGB stars from potential HB and BS interlopers, which is given by: $m_\mathrm{F438W}-m_\mathrm{F850LP} > -0.12 - 0.45 \times (m_\mathrm{F438W} - 19.5)$. 
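For concreteness, this selection is a single linear cut in the ($m_\mathrm{F438W}-m_\mathrm{F850LP}$, $m_\mathrm{F438W}$) plane and can be applied directly to a photometric catalogue. The sketch below is a minimal illustration; the example magnitudes are invented placeholders, not catalogue values.

```python
def is_rgb_candidate(m_f438w, m_f850lp):
    """Colour cut separating RGB stars from HB/blue-straggler interlopers.

    Implements m_F438W - m_F850LP > -0.12 - 0.45 * (m_F438W - 19.5)
    for STMAG magnitudes (the red dashed line in the CMD figure).
    """
    colour = m_f438w - m_f850lp
    return colour > -0.12 - 0.45 * (m_f438w - 19.5)

# A red star (colour = 1.0 at m_F438W = 21.0) passes the cut,
# while a blue HB-like star (colour = -1.5) is rejected.
```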
We use the F438W-F850LP colour combination for this purpose, partly because it offers a long colour baseline, which helps to separate the RGB from AGB stars in the range $19 \la m_\mathrm{F438W} \la 20$, and partly because the F850LP filter is less affected by saturation than the F555W and F814W filters. In F555W and F814W, saturation sets in at $m_\mathrm{F555W} \sim 18.9$ and $m_\mathrm{F814W} \sim 19.0$, respectively, whereas the saturation limit is about one magnitude brighter, at $m_\mathrm{F850LP} \sim 17.9$, in F850LP. Stars that are saturated in F850LP are indicated with red symbols in Fig.~\ref{fig:cmd_bl_b}. In the F336W, F343N, and F438W observations, even the brightest RGB stars remain unsaturated. While it is possible to recover the flux accurately even for stars that are several magnitudes brighter than the saturation limit \citep{Gilliland2010}, we have not attempted to do so here. Figure~\ref{fig:map} shows the spatial distribution of RGB stars brighter than $M_\mathrm{F438W}=+2$ that are included in both the GO-15078 and GO-11903 datasets, relative to the adopted centre of NGC~2419. The coverage is spatially complete within a radius of $\approx70\arcsec$ (apart from the gap between the WFC3 detectors), and the outermost star is about 110\arcsec\ from the centre. In terms of the projected half-light radius \citep[$r_{h,lp}=45\arcsec$;][]{Baumgardt2018}, spatial coverage is thus complete to about $1.5 \, r_{h,lp}$. \subsection{Artificial star tests} \label{sec:artstar} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig4.pdf} \caption{Input F438W magnitudes versus the difference between input and recovered F336W-F343N colour, $\delta_{o-i}$(F336W-F343N), for the artificial stars. The red lines show the 16\% and 84\% percentiles.
\label{fig:synt_dunu_b} } \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig5.pdf} \caption{Here $\delta_{o-i}$(F336W-F343N) is plotted as a function of radial distance $R$ from the centre of NGC~2419 for artificial stars brighter than $M_\mathrm{F438W}=+2$. The red solid (dashed) lines show the 16\% and 84\% (2.5\% and 97.5\%) percentiles. \label{fig:synt_r_dunu} } \end{figure} The photometric accuracy and completeness were quantified by means of artificial star tests. Ten rounds of such tests were carried out, adding about 2050 stars to the HST images in each round with the \texttt{mksynth} task in the \texttt{BAOLAB} package \citep{Larsen1999}. The \texttt{mksynth} task models artificial stars by treating the PSF as a probability density function, from which events are picked at random and added to the image one by one until the desired number of counts has been reached. In this way, the images of simulated stars are subject to the same stochastic effects as those of real stars. To ensure that the central regions of the cluster were well sampled, the artificial stars were grouped into a series of concentric annuli, centred on NGC~2419. These annuli had radii of $0 < r_1 < 100$ pixels (60 stars per round), $100 < r_2 < 200$ pixels (90 stars), $200 < r_3 < 400$ pixels (325 stars), $400 < r_4 < 600$ pixels (300 stars), $600 < r_5 < 800$ pixels (280 stars), $800 < r_6 < 1200$ pixels (540 stars), and $1200 < r_7 < 2000$ pixels (480 stars). To avoid self-crowding among the artificial stars, we enforced a minimum separation of 20 pixels between any pair of artificial stars. A set of dedicated artificial PSF stars was also added. The F814W magnitudes of the artificial stars were picked at random from the actual magnitude distribution of RGB stars in NGC~2419, with the faintest artificial stars having an absolute magnitude of $M_\mathrm{F814W} = +4$ ($M_\mathrm{F438W} \approx +3$). 
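The geometric placement just described (uniform sampling within each annulus, with a 20-pixel minimum separation) can be sketched by simple rejection sampling. This is only an illustration of the placement step, not of the \texttt{mksynth} photometric simulation itself, and the function name is hypothetical:

```python
import numpy as np

def place_artificial_stars(n_stars, r_in, r_out, min_sep=20.0,
                           rng=None, max_tries=100000):
    """Place n_stars uniformly (per unit area) in the annulus
    r_in < r < r_out, rejecting candidates closer than min_sep
    pixels to any previously accepted star (to avoid self-crowding)."""
    rng = np.random.default_rng() if rng is None else rng
    placed = []
    tries = 0
    while len(placed) < n_stars and tries < max_tries:
        tries += 1
        # Uniform-in-area radius: r = sqrt(U(r_in^2, r_out^2))
        r = np.sqrt(rng.uniform(r_in ** 2, r_out ** 2))
        theta = rng.uniform(0.0, 2.0 * np.pi)
        x, y = r * np.cos(theta), r * np.sin(theta)
        if all((x - px) ** 2 + (y - py) ** 2 >= min_sep ** 2
               for px, py in placed):
            placed.append((x, y))
    return np.array(placed)
```

For the star densities used here the rejection rate is negligible, so the `max_tries` guard is only a safety net.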
To determine the magnitudes in the other filters, we used \texttt{ATLAS12} and \texttt{SYNTHE} to compute synthetic spectra for 20 sampling points along the RGB of an $\alpha$-enhanced isochrone with $\mathrm{[Fe/H]}=-2$ and an age of 13 Gyr \citep{Dotter2007}. Magnitudes in the STMAG system were then determined by integrating the synthetic spectra over the filter transmission curves, and for a given $M_\mathrm{F814W}$ magnitude the magnitudes in other filters were then obtained by interpolation in the synthetic relations. The \texttt{ALLFRAME} procedure was then repeated, the only modification with respect to the original photometry being that the PSFs were determined from the artificial PSF stars. This was done to ensure that inaccuracies in the PSF determination were propagated properly through the procedure, rather than fitting the artificial stars directly with the same PSFs that were used to generate them in the first place. The resulting photometry catalogues were matched against the input lists of artificial stars. An artificial star in the input catalogue was defined as recovered if a counterpart was found within a distance of 1 pixel in the \texttt{ALLFRAME} catalogue. The artificial star experiments were carried out separately for the GO-15078 and GO-11903 data, but using the same input catalogue of artificial stars. The photometry catalogues for the artificial stars could then subsequently be merged in the same way as was done for the science data. Essentially all of the artificial stars added to the images were recovered by the \texttt{ALLFRAME} photometry procedure in both the GO-15078 and GO-11903 datasets. Even at the faintest magnitudes, about one magnitude below the limits adopted in our analysis, the recovery fraction remained at $>99\%$ at all radii. Hence, we can assume that the \texttt{ALLFRAME} photometry is unaffected by completeness effects over the magnitude range of interest here. 
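The magnitude assignment described above, interpolating in the synthetic relations at a given $M_\mathrm{F814W}$, is a one-dimensional interpolation along the RGB. A minimal sketch, where the grid arrays stand in for the 20 synthetic sampling points:

```python
import numpy as np

def interp_magnitudes(m_f814w, grid_f814w, grid_other):
    """Interpolate the magnitude in another filter for given M_F814W
    values, using synthetic (grid_f814w, grid_other) sampling points
    along the RGB. The grid is sorted first because np.interp
    requires a monotonically increasing abscissa."""
    grid_f814w = np.asarray(grid_f814w, float)
    grid_other = np.asarray(grid_other, float)
    order = np.argsort(grid_f814w)
    return np.interp(m_f814w, grid_f814w[order], grid_other[order])
```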
In Figure~\ref{fig:synt_dunu_b} we plot the input F438W magnitude vs. the difference between the input and recovered (output) F336W-F343N colour, $\delta_{o-i}$(F336W-F343N). The red curves indicate the 16\% and 84\% percentiles of $\delta_{o-i}$(F336W-F343N), computed in 0.25 mag bins. As expected, the scatter increases towards the faint end of the magnitude distribution. Taking half of the separation between the 16\% and 84\% percentiles as an estimate of the standard error, $\sigma_\mathrm{F336W-F343N}$, we find that this increases from $\sigma_\mathrm{F336W-F343N} = 0.008$ mag at the bright end to $\sigma_\mathrm{F336W-F343N} = 0.028$ mag at $M_\mathrm{F438W} = +2$, which will be the typical faint limit adopted in most of the subsequent analysis. While crowding does not appear to affect our ability to detect RGB stars even at the centre of NGC~2419, it may still have an effect on the photometric errors. Figure~\ref{fig:synt_r_dunu} shows $\delta_{o-i}$(F336W-F343N) as a function of distance from the centre of NGC~2419 for stars brighter than $M_\mathrm{F438W} = +2$. Plots for other colours look very similar. We see that there is indeed a mild tendency for the scatter to increase towards the centre. Estimating the standard error in the same way as above, we find $\sigma_\mathrm{F336W-F343N} = 0.023$~mag in the innermost bin ($\langle R \rangle = 3\arcsec$), decreasing to $\sigma_\mathrm{F336W-F343N} = 0.016$~mag at radii $R>70\arcsec$. When using the artificial star tests to estimate the error distributions for observed stars in NGC~2419, we need to correct for the fact that the radial distribution of the artificial stars does not exactly match that of the actual cluster stars. To this end, we assigned a radius-dependent weight $w_A(R)$ to each artificial star, where $w_A(R)$ is given by the ratio of the number of actual stars at radius $R$ to the number of artificial stars at the same radius. 
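The radius-dependent weights $w_A(R)$ amount to a per-bin count ratio between the real and artificial star samples. A minimal sketch, with the bin width left as an adjustable assumption:

```python
import numpy as np

def radial_weights(r_real, r_art, bin_width=100.0):
    """Weights w_A(R) for artificial stars: the ratio of the number
    of real stars to the number of artificial stars in each radial
    bin; artificial stars in bins with no counterparts get weight 0."""
    r_max = max(r_real.max(), r_art.max())
    edges = np.arange(0.0, r_max + bin_width, bin_width)
    n_real, _ = np.histogram(r_real, bins=edges)
    n_art, _ = np.histogram(r_art, bins=edges)
    ratio = np.divide(n_real, n_art, out=np.zeros(n_real.size, float),
                      where=n_art > 0)
    # Assign each artificial star the weight of its radial bin
    idx = np.clip(np.digitize(r_art, edges) - 1, 0, ratio.size - 1)
    return ratio[idx]
```

Each artificial star then enters weighted histograms and percentile estimates with its $w_A(R)$, so that the error distributions reflect the radial distribution of the actual cluster stars.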
We computed these weights in radial bins of 100 pixels (4\arcsec), and then assigned the corresponding $w_A(R)$ to all artificial stars within that radial bin. The weights ranged between $w_A(R)\approx0.15$ and $w_A(R)\approx0.35$. This relatively modest variation, combined with the weak radial dependence of the errors, means that the error distributions and corresponding estimates of the photometric errors change little when applying the weights. \begin{figure} \centering \includegraphics[width=\columnwidth]{fig6.pdf} \caption{Histograms of the D$_\mathrm{F555W-F606W}$ distributions for observed RGB stars and artificial stars. \label{fig:dvr} } \end{figure} While artificial star tests such as those described above are widely used, it should be kept in mind that it is nearly impossible to include all effects that are present in the real data in such tests \citep[see e.g.][]{Bellazzini2002,Anderson2008}. The PSF of the artificial stars will typically not be an exact match to that of the real stars, and fitting the artificial stars with the same PSF used to generate them would fail to account fully for uncertainties in the PSF modelling. While we have attempted to account for this by redetermining the PSF from artificial stars, cases where artificial stars are blended with real stars may still be problematic. Furthermore, the real PSF varies across the WFC3 detectors, whereas our artificial star tests used a single PSF for all stars. Although spatial variations in the PSF are taken into account in the \texttt{ALLFRAME} photometry, the modelling of these variations is by necessity imperfect, and this is difficult to take into account in a fully realistic way in the artificial star tests. Even if the spatial variability of the PSF were fed back into the artificial star tests, it would be restricted to the parameterisation used by \texttt{ALLFRAME}. 
To assess the fidelity of the artificial star tests, Figure~\ref{fig:dvr} shows a comparison of the observed spread in the F555W-F606W colours around the RGB with the corresponding spread for the artificial stars. The quantity D$_\mathrm{F555W-F606W}$ denotes the difference between the observed colours and a polynomial fit to the RGB for stars in the range $0 < M_\mathrm{F438W} < +2$. The F555W-F606W combination is expected to be insensitive to the abundance variations arising from the presence of multiple populations because of the small colour baseline and the lack of strong molecular features in these two bands, so that the spread in D$_\mathrm{F555W-F606W}$ should be due mainly to observational errors. As can be seen in the figure, the distributions for the observed and artificial stars are very similar indeed. To be more quantitative, we again estimated the dispersions from the 16th and 84th percentiles of the D$_\mathrm{F555W-F606W}$ colour distribution. For a Gaussian distribution, this will be similar to the dispersion computed in the usual way, but by using the percentiles we are less sensitive to extreme outliers. As indicated in the legend, the dispersion for the artificial stars ($\sigma_A = 0.0118$ mag) is very similar to that of the observations ($\sigma_\mathrm{Obs} = 0.0120$ mag). We note that including the correction for spatial coverage of the artificial stars makes virtually no difference; if this correction is omitted we find $\sigma_A = 0.0119$~mag. Very similar results were obtained from the F814W-F850LP combination, for which the same comparison yields $\sigma_A = 0.0153$~mag and $\sigma_\mathrm{Obs} = 0.0147$~mag. We conclude that the artificial star tests, in spite of the potential concerns mentioned above, provide a fairly realistic estimate of the random uncertainties in our photometric analysis. 
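The two ingredients of this fidelity check, residuals about a polynomial ridge-line fit and a percentile-based dispersion estimate, can be sketched as follows; the polynomial degree and function names are illustrative assumptions:

```python
import numpy as np

def colour_residuals(mag, colour, deg=3):
    """Residuals about a polynomial ridge-line fit of colour vs.
    magnitude, analogous to the D_F555W-F606W quantity in the text."""
    coeffs = np.polyfit(mag, colour, deg)
    return colour - np.polyval(coeffs, mag)

def percentile_dispersion(x):
    """Half the separation between the 16th and 84th percentiles;
    equals the standard deviation for a Gaussian but is insensitive
    to extreme outliers."""
    p16, p84 = np.percentile(x, [16, 84])
    return 0.5 * (p84 - p16)
```

Applying `percentile_dispersion` separately to the observed and artificial-star residuals gives the $\sigma_\mathrm{Obs}$ and $\sigma_A$ values compared above.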
\section{Results} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig7.pdf} \caption{$M_\mathrm{F438W}$ vs.\ $M_\mathrm{F336W}-M_\mathrm{F343N}$ diagram. The filled coloured symbols are stars with $\mathrm{[Mg/Fe]}$ measurements \citep{Cohen2012,Mucciarelli2012a}. \label{fig:uuni} } \end{figure} \begin{figure} \centering \includegraphics[width=42mm]{fig8a.pdf} \includegraphics[width=42mm]{fig8b.pdf} \caption{(a): F438W vs.\ F336W-F343N colour-magnitude diagram with the 10\% and 90\% fiducial lines indicated. (b): verticalised CMD. \label{fig:cmd_unu_b} } \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig9.pdf} \caption{Histogram of the $\Delta_\mathrm{F336W-F343N}$ colour distribution. The error distribution of the synthetic stars has been shifted by 0.015 mag to match the P2 peak. \label{fig:hist_dunu} } \end{figure} \subsection{Ultraviolet colours and N abundance variations} \label{sec:uvcol} In Fig.~\ref{fig:uuni} we show the $M_\mathrm{F438W}$ vs.\ $M_\mathrm{F336W}-M_\mathrm{F343N}$ diagram for RGB stars in NGC~2419. Here, and in the rest of the paper, we assume that the extinction in F343N is the same as in F336W. We use the F438W magnitude instead of F336W on the vertical axis, since the latter turns over somewhat below the tip of the RGB (i.e., the coolest and most luminous RGB stars are not the brightest in F336W). The loci of N-normal and N-rich stars are therefore not well separated at the bright end when plotting $M_\mathrm{F336W}$ vs.\ $M_\mathrm{F336W}-M_\mathrm{F343N}$. Filters centred at even longer wavelengths would potentially be even better than F438W, but here saturation becomes problematic. The stars for which spectroscopic Mg abundance measurements are available in the literature \citep{Cohen2012,Mucciarelli2012a} are highlighted with red ($\mathrm{[Mg/Fe]}>0$) and blue ($\mathrm{[Mg/Fe]}<0$) symbols. 
Also included in the figure are synthetic colours computed for the standard field-like ($\alpha$-enhanced) and CNONa2 mixtures, for He contents of $Y=0.25$ (solid lines) and $Y=0.40$ (dashed lines). We used isochrones with $\mathrm{[Fe/H]}=-2$ and $t=13$ Gyr from the Dartmouth database \citep{Dotter2007} and colours were calculated with \texttt{ATLAS12}/\texttt{SYNTHE}, as described in Sect.~\ref{sec:artstar}. We also computed a set of model atmospheres and spectra in which the Mg abundance was decreased by 1 dex, but this was found to have a negligible effect on the colours. The model colours have been shifted by a small offset (0.02 mag) in the horizontal direction. It is clear that the F336W-F343N colour index is a very effective discriminator of CNO content, specifically N abundance, while it is hardly affected by He content. The observed spread in F336W-F343N is roughly similar to the separation between the field-like and CNONa2 models, and much larger than the photometric errors. Henceforth we refer to stars appearing to the right in Fig.~\ref{fig:uuni} (i.e., those with field-like composition) as belonging to P1 and those to the left as P2. We will adhere to this nomenclature specifically in the context of CNO variations. The Mg-poor stars are clearly associated mostly with the N-rich population P2, and the Mg-normal stars mostly with the N-normal population P1, although there may not be an exact 1:1 correspondence. Figure~\ref{fig:uuni} already hints at a bimodal distribution of the F336W-F343N colours. To further examine the properties of the colour distribution, we verticalised Fig.~\ref{fig:uuni}, following a similar procedure to that described in \citet{Milone2017}. This entailed computing the offset in F336W-F343N for each star with respect to a fiducial line, and renormalising the offsets relative to those at some fixed magnitude to account for changes in the width of the colour distribution. 
While the model lines in Fig.~\ref{fig:uuni} roughly trace the extremes of the F336W-F343N colour distribution, it was found that a better result was obtained by determining the fiducial lines directly from the data. We computed the 10\% and 90\% percentiles of the F336W-F343N colours as a function of $M_\mathrm{F438W}$ in bins of 0.5 mag, and then fitted fourth-order polynomials to the percentile values vs.\ $M_\mathrm{F438W}$. In reality, the 10\% and 90\% lines were found to be nearly parallel (Fig.~\ref{fig:cmd_unu_b}, panel (a)), with the separation varying between 0.101 mag at $M_\mathrm{F438W} = +2$ and 0.104 mag at $M_\mathrm{F438W} = +0.5$, and then narrowing slightly to 0.092 mag at $M_\mathrm{F438W} = -1.5$. We then defined the offset $\Delta_\mathrm{F336W-F343N}$ with respect to the 10\% percentile line, scaled to the separation between the two lines at a reference magnitude of $M_\mathrm{F438W} = +1$. Panel (b) of Fig.~\ref{fig:cmd_unu_b} shows the verticalised CMD, in which two vertical sequences are readily visible. For magnitudes fainter than $M_\mathrm{F438W} = +2$ the separation between the two sequences becomes less evident, presumably because of the increasing photometric errors (Fig.~\ref{fig:synt_dunu_b}). In the following, we therefore restrict the analysis to stars brighter than $M_\mathrm{F438W} = +2$, as indicated by the horizontal dashed line in Fig.~\ref{fig:cmd_unu_b}. This gives a sample of 1717 RGB stars. Figure~\ref{fig:hist_dunu} shows the distribution of $\Delta_\mathrm{F336W-F343N}$ for stars selected as described above. The histogram confirms that the distribution is clearly bimodal. The orange open histogram shows the error distribution, $\delta_{o-i}$(F336W-F343N) for artificial stars in the same magnitude range as the data included in the figure. 
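The verticalisation procedure described above can be sketched as follows. The bin width, polynomial degree, and reference magnitude mirror the values quoted in the text, but the implementation details (and the function name) are illustrative rather than our exact pipeline:

```python
import numpy as np

def verticalise(mag, colour, ref_mag=1.0, deg=4, bin_width=0.5):
    """Verticalise a CMD sequence: offset each star's colour from the
    lower (10%) percentile fiducial, rescaled so that the local
    10%-90% fiducial separation matches its value at ref_mag."""
    edges = np.arange(mag.min(), mag.max() + bin_width, bin_width)
    centres, p10, p90 = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (mag >= lo) & (mag < hi)
        if sel.sum() > 20:
            q10, q90 = np.percentile(colour[sel], [10, 90])
            centres.append(0.5 * (lo + hi))
            p10.append(q10)
            p90.append(q90)
    c10 = np.polyfit(centres, p10, deg)   # blue-edge fiducial
    c90 = np.polyfit(centres, p90, deg)   # red-edge fiducial
    fid10 = np.polyval(c10, mag)
    sep = np.polyval(c90, mag) - fid10
    sep_ref = np.polyval(c90, ref_mag) - np.polyval(c10, ref_mag)
    return (colour - fid10) * sep_ref / sep
```

The returned offsets play the role of $\Delta_\mathrm{F336W-F343N}$: a tilted, magnitude-dependent sequence is mapped onto near-vertical sequences whose separation is preserved at the reference magnitude.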
The $\delta_{o-i}$(F336W-F343N) values denote the differences between the input and recovered F336W-F343N colours, which were scaled by the same magnitude-dependent factor as the $\Delta_\mathrm{F336W-F343N}$ offsets in order to facilitate comparison with the observed $\Delta_\mathrm{F336W-F343N}$ distribution. The error distribution has also been corrected for differences in the radial distributions of artificial and actual stars, using the weights $w_A(R)$ defined in Sect.~\ref{sec:artstar}. The error histogram appears only slightly narrower than the two peaks in the observed colour distribution, which suggests that much of the broadening of the peaks may be due to photometric errors. To quantify the evidence for bimodality, we applied the KMM test \citep{Ashman1994} to the $\Delta_\mathrm{F336W-F343N}$ distribution. The KMM algorithm models the parent distribution of an observed sample as a sum of multiple Gaussians, and compares the likelihood obtained from the best fitting multi-Gaussian model with that obtained from a single Gaussian. The improvement of the multi-Gaussian fit with respect to a single Gaussian is expressed as a $p$-value. Here we used the KMM algorithm to carry out a double-Gaussian fit to the $\Delta_\mathrm{F336W-F343N}$ distribution, allowing the peaks ($\mu_1$ and $\mu_2$) and dispersions ($\sigma_1$ and $\sigma_2$) of both Gaussians to vary. The red and blue curves in Fig.~\ref{fig:hist_dunu} represent the best-fitting double-Gaussian model estimated by the KMM algorithm. The KMM algorithm returned a $p$-value of $p<10^{-5}$, signifying that a double-Gaussian fit is a highly significant improvement over a single Gaussian (as was already evident from the histogram). The peak colours for P1 and P2 are $\mu_1 = 0.085$~mag and $\mu_2 = 0.015$~mag and the dispersions are $\sigma_1 = 0.024$~mag and $\sigma_2 = 0.021$~mag. The KMM algorithm assigned 939 stars (55\%) and 778 stars (45\%) to P1 and P2, respectively. 
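The KMM code itself is not generally available as a library routine, but the double-Gaussian decomposition step can be reproduced with a minimal expectation-maximisation (EM) fit of a two-component 1-D Gaussian mixture. This is a stand-in for illustration only; it does not compute the KMM likelihood-ratio $p$-value:

```python
import numpy as np

def fit_two_gaussians(x, n_iter=200):
    """Minimal 1-D EM fit of a two-component Gaussian mixture.
    Returns the component means, dispersions, weights, and the
    most-likely component assignment for each point."""
    x = np.asarray(x, float)
    # Initialise the two means from the 25% / 75% percentiles
    mu = np.percentile(x, [25.0, 75.0])
    sig = np.full(2, x.std() / 2.0)
    w = np.full(2, 0.5)
    for _ in range(n_iter):
        # E step: responsibilities of each component for each point
        pdf = (w / (sig * np.sqrt(2.0 * np.pi))
               * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2))
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M step: update weights, means, and dispersions
        n_k = resp.sum(axis=0)
        w = n_k / x.size
        mu = (resp * x[:, None]).sum(axis=0) / n_k
        sig = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
    return mu, sig, w, resp.argmax(axis=1)
```

A hard classification threshold (such as the $\Delta_\mathrm{F336W-F343N}=0.047$ mag limit used here) corresponds to the colour at which the two fitted component responsibilities are equal.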
From the artificial star tests we find a dispersion of $\sigma_\mathrm{art} = 0.022$ mag based on the scaled $\delta_{o-i}$(F336W-F343N) values. This value is essentially independent of whether or not the weights $w_A(R)$ are applied ($\sigma_\mathrm{art} = 0.0227$ without the weights, 0.0224 when including them). This is very similar to the width of the P2 peak found by the KMM algorithm, and only slightly narrower than the P1 peak. Again, this is consistent with the visual impression from Fig.~\ref{fig:hist_dunu}, which shows the $\delta_{o-i}$(F336W-F343N) histogram to be very similar to the P2 Gaussian. The KMM algorithm assigns stars with $\Delta_\mathrm{F336W-F343N}<0.047$ mag to P2 and those with $\Delta_\mathrm{F336W-F343N}>0.047$ mag to P1. In the remainder of this paper we associate stars with P1 and P2 based on this limit. \subsection{Optical colours and pseudo-chromosome maps} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig10.pdf} \caption{F814W vs.\ F438W-F814W colour-magnitude diagram, showing the RGB for stars with $\Delta_\mathrm{F336W-F343N}>0.047$ (P1) and $\Delta_\mathrm{F336W-F343N}\le0.047$ (P2). The black curves are polynomial fits to the two sequences. Grey symbols indicate stars which are saturated in F814W. \label{fig:rgb_bi_i} } \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig11.pdf} \caption{Verticalised F438W vs.\ F438W-F814W colour-magnitude diagram. The horizontal dashed lines show the magnitude limits used in the construction of the pseudo-chromosome map. Grey symbols indicate stars which are saturated in F814W. \label{fig:dbi_b} } \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig12.pdf} \caption{Pseudo-chromosome map and histograms of the $\Delta_\mathrm{F438W-F814W}$ colours for P1 and P2 stars. The thin orange histogram indicates the errors as determined from artificial star tests. 
The arrows show the effect of changing the He abundance by $\Delta \mathrm{Y}=0.15$, the overall metallicity by $\Delta \mathrm{[Z/H]} = +0.1$ dex, and other light element abundances by $\Delta$([(C, N, O, Na)/Fe]) = $-0.6, +1.44, -0.8, +0.8$ dex \citep{Sbordone2011}. \label{fig:chromo} } \end{figure} The F336W-F343N colours provide an effective and clean way to assess nitrogen abundance variations. However, it is becoming increasingly clear that more than one parameter may be required to characterise the abundance variations in GCs, even in mono-metallic clusters. A clear demonstration is provided by the diversity observed among the chromosome maps of GCs \citep{Milone2017}, which provide evidence of He abundance variations even among P1 stars in some clusters \citep{Lardo2018,Milone2018}. While the chromosome maps defined by \citet{Milone2017} make use of the F275W filter, we can use the colours available for NGC~2419 to construct pseudo-chromosome maps that have the same basic ability to separate variations in N- and He abundance. For the N abundance variations we continue to rely on the $\Delta_\mathrm{F336W-F343N}$ index. To trace He abundance variations, we can use optical colours, preferably with a long baseline. Here we opt for the F438W-F814W combination, noting that F438W-F850LP gives essentially identical results. Figure~\ref{fig:rgb_bi_i} shows the $M_\mathrm{F814W}$ vs.\ $M_\mathrm{F438W} - M_\mathrm{F814W}$ CMD for RGB stars in NGC~2419. Stars associated with P1 and P2 based on their $\Delta_\mathrm{F336W-F343N}$ colours are plotted with red and blue symbols, respectively. It is evident that the P2 stars tend to have bluer F438W-F814W colours on average than the P1 stars, which is consistent with a higher He content for the P2 stars. We note that a difference in overall metallicity could potentially produce a similar effect on the RGB colours, in which case the bluer colours of the P2 stars would imply lower metallicities. 
The two curves in Fig.~\ref{fig:rgb_bi_i} are polynomial fits to the P1 and P2 sequences. To verticalise the CMD we follow a slightly different procedure than for F336W-F343N, since the photometric errors contribute significantly to the scatter. Instead of using percentiles, we use the fits to the P1 and P2 sequences as fiducials, calculating the offsets $\Delta_\mathrm{F438W-F814W}$ with respect to the P1 fiducial line and normalising, as before, at $M_\mathrm{F438W} = +1$ ($M_\mathrm{F814W} = +1.3$). The resulting verticalised version of Fig.~\ref{fig:rgb_bi_i} is shown in Fig.~\ref{fig:dbi_b}. Because of the saturation in F814W at bright magnitudes, we impose a bright magnitude limit of $M_\mathrm{F438W} = 0$, which leaves us with 1449 RGB stars. Figure~\ref{fig:chromo} shows the pseudo-chromosome map obtained by plotting $\Delta_\mathrm{F336W-F343N}$ vs.\ $\Delta_\mathrm{F438W-F814W}$. The arrows indicate the effect of changing the He content by $\Delta \mathrm{Y} = 0.15$, the overall metallicity by $\Delta \mathrm{[Z/H]} = +0.1$~dex, and the CNO abundances according to the CNONa2 mixture. We have defined the axes such that the plot resembles the chromosome maps of \citet{Milone2017} as closely as possible, i.e., with He content increasing towards the left and N increasing (and C and O decreasing) upwards. We note that the $\Delta$Z arrow is nearly anti-parallel to the $\Delta$Y arrow. The mean $\Delta_\mathrm{F438W-F814W}$ colours of the P1 and P2 stars are $-0.007$ mag and $-0.050$ mag, respectively. Part of the colour difference may be caused by the differences in CNO abundances, since the $\Delta$CNO arrow is not exactly vertical. We will quantify this further below. The histograms in the upper panel show the colour distributions of the P1 and P2 stars together with the distribution of $\delta_{o-i}$(F438W-F814W) from the artificial star tests. 
As was done for $\delta_{o-i}$(F336W-F343N), a magnitude-dependent scaling was applied to the $\delta_{o-i}$(F438W-F814W) values for consistency with the verticalised colour distribution. For P1, the dispersion (as estimated from the 16th and 84th percentiles) is $\sigma_\mathrm{F438W-F814W} = 0.023$ mag and for P2 it is $\sigma_\mathrm{F438W-F814W} = 0.040$ mag. The standard deviation of the synthetic star colours is $\sigma=0.015$ mag. For the P2 stars, the $\Delta_\mathrm{F438W-F814W}$ distribution is thus significantly broader than the error distribution, and it appears that there may be some spread in $\Delta_\mathrm{F438W-F814W}$ also for the P1 stars, with a tail towards the blue that may indicate the presence of He-enriched stars. \subsection{Constraints on N and He abundance variations} \begin{table*} \caption{Observed and modelled colour differences between P1 and P2 stars.} \label{tab:coldist} \centering \begin{tabular}{lccccccc} \hline\hline Colour & $\langle\Delta(\mathrm{P2-P1})\rangle_\mathrm{obs}$ & $C_\mathrm{CNO}$ & $C_\mathrm{He}$ & $C_\mathrm{Z}$ & \multicolumn{3}{c}{$\Delta$(P2-P1)$_\mathrm{fit}$} \\ & & & & & CNO, He & CNO, Z & CNO, He, Z \\ & (mag) & & & & (mag) & (mag) & (mag) \\ \hline \hline $\Delta_\mathrm{F336W-F343N}$ & $-0.070$ & $-0.093$ & $-0.010$ & $+0.003$ & $-0.081$ & $-0.087$ & $-0.071$ \\ $\Delta_\mathrm{F343N-F438W}$ & $+0.113$ & $+0.150$ & $-0.032$ & $+0.019$ & $+0.114$ & $+0.114$ & $+0.112$ \\ $\Delta_\mathrm{F336W-F438W}$ & $+0.045$ & $+0.056$ & $-0.043$ & $+0.022$ & $+0.032$ & $+0.026$ & $+0.039$ \\ $\Delta_\mathrm{F336W-F555W}$ & $+0.013$ & $+0.042$ & $-0.075$ & $+0.029$ & $+0.009$ & $+0.006$ & $+0.017$ \\ $\Delta_\mathrm{F336W-F814W}$ & $-0.001$ & $+0.042$ & $-0.105$ & $+0.032$ & $-0.001$ & $+0.003$ & $+0.001$ \\ $\Delta_\mathrm{F438W-F555W}$ & $-0.026$ & $-0.017$ & $-0.039$ & $+0.009$ & $-0.027$ & $-0.025$ & $-0.027$ \\ $\Delta_\mathrm{F438W-F814W}$ & $-0.044$ & $-0.017$ & $-0.075$ & $+0.013$ & $-0.040$ & $-0.029$ & 
$-0.045$ \\ $\Delta_\mathrm{F438W-F850LP}$ & $-0.048$ & $-0.017$ & $-0.081$ & $+0.013$ & $-0.042$ & $-0.029$ & $-0.049$ \\ $\Delta_\mathrm{F555W-F814W}$ & $-0.019$ & $+0.000$ & $-0.038$ & $+0.004$ & $-0.013$ & $-0.004$ & $-0.020$ \\ \hline \end{tabular} \tablefoot{The last three columns give the fitted colour differences $\Delta$(P2-P1)$_\mathrm{fit}$ when allowing for variations in CNO$+$He, CNO$+$Z, and CNO$+$He$+$Z, respectively. } \end{table*} \begin{table} \caption{Scaling coefficients for the fits to the colour differences in Table~\ref{tab:coldist}.} \label{tab:coeff} \centering \begin{tabular}{lccc} \hline \hline & CNO, He & CNO, Z & CNO, He, Z \\ \hline $\mathscr{S}_\mathrm{CNO}$ & 0.83 & 0.90 & 0.74 \\ $\mathscr{S}_\mathrm{He}$ & 0.34 & \ldots & 0.64 \\ $\mathscr{S}_\mathrm{Z}$ & \ldots & $-1.08$ & 1.17 \\ r.m.s. (mag) & 0.0065 & 0.0127 & 0.0024 \\ \hline \end{tabular} \tablefoot{The last row gives the r.m.s. dispersion of the differences $\langle\Delta(\mathrm{P2-P1})\rangle_\mathrm{obs} - \Delta$(P2-P1)$_\mathrm{fit}$ from Table~\ref{tab:coldist}. } \end{table} \begin{figure} \centering \includegraphics[width=8cm]{fig13.pdf} \caption{Colour distributions for P1 (red) and P2 stars (blue). The yellow histograms show the error distributions based on artificial star tests. The arrows indicate the effect of changing the He content by $\Delta$Y=$+0.15$ and the CNONa abundances by $\Delta$([(C, N, O, Na)/Fe]) = $-0.6, +1.44, -0.8, +0.8$ dex. \label{fig:hist_dcol} } \end{figure} The pseudo-chromosome diagram utilises only two out of many possible colour combinations that can be formed from the available photometry. In Fig.~\ref{fig:hist_dcol} we show the histograms of various other colour combinations for the P1 and P2 stars, together with the error distributions from the artificial star tests. Again, the error distributions have been corrected for spatial coverage and the magnitude-dependent verticalisation scaling. 
The error distributions have been scaled to the same total number as the P1 stars. Each panel also includes horizontal arrows indicating the effect of a change of 0.15 in the He abundance and a change in CNONa abundances from standard $\alpha$-enhanced abundances to the CNONa2 mixture. In many of these colour combinations, the two populations appear separated to some degree. For colour combinations that use F438W as the blue filter instead of F336W or F343N, the P2 stars generally appear bluer than the P1 stars, consistent with the P2 stars being more He rich (or metal-poor) on average. In these cases, the He and CNO arrows are parallel (due to the effect of C variations on the G-band contained within the F438W filter), so that He and CNO abundance variations reinforce each other. For the combinations that do include F336W, the effect of modified CNO abundances tends to counterbalance the increase in He abundance for the P2 stars. For F336W-F814W these two effects appear to cancel out almost exactly on average, causing the colour distributions of P1 and P2 to be similar, whereas the smaller colour baseline for F336W-F555W implies that the CNONa variations win, making the P2 stars slightly redder on average than the P1 stars. This is reminiscent of the analysis by F2015, who found no difference between the Str{\"om}gren $u-y$ colours of Mg-poor and Mg-normal RGB stars in NGC~2419, and attributed this to the opposite effects of He and CNO abundance variations on this particular colour. For F814W-F850LP (bottom panel), the two populations have essentially identical mean colours, which is consistent with the expectation that this colour combination should be insensitive to He- and CNO variations. The dispersions of the P1, P2, and synthetic colour distributions are indicated in the legends of Fig.~\ref{fig:hist_dcol}. In all cases (except F814W-F850LP), the colour spread for the P2 stars clearly exceeds that expected from the artificial star tests. 
In general, the colour spread may be due to a combination of CNO and He abundance variations. The F555W-F814W colour is expected to be a relatively clean tracer of He abundance variations (as indicated by the short CNONa2 arrow) and the significant spread in F555W-F814W for the P2 stars therefore corroborates the conclusion from the chromosome map that a He spread is likely present within P2. For P1, the case for a significant internal He spread is less strong, since the observed F555W-F814W distribution is only slightly broader than the error distribution, although there is again a hint of an asymmetric tail towards the blue. For F814W-F850LP we note that the observed dispersions of P1 and P2 are very similar to that of the artificial stars, which provides another verification that these tests give a realistic estimate of the photometric errors (cf.\ Sec.~\ref{sec:artstar}). If we assume that the effects of He and CNO abundance variations, and potentially also of overall metallicity variations, can be combined linearly for each colour, then we can write \begin{equation} \langle\Delta(\mathrm{P2-P1})_i \rangle = C_\mathrm{CNO,i} \mathscr{S}_\mathrm{CNO} + C_\mathrm{He,i} \mathscr{S}_\mathrm{He} + C_\mathrm{Z,i} \mathscr{S}_\mathrm{Z} \label{eq:dcol} \end{equation} where the coefficients $C_\mathrm{CNO,i}$, $C_\mathrm{He,i}$, and $C_\mathrm{Z,i}$ specify how the $i$th colour responds to variations in the CNO and He abundances and metallicity. Setting each of the scaling factors $\mathscr{S}_\mathrm{CNO}$, $\mathscr{S}_\mathrm{He}$, and $\mathscr{S}_\mathrm{Z}$ to unity corresponds to reference abundance changes of $\Delta[\mathrm{(C, N, O)/Fe}] = (-0.6, +1.44, -0.8)$~dex, $\Delta Y = 0.15$ (as indicated by the arrows in Fig.~\ref{fig:hist_dcol}) and $\Delta Z = 0.1$ dex. 
We can then solve for the scaling factors that need to be applied to the reference abundance changes in order to best reproduce all of the observed colour differences $\langle\Delta(\mathrm{P2-P1})_i \rangle$. In Table~\ref{tab:coldist} we list the mean observed colour differences between the P1 and P2 stars for each colour combination, with the exception of F814W-F850LP which contains little information and was included in Fig.~\ref{fig:hist_dcol} only as a consistency check. The coefficients $C_\mathrm{CNO}$, $C_\mathrm{He}$, and $C_\mathrm{Z}$, were calculated from our synthetic photometry at a reference magnitude of $M_\mathrm{F438W} = +1$. The last three columns give the colour differences obtained by solving for variations in CNO and He (with Z fixed), in CNO and Z (with He fixed), and in all three parameters. In Table~\ref{tab:coeff} we give the corresponding best-fitting scaling factors $\mathscr{S}_\mathrm{CNO}$, $\mathscr{S}_\mathrm{He}$, and $\mathscr{S}_\mathrm{Z}$, which were found from a least-squares fit with the \texttt{lstsq} function in the \texttt{scipy.linalg} package in \texttt{Python}. It is clear that, in all cases, a significant difference in mean CNO content between P1 and P2 is required to explain the colour differences, although the scaling factor $\mathscr{S}_\mathrm{CNO}$ is slightly less than unity for all fits. Hence, the implied average CNO difference between P1 and P2 is slightly smaller than assumed in the CNONa2 mixture. Table~\ref{tab:coldist} shows that a combination of variations in CNO and He content reproduces most of the P2-P1 colour differences to within about 0.01 mag, with an r.m.s.\ difference between the observed and modelled colour differences of 0.007 mag (Table~\ref{tab:coeff}). Assuming that $\Delta$Y scales linearly with $\mathscr{S}_\mathrm{He}$, the implied mean difference in He content is $\Delta \mathrm{Y} \simeq 0.05$. 
Keeping He fixed and allowing Z to vary instead produces a somewhat worse fit, with an r.m.s.\ difference of 0.013 mag between the observed and best-fit colours. In this case, the metallicity scaling factor is negative, $\mathscr{S}_\mathrm{Z}=-1.1$, implying that P2 would be on average 0.11 dex more metal-poor than P1. Allowing all three scaling factors to vary produces the smallest residuals (r.m.s.\ = 0.002 mag), with a larger variation in He abundance ($\Delta \mathrm{Y} \simeq 0.10$) and P2 now being more metal-rich than P1 by about 0.12 dex. From the above, we conclude that small variations in mean metallicity may contribute to the observed colour differences, but these are largely degenerate with variations in He content and thus essentially unconstrained by our data. We can state that any differences in mean metallicity between P1 and P2 are likely less than about 0.1 dex, which is consistent with previous studies. The variations in CNO are slightly smaller than those corresponding to the CNONa2 mixture, so we may estimate that the mean difference in N abundance between P2 and P1 is $\Delta \mathrm{[N/Fe]} \approx 0.9 \times 1.44~\mathrm{dex} \approx 1.3~\mathrm{dex}$. The mean difference in He content is $\Delta \mathrm{Y} \approx 0.05$, or possibly slightly larger if P2 is also more metal-rich. The model-dependent nature of these estimates should, however, be emphasised \citep[e.g.][]{Dotter2015}. 
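The least-squares solution of Eq.~\eqref{eq:dcol} can be sketched as follows. This is a minimal illustration using \texttt{numpy}'s \texttt{lstsq} (the analysis above used the equivalent \texttt{scipy.linalg.lstsq}); the coefficient values and the "observed" colour differences are hypothetical placeholders, not the actual entries of Table~\ref{tab:coldist}.

```python
import numpy as np

# Hypothetical response coefficients (C_CNO, C_He, C_Z) for four colour
# combinations; illustrative placeholders, NOT the values of Table tab:coldist.
C = np.array([
    [-0.060,  0.004,  0.010],   # e.g. F336W-F438W
    [-0.035,  0.008,  0.015],   # e.g. F336W-F555W
    [ 0.002, -0.020,  0.020],   # e.g. F555W-F814W
    [ 0.010, -0.015,  0.018],   # e.g. F438W-F850LP
])

# Synthetic "observed" mean P2-P1 colour differences, generated here from
# assumed true scalings (S_CNO, S_He, S_Z) = (0.9, 0.35, 0.0).
s_true = np.array([0.9, 0.35, 0.0])
d_obs = C @ s_true

# Least-squares solution for the scaling factors.
s_fit, residuals, rank, sv = np.linalg.lstsq(C, d_obs, rcond=None)

# r.m.s. difference between observed and modelled colour differences.
rms = np.sqrt(np.mean((d_obs - C @ s_fit) ** 2))
```

Fixing one of the scaling factors (e.g.\ $\mathscr{S}_\mathrm{Z}=0$) simply corresponds to dropping the corresponding column of the coefficient matrix before solving.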
\subsection{Radial distributions} \label{sec:radist} \begin{table} \caption{Half-number radii for blue and red sub-populations, identified in various colours.} \label{tab:rdist} \centering \begin{tabular}{l ccc} \hline\hline Colour & $R_h$(blue) & $R_h$(red) & $p_\mathrm{KS}$ \\ \hline F336W-F343N & $31\farcs3\pm1\farcs1$ & $34\farcs6\pm0\farcs9$ & 0.046 \\ F343N-F438W & $35\farcs4\pm0\farcs8$ & $31\farcs4\pm1\farcs0$ & 0.020 \\ F336W-F438W & $33\farcs7\pm1\farcs1$ & $33\farcs5\pm1\farcs2$ & 0.708 \\ F336W-F555W & $33\farcs7\pm1\farcs2$ & $33\farcs5\pm1\farcs0$ & 0.247 \\ F336W-F814W & $32\farcs6\pm1\farcs2$ & $34\farcs0\pm1\farcs0$ & 0.051 \\ F438W-F555W & $31\farcs8\pm1\farcs1$ & $34\farcs8\pm0\farcs9$ & 0.100 \\ F438W-F814W & $31\farcs8\pm1\farcs2$ & $34\farcs8\pm0\farcs9$ & 0.050 \\ F438W-F850LP & $31\farcs8\pm1\farcs1$ & $35\farcs0\pm0\farcs9$ & 0.038 \\ F555W-F814W & $31\farcs8\pm1\farcs1$ & $35\farcs2\pm0\farcs9$ & 0.014 \\ F814W-F850LP & $33\farcs2\pm1\farcs2$ & $33\farcs7\pm1\farcs0$ & 0.350 \\ \hline \end{tabular} \tablefoot{$p_\mathrm{KS}$ gives the $p$-value from a Kolmogorov-Smirnov test when comparing the radial distributions of the blue and red sub-populations.} \end{table} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig14.pdf} \caption{Cumulative radial distributions of P1 and P2 stars, selected based on their $\Delta_\mathrm{F336W-F343N}$ colours. P1E and P2E represent the ``extreme'' populations, which avoid the colour range between the peaks of P1 and P2. \label{fig:rdist_uun} } \end{figure} As noted in the introduction, the long relaxation time of NGC~2419 means that the spatial distributions of sub-populations within the cluster are expected to be less affected by dynamical evolution than in other clusters. In this section we discuss the constraints on the radial distributions of subpopulations within NGC~2419. In Fig.~\ref{fig:rdist_uun} we show the cumulative radial distributions of P1 and P2 stars. 
The P2 stars are slightly more concentrated, but the difference between the P1 and P2 cumulative distributions does not appear dramatic and a Kolmogorov-Smirnov test returns $p_\mathrm{KS}=0.046$. From the cumulative distributions, the half-number radii for P1 and P2 are $34\farcs6\pm0\farcs9$ and $31\farcs3\pm1\farcs1$, respectively, with the $\sim10$\% difference being significant at about 2.3$\sigma$ (the errors were estimated via bootstrapping). To get cleaner samples of P1 and P2 stars, we omitted stars with colours in the overlapping region between the peaks of P1 and P2, $\mu_2 < \Delta_\mathrm{F336W-F343N} < \mu_1$. Thus defining the extreme populations as P1E and P2E for $\Delta_\mathrm{F336W-F343N} > \mu_1$ and $\Delta_\mathrm{F336W-F343N} < \mu_2$, respectively, we get half-number radii of $34\farcs6\pm1\farcs1$ (P1E) and $32\farcs2\pm1\farcs7$ (P2E). The corresponding cumulative distributions, which are shown with dashed lines in Fig.~\ref{fig:rdist_uun}, are very similar to those of the full P1 and P2 samples, but due to the smaller numbers of stars the Kolmogorov-Smirnov test now gives $p_\mathrm{KS}=0.30$, indicating no significant difference. These half-number radii are all smaller than the half-light radii quoted in the literature, but this is as expected because the HST data do not include the outer parts of the cluster. In general, analyses of radial trends can be affected by variations in differential reddening as well as instrumental effects such as variations in the PSF and flat-field errors across the field-of-view. Because of the very similar central wavelengths of the F336W and F343N filters and the small foreground extinction towards NGC~2419, it appears unlikely that reddening variations could produce significant spurious trends in the $\Delta_\mathrm{F336W-F343N}$ index. 
To quantify the radial distributions in other colour combinations, we divided the RGB stars in NGC~2419 into blue and red samples in each colour, assigning (for simplicity) an equal number of stars to both groups. Table~\ref{tab:rdist} lists the half-number radii for the blue and red samples for each of the colour combinations shown in Fig.~\ref{fig:hist_dcol}. In most of these colour combinations, differences in the radial distributions are only marginally significant. The most significant differences are found for F555W-F814W ($p_\mathrm{KS}=0.014$) and F343N-F438W ($p_\mathrm{KS}=0.020$). For F343N-F438W, the top panel in Fig.~\ref{fig:hist_dcol} shows that this colour combination provides a fairly clear separation of the P1 and P2 stars, with the P2 stars having redder colours and being more concentrated. In the case of F555W-F814W, it is the blue stars that are more centrally concentrated. Blue F555W-F814W colours indicate enhanced He, which is characteristic of the P2 stars, so this is again consistent with the P2 stars being more centrally concentrated. These results are difficult to attribute to differential reddening variations, for which we would expect the difference $R_h$(blue) - $R_h$(red) to always have the same sign. Instrumental effects are harder to rule out definitively. One check is provided by the F814W-F850LP colour, which is expected to be insensitive to multiple populations. For this colour, Table~\ref{tab:rdist} shows no significant difference between the radial distributions of blue and red samples. This colour distribution is also the narrowest, and thus more liable to be affected by instrumental effects, so the fact that no differences between red and blue subsamples are seen in this colour combination supports the notion that the differences seen in other colours are real. 
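The half-number radii and Kolmogorov-Smirnov comparisons used above can be computed along the following lines. This is a schematic sketch with mock radii rather than the actual star lists; the KS statistic is implemented directly here instead of calling \texttt{scipy.stats.ks\_2samp}, and the bootstrap error estimate mirrors the procedure described in Sec.~\ref{sec:radist}.

```python
import numpy as np

rng = np.random.default_rng(0)

def half_number_radius(r, n_boot=1000):
    """Median projected radius with a bootstrap error estimate."""
    r = np.asarray(r)
    boot = [np.median(rng.choice(r, size=r.size, replace=True))
            for _ in range(n_boot)]
    return np.median(r), np.std(boot)

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the two empirical cumulative distribution functions."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return np.max(np.abs(cdf_a - cdf_b))

# Mock projected radii (arcsec) within R = 70 arcsec: "P1" follows a
# uniform surface density, "P2" is drawn more centrally concentrated.
r_p1 = 70 * np.sqrt(rng.uniform(size=500))
r_p2 = 70 * rng.uniform(size=500) ** 0.75

rh1, err1 = half_number_radius(r_p1)
rh2, err2 = half_number_radius(r_p2)
d_ks = ks_statistic(r_p1, r_p2)
```

A $p$-value then follows from the asymptotic Kolmogorov distribution of \texttt{d\_ks} for the given sample sizes.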
\subsection{Radial distributions: Helium or CNO as the main driver?} \label{sec:hecno} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig15.pdf} \caption{Chromosome diagram divided according to He and CNO content. The half-number radii are indicated in each quadrant. The high quality measurements ($\sigma_\mathrm{F438W} < 0.02$~mag, $\chi_\nu^2 < 2$) are shown with open circles. \label{fig:chromo_xy} } \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig16.pdf} \caption{Colour distributions in $\Delta_\mathrm{F555W-F850LP}$ of the four regions identified in Fig.~\ref{fig:chromo_xy}. Histograms for the full samples are drawn with solid lines, whereas histograms for the stars remaining after error and $\chi^2$ cuts are shown with dotted lines. \label{fig:hist_dvl_abcd} } \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig17.pdf} \caption{Cumulative radial distributions of stars selected on their He and CNO content. Colour coding is the same as in Fig.~\ref{fig:chromo_xy}. \label{fig:rdist_hecno} } \end{figure} While the pseudo-chromosome diagram (Fig.~\ref{fig:chromo}) suggests some correlation between He and CNO abundance variations, a significant He abundance spread is implied for at least the P2 stars, and possibly there is also some spread for the P1 stars. In the previous sections, we have used the CNO abundance variations as our primary means to split the cluster stars into sub-populations, with the tacit assumption that the intra-population He spreads can be treated as a perturbation on top of the CNO variations. This view seems to be supported by the clear bimodality in the CNO-sensitive colours, whereas the separation into distinct He-rich and He-normal groups is much less clear. Nevertheless, it appears worthwhile to examine in more detail how the radial distributions are affected by CNO as well as He abundance variations. 
To this end, Fig.~\ref{fig:chromo_xy} again shows the pseudo-chromosome diagram, now divided into four regions. The stars with the highest quality photometry are shown with open circles (photometric errors in F438W less than $\sigma_\mathrm{F438W} = 0.02$~mag and \texttt{ALLFRAME} $\chi_\nu^2 < 2$) and other stars are shown with solid dots. The black lines that separate the four regions have the same slopes as the arrows in Fig.~\ref{fig:chromo} and thus separate the stars by their He- and CNO content. Stars in the lower right-hand corner (region A) have the most field-like composition (normal He, normal CNO), whereas stars in the upper left-hand corner (region C) have the most GC-like composition (enhanced He and N). In each region we also indicate the half-number radius of the corresponding radial distribution and the number of stars. Fig.~\ref{fig:hist_dvl_abcd} shows the $\Delta_\mathrm{F555W-F850LP}$ distributions for the four regions defined in Fig.~\ref{fig:chromo_xy}. Histograms drawn with solid lines represent the full sample, and those drawn with dotted lines are for the high quality sub-sample. Like $\Delta_\mathrm{F555W-F814W}$, the $\Delta_\mathrm{F555W-F850LP}$ combination is mainly sensitive to He abundance, and is used here because it is entirely independent of the colours used in Fig.~\ref{fig:chromo_xy}. The legend in Fig.~\ref{fig:hist_dvl_abcd} indicates the mean colour offsets relative to the A stars. These confirm that the C stars are the most He enriched population, and that the D stars also have a significant He enhancement relative to the A stars. Comparing the histograms for the full set of D stars with the high-quality subsample, it can be seen that the quality cut mainly removes stars with colours more similar to those of the A stars, which may have scattered into the D region. This leaves a cleaner sample of pure D stars, so that the offset for the high-quality D stars is in fact slightly greater than for the full sample. 
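The assignment of stars to the four regions can be sketched as below: each star is classified by the sign of the projection of its position in the $(\Delta_x, \Delta_y)$ plane onto the He and CNO arrow directions. The direction vectors and the choice of the origin as the crossing point of the dividing lines are hypothetical stand-ins for the actual lines drawn in Fig.~\ref{fig:chromo_xy}.

```python
import numpy as np

# Hypothetical arrow directions in the (Delta_x, Delta_y) chromosome plane;
# the actual dividing lines follow the He and CNONa2 arrows of the figure.
he_dir  = np.array([-1.0, -0.4])   # direction of increasing He
cno_dir = np.array([-0.3,  1.0])   # direction of increasing CNO (N)

def classify(dx, dy):
    """Assign a star to region A, B, C or D of the chromosome diagram.

    A: normal He, normal CNO    B: normal He, enhanced CNO
    C: enhanced He and CNO      D: enhanced He, normal CNO
    """
    p = np.array([dx, dy])
    he_rich  = p @ he_dir  > 0   # projection onto the He arrow
    cno_rich = p @ cno_dir > 0   # projection onto the CNO arrow
    if he_rich and cno_rich:
        return "C"
    if he_rich:
        return "D"
    if cno_rich:
        return "B"
    return "A"
```

With this in hand, the per-region half-number radii follow by applying the median-radius estimate to the stars in each region.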
The offset of $-0.041$ mag between A and C corresponds to $\Delta\mathrm{Y} = 0.13$, which would suggest a He fraction as high as $Y\simeq0.38$ for the C stars. The C and D stars together correspond to about 29\% of the total number of RGB stars in the pseudo-chromosome map, which is comparable to the estimated fraction of RGB stars that are converted to extreme HB stars \citep{Sandquist2008}. It is thus tempting to associate the extreme HB stars with the He-enriched RGB stars in NGC~2419. The cumulative radial distributions of the four populations are shown in Fig.~\ref{fig:rdist_hecno}. The least concentrated stars are those belonging to the A group, while the D stars are formally the most concentrated, but none of the differences are highly significant. Grouping the stars by He content, we find half-number radii of $34\farcs2$ for the (He-normal) AB stars and $30\farcs5$ for the (He-rich) CD stars, respectively, with a K-S $p$-value of $p_\mathrm{KS} = 0.03$ for the comparison of the cumulative distributions. Grouping instead by CNO content, the corresponding half-number radii are $34\farcs0$ (AD) and $32\farcs5$ (BC) with $p_\mathrm{KS}=0.25$. In both cases, the stars with the most field-like composition have the most extended distribution, and there is a suggestion that the correlation with He abundance may be somewhat stronger than with CNO content, in agreement with the results in Sec.~\ref{sec:radist}. \subsection{Kinematics} \begin{figure} \centering \includegraphics[width=8cm]{fig18a.pdf} \includegraphics[width=8cm]{fig18b.pdf} \caption{Kinematics. Top: AD (CNO-normal) stars. Bottom: BC (CNO-enriched) stars.
\label{fig:kinematics} } \end{figure} To test for kinematic differences between the different populations, we cross-correlated the positions of the stars that we determined from the HST images against the positions of stars with measured radial velocities from \cite{Baumgardt2018}, accepting stars with a positional difference of less than $0\farcs5$ as a match. This resulted in 64 matches. We then subtracted the average velocity of NGC 2419, $v_r=-20.6 \pm 0.2$ km~s$^{-1}$, found by \cite{Baumgardt2018}, from the velocities of all stars and applied a maximum-likelihood test according to \begin{equation} \ln \Lambda = \sum_i \ln\left[\frac{1}{\sqrt{2\pi( \sigma^2 + \epsilon^2_i)}}\, e^{-\frac{1}{2}(v_i-v_{rot} \sin(\theta_i-\theta_0))^2/(\sigma^2 + \epsilon^2_i)}\right] \end{equation} to determine the best-fitting values of the velocity dispersion $\sigma$, the amount of rotation $v_{rot}$ and the position angle $\theta_0$ of rotation for the different components. In the above formula, $\theta_i$ is the direction from the centre of NGC 2419 to a star (measured anti-clockwise from north), and $v_i$ and $\epsilon_i$ are the velocity and the velocity error of each star after subtraction of the mean velocity of NGC 2419. The stars with radial velocity measurements were assigned to P1 (standard CNO) and P2 (enriched CNO) using the $\Delta_\mathrm{F336W-F343N}$ index, as discussed in Sect.~\ref{sec:uvcol}. We obtained velocity dispersions of $\sigma = 4.76^{+0.66}_{-0.56}$ km~s$^{-1}$ for the 38 stars assigned to P1, and $\sigma = 5.78^{+0.89}_{-0.72}$ km~s$^{-1}$ for the remaining 26 P2 stars. Because these stars are mostly located within a few magnitudes of the tip of the RGB, assigning them to the He-normal and He-rich populations is more difficult. We made a rough assignment based on the F438W-F850LP colour. The corresponding dispersions for the standard and He-rich stars are $\sigma = 5.06^{+0.75}_{-0.62}$ km~s$^{-1}$ and $\sigma = 5.67^{+0.76}_{-0.62}$ km~s$^{-1}$, respectively.
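A schematic implementation of this maximum-likelihood fit is given below. It uses a brute-force grid search over $\sigma$, $v_{rot}$ and $\theta_0$, which is a stand-in for whatever optimiser was actually used, and demonstrates the fit on a mock rotating sample rather than the real velocity data.

```python
import numpy as np

def neg_loglike(sigma, v_rot, theta0, v, eps, theta):
    """Negative Gaussian log-likelihood for mean-subtracted velocities v
    with errors eps and position angles theta, given a dispersion sigma
    and a rotation term v_rot * sin(theta - theta0)."""
    var = sigma**2 + eps**2
    resid = v - v_rot * np.sin(theta - theta0)
    return 0.5 * np.sum(np.log(2 * np.pi * var) + resid**2 / var)

def fit_rotation(v, eps, theta):
    """Grid-search maximum-likelihood estimates of (sigma, v_rot, theta0)."""
    best = None
    for sigma in np.linspace(2.0, 8.0, 13):
        for v_rot in np.linspace(0.0, 6.0, 13):
            for theta0 in np.linspace(0.0, 2 * np.pi, 19):
                nll = neg_loglike(sigma, v_rot, theta0, v, eps, theta)
                if best is None or nll < best[0]:
                    best = (nll, sigma, v_rot, theta0)
    return best[1:]

# Mock check: a cluster rotating at 4 km/s, seen with ~2 km/s scatter.
rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 100)
eps = np.full(100, 1.0)
v = 4.0 * np.sin(theta - 1.0) + rng.normal(0, 2.0, 100)
sigma_fit, vrot_fit, theta0_fit = fit_rotation(v, eps, theta)
```

Confidence intervals on the parameters then follow from the shape of the likelihood surface around the best-fitting grid point.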
Hence there is some indication that the stars with field-like abundances have lower velocity dispersions, in agreement with their more extended spatial distribution. However, the differences between the populations are still within the error bars. The only group of stars for which we find significant rotation are the P1 stars, for which we find a rotation velocity of $v_{rot}=2.4 \pm 1.1$ km~s$^{-1}$ with a position angle of $\theta_0=337 \pm 25$ degrees. The CNO-enriched (P2) stars, instead, do not show any significant rotation (Fig.~\ref{fig:kinematics}). We formally find a rotation velocity for the P2 stars of $1.02\pm1.47$ km~s$^{-1}$, which is compatible with the same rotation as the P1 stars, as well as no rotation. These results could indicate that the P1 stars in NGC~2419 formed from an initially slowly rotating gas cloud which imprinted its rotation signature on these stars, while the P2 stars may have formed from kinematically more mixed gas. To test the significance of the detection of rotation for the P1 stars, we randomly drew 38 velocities from a Gaussian distribution with a dispersion of 5 km~s$^{-1}$ and errors randomly distributed between 0.5 and 2.5 km~s$^{-1}$. We assigned random position angles to these velocities (i.e., no underlying rotation). We then measured how often a rotation signal with more than $2.2\sigma$ significance was found. This happened in 43 out of 500 cases, corresponding to a false-positive rate of about 8\%. \section{Discussion} Despite the unusual chemical abundance patterns found by spectroscopic studies, such as the presence of extremely Mg-depleted and K-enhanced stars \citep{Cohen2012,Mucciarelli2012a}, the photometric evidence indicates a relatively normal range of CNO abundance variations within NGC~2419. The N-sensitive $\Delta_\mathrm{F336W-F343N}$ index reveals a clearly bimodal distribution of N abundances, with a difference in average $\mathrm{[N/Fe]}$ value of $\sim1.3$ dex between N-normal P1 stars and N-rich P2 stars.
Our analysis is consistent with that by \citet{Frank2015}, who found a range in $\mathrm{[N/Fe]}$ of less than 1.3~dex from Str{\"o}mgren photometry for the outer parts of the cluster. While it should be kept in mind that the photometric estimates of the $\mathrm{[N/Fe]}$ variations are uncertain, and the difference in average N abundance between P1 and P2 may underestimate the full range somewhat, the estimated N abundance variations in NGC~2419 are not particularly extreme compared to those found in other GCs, where a range up to $\sim2$~dex in $\mathrm{[N/Fe]}$ has been found \citep[e.g.][]{Yong2008}. Our photometry is consistent with previous evidence that a significant fraction of the stars in NGC~2419 have enhanced He abundances. Assuming that metallicity variations are negligible, we find a mean difference of $\Delta$Y$\simeq 0.05$ between P1 and P2, implying $\langle$Y$\rangle \simeq 0.30$ for the P2 stars if the P1 stars have a normal He fraction of Y$\simeq0.25$. However, the total range is likely greater, since both populations (especially P2) show evidence of an intrinsic He spread. NGC~2419 is yet another example of a cluster where significant He spreads are found within the sub-populations identified via CNO-sensitive colours \citep{Nardiello2018,Milone2017,Lardo2018,Milone2018}. The difference between the mean He abundances of P1 and P2 is relatively large compared to those found by \citet{Milone2018}, who found mean differences between 0 and 0.05 in $\Delta$Y for Galactic GCs, but it is consistent with their result that the larger differences tend to be found in more massive clusters. Again, it is worth emphasising that the absolute values of the He abundance spreads derived from the photometry should not be taken too literally. A double-Gaussian fit to the $\Delta_\mathrm{F336W-F343N}$ distribution assigns about 45\% of the RGB stars to the N-rich P2. 
For a massive GC like NGC~2419, this is a relatively low fraction of enriched stars, with enriched fractions of $\sim70$\% or more being common for clusters with masses approaching $10^6 M_\odot$ \citep{Milone2017,Bastian2018}. The enriched fraction estimated from a simple double-Gaussian fit would increase if we consider that some of the P1 stars may not be truly field-like, but the stars in region A of Fig.~\ref{fig:chromo_xy} still account for nearly 1/2 of all stars in the diagram. These estimates refer to the inner regions of NGC~2419 covered by our HST photometry, and the global fraction of enriched stars could be even lower if there are radial gradients. According to \citet{Beccari2013}, the fraction of stars with blue $u\!-\!I$ and $u\!-\!V$ colours decreases significantly outwards for radii between 100\arcsec\ and 400\arcsec. If these blue colours indicate enhanced He abundance, as assumed by B2013, then this would indeed imply a decreasing enriched fraction outwards. However, the strong radial trend seen in the $u\!-\!V$ and $u\!-\!I$ colours is somewhat puzzling, given the very small separation between P1 and P2 in the (nearly) equivalent F336W-F555W and F336W-F814W colours, and the lack of strong radial trends in these colours in the inner regions of the cluster (Fig.~\ref{fig:hist_dcol} and Table~\ref{tab:rdist}). \begin{figure} \centering \includegraphics[width=\columnwidth]{fig19.pdf} \caption{Cumulative radial distributions of RGB stars in the outer regions of NGC~2419, using Str{\"o}mgren photometry from \citet{Frank2015} \label{fig:frank} } \end{figure} \citet{Frank2015} found an enriched fraction of $53\pm5$\% in the outer parts of NGC~2419 from a double-Gaussian fit to their measurements of the N-sensitive Str{\"o}mgren $\delta_4$ index. This is formally slightly higher than our estimate for the inner regions. 
F2015 did not investigate radial trends in detail, except by noting that their inferred enriched fraction is higher than that found for the inner regions of the cluster by other studies. To see whether their photometry can constrain radial trends, we downloaded their photometric catalogue and defined the $\Delta_{\delta_4}$ and equivalent $\Delta_\mathrm{b-y}$ and $\Delta_\mathrm{u-y}$ parameters with respect to a ridge line for the `clean RGB sample' in the same way that F2015 did. The comparisons of the resulting cumulative radial distributions for sub-samples divided according to $\Delta_{\delta_4}$, $\Delta_\mathrm{u-y}$, and $\Delta_\mathrm{b-y}$ are shown in Fig.~\ref{fig:frank}. The F2015 sample of RGB stars is smaller than our HST sample for the central regions (177 stars), so the statistical significance of any results is inevitably lower. Nevertheless, there is no significant difference between the radial distributions of N-normal and N-rich subsamples as defined by $\Delta\delta_4$ ($p_\mathrm{KS}=0.33$). Likewise, when dividing the sample according to $\Delta_\mathrm{u-y}$, the two radial distributions are essentially identical ($p_\mathrm{KS}=0.94$). This is seemingly at odds with the strong trends found by B2013, since the Str{\"o}mgren $u\!-\!y$ index is expected to behave very similarly to the $u\!-\!V$ colour. Indeed, F2015 found that the $u\!-\!y$ colours showed no correlation with Mg abundance, which they attributed to the opposite effects of CNO and He abundance variations on this colour, a similar conclusion to that reached from our analysis. When dividing according to $\Delta_\mathrm{b-y}$, there is a somewhat significant difference ($p_\mathrm{KS}=0.038$), with the blue stars being more centrally concentrated. 
While the Str{\"o}mgren $b-y$ colour is insensitive to CNO abundance variations \citep{Carretta2011}, it does depend on He abundance (through $T_\mathrm{eff}$), and the mild tendency for the stars with blue $\Delta_\mathrm{b-y}$ colours (i.e.\ enhanced He) to be more centrally concentrated would be consistent with our results for the central regions. The spectroscopically identified Mg-poor stars appear to be associated primarily with P2, and the Mg-normal stars with P1. So far, we have not discussed Na, which is perhaps the best established spectroscopic tracer of multiple populations. The Na-O anticorrelation is present in nearly all GCs where it has been looked for \citep{Carretta2009}, and it is thus natural to inquire about the behaviour of Na in NGC~2419. Unfortunately, the picture remains unclear in this regard. While \citet{Cohen2012} found a significant spread in $\mathrm{[Na/Fe]}$ within the cluster ($\sim 1$~dex), they found no correlation between $\mathrm{[Na/Fe]}$ and $\mathrm{[Mg/Fe]}$, with a difference of only 0.04~dex between the mean Na abundances of (five) Mg-deficient and (eight) Mg-normal stars. Five of the stars measured by \citet{Cohen2012} fall within our HST data; two of these happen to have low Na abundances (S810 and S1166) and also have normal Mg abundances. These two stars have F336W-F343N colours consistent with normal N abundances. The other three (S1004, S1065, S1131) are Mg-deficient and Na-rich, and their F336W-F343N colours are relatively blue, indicating enhanced N abundances (Fig.~\ref{fig:uuni}). In this sense, the behaviour of these five stars is as expected, but the broader implications for the behaviour of Na remain unclear, given that many of the Mg-normal stars measured by \citet{Cohen2012} are, in fact, Na-rich. Our data do not provide strong constraints on metallicity variations except that they are small, with a mean metallicity difference of $<0.1$ dex between P2 and P1. 
This is in agreement with the findings by most previous studies \citep{Mucciarelli2012a,Frank2015}, although there are also claims of a larger metallicity spread \citep{Lee2013}. \section{Summary and conclusions} We have used new HST/WFC3 imaging in the F343N and F336W filters, combined with archival optical HST/WFC3 data, to study the multiple populations in the central regions of the remote globular cluster NGC~2419. The data are spatially complete within a radius of $R=70\arcsec$ (28 pc), or about 1.5 projected half-light radii. The combination of UV and optical filters allowed us to constrain variations in He and N abundances for red giants in the inner regions of the cluster. We combined the photometry with radial velocity measurements from the literature \citep{Baumgardt2018} to examine the kinematics of the different populations. Our main findings are as follows: \begin{itemize} \item The F336W-F343N colour distribution is clearly bimodal, as confirmed by a KMM test ($p<10^{-5}$). A double-Gaussian fit assigns 55\% of the stars to a population with F336W-F343N colours indicative of field-like nitrogen abundances (P1), and the rest to a population with nitrogen-enhanced composition (P2). \item From a comparison of the mean optical colours of P1 and P2 stars with model calculations, we estimate a mean difference in the He content of $\Delta \mathrm{Y} \simeq 0.05$ between the two populations. Small metallicity differences ($<0.1$~dex) could also contribute to the colour differences. \item For the P2 stars, the observed spread in optical colours such as F555W-F814W and F438W-F814W is greater than the observational uncertainties. This most likely indicates a He spread at least within P2, with some stars possibly having He content as high as $Y\simeq0.38$. Analysis of the pseudo-chromosome map suggests that a small fraction (about 16\%) of the P1 stars may also be significantly He-enriched. 
\item The P2 stars are somewhat more centrally concentrated within the cluster than the P1 stars, with some hint that the difference is driven primarily by differences in mean He content. The difference in the half-number radii is, at any rate, modest (about 10\%) and only moderately significant. \item The P1 stars have a slightly lower velocity dispersion than the P2 stars, although the difference is not statistically significant. Nevertheless, the difference is in agreement with the more extended spatial distribution of the P1 stars. We find evidence of rotation for the P1 stars, whereas the data for the P2 stars are consistent with no rotation, as well as the same rotation as the P1 stars. \item Stars for which spectroscopic measurements indicate a significant Mg-deficiency ($\mathrm{[Mg/Fe]}<0$) are associated primarily with the nitrogen-rich population. \end{itemize} In terms of the main elements studied and discussed in this paper, the abundance patterns seen in NGC~2419 are relatively unsurprising. The P1 stars identified through their field-like N abundances also tend to have relatively field-like He and Mg abundances, while the N-enriched P2 stars tend to have enhanced He and (strongly) depleted Mg. Nevertheless, the correlations between N and He have real scatter, and the same may well be true for N vs.\ Mg. It is well to keep in mind, however, the apparent lack of any correlation between Na and Mg. Here we cannot directly address the relation between Na and N or He, and this certainly appears to be a problem worthy of further investigation. The failure of current scenarios for the formation of GCs to provide a satisfactory account of the observed properties of multiple populations in general is well documented \citep{Bastian2018}, as are the additional problems associated with the complex chemistry of NGC~2419 specifically \citep[e.g.][]{DiCriscienzo2011,Cohen2012,Mucciarelli2012a,Carretta2013}. 
The relative similarity of the radial distributions of the different populations found here poses yet another potential complication for many scenarios. The possibility that NGC~2419 may be the nucleus of a disrupted dwarf galaxy has been discussed by many authors. Recent proper motion measurements, combined with the radial velocity of NGC~2419, appear to be consistent with membership of the Sagittarius dwarf spheroidal galaxy \citep{Massari2017,Sohn2018}, but since Sagittarius already has a nucleus \citep{Bellazzini2008} this would argue against NGC~2419 also being a nucleus. It is not clear, in any case, that this would help explain its peculiar abundance patterns, which are not observed in dwarf galaxies \citep{Salgado2019}. Other more plausible candidates for nuclei (such as $\omega$~Cen and M54) display a chemical inventory that is quite different from what is seen in NGC~2419, with significant metallicity spreads but no reported Mg-K anticorrelation \citep{Carretta2013}. While NGC~2419 appears to represent a relatively extreme manifestation of the multiple populations phenomenon, it should be noted that the GC NGC~2808 shares some of the features seen in NGC~2419, such as the Mg-K anticorrelation \citep{Mucciarelli2015}, although other details differ. Hence, it seems that NGC~2419 falls within the range of behaviours that a successful theory for GC formation must be able to explain. \begin{acknowledgements} We thank the anonymous referee for a careful reading of the manuscript and several helpful comments. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. N.B.
gratefully acknowledges financial support from the Royal Society (University Research Fellowship) and the European Research Council (ERC-CoG-646928-Multi-Pop). J.B. acknowledges support for HST Program number GO-15078 from NASA through grant HST-GO-15078.02 from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. \end{acknowledgements} \bibliographystyle{aa}
\section{Asymptotics of the product of gamma functions}\label{a} Due to the asymptotic relation $$ \ln\Gamma(z)=\left(z-\frac12\right)\ln z-z+O(1),\quad |\arg z|<\pi, $$ one has $$ \ln\left\{\Gamma(b+z)\Gamma(b+\omega z)\Gamma(b+z/\omega)\right\}=3\left(b-\frac12\right)\ln z-\frac{2\pi}{\sqrt{3}}z+O(1),\quad |\arg z|<\frac{\pi}{3}. $$ From this it follows that $$ |P_b(z)|=C|z|^{3b-3/2}\cdot\begin{cases} e^{-\frac{2\pi}{\sqrt{3}}x},~0<\arg z<\frac{\pi}{3},\\ e^{\tfrac{\pi}{\sqrt{3}}x-\pi y},~\frac{\pi}{3}<\arg z<\frac{2\pi}{3}, \end{cases} $$ where $z=x+iy$. \section{Roots of the equation $e^{i\sqrt3 z}+2\cosh z=0$}\label{b} The fact that the roots of the equation $e^{i\sqrt3 z}+2\cosh z=0$ are symmetric under $z\to \omega z$ is easy to check directly. Since $\frac12 e^{-\pi\sqrt3 /2}=0.0329...$ is quite small, the equation $e^{i\sqrt3 z}+2\cosh z=0$ will have roots close to $\pi i \left(n+\frac12\right)$, where $n$ is a non-negative integer. Below it is shown that these are the only roots in the upper half-plane. Let $f(z)=e^{i\sqrt3 z}$ and $g(z)=2\cosh z$. Obviously, on the real axis $|f(z)|<|g(z)|$. Now consider $f(z)$ and $g(z)$ on the closed contour $C$ depicted in Fig.~\hyperref[figure2]{2}. \begin{figure}[H] \centering \begin{tikzpicture} \draw [help lines,->] (-4,0) -- (4,0) coordinate (xaxis); \draw [help lines,->] (0,-0.5) -- (0,4) coordinate (yaxis); \path [draw, line width=0.8pt, postaction=decorate] (0,0) -- (3.5,0) node [below] {$R$} arc (0:180:3.5) node [below] {$-R$} -- (0,0) ; \node [below left] {$O$}; \node at (2,3.3) {$\Gamma_{R}$}; \node at (0,-0.8) {$\text{Fig.2}$}; \end{tikzpicture} \end{figure} Here $\Gamma_R$ is a semicircle of radius $R=\pi N$ for some large natural number $N$. We have for $z=x+iy\in \Gamma_R$ $$ |f(z)|=e^{-\sqrt3 y}\le 1, $$ $$ |g(z)|=2\sqrt{\sinh^2x+\cos^2\sqrt{\pi^2N^2-x^2}}\ge 2. $$ Thus $|f(z)|<|g(z)|$ on the contour $C$.
According to Rouché's theorem, this means that the function $f(z)+g(z)$ has the same number of roots inside the contour $C$ as the function $g(z)$, as required. This analysis shows that the roots of $e^{i\sqrt3 z}+2\cosh z=0$ are located on the three rays $z=ir\omega^k$, $~k=0,1,2$.
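The Rouché argument locates the roots only up to the approximation $z\approx\pi i\left(n+\frac12\right)$. As a quick numerical sanity check (a sketch in Python, not part of the derivation), a few Newton iterations confirm the $n=0$ root, which lies on the imaginary axis slightly above $\pi i/2$:

```python
import cmath
from math import pi, sqrt

def f(z):
    # f(z) = e^{i*sqrt(3)*z} + 2*cosh(z), whose roots we locate numerically
    return cmath.exp(1j * sqrt(3) * z) + 2 * cmath.cosh(z)

def fprime(z):
    return 1j * sqrt(3) * cmath.exp(1j * sqrt(3) * z) + 2 * cmath.sinh(z)

def newton(z, steps=50):
    # Plain Newton iteration; the starting point pi*i/2 is close enough
    # that convergence is immediate.
    for _ in range(steps):
        z = z - f(z) / fprime(z)
    return z

# Start from the approximate root pi*i*(n + 1/2) with n = 0.
root = newton(1j * pi / 2)
```

The computed root is purely imaginary and sits within about $0.033$ of $\pi i/2$, consistent with the smallness of $\frac12 e^{-\pi\sqrt3/2}$ noted above.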
\section{Introduction} \label{sec:intro} Localization has attracted significant attention in the past few years, and numerous techniques have been proposed to achieve high-accuracy localization. In outdoor scenarios, GPS and its variants are the most common technologies for providing accurate positions to various applications~\cite{cheng2005accuracy}. However, weak or absent GPS signals in cities often degrade the user experience. For instance, we conducted comprehensive experiments in downtown Chicago, IL, USA, to evaluate the performance of GPS positioning. Based on the results, we observe that GPS signals are very weak and unstable on some roads due to high-rises, or even blocked completely by complicated road structures such as tunnels and underground segments. Moreover, the largest location error we collected is over $100$m on the ground, and nearly $400$m in the underground segments (see Fig.~\ref{fig:pex} for more details). Thus, improving the location accuracy is imperative when the GPS signal is weak. In this work, we propose \emph{SmartLoc}, a localization method that improves localization accuracy in metropolises by leveraging the inertial sensors embedded in smartphones to learn driving patterns under various road conditions. Data collected from such inertial sensors has been used in the literature to address a number of challenging tasks, \textit{e.g.}\xspace, indoor localization \cite{constandache2010towards,wang2012no,yang2012locating,liu2012push}, road condition monitoring \cite{mohan2008nericell,eriksson2008pothole}, property tracking~\cite{guha2010autowitness}, and outdoor localization \cite{guha2010autowitness,hwanggps,paek2010energy}. 
Note that some applications exploit the accelerometer to measure a pedestrian's walking speed and distance~\cite{chen2009integrated,constandache2010towards,retscher2006intelligent,wang2012no} and the compass to estimate the heading, and thus the location. However, providing real-time localization of moving cars in metropolises is far more challenging, as driving does not produce a cyclic pattern in the sensory data. To address these challenges, during the dead-reckoning process for calculating the current position of a car, we propose a dynamic trajectory model that estimates the driving speed and distance based on the current road condition, so that the impact of inherent noise and accumulated error is reduced to a large extent. We also design a calibration strategy based on road infrastructures (\textit{e.g.}\xspace, bridges, traffic lights, uphills, and downhills) and driving status (\textit{e.g.}\xspace, turns, stops), which are inferred from the sensory data. Our extensive evaluations indicate that leveraging inertial sensors can accurately identify these special road infrastructures using either fingerprint-based approaches or pattern-matching techniques. We implement \emph{SmartLoc} on Android and evaluate its localization performance both in downtown Chicago and on highways. Our extensive test results in the majority of blocks in Chicago indicate that \emph{SmartLoc} improves the location accuracy significantly: 1) the mean localization error in each time slot is $11.65$m; 2) in terms of the proportion of ``good'' road segments, the average localization error is less than $20$m, so that the localization accuracy is increased from $\le 50\%$ (using GPS alone) to $\ge 90\%$ using \emph{SmartLoc} in downtown areas. When testing \emph{SmartLoc} on highways, the localization error is at most $12$m for $95\%$ of the time. 
In comparison, the state-of-the-art localization scheme for moving vehicles, AutoWitness~\cite{guha2010autowitness}, only bounds the distance estimation error below $10\%$ for most cases, which can still be large when the estimated distance is long (\textit{e.g.}\xspace, $10\%$ of a 2-mile drive is $320$m). Our results also imply that \emph{SmartLoc} can save energy by switching the GPS on/off periodically. The main contributions of this work are summarized as follows: \begin{compactenum} \item We propose a self-learning driving model to reduce the speed and trajectory distance estimation errors caused by both inherent noise and dead-reckoning. \item In real scenarios, where both the traffic conditions and road infrastructures are complex and unpredictable and hinder accurate trajectory estimation, \emph{SmartLoc} adjusts the self-learning driving model to compute the parameters that best match the current driving condition. \item Although self-calibration is a reliable approach to improving localization accuracy, it remains difficult to calibrate locations in metropolises with weak GPS signals. \emph{SmartLoc} therefore exploits the current coarse-grained location estimate to confine the search space, so that much more accurate localization can be achieved by matching road infrastructures and driving status. \end{compactenum} The rest of the paper is organized as follows. We first review the state-of-the-art localization techniques in Section~\ref{sec:rw}. We show our measurement results and observations with respect to GPS accuracy in Section~\ref{sec:preliminary}. We present an overview of \emph{SmartLoc} in Section~\ref{sec:overview}, after which the calibration techniques of \emph{SmartLoc} are presented in Section~\ref{sec:regression} and Section~\ref{sec:landmarks}. 
We report our detailed real-world experimental results in Section~\ref{sec:experiment}, and conclude the work in Section~\ref{sec:conclusion}. \section{Related Works} \label{sec:rw} Our work involves a number of techniques; in this section, we mainly focus on work related to wireless localization and dead-reckoning~\cite{levi1996dead}. \subsection{Localization Techniques} GPS~\cite{liu2012energy}, the most popular outdoor localization system, has been widely used to provide localization and navigation services, and numerous techniques have been proposed in the literature to improve its accuracy, such as A-GPS, D-GPS~\cite{parkinson1996differential}, and WAAS~\cite{enge1996wide}. Recently, WiFi signals \cite{cheng2005accuracy,skyhook} and cellular signals \cite{varshavsky2005gsm,chen2006practical} have been used for localization as well. However, the median error in downtown environments based on cellular signals reaches $100$m at worst~\cite{chen2006practical}, and WiFi-based solutions rely on the locations of nearby WiFi APs. Unfortunately, these GPS-based or WiFi-based solutions are inapplicable for navigation in metropolises because of many critical road infrastructures, such as underground roads and multilayered roads, where the GPS signal is often lost and there are no WiFi access points at all. Some GSM-based localization methods, such as \cite{varshavsky2005gsm, mohan2008nericell}, are widely available. However, their accuracy is low (up to hundreds of meters), and they assume that the exact positions of cellular towers are known \emph{a priori}. PlaceLab~\cite{cheng2005accuracy} and ActiveCampus~\cite{griswold2004activecampus} make full use of WiFi and GSM signals for outdoor localization. The former creates a map by war-driving a region and mapping both APs' and cell towers' signals onto a wireless map; the latter is quite similar, except that it assumes the APs' locations are known \emph{a priori}. 
Taking advantage of the two aforementioned systems, CompAcc~\cite{constandache2010towards} uses dead-reckoning combined with A-GPS to further calibrate localization results rather than relying on preliminary war-driving. Unfortunately, all these systems need time-consuming calibration and are not suitable for large-scale areas. Skyhook~\cite{skyhook} supplies high-accuracy location services at the cost of hiring over $500$ drivers to create the WiFi/GSM map of a given region. Several promising techniques such as crowdsourcing have recently been introduced into localization, \textit{e.g.}\xspace, Zee~\cite{rai2012zee}, which also uses inertial sensors to track users' movement. \subsection{Dead-Reckoning} Recently, dead-reckoning strategies using internal sensors to estimate motion activities have attracted much research interest. The Strapdown Inertial Navigation System (SINS)~\cite{titterton2004strapdown} and the Pedometer System~\cite{jirawimut2003method} use MEMS sensors to estimate the moving speed and trace. The key issue is dealing with the noise of internal sensors and the accumulated errors, which sometimes grow cubically~\cite{woodman2008pedestrian}. The Personal Dead-reckoning (PDR) system~\cite{ojeda2007non} uses ``Zero Velocity Update'' to calibrate the drift. The majority of dead-reckoning studies focus on walking estimation, such as UnLoc~\cite{wang2012no} and CompAcc~\cite{constandache2010towards}. Their main idea is to use accelerometer data to count walking steps, and then measure the walking distance. AutoWitness~\cite{guha2010autowitness} is a system with an embedded wireless tag integrating vibration, accelerometer, and gyroscope sensors. The tag is attached to a vehicle, and the accelerometer and gyroscope are used to track the moving trace. \subsection{Road, Map and Traffic} Smartphones have been used to analyze traffic patterns to provide better in-vehicle navigation systems. 
CTrack~\cite{thiagarajan2011accurate} and VTrack~\cite{thiagarajan2009vtrack} are two systems that process error-prone positioning data to estimate trajectories. Both match a sequence of observations to transitions between locations; the former adopts fingerprints while the latter mainly utilizes an HMM. SmartRoad~\cite{hu2013smartroad} detects and identifies traffic lights and stop signs through crowd-sensing strategies. Other works propose map-matching algorithms based on the Kalman Filter~\cite{obradovic2006fusion} or HMMs~\cite{newson2009hidden}. However, such approaches cannot guarantee accuracy; IVMM~\cite{yuan2010interactive} was then proposed to increase it. \section{GPS Positioning in Downtown} \label{sec:preliminary} \subsection{Measurement in Downtown Chicago} To study how bad the GPS location accuracy can be, we first conduct comprehensive measurements of GPS accuracy within an area of downtown Chicago (the red rectangle in Figure~\ref{fig:loop}). We drive through every road in the area while recording location information in real time. In order to remove time-dependent GPS location errors, we conduct independent measurements at three different times and report the averaged results. We find that in the test area, the largest location error reaches $400$m, and the longest road segment between two GPS locations with reasonable accuracy ($\le 30$m) is about $1$km long. \begin{figure}[h] \centering \subfigure[Testing area in Chicago\label{fig:loop}] {\includegraphics[width=1.6in]{loop.eps}} \subfigure[Proportion of Error\label{fig:error_pro}] {\includegraphics[width=1.7in]{error_pro.eps}} \caption{GPS localization accuracy in Chicago.} \label{fig:pex} \end{figure} \subsection{Original Location Results} Unfortunately, according to our measurement results, the location accuracy is not as high as expected. 
For instance, the localization results have an average error of $42.22$m, and the largest error reaches $400$m, which is nearly the length of three blocks in the downtown area. We further plot the localization accuracy of downtown Chicago based on the measurement results in Figure~\ref{fig:error_pro}. Clearly, only about half of the sampling points have an error of less than $20$m; over one quarter of the locations have an error of about $50$m, while the remaining quarter have errors larger than $50$m. We assume that the largest location error a user can accept inside a city is $30$m, which is less than a quarter of one block. From now on, we consider positions with a GPS location error of less than $30$m as locations with \textbf{good GPS signals}, and the rest as locations with \textbf{bad GPS signals}. Since longer road segments with bad GPS signals are prone to producing wrong turning or stopping instructions in a navigation system, we calculate the lengths of road segments with bad GPS signals in the experiment area and present the results in Figure~\ref{fig:no_gps}. In Figure~\ref{fig:seg_no_gps}, we number each road segment with bad GPS signals on the X axis and plot the lengths of all $182$ such segments; the longest reaches almost one kilometer. Meanwhile, Figure~\ref{fig:num_no_30} illustrates the length distribution of these segments. We notice that the average length of these bad road segments is approximately $200$m, and those longer than $400$m are located in the center of downtown, where they may confuse drivers the most. 
\begin{figure}[h] \centering \subfigure[Bad road segments.\label{fig:seg_no_gps}]{\includegraphics[width=1.6in]{seg_no_gps.eps}} \subfigure[Number of bad segments.\label{fig:num_no_30}]{\includegraphics[width=1.6in]{num_no_30.eps}} \vspace{-0.1in} \caption{Road segments with poor GPS\label{fig:no_gps}} \end{figure} \vspace{-0.1in} \section{System Overview} \label{sec:overview} \subsection{Main Idea} The objective of \emph{SmartLoc} is to use the inertial sensors in smartphones to estimate locations in real time, based on the trajectory and orientation obtained through a self-learned dynamic model, with high accuracy but low energy consumption. Remarkably, we not only address the inaccuracy caused by the complex infrastructures in downtown areas, but also exploit these infrastructures to improve the localization accuracy. In trajectory measurement, traditional methods leveraging inertial sensors introduce large inherent errors, which leads to poor traveling distance and speed estimation. Besides the mechanical noise, such errors also come from the process of extracting and transforming the linear acceleration into the Earth frame coordinates and estimating the orientation. Although an Extended Kalman Filter can reduce such coarse noise to some extent, the trajectory calculation error still cannot be neglected. We therefore propose a self-learning predictive regression model to estimate the moving distance based on the extracted acceleration, in which the accumulated errors are minimized in the following way. \emph{SmartLoc} switches to the training process to train the predictive model when the GPS signal is good. When GPS signals are unreliable, it uses the trained model to predict the moving trajectory of the vehicle. Due to complex road conditions and unpredictable driving activities, the training process is updated periodically in our model. 
In addition, \emph{SmartLoc} detects \textit{landmarks} by finding special patterns in the sensory data when the car goes through bridges, tunnels, traffic lights or turning points, and calibrates the estimation accordingly. \subsection{Challenges} Several technical issues must be addressed here. The first is how to design an improved self-learning trajectory estimation model for the current driving conditions, since naive methods using Newton's laws accumulate the noise (\textit{e.g.}\xspace, when we doubly integrate the acceleration to obtain the displacement, the noise is doubly accumulated as well). The second is how to recognize the landmarks that our system uses to improve the localization accuracy. The last but not least challenge comes from the fact that even when some special landmarks are recognized, traffic conditions still affect the localization accuracy, \textit{e.g.}\xspace, through the unpredictable length of the waiting queues in front of traffic lights. The following sections address these challenges in detail. \section{Trajectory Calculation} \label{sec:regression} \subsection{Background} Although the accelerometer, gyroscope and magnetometer provide sensory data reflecting the motion conditions, their intrinsic noise makes naive distance estimation based on Newton's laws unreliable, because the error accumulates. Since drivers typically mount their smartphones on the windshield as navigators, and the orientation of the smartphone changes irregularly due to driving direction changes and vehicle vibration, we build an estimation model using a gyroscope-based Extended Kalman Filter to decrease the orientation error, and extract the linear acceleration in the Earth frame coordinates. 
\subsection{Self-learning Predictive Model} We observe from our preliminary experiments (Section~\ref{sec:preliminary}) that the majority of road segments with bad GPS signals (error $\geq 30$m) are usually shorter than $400$m, which takes about $20$--$30$ seconds to drive through under normal conditions. On the one hand, such a distance is long enough to navigate drivers to wrong places; on the other hand, it is short enough to keep the accumulated errors tolerable. Therefore, we propose the following predictive dynamic trajectory estimation model, which adaptively calibrates itself using GPS signals and dead-reckoning. \textbf{Velocity Estimator:} Because of the inherent noise and measurement errors, the traditional velocity estimation model is no longer reliable. We therefore denote the velocity $V_i$ at the end of a timeslot $i$ as \begin{equation} V_i = V_{i-1} + \beta\cdot{a_i}\cdot{\Delta{t}} + \mu \label{eq:v_init} \end{equation} where $\beta$ is a parameter to be learned and adjusted in real time, $a_i$ is the average measured acceleration during the timeslot $i$, and $\mu$ is the noise. When GPS signals are strong, both $V_i$ and $V_{i-1}$ can be obtained from the GPS directly, and the mean linear acceleration $a_i$ is extracted from the accelerometer. We then regress the model to find the best $\beta$ and estimate the residual noise $\mu$. When localization through GPS is unreliable, we use the trained model to predict the velocity $V_i$. \textbf{Distance Estimator:} In general, the trajectory distance gathered from GPS indicates the true distance with some error. Letting $G(\Delta{t}_i)$ be the distance during a timeslot $i$ read from the GPS, it can be written as \begin{displaymath} G(\Delta{t}_i) = {\lambda}_1\cdot{V_{i-1}}\cdot{\Delta{t}} + \frac{1}{2}\cdot{\widehat{a_i}}\cdot{\Delta{t}^2} + \eta \end{displaymath} where $\widehat{a_i}$ is the actual acceleration in the time slot $i$. 
Here $\lambda_1$ is introduced to reflect the error in the estimated speed $V_{i-1}$ for the time slot $i-1$. Since the measured acceleration $a_i$ contains both inherent noise and measurement errors, and assuming that these errors follow a normal distribution, we model the measured acceleration as $$a_i = (1 + \varepsilon)\widehat{a_i} + \delta,$$ where $\widehat{a_i}$ is the true acceleration, which cannot be observed directly. We then use the following formula to estimate the distance $G(\Delta t_i)$: {\small\begin{equation} G(\Delta t_i)={\lambda}_1\cdot{V_{i-1}}\cdot{\Delta{t}} + {\lambda}_2\frac{1}{2}\cdot{a_i}\cdot{\Delta{t}^2} + {\lambda}_3\cdot{\Delta{t}^2} + {\lambda}_4\cdot{\Delta{t}} + \eta \label{eq:dis_k_gps} \end{equation}} where $\lambda_1, \cdots, \lambda_4$ are parameters learned by our regression model. When GPS signals are strong (GPS error $\le 20$m), based on $V_{i-1}$, the $a_i$ computed from the sensory data, and the distance from GPS, we train our model using \equref{eq:dis_k_gps}, which is in turn used to predict the distance $G(\Delta{t}_i)$ in a time slot $i$ when GPS signals are bad. From the predicted trajectory distance $G(\Delta t_i)$, the location at timeslot $i$ can be estimated from the previously obtained location, the distance, and the orientation. However, since the GPS location errors change in both the spatial and temporal dimensions, it is difficult to predict the times and places at which GPS signals become weak. In addition, driving in downtown areas faces unpredictable traffic conditions and road infrastructures, which affect the parameters learned by the previous model. Therefore, we propose a more flexible dynamic adjustment strategy that updates the parameters to match the current driving status. In this strategy, we compute the parameters of the predictive dynamic trajectory estimation model based only on the latest driving data. We allocate a small buffer to save the latest driving information. 
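For concreteness, the training step for the velocity estimator of \equref{eq:v_init} reduces to an ordinary one-parameter linear regression with intercept; the distance estimator of \equref{eq:dis_k_gps} is fit analogously with four parameters. Below is a minimal self-contained sketch in Python; the function names and sample layout are our own illustration, not the paper's implementation:

```python
def fit_velocity_model(samples, dt):
    """Least-squares fit of V_i = V_{i-1} + beta*a_i*dt + mu (Eq. 1)
    from (V_prev, a_mean, V_next) triples collected while the GPS fix
    is reliable.  Returns the learned (beta, mu)."""
    xs = [a * dt for (_, a, _) in samples]                    # regressor: a_i * dt
    ys = [v_nxt - v_prev for (v_prev, _, v_nxt) in samples]   # response: V_i - V_{i-1}
    n = len(samples)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    beta = sxy / sxx
    mu = my - beta * mx                                       # intercept absorbs the noise bias
    return beta, mu

def predict_velocity(v_prev, a, dt, beta, mu):
    """Dead-reckoning prediction of V_i when the GPS is unreliable."""
    return v_prev + beta * a * dt + mu
```

With a sliding buffer of the latest triples, refitting after each reliable GPS fix realizes the dynamic adjustment strategy described above.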
While the protocol is in the learning process, the model replaces the oldest data in the buffer with the latest information in order to update the model parameters. Based on our evaluation, this reduces the trajectory distance estimation error to a large extent. \subsection{Movement Detection} \begin{figure*} \centering \subfigure[Acceleration\label{fig:acc_motion}]{\includegraphics[scale = 0.25]{acc_motion.eps}} \subfigure[BFC/EFC angle\label{fig:gyeo_motion}]{\includegraphics[scale = 0.25]{gyro_motion.eps}} \subfigure[Acceleration in Cruise\label{fig:cruise_acc}]{\includegraphics[scale = 0.25]{cruise_acc.eps}} \subfigure[BFC/EFC angle in Cruise\label{fig:cruise_gyro}]{\includegraphics[scale = 0.25]{cruise_gyro.eps}} \vspace{-0.12in} \caption{The acceleration and angle while driving in the city or in cruise mode.} \vspace{-0.2in} \label{fig:motion_detection} \end{figure*} Recall that the speed estimator calculates speeds based on the accelerometer, and the estimated speed contains noise accumulated through the integration. Therefore, even if the vehicle stops, the estimated speed is highly likely to be non-zero, which may lead to a large error in the final prediction. Hence, determining whether the vehicle is moving or halting can further reduce the negative impact of the mechanical noise. In addition, movement detection is also the key to the landmark calibration process, which adjusts the location when the vehicle stops in front of traffic lights or stop signs. In our preliminary experiments, we found that movement is reflected precisely in both the accelerometer and gyroscope readings, as shown in Figure~\ref{fig:motion_detection}. The acceleration fluctuates frequently when the vehicle is in motion, even in cruise mode, and remains relatively stable when it stops (Figures~\ref{fig:acc_motion} and~\ref{fig:cruise_acc}). The same holds for the gyroscope (Figures~\ref{fig:gyeo_motion} and \ref{fig:cruise_gyro}). 
Although the smartphone is usually mounted on the windshield, due to inertia while driving, especially when speeding up or braking, the gyroscope can still sense small rotation changes. For all cases, we calculate the variance of the readings from both sensors, and we find that the largest difference is between the stopped and moving states. For the acceleration, the variance in motion is approximately $60$ times that when stationary: about $0.01$ when stopped, and $0.6$ and $0.4$ for regular driving and cruise mode, respectively. The variance differences for the gyroscope are similar. \emph{SmartLoc} continuously collects the sensory data from both the accelerometer and gyroscope; if the vibration lies below the threshold, we consider the vehicle to be stopped. In our experiments, we find that \emph{SmartLoc} differentiates moving and stopping activities precisely. \section{Calibration by Landmarks} \label{sec:landmarks} \begin{figure*}[!ht] \centering \subfigure[Traffic lights\label{fig:red}]{\includegraphics[width=1.2in]{red.eps}} \subfigure[Centripetal force\label{fig:cen_for_turning}]{\includegraphics[width=1.3in]{cen_for_turning.eps}} \subfigure[Angular velocity\label{fig:ang_v}]{\includegraphics[width=1.3in]{ang_v.eps}} \subfigure[Magnetometer\label{fig:mag_turn}]{\includegraphics[width=1.3in]{mag_turn.eps}} \subfigure[Bridge\label{fig:bridge}]{\includegraphics[width=1.3in]{bridge.eps}} \vspace{-0.12in} \caption{Pattern of the sensor data collected in different road infrastructures when driving: (a) car stopping and crossing a traffic light; (b), (c), and (d) car turning $90^o$; and (e) car crossing a bridge.}\label{fig:pattern} \vspace{-0.11in} \end{figure*} As mentioned before, road infrastructures, including tunnels, bridges, crossroads and traffic lights, cause large noise in the GPS data, which results in a large drift in the distance estimation if not treated rigorously. 
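The variance-based stop test of the movement-detection subsection can be sketched as follows; the threshold is an illustrative value chosen between the reported variances of roughly $0.01$ when stopped and $0.4$--$0.6$ when moving, not one taken from our implementation:

```python
from statistics import pvariance

# Illustrative threshold between ~0.01 (stopped) and ~0.4-0.6 (moving).
ACC_VAR_STOPPED = 0.05

def is_stopped(acc_window):
    """Declare the vehicle stopped when the variance of a sliding window
    of accelerometer magnitudes falls below the threshold; a gyroscope
    window can be tested the same way and the two votes combined."""
    return pvariance(acc_window) < ACC_VAR_STOPPED
```

In practice the window length is a few seconds of samples, long enough to smooth over isolated vibration spikes.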
In this work, we exploit the precise locations of these infrastructures, available in Google Maps, to calibrate the localization without any extra cost.\medskip \textbf{Traffic Light:} When the vehicle stops at a traffic light and then drives through the crossroad, unique patterns appear in the sensor readings (Figure~\ref{fig:red}). When a vehicle encounters a traffic light, the whole process can be divided into two phases: braking and speeding up. The acceleration falls below zero when the car brakes, reaches its lowest point at the very moment the vehicle stops, and then returns to zero swiftly. However, in rush hours with heavy traffic, the location where a car stops may not be at the crossroad, but at a certain distance from it. In this case, \emph{SmartLoc} adjusts the moving distance based on the stopping location estimated from empirical data, \textit{i.e.}\xspace, by subtracting the distance from the car to the crossroad. Since this distance is determined by the traffic conditions, it is difficult to measure exactly. The main approach adopted by \emph{SmartLoc} is to subtract $\frac{n\cdot{L}}{2}$, where $L$ denotes the average length of a vehicle and $n$ represents the current possible number of vehicles waiting for the green light. According to our observations, the number of vehicles $n$ waiting at a traffic light depends on the time of day; in rush hours it is much larger. We therefore assume that this number follows a normal distribution $n\sim{\mathcal{N}(\mu_t, {\sigma_t}^2)}$. 
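Under the assumption $n\sim{\mathcal{N}(\mu_t, {\sigma_t}^2)}$, the expected correction $n\cdot L/2$ has mean $\mu_t\cdot L/2$. A minimal sketch of this calibration step follows; the default average vehicle length and the per-period $\mu_t$ are illustrative placeholders, not measured values from the paper:

```python
def expected_queue_offset(mu_t, avg_vehicle_len=4.5):
    """Expected distance (m) between the stopped car and the crossroad:
    with n ~ N(mu_t, sigma_t^2), E[n * L / 2] = mu_t * L / 2.
    avg_vehicle_len is an assumed average car length."""
    return mu_t * avg_vehicle_len / 2

def calibrate_stop_position(crossroad_pos, mu_t, avg_vehicle_len=4.5):
    """Landmark calibration at a red light: place the car the expected
    queue length short of the crossroad (1-D position along the road)."""
    return crossroad_pos - expected_queue_offset(mu_t, avg_vehicle_len)
```

Here $\mu_t$ would be looked up per time period (larger in rush hours), following the time-dependent model above.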
\begin{figure*}[hptb] \centering \subfigure[Turning\label{fig:turning}]{\includegraphics[scale = 0.25]{turning.eps}} \subfigure[Changing Lane\label{fig:lane}]{\includegraphics[scale = 0.25]{lane.eps}} \subfigure[Driving Trace\label{fig:trace1}]{\includegraphics[scale = 0.25]{trace1.eps}} \subfigure[Estimated Orientation\label{fig:angle}]{\includegraphics[scale = 0.25]{angle.eps}} \vspace{-0.15in} \caption{Turning or changing lanes, and Driving Trace\label{fig:turn_lane}} \vspace{-0.15in} \end{figure*} \medskip \textbf{Turning:} Vehicles may turn at intersections, which can be detected by the sensors. Figure~\ref{fig:cen_for_turning} shows the centripetal force sensed by the accelerometer; the magnitude of the acceleration depends on the speed at which the vehicle is turning. Simultaneously, the angular velocity sensed by the gyroscope reaches up to $0.5$ rad/s in our test case (Figure~\ref{fig:ang_v}), and the magnetometer data changes as well, with a large fluctuation. Finally, the orientation of the smartphone changes by approximately $90$ degrees when turning left or right. This angle change is observed along the axis in the gravity direction, and the readings $0$, $90$, $180$, $270$ represent north, east, south, and west, respectively. Although the angle may not be accurate enough due to the large noise in the magnetometer (the maximum error we experienced was approximately $30^o$), we are still able to correctly determine the road segment into which the car is turning, by calibration. For example, Fig.~\ref{fig:turning} shows a case where the vehicle turns from north: the angle changes from about $350^o$ to $100^o$, i.e., toward east. We also compare the measured angle differences for turning and lane changing (Figure~\ref{fig:lane}), since a lane change can be wrongly detected as a turn. In fact, the angle difference when a car changes lanes is much smaller than when it makes a turn. 
In addition, we also calculated the standard deviation of the angle differences in lane changing, which is less than $10$ degrees. Thus, distinguishing turning from lane changing is feasible. We then conducted further studies on driving orientation estimation. Figure~\ref{fig:trace1} plots the raw trace of the vehicle obtained from the GPS with good signals, and Figure~\ref{fig:angle} illustrates the raw orientation generated only by the inertial sensors. We employ a moving average to cancel some of the noise and calculate the driving orientation, which matches the ground truth. Other road infrastructures that a vehicle may experience are bridges and tunnels. In our measurements, their patterns are more obvious and easier to detect, mainly reflected in the acceleration along the gravity direction, where the reading experiences a large fluctuation when driving uphill or downhill, as shown in Figure~\ref{fig:bridge}. In fact, certain driving patterns, such as turning left or right and stopping for traffic lights or stop signs, can be detected and classified quite accurately. To classify other road infrastructures, we collect the sensor readings of those patterns as fingerprints, and then match the real-time sensor readings against the trained fingerprints. To improve the classification and matching accuracy, we first rely on the coarse-grained location estimate from dead-reckoning, and then use our predictive regression model (Section~\ref{sec:regression}) to confine the search space: only the road infrastructures (stored fingerprints) $I$ within a certain distance $\delta$ from the estimated location $x$ are considered as matching candidates for the real-time pattern $P$ obtained from the sensory data. 
We select the infrastructure that maximizes the \emph{weighted matching score}: \begin{displaymath} \alpha M(I, P)+ (1-\alpha) e^{-D(x, L(I))} \end{displaymath} where $M(I,P)$ is the matching score between the fingerprint of an infrastructure $I$ and the observed pattern $P$, $\alpha \in (0,1)$ is a constant, and $D(x, L(I))$ is the geodesic distance between the location $x$ and the location $L(I)$ of infrastructure $I$. The estimated location $x$ is then updated to the location $L(I^*)$ of the infrastructure $I^*$ that maximizes the weighted matching score. \section{Experiments and Evaluations} \label{sec:experiment} We conduct an extensive evaluation of \emph{SmartLoc} in two different scenarios: downtown Chicago and suburban highways. We test the performance on highways to evaluate whether \emph{SmartLoc} is effective and reliable enough to replace traditional GPS localization and thereby save energy during navigation. In our evaluation, a Samsung Galaxy S3 is mounted on the windshield, and we drive over $100$ different road segments in downtown Chicago, ranging from $1$km to $10$km, and over $30$km on highways. Since the inertial sensors provide the driving orientation, which, combined with the driving distance from the location in the last timeslot, yields the real-time location, the key problem becomes estimating the trajectory distance. We evaluate the traveling distance, road infrastructure recognition, accuracy, and energy consumption. \subsection{Trajectory Distance Estimation} We denote the trajectory distance covered in a timeslot as a \emph{traveling segment}. Since the typical frequency of reliable GPS updates on a smartphone ($0.5$Hz) is much lower than that of the sensors ($1$Hz--$20$Hz), we take the duration of a reliable GPS update period as a timeslot. 
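Before presenting the results, we note that the landmark-selection rule of Section~\ref{sec:landmarks} can be sketched compactly in Python. This is only an illustration: planar Euclidean distance stands in for the geodesic $D$, and the candidate layout, $\alpha$, and $\delta$ values are assumptions, not the paper's parameters:

```python
from math import exp, hypot

def select_landmark(candidates, match_score, x, delta=150.0, alpha=0.7):
    """Pick the landmark I* maximizing alpha*M(I,P) + (1-alpha)*exp(-D(x,L(I)))
    among stored fingerprints within distance delta of the coarse
    dead-reckoned location x.  candidates: (landmark_id, (px, py), fingerprint)
    tuples; match_score: callable returning M(I, P) in [0, 1]."""
    best_id, best_score = None, float("-inf")
    for lm_id, pos, fingerprint in candidates:
        d = hypot(pos[0] - x[0], pos[1] - x[1])
        if d > delta:
            continue  # outside the search space confined by dead-reckoning
        score = alpha * match_score(fingerprint) + (1 - alpha) * exp(-d)
        if score > best_score:
            best_id, best_score = lm_id, score
    return best_id
```

The estimated location is then snapped to the position of the returned landmark, as described above.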
We focus the evaluation of trajectory distance estimation on two aspects: (1) the accuracy of the distance estimate within a \emph{traveling segment}; and (2) the accuracy of the final distance estimate over longer road segments. We analyze the performance in detail in the rest of this section. \subsubsection{Prediction in City Without Using Landmarks} We first test \emph{SmartLoc} in downtown Chicago on over $30$ different roads, some with reliable GPS signals and some without. We separate these roads into more than $100$ road segments, whose sizes are determined by the evaluations presented in the rest of this section. Before describing the performance of \emph{SmartLoc} in metropolises, we have to admit that the GPS signals in downtown Chicago are relatively poor and time dependent. Therefore, it is difficult to obtain the ground-truth locations using smartphones; even with WiFi or GSM, fine-grained location information is hard to obtain. We therefore conduct the experiments in areas with accurate GPS locations and remove some of the GPS information to simulate missing good signals. We then apply \emph{SmartLoc} to calculate the locations in those removed road segments and compare them with the ground truth. As before, we analyze the performance of \emph{SmartLoc} in the two phases mentioned above. \begin{figure} \centering \subfigure[Mean error in each slot\label{fig:machine_learning_seg}] {\includegraphics[scale=0.25]{machine_learning_seg.eps}} \subfigure[Mean overall distance error\label{fig:machine_learning1}]{\includegraphics[scale=0.25]{machine_learning1.eps}} \vspace{-0.15in} \caption{Accuracy vs. Learning Distance.} \vspace{-0.0in} \label{fig:machine_local} \vspace{-0.3in} \end{figure} Obviously, the driving habits and road conditions in a city are difficult to predict, and a slight deceleration can make the predicted result deviate from the ground truth. 
We first evaluate the reliability of \emph{SmartLoc} when different driving distances, ranging from $0.5$km to $3.5$km, are used to train the system. Generally speaking, the accuracy increases as the learning distance increases, as illustrated in Figure~\ref{fig:machine_learning_seg}. In this figure, the X axis indicates the driving distance used for training our predictive regression model, and the Y axis represents the mean distance (between the actual location reported by the GPS and the location estimated by our \emph{SmartLoc}) in every timeslot in which we update GPS locations (\textit{i.e.}\xspace, every $2$ seconds, or about every $22$m when driving at $40$km/h). This experiment measures the prediction accuracy when we drive over four different road segments with lengths from $0.5$km to $2$km ($24$ different cases in total). Due to unstable driving activities, short road segments for training \emph{SmartLoc} lead to a large estimation error in each time slot. When \emph{SmartLoc} learns only from a trace of $1$km, the mean error in every time slot across the different scenarios is around $15$ meters, and the largest is nearly $30$ meters. When \emph{SmartLoc} trains our predictive regression model on a longer trace, the mean estimation error decreases in all test cases. The smallest error is less than $6$m, less than half of the error when the training trace is $1$km. We also observe that in most scenarios the error grows with the length of the test road segment. For example, when training \emph{SmartLoc} on a trace of $3.5$km, the mean estimation error on a $2$km road segment is nearly twice that on a $0.5$km road segment.
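To make the training step concrete, the following minimal sketch fits such a per-slot distance model with ordinary least squares. The feature layout (generic per-slot sensor statistics) and the synthetic setup are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def fit_slot_model(features, gps_distances):
    """Least-squares fit of per-slot distance ~ features @ w + b.

    `features` holds one row of inertial-sensor statistics per timeslot
    (the concrete feature set is an illustrative assumption) and
    `gps_distances` the GPS-derived distance labels collected while the
    GPS signal is still reliable.
    """
    X = np.hstack([features, np.ones((len(features), 1))])  # bias column
    w, *_ = np.linalg.lstsq(X, gps_distances, rcond=None)
    return w

def predict_slot_distance(w, feature_row):
    """Predict the distance driven in one timeslot from its features."""
    return float(np.append(feature_row, 1.0) @ w)
```

Once enough slots have been observed (the evaluation above suggests training traces of a few kilometers), the fitted weights replace GPS-based distances in slots where the signal is poor.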
We then evaluate the error in estimating the overall trajectory distance (Figure~\ref{fig:machine_learning1}) over all road segments and measure the error between the predicted distance and the ground-truth distance for each segment (of all segments with lengths from $0.5$km to $2$km) under different training traces. If \emph{SmartLoc} learns the model on only $1$km, the parameters in \equref{eq:dis_k_gps} cannot be computed accurately enough; the estimation errors then grow to $180$m in all our tests. When \emph{SmartLoc} learns from enough samples, the parameters are much more reliable, and the average accumulated error is far below $30$m, which is significantly better than the GPS in downtown Chicago. \subsubsection{Prediction in City Using Landmarks} \begin{figure*} \centering \subfigure[Sensors Only\label{fig:MotionSensor}] {\includegraphics[scale=0.21]{MotionSensor.eps}} \subfigure[Sensors and Traffic Lights \label{fig:Traffic}] {\includegraphics[scale=0.21]{Traffic.eps}} \subfigure[Landmark vs No-Landmark\label{fig:landmark_comparison}] {\includegraphics[scale=0.21]{with_without_landmark.eps}} \subfigure[SmartLoc\label{fig:smartLoc}] {\includegraphics[scale=0.21]{smartLoc.eps}} \subfigure[Overall Comparison\label{fig:loc_gps_est}] {\includegraphics[scale=0.21]{loc_gps_est.eps}} \vspace{-0.1in} \caption{Distance prediction comparison among three methods and ground truth.} \vspace{-0.1in} \label{fig:local_compare1} \vspace{-0.15in} \end{figure*} \begin{figure*} \centering \subfigure[Error of driving distance in every time slot for three methods\label{fig:dif}] {\includegraphics[scale=0.33]{dif.eps}} \subfigure[CDF of driving distance error in every time slot for three methods\label{fig:dis_cdf2_seg}] {\includegraphics[scale=0.33]{dis_cdf2_seg.eps}} \subfigure[Error of long distance estimation for three methods\label{fig:dis_cdf1}] {\includegraphics[scale=0.33]{dis_cdf1.eps}} \vspace{-0.1in} \caption{Comparison of three methods.} \vspace{-0.1in} 
\label{fig:local_compare2} \vspace{-0.15in} \end{figure*} \emph{SmartLoc} calibrates the location as soon as it detects specific patterns, especially traffic lights and turns. We test the performance of \emph{SmartLoc} on a real driving route with landmark-based calibration, and the result is presented in Figure~\ref{fig:michi_spot}, \begin{figure}[h] \centering \vspace{-0.15in} \includegraphics[scale=0.33]{michi_spot.eps} \vspace{-0.15in} \caption{Localization in the Street.} \label{fig:michi_spot} \vspace{-0.10in} \end{figure} which is a bird's-eye view of the driving trajectory. The blue dots are the ground-truth samples obtained from the GPS (where the GPS signals are good), the red dots are the locations predicted by our \emph{SmartLoc} with all calibration techniques, and the lengths of the green lines denote the magnitude of the error. In this figure, most of the red and blue dots overlap with each other, which reflects the high accuracy in a real downtown scenario. We then compare the performance of three different methods in detail: using inertial sensors only, using sensors with landmark calibration, and using \emph{SmartLoc} with the full learning model and calibration. In this experiment, we assume the first $3400$m has reliable GPS signals, so precise locations are accessible there. The estimation starts from $3400$m, and the first three plots in Figure~\ref{fig:local_compare1} show the driving distance from the starting point versus the elapsed time. In Figure~\ref{fig:MotionSensor}, we conducted the experiment with sensors only, without any calibration or noise canceling. The double integration of the acceleration leads to a final deviation of over $400$m after driving about $1200$m. When road pattern detection is introduced, the location is calibrated whenever \emph{SmartLoc} senses a road infrastructure pattern.
During the same experiment, our vehicle crossed $5$ traffic lights in total, and \emph{SmartLoc} successfully detected all of them. The estimated locations are then adjusted accordingly. The error in Figure~\ref{fig:Traffic} is still high, especially at the crossroads. Surprisingly, after combining our predictive regression model, \emph{SmartLoc}'s result almost coincides with the ground truth, as shown in Figure~\ref{fig:smartLoc}. For the first $900$m, the curve of \emph{SmartLoc} nearly overlaps with the curve of the ground truth. For the first $450$m, the vehicle passes three crossroads with all green lights, and the error is less than $20$m most of the time. After the final traffic light, the vehicle has to drive at a relatively low speed because of road construction. The predicted distance consequently deviates slightly from the ground truth, but at the end of the road the errors remain small. We plot all the distances estimated by the three methods in Figure~\ref{fig:loc_gps_est}, with the X axis being the ground-truth distance and the Y axis the predicted distance; a perfect prediction would thus lie on the diagonal. The results of \emph{SmartLoc} are distributed almost along the diagonal, while the pure-sensor approach deviates greatly. The deviation of the results from the ground truth comes from the accumulated errors of all time slots. Based on the previous experiments, we plot the error in every time slot in Figure~\ref{fig:dif}. \emph{SmartLoc} with landmark calibration has the smallest mean error of the estimated locations over all time slots: $90\%$ of the errors are below $20$m, as the CDF in Figure~\ref{fig:dis_cdf2_seg} shows. The other two approaches have larger errors, and the last figure, Figure~\ref{fig:dis_cdf1}, shows the CDF of the total driving distance error.
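The calibration loop just described can be sketched in a few lines. The one-dimensional road coordinate and the event format are simplifying assumptions for illustration.

```python
def track_with_landmarks(slot_distances, landmark_events):
    """Dead-reckon a 1-D position along the road and snap it to a known
    landmark position whenever one is detected.

    `slot_distances`: estimated distance driven in each timeslot.
    `landmark_events`: dict mapping slot index -> surveyed landmark
    position (e.g. a detected traffic light).
    Returns the estimated position after each slot.
    """
    position, trace = 0.0, []
    for i, d in enumerate(slot_distances):
        position += d                      # dead-reckoning update
        if i in landmark_events:
            position = landmark_events[i]  # landmark calibration
        trace.append(position)
    return trace
```

Between landmarks the error still accumulates, which is why the predictive regression model remains necessary.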
\subsubsection{Prediction in Highway} \begin{figure*} \centering \subfigure[Traveling Distance\label{fig:highway_dis}] {\includegraphics[scale=0.25]{highway_dis.eps}} \subfigure[Absolute error\label{fig:highway_seg}] {\includegraphics[scale=0.25]{highway_seg_total.eps}} \subfigure[Relative error\label{fig:highway_seg_cdf}]{\includegraphics[scale=0.25]{highway_seg_percent_cdf.eps}} \subfigure[Long Distance Estimation\label{fig:highway_dif_cdf}]{\includegraphics[scale=0.25]{highway_dif_cdf.eps}} \vspace{-0.1in} \caption{Traveling in a highway.} \vspace{-0.3in} \label{fig:highway} \end{figure*} In addition, we test the performance of \emph{SmartLoc} on the highway to evaluate the possibility of replacing traditional GPS to save energy. On the highway, GPS signals are almost always good, so the GPS data serves only as the ground truth in this evaluation. We drive over $10$ different highway segments with a total distance of over $60$km (at driving speeds of approximately $100$km/h-$120$km/h). The smartphone has access to the precise location information from the GPS, which is updated every $2$ seconds. Meanwhile, we collect the readings from the sensors and train our predictive regression model for $3$km. Then, we predict the traveling distance for the next $2$km and compare the distances from the ground truth, \emph{SmartLoc}, and the pure sensors. Figure~\ref{fig:highway_dis} compares the driving distance estimated by \emph{SmartLoc} (with sensors) and by the GPS. The ground truth (GPS readings) is plotted as the green curve. The error between the pure sensor-based solution and the ground truth clearly grows over time, due to the accumulated errors in the absence of any calibration. Using our predictive regression model, \emph{SmartLoc} computes suitable parameters and applies them in the prediction, after which the estimation errors become much smaller.
Figure~\ref{fig:highway_seg} indicates that the largest error is only $12$m among the $10$ different highway segments (each of length $2$km), and in over $80\%$ of the cases the errors are less than $5$m. Compared with the actual distance extracted from the ground truth (Figure~\ref{fig:highway_seg_cdf}), at over $95\%$ of the locations (among all locations where a GPS location can be extracted) the errors are less than $1\%$ of the actual driving distance, and the largest error is less than $2\%$ of the actual driving distance. We also notice that the prediction accuracy decreases as the driving distance increases. We predict the driving distance for both $1$km and $2$km after taking the data of the first $3$km to build the model. In our experiments, $80\%$ of the prediction errors for the $1$km and $2$km cases are less than $10$m and $15$m respectively, and even the largest errors fall within $19.8$m and $23$m, as plotted in Figure~\ref{fig:highway_dif_cdf}. However, based on this evaluation, we find that the estimation cannot maintain high accuracy over a long distance even on the highway. The main reason lies in user-dependent driving behaviors and unpredictable special conditions, such as traffic jams. We also observe that \emph{SmartLoc} estimates more accurately when the driving speed remains stable; even when the driving speed fluctuates frequently, the error of \emph{SmartLoc}'s predictions stays within an acceptable range. Calibrating the location periodically is a feasible way to improve the location accuracy in real-life applications and makes \emph{SmartLoc} an alternative to traditional GPS for saving energy on the highway. \subsubsection{Evaluations Analysis} Based on the evaluation results presented in this section, an obvious conclusion is that \emph{SmartLoc} provides precise driving distance estimation in certain scenarios.
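The observation that accuracy degrades with driving distance can be reproduced with a small Monte Carlo sketch (all parameter values are illustrative): independent per-slot Gaussian errors accumulate into a total whose spread grows like $\sqrt{t}$.

```python
import math
import random

def total_error_std(t, mu=0.0, sigma=2.0, trials=20000, seed=1):
    """Empirical standard deviation of the accumulated error over t slots
    when each slot contributes an independent N(mu, sigma^2) error."""
    rng = random.Random(seed)
    totals = [sum(rng.gauss(mu, sigma) for _ in range(t))
              for _ in range(trials)]
    mean = sum(totals) / trials
    return math.sqrt(sum((s - mean) ** 2 for s in totals) / trials)
```

For $\sigma = 2$m the spread is about $2$m after one slot but about $8$m after $16$ slots, matching the $\sqrt{t}$ growth derived analytically next.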
In every time slot, the driving distance is estimated from the current sensor data together with our predictive regression model. Suppose the error (denoted $D_i$) in the estimation of each time slot $i$ follows a normal distribution, $D_i\sim{\mathcal{N}(\mu, \sigma^2)}$, with mean $\mu$ and variance $\sigma^2$. The estimate of the total traveling distance $S_t$ over $t$ timeslots is the sum of the traveling distances in all time slots, $S_t = {\displaystyle\sum\limits_{i=1}^{t} D_i}$, so the error accumulates in the long run: $S_t \sim{\mathcal{N}(t\mu, t\sigma^2)}$, with variance $t\sigma^2$. Thus, the mean error grows over time, which leads to the conclusion that it is difficult to predict the traveling distance precisely in the long term, even though the deviations in some consecutive timeslots may cancel each other. For a given error bound $\delta$, $Pr(S_t \geq \delta)$ is higher when $t$ is larger. \vspace{-0.1in} \subsection{Localization in the City} \begin{figure} \centering \subfigure[Error of location estimation in every sampling timeslot\label{fig:total_block_seg_cdf}] {\includegraphics[scale=0.25]{total_block_seg_cdf.eps}} \subfigure[Error in locating the final destination in different blocks\label{fig:total_block_cdf}] {\includegraphics[scale=0.25]{total_block_cdf.eps}} \vspace{-0.1in} \caption{Navigation performance evaluation.} \vspace{-0.15in} \label{fig:local_compare3} \vspace{-0.15in} \end{figure} We then present the localization results in downtown Chicago. As mentioned above, it is difficult to get the ground truth for the majority of the sampling locations, so we design experiments that estimate the final location. Section~\ref{sec:preliminary} has demonstrated that there are $9$ bad road segments with lengths over $400$m, which is less than $3$ blocks in downtown Chicago.
The goal of \emph{SmartLoc} is then to obtain a relatively accurate distance estimate within three blocks. We randomly select $100$ points as destinations in the experiment; a destination can be one, two, or three blocks away from the starting point. We drive to these destinations to evaluate whether \emph{SmartLoc} locates them precisely. We assume that the GPS signals are good before the starting point, and \emph{SmartLoc} trains the dead-reckoning model during the drive. In this experiment, we test the accuracy of estimating the traveling distance in every time slot and of estimating the overall driving distance (\textit{i.e.}\xspace, locating the final destination), as shown in Figure~\ref{fig:total_block_seg_cdf} and Figure~\ref{fig:total_block_cdf} respectively. When \emph{SmartLoc} navigates to a destination within one block, the location error in each sampling slot is less than $10$m with probability $70\%$, and the mean error is less than $30$m with probability $85\%$. When the destination is two blocks away, about $75\%$ of the errors are less than $30$m; when the destination is three blocks away, about $80\%$ of the errors are less than $50$m. From these figures, the error in locating destinations within a few blocks is acceptable. We also plot the localization results for one road segment of length over $6400$m in Figure~\ref{fig:michi_spot}. In this figure, the blue spots denote the ground truth generated from GPS, the red spots represent the localization calculated by \emph{SmartLoc}, and the green line between them shows the localization error for every location. \section{Conclusion} \label{sec:conclusion} This paper presented \emph{SmartLoc}, a metropolis localization system using the inertial sensors and the GPS module of smartphones.
We established a predictive regression model to estimate the trajectory using linear regression, and the proposed \emph{SmartLoc} detects road infrastructures and driving patterns as landmarks to calibrate the localization results. Our extensive evaluations show that \emph{SmartLoc} improves the localization accuracy to less than $20$m for more than $90\%$ of the roads in downtown Chicago, compared with $\geq 50\%$ with raw GPS data. \small{ \input{GPS-infocom2013.bbl} } \end{document}
\section{Introduction} Ecosystems -- from rainforests to the human gut -- can harbor a surprisingly large number of different species \cite{hutchinson1959homage,ghazoul2010tropical,lozupone2012diversity}. These species in general compete for a limited number of resources and possibly prey on each other. Inspired by this observation researchers from various fields examine the role biodiversity plays in complex ecosystems and how their community structure is shaped. However, mathematical studies of model ecosystems were mostly performed for systems with only a few species and resources \cite{macarthur_species_1970,tilman1982resource,huisman1999biodiversity}. Results obtained for small settings do not straightforwardly generalize to large systems characterized by collective phenomena and emergent properties \cite{levins1966strategy,anderson1972mor}. Starting with the pioneering work of May \cite{may1972will} such phenomena are increasingly addressed by studying large models with random parameters. This is a sensible approach if self-averaging properties of the system may be identified that only depend on the features of the underlying distributions and not on the individual realization of the randomness. Methods from the statistical mechanics of disordered systems then provide useful tools for a quantitative characterization of typical properties of the system \cite{TM17,tikhonov2017innovation,biroli2017marginally,advani2017environmental}. Along these lines, Tikhonov and Monasson \cite{TM17,tikhonov2016community} recently investigated a high-dimensional version of MacArthur's consumer-resource model \cite{macarthur_species_1970}. In this model, different species compete for a number of resources which are supplied by fixed influxes from the environment. An increasing population size of a species leads to a lower resource availability and consequently to a reduced growth rate, creating a negative feedback loop.
Even though the interactions between the species are purely competitive, Tikhonov and Monasson found a transition into a collective phase at large potential diversity, i.e., when the number of available species in the regional pool sufficiently exceeds the number of resources. In this phase, all resources are equally well available and the number of surviving species is equal to the number of resources, saturating the upper bound set by the competitive exclusion principle \cite{armstrong1980competitive}. Moreover, by performing a stability analysis Altieri and Franz \cite{altieri2018constraint} revealed that the collective phase exhibits marginally stable behavior. In \cite{TM17} the phase transition was examined by constructing a Lyapunov function for the population dynamics and a subsequent replica calculation characterizing its extremum. In the present letter we show that the transition is more general and may be derived without reference to the actual dynamics of the model. We first point out a connection between stationary states of MacArthur's model and the space of non-negative solutions to large systems of random linear equations. By the Farkas lemma this solution space is related to a random fractional volume in high dimensions, the typical value of which we then determine. \section{The model} The version of MacArthur's resource-competition model considered in \cite{TM17} consists of $S$ species with abundances $n_\mu\geq 0,\, \mu=1,\dots, S$ which can utilize $N$ resources with availability $h_i,\, i=1,\dots, N$. The resources are supplied from the outside by fixed influxes $R_i$ that, depending on the overall demand $T_i$, are reduced to the resource availabilities $h_i$, cf. Figure~\ref{fig:schematic of model}. Since we are interested in high-dimensional situations we consider the combined limit $N\to \infty,\, S\to \infty$ with the ratio $\alpha:=S/N$ staying constant.
As one of the main parameters of the model, $\alpha$ specifies the potential diversity of the ecosystem. \begin{figure} \centering \includegraphics[width=\linewidth]{Figure1.pdf} \caption{ Schematic of the resource-competition model. The fixed external resource influxes $R_i$ get modified to the resource availabilities $h_i$ depending on the overall demand $T_i$ of resource $i$. If many species with high abundances use the same resource, its availability will decrease; if only a few species with low abundances can use a resource, its availability will be high. Species are characterized by their metabolic strategies $\sigma_{\mu i}$ that specify which resources a species may consume. Increasing the abundance of one species will always imply a slower growth of the other species; the interaction is purely competitive.} \label{fig:schematic of model} \end{figure}% Each species $\mu$ is characterized by a metabolic strategy vector $\boldsymbol{\sigma}_\mu \in \mathbb{R}^N $ specifying which resources it can employ. Its entries $\sigma_{\mu i}$ are one if it may consume resource $i$ and zero otherwise. Moreover, each species needs a minimal resource supply $\chi_\mu$ in order to survive: if the resource intake is smaller than $\chi_\mu$, its population shrinks; otherwise it grows. The system dynamics is described by a Malthusian law \begin{equation} \frac{dn_\mu}{dt}=n_\mu \left[\sum_i \sigma_{\mu i} h_i - \chi_\mu \right]. \label{eq:Dynamics} \end{equation} In order to allow for a fair competition between specialists and universalists the threshold $\chi_\mu$ is chosen to increase with the number of utilizable resources \cite{remark}, \begin{equation}\label{eq:defchi} \chi_\mu=\sum_i \sigma_{\mu i}. \end{equation} The resource availabilities $h_i$ derive from the resource influxes $R_i$ and are decreasing functions of the total demand $T_i=\sum_\mu \sigma_{\mu i} n_\mu$ of resource $i$.
Different models of resource supply differ in the form of these depletion functions $h_i(T_i)$. We require that if influx and total demand balance for every resource the dynamics \eqref{eq:Dynamics} should be in a stationary state which according to \eqref{eq:defchi} corresponds to $h_i=1$ for all $i$. The dependence of the resource availabilities on the total demand is hence given by \begin{equation} h_i(T_i)=1-f_i(T_i), \label{eq:resource availability} \end{equation} where the functions $f_i$ are monotonically increasing functions of their argument and satisfy $f_i(R_i)=0$. We do not need any further specification of the depletion functions in our analysis. In line with previous investigations \cite{may1972will,tokita2015analytical,TM17} it is assumed that the model parameters $\sigma_{\mu i}$ and $R_i$ are independent random variables. More specifically, $\sigma_{\mu i}$ is taken to be one with probability $p$ and zero with probability $1-p$. Small values of $p$ therefore describe populations with many specialists whereas large $p$ favours universalists. The $R_i$ are taken to be of the form $R_i=1+\delta R_i$ where the fluctuations $\delta R_i$ are Gaussian random variables with zero mean and variance $r^2/N$. The scaled variance $r^2$ remains $O(1)$ for $N\to\infty$ and characterizes as a second central parameter of the model the heterogeneity of resource influxes to the system. For linearized depletion functions \eqref{eq:resource availability} it was shown in \cite{TM17} that the system possesses two different phases. For small potential diversity, $\alpha\leq\alpha_c$, the system is in the vulnerable or V-phase. Here the inhomogeneity of resource influxes $R_i$ penetrates down to the level of the resource availabilities $h_i$ and the number of surviving species is less than $N$. 
In contrast, in the shielded or S-phase at large potential diversity $\alpha$, all resource availabilities $h_i$ are equal to one despite the differences in the external influxes $R_i$. In this phase the species form a kind of collective field and shield each other from the external inhomogeneities. At the same time the number of surviving species attains its maximum value $N$. In their calculation Tikhonov and Monasson construct a convex Lyapunov function $F(\mathbf{n})$ which is bounded from above and increases on every trajectory. They show that the stationary states of the system lie on the boundary of the so-called unsustainable region in the space of resource availabilities \begin{equation} U=\bigcap^S_{\mu=1} \{ \mathbf{h} \, | \, \boldsymbol{\sigma}_\mu \cdot \mathbf{h} < \chi_\mu \}. \end{equation} A partition function of the system is then defined by \begin{equation} Z=\int_{U} \prod_i dh_i e^{\beta \tilde{F}(\mathbf{h})}, \end{equation} where the integration is performed over the unsustainable region and $\tilde{F}(\mathbf{h})$ is the Legendre transform of $F(\mathbf{n})$. In the limits $\beta,N \rightarrow \infty $ the expectation value $\frac{1}{\beta N}\left< \log Z \right>$ is determined by application of the replica trick, thereby characterizing the stationary states and determining the critical line of the phase transition. In the following we present a different approach to the transition which does not make use of the Lyapunov function. \section{Relation to systems of linear equations} \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{Figure2.eps} \caption{Average fraction of realizations of Eq.~(\ref{eq:linear equation}) which possess a non-negative solution $\mathbf{n}$. The solution space clearly separates into a region where typically no such solution exists (lower right) and a region where one typically does. 
The dashed line is the critical line determined in \cite{TM17} and also given by Eq.~\eqref{eq:Critical Line} below. The system size is $N=300$, $p=0.5$, and each data point was averaged over 50 realizations.} \label{fig:Fraction of solutions} \end{figure} \begin{figure}[] \vspace{-0.4cm} \centering \includegraphics[width=\linewidth]{Figure3.eps} \vspace{0.0cm} \caption{Finite size analysis of the critical value $\alpha_c$ at which the phase transition occurs. The figure shows the fraction of realizations of Eq.~(\ref{eq:linear equation}) which possess a non-negative solution for $r^2=0$ and $N=25, \, 125$ and $525$, respectively, ordered by increasing steepness. Each data point was averaged over 400 realizations. The insets show the finite-size approximations to $\alpha_c$ plotted as function of $N$ for $r^2=0$ (upper left) and $r^2=1$ (lower right). The dashed lines are the analytical results from Eq.~\eqref{eq:Critical Line}. $p=0.5$ for all cases shown here. For further details see the supplementary material.} \label{fig:Finite Size} \end{figure} The phase transition observed in the resource-competition model can be related to a problem in linear algebra. Let us denote by $\Hat{\sigma}$ the matrix $\sigma_{\mu i}$ of metabolic strategies, by $\mathbf{n}\in\mathbb{R}^{\alpha N}$ the vector of species abundances, and by $\mathbf{R}\in\mathbb{R}^N$ the vector of resource influxes. The central point is that the S-phase is characterized by $h_i=1$ for all resources $i$. By \eqref{eq:resource availability} this implies $T_i=R_i$ for all $i$ and using the definition of $T_i$ it translates to \begin{equation}\label{eq:linear equation} \Hat{\sigma}^T \mathbf{n}=\mathbf{R}. \end{equation} We therefore expect that the system is in the S-phase if these inhomogeneous linear equations possess a {\em non-negative} solution $\mathbf{n}, n_\mu\geq 0$ and that it is in the V-phase if no such solution exists.
Figure~\ref{fig:Fraction of solutions} tests this assumption on the basis of numerical simulations. Shown is the fraction of random realizations of Eq.~(\ref{eq:linear equation}) for which a non-negative solution was found by application of a least-squares solver \cite{usedsolver}. It is clearly seen that the solution space of Eq.~(\ref{eq:linear equation}) is separated into a phase in which typically no solution exists and a phase in which a solution can always be found. The dashed line marks the phase transition derived in \cite{TM17}. Similar behavior is found for other values of $p$. The observed transition becomes sharper when the system gets larger, as shown by the finite-size analysis of Fig.~\ref{fig:Finite Size}. As can be seen, the steepness of the transition increases with increasing system size $N$ and the extrapolated values of $\alpha_c$ converge to the analytical result given by Eq.~\eqref{eq:Critical Line}. \section{Analytical determination of the critical line} The question of whether the linear system \eqref{eq:linear equation} for large $N$ typically possesses a non-negative solution $n_\mu$ can be analyzed analytically. To this end we first employ Farkas' Lemma \cite{farkas1902theorie} that stipulates that for given $\Hat{\sigma}$ and $\mathbf{R}$ either \eqref{eq:linear equation} has a non-negative solution or there is a vector $\mathbf{y}\in \mathbb{R}^N$ such that \begin{equation}\label{Eq: Farkas Lemma} \Hat\sigma \mathbf{y} \geq 0\qquad\mathrm{and}\qquad \mathbf{R}\cdot\mathbf{y}<0. \end{equation} The intuitive meaning of this theorem is simple: the linear combinations of the row vectors $\boldsymbol{\sigma}_\mu$ of $\Hat{\sigma}$ with non-negative coefficients form what is called the non-negative cone of these vectors. If $\mathbf{R}$ lies within this cone, there is a non-negative solution to Eq.~\eqref{eq:linear equation}; if not, there must be a hyperplane (with normal vector $\mathbf{y}$) separating the cone from $\mathbf{R}$.
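The feasibility question behind Fig.~\ref{fig:Fraction of solutions} is easy to probe numerically. The following minimal sketch declares Eq.~\eqref{eq:linear equation} solvable when a non-negative least-squares fit attains a near-zero residual; the solver and the tolerance are our choices, not necessarily those of \cite{usedsolver}.

```python
import numpy as np
from scipy.optimize import nnls  # non-negative least squares

def has_nonnegative_solution(sigma, R, tol=1e-8):
    """True if sigma^T n = R admits a solution with n >= 0.

    NNLS minimizes ||sigma^T n - R|| over n >= 0, so a vanishing
    residual signals feasibility."""
    _, residual = nnls(sigma.T, R)
    return residual < tol

def random_instance(N, alpha, p, r2, rng):
    """Draw a random instance: binary strategies with density p and
    influxes R_i = 1 + Gaussian fluctuations of variance r2/N."""
    S = int(alpha * N)
    sigma = (rng.random((S, N)) < p).astype(float)
    R = 1.0 + rng.normal(0.0, np.sqrt(r2 / N), size=N)
    return sigma, R
```

Averaging `has_nonnegative_solution` over many draws of `random_instance` on a grid of $(\alpha, r^2)$ values reproduces the two regions of Fig.~\ref{fig:Fraction of solutions}.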
The dual problem defined by \eqref{Eq: Farkas Lemma} is rather similar to the storage problem in the theory of feedforward neural networks \cite{engel2001statistical} and can be addressed by similar means. We define the fractional volume of vectors $\mathbf{y}$ that fulfill Eq.~(\ref{Eq: Farkas Lemma}) \begin{align} \Omega(\Hat{\sigma},\mathbf{R}):=\frac{\int^\infty_{-\infty} \prod_i dy_i\, \delta \left(\sum_i y_i^2-N\right) \mathbf{1}(\mathbf{y};\hat{\sigma},\mathbf{R}) }{\int^\infty_{-\infty} \prod_i dy_i\, \delta(\sum_i y_i^2-N)}, \label{Equ: Volume of solutions} \end{align} with the indicator function \begin{align} \mathbf{1}(\mathbf{y};\hat{\sigma},\mathbf{R}):= \prod^{\alpha N}_{\mu=1} \Theta \left( \frac{1}{\sqrt{N}} \sum_i \sigma_{\mu i} y_i \right) \hspace{-0.1cm}\Theta \hspace{-0.1cm} \left(-\frac{1}{\sqrt{N}}\sum_i R_i y_i \right). \end{align} The spherical constraint $\sum_i y_i^2=N$ is introduced to lift the trivial degeneracy of solutions $\mathbf{y}\to\lambda\mathbf{y}$ for any positive $\lambda$. If $\Omega(\Hat{\sigma},\mathbf{R})$ is zero there are no solutions to \eqref{Eq: Farkas Lemma} and correspondingly there is a non-negative solution to \eqref{eq:linear equation}. Conversely, if $\Omega(\Hat{\sigma},\mathbf{R})$ is larger than zero, there are vectors $\mathbf{y}$ fulfilling \eqref{Eq: Farkas Lemma} and therefore no non-negative solution to \eqref{eq:linear equation} exists. The transition occurs when $\Omega(\Hat{\sigma},\mathbf{R})$ shrinks to zero. Due to the product structure of $\Omega$ the entropy $\frac{1}{N}\log \Omega$ is expected to be self-averaging with respect to $\Hat{\sigma}$ and $\mathbf{R}$. We may hence characterize the typical situation in a large system by considering the average entropy \begin{equation} S(\alpha,p,r^2):=\left< \log \Omega \right>_{\hat{\sigma},\mathbf{R}}. 
\end{equation} With the help of the replica trick \cite{mezard1987spin} and using standard techniques \cite{engel2001statistical} this entropy may be expressed as a saddle-point integral over order parameters (see supplementary material) \begin{equation} m^a=\frac{1}{\sqrt{N}}\sum_i y^a_i\quad\mathrm{and}\quad q^{ab}=\frac{1}{N}\sum_i y^a_i y^b_i. \end{equation} Within the replica-symmetric ansatz we find \begin{align}\nonumber S(\alpha,p,r^2)=\mathrm{extr}& \left[\frac{1}{2}\log(1-q)+\frac{q}{2(1-q)}-\frac{\kappa^2(1-p)}{2pr^2(1-q)}\right.\\ &\left.\qquad+\alpha\int Dt\, \log H \Big(\frac{\sqrt{q}\,t-\kappa}{\sqrt{1-q}}\Big)\right], \label{eq:Final entropy} \end{align} where the extremum is over $\kappa$ and $q$ and the abbreviations $Dt:=dt/\sqrt{2\pi}\,e^{-t^2/2},\; H(x):=\int_x^\infty Dt$ and $\kappa:=m\sqrt{p/(1-p)}$ were used. \\ At the transition the volume $\Omega$ shrinks to zero and the typical overlap $q$ between two different solutions $\mathbf{y}$ approaches one. Keeping only the most divergent terms of \eqref{eq:Final entropy} in this limit we find the following parametric representation of the critical line $\alpha_c(r^2)$ (see supplementary material): \begin{equation} r^2= \frac{1-p}{p}\frac{\kappa^2}{1-\alpha_c I(\kappa)}, \, \, \, \, \alpha_c H(\kappa)=1, \label{eq:Critical Line} \end{equation} where $I(\kappa):=\int_\kappa^\infty Dt\,(t-\kappa)^2$. This is the same result as found in \cite{TM17} exploiting the properties of the Lyapunov function. The expression for the entropy \eqref{eq:Final entropy} is rather similar to the one for the average entropy in the storage problem of a perceptron as obtained by Gardner \cite{gardner1988space}. In particular, for $r^2=0$ we find from \eqref{eq:Critical Line} $\kappa=0$ and therefore $\alpha_c=2$, the classical result for the storage capacity of the perceptron. 
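The critical line \eqref{eq:Critical Line} is straightforward to evaluate numerically. The sketch below uses the closed Gaussian forms $H(\kappa)=\tfrac{1}{2}\,\mathrm{erfc}(\kappa/\sqrt{2})$ and $I(\kappa)=(1+\kappa^2)H(\kappa)-\kappa\phi(\kappa)$, where $\phi$ denotes the standard normal density.

```python
import math

def H(k):
    """Gaussian tail probability H(k) = int_k^inf Dt."""
    return 0.5 * math.erfc(k / math.sqrt(2.0))

def I(k):
    """I(k) = int_k^inf Dt (t - k)^2 = (1 + k^2) H(k) - k phi(k)."""
    phi = math.exp(-0.5 * k * k) / math.sqrt(2.0 * math.pi)
    return (1.0 + k * k) * H(k) - k * phi

def critical_point(kappa, p):
    """Point (r^2, alpha_c) on the critical line for parameter kappa > 0,
    following the parametric representation alpha_c H(kappa) = 1 and
    r^2 = (1-p)/p * kappa^2 / (1 - alpha_c I(kappa))."""
    alpha_c = 1.0 / H(kappa)
    r2 = (1.0 - p) / p * kappa ** 2 / (1.0 - alpha_c * I(kappa))
    return r2, alpha_c
```

Sweeping $\kappa$ from $0^+$ upwards traces the dashed line of Fig.~\ref{fig:Fraction of solutions}; for $\kappa\to 0$ one recovers $r^2=0$ and $\alpha_c=2$.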
\section{Conclusion} We have shown that the phase transition in a high-dimensional version of MacArthur's resource-competition model discovered recently by Tikhonov and Monasson is related to the existence of non-negative solutions of large random systems of linear equations. The starting point of our analysis is the observation that the 'shielded' phase in which the species collectively regulate the resource demand to make all resources equally available is also characterized by the maximally possible number of surviving species set by the competitive exclusion principle. In contrast, in the 'vulnerable' phase in which the species are susceptible to disturbances from the environment the number of surviving species remains below this margin. Since concentrations cannot be negative the difference between the two phases is related to the existence of non-negative solutions for species abundances realizing appropriate resource availabilities. The transition depends on the ratio between the number of variables and the number of equations, the density of non-zero entries in the coefficient matrix and the variance of the inhomogeneity vector. The existence of non-negative solutions to underdetermined linear equations is an active field of research in its own right. While prevalent techniques require sparseness of the solutions \cite{donoho2005sparse,wang2011unique} here we make -- in the limit where the number of unknowns and the number of equations tend to infinity -- also predictions for dense solutions. Using Farkas' Lemma the question on the existence of non-negative solutions to linear systems can be mapped onto a dual problem involving a set of linear inequalities. Using methods from the statistical mechanics of disordered systems we have analytically analyzed the typical properties of this dual problem in the thermodynamic limit.
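As an aside, the primal-dual pair can be probed numerically for generic instances. The sketch below assumes a system $A\mathbf{x}=\mathbf{b}$ with $\mathbf{x}\ge0$ and uses SciPy's non-negative least squares; it is an illustration, not the ensemble analyzed above. The KKT conditions of NNLS imply that a nonzero residual $\mathbf{y}=A\mathbf{x}^*-\mathbf{b}$ is itself a Farkas certificate: $A^T\mathbf{y}\ge0$ and $\mathbf{b}^T\mathbf{y}=-\|\mathbf{y}\|^2<0$.

```python
# Test a linear system A x = b for a non-negative solution; produce a
# Farkas certificate when none exists.  NNLS solves min ||A x - b|| over
# x >= 0; its KKT conditions give A^T (A x* - b) >= 0 componentwise with
# x*_i (A^T (A x* - b))_i = 0, so y = A x* - b satisfies A^T y >= 0 and
# b^T y = -||y||^2 < 0 whenever the residual is nonzero.
import numpy as np
from scipy.optimize import nnls

def nonneg_solvable(A, b, tol=1e-8):
    """Return (feasible, farkas_certificate_or_None)."""
    x, rnorm = nnls(A, b)
    if rnorm < tol:
        return True, None
    return False, A @ x - b

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 8))
b_feas = A @ rng.uniform(0.1, 1.0, size=8)   # b built from some x >= 0
feasible, _ = nonneg_solvable(A, b_feas)

A2 = np.array([[1.0, 1.0]])                  # x >= 0 forces A2 x >= 0
infeasible, y = nonneg_solvable(A2, np.array([-1.0]))
```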
The result is in perfect agreement with numerical simulations, reproduces the transition line found by Tikhonov and Monasson, and points out an interesting connection with the storage problem of the single-layer perceptron. This link may indicate a way to further improvements in the quantitative characterization of large random ecosystems. \acknowledgments We would like to thank Remi Monasson, Katharina Janzen and Mattes Heerwagen for fruitful discussions. Financial support from the German Science Foundation DFG under grant EN 278/10-1 is gratefully acknowledged.
\section*{\textsf{Introduction}} \lettrine[lines=2]{T}{he} ability to capture two-dimensional (2D) image information is extremely significant in plenty of applications, ranging from astronomical observation \cite{Kastner2002} and phase retrieval \cite{YuOC2017} to hyperspectral imaging \cite{Studer2012}. However, the sampling specified by the Nyquist-Shannon criterion for all pixels of an object image generally involves the acquisition of a huge amount of information, and also raises challenges of both transmission and storage. In addition, imaging with high spatial resolution requires careful compromises between the conflicting goals of acquiring a high signal-to-noise ratio (SNR) for each pixel-unit whilst keeping the acquisition time low. For example, in order to obtain a higher SNR, one can either increase the integration time \cite{Woringer2017}, just as in pixelated array/scanning detectors, thereby increasing the acquisition time, or gather the total luminous flux together, just as in single-pixel systems, but sacrificing the sampling time and the computation time. To our knowledge, single-pixel imaging (SPI) can be traced back to early raster-scanning schemes, such as the flying-spot camera of 1884 and optical coherence tomography \cite{Huang1991} in 1991. As an alternative, it is also possible to calculate the intensity correlation between the random-modulated illumination patterns and the detected bucket signals, based on a statistical mechanism called ghost imaging (GI) \cite{ShihPRL1995,Boyd2002,Shapiro2008,Zhao2012,YuOE2014}. But in traditional GI, in order to obtain a good reconstruction, the number of measurements should be considerably higher than the pixel dimension $N$ of the object image. Hadamard \cite{Duran2012,Clemente2013,MJSunNC2016,Huynh2016,Lochocki2016} and Fourier \cite{Zhang2015,Bian2016,Jiang2017} SPI are two other techniques that use complete deterministic orthogonal bases, allowing one to reduce the number of measurements to $N$.
Since there is no need to use any pixelated array, SPI schemes can largely improve the flux, which is a benefit especially under ultra-weak light conditions. Additionally, they can work at low cost at non-visible wavelengths, where array cameras are expensive and not well developed \cite{Radwell2014}. Recently, some SPI methods based on compressed sensing (CS) \cite{Donoho2006,Candes2006,Candes2008,Baraniuk2008} have been proposed to acquire a better performance by exploiting the sparsity of the object. CS enables a considerable decrease in the number of measurements without compromising the SNR, but at the cost of increased computational time (minutes or hours). We also note that, in general, one cannot acquire good image quality when the sampling ratio is below 30\%. Many CS applications in fields including magnetic resonance imaging \cite{Lustig2008}, astronomy \cite{Bobin2008}, and microscopic imaging \cite{Shin2017,YuOC2016} have resolutions generally smaller than $128\times128$ pixels. Besides, the larger the pixel resolution, the more stringent the computational restrictions, which is unsuitable for practical real-time imaging. In terms of imaging mechanisms with few measurements, differential GI \cite{Ferri2010}, an early differential SPI scheme, enhances the SNR by subtracting the background noise. Correspondence imaging (CI) was then proposed by Luo et al. \cite{Luo2011,Luo2012} and explained from many perspectives \cite{Shih2011,YuCPB2015,Yao2015}; it allows one to obtain a positive or negative image by conditionally averaging the reference patterns, without any correlation calculation. As a result, the number of measurements is cut by at least half. Although diverse CI-based schemes \cite{LiAPL2013,MJSunAO2015,GLLi2016,Wu2017} have been developed, the performance is not greatly improved.
It was then found that, based on the differential measurements of random binary patterns and their inverse \cite{BSunSci2013}, a ghost image can be reconstructed by correlating only the non-inverted patterns. Since the inverse patterns can also be utilized, a technique called complementary compressive imaging \cite{YuSR2014} and its derivative, differential CS \cite{YuOC2016}, have been proven to produce image quality orders of magnitude better than conventional CS or GI, with a sampling ratio around 15\%. More recently, an approach based on ``Russian Dolls" (RD) ordering \cite{MJSunSR2017} was proposed, yielding quality comparable to CS at a 6\% sampling ratio, but the spatial resolution is still limited. Here we present a single-pixel compressive imaging technique that can acquire high-quality images of large pixel scale $1024\times1024$ at a deeply sub-Nyquist sampling ratio, even below 0.2\%, by using a strategy that we call the ``cake-cutting" (CC) sort to optimally reorder the deterministic Hadamard basis. Physically, the sorting is based on the contribution of each basis pattern to the reconstruction, an idea inspired by CI. Following this idea, the most significant patterns are always modulated first. Meanwhile, by utilizing the structured characteristic of the Hadamard matrix, the computational overhead and the memory consumption are greatly reduced. The method is shown through numerical simulation to significantly reduce the number of measurements along with the acquisition time. Furthermore, with a single-photon single-pixel camera setup based on differential modulation, we demonstrate its ability to retrieve clear images through partially obscuring scenes under noisy environmental illumination conditions.
\section*{\textsf{Results}} \textbf{Principles description.} The core idea of single-pixel imaging is to shift the spatial resolution away from the array detectors and onto the modulated patterns, which are typically generated by a spatial light modulator, in order to acquire the spatial information of the target. As a consequence, the spatial resolution of each pattern should be equivalent to the pixel resolution $p\times q$ of the target image $x$. By leveraging the fact that natural images can be sparsely represented in an appropriate basis $\Psi$, i.e., $x=\Psi x'$, compressive imaging methods allow one to reconstruct the images with a few patterns whose number is only a fraction of the number of pixels $N$. Here we define the sampling rate as the ratio of the number of measurements to the number of pixels. This sampling rate is much smaller than that prescribed by Nyquist sampling. It typically takes $M=O(K\cdot\log(N/K))<N$ random patterns, assuming all but the largest $K(\leq N)$ coefficients of the sparse representation in some basis can be set to zero. Thus CS in principle reduces the acquisition time. Generally, $\Psi$ is an invertible (e.g. orthogonal) matrix or a redundant dictionary. The image $x$ can be reshaped into a column vector of size $N\times1$, where $N=p\times q$. In CS, the patterns are modulated by a digital micromirror device (DMD) consisting of millions of micromirrors, each of which is oriented at either $12^\circ$ or $-12^\circ$ with respect to the normal of the DMD work plane, corresponding to a bright pixel 1 or a dark pixel 0. Each pattern $a_{ij}$ sequentially encoded on the DMD can be flattened into a row vector of size $1\times N$, thus $M$ such binary patterns constitute a known $M\times N$ measurement (or sensing) matrix $A$. This measurement matrix actually projects the object signal $x$ into a single-pixel (bucket) compressed signal $y=Ax+e$ of smaller size $M\times1$.
Here $e$ is of the same size $M\times1$, denoting the stochastic noise. Thereby, the single-pixel total intensity measurement is mathematically equivalent to the inner product between each pattern and the object image, i.e., the interaction between the pattern sequence and the scene. Then the goal is to solve such an ill-posed linear problem through optimization algorithms by finding the sparsest representation $x'$ such that $y=A\Psi x'+e$. In order to ensure a good estimation of $x'$, the measurement matrix $A$ should satisfy the restricted isometry property (RIP) \cite{Candes2008}, which requires that column vectors taken from arbitrary subsets of the sensing matrix be approximately orthogonal and that the sensing matrix be incoherent with $\Psi$. As we know, Hadamard matrices fulfil this property. Here we apply total variation minimization \cite{CBLi2010}, whose objective function can be written as an augmented Lagrangian function: \begin{equation} \min\limits_{x}\sum\limits_i||D_ix||_p+\frac{\mu}{2}||y-Ax||_2^2, \end{equation} where $||x||_p=(\sum_{i=1}^{N}|x_i|^p)^\frac{1}{p}$, $D_ix$ denotes the discrete gradient vector of $x$ at the $i$th position, $D$ is the gradient operator, and $\mu$ is a balance constant. Here a TVAL3 solver \cite{CBLi2010} is used to recover the image. In this work, we make use of the Hadamard matrices to form our patterns. A Hadamard matrix is named after the French mathematician Jacques Salomon Hadamard. It is a symmetric square matrix with entries $\pm1$. Let $H$ be a Hadamard matrix of order $N$; then we have $HH^T=NI_N$ and $H^T=H$, where $I_N$ is the $N\times N$ identity matrix and $T$ represents the transpose operator. Dividing $H$ by $\sqrt{N}$ gives an orthogonal matrix whose transpose equals its inverse.
The Hadamard matrix of order $2\leq2^k\in N$ can be given by the following recursive formula \begin{equation} H_{2^k}=\left[{\begin{array}{*{20}{c}} H_{2^{k-1}}&H_{2^{k-1}}\\ H_{2^{k-1}}&-H_{2^{k-1}} \end{array}}\right]=H_2\otimes H_{2^{k-1}}, \end{equation} where $H_1=[1]$, $H_2=\left[{\begin{array}{*{20}{c}} {1}&{1}\\ {1}&{-1} \end{array}}\right]$, and $\otimes$ stands for the Kronecker product. Such a naturally ordered Hadamard matrix is also called a Walsh matrix. It is interesting to note that the elements in the first row and the first column are all ones. In CI, the patterns can be easily divided into a positive and a negative subset according to the bucket intensity fluctuation $\Delta y_i=y_i-\langle y\rangle$, expressed as \begin{equation} A^+=\{A_i|\Delta y_i\geq0\}\ \textrm{and}\ A^-=\{A_i|\Delta y_i<0\}, \end{equation} where $\langle\cdot\rangle$ signifies the ensemble average. A positive (or negative) correspondence image can be recovered by averaging only some fraction of the subset $A^+$ (or $A^-$). From this theory, it can be seen that the single-pixel (bucket) intensity is proportional to the contribution of the pattern to the image reconstruction. Now we plot the ordinary intensity signal $y=Ax$ in Fig.~\ref{fig:y}(a). Here the measurement matrix $A$ is generated by randomly scrambling the rows and columns of $H$. The intensity values show a positive and negative distribution due to the fact that the entries of $A$ are either $+1$ or $-1$. Then we sort the signal $y$ and its absolute value $|y|$ in descending order, as displayed in Figs.~\ref{fig:y}(b)--\ref{fig:y}(c). By using the complete set of the patterns, the CS retrieved image is illustrated in Fig.~\ref{fig:y}(e). After that, we use different sampling rates of 6.25\%, 12.50\%, 25.00\% and 50.00\% for CS image reconstruction, while the intensity values $y$ (or $|y|$) are in descending and ascending order, respectively.
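The Kronecker recursion and the CI subset splitting described above can be sketched as follows (a toy numerical illustration; the random vector stands in for an object):

```python
# Build the naturally ordered Hadamard (Walsh) matrix via the recursion
# H_{2^k} = H_2 (Kronecker) H_{2^{k-1}}, verify H H^T = N I, and split the
# patterns into the CI subsets A+ / A- by the sign of the bucket fluctuation.
import numpy as np

def hadamard_natural(n):
    """Naturally ordered Hadamard matrix of order n (a power of two)."""
    H = np.array([[1]])
    H2 = np.array([[1, 1], [1, -1]])
    while H.shape[0] < n:
        H = np.kron(H2, H)
    return H

N = 16
H = hadamard_natural(N)
assert np.array_equal(H @ H.T, N * np.eye(N, dtype=int))
assert np.all(H[0] == 1) and np.all(H[:, 0] == 1)  # first row/column all ones

x = np.random.default_rng(0).random(N)   # toy "object"
y = H @ x                                # bucket signal for each pattern
dy = y - y.mean()                        # bucket intensity fluctuation
A_plus, A_minus = H[dy >= 0], H[dy < 0]  # CI positive / negative subsets
```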
The front part of Figs.~\ref{fig:y}(f)--\ref{fig:y}(g) shows the retrieved images using the most significant fractions. From the results given in the latter part of Figs.~\ref{fig:y}(f)--\ref{fig:y}(g), we can see that, by using CS, the first few fractions of the subset of $A$ corresponding to $|y_i|-\langle|y|\rangle<0$, compared with the ones for $\Delta y_i<0$, present poorer quality since, more precisely, they are inessential patterns. The results along with the relative errors (RE) demonstrate that, with the same sampling ratio, the significant fractions of the subset of $A$ with respect to $|y_i|-\langle|y|\rangle>0$ yield a better performance (see the front part of Figs.~\ref{fig:y}(f)--\ref{fig:y}(g)). Consequently, the most significant patterns should be modulated first. Although there exist many orderings of the Hadamard matrix, such as sequency order and dyadic order, none of them makes the most significant patterns appear first. Since it is hard to know \textit{a priori} which patterns generate the most significant intensity values, one must perform a complete sampling and then pick out the crucial patterns according to the recorded signal. \begin{figure}[H] \centering\includegraphics[width=0.95\linewidth]{figure-1} \caption{Image reconstructions with different subsets of Hadamard patterns. (a) The original distribution of the bucket intensity signal $y$. (b) and (c) give the intensity distributions when $y$ and $|y|$ are in descending order, respectively. (d) An original image of $128\times128$ pixels. (e) CS reconstruction using a complete set of the random patterns. (f) and (g) Comparison of CS reconstructions with compression ratios of 6.25\%, 12.50\%, 25.00\% and 50.00\%, while the signal $y$ (or $|y|$) is in descending and ascending order, respectively.} \label{fig:y} \end{figure} The RD ordering \cite{MJSunSR2017} of the Hadamard basis provides an alternative GI approach. It renumbers the rows as follows.
The rows of the Hadamard matrix are ordered such that the top half of $H_{2^{2z}}$ equals the rows of $H_{2^{2z-1}}$ (in the two-dimensional pattern view, the former is a scaled version of the latter by a factor of 2), just like a set of Russian dolls; the third quarter of $H_{2^{2z}}$ is then ordered as the transpose of its second quarter, the rest is catalogued into the fourth quarter, and at last the patterns within each quarter are reordered again. By this means, the image can be reconstructed from a subset (e.g. a 6\% sampling ratio) of significant patterns, with a quality comparable to that of CS. And there is no need to disorder the internal pixel layout in each pattern. But the RE of the RD-based GI method presents a sawtooth descent as the sampling ratio increases, so its performance curve is neither stable nor smooth. Since this method is based on the second-order correlation, it is sensitive to the environmental noise. Additionally, the pixel resolution is also limited, generally to $128\times 128$ pixels, as the ordering operation is too complex. Meanwhile, once the sampling ratio is fixed, the number of patterns required is determined by the total number of pixels of the reconstructed image. Therefore, it leads to a long acquisition time, especially when imaging at large pixel resolutions. \noindent\textbf{Cake-cutting Hadamard basis sort.} In this work, we propose a cake-cutting (CC) strategy to generate an optimized sort of the Hadamard basis. At first, each row of the Hadamard matrix $H$ is reshaped into a matrix of $p\times q=N$ pixels. Imagining each reshaped basis pattern as a cake, we can count how many pieces this cake is cut into. One piece of the cake is defined as a connected region. In topology, a connected region is one that cannot be represented as the union of two or more disjoint nonempty open subsets. This means each piece of the cake is either all $-1$ (in black) or all $1$ (in white).
Thus the piece number of the cake is the number of connected regions of $-1$ plus the number of connected regions of $1$. Besides, for one pixel in one basis pattern, its adjacent pixels (up and down, left and right) with the same value can all be treated as part of its connected region. According to the CI theory described above, only a small fraction of the complete patterns contributes to a larger intensity value $|y_i|$. Here we find that the fewer connected regions a pattern contains, the more likely this pattern is to be significant, i.e., to generate a higher measured value for a common object. Therefore, we order the complete Hadamard basis patterns according to their piece numbers and acquire a sort sequence $seq$ of size $N\times1$. After that, the $N$ reordered patterns can be flattened into $N$ row vectors, each of size $1\times N$, forming an $N\times N$ measurement matrix. Fig.~\ref{fig:Hadamard} gives an example of how our cake-cutting Hadamard basis sort works. By picking out each row of the $H_{16}$ matrix (Fig.~\ref{fig:Hadamard}(a)) and transforming each row into a $4\times4$ 2D pattern, a complete set of $16$ Hadamard basis patterns is presented in Fig.~\ref{fig:Hadamard}(b), i.e., in natural order. Following our cake-cutting strategy, these Hadamard basis patterns are sorted by their piece numbers (see Fig.~\ref{fig:Hadamard}(c)). After that, we rebuild a Hadamard matrix $H_{16384}$ of order $N=128\times 128$, which is used to reconstruct an image of $128\times 128$ pixel resolution. Using our method, we compare the performance of images recovered from the first 6.25\%, 12.50\%, 25.00\% and 50.00\% of the complete measurements when the piece (block) numbers of the patterns are in ascending and descending order, respectively, as shown in Figs.~\ref{fig:Hadamard}(e)--\ref{fig:Hadamard}(f). The results fit quite well with the theory.
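The piece counting and sorting can be sketched as follows (an illustration assuming SciPy's \texttt{ndimage.label}, whose default 4-connectivity matches the up/down/left/right adjacency used above):

```python
# Cake-cutting sort sketch: reshape each Hadamard row into a p x q pattern,
# count its connected +1 and -1 regions ("pieces"), and order the rows by
# ascending piece number.
import numpy as np
from scipy.ndimage import label
from scipy.linalg import hadamard

def piece_number(pattern):
    """Number of connected regions of +1 plus those of -1 (4-connectivity)."""
    n_pos = label(pattern == 1)[1]
    n_neg = label(pattern == -1)[1]
    return n_pos + n_neg

p = q = 4
H = hadamard(p * q)                      # natural order, order 16
pieces = [piece_number(row.reshape(p, q)) for row in H]
cc_order = np.argsort(pieces, kind="stable")
H_cc = H[cc_order]                       # cake-cutting ordered basis

# The all-ones row is a single piece, so it always comes first.
assert pieces[0] == 1 and cc_order[0] == 0
```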
From the RE data, we can see that the performance improves with the compression ratio for the ascending order, but this does not hold for the descending order. One might expect the difference of the corresponding recovered images to produce a better quality, but it does not turn out that way, due to the environmental noise, as shown in Fig.~\ref{fig:Hadamard}(g). \begin{figure}[H] \centering\includegraphics[width=0.95\linewidth]{figure-2} \caption{An example of the ``cake-cutting" Hadamard basis sort. (a) A $16\times16$ Hadamard matrix $H_{16}$. (b) The basis patterns of $H_{16}$ in natural order. (c) The basis patterns of $H_{16}$ in the optimized ``cake-cutting" order. (d) An original head phantom image of $128\times128$ pixels. (e) and (f) Comparison of CS reconstructions with the first 6.25\%, 12.50\%, 25.00\% and 50.00\% of the fully sampled measurements while the piece numbers are in ascending and descending order, respectively. (g) The difference image of the fourth images of (e) and (f).} \label{fig:Hadamard} \end{figure} \noindent\textbf{Fast Hadamard computation.} Now let us recall the structured characteristic of the Hadamard matrix. In computational mathematics, the Hadamard ordered fast Walsh-Hadamard transform (FWHT) is an efficient algorithm which reduces the computational complexity of the original $n$-order Walsh-Hadamard transform (WHT) from $O(n^2)$ to $O(n\log n)$, where $O$ denotes the order of growth. It is a divide-and-conquer algorithm that recursively divides a WHT problem of size $n$ into two smaller WHT sub-problems of size $n/2$ \cite{Fino1976}. The idea of the FWHT applied to a column vector of length 16 is illustrated in Fig.~\ref{fig:Calculation}. Actually, the operation $H_Nx=y$ can also be explained by a graph with a set of vertices and edges. The weight of each edge is either $1$ or $-1$.
Just like neural networks or convolutional neural networks, the Hadamard matrix can be regarded as a propagation function or a network consisting of connections, in which each connection transfers the output of a neuron $i$ to the input of another neuron $j$, whilst $x=\{x_1,x_2,x_3,\ldots,x_N\}^T$ and $y=\{y_1,y_2,y_3,\ldots,y_N\}^T$ can be treated as the original input and the final output. There should be $\log_2N-1$ hidden layers, depending on the order $N$ of the Hadamard matrix. Additionally, there are $N$ transverse edges and $N$ intersection edges from the current layer to the next layer, and each layer contains $N$ neurons. From the graph, it is interesting to find that all oblique intersection lines are in green, and all the lines that point to the right are half in green and half in red, where green stands for plus and red for minus. \begin{figure}[H] \centering\includegraphics[width=0.95\linewidth]{figure-3} \caption{Simplified mathematical model (graph) for the Hadamard matrix multiplication, $H_{16}$ for example.} \label{fig:Calculation} \end{figure} For the Hadamard matrix $H$ of order $N$, the number of computation layers of the fast computation of $H_Nx=y$ is $\log_2N$. First of all, we initialize an intermediate column vector $b$ such that $b=x$ and let $t=N/2$ be the original length of each group in the first layer (i.e., the input layer). For the $i$th layer except for the output layer, i.e., $i=1,2,3,\ldots,\log_2 N$, the current $x$ can be divided into $2^i$ groups. In the $i$th layer, we traverse $2^{i-1}$ times to compute every element in each group such that $temp=b(index)$, $b(index)=temp+b(index+t)$ and $b(index+t)=temp-b(index+t)$, where $index$ ranges from $1+2(j-1)t$ to $(2j-1)t$, $j$ runs from 1 to $2^{i-1}$, and $t$ denotes the length of each group in this layer.
In the next, $(i+1)$th, layer we update $t$ via $t=t/2$, and repeat the above operations until the $(\log_2N+1)$th layer is reached. At last, $y=b$. If we want to pick out the $r_i$th row of the Hadamard matrix to form a modulated pattern, one just needs to compute $H_Nx$ where the $r_i$th element of $x$ is set to one and the rest to zero. The operation $H_N^{-1}x$ is equivalent to calculating $\frac{1}{\sqrt{N}}H_N^Tx=\frac{1}{\sqrt{N}}H_Nx$. Similarly, the $r_i$th element of $y$ can also be easily obtained after performing the above graph calculation. The sequency ordered (also known as Walsh ordered) FWHT is generated by computing the natural (Hadamard) ordered FWHT and then rearranging the outputs. Therefore, here we perform our cake-cutting strategy on the Walsh ordered operator, and choose the front $M$ elements of the output signal $y$ following the cake-cutting sort sequence. It should be noted that if the operator is chosen as another kind of FWHT, such as the dyadic (Paley) ordered or the natural (Hadamard) ordered FWHT, the CC method should also be applied to the corresponding operator, otherwise the optimized order will be incorrect. Based on the above rules, the computation can be greatly simplified. Since there is no need to store the sensing matrix again, the memory consumption is also dramatically reduced. As for the order sequence generation time of our method, we present the comparison in Table~\ref{tab:time}. Here we assume that the image to be reconstructed is square, i.e., $p=q$. Since there is no need for the nested grouping as in the RD method, our approach can greatly reduce the generation time of the sort sequence, especially for a large scale Hadamard matrix. Still, computing the number of connected regions is time-consuming (see the fourth row in Table~\ref{tab:time}), thus we plot the piece number of the Walsh ordered patterns against the pattern number, as shown in Fig.~\ref{fig:rule}.
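Putting the layer-by-layer butterfly described above into code gives the following sketch, verified against the explicit matrix product (an illustration; the variable names are ours):

```python
# In-place butterfly FWHT: log2(N) layers, each combining the pairs
# (b[i], b[i+t]) -> (b[i]+b[i+t], b[i]-b[i+t]) within groups of length 2t,
# with t halving between layers.  Returns H_N @ x in O(N log N).
import numpy as np
from scipy.linalg import hadamard

def fwht(x):
    """Natural (Hadamard) ordered fast Walsh-Hadamard transform."""
    b = np.array(x, dtype=float)
    N = b.size
    t = N // 2
    while t >= 1:                            # one pass per layer
        for start in range(0, N, 2 * t):     # each group of length 2t
            for i in range(start, start + t):
                tmp = b[i]
                b[i] = tmp + b[i + t]
                b[i + t] = tmp - b[i + t]
        t //= 2
    return b

# Check against the explicit N x N matrix product.
N = 16
x = np.arange(N, dtype=float)
assert np.allclose(fwht(x), hadamard(N) @ x)
```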
Fortunately, we find that there exists a remarkable regularity for $i=1,2,3,\ldots,q$, that is \begin{equation} \left\{\begin{array}{l} Seq\left[{(i-1)p+1:ip} \right]=i:-{(-1)^{mod(i,2)}}i:ip,{\textrm{\ for\ }}i{\textrm{\ odd,}}\\ Seq\left[{(i-1)p+1:ip} \right]=ip:-{(-1)^{mod(i,2)}}i:i,{\textrm{\ for\ }}i{\textrm{\ even.}} \end{array}\right. \end{equation} By this means, we provide the order sequence generation time of $\textrm{CC}_{\textrm{rule}}$ for different $n$ in the fifth row of Table~\ref{tab:time}. From the data, we can see that this rule further dramatically reduces the generation time of our CC method and makes it more practical. Note that this regularity is only effective for the Walsh ordered patterns. \begin{table}[ht] \centering \caption{Comparison of the order sequence generation time (s) for RD and our cake-cutting method.} \vspace{0.15cm} \begin{tabular}{cccccccc} \hline\hline $n$&64&256&1024&4096&16384&65536\\ $p=q$&8&16&32&64&128&256\\ \hline RD&0.0126&0.0957&2.2301&103.0852&5818.4470&too long\\ CC&0.0052&0.0232&0.1629&2.0058&33.8586&3220.034\\ $\textrm{CC}_{\textrm{rule}}$&0.000046&0.000051&0.000055&0.000111&0.000224&0.000562\\ \hline\hline \end{tabular}\label{tab:time} \end{table} \begin{figure}[H] \centering\includegraphics[width=0.95\linewidth]{figure-4} \caption{The piece number of the Walsh ordered patterns as a function of the pattern number.} \label{fig:rule} \end{figure} \noindent\textbf{Numerical simulations.} In order to test the performance of our method for image reconstruction, numerical simulations are performed. We first create a head phantom image as the original image, which is normalized to the range $0\sim255$.
The results are acquired from 1\% and 2\% measurements by using five different approaches: CS, differential compressed sensing (DCS), sorted compressed sensing (SCS) (as in Fig.~\ref{fig:y}(g)), ``Russian Dolls" CS (RDCS), and our ``cake-cutting" method, as illustrated in Figs.~\ref{fig:Performance}(a)--\ref{fig:Performance}(b). It is worth mentioning that the ``Russian Dolls" method used here is applied to compressive imaging, rather than to ghost imaging as in the original scheme \cite{MJSunSR2017}, which definitely generates a better image quality. It only takes a little more (negligible) time for our ``cake-cutting" method to iteratively compute the images, but it yields a much better performance than the other existing methods. Then we draw the RE and the peak signal-to-noise ratio (PSNR) of the reconstructed images as functions of the sampling ratio. From Figs.~\ref{fig:Performance}(c)--\ref{fig:Performance}(d), it is clearly seen that our CC method is much better than the CS and RD methods, with an overwhelming superiority at any sampling ratio, and exceeds the DCS and SCS methods for sampling ratios over 40\%. As mentioned above, SCS has a major drawback: it needs to fully sample the image and then pick out the most significant intensities. Our CC method makes up for this defect. It is important to note that RE and PSNR, serving as the performance metrics, both quantify the visibility via the calculation of pixel errors. These pixel-wise performance measures may fail to capture the correlation structure of natural images and may cause evaluation misjudgments; e.g., an image which is supposed to have a better visibility may instead have a worse RE or PSNR value. Thus, in Figs.~\ref{fig:Performance}(a)--\ref{fig:Performance}(b), the CC method actually has a much better image quality when the sampling ratio is very low, although this cannot be characterized very well by the data of Figs.~\ref{fig:Performance}(c)--\ref{fig:Performance}(d).
\begin{figure}[H] \centering\includegraphics[width=0.95\linewidth]{figure-5} \caption{Comparison of the simulation results. (a) and (b) show the reconstructions using compressed sensing (CS), differential compressed sensing (DCS), sorted compressed sensing (SCS), ``Russian Dolls" compressed sensing (RDCS), and our ``cake-cutting" (CC) method, with sampling rates of 1\% and 2\%, respectively. The corresponding evaluation parameters, such as the recovery time, relative error (RE) and peak signal-to-noise ratio (PSNR), are also given here. (c) and (d) give the comparison of the above methods, in terms of RE along with PSNR, as a function of the sampling ratio.} \label{fig:Performance} \end{figure} Next, another simulation is made to test the applicability of our method to object images of large pixel scale. The original gray-scale objects are chosen from the open access standard test image gallery. Here the images of the man (Fig.~\ref{fig:Large}(a)), the baboon (Fig.~\ref{fig:Large}(g)) and Lena (Fig.~\ref{fig:Large}(i)) are used, all with the same resolution of $1024\times1024$ pixels. The reconstructions of the man image are performed at sampling ratios from 0.78\% to 12.50\%, with a $2\times$ stepping increase, as shown in Figs.~\ref{fig:Large}(b)--\ref{fig:Large}(f). Since these results are retrieved with the same CC method, it is reasonable to use the RE and PSNR as the quality evaluation criteria. From the results we can see that the image quality and the calculation time increase with the sampling ratio. The reconstructions (see Figs.~\ref{fig:Large}(h) and \ref{fig:Large}(j)) for different object images at a 12.5\% sampling ratio aim to simulate the imaging of general scenes. For the color-scale case, we choose our school badge (Fig.~\ref{fig:Large}(k)) as the object, which is split into red, green and blue layers.
By synthesizing the recovered images of the three wavelength components, the reconstruction of the color image can be obtained, as shown in Fig.~\ref{fig:Large}(l). The result shows that a multi-wavelength composite image can be reconstructed clearly with 255 tones and little color distortion. The above results are all obtained with additive white Gaussian noise (its mean is 1\% of the mean of the measured values, and its variance is 1). \begin{figure}[H] \centering\includegraphics[width=0.95\linewidth]{figure-6} \caption{Reconstructions of different large-pixel-size 2D images, covering gray-scale and color-scale, all of $1024\times1024$ pixels. (a) An original man image. (b)--(f) Retrieved images with a $2\times$ stepping increase of the sampling ratio. (g) An original mandrill image. (h) Recovered image of the mandrill with a sampling ratio of 12.50\%. (i) An original Lena image. (j) The reconstructed image of Lena using 12.50\% measurements. (k) The original color image of the school badge of Beijing Institute of Technology. (l) The reconstruction of the school badge with a compression ratio of 12.50\%. All original images are open access. We acknowledge Beijing Institute of Technology for the permission to use the school badge as an experimental object.} \label{fig:Large} \end{figure} \noindent\textbf{Experimental setup and results.} In our experimental setup, as shown in Fig.~\ref{fig:Experiments}(a), the object is illuminated by the collimated and attenuated thermal light beam emitted from a stabilized halogen tungsten lamp whose wavelength range covers 360~nm to 2600~nm. Several 2 inch $\times$ 2 inch neutral density filters are used to attenuate the light to the ultra-weak light level. The light transmitted by the object is vertically incident upon a DMD via an imaging lens. The light reflected from the DMD in the $-24^\circ$ direction with respect to the normal-incidence input beam is then sampled by a counter-type Hamamatsu H10682-210 photomultiplier tube (PMT).
Since the PMT records the total intensity in the form of photon counts, it can be regarded as a single-photon single-pixel (bucket) detector. Our 0.7 inch DMD (operating from 350~nm to 2700~nm) consists of $1024\times768$ pixels, each of size 13.68~$\mu$m$\times$13.68~$\mu$m. The states ``on" and ``off" of the micromirrors are determined by a preloaded sequence of binary patterns. The nominal maximal binary pattern switching rates of commercially available DMDs reach $32550$~Hz (patterns/s) with an onboard storage for up to 45000 patterns. We have developed an improved DMD which enables us to load the pattern sequence onto the DMD in real time. We use the Hadamard basis in ``cake-cutting" sequence to generate our DMD modulated patterns. The elements of the Hadamard matrix $A$ take values of 1 or $-1$, but the binary patterns encoded on the DMD consist of the values 1 or 0, which cannot ensure a good image quality with respect to the RIP. We found that by subtle shifting and stretching operations the matrix $A$ can be divided into two complementary matrices $\hat{A}=(A+1)/2$ and $\check{A}=1_{N\times N}-\hat{A}$, where $1_{N\times N}$ stands for an array consisting of all ones. Thus the Hadamard matrix in optimal sort can be modulated by displaying each basis pattern of $\hat{A}$ immediately followed by its inverse (complementary) pattern (the one of $\check{A}$, i.e., with the micromirror states ``on" and ``off" reversed) on the DMD. Then the range of values of the differential patterns $A=\hat{A}-\check{A}$ becomes either 1 or $-1$, actually realizing ``positive-negative" intensity modulation and making the differential measurements have a mean of $\sim0$. As a result, the SNR is also greatly improved \cite{YuOC2016,YuSR2014}. For simplicity, we test our system by imaging a gray-scale object. Here we choose a negative 1951 USAF resolution test chart as the original object (see Fig.~\ref{fig:Experiments}(b)), whose black parts block the light and white parts transmit the light.
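The complementary-pattern decomposition described above can be sketched in a few lines (a minimal illustration, not the authors' code; the Sylvester construction of the Hadamard matrix is assumed):

```python
import numpy as np

def hadamard(n):
    # Sylvester construction of an n x n Hadamard matrix (n a power of 2).
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

A = hadamard(8)       # entries are +1 / -1
A_pos = (A + 1) // 2  # \hat{A}: DMD-displayable pattern of 1s and 0s
A_neg = 1 - A_pos     # \check{A}: complementary pattern
# Displaying A_pos and then A_neg, and subtracting the two bucket
# readings, realizes the +1/-1 ("positive-negative") modulation:
assert np.array_equal(A_pos - A_neg, A)
```

Subtracting the two bucket values per basis pattern doubles the number of displayed patterns but restores the signed Hadamard weights.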
The red square is the object image projected on the square central region of our DMD, covering $512\times512$ pixels (micromirrors). Note that the red square is in Group $-1$, thus the width of each line for Elements $3\sim5$ is 793.70~$\mu$m, 707.11~$\mu$m and 629.96~$\mu$m, respectively. The detailed computation is presented in the Methods section. In our experiments, we first modulate an optimized sequence of $64\times64$-pixel Hadamard basis patterns, where each pattern pixel comprises $8\times8$ adjacent micromirrors. Without loss of generality, we make the DMD operate at 100~Hz. By utilizing our strategy, a coarse image of the object with a low resolution of $64\times64$ pixels is retrieved from 1024 patterns, i.e., with a sampling rate of 25\%, as shown in Fig.~\ref{fig:Experiments}(c). By using a neutral density filter of transmissivity 0.001, the number of detected signal photons per image pixel is $\sim0.79$. Then Figs.~\ref{fig:Experiments}(d)--\ref{fig:Experiments}(m) illustrate $512\times512$ reconstructed images by using a series of sampling ratios covering from 0.20\% to 100\%, when the piece numbers are in their ascending order. By contrast, Figs.~\ref{fig:Experiments}(n)--\ref{fig:Experiments}(q) show some examples of using the descending order of the Hadamard patterns, all with poor performance. Then we compare the results under different SNRs (see Figs.~\ref{fig:Experiments}(r)--\ref{fig:Experiments}(u)), by applying four different neutral density filters, whose transmissivity is 0.001, 0.0025, 0.005 and 0.01, respectively. Note that in the above experiments there is no obstacle placed in the light path to block the detection light. Now, as shown in Fig.~\ref{fig:Experiments}(a), the PMT, whose photosensitive surface faces the scene, is covered with some pieces of lens cleaning tissue (a kind of organic fiber, which can be treated as the obstacle here).
The number of tissue pieces increases from 1 to 3 and the results are presented in Figs.~\ref{fig:Experiments}(v)--\ref{fig:Experiments}(x). Some part of the light reflected from the DMD passes through the tissues and is partly scattered. Thus the light collected by the PMT is a mixture of the direct light and the indirect light. In both the SNR-varying and partially obscured cases, the numbers of measurements for the reconstructions of $512\times512$ images are all only 8192. \begin{figure}[htbp] \centering\includegraphics[width=0.95\linewidth]{figure-7} \caption{Schematic of the experimental setup and experimental results. (a) The thermal light from a halogen tungsten lamp illuminates the object through a beam expander and some neutral density filters, and then the light is projected onto a digital micromirror device (DMD). The reflected light is collected by a photomultiplier tube (PMT) through a focusing lens and some obstacles. (b) A negative 1951 USAF resolution test chart of 3~inch$\times$3~inch is treated as an object to be detected. (c) The recovered image of $64\times64$ pixel size with a sampling ratio of 25\%. (d)--(m) Reconstructed images of $512\times512$ pixels with a $2\times$ stepping increase of the sampling ratio, when the piece numbers are in their ascending order. (n)--(q) Retrieved images of $512\times512$ pixels with a $4\times$ stepping increase of the sampling ratio, when the piece numbers are in their descending order. (r)--(u) Reconstructions under different SNRs. It is noteworthy that the results of (c)--(u) are acquired without obstacles. (v)--(x) Results with increasing tissues as the obstacle. The third-row results all use a sampling ratio of 3.13\%.} \label{fig:Experiments} \end{figure} \section*{\textsf{Discussion and Conclusion}} In our single-pixel imaging system, the noise cannot be neglected.
There are many sources of noise, like the ambient illumination noise induced by the light source (with temperature drift), the dark noise of the counter-type PMT, the stray light reflected from the metal frame of the DMD, the specular and diffuse reflections from the metal surface under the intervals and flipping gaps of the micromirrors (the latter is associated with the patterns as well), the stray light that bounces back and forth between the metal surfaces, and so forth. In addition, the SNR decreases as the transmissivity of the neutral density filters is reduced, since they are placed after the light source to attenuate the illumination. However, the more light is transmitted, the stronger the stray light from the metal surfaces will be. Here the differential measurements we performed can also be used to average out the variance of independently and identically distributed noise, thus significantly improving the SNR in the measurement process, as well as the imaging performance. Furthermore, the image quality can be further improved if the other kinds of stochastic noise are well suppressed. Programmable DMDs operating in binary mode with a pattern display rate of 32.55~kHz are commercially common. Since the pulse-pair resolution of the PMT used is 20~ns, its response speed is not the limiting factor of the system. As a consequence, the performance of the system is mainly restricted by the modulation rate of the DMD. Even so, tens of thousands of patterns per second could enable 1.97~ms (in total) measurement at $64\times64$-pixel resolution and 377.51~ms (in total) sampling at $1024\times768$-pixel resolution, all with a sampling ratio of 0.78\% (incorporating two adjacent complementary patterns for differential measurements). Therefore, for relatively low-resolution applications, it allows video-rate image acquisition.
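The quoted acquisition times follow from a back-of-the-envelope calculation (a sketch; the $1/128\approx0.78\%$ sampling ratio and the factor of two for the complementary patterns are assumptions consistent with the text):

```python
DMD_RATE = 32550.0  # binary patterns displayed per second

def acquisition_time_s(pixels, sampling_ratio=1.0 / 128, complementary=2):
    # Total display time for all measurement patterns, in seconds.
    n_patterns = pixels * sampling_ratio * complementary
    return n_patterns / DMD_RATE

t_low = acquisition_time_s(64 * 64)      # ~1.97 ms
t_full = acquisition_time_s(1024 * 768)  # ~377.5 ms
```

At $64\times64$ pixels this gives 64 displayed patterns, i.e. about 1.97~ms, matching the figure above.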
By making full use of the structured characteristic of the Hadamard matrix, we have demonstrated that the proposed method is capable of reconstructing large pixel-resolution images with high performance, with little computational overhead and memory consumption. Thus the realization of the system hardware in the future will bring single-pixel imaging closer to practical applications, for instance, mobile phones, night vision goggles or satellites. The proposed technique employs the ``cake-cutting" strategy for the Hadamard basis ordering. We expect that the optimized sort sequence of the patterns might be described deterministically in mathematical terms, or could be generated by other new methods in the near future. In this case, the generation time of the sort sequence will be further shortened. It is worth noting that the orthogonality of the Hadamard matrix is the key (but not the only) factor of fast computation, and its combination with CS allows a perfect reconstruction from super sub-Nyquist measurements even in the presence of noise. Reconstructions with other orthogonal matrices or deterministic matrices will be our future work. For full-color imaging, one can use three spectral filters to restore the red, green, and blue sub-images, and then synthesize the three sub-images into a color image. Moreover, the operational spectrum of the DMD ranges from 350~nm to 2700~nm, allowing the proposed system to be extended to the non-visible regions of the spectrum where array detectors are not well developed, such as the infrared or ultraviolet wavelengths. In these situations, it only needs a change of the lens and the single-pixel detector to fit the corresponding wavelength. In a nutshell, we propose a single-pixel compressive imaging method based on ``cake-cutting" Hadamard basis ordering, which is capable of precisely reconstructing images of large resolution up to $1024\times1024$ pixels from super sub-Nyquist measurements.
The sampling ratio can be reduced to as low as 0.2\%, thus significantly shortening the acquisition time. Given the significant contribution of the deterministic Hadamard basis to the image reconstruction, an optimized sequence of patterns is obtained by directly arranging the internal piece numbers of the basis patterns in ascending order. By taking full advantage of the structured characteristic of the Hadamard matrix, the predetermined patterns can be loaded onto the DMD in real time, without the need to be all stored on the DMD. Additionally, in terms of computational efficiency, it also offers fast computation along with a small computational memory requirement, due to the simplified mathematical calculation model for the Hadamard matrix multiplication and the orthogonality of the Hadamard matrix. We have demonstrated this method with a single-photon single-pixel camera based on differential modulation of the DMD. The experimental results show that our technique enables a good image reconstruction from indirect measurements through a partially obscuring scene in the presence of noise or under ultra-weak illumination. The technique can be easily extended to single-pixel imaging in other non-visible wavebands and offers an avenue to overcome the limitations existing in the recently introduced single-pixel imaging schemes. \section*{\textsf{Methods}} \textbf{Target object.} The 1951 USAF resolution test chart consists of a series of stripes decreasing in size, while the standard target element is composed of two sets of lines, each set being made up of three lines separated by spaces of equal width. Let $r$ be the number of lines per millimeter; the parallel lines are $2.5/r$ millimeters long and $0.5/r$ millimeters wide, with a space $0.5/r$ millimeters wide between the parallel lines. The space between the vertical and horizontal lines is $1/r$ millimeters wide. The elements within a group are numbered from 1 to 6 and become progressively smaller.
The group number covers from $-2$ to 7. The length of any target element line can be expressed as $2.5/2^{\textrm{Group}+(\textrm{Element}-1)/6}$~mm, while the width equals the length divided by 5, which is equivalent to $0.5/$Resolution (line pairs$/$mm). Thus it is easy to compute the width of each line in the red square (all in Group $-1$), i.e., 793.70~$\mu$m for Element 3, 707.11~$\mu$m for Element 4, and 629.96~$\mu$m for Element 5. \noindent\textbf{Data processing.} All the data were analyzed and processed with MATLAB R2018b (The MathWorks, Inc.). The ordering and reconstructions were performed on a standard desktop computer with an Intel Core i7-6700 CPU @ 3.40~GHz and a memory of 16~GB. If a supercomputer with parallel processing is used or the system hardware is realized, the computation time will be much shorter. \noindent\textbf{Image analysis.} To obtain a quantitative measure of the image quality, the relative error (RE) is defined here as a figure of merit: \begin{equation} RE=\frac{||\tilde U-U_o||_F}{||U_o||_F}\times100\%, \label{eq:RE} \end{equation} where $\tilde U$ denotes the reconstructed image and $U_o$ stands for the original image, both of $p\times q$ pixels. Here, $||X||_F$ is the Frobenius norm, which can be defined as \begin{equation} ||X||_F=\sqrt{\sum_{i=1}^p\sum_{j=1}^q|X_{ij}|^2}=\sqrt{\textrm{trace}(X^\ast X)}=\sqrt{\sum_{i=1}^{\min\{p,q\}}\sigma_i^2}, \label{eq:Fnorm} \end{equation} where $\ast$ denotes the conjugate transpose operator, and $\sigma_i$ is the $i$-th singular value of $X$. Additionally, here we introduce another unitless performance measure, the peak signal-to-noise ratio (PSNR), which is defined as \begin{equation} \textrm{PSNR}=10\log_{10}(255^2/\textrm{MSE}), \label{eq:PSNR} \end{equation} where $\textrm{MSE}=\frac{1}{pq}\sum\nolimits_{i,j=1}^{p,q}[U_o(i,j)-\tilde U(i,j)]^2$ is the mean squared error, which describes the squared distance between the recovered image and the original image.
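The two figures of merit translate directly from their definitions (a sketch for images on a 0--255 scale; note the base-10 logarithm in the PSNR):

```python
import numpy as np

def relative_error(recon, orig):
    # RE: Frobenius-norm distance between images, in per cent.
    return 100.0 * np.linalg.norm(recon - orig) / np.linalg.norm(orig)

def psnr(recon, orig):
    # PSNR for 8-bit images: 10 log10(255^2 / MSE).
    mse = np.mean((orig.astype(float) - recon.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

For instance, a uniform one-gray-level error over the whole image gives an MSE of 1 and hence a PSNR of about 48.13 dB.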
Naturally, the larger the PSNR value, the better the quality of the recovered image. In order to allow a fair comparison of the image quality, all the recovered images of Figs.~\ref{fig:Hadamard}, \ref{fig:Performance}--\ref{fig:Large} are normalized to a range of $0\sim255$. Since the optical experiments generally have no original image as a reference, the results of Fig.~\ref{fig:Experiments} are directly normalized to a range of $0\sim1$.
\section{Introduction} GW170817, the first detection of a merger between two neutron stars \citep[NSs,][]{Abbott2017b}, marked the beginning of multi-messenger astronomy. For the first time, electromagnetic emission accompanying the gravitational wave (GW) event was observed \citep{Abbott2017c}, ranging from gamma rays (e.g. \citealt{Abbott2017c,Goldstein2017,Savchenko2017}) to X-rays (e.g. \citealt{Margutti2017}), to optical, near-infrared (e.g. \citealt{Coulter2017,Soares-Santos2017,Chornock2017,Cowperthwaite2017,Nicholl2017,Pian2017}) and radio wavelengths (e.g. \citealt{Alexander2017}). The formation of merging double NSs (DNSs) like GW170817 is still a matter of debate: understanding this process would provide crucial insights for both stellar evolution and GW astrophysics. Merging DNSs are expected to form either from the evolution of isolated close binaries (e.g. \citealt{Flannery1975,Bethe1998,Belczynski2002,Voss2003,Dewi2003,Podsiadlowski2004,Dewi2005,Andrews2015,Tauris2017,Chruslinska2017,Kruckow2018,Vigna2018,Giacobbo2018b,Mapelli2018,Mapelli2018b}) or through dynamical interactions in star clusters (e.g. \citealt{Grindlay2006,East2012,Lee2010,Ziosi2014}). Many uncertainties still affect both formation channels. In particular, one of the most debated and also one of the most important physical ingredients for the formation of DNSs is the magnitude of the natal kick imparted by the supernova (SN) explosion to the newborn NS \citep{Janka2012}. From a study of the proper motions of 233 young isolated pulsars, \citet{Hobbs2005} estimated that their velocity distribution follows a Maxwellian curve with a one-dimensional root-mean-square (1D rms) velocity $\sigma=265$~km~s$^{-1}$ and an average natal kick speed of $\sim 420$~km~s$^{-1}$. On the other hand, there is increasing evidence that some NSs form with a significantly smaller natal kick.
Several studies \citep{Cordes1998,Arzoumanian2002,Brisken2003,Schwab2010,Verbunt2017} claim that the velocity distribution proposed by \citet{Hobbs2005} underestimates the number of pulsars with low velocities and suggest that the natal kick distribution of NSs is better represented by a bimodal velocity distribution. This bimodal distribution might result from two different mechanisms of NS formation. For instance, two out of nine accurate pulsar velocities computed by \citet{Brisken2002} are smaller than $40$~km~s$^{-1}$. Moreover, \citet{Pfahl2002} study a new class of high-mass X-ray binaries with small eccentricities and long orbital periods, which imply a low natal kick velocity ($\lesssim 50$~km~s$^{-1}$) for the newborn NSs. Similarly, \cite{Knigge2011} show that Be X-ray binaries could be divided into two sub-populations: one with short ($\sim{}10$ s) and one with long ($\sim{}200$ s) spin periods. The two populations are also characterized by different orbital period and eccentricity distributions, indicative of two natal kick distributions. Considerations about the orbital elements of some Galactic DNSs also suggest that a low natal kick is required \citep{Heuvel2007,Beniamini2016}. It has been proposed that NSs with a low natal kick come from electron-capture SNe (ECSNe, \citealt{Miyaji1980,Nomoto1984,Nomoto1987,Heuvel2007}), a more rapid and less energetic process than iron core-collapse SNe (CCSNe, \citealt{Dessart2006,Kitaura2006}). In ECSN explosions, asymmetries are harder to develop and the newborn NS receives a lower kick \citep{Dessart2006,Jones2013,Schwab2015,Gessner2018}. Low natal kicks might occur not only in ECSNe, but in all low-mass progenitors ($\lesssim{}10$ M$_\odot$), because of their steep density profile at the edge of the core, which allows for rapid acceleration of the SN shock wave.
The shock is revived on a shorter time scale than in more massive progenitors, and therefore there is less time for large-scale asymmetries (which would result in a larger kick) to develop (see e.g. \citealt{Mueller2016}). Alternatively, the low kick of some DNSs could also be explained by the fact that they come from ultra-stripped SNe, i.e. from the SN explosion of a naked Helium star that was stripped by its compact companion \citep{Tauris2013,Tauris2015,Tauris2017}. In this case, the natal kick is thought to be lower because of the low mass of the ejecta. In this paper, we use our new population-synthesis code {\sc MOBSE} \citep{Giacobbo2018} to investigate the impact of ECSNe and low natal kicks on the formation of merging DNSs. We show that ECSNe are an important channel for the formation of DNSs, if they are associated with low natal kicks. Moreover, we discuss the extreme case in which all NSs receive a small kick, regardless of the SN process. \begin{table} \begin{center} \caption{Definition of the simulation sets.\label{tab:ecsim}} \begin{tabular}{cccc} \toprule ID & $\sigma_{\rm{ECSN}}$ & $\sigma_{\rm{CCSN}}$ & $\alpha$ \\ & [km~s$^{-1}$] & [km~s$^{-1}$] & \\ \midrule EC0\A1 & 0.0 & 265.0 & 1 \\ EC7\A1 & 7.0 & 265.0 & 1\\ EC15\A1 & 15.0 & 265.0 & 1\\ EC26\A1 & 26.0 & 265.0 & 1\\ EC265\A1 & 265.0 & 265.0 & 1\vspace{0.2cm}\\ EC0\A5 & 0.0 & 265.0 & 5 \\ EC7\A5 & 7.0 & 265.0 & 5\\ EC15\A5 & 15.0 & 265.0 & 5\\ EC26\A5 & 26.0 & 265.0 & 5\\ EC265\A5 & 265.0 & 265.0 & 5\vspace{0.2cm}\\ CC15\A1 & 15.0 & 15.0 & 1\\ CC15\A5 & 15.0 & 15.0 & 5\\ \bottomrule \end{tabular} \end{center} {\small Column 1: simulation name; columns 2-3: 1D rms of the Maxwellian natal kick distribution for ECSNe and CCSNe, respectively (see sec.~\ref{sec:2.2}); column 4: values of $\alpha$ in the CE formalism (see sec.~\ref{sec:2.3}).
Simulations CC15$\alpha{}1$ and CC15$\alpha{}5$ are the same as we already presented in \cite{Giacobbo2018b}.} \end{table} \section{Methods} \label{sec:2} {\sc MOBSE} is an updated version of the {\sc BSE} code \citep{Hurley2000,Hurley2002}. Here we summarize the main characteristics of {\sc MOBSE} and we describe the new features we have added to it for this work. A more detailed discussion of {\sc MOBSE} can be found in \cite{Giacobbo2018} and in \cite{Mapelli2017}. In this paper, we adopt the version of {\sc MOBSE} described as {\sc MOBSE1} in \cite{Giacobbo2018}. The main differences between {\sc MOBSE} and {\sc BSE} are the treatment of stellar winds of massive stars and the prescriptions for SN explosions. Stellar winds of O and B-type stars are implemented in {\sc MOBSE} as described by \cite{Vink2001}, while the mass loss of Wolf-Rayet (WR) stars is implemented following \cite{Belczynski2010}. Finally, the mass loss of luminous blue variable (LBV) stars is described as \begin{equation}\label{eq:LBV} \dot{M} = 10^{-4}\,{}f_{\rm LBV}\,{}\left(\frac{Z}{\text Z_{\odot}}\right)^{\beta{}}\,{} {\rm M}_{\odot}\,{}{\rm yr}^{-1}, \end{equation} where $f_{\rm LBV}=1.5$ \citep{Belczynski2010} and $Z$ is the metallicity. In {\sc MOBSE}, all hot massive stars (O, B, WR and LBV stars) lose mass according to $\dot{M}\propto{}Z^{\beta}$, where $\beta{}$ is defined as \citep{Chen2015} \begin{equation} \label{eq:scaling} \beta = \begin{cases} 0.85 & \rm{if}~~~ \Gamma_{\rm e} < 2/3 \cr 2.45 - 2.40 \,{} \Gamma_{\rm e} & \rm{if}~~~ 2/3 \leq \Gamma_e \leq 1~ \cr 0.05\,{} & \rm{if}~~~ \Gamma_{\rm e}>1, \end{cases} \end{equation} where $\Gamma_{\rm e}$ is the electron-scattering Eddington ratio, expressed as (see eq.~8 of \citealt{Graefener2011}): \begin{equation}\label{eq:gamma} \log{\Gamma_e}=-4.813+\log{(1+X_{\rm H})}+\log{(L/L_\odot)}-\log{(M/M_\odot)}.
\end{equation} In equation~\ref{eq:gamma}, $X_{\rm H}$ is the Hydrogen fraction, $L$ is the star luminosity and $M$ is the star mass. The new prescriptions for core-collapse SNe (CCSNe) in {\sc MOBSE} include the rapid and the delayed SN model described by \cite{Fryer2012} (see also \citealt{Spera2015}). The rapid SN model is adopted for the simulations presented in this paper, because it allows us to reproduce the remnant mass gap between $\sim{}2$ M$_\odot$ and $\sim{}5$ M$_\odot$ \citep{Ozel2010,Farr2011}. Pair-instability and pulsational pair-instability SNe are also implemented in {\sc MOBSE} using the fitting formulas by \cite{Spera2017}. Finally, we have also updated the prescriptions for core radii following \cite{Hall2014}, we have extended the mass range up to 150 M$_\odot$ \citep{Mapelli2016}, and we have revised the treatment of Hertzsprung-gap (HG) donors in common envelope (CE): HG donors are assumed to always merge with their companions if they enter a CE phase. For this work, we have added several updates to the description of ECSNe and natal kicks in {\sc MOBSE}, as we describe in the following sections. \begin{figure*} \includegraphics[scale=0.4]{MERGINGvsALL} \caption{Impact of different kick velocities for ECSNe on the number of DNSs. Left: number of DNSs in each set of simulations (see Table~\ref{tab:ecsim}) as a function of progenitor's metallicity. Right: number of DNSs merging in less than a Hubble time (hereafter: merging DNSs) as a function of progenitor's metallicity. Different runs are indicated by different lines, as explained in the legend and in Table~\ref{tab:ecsim}.}\label{fig:mergingall} \end{figure*} \subsection{Electron-capture SNe (ECSNe)} \label{sec:2.1} NSs can form via CCSN, via ECSN or through the accretion-induced collapse of a white dwarf (WD). In {\sc MOBSE}, the outcome of a CCSN is considered a NS if its mass is less than 3.0 \ensuremath{\,\textrm{M}_{\odot}}~and a black hole (BH) otherwise. 
This approach is overly simplified, but more constraints on the equation of state of NSs are required for a better choice of the transition between NSs and BHs. In the case of both an ECSN and an accretion-induced WD collapse, the NS forms when the degenerate Oxygen-Neon (ONe) core collapses as a consequence of electron-capture reactions, inducing a thermonuclear runaway \citep{Miyaji1980, Nomoto1984, Nomoto1987, Nomoto1991, Kitaura2006, Fisher2010, Jones2013, Takahashi2013, Schwab2015, Jones2016}. In {\sc MOBSE}, we decide whether a star will undergo an ECSN by following the procedure described by \cite{Hurley2000} and \cite{Fryer2012}. First, we look at the Helium core mass at the base of the asymptotic giant branch\footnote{Mass loss during the asymptotic giant branch and dredge-up efficiency are assumed to be the same as in \cite{Hurley2000}.} ($M_{\rm BABG}$). If $1.6$ \ensuremath{\,\textrm{M}_{\odot}}~$ \leq M_{\rm BABG} < 2.25$ \ensuremath{\,\textrm{M}_{\odot}}, the star forms a partially degenerate Carbon-Oxygen (CO) core. If the CO core grows larger than $\sim{}1.08$ M$_\odot$, it can form a degenerate ONe core. If this degenerate ONe core reaches the mass $M_{\rm ECSN}=1.38$ \ensuremath{\,\textrm{M}_{\odot}}{}, it collapses due to electron capture on $^{24}$Mg and on $^{20}$Ne \citep{Miyaji1980,Nomoto1984,Nomoto1987}; otherwise it forms an ONe WD, which can still collapse to a NS if it accretes sufficient mass. The outcome of the electron-capture collapse is a NS with baryonic mass $M_{\rm rem, bar} = M_{\rm ECSN}$, which becomes \begin{equation} M_{\rm rem,grav} = \frac{\sqrt{1 + 0.3M_{\rm rem,bar}}-1}{0.15}=1.26 \ensuremath{\,\textrm{M}_{\odot}}~, \end{equation} accounting for the mass loss due to neutrinos, using the formula suggested by \citet{Timmes1996}.
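The neutrino-loss correction above can be checked numerically (a minimal sketch of the \citealt{Timmes1996} formula as written in the equation):

```python
import math

M_ECSN = 1.38  # baryonic mass of the ECSN remnant [Msun]

def gravitational_mass(m_bar):
    # Timmes et al. (1996): gravitational mass after neutrino losses,
    # M_grav = (sqrt(1 + 0.3 M_bar) - 1) / 0.15, masses in Msun.
    return (math.sqrt(1.0 + 0.3 * m_bar) - 1.0) / 0.15

# gravitational_mass(M_ECSN) evaluates to ~1.26 Msun, as quoted.
```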
Even though only a few per cent of all SN events are expected to be produced by electron-capture reactions in single stars \citep{Poelarends2007,Doherty2015}, this fraction could rise drastically if we consider binary systems \citep{Podsiadlowski2004}. In binary systems the possibility of accreting material from a companion broadens the mass range of progenitor stars in which the electron-capture collapse may occur, because mass transfer can significantly change the evolution of the core \citep{Sana2012, Dunstall2015,Poelarends2017}. In appendix~\ref{sec:app}, we show that the mass range of ECSNe is crucially affected by binary evolution. In particular, we find that mass transfer tends to widen the mass range of ECSNe. Recently, \cite{Jones2016} have shown that an ECSN might lead to the ejection of a portion of the degenerate core, rather than to the collapse into a NS. The collapse and the formation of a NS take place only if the ignition density is $\gtrsim{}2\times{}10^{10}$ g cm$^{-3}$. This finding must be taken into account when interpreting the outcomes of our simulations: our results should be regarded as upper limits to the impact of ECSNe on the statistics of DNSs. \begin{center} \begin{figure*} \includegraphics[scale=0.41]{ecc_semi_merging_ECR15} \caption{Distribution of eccentricity (left-hand column) and semi-major axis (right-hand column) for all DNSs (black thin lines) and only for merging DNSs (red thick lines). For each simulation we show the distributions obtained at three different metallicities: $Z=0.02$ (dotted lines), 0.006 (dashed lines), and 0.0002 (solid lines). Simulations CC15$\alpha{}1$ and CC15$\alpha{}5$ are not shown in this Figure, because they have already been discussed in Giacobbo \&{} Mapelli (2018).\label{fig:ecca}} \end{figure*} \end{center} \begin{center} \begin{figure} \centering \includegraphics[scale=0.3]{standard_scenario.pdf} \caption{The percentage of merging DNSs which follow the standard scenario (see Sec.
3.2) as a function of progenitor's metallicity. Top: simulations with $\alpha=1$. Bottom: simulations with $\alpha=5$.\label{fig:scenario}} \end{figure} \end{center} \subsection{Natal kicks} \label{sec:2.2} The natal kick of a NS is drawn from a Maxwellian velocity distribution \begin{equation} f(v,\sigma)=\sqrt{\frac{2}{\pi}}\frac{v^2}{\sigma^3}\exp\left[{-\frac{v^2}{2\sigma^2}}\right] \qquad v \in~ [0,\infty ) \end{equation} where $\sigma$ is the one-dimensional root-mean-square (1D rms) velocity and $v$ is the modulus of the velocity. Given the uncertainties on the natal kick distribution, we have implemented in {\sc MOBSE} the possibility of drawing the natal kick from two Maxwellian curves with different values of the 1D rms: $\sigma_{\rm CCSN}$ and $\sigma_{\rm ECSN}$, for iron CCSNe and ECSNe, respectively. $\sigma{} = 265$~km~s$^{-1}$ is adopted as the default value for CCSNe in {\sc MOBSE}. This value was derived by \citet{Hobbs2005} from the proper motions of 233 young isolated Galactic pulsars and corresponds to an average natal kick speed of $\sim 420$~km~s$^{-1}$. In this paper, we consider different values of $\sigma_{\rm ECSN}$, ranging from 0 to 265~km~s$^{-1}$, to investigate the impact of ECSNe on the statistics of DNSs. Because low natal kicks might originate not only from ECSNe, but also from iron CCSNe involving low-mass progenitors and from ultra-stripped SNe, we have also run an extreme case ($\sigma_{\rm CCSN}=\sigma_{\rm ECSN} = 15$~km~s$^{-1}$), in which all NSs receive a low natal kick independently of the SN type (see Table~\ref{tab:ecsim}). We will discuss this extreme case in Section~\ref{sec:alllow}. \subsection{Simulations and initial distributions} \label{sec:2.3} Here we describe the initial conditions used to perform our population-synthesis simulations.
We randomly draw the mass of the primary star ($m_{\mathrm{1}}$) from a Kroupa initial mass function \citep[IMF,][]{Kroupa2001} \begin{equation} \mathfrak{F}(m_1) ~\propto~ m_1^{-2.3} \qquad \mathrm{with}~~ m_1 \in [5-150]\ensuremath{\,\textrm{M}_{\odot}} ~. \end{equation} The other parameters (mass of the secondary, period and eccentricity) are sampled according to the distributions proposed by \citet{Sana2012}. In particular, we obtain the mass of the secondary $m_{\mathrm{2}}$ as follows \begin{equation} \mathfrak{F}(q)~ \propto ~q^{-0.1} \qquad ~~~\mathrm{with}~~~q = \frac{m_2}{m_1}~ \in [0.1-1]~, \end{equation} the orbital period $P$ and the eccentricity $e$ from \begin{equation} \mathfrak{F}(\mathscr{P}) ~\propto~ (\mathscr{P})^{-0.55} ~~\mathrm{with}~ \mathscr{P} = \mathrm{log_{10}}(P/\mathrm{day}) \in [0.15-5.5] \end{equation} and \begin{equation} \mathfrak{F}(e) ~\propto ~e^{-0.42} \qquad ~~\mathrm{with}~~~ 0\leq e < 1~ \end{equation} respectively. For the CE phase we have adopted the $\alpha{}\lambda$ formalism \citep[see][]{Webbink1984,Ivanova2013}. This formalism relies on two parameters, $\lambda{}$ (which measures the concentration of the envelope) and $\alpha{}$ (which quantifies the energy available to unbind the envelope). To compute $\lambda$ we used the prescriptions derived by \citet{Claeys2014} (see their Appendix A for more details), which are based on \citet{Dewi2000}. We have run 12 sets of simulations by changing the value of $\alpha{}$ and that of both $\sigma_{\rm ECSN}$ and $\sigma_{\rm CCSN}$ (see Table~\ref{tab:ecsim}). In the first 10 simulations reported in Table~\ref{tab:ecsim}, we have fixed $\sigma_{\rm CCSN}=265$~km~s$^{-1}$ and we have varied $\alpha{}=1,\,{}5$ and $\sigma_{\rm ECSN}=0,7,15,26,265$~km~s$^{-1}$ (corresponding to an average natal kick of about $0,11,23,41,420$~km~s$^{-1}$, respectively).
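Both the natal kick draw of Sec.~\ref{sec:2.2} and the initial distributions above are straightforward to sample: all four initial distributions are truncated power laws, drawn here by inverse transform, and a Maxwellian modulus is obtained from three Gaussian velocity components. A hypothetical sketch (not the actual {\sc MOBSE} implementation), with the exponents and ranges quoted above:

```python
import numpy as np

def sample_powerlaw(a, lo, hi, n, rng):
    # Inverse-transform sampling of p(x) ~ x^a on [lo, hi], with a != -1.
    g = a + 1.0
    u = rng.random(n)
    return (lo**g + u * (hi**g - lo**g)) ** (1.0 / g)

def draw_kicks(sigma, n, rng):
    # Modulus of a 3D Gaussian velocity: Maxwellian with 1D rms sigma.
    return np.linalg.norm(rng.normal(0.0, sigma, size=(n, 3)), axis=1)

rng = np.random.default_rng(0)
n = 100_000
m1 = sample_powerlaw(-2.3, 5.0, 150.0, n, rng)     # primary mass [Msun]
q = sample_powerlaw(-0.1, 0.1, 1.0, n, rng)        # mass ratio m2/m1
log_p = sample_powerlaw(-0.55, 0.15, 5.5, n, rng)  # log10(P / day)
ecc = sample_powerlaw(-0.42, 0.0, 1.0, n, rng)     # eccentricity
kicks = draw_kicks(265.0, n, rng)                  # mean ~ 420 km/s
```

The Maxwellian mean is $\sigma\sqrt{8/\pi}\simeq423$~km~s$^{-1}$ for $\sigma=265$~km~s$^{-1}$, consistent with the average kick quoted above.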
In the last two simulations reported in Table~\ref{tab:ecsim} (CC15$\alpha{}1$ and CC15$\alpha{}5$), we have set $\sigma_{\rm CCSN} = \sigma_{\rm ECSN} = 15$~km~s$^{-1}$ for both $\alpha{}=1,5$. We will discuss simulations CC15$\alpha{}1$ and CC15$\alpha{}5$ in Section~\ref{sec:alllow}, while in the following sections we will focus on the other 10 simulations (i.e. on the effect of $\sigma_{\rm ECSN}$ on the statistics of DNSs). Finally, for each set of simulations we considered 12 sub-sets with different metallicities $Z=0.0002$, $0.0004$, $0.0008$, $0.0012$, $0.0016$, $0.002$, $0.004$, $0.006$, $0.008$, $0.012$, $0.016$ and $0.02$. In each sub-set, we simulated $10^7$ binary systems. Thus, each set of simulations is composed of $1.2\times10^8$ massive binaries. \section{Results} \begin{center} \begin{figure*} \includegraphics[scale=0.4]{Fraction_merger_EC_all} \caption{\label{fig:fraction}Top (bottom) panels: fraction of merging DNSs in which the first (second) SN is an ECSN as a function of progenitor's metallicity. Left-hand (right-hand) panels: simulations with $\alpha{}=1$ ($\alpha{}=5$).} \end{figure*} \end{center} \subsection{Impact of $\sigma{}_{\rm ECSN}$ on DNSs} \label{sec:4} The left-hand panel of Figure~\ref{fig:mergingall} shows all DNSs formed in our simulations as a function of metallicity. It is apparent that the lower $\sigma_{\rm ECSN}$ is, the higher the total number of DNSs. This is not surprising, because a lower $\sigma_{\rm ECSN}$ implies a lower probability of unbinding the system. This effect is particularly strong for the simulations with $\alpha{}=1$, in which the number of DNSs is $\sim{}10-25$ times higher if $\sigma{}_{\rm ECSN}=0$ than if $\sigma{}_{\rm ECSN}=265$~km~s$^{-1}$. In the simulations with $\alpha{}=5$, the number of DNSs is $3-6$ times higher if $\sigma{}_{\rm ECSN}=0$ than if $\sigma{}_{\rm ECSN}=265$~km~s$^{-1}$. We also note that DNSs form more efficiently if $\alpha{}=5$ than if $\alpha{}=1$.
In our simulations, the number of DNSs depends on metallicity, especially if $\alpha{}=1$. In particular, the number of DNSs is minimum for $Z\sim 0.002$. This trend originates from the evolution of stellar radii of $\sim{}8-20$ M$_\odot$ stellar progenitors, which are significantly larger for $Z\sim{}0.002$ than for the other metallicities (especially in the terminal main sequence and in the HG phases). The trend is stronger for $\alpha{}=1$ than for $\alpha{}=5$, because a low value of $\alpha{}$ corresponds to a more efficient shrinkage of the orbit during CE: two main sequence or HG stars are more likely to merge during CE if $\alpha{}$ is low. The right-hand panel of Figure~\ref{fig:mergingall} shows only the DNSs which merge in less than a Hubble time (hereafter: merging DNSs). In the simulations with $\alpha{}=5$, we find again a monotonic trend with $\sigma_{\rm ECSN}$, but the differences are much less significant. In the simulations with $\alpha=1$ the number of merging DNSs does not show a monotonic trend with $\sigma{}_{\rm ECSN}$: runs with $\sigma{}_{\rm ECSN}=7-26$~km~s$^{-1}$ produce a factor of $\sim{}5$ more merging DNSs than simulations with $\sigma{}_{\rm ECSN}=0$ and 265~km~s$^{-1}$. The only exception is represented by very metal-poor stars ($Z=0.0002$), for which the number of merging DNSs with $\sigma{}_{\rm ECSN}=0$ is similar to the one of systems with $\sigma{}_{\rm ECSN}=7-26$~km~s$^{-1}$. This behavior can be easily explained by considering that the merging time ($t_{\rm gw}$) due to GW emission depends on both the eccentricity ($e$) and the semi-major axis ($a$) as \citep{Peters1964} \begin{equation}\label{eq:eqpeters} t_{\rm gw} = \frac{5}{256}\frac{c^5}{G^3} \frac{a^4(1-e^2)^{7/2}}{m_1m_2(m_1+m_2)}~, \end{equation} where $c$ is the speed of light, $G$ is the gravitational constant, and $m_1$ ($m_2$) is the mass of the primary (secondary) member of the binary. 
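As a back-of-the-envelope numerical illustration of Equation~\ref{eq:eqpeters} (not part of the paper's pipeline), the sketch below evaluates $t_{\rm gw}$ for a Hulse--Taylor-like DNS; the orbital parameters ($a\simeq 1.95\times 10^{9}$~m, $e=0.617$, $m_1=1.441$~M$_\odot$, $m_2=1.387$~M$_\odot$) are illustrative values chosen here, not taken from the text:

```python
import numpy as np

G, c = 6.674e-11, 2.998e8          # gravitational constant, speed of light (SI)
Msun, yr = 1.989e30, 3.156e7       # solar mass [kg], year [s]

def t_gw(a, e, m1, m2):
    """Merging time by GW emission (Peters 1964 scaling of eq. above):
    a in metres, masses in kg; returns years."""
    return (5.0 / 256.0) * (c**5 / G**3) \
        * a**4 * (1.0 - e**2)**3.5 / (m1 * m2 * (m1 + m2)) / yr

m1, m2 = 1.441 * Msun, 1.387 * Msun
t_ecc  = t_gw(1.95e9, 0.617, m1, m2)   # ~3e8 yr: merges well within a Hubble time
t_circ = t_gw(1.95e9, 0.0,   m1, m2)
# A kick raising e from 0 to 0.617 at fixed a shortens t_gw by
# (1 - e^2)^(-7/2) ~ 5.4, illustrating why moderate kicks help mergers.
```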
Equation~\ref{eq:eqpeters} implies that more eccentric binaries have a shorter merging time. Moderate natal kicks do not unbind a binary, but increase its eccentricity, shortening its merging time. Since most binaries evolve through processes which tend to circularize their orbits (e.g. tidal torques, mass transfer and CE phase), the natal kicks are a fundamental ingredient to obtain highly eccentric orbits. This behavior is shown in the left-hand column of Figure~\ref{fig:ecca}, where the initial eccentricity distribution of all DNSs is compared with that of the merging DNSs (here ``initial'' refers to the time when the second NS is formed). A large number of DNSs have initial eccentricity close to zero in run~EC0$\alpha1$ (corresponding to $\sigma_{\rm ECSN}=0$ and $\alpha{}=1$), but only very few of them merge within a Hubble time. Similarly, in run~EC0$\alpha5$ (corresponding to $\sigma_{\rm ECSN}=0$ and $\alpha{}=5$), many DNSs have initial eccentricity close to zero and most of them do not merge within a Hubble time. However, run~EC0$\alpha5$ is also efficient in producing DNSs with non-zero eccentricity, which are able to merge within a Hubble time. In contrast, only a few DNSs with eccentricity close to zero form in the other runs, because of the SN kicks. We note that the second NS originates from an ECSN in the vast majority of DNSs with eccentricity $e\sim{}0$. The right-hand column of Figure~\ref{fig:ecca} compares the distribution of the initial semi-major axis of all DNSs with that of the merging systems. We see that, as $\sigma_{\rm ECSN}$ increases, the widest systems tend to disappear, because they can be disrupted more easily by the natal kicks. 
\subsection{DNS formation channels} \label{sec:formation} From our simulations we find that the most likely formation channel for merging DNSs is consistent with the standard scenario described in \citet{Tauris2017} (see their Figure 1): first the primary star expands and fills its Roche lobe, transferring mass to the companion; then the primary explodes leaving a NS; when the secondary expands, the system enters CE; after CE ejection, the system is composed of a NS and a naked Helium star and the NS starts stripping its companion; the stripped Helium star undergoes a SN explosion, which is most likely an ultra-stripped SN \citep{Tauris2013,Tauris2015,Tauris2017}; the final system is a close DNS which will merge within a Hubble time. Figure~\ref{fig:scenario} shows the fraction of merging DNSs which follow the standard scenario we have just described ($f_{\rm std}$). For $\alpha=5$, $f_{\rm std}$ is nearly independent of the metallicity of the progenitor, while it depends on the natal kicks. At low kicks ($\sigma_{\rm ECSN}\leq 26$~km~s$^{-1}$) $>80$ per cent of merging DNSs form via the standard scenario, while if $\sigma_{\rm ECSN}= 265$~km~s$^{-1}$ the percentage lowers to $\sim{}60-70$ per cent. For $\alpha=1$, $f_{\rm std}$ depends on both the metallicity and the natal kicks. For a given kick distribution, $f_{\rm std}$ is minimum at metallicity $Z\sim 0.0016 - 0.006$ (especially in runs EC0$\alpha1$ and EC265$\alpha1$), while for a fixed metallicity $f_{\rm std}$ is maximum ($\sim 80 - 90$ per cent) for $\sigma{}_{\rm ECSN}=7-26$~km~s$^{-1}$. This behavior confirms that ECSNe are a fundamental process for the formation of DNSs, but what is the fraction of systems undergoing an ECSN? Is the ECSN more frequently the first or the second SN of a merging system? Figure~\ref{fig:fraction} shows the fraction of merging DNSs in which at least one of the two SN explosions is an ECSN. 
Most merging DNSs ($\sim{}50-90$ per cent) undergo at least one ECSN in the vast majority of simulations (EC7$\alpha{}1$, EC15$\alpha{}1$, EC26$\alpha{}1$, EC0$\alpha{}5$, EC7$\alpha{}5$, EC15$\alpha{}5$ and EC26$\alpha{}5$). In the simulation EC0$\alpha{}1$ ($\sigma{}_{\rm ECSN}=0$ and $\alpha{}=1$), ECSNe are important at low metallicity ($Z=0.0002$) and negligible for intermediate and high metallicity. Only in the simulations with large ECSN kicks (runs EC265$\alpha{}1$ and EC265$\alpha{}5$) is the fraction of DNSs undergoing at least one ECSN always less than 50 per cent. Moreover, in simulations with $\alpha=5$ the percentage of DNSs which undergo at least one ECSN increases with the progenitor's metallicity. Overall, we find that the ECSN is the first SN in the vast majority of merging DNSs. Less than $\sim{}10$ per cent of merging DNSs go through an ECSN as second SN, independently of the assumptions about natal kicks and CE efficiency. This result is in agreement with \cite{Chruslinska2017} and \cite{Kruckow2018} (but see \citealt{Tauris2017} for a different argument). This is likely due to the fact that the first SN explosion occurs before other processes (e.g. a CE phase) are able to shrink the binary; therefore the system is less bound and can be more easily disrupted if the natal kick of the newborn NS is too strong. In contrast, the second SN explosion tends to occur after a CE, when the system is usually on a very close and less eccentric orbit, hence it can survive even stronger kicks. Moreover, the fact that the second SN explosion induces a high kick velocity facilitates the formation of highly eccentric orbits, which are more likely to merge via GW emission. The fact that the ECSN is often the first SN occurring in a merging DNS might seem counter-intuitive, because ECSN progenitors are usually less massive than iron CCSN progenitors. 
Indeed, this happens because most merging DNSs originate from very close binary systems, in which the primary has lost a significant fraction of its mass by mass transfer during Roche lobe overflow. Because of mass loss, the primary enters the regime of ECSNe. In contrast, the secondary accretes part of the mass lost by the primary and enters the regime of iron CCSNe. This explains why the first SN is more often an ECSN in the progenitors of merging DNSs. \subsection{GW170817-like systems} Figure~\ref{fig:gwlike} shows the number of GW170817-like systems that form in our simulations. We define as GW170817-like systems those merging DNSs with $M_{\rm rem,1} \in [1.36, 1.60] \ensuremath{\,\textrm{M}_{\odot}}$ and $M_{\rm rem,2} \in [1.17, 1.36] \ensuremath{\,\textrm{M}_{\odot}}$ ($M_{\rm rem,1}$ and $M_{\rm rem,2}$ being the mass of the primary and of the secondary NS, assuming effective spin $\leq{}0.05$, \citealt{Abbott2017b}). Because of its large mass ($1.36 - 1.60 \ensuremath{\,\textrm{M}_{\odot}}$), the most massive component of GW170817-like systems cannot have formed via ECSN. In other words, at least one of the two SNe must be a CCSN, in order to form a GW170817-like system. Figure~\ref{fig:gwlike} shows that at high metallicity ($Z\gtrsim{}0.002$ for $\alpha{}=1$ and $Z\gtrsim{}0.012$ for $\alpha{}=5$) all simulations follow a similar trend independently of the value of $\sigma_{\rm ECSN}$, while for lower metallicities the number of GW170817-like systems becomes sensitive to the value of $\sigma_{\rm ECSN}$. In particular, the higher $\sigma_{\rm ECSN}$ is, the lower the number of GW170817-like systems. Furthermore, in the simulations with $\alpha=5$ the number of GW170817-like systems increases with decreasing metallicity. 
The reason is that at high metallicity the majority of GW170817-like systems form from binaries which undergo two iron CCSNe (see Figure~\ref{fig:fractiongw}), while at low metallicity most of the progenitors pass through at least one ECSN. Figure~\ref{fig:fractiongw} shows that the effect of increasing the value of $\alpha$ is to increase the maximum metallicity at which the majority of GW170817-like systems form through at least one ECSN from $Z\sim 0.002$ ($\alpha=1$) to $Z \sim 0.012$ ($\alpha=5$). \begin{center} \begin{figure} \includegraphics[scale=0.36]{GWlike} \caption{Number of GW170817-like simulated merging DNSs as a function of progenitor's metallicity.\label{fig:gwlike}} \end{figure} \end{center} \begin{center} \begin{figure*} \includegraphics[scale=0.4]{Fraction_gw_EC_all} \caption{Top (Bottom) panels: fraction of GW170817-like systems in which the first (second) SN is an ECSN as a function of progenitor's metallicity. The left-hand (right-hand) panels are for the simulation with $\alpha=1$ ($\alpha=5$). \label{fig:fractiongw}} \end{figure*} \end{center} \subsection{Low kicks in iron core-collapse SNe}\label{sec:alllow} Low natal kicks might occur not only in ECSNe but also in iron CCSNe, especially in the case of low-mass progenitors \citep{Mueller2016} and ultra-stripped SNe \citep{Tauris2017}. Moreover, \cite{Giacobbo2018b} and \cite{Mapelli2018} have shown that population-synthesis simulations can reproduce the local merger rate of DNSs inferred from GW170817 \citep{Abbott2017b} only if low natal kicks ($\lesssim{}50$~km~s$^{-1}$) are assumed for all DNSs. Thus, in this Section we discuss how much the main results of this paper change if we make the extreme assumption that all NSs receive a low natal kick, i.e. $\sigma{}_{\rm ECSN}=\sigma{}_{\rm CCSN}=15$~km~s$^{-1}$. To this purpose, we have considered two additional runs: CC15$\alpha{}1$ and CC15$\alpha{}5$, which have been already presented in \cite{Giacobbo2018b} and \cite{Mapelli2018}. 
Simulation CC15$\alpha{}1$ (CC15$\alpha{}5$) differs from simulation EC15$\alpha{}1$ (EC15$\alpha{}5$) only in the choice of $\sigma{}_{\rm CCSN}$ (see Table~\ref{tab:ecsim}). From Figure~\ref{fig:mergingall} it is apparent that the number of DNSs is about one order of magnitude larger in simulation CC15$\alpha{}1$ (CC15$\alpha{}5$) than in simulation EC15$\alpha{}1$ (EC15$\alpha{}5$). This is not surprising, because a lower CCSN kick implies a lower probability to unbind the system. On the other hand, if we consider only the DNSs merging within a Hubble time, we find an interesting difference. The number of merging DNSs is a factor of ten larger in simulation CC15$\alpha{}1$ than in simulation EC15$\alpha{}1$, whereas the numbers of merging DNSs in simulations CC15$\alpha{}5$ and EC15$\alpha{}5$ are comparable. Moreover, simulations CC15$\alpha{}5$ and EC15$\alpha{}5$ show a significantly different trend with metallicity. As already discussed in \cite{Giacobbo2018b}, there is a strong interplay between the effects of natal kicks and those of CE efficiency. Figure~\ref{fig:fraction} shows that the percentage of merging DNSs which underwent at least one ECSN is dramatically affected by the choice of $\sigma{}_{\rm CCSN}$: less than $\sim{}40$ per cent of all merging DNSs underwent at least one ECSN if $\sigma{}_{\rm CCSN}=15$~km~s$^{-1}$. This difference is particularly strong for the first SN. In fact, the binary system is still quite large at the time of the first SN explosion and can be easily broken by the SN kick. This result has relevant implications for the GW170817-like systems. As shown in Fig.~\ref{fig:fractiongw}, no GW170817-like systems underwent an ECSN as first SN in simulations CC15$\alpha{}1$ and CC15$\alpha{}5$. This comes from the trend we described above, plus the fact that the first SN usually produces the most massive NS of the system in runs CC15$\alpha{}1$ and CC15$\alpha{}5$. 
In our models, the mass of a NS born from ECSN is assumed to be 1.26 M$_\odot{}$, insufficient to match the mass of the primary member of GW170817 (under the assumption of low spins). In contrast, the second SN is always an ECSN in the GW170817-like systems formed in simulation CC15$\alpha{}1$. We stress, however, that this result critically depends on the assumption about the mass of a NS formed via ECSN \citep{Hurley2000,Fryer2012}. \section{Summary} We have investigated the importance of ECSNe on the formation of DNSs. ECSNe are thought to occur frequently in interacting binaries \citep{Podsiadlowski2004,Tauris2017} and to produce relatively small natal kicks \citep{Dessart2006,Jones2013,Schwab2015}. We assumed that natal kicks generated by ECSNe (iron CCSNe) are distributed according to a Maxwellian function with 1D rms $\sigma_{\rm ECSN}$ ($\sigma_{\rm CCSN}$). First, we have assumed $\sigma{}_{\rm CCSN}=265$~km~s$^{-1}$ (according to \citealt{Hobbs2005}) and we have explored five different values of $\sigma{}_{\rm ECSN}=0,$ 7, 15, 26 and 265~km~s$^{-1}$. Moreover, we have also investigated the impact of CE, by considering $\alpha{}=1$ and $\alpha{}=5$. We find that the number of simulated DNSs scales inversely with $\sigma_{\rm ECSN}$. In particular, the largest (smallest) number of DNSs form if $\sigma{}_{\rm ECSN}=0$ ($\sigma{}_{\rm ECSN}=265$~km~s$^{-1}$). This effect is maximum for $\alpha{}=1$, while it is only mild for $\alpha{}=5$. The number of DNSs merging within a Hubble time also depends on $\sigma_{\rm ECSN}$, but with a rather different trend depending on the assumed value for $\alpha$. For $\alpha=5$, the number of merging systems follows the same trend as the total number of DNSs. For $\alpha=1$ the number of DNS mergers is maximum for $\sigma{}_{\rm ECSN}=7-26$~km~s$^{-1}$, while it drops by a factor of $\sim{}3-10$ if $\sigma_{\rm ECSN}=0$ and if $\sigma_{\rm ECSN}=265$~km~s$^{-1}$. 
The reason is that very large kicks ($\sigma_{\rm ECSN}=265$~km~s$^{-1}$) completely break the binary, while moderate kicks ($\sigma{}_{\rm ECSN}=7-26$~km~s$^{-1}$) leave the binary bound but increase its eccentricity. A larger eccentricity implies a shorter timescale for merger by GW emission, as shown by \cite{Peters1964}. In contrast, null natal kicks produce a large number of systems with zero initial eccentricity, which have longer merger times. A large percentage ($\sim{}50-90$ per cent) of merging DNSs undergo at least one ECSN explosion in most of our simulations. This percentage drops below 40 per cent only if $\sigma{}_{\rm ECSN}=265$~km~s$^{-1}$, or if $\sigma{}_{\rm CCSN}=15$~km~s$^{-1}$, or if $\sigma{}_{\rm ECSN}=0$~km~s$^{-1}$, $\alpha{}=1$ and $Z>0.0002$. In the majority of merging DNSs, the ECSN is the first SN occurring in the binary. This happens because, in most cases, the first SN occurs before the binary has shrunk significantly (e.g. by CE) and is easily broken if the kick is too strong. Moreover, we have selected the simulated DNSs whose mass matches that of GW170817. We call these systems GW170817-like systems. At high metallicity ($Z \gtrsim 0.002$ for $\alpha=1$ and $Z \gtrsim 0.012$ for $\alpha=5$) the formation of GW170817-like systems is independent of $\sigma{}_{\rm ECSN}$, because most GW170817-like systems form through iron CCSNe, while for lower metallicity most GW170817-like systems undergo at least one ECSN and their statistics depend on $\sigma{}_{\rm ECSN}$. Finally, we have considered an extreme case in which not only ECSNe but also CCSNe are associated with low kicks, by imposing $\sigma{}_{\rm CCSN}=\sigma_{\rm ECSN}=15$~km~s$^{-1}$. \cite{Mapelli2018} and \cite{Giacobbo2018b} suggest that this extreme assumption is necessary to match the local DNS merger rate density inferred from GW170817 \citep{Abbott2017b}. 
The number of simulated DNSs increases by a factor of ten if we assume $\sigma{}_{\rm CCSN}=15$~km~s$^{-1}$, because fewer binary systems are disrupted by the first SN explosion. Moreover, this assumption strongly suppresses the percentage of merging DNSs (especially GW170817-like systems) which evolved through an ECSN as first SN. These results confirm the importance of natal kicks to understand the properties of merging DNSs. \section*{Acknowledgements} We thank the referee, Samuel Jones, for his critical reading which significantly improved this work. The authors are grateful to Alessandro Bressan, Mario Spera and Chris Pankow for useful discussions. Numerical calculations have been performed through a CINECA-INFN agreement and through a CINECA-INAF agreement, providing access to resources on GALILEO and MARCONI at CINECA. NG acknowledges financial support from Fondazione Ing. Aldo Gini and thanks the Institute for Astrophysics and Particle Physics of the University of Innsbruck for hosting him during the preparation of this paper. MM acknowledges financial support from the MERAC Foundation through grant `The physics of gas and protoplanetary discs in the Galactic centre', from INAF through PRIN-SKA `Opening a new era in pulsars and compact objects science with MeerKat', from MIUR through Progetto Premiale 'FIGARO' (Fostering Italian Leadership in the Field of Gravitational Wave Astrophysics) and 'MITiC' (MIning The Cosmos: Big Data and Innovative Italian Technology for Frontier Astrophysics and Cosmology), and from the Austrian National Science Foundation through FWF stand-alone grant P31154-N27 `Unraveling merging neutron stars and black hole - neutron star binaries with population-synthesis simulations'. This work benefited from support by the International Space Science Institute (ISSI), Bern, Switzerland, through its International Team programme ref. no. 393 {\it The Evolution of Rich Stellar Populations \& BH Binaries} (2017-18). \bibliographystyle{mnras}
\section*{Introduction} Let us start by recalling the algebraic interpretation of the integration of a vector field. Let $X$ be a complex algebraic variety and $\chi$ an algebraic vector field on $X$, or, equivalently, a $\mathbb{C}$-derivation $\delta:{\EuScript O}_X\to {\EuScript O}_X$ of the sheaf of regular functions. Let us denote by $X[t]= \mathbb{A}^1_{\mathbb{C}} \times X$, $\mathbb{C}[\varepsilon] = \mathbb{C}[t]/(t^2)$, $X[\varepsilon] = \Spec \mathbb{C}[\varepsilon] \times X$ and $\overline{\delta}:X[\varepsilon] \to X$ the map of schemes determined by (and determining) $\delta$: any section $f$ of ${\EuScript O}_X$ is mapped to the section $f + \delta(f)\varepsilon$ of ${\EuScript O}_X[\varepsilon]$. \medskip If $X$ is nonsingular, we can consider the flow $\Theta:{\cal U} \to X^{\rm an}$ associated with $\chi^{\rm an}$, where ${\cal U}\subset X[t]^{\rm an}=\mathbb{C} \times X^{\rm an}$ is an open neighbourhood of ${\cal X}=\{0\}\times X^{\rm an}$. It turns out that for any holomorphic (or algebraic) function $f$ on an open set $V\subset X^{\rm an}$, the function $\Theta^*(f)=f{\scriptstyle \,\circ\,} \Theta$ is given by $$ (t,p)\in \Theta^{-1}(V)\subset \mathbb{C} \times X^{\rm an} \mapsto \sum_{i=0}^\infty t^i \frac{\delta^i(f)}{i!}(p)\in \mathbb{C}$$ for $|t|$ small enough. Hence, the formal completion of $\Theta$ along $\cal X$, $\widehat{\Theta}:\widehat{\cal U}=\widehat{X[t]^{\rm an}} \to X^{\rm an}$, comes from the purely (formal) algebraic map $\widehat{X[t]} \to X$ associated with the {\em exponential map} $e^{t\delta}:{\EuScript O}_X \to {\EuScript O}_X[[t]]$ attached to $\chi$ (or to $\delta$) defined as $$e^{t\delta}(f) = \sum_{i=0}^\infty t^i \frac{\delta^i(f)}{i!}$$ for any regular function $f$ on some Zariski open set of $X$. 
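As a quick sanity check (an illustrative computation with a computer algebra system, not part of the text), one can verify that the truncated exponential map is multiplicative modulo $t^{n+1}$; here we take the derivation $\delta = x^2\,\mathrm{d}/\mathrm{d}x$ on the polynomial ring in $x$ and truncate at order $3$:

```python
import sympy as sp

x, t = sp.symbols('x t')
delta = lambda f: x**2 * sp.diff(f, x)      # the derivation x^2 d/dx

def exp_t_delta(f, order):
    """Truncation of the exponential map e^{t delta}(f) = sum_i t^i delta^i(f)/i!."""
    out, h = sp.Integer(0), f
    for i in range(order + 1):
        out += t**i * h / sp.factorial(i)
        h = delta(h)
    return sp.expand(out)

f, g = x + 1, x**2
# e^{t delta}(fg) and e^{t delta}(f) * e^{t delta}(g) agree mod t^4,
# i.e. the truncated exponential map is a ring homomorphism A -> A[[t]]/(t^4):
d = sp.expand(exp_t_delta(f * g, 3) - exp_t_delta(f, 3) * exp_t_delta(g, 3))
low_order_coeffs = [sp.simplify(d.coeff(t, i)) for i in range(4)]
```

The vanishing of the low-order coefficients is exactly the Leibniz-type identity $\delta^i(fg)/i! = \sum_{r+s=i} (\delta^r f/r!)(\delta^s g/s!)$, which fails to make sense in positive characteristic because of the division by $i!$.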
\medskip The exponential map $e^{t\delta}$ is a lifting of $\delta$ (it coincides with $\overline{\delta}$ $\mod t^2$) and it can be regarded as the algebraic incarnation of the integration of the vector field $\chi$.\medskip The exponential map of a vector field makes sense not only over the complex numbers, but over any field of characteristic zero, and in fact it also works even if $X$ is singular. However, it does not make sense over a field $k$ of positive characteristic. \medskip Nevertheless, the notion of {\em Hasse--Schmidt derivation} allows us to define what integrability means for a vector field in such a case (see \cite{brown_1978,mat-intder-I}). Given a commutative ring $k$ and a commutative $k$-algebra $A$, a Hasse--Schmidt derivation of $A$ over $k$ (of length $\infty$) is a sequence $D=(\Id,D_1,D_2,D_3,\dots)$ of $k$-linear operators of $A$ which appear as the coefficients of a $k$-algebra map $\Phi:A \to A[[t]]$ such that $\Phi(a) \equiv a \mod t$ for all $a\in A$: $\Phi(a) = a + D_1(a) t + D_2(a) t^2 + \cdots$. That property is equivalent to the fact that the $D_i$ satisfy the Leibniz equality: $$ D_0 =\Id,\quad D_i(ab) = \sum_{r+s=i} D_r(a) D_s(b)\quad \forall a,b\in A,\ \forall i\geq 1.$$ A $k$-linear derivation $\delta:A\to A$ is said to be ($\infty$-)integrable if there is a Hasse--Schmidt derivation $D$ of $A$ over $k$ (of length $\infty$) such that $D_1=\delta$, or in other words, if the $k$-algebra map $\overline{\delta}: a\in A \mapsto a + \delta(a) \varepsilon \in A[\varepsilon]= A[[t]]/(t^2)$ can be lifted up to a $k$-algebra map $\Phi: A \to A[[t]]$. The set of $k$-linear derivations of $A$ which are integrable is a submodule of $\Der_k(A)$, which is denoted by $\Ider_k(A)$. \medskip When $A$ is a smooth $k$-algebra over an arbitrary commutative ring $k$ or when $k$ contains the rational numbers, any $k$-linear derivation $\delta:A\to A$ is ($\infty$-)integrable. 
The modules $\Ider_k(A)$, and more generally, the Hasse--Schmidt derivations of $A$ over $k$ seem to play an important role among the differential structures in Commutative Algebra and Algebraic Geometry (see \cite{vojta_HS}, \cite{nar_2009}). They behave better in positive characteristic than $\Der_k(A)$ (see for instance \cite{molinelli_1979} or \cite{seiden_1966}) and one expects that they can help to understand (some of) the differences between singularities in zero and non-zero characteristics, but they are difficult to deal with. For instance, it is not clear at all that ($\infty$-)integrability is a local property (in the sense that it can be tested locally at the prime ideals of $A$). \medskip For a given positive integer $m$, the $m$-integrability of a $k$-linear derivation $\delta:A\to A$ is defined as the existence of a $k$-algebra map $\Phi: A \to A[[t]]/(t^{m+1})$ lifting the map $\overline{\delta}$ defined above. The set of $k$-linear derivations of $A$ which are $m$-integrable is a submodule of $\Der_k(A)$, which is denoted by $\Ider_k(A;m)$. One obviously has $\Der_k(A)=\Ider_k(A;1) \supset \Ider_k(A;2) \supset \Ider_k(A;3) \supset \cdots \supset \Ider_k(A;\infty)=\Ider_k(A)$.\medskip This paper is devoted to the study of the modules $\Ider_k(A;m)$, for $m\geq 1$. \medskip One of the main difficulties when dealing with $m$-integrability of a derivation is that one cannot proceed step by step: a derivation $\delta$ can be $(m+r)$-integrable, but it may have an intermediate $m$-integral $D=(\Id,D_1=\delta,D_2,\dots,D_m)$ which does not extend to a Hasse--Schmidt derivation of length $(m+r)$ (cf. Example 3.7 in \cite{nar_2009}). \smallskip Our main results are the following: \smallskip \noindent (I) If $A$ is a finitely presented $k$-algebra and $m$ is a positive integer, then the property of being $m$-integrable for a $k$-derivation $\delta$ of $A$ is a local property, i.e. 
$\delta$ is $m$-integrable if and only if the induced derivation $\delta_{\mathfrak{p}}:A_{\mathfrak{p}} \to A_{\mathfrak{p}} $ is $m$-integrable for each prime ideal $\mathfrak{p}\subset A$. As a consequence, for any locally finitely presented morphism of schemes $f:X \to S$ and any positive integer $m$, the $S$-derivations of $X$ which are locally $m$-integrable form a quasi-coherent submodule $\fIder_S({\EuScript O}_X;m)\subset \fDer_S({\EuScript O}_X)$ such that, for any affine open sets $U=\Spec A \subset X$ and $V=\Spec k \subset S$, with $f(U)\subset V$, we have $\Gamma(U,\fIder_S({\EuScript O}_X;m)) =\Ider_k(A;m)$ and $\fIder_S({\EuScript O}_X;m)_p = \Ider_{{\EuScript O}_{S,f(p)}}({\EuScript O}_{X,p};m)$ for each $p\in X$ (see Theorem \ref{teo:criter-loc-ider-fp} and Corollary \ref{cor:Ider_fp_schemes}). We have then a decreasing sequence of quasi-coherent modules $$\fDer_S({\EuScript O}_X)= \fIder_S({\EuScript O}_X;1) \supset \fIder_S({\EuScript O}_X;2) \supset \fIder_S({\EuScript O}_X;3)\supset \cdots$$ and all the quotients $\fDer_S({\EuScript O}_X)/\fIder_S({\EuScript O}_X;m)$ are supported by the non-smooth\-ness locus of $f:X\to S$. \smallskip \noindent (II) For a given $k$-algebra $A$ and for any positive integer $m$, there is a constructive procedure to see whether \underline{all} $k$-derivations of $A$ are $m$-integrable or not. In particular, if $A$ and $k$ are ``computable'' rings, then the above procedure becomes an effective algorithm (although of exponential complexity with respect to $m$) to decide whether the equality $\Ider_k(A;m)=\Der_k(A)$ is true or not (see \ref{subsect:algo}). \smallskip Let us now comment on the content of this paper. \smallskip In section 1 we review the notion of Hasse--Schmidt derivation and its basic properties. We study logarithmic Hasse--Schmidt derivations with respect to an ideal $I$ of some ambient algebra $A$ and their relationship with Hasse--Schmidt derivations of the quotient $A/I$. 
In the last part we focus on the description of Hasse--Schmidt derivations on polynomial or power series algebras. \smallskip Section 2 contains the main results of this paper. First, we define $m$-integrability and logarithmic $m$-integrability and give a characterization of $(m+1)$-integrability for a Hasse--Schmidt derivation of length $m$. In section \ref{subsect:jacob} we give some criteria for a derivation to be integrable, based on and extending previous results of \cite{mat-intder-I} and \cite{traves-2000}. Next, we study the behaviour of $m$-integrability under localization, for finite $m$, and prove (I) above. In the last part we prove the results needed to justify procedure (II) above. \smallskip In Section 3 we first compute some concrete examples and illustrate the nonlinear equations one encounters when computing systems of generators of the modules $\Ider_k(A;m)$. In the second part we state some questions, which seem to be important for understanding the relationship between the modules of $m$-integrable derivations and singularities. \medskip I would like to thank Herwig Hauser and Orlando Villamayor for many helpful and inspiring discussions, and also Herwig Hauser for proposing the last two examples in section 3. I would also like to thank Herwig Hauser and Eleonore Faber for some comments and suggestions on a previous version of this paper. \section{Notations and preliminaries} \subsection{Notations} Throughout the paper we will use the following notations: \smallskip \noindent -) $k$ will be a commutative ring and $A$ a commutative $k$-algebra.\smallskip \noindent -) $\mathbb{N}_+ := \{n\in\mathbb{N}\ |\ n\geq 1\}$, $\overline{\mathbb{N}} := \mathbb{N} \cup \{\infty\}$, $\overline{\mathbb{N}}_+ := \mathbb{N}_+ \cup \{\infty\}$.\smallskip \noindent -) If $n\in \mathbb{N}_+$, $[n]:=\{0,1,\dots,n\}$, $[n]_+:=[n]\cap \mathbb{N}_+$ and $[\infty]:=\mathbb{N}$. 
\smallskip \noindent -) If $n\in \mathbb{N}_+$, $A_n :=A[[t]]/(t^{n+1})$ and $A_\infty =A[[t]]$. Each $A_n$ is an augmented $A$-algebra, the augmentation ideal $\ker (A_n \to A)$ being generated by $t$.\smallskip \noindent -) For $n\in \overline{\mathbb{N}}_+$ and $m\in [n]_+$, let us denote by $\pi_{nm}: A_n \to A_m$ the natural epimorphism of augmented $A$-algebras.\smallskip \noindent -) If $\alpha =(\alpha_1,\dots,\alpha_d)\in\mathbb{N}^d$, $\supp \alpha = \{r\in\{1,\dots,d\}\ |\ \alpha_r\neq 0\}$ and $|\alpha|:= \alpha_1+\cdots+\alpha_d$. \smallskip \noindent -) The ring of $k$-linear differential operators of $A$ will be denoted by $\diff_{A/k}$ (see \cite{ega_iv_4}).\smallskip \noindent -) For $A=k[x_1,\dots,x_d]$ or $A=k[[x_1,\dots,x_d]]$, we will denote by $\partial_r:A\to A$ the partial derivative with respect to $x_r$. \subsection{Hasse-Schmidt derivations} In this section we remind the definition and basic facts of Hasse--Schmidt derivations (see \cite{has37},\cite{mat_86}, \S 27, and \cite{traves-phd}, \cite{vojta_HS}, \cite{nar_2009} for more recent references). We also introduce the basic constructions that will be used throughout the paper. \begin{definicion} A {\em Hasse--Schmidt derivation} of $A$ (over $k$) of length $n\geq 1$ (resp. of length $\infty$) is a sequence $D=(D_i)_{i\in [n]}$ of $k$-linear maps $D_i:A \longrightarrow A$, satisfying the conditions: $$ D_0=\Id_A, \quad D_i(xy)=\sum_{r+s=i}D_r(x)D_s(y) $$ for all $x,y \in A$ and for all $i\in [n]$. We denote by $\HS_k(A;n)$ the set of all Hasse--Schmidt derivations of $A$ (over $k$) of length $n\in \overline{\mathbb{N}}$ and $\HS_k(A)=\HS_k(A;\infty)$. \end{definicion} \noindent \refstepcounter{numero}\noindent {\sc \thenumero\ } The $D_1$ component of any Hasse-Schmidt derivation $D\in\HS_k(A;n)$ is a $k$-derivation of $A$. More generally, the $D_i$ component is a $k$-linear differential operator of order $\leq i$ with $D_i(1)=0$ for $i=1,\dots,n$. 
\smallskip \noindent\refstepcounter{numero}\noindent {\sc \thenumero\ } Any Hasse--Schmidt derivation $D\in\HS_k(A;n)$ is determined by the $k$-algebra homomorphism $\Phi_D: A \to A_n$ defined by $\Phi_D(a) = \sum_{i=0}^n D_i(a)t^i$ and satisfying $\Phi_D(a)\equiv a \mod t$. The $k$-algebra homomorphism $\Phi_D$ can be uniquely extended to a $k$-algebra automorphism $\widetilde{\Phi}_D: A_n \to A_n$ with $\widetilde{\Phi}_D(t)=t$: $$ \widetilde{\Phi}_D\left(\sum_{i=0}^n a_i t^i\right) = \sum_{i=0}^n \Phi(a_i) t^i.$$ So, there is a bijection between $\HS_k(A;n)$ and the subgroup of $\Aut_{k-\text{alg}}(A_n)$ consisting of the automorphisms $\widetilde{\Phi}$ satisfying $\widetilde{\Phi}(a) \equiv a \mod t$ for all $a\in A$ and $\widetilde{\Phi}(t)=t$. In particular, $\HS_k(A;n)$ inherits a canonical group structure which is explicitly given by $D{\scriptstyle \,\circ\,} D' = D''$ with $ D''_{l} = \sum_{i+j=l} D_i {\scriptstyle \,\circ\,} D'_j$, the identity element of $\HS_k(A;n)$ being $(\Id_A,0,0,\dots)$. It is clear that the map $(Id_A,D_1)\in \HS_k(A;1) \mapsto D_1 \in \Der_k(A)$ is an isomorphism of groups, where we consider the addition as internal operation in $\Der_k(A)$. \smallskip \noindent \refstepcounter{numero}\noindent {\sc \thenumero\ } For any $a\in A$ and any $D\in\HS_k(A;n)$, the sequence $a{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} D$ defined by $(a{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} D)_i=a^i D_i$, $i\in [n]$, is again a Hasse--Schmidt derivation of $A$ over $k$ of length $n$ and $\Phi_{a{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} D}(b)(t) = \Phi_D(b)(at)$ for all $b\in A$. We have $(a a'){\hspace{.1em}\scriptsize\bullet\hspace{.1em}} D= a{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} (a'{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} D)$, $1{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} D=D$ and $0{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} D=$ the identity element. 
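The group law above is easy to experiment with in a computer algebra system. The following sketch (illustrative code, with ad hoc names; it works in characteristic zero) builds two length-2 Hasse--Schmidt derivations from ordinary derivations via the exponential coefficients $D_i=\delta^i/i!$, composes them by the formula $D''_{l} = \sum_{i+j=l} D_i {\scriptstyle \,\circ\,} D'_j$, and checks that the composite still satisfies the Leibniz equalities:

```python
import sympy as sp

x = sp.symbols('x')

def hs_from_derivation(delta, length):
    """The Hasse-Schmidt derivation (delta^i / i!)_{i <= length} attached
    to a derivation delta (coefficients of its exponential map, char 0)."""
    def component(i):
        def op(f):
            h = f
            for _ in range(i):
                h = delta(h)
            return sp.expand(h / sp.factorial(i))
        return op
    return [component(i) for i in range(length + 1)]

def compose(D, Dp):
    """Group law on HS derivations: (D o D')_l = sum_{i+j=l} D_i o D'_j."""
    n = len(D) - 1
    return [lambda f, l=l: sp.expand(sum(D[i](Dp[l - i](f)) for i in range(l + 1)))
            for l in range(n + 1)]

D   = hs_from_derivation(lambda f: sp.diff(f, x), 2)       # from d/dx
Dp  = hs_from_derivation(lambda f: x * sp.diff(f, x), 2)   # from x d/dx
Dpp = compose(D, Dp)

# Leibniz equality for the i = 2 component of the composite:
f, g = x**2, x**3
lhs = Dpp[2](f * g)
rhs = sp.expand(sum(Dpp[r](f) * Dpp[2 - r](g) for r in range(3)))
```

That the check succeeds reflects the identification of $\HS_k(A;n)$ with a subgroup of $\Aut_{k-\text{alg}}(A_n)$: composing the associated automorphisms again yields an algebra map, hence the Leibniz equalities for $D''$.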
\smallskip \noindent\refstepcounter{numero}\noindent {\sc \thenumero\ } \label{nume:truncation-inverse-limit} For $1\leq m \leq n\in \overline{\mathbb{N}}$, let us denote by $\tau_{nm}: \HS_k(A;n) \to \HS_k(A;m)$ the {\em truncation} map defined in the obvious way. One has $\Phi_{\tau_{nm} D} = \pi_{nm}{\scriptstyle \,\circ\,} \Phi_D$. Truncation maps are group homomorphisms and they satisfy $\tau_{nm}(a{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} D)=a{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} \tau_{nm}D$. It is clear that the group $\displaystyle \HS_k(A;\infty)$ is the inverse limit of the groups $\HS_k(A;m)$, $m\in\mathbb{N}$. \begin{definicion} Let $q\geq 1$ be an integer or $q=\infty$, and $D\in\HS_k(A;q)$. For each integer $m\geq 1$ we define $D[m]$ as the Hasse--Schmidt derivation (over $k$) of length $mq$ determined by the $k$-algebra map obtained by composing the following maps: $$ A \xrightarrow{\Phi_D} A_q = A[[t]]/(t^{q+1}) \xrightarrow{\overline{t} \mapsto \overline{t}^m} A_{mq}= A[[t]]/(t^{mq+1}).$$ In the case $q=1$ and $D=(\Id_A,\delta)$, we simply denote $\delta[m]:=D[m]$. \end{definicion} If $D=(\Id_A,D_1,D_2,\dots)\in\HS_k(A;q)$, then $$D[m] =(\Id_A,0,\dots,0,\stackrel{\scriptsize\underbrace{m}}{D_1},0,\dots,0, \stackrel{\scriptsize\underbrace{2m}}{D_2},0,\dots)\in \HS_k(A;mq).$$ The map $ D\in \HS_k(A;q) \mapsto D[m] \in \HS_k(A;qm)$ is a group homomorphism and we have $(a^m{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} D)[m] = a{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} D[m]$, $(\tau_{qq'}D)[m] = \tau_{qm,q'm}(D[m])$ for $a\in A, 1\leq q'\leq q$. \begin{definicion} \label{def:ell} For each $n\in\overline{\mathbb{N}}_+$ and each $E\in\HS_k(A;n)$, we denote $\ell(E)=0$ if $E_1\neq 0$, $\ell(E)=n$ if $E$ is the identity, and otherwise $\ell(E)$ is the maximum of the $r\in [n]$ such that $E_1=\cdots=E_r=0$. \end{definicion} \begin{definicion} Let $I\subset A$ be an ideal and $m\in\overline{\mathbb{N}}_+$.
We say that: \begin{enumerate} \item[1)] A $k$-derivation $\delta:A\to A$ is {\em $I$-logarithmic} if $\delta(I)\subset I$. The set of $k$-linear derivations of $A$ which are $I$-logarithmic is denoted by $\Der_k(\log I)$. \item[2)] A Hasse--Schmidt derivation $D\in\HS_k(A;m)$ is called {\em $I$-logarithmic} if $D_i(I)\subset I$ for any $i\in [m]$. The set of Hasse--Schmidt derivations $D\in\HS_k(A;m)$ which are $I$-logarithmic is denoted by $\HS_k(\log I;m)$. When $m=\infty$ it will be simply denoted by $\HS_k(\log I)$. \end{enumerate} \end{definicion} The set $\Der_k(\log I)$ is obviously an $A$-submodule of $\Der_k(A)$. Any $\delta\in \Der_k(\log I)$ gives rise to a unique $\overline{\delta}\in \Der_k(A/I)$ satisfying $\overline{\delta}{\scriptstyle \,\circ\,} \pi = \pi {\scriptstyle \,\circ\,} \delta$, where $\pi:A\to A/I$ is the natural projection. Moreover, if $A=k[x_1,\dots,x_d]$ or $A=k[[x_1,\dots,x_d]]$, the sequence of $A$-modules \begin{equation*} 0 \to I\Der_k(A) \xrightarrow{\text{incl.}} \Der_k(\log I) \xrightarrow{\delta\mapsto \overline{\delta}} \Der_k(A/I)\to 0 \end{equation*} is exact. \smallskip \refstepcounter{numero}\noindent {\sc \thenumero\ } \label{num:HS-induced-by-HS-log} In the same vein, the set $\HS_k(\log I;m)$ is a subgroup of $\HS_k(A;m)$ and we have $A{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} \HS_k(\log I;m)\subset \HS_k(\log I;m)$, $\HS_k(\log I;m)[n]\subset \HS_k(\log I;mn)$, $n\in\mathbb{N}$. A $D\in \HS_k(A;m)$ is $I$-logarithmic if and only if its corresponding $k$-algebra homomorphism $\Phi_D:A\to A_m$ satisfies $\Phi_D(I) \subset I_m:= \ker \pi_m$, where $\pi_m:A_m \to \left(A/I\right)_m$ is the natural projection\footnote{Observe that $\ker \pi_m = I A_m$ when $I$ is finitely generated or $m$ is finite.}.
Moreover, an $I$-logarithmic Hasse--Schmidt derivation $D\in \HS_k(\log I;m)$ gives rise to a unique $\overline{D}\in \HS_k(A/I;m)$ such that $\overline{D}_i{\scriptstyle \,\circ\,} \pi = \pi {\scriptstyle \,\circ\,} D_i$ for all $i\in [m]$, and the following diagram is commutative \begin{equation*} \xymatrix{ A \ar[r]^{\Phi_D} \ar[d]^{\pi} & A_m \ar[d]^{\pi_m} \\ A/I \ar[r]^{\Phi_{\overline{D}}} & \left(A/I\right)_m.} \end{equation*} The map $\Pi_m: D\in \HS_k(\log I;m) \mapsto \overline{D} \in \HS_k(A/I;m)$ is clearly a homomorphism of groups and $\Pi_m(a{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} D)= \pi(a){\hspace{.1em}\scriptsize\bullet\hspace{.1em}} \Pi_m(D)$. So, its kernel contains the subgroup $I{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} \HS_k(A;m)$ generated by the $a{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} E$, with $a\in I$ and $E\in\HS_k(A;m)$. It is also clear that $\tau_{mn}{\scriptstyle \,\circ\,} \Pi_m = \Pi_n {\scriptstyle \,\circ\,} \tau_{mn}$ and $(\Pi_m D)[n]= \Pi_{mn}(D[n])$. \smallskip \noindent \refstepcounter{numero}\noindent {\sc \thenumero\ } \label{nume:basic-localiz} Let $S\subset A$ be a multiplicative set. For each $k$-linear differential operator $P:A\to A$, let us denote by $\widetilde{P}:S^{-1}A \to S^{-1}A$ its canonical extension. We know that the map $P\in\diff_{A/k} \mapsto \widetilde{P}\in\diff_{S^{-1}A/k}$ is a ring homomorphism. Let $m\geq 1$ be an integer or $m=\infty$ and $\mathfrak{a}\subset A$ an ideal.
Here is a summary of the basic facts of the behaviour of Hasse-Schmidt derivations under localization: \smallskip \noindent -) For any $D=(D_i)\in\HS_k(A;m)$, the sequence $\widetilde{D}:= (\widetilde{D_i})$ is a Hasse-Schmidt derivation of $S^{-1}A$ (over $k$ of length $m$) and the following diagram is commutative \begin{equation*} \xymatrix{ A \ar[r]^{\Phi_D} \ar[d]^{\text{can.}} & A_m \ar[d]^{\text{can.}} \\ S^{-1}A \ar[r]^{\Phi_{\widetilde{D}}} & (S^{-1}A)_m.} \end{equation*} Moreover, if $D$ is $\mathfrak{a}$-logarithmic, then $\widetilde{D}$ is $(S^{-1}\mathfrak{a})$-logarithmic.\smallskip \noindent -) The map $\Theta_m:D\in \HS_k(A;m) \to \widetilde{D}\in \HS_k(S^{-1} A;m)$ is a group homomorphism, $\Theta_m(a{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} D) = \frac{a}{1} {\hspace{.1em}\scriptsize\bullet\hspace{.1em}}\Theta_m(D)$ and the following diagram is commutative: \begin{equation*} \xymatrix{ \HS_k(\log \mathfrak{a};m) \ar[r]^{\Theta_m} \ar[d]^{\Pi_m} & \HS_k(\log (S^{-1}\mathfrak{a});m) \ar[d]^{\Pi_m} \\ \HS_k(A/\mathfrak{a};m) \ar[r]^{\Theta_m} & \HS_k(S^{-1}A/S^{-1}\mathfrak{a};m).} \end{equation*} Moreover, $\tau_{mn}{\scriptstyle \,\circ\,} \Theta_m = \Theta_n {\scriptstyle \,\circ\,} \tau_{mn}$ and $(\Theta_m D)[n]= \Theta_{mn}(D[n])$. \smallskip The extension of Hasse--Schmidt derivations to rings of fractions is a particular case of the formally \'etale extensions (cf. \cite{masson_PhD} and \cite{traves-2000}, th. 1.5). \subsection{Hasse--Schmidt derivations of polynomial or formal power series algebras} Throughout this section we assume that $A=k[x_1,\dots,x_d]$ or $A=k[[x_1,\dots,x_d]]$. The {\em Taylor differential operators} $\Delta^{(\alpha)}:A\to A$, $\alpha\in\mathbb{N}^d$, are defined by: $$ g(x_1+T_1,\dots,x_d+T_d) =\sum \Delta^{(\alpha)}(g) T^\alpha,\quad \forall g\in A.$$ It is well known that $\{\Delta^{(\alpha)}\}_{|\alpha|\leq i}$ is a basis of the left (resp. 
right) $A$-module of $k$-linear differential operators of $A$ of order $\leq i$. So, if $D\in \HS_k(A;m)$, there are unique $C^i_{\alpha}\in A$, $\alpha\in\mathbb{N}^d$, $0<i\leq |\alpha|\in [m]_+$, such that $D_i = \sum_{0<|\alpha|\leq i} C^i_{\alpha} \Delta^{(\alpha)}$, $i\in [m]_+$. On the other hand, there are unique $c_{ri}\in A$, $i\in [m]_+$, $1\leq r\leq d$, such that $$\Phi_D(x_r)= x_r + \sum_{i=1}^m c_{ri} t^i,\quad 1\leq r\leq d.$$ In fact, any system of $c_{ri}\in A$, $i\in [m]_+$, $1\leq r\leq d$, determines uniquely such a homomorphism of $k$-algebras $A\to A_m$ and so a Hasse--Schmidt derivation $D\in \HS_k(A;m)$. \medskip The following proposition gives the relationship between the $C^i_{\alpha}$ and the $c_{ri}$ above. Its proof does not contain any surprise and it is left up to the reader. \begin{proposicion} \label{prop:formulon} With the above notations, the following properties hold: \begin{enumerate} \item[1)] $c_{ri}= D_i(x_r) = C^i_{e_r}$, with $e_r = (0,\dots,\stackrel{\scriptsize\underbrace{r}}{1},\dots,0)$, for all $i\in [m]_+$, $r=1,\dots,d$. \item[2)] $$C_\alpha^i=\sum_{\substack{\{\varepsilon_r\}_{r\in\supp \alpha}\\ \varepsilon_r\geq \alpha_r, |\varepsilon|=i}} \left( \prod_{r\in \supp \alpha} \left( \sum_{\substack{\beta_1+\cdots+\beta_{\alpha_r}=\varepsilon_r\\ \beta_k> 0}} \prod_{k=1}^{\alpha_r} c_{r,\beta_k}\right)\right)$$ for all $\alpha\in\mathbb{N}^d$, $|\alpha|\in [m]_+$, $0<i\leq |\alpha|$. \end{enumerate} \end{proposicion} The above proposition is a particular case of Theorem 2.8 in \cite{magda_nar_hs}. For the sake of completeness we include, without proof, the following result. \begin{proposicion} Let $C^i_{\alpha}\in A$, $\alpha\in\mathbb{N}^d$, $0<i\leq |\alpha|\in [m]_+$, be a system of elements of $A$ and define $D_0=\Id_A$, $D_i = \sum_{0<|\alpha|\leq i} C^i_{\alpha} \Delta^{(\alpha)}$, $i\in [m]_+$. 
The following properties are equivalent: \begin{enumerate} \item[(a)] The sequence $D=(D_i)_{i\in [m]}$ is a Hasse--Schmidt derivation of $A$ over $k$ of length $m$. \item[(b)] For all $i\in [m], i\geq 2$, for all $\varrho\in\mathbb{N}^d$ with $2\leq |\varrho|\leq i$ and for all $\beta,\gamma\in\mathbb{N}^d$ with $\varrho=\beta+\gamma$, $|\beta|,|\gamma| >0$ we have $\displaystyle C^i_{\varrho}= \sum C^j_{\beta} C^l_{\gamma}$, where the summation indexes are the $(j,l)$ with $j\geq |\beta|, l\geq |\gamma|$ and $j+l=i$. \end{enumerate} \end{proposicion} Let us notice that, if the equivalent properties of the preceding proposition hold, then the $C^i_{\alpha}$ with $2\leq |\alpha|\leq i$ are determined by the $C^j_{\beta}$ with $1\leq |\beta|\leq j\leq i-1$. This applies in particular to the symbol of the $D_i$, $\sigma(D_i)=\sum_{|\alpha|=i} C^i_{\alpha} \xi^{\alpha}$, which only depend on $D_1$ (compare with Proposition 2.6 in \cite{nar_2009}). \begin{definicion} \label{def:Taylor-HS} The {\em Taylor Hasse-Schmidt derivations} of $A$ are the $$\Delta^{(s)} := (\Id_A,\Delta_1^{(s)},\Delta_2^{(s)}, \Delta_3^{(s)},\dots ) \in \HS_k(A),\quad 1\leq s\leq d,$$ where $\Delta^{(s)}_i = \Delta^{(0,\dots,\stackrel{\scriptsize\underbrace{s}}{i},\dots,0)}$ for each $i\geq 1$. \end{definicion} \begin{proposicion} \label{prop:surjec-HS-log} Assume that $R=k[x_1,\dots,x_d]$, $S\subset R$ is a multiplicative set and $A=S^{-1}R$ or $A=k[[x_1,\dots,x_d]]$. For any ideal $I\subset A$, the group homomorphisms $\Pi_m: \HS_k(\log I;m) \to \HS_k(A/I;m)$, $m\in\overline{\mathbb{N}}$, (see \ref{num:HS-induced-by-HS-log}) are surjective. \end{proposicion} \begin{prueba} Let us prove the proposition in the case $A=S^{-1}R$, the case $A=k[[x_1,\dots,x_d]]$ being completely similar. Let us call $\sigma:R\to A, \pi:A \to A/I, \pi_m: A_m \to (A/I)_m$ the canonical maps and let $E\in \HS_k(A/I;m)$ be any Hasse--Schmidt derivation. 
Let $a_{ri}\in A$ be elements such that $$ \Phi_E(\pi(\sigma(x_r))) = \pi(\sigma(x_r)) +\sum_{i\in [m]} \pi(a_{ri}) t^i \in (A/I)_m, \quad r=1,\dots,d,$$ and let $\Psi:R \to A_m$ be the $k$-algebra map defined by $$ \Psi(x_r) = \sigma(x_r) +\sum_{i\in [m]} a_{ri} t^i \in A_m,\quad r=1,\dots,d.$$ Since $\Psi(f)\equiv \sigma(f) \mod t$ for each $f\in R$, we deduce that $\Psi(s)$ is invertible for all $s\in S$ and the map $\Psi$ induces $\widetilde{\Psi}:A\to A_m$. It is clear that $\widetilde{\Psi}(a)\equiv a \mod t$ for each $a\in A$ and $\pi_m{\scriptstyle \,\circ\,} \widetilde{\Psi} = \Phi_E{\scriptstyle \,\circ\,} \pi$. So, $\widetilde{\Psi}$ induces an $I$-logarithmic Hasse--Schmidt derivation $D\in\HS_k(\log I;m)$ such that $\Pi_m(D)=E$ (see \ref{num:HS-induced-by-HS-log}). \end{prueba} \begin{proposicion} \label{prop:surjec-HS-log-localiz-poly} Assume that $R=k[x_1,\dots,x_d]$, $S\subset R$ is a multiplicative set and let $\mathfrak{a}\subset R$ be a finitely generated ideal. For any (finite) integer $m\geq 1$, the map $$ (s,D)\in S \times \HS_k(\log \mathfrak{a};m)\mapsto \frac{1}{s}{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} \Theta_m(D) \in\HS_k(\log (S^{-1}\mathfrak{a});m)$$ is surjective. \end{proposicion} \begin{prueba} Let $E\in\HS_k(\log (S^{-1}\mathfrak{a});m)$ be any $(S^{-1}\mathfrak{a})$-logarithmic Hasse--Schmidt derivation. Since $m$ is finite, there are $a_{ij}\in R$, $1\leq i \leq d$, $1\leq j\leq m$ and $\sigma\in S$ such that $$ \Phi_E\left(\frac{x_i}{1}\right) = \frac{x_i}{1} + \left(\frac{a_{i1}}{\sigma}\right) t + \cdots + \left(\frac{a_{im}}{\sigma}\right) t^m \in (S^{-1}R)_m,\quad i=1,\dots,d.$$ Let us consider the $k$-algebra map $\Phi^0: R \to R_m$ given by $$ \Phi^0(x_i) = x_i + a_{i1} t + \sigma a_{i2} t^2 +\cdots + \sigma^{m-1} a_{im} t^m \in R_m,\quad i=1,\dots,d$$ and the corresponding Hasse--Schmidt derivation $D^0\in \HS_k(R;m)$ with $\Phi^0 =\Phi_{D^0}$.
It is clear that $\left(\frac{\sigma}{1}\right){\hspace{.1em}\scriptsize\bullet\hspace{.1em}} E = \Theta_m(D^0)$. Let $f_1,\dots,f_u\in \mathfrak{a}$ be a finite system of generators. Since $\Theta_m(D^0)$ is $(S^{-1}\mathfrak{a})$-logarithmic, we deduce the existence of a $\tau\in S$ such that $\tau\Phi_{D^0}(f_l) \in A_m \mathfrak{a}$ for all $l=1,\dots,u$. So, $D:=\tau{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} D^0$ is $\mathfrak{a}$-logarithmic and $E = \left(\frac{1}{\sigma\tau}\right){\hspace{.1em}\scriptsize\bullet\hspace{.1em}} \Theta_m(D)$. \end{prueba} Proposition \ref{prop:surjec-HS-log-localiz-poly} is false for $m=\infty$, as shown for instance in example 1.4 in \cite{traves-2000}. \begin{corolario} \label{cor:surjec-HS-log-localiz-fp} Assume that $A$ is a finitely presented $k$-algebra and let $T\subset A$ be a multiplicative set. Then, for any (finite) integer $m\geq 1$, the map $$ (t,E)\in T \times \HS_k(A;m)\mapsto \frac{1}{t}{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} \Theta_m(E) \in\HS_k(T^{-1}A;m)$$ is surjective. \end{corolario} \begin{prueba} We may assume that $A=R/\mathfrak{a}$ with $R=k[x_1,\dots,x_d]$ and $\mathfrak{a}\subset R$ a finitely generated ideal. Denote by $\pi:R\to A$ the natural projection and $S=\pi^{-1}(T)$. We have $T^{-1}A = S^{-1}R/S^{-1}\mathfrak{a}$. Let us look at the following commutative diagram \begin{equation*} \xymatrix{S\times \HS_k(\log \mathfrak{a};m) \ar[r]^{} \ar[d]^{\pi\times\Pi_m} & \HS_k(\log (S^{-1}\mathfrak{a});m) \ar[d]^{\Pi_m} \\ T \times \HS_k(A;m) \ar[r]^{} & \HS_k(T^{-1}A;m).} \end{equation*} The vertical arrows are surjective by Proposition \ref{prop:surjec-HS-log}. To conclude, we apply Proposition \ref{prop:surjec-HS-log-localiz-poly}. \end{prueba} \section{Integrability} \subsection{Integrable Hasse--Schmidt derivations} In this subsection, $A$ will be again an arbitrary $k$-algebra. \begin{definicion} (Cf. 
\cite{brown_1978,mat-intder-I}) \label{def:HS-integ} We say that a $k$-derivation $\delta:A\to A$ is {\em $n$-integrable} (over $k$), with $n\in \overline{\mathbb{N}}$, if there is a Hasse--Schmidt derivation $D\in \HS_k(A;n)$ such that $D_1=\delta$. Such a $D$ will be called an {\em $n$-integral} of $\delta$. The set of $n$-integrable $k$-derivations of $A$ is denoted by $\Ider_k(A;n)$. We simply say that $\delta$ is {\em integrable} if it is $\infty$-integrable and we denote $\Ider_k(A)=\Ider_k(A;\infty)$.\\ \noindent More generally, we say that a Hasse--Schmidt derivation $D'\in\HS_k(A;m)$ is {\em $n$-integrable} (over $k$), with $m,n\in \overline{\mathbb{N}}, n\geq m$, if there is a Hasse--Schmidt derivation $D\in \HS_k(A;n)$ such that $\tau_{nm}D=D'$. Such a $D$ will be called an {\em $n$-integral} of $D'$. The set of $n$-integrable Hasse--Schmidt derivations of $A$ over $k$ of length $m$ is denoted by $\IHS_k(A;m;n)$. We simply say that $D'$ is {\em integrable} if it is $\infty$-integrable and we denote $\IHS_k(A;m) = \IHS_k(A;m;\infty)$. \end{definicion} It is clear that the $\Ider_k(A;n)$ are $A$-submodules of $\Der_k(A)$, $ \Der_k(A)=\Ider_k(A;1)\supset\Ider_k(A;2)\supset \Ider_k(A;3)\supset \cdots$ and \begin{equation} \label{eq:intersection-ider} \Ider_k(A) \subset \bigcap_{n\in\mathbb{N}_+} \Ider_k(A;n). \end{equation} It is also clear that the $\IHS_k(A;m;n)$ are subgroups of $\HS_k(A;m)$, stable by the ${\hspace{.1em}\scriptsize\bullet\hspace{.1em}}$ operation, $\HS_k(A;m) = \IHS_k(A;m;m) \supset \IHS_k(A;m;m+1)\supset \cdots$ and \begin{equation} \label{eq:intersection-iHS} \IHS_k(A;m) \subset \bigcap_{n\geq m} \IHS_k(A;m;n). \end{equation} \begin{ejemplo} \label{ex:1} (1) Let $n\geq 1$ be an integer. If $n!$ is invertible in $A$, then any $k$-derivation $\delta$ of $A$ is $n$-integrable: we can take $D \in \HS_k(A;n)$ defined by $D_i=\frac{\delta^i}{i!}$ for $i=0,\dots,n$.
In the case $n=\infty$, if $\mathbb{Q}\subset A$, one proves in a similar way that any $k$-derivation of $A$ is integrable. \smallskip \noindent (2) If $A$ is a $0$-smooth (i.e. formally smooth for the discrete topologies) $k$-algebra, then any $k$-derivation of $A$ is integrable (cf. \cite{mat_86}, Theorem 27.1). \end{ejemplo} \begin{nota} \label{nota:taylor-HS} A particularly important case of example \ref{ex:1} is $A=k[x_1,\dots,x_d]$ or $A=k[[x_1,\dots,x_d]]$. In this case we can do better than in example \ref{ex:1} and even exhibit a special integral for each $D\in\HS_k(A;m)$, $m\in\mathbb{N}_+$. Namely, consider the Hasse--Schmidt derivation $\varepsilon(D)\in\HS_k(A)$ determined by the $k$-algebra map $A=k[x_1,\dots,x_d] \to A[[t]]$ sending each $x_r$ to $ \sum_{i\in [m]} D_i(x_r) t^i \in A[[t]]$. In other words, if $\varepsilon(D)=(D'_i)_{i\in\mathbb{N}}$, then $D'_i=D_i$ for all $i\in [m]$ and $D'_i(x_r)=0$ for all $i > m$ and all $r=1,\dots,d$. It is clear that $\varepsilon(\Id_A,\partial_s)$ coincides with the ``Taylor Hasse--Schmidt derivation'' $\Delta^{(s)}$ defined in \ref{def:Taylor-HS}. \end{nota} Definition \ref{def:HS-integ} admits the following obvious logarithmic version. \begin{definicion} Let $I\subset A$ be an ideal and $n\in \overline{\mathbb{N}}$. We say that: \begin{enumerate} \item[1)] An $I$-logarithmic derivation $\delta\in \Der_k(\log I)$ is {\em $I$-logarithmically $n$-integrable} if there is a $D\in \HS_k(\log I;n)$ such that $D_1= \delta$. Such a $D$ will be called an {\em $I$-logarithmic $n$-integral} of $\delta$. The set of $I$-logarithmic $k$-linear derivations of $A$ which are $I$-logarithmically $n$-integrable will be denoted by $\Ider_k(\log I;n)$. When $n=\infty$ it will be simply denoted by $\Ider_k(\log I)$. \item[2)] An $I$-logarithmic Hasse--Schmidt derivation $D'\in\HS_k(\log I;m)$, with $m\leq n$, is {\em $I$-logarithmically $n$-integrable} if there is a $D\in \HS_k(\log I;n)$ such that $\tau_{nm}D=D'$.
Such a $D$ will be called an {\em $I$-logarithmic $n$-integral} of $D'$. The set of $I$-logarithmically $n$-integrable $I$-logarithmic Hasse--Schmidt derivations of $A$ over $k$ of length $m$ will be denoted by $\IHS_k(\log I;m;n)$. When $n=\infty$ it will be simply denoted by $\IHS_k(\log I;m)$. \end{enumerate} \end{definicion} It is clear that the $\Ider_k(\log I;n)$ are $A$-submodules of $\Der_k(\log I)$ and $\Der_k(\log I)=\Ider_k(\log I;1)\supset \Ider_k(\log I;2)\supset \cdots$ \begin{equation} \label{eq:intersection-iderlog} \Ider_k(\log I) \subset \bigcap_{n\in\mathbb{N}_+} \Ider_k(\log I;n). \end{equation} It is also clear that the $\IHS_k(\log I;m;n)$ are subgroups of $\HS_k(\log I;m)$, stable by the ${\hspace{.1em}\scriptsize\bullet\hspace{.1em}}$ operation, $\HS_k(\log I;m) = \IHS_k(\log I;m;m) \supset \IHS_k(\log I;m;m+1)\supset \cdots$ and \begin{equation} \label{eq:intersection-iHSlog} \IHS_k(\log I;m) \subset \bigcap_{n\geq m} \IHS_k(\log I;m;n). \end{equation} The inclusions (\ref{eq:intersection-iderlog}) and (\ref{eq:intersection-iHSlog}) do not seem to be equalities in general (see question \ref{cuestion:1}). Nevertheless, we have the following proposition. \begin{proposicion} \label{prop:inclusions-equalities} The following properties hold: \begin{enumerate} \item[1)] Let $n\geq 1$ be an integer. If any $k$-derivation of $A$ is $n$-integrable, then any Hasse--Schmidt derivation $D\in\HS_k(A;m)$ is also $n$-integrable, for all $m\leq n$. \item[2)] If any $k$-derivation is $n$-integrable for all integers $n\geq 1$, then any Hasse--Schmidt derivation $D\in\HS_k(A;m)$ is also $\infty$-integrable, for all integers $m\geq 1$. \end{enumerate} \end{proposicion} \begin{prueba} For 1) we can mimic the proof of Proposition 1.4 in \cite{nar_2009} by using Theorem 2.8 in \cite{magda_nar_hs} (see Remark 1.5 in \cite{nar_2009}).
For 2), we apply 1) and we obtain a sequence $E^n\in\HS_k(A;n)$, $n\geq m$, with $E^m=D$ and $\tau_{n+1,n}E^{n+1}=E^n$ for all $n\geq m$. It is clear that the inverse limit of the $E^n$ (see \ref{nume:truncation-inverse-limit}) is an $\infty$-integral of $D$. \end{prueba} \begin{lema} \label{lema:technical} Assume that $R=k[x_1,\dots,x_d]$, $S\subset R$ is a multiplicative set and $A=S^{-1}R$ or $A=k[[x_1,\dots,x_d]]$. Let $I\subset A$ be an ideal and $n\geq 1$ an integer. Then, any Hasse--Schmidt derivation $D$ in the kernel of the group homomorphism $\Pi_n$ (see \ref{num:HS-induced-by-HS-log}) is $I$-logarithmically ($\infty$-)integrable. \end{lema} \begin{prueba} Let us prove the lemma in the case $A=S^{-1}R$, the case $A=k[[x_1,\dots,x_d]]$ being completely similar. Denote by $\widetilde{\delta_r}:A\to A$ the derivation induced by the partial derivative $\partial_r:R\to R$. We proceed by decreasing induction on $\ell(D)$ (see Definition \ref{def:ell}). If $\ell(D)=n$, then $D$ is the identity and the result is clear. Let $m$ be an integer with $0\leq m < n$ and suppose that any $D'\in\ker \Pi_n$ with $m+1\leq \ell(D')$ is $I$-logarithmically integrable, and let $D\in\ker \Pi_n$ with $\ell(D)=m$, i.e. $D$ has the form $(\Id_A,0,\dots,0,D_{m+1},\dots,D_n)$ with $D_{m+1}\neq 0$, and so $D_{m+1}$ must be a $k$-derivation. Since $D\in \ker \Pi_n$, we deduce that $D_i(A)\subset I$ for all $i\geq 1$. In particular, there are $a_1,\dots,a_d\in I$ such that $D_{m+1}=\sum_{r=1}^d a_r \widetilde{\delta_r}$. The $I$-logarithmic Hasse--Schmidt derivation $E=(a_1{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} \widetilde{\Delta^{(1)}}){\scriptstyle \,\circ\,} \cdots {\scriptstyle \,\circ\,} (a_d{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} \widetilde{\Delta^{(d)}})$ $\in \ker \Pi_{\infty}$ is an ($\infty$-)integral of $D_{m+1}$. Let us consider $D'=D{\scriptstyle \,\circ\,} (\tau_{\infty n} E[m+1])^{-1}\in \ker \Pi_n$.
It is clear that $\ell(D')\geq m+1$ and, by induction hypothesis, $D'$ is $I$-logarithmically integrable. We conclude that $D=D'{\scriptstyle \,\circ\,} (\tau_{\infty n} E[m+1])$ is also $I$-logarithmically integrable. \end{prueba} \begin{nota} The proof of the above lemma shows that $\ker \Pi_n$ is generated by the $n$-truncations of the $(a{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} E)[m]$, with $a\in I$, $E\in\HS_k(A)$, $m\in [n]$. In fact, for $n=\infty$ we obtain that $\ker \Pi_\infty$ is the closure of the subgroup of $\HS_k(\log I)$ generated by the $(a{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} E)[m]$, with $a\in I$, $E\in\HS_k(A)$ and $m\in\mathbb{N}_+$, where we consider in $\HS_k(A)$ the inverse limit topology of the discrete topologies in the $\HS_k(A;m)$, $m\in\mathbb{N}$ (see \ref{nume:truncation-inverse-limit}). Namely, for $D\in\ker \Pi_\infty$, by the same procedure as in the proof of the lemma we construct inductively a sequence $E^q=(a_1^q{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} \widetilde{\Delta^{(1)}}){\scriptstyle \,\circ\,} \cdots {\scriptstyle \,\circ\,} (a_d^q{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} \widetilde{\Delta^{(d)}})$, $q\geq 1$, $a_r^q\in I$, such that $\ell(D{\scriptstyle \,\circ\,} (F^q)^{-1})\geq q$, where $F^q=E^q[q]{\scriptstyle \,\circ\,} \cdots {\scriptstyle \,\circ\,} E^1[1]$. So $D{\scriptstyle \,\circ\,} (F^q)^{-1}$ tends to the identity element as $q \to \infty$ and $D$ is the limit of $F^q$ as $q \to \infty$. \end{nota} \begin{proposicion} \label{prop:crit-integ-log-HS} Assume that $R=k[x_1,\dots,x_d]$, $S\subset R$ is a multiplicative set and $A=S^{-1}R$ or $A=k[[x_1,\dots,x_d]]$. Let $I\subset A$ be an ideal, $m\geq 1$ an integer, $n\in\overline{\mathbb{N}}$ with $n\geq m$ and $E\in\HS_k(A/I;m)$. The following properties are equivalent: \begin{enumerate} \item[(a)] $E$ is $n$-integrable. \item[(b)] Any $D\in \HS_k(\log I;m)$ with $\overline{D}=E$ is $I$-logarithmically $n$-integrable.
\item[(c)] There is a $D\in \HS_k(\log I;m)$ with $\overline{D}=E$ which is $I$-logarithmically $n$-integrable. \end{enumerate} \end{proposicion} \begin{prueba} The implication (b) $\Rightarrow$ (c) is an obvious consequence of Proposition \ref{prop:surjec-HS-log} and (c) $\Rightarrow$ (a) comes from \ref{num:HS-induced-by-HS-log}. For the remaining implication (a) $\Rightarrow$ (b), let $Z\in\HS_k(A/I;n)$ be an $n$-integral of $E$ and let $D\in\HS_k(\log I;m)$ be a logarithmic Hasse-Schmidt derivation with $\overline{D}=E$. From Proposition \ref{prop:surjec-HS-log}, there is a $U\in\HS_k(\log I;n)$ such that $\overline{U}=Z$. Since $\overline{\tau_{nm}U} = \tau_{nm}\overline{U}= \tau_{nm}Z= E = \overline{D}$, we have $D{\scriptstyle \,\circ\,} (\tau_{nm}U)^{-1} \in \ker \Pi_m$ and so, by Lemma \ref{lema:technical}, we deduce that $D$ is $I$-logarithmically $n$-integrable. \end{prueba} \begin{corolario} \label{cor:crit-integ-log-HS} Under the hypotheses of Proposition \ref{prop:crit-integ-log-HS}, the map $\Pi_m:\IHS_k(\log I;m;n) \to \IHS_k(A/I;m;n)$ is surjective. \end{corolario} \begin{corolario} Under the hypotheses of Proposition \ref{prop:crit-integ-log-HS}, the following properties are equivalent: \begin{enumerate} \item[(a)] $\IHS_k(A/I;m;n)=\HS_k(A/I;m)$. \item[(b)] $\IHS_k(\log I;m;n)=\HS_k(\log I;m)$. \end{enumerate} \end{corolario} \begin{prueba} It is a straightforward consequence of the proposition. \end{prueba} \begin{ejemplo} \label{ej:ncd} {\em (Normal crossings)} Let us take $\displaystyle f=\prod_{i=1}^e x_i\in A=k[x_1,\dots,x_d]$ and $I=(f)\subset A$. The $A$-module $\Ider_k(\log I)$ is generated by $$\textstyle\left\{x_1\partial_1,\dots,x_e\partial_e,\partial_{e+1},\dots,\partial_d\right\}$$ and any of these $I$-logarithmic derivations are integrable $I$-logarithmically, since $ \Delta^{(j)}, x_i{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} \Delta^{(i)} \in \HS_k(\log I)$ for $i=1,\dots,e$ and $j=e+1,\dots,n$. 
In particular $\Ider_k(\log I)=\Der_k(\log I)$ and $\Ider_k(A/I)=\Der_k(A/I)$. \end{ejemplo} \begin{proposicion} \label{prop:algorit-1} Let $A$ be an arbitrary $k$-algebra, $I\subset A$ an ideal with generators $f_l$, $l\in L$, and $n\geq 1$ an integer. Let $D\in\HS_k(\log I;n)$ be a $I$-logarithmic Hasse--Schmidt derivation and assume that $D$ is $(n+1)$-integrable and let $(\Id_A,D_1,\dots,D_n,D_{n+1})\in\HS_k(A;n+1)$ be an $(n+1)$-integral of $D$. The following properties are equivalent: \begin{enumerate} \item[(a)] $D$ is $I$-logarithmically $(n+1)$-integrable. \item[(b)] There is a derivation $\delta\in\Der_k(A)$ such that $D_{n+1}(f_l) + \delta(f_l) \in I$ for all $l\in L$ \end{enumerate} \end{proposicion} \begin{prueba} It comes from the fact that any other $(n+1)$-integral of $D$ must be of the form $(\Id_A,D_1,\dots,D_n,D_{n+1}+\delta)$ with $\delta\in \Der_k(A)$. \end{prueba} \begin{corolario} \label{cor:algorit-2} Assume that $A=k[x_1,\dots,x_d]$ or $A=k[[x_1,\dots,x_d]]$. Let $I=(f_1,\dots,f_p)\subset A$ be an ideal and $n\geq 1$ an integer. Let $D\in\HS_k(\log I;n)$ be a $I$-logarithmic Hasse--Schmidt derivation and let us consider its integral $D'=\varepsilon(D)$ (see remark \ref{nota:taylor-HS}). The following properties are equivalent: \begin{enumerate} \item[(a)] $D$ is $I$-logarithmically $(n+1)$-integrable. \item[(b)] There are $\alpha_r, a_{st}\in A$, $r=1,\dots,d$, $s,t=1,\dots,p$, such that $$ D'_{n+1}(f_s) = \alpha_1 \left(f_s\right)'_{x_1} + \cdots + \alpha_d \left(f_s\right)'_{x_d} + a_{s1}f_1+\cdots+a_{sp}f_p\quad \forall s=1,\dots,p. $$ \end{enumerate} Moreover, if (b) holds, an explicit $I$-logarithmic $(n+1)$-integral of $D$ is given by $(\Id_A,D_1,\dots,D_n,D'_{n+1}-\delta)$, with $\delta=\sum_{r=1}^d \alpha_r \partial_r$. 
\end{corolario} \begin{nota} \label{nota:comput-1} (1) In the case of a ``computable'' base ring $k$ (for instance, any finitely generated extension of $\mathbb{Z},\mathbb{Q}$ or of any finite field) and a finitely presented $k$-algebra $A$, Proposition \ref{prop:crit-integ-log-HS} and Corollary \ref{cor:algorit-2} give an effective way to decide whether a given Hasse--Schmidt derivation $D\in\HS_k(A;n)$ of finite length $n$ is $(n+1)$-integrable or not and, if so, to compute an explicit $(n+1)$-integral of $D$. \smallskip \noindent (2) Nevertheless, the question of deciding whether a given Hasse--Schmidt derivation $D\in\HS_k(A;n)$ of finite length $n$ is $(n+r)$-integrable or not, with $r\geq 2$, is much more involved. First of all, we cannot proceed ``step by step'', since $D$ can be $(n+r)$-integrable and simultaneously admit an $(n+1)$-integral which is not $(n+r)$-integrable (cf. example 3.7 in \cite{nar_2009}). On the other hand, the condition of $(n+r)$-integrability of $D$, $r\geq 2$, gives rise to nonlinear equations which do not seem easy to treat in general with the currently available methods, either theoretical or computational (see for instance Lemmas \ref{lema:ej-nolineal}, \ref{lema:ej-nolineal-3}, \ref{lema:ej-nolineal-4}). \smallskip \noindent (3) The following example is a very particular case of a general result, but it also serves to illustrate the nonlinear nature of integrability and the difficulties that arise from it: Let $A=k[x_1,\dots,x_d]$, $f\in A$, $I=(f)$ and $\delta=\sum_{r=1}^d a_r \partial_r$ any $k$-derivation of $A$. The following properties are equivalent: \begin{enumerate} \item[(a)] $\delta$ is an $I$-logarithmic derivation and it is $I$-logarithmically $2$-integrable. \item[(b)] $\sum_{r=1}^d f'_{x_r} a_r \in I$ and $\sum_{|\alpha|=2} \Delta^{(\alpha)}(f)\ \underline{a}^\alpha \in (f,f'_{x_1},\dots,f'_{x_d})$.
\end{enumerate} So, in order to compute a system of generators of the $A$-module $\Ider_k(\log I;2)$, we have to deal with nonlinear homogeneous equations of degree $2$ (see examples in sections \ref{subsect:cusp-2-3}, \ref{subsect:cusp2-Z}). \end{nota} \subsection{Jacobians and integrability} \label{subsect:jacob} Let $k$ be an arbitrary (commutative) ring and assume that $R=k[x_1,\dots,x_d]$ or $R=k[[x_1,\dots,x_d]]$. Let $I=(f_1,\dots,f_u)\subset R$ be a finitely generated ideal and $A=R/I$. For each $e=1,\dots,\min\{d,u\}$ let $J^0_e$ be the ideal generated by all the $e\times e$ minors of the Jacobian matrix $(\partial f_j/\partial x_i)$, and $J_e = (J^0_e+I)/I$. We have $J_1 \supset J_2 \supset \cdots$. Let $c$ be the maximum index $e$ with $J_e\neq 0$ (or equivalently with $J^0_e\nsubseteq I$), in case it exists. The ideal $J_c$ only depends on the $k$-algebra $A$ and is called the {\it Jacobian ideal} of $A$ over $k$ and denoted by $J_{A/k}$. It is nothing else but the smallest non-zero Fitting ideal of the module of $k$-differentials $\Omega_{A/k}$ (see \cite{lipman_1969}). \begin{proposicion} \label{prop:Jder-in-Ider} Under the above hypotheses, any $\delta\in \Der_k(\log I) \cap (J^0_c+I) \Der_k(R)$ is $I$-logarithmically integrable. \end{proposicion} \begin{prueba} The proof follows the same lines as the proof of Theorem 11 in \cite{mat-intder-I}. Let us write $J^0=J^0_c$. Since $I \Der_k(R) \subset \Ider_k(\log I)$, we can assume that $\delta=\sum_{r=1}^d c_{r1} \partial_r$ with $c_{r1}\in J^0$. Let us consider $D^1=(\Id_A,\delta)\in\HS_k(\log I;1)$ and $E^1=\varepsilon(D^1)\in\HS_k(R;\infty)$ (see \ref{nota:taylor-HS}). We have $E^1_2 = \sum_{|\alpha|=2} \left(\prod_{r=1}^d c_{r1}^{\alpha_r}\right) \Delta^{(\alpha)}\in (J^0)^2\diff_{R/k}$, and so $E^1_2(f_j)\in (J^0)^2$ for all $j=1,\dots,u$.
From Lemma \ref{lema:gene-jacob-matsu}, there is $(c_{12},\dots,c_{d2})\in R^d$, with $c_{r2}\in J^0$, such that $$ (c_{12},\dots,c_{d2}) \left((\partial f_j/\partial x_i)_{i,j}\right) \equiv (E^1_2(f_1), \dots,E^1_2(f_u)) \mod I,$$ i.e. $E^1_2(f_j)-\sum_{r=1}^d c_{r2}(f_j)'_{x_r}\in I$, and so we deduce that $D^1$ is $I$-logarithmically $2$-integrable, an $I$-logarithmic $2$-integral being $D^2=(\Id_A,\delta,D^2_2)$ with $D^2_2=E^1_2 - \sum_{r=1}^d c_{r2} \partial_r\in J^0 \diff_{R/k}$ (see Corollary \ref{cor:algorit-2}). \smallskip Assume that we have found a $D^m=(\Id_A,\delta,D^2_2,\dots,D^m_m)\in \HS_k(\log I;m)$ with $D^s_s \in J^0 \diff_{R/k}$, $s=1,\dots,m$, hence with $c_{rs}:=D^s_s(x_r)\in J^0$, $r=1,\dots,d$. Let us consider $E^m=\varepsilon(D^m)\in \HS_k(R;\infty)$. From Proposition \ref{prop:formulon}, 2) we deduce that $E^m_{m+1} \in (J^0)^2 \diff_{R/k}$ and so $E^m_{m+1}(f_j)\in (J^0)^2$ for all $j=1,\dots,u$. From Lemma \ref{lema:gene-jacob-matsu}, there is $(c_{1,m+1},\dots,c_{d,m+1})\in R^d$, with $c_{r,m+1}\in J^0$, such that $$ (c_{1,m+1},\dots,c_{d,m+1}) \left((\partial f_j/\partial x_i)_{i,j}\right) \equiv (E^m_{m+1}(f_1),\dots,E^m_{m+1}(f_u)) \mod I, $$ i.e. $E^m_{m+1}(f_j)- \sum_{r=1}^d c_{r,m+1}(f_j)'_{x_r} \in I$, and so we deduce again that $D^m$ is $I$-logarithmically $(m+1)$-integrable, an $I$-logarithmic $(m+1)$-integral being $D^{m+1}=(\Id_A,\delta,D^2_2,\dots,D^m_m,D^{m+1}_{m+1})$ with $D^{m+1}_{m+1}=E^m_{m+1} - \sum_{r=1}^d c_{r,m+1} \partial_r\in J^0 \diff_{R/k}$ (see Corollary \ref{cor:algorit-2}). \smallskip In that way, we construct inductively the $D^m_m$, $m\geq 2$, such that $(\Id_A,\delta,D^2_2,\dots)\in \HS_k(\log I;\infty)$ and so $\delta$ is $I$-logarithmically integrable.
\end{prueba} \begin{lema} \label{lema:gene-jacob-matsu} Let $\mathbf{X}=(X_{ij})$, $i=1,\dots,d$, $j=1,\dots,u$, be variables, $W=\mathbb{Z}[\mathbf{X}]$, $\mathfrak{a}_{e}\subset W$ the ideal generated by the $e\times e$ minors of $\mathbf{X}$ and $U=W/\mathfrak{a}_{c+1}$. Then, for each $c\times c$ minor $\mu$ of $\mathbf{X}$ and for each $j=1,\dots,u$, the system $$ (u_1,\dots,u_d) \mathbf{X} = (0,\dots,0,\stackrel{\scriptsize\underbrace{j}}{\mu},0,\dots,0) $$ has a solution in $U$. \end{lema} \begin{prueba} We know that $U$ is an integral domain (cf. \cite{bruns-vetter-1988}, Theorem (2.10) and Remark (2.12)). Denote by $K$ its field of fractions and by $\pi:W\to U$ the natural projection. The lemma is an easy consequence of the fact that the matrix $\pi(\mathbf{X})\otimes K$ has rank $c$. \end{prueba} The following corollary of Proposition \ref{prop:Jder-in-Ider} generalizes Theorem 11 in \cite{mat-intder-I}, which was only stated and proved for $k$ a perfect field. \begin{corolario} Under the above hypotheses, we have $$J_{A/k}\subset \ann_A\left(\Der_k(A)/\Ider_k(A)\right).$$ \end{corolario} The proof of the following result is similar to the proof of Proposition \ref{prop:Jder-in-Ider}. \begin{proposicion} Let $f\in R$, $I=(f)$, and $J^0=(f'_{x_1},\dots,f'_{x_d})$ the gradient ideal. If $\delta:R\to R$ is an $I$-logarithmic $k$-derivation with $\delta\in J^0\Der_k(R)$, then $\delta$ admits an $I$-logarithmic integral $D\in\HS_k(\log I)$ with $D_i(f)=0$ for all $i>1$. In particular, if $\delta(f)=0$, the integral $D$ can be taken with $\Phi_D(f)=f$. \end{proposicion} \noindent \refstepcounter{numero}\noindent {\sc \thenumero\ } \label{nume:traves} We quote here Theorem 1.2 in \cite{traves-2000}: Let $I\subset A=k[x_1,\dots,x_d]$ be an ideal generated by quasi-homogeneous polynomials with respect to the weights $w(x_r)\geq 0$. Then, the Euler vector field $\chi = \sum_{r=1}^d w(x_r)\, x_r \partial_r$ is $I$-logarithmically ($\infty$-)integrable.
In fact, an $I$-logarithmic integral of $\chi$ is the Hasse--Schmidt derivation associated with the map $A \to A[[t]]$ given by \begin{eqnarray*} &x_r \mapsto x_r\left(\frac{1}{1-t}\right)^{w(x_r)},\quad r=1,\dots,d.& \end{eqnarray*} \begin{proposicion} \label{prop:isol-sing-general} Let $f\in A=k[x_1,\dots,x_d]$ be a quasi-homogeneous polynomial with respect to the weights $w(x_r)> 0$ and $I=(f)\subset A$. Assume that the weight of $f$ is a unit in $k$ and that all the partial derivatives of $f$ are non-zero and form a regular sequence. Then $\Der_k(\log I)=\Ider_k(\log I)$. \end{proposicion} \begin{prueba} From the hypotheses we deduce that the $A$-module $\Der_k(\log I)$ is generated by the Euler vector field $\chi$ and the crossed derivations $\theta_{rs}=f'_{x_s}\partial_r - f'_{x_r}\partial_s$, $1\leq r < s\leq d$. But $\chi$ is $I$-logarithmically integrable by \ref{nume:traves} and $\theta_{rs}$ is $I$-logarithmically integrable by Proposition \ref{prop:Jder-in-Ider}. \end{prueba} \subsection{Behaviour of integrability under localization} Throughout this section, $k$ will be an arbitrary commutative ring. \medskip The proof of the following proposition is clear from \ref{nume:basic-localiz}. \begin{proposicion} \label{prop:coh-1} Let $A$ be a $k$-algebra, $S\subset A$ a multiplicative set, $\mathfrak{a}\subset A$ an ideal, $m\geq 1$ an integer, $n\in\overline{\mathbb{N}}$ with $n\geq m$ and $D\in\HS_k(\log \mathfrak{a};m)$. If $D$ is $\mathfrak{a}$-logarithmically $n$-integrable, then $\widetilde{D}\in\HS_k(S^{-1}A;m)$ is $(S^{-1}\mathfrak{a})$-logarithmically $n$-integrable. In particular, the map $\Theta_m$ sends $\IHS_k(\log \mathfrak{a};m;n)$ to $\IHS_k(\log (S^{-1}\mathfrak{a});m;n)$. \end{proposicion} The two following propositions are straightforward consequences of Proposition \ref{prop:surjec-HS-log-localiz-poly} and Corollary \ref{cor:surjec-HS-log-localiz-fp} respectively.
\begin{proposicion} \label{prop:surjec-IHS-log-localiz-poly} Assume that $A=k[x_1,\dots,x_d]$ and let $S\subset A$ be a multiplicative set and $\mathfrak{a}=(f_1,\dots,f_u)\subset A$ be a finitely generated ideal. Then, for any integers $m\geq q\geq 1$, the map $$ (s,F)\in S \times \IHS_k(\log \mathfrak{a};q;m)\mapsto \frac{1}{s}{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} \Theta_q(F) \in\IHS_k(\log (S^{-1}\mathfrak{a});q;m)$$ is surjective. \end{proposicion} \begin{proposicion} \label{prop:surjec-IHS-localiz-fp} Assume that $A$ is a finitely presented $k$-algebra and let $T\subset A$ be a multiplicative set. Then, for any integers $m\geq q\geq 1$ the map $$ (t,G)\in T \times \IHS_k(A;q;m)\mapsto \frac{1}{t}{\hspace{.1em}\scriptsize\bullet\hspace{.1em}} \Theta_q(G) \in\IHS_k(T^{-1}A;q;m)$$ is surjective. \end{proposicion} Proposition \ref{prop:surjec-IHS-localiz-fp} can also be obtained from Proposition \ref{prop:surjec-IHS-log-localiz-poly} and Corollary \ref{cor:crit-integ-log-HS}. \begin{corolario} \label{cor:coh-2} Assume that $A=k[x_1,\dots,x_d]$ and let $S\subset A$ be a multiplicative set and $\mathfrak{a}=(f_1,\dots,f_u)\subset A$ a finitely generated ideal. Then, for any integer $m\geq 1$ the canonical map $$ \frac{\delta}{s}\in S^{-1} \Ider_k(\log \mathfrak{a};m) \mapsto \frac{1}{s}\widetilde{\delta}\in \Ider_k(\log (S^{-1}\mathfrak{a});m)$$ is an isomorphism of $(S^{-1}A)$-modules. \end{corolario} \begin{prueba} The injectivity is a consequence of the fact that, under the above assumptions, the canonical map $S^{-1}\Der_k(A) \to \Der_k(S^{-1}A)$ is an isomorphism. The surjectivity is given by Proposition \ref{prop:surjec-IHS-log-localiz-poly} in the case $q=1$. \end{prueba} \begin{corolario} \label{cor:ider-localiz_fp} Assume that $A$ is a finitely presented $k$-algebra and let $T\subset A$ be a multiplicative set. Then, for any integer $m\geq 1$ the canonical map $$ T^{-1} \Ider_k(A;m) \to \Ider_k(T^{-1}A;m)$$ is an isomorphism of $(T^{-1}A)$-modules.
\end{corolario} \begin{prueba} The injectivity goes as in the proof of Corollary \ref{cor:coh-2}. The surjectivity is given by Proposition \ref{prop:surjec-IHS-localiz-fp} in the case $q=1$. \end{prueba} \begin{teorema} \label{teo:criter-loc-ider-fp} Assume that $A$ is a finitely presented $k$-algebra, $m\geq 1$ is an integer and let $\delta\in \Der_k(A)$. The following properties are equivalent: \begin{enumerate} \item[(a)] $\delta\in \Ider_k(A;m)$. \item[(b)] $\delta_{\mathfrak{p}}\in \Ider_k(A_{\mathfrak{p}};m)$ for all $\mathfrak{p}\in\Spec A$. \item[(c)] $\delta_{\mathfrak{m}}\in \Ider_k(A_{\mathfrak{m}};m)$ for all $\mathfrak{m}\in\Specmax A$. \end{enumerate} \end{teorema} \begin{prueba} The implication (a) $\Rightarrow$ (b) is a consequence of Proposition \ref{prop:coh-1}. The implication (b) $\Rightarrow$ (c) is obvious. For the remaining implication (c) $\Rightarrow$ (a), assume that property (c) holds. Then, by Corollary \ref{cor:ider-localiz_fp}, for each $\mathfrak{m}\in\Specmax A$ there is an $f^\mathfrak{m} \in A - \mathfrak{m}$ and a $\zeta^\mathfrak{m} \in \Ider_k(A;m)$ such that $f^\mathfrak{m}\delta_{\mathfrak{m}} = \left(\zeta^\mathfrak{m}\right)_\mathfrak{m}$, and so there is a $g^\mathfrak{m} \in A - \mathfrak{m}$ such that $g^\mathfrak{m}f^\mathfrak{m}\delta = g^\mathfrak{m}\zeta^\mathfrak{m}$. Since the ideal generated by the $g^\mathfrak{m} f^\mathfrak{m}$, $\mathfrak{m}\in\Specmax A$, must be the unit ideal, we deduce the existence of a finite number of $\mathfrak{m}_i\in\Specmax A$ and $a_i\in A$, $1\leq i\leq n$, such that $1=a_1 g_1f_1 +\cdots + a_n g_nf_n$, with $f_i= f^{\mathfrak{m}_i}$, $g_i= g^{\mathfrak{m}_i}$, and so $$ \delta = \sum_{i=1}^n a_i g_i f_i \delta = \sum_{i=1}^n a_i g_i\zeta^{\mathfrak{m}_i}$$ is $m$-integrable. \end{prueba} \begin{corolario} \label{cor:Ider_fp_schemes} Let $f:X \to S$ be a locally finitely presented morphism of schemes.
For each integer $n\geq 1$ there is a quasi-coherent sub-sheaf $\fIder_S({\EuScript O}_X;n)\subset \fDer_S({\EuScript O}_X)$ such that, for any affine open sets $U=\Spec A \subset X$ and $V=\Spec k \subset S$, with $f(U)\subset V$, we have $\Gamma(U,\fIder_S({\EuScript O}_X;n)) =\Ider_k(A;n)$ and $\fIder_S({\EuScript O}_X;n)_p = \Ider_{{\EuScript O}_{S,f(p)}}({\EuScript O}_{X,p};n)$ for each $p\in X$. Moreover, if $S$ is locally noetherian, then $\fIder_S({\EuScript O}_X;n)$ is a coherent sheaf. \end{corolario} \begin{prueba} For each open set $U\subset X$, we define $$\Gamma(U,\fIder_S({\EuScript O}_X;n)) =\{\delta\in \Gamma(U,\fDer_S({\EuScript O}_X))\ |\ \delta_p \in \Ider_{{\EuScript O}_{S,f(p)}}({\EuScript O}_{X,p};n)\ \forall p\in U\}.$$ The behaviour of $\fIder_S({\EuScript O}_X;n)$ on affine open sets and its quasi-coherence is a straightforward consequence of Theorem \ref{teo:criter-loc-ider-fp}. \end{prueba} \subsection{Testing the integrability of derivations} In this section $k$ will be an arbitrary commutative ring and $A$ an arbitrary $k$-algebra. \begin{definicion} Let $n\geq m > 1$ be integers and $D\in\HS_k(A;n)$. We say that $D$ is {\em $m$-sparse} if $D_i=0$ whenever $i\notin\mathbb{N} m$. We say that $D$ is {\em weakly $m$-sparse} if $\tau_{n,qm}D$ is $m$-sparse, where $q=\left\lfloor\frac{n}{m}\right\rfloor$. The set of $m$-sparse (resp. weakly $m$-sparse) Hasse--Schmidt derivations in $\HS_k(A;n)$ will be denoted by $\HS_k^{m-sp}(A;n)$ (resp. $\HS_k^{m-wsp}(A;n)$). \end{definicion} The proof of the following proposition is easy and is left to the reader. \begin{proposicion} \label{prop:sparse} Let $n\geq m > 1$ be integers, $q=\left\lfloor\frac{n}{m}\right\rfloor$ and $r=n-qm$. The following properties hold: \begin{enumerate} \item[1)] $\HS_k^{m-sp}(A;n)$ and $\HS_k^{m-wsp}(A;n)$ are subgroups of $\HS_k(A;n)$.
\item[2)] For any $D\in\HS_k(A;q)$ and any $\underline{\delta}=(\delta_1,\dots,\delta_r)\in \Der_k(A)^r$, the sequence $$ \Theta(D,\underline{\delta})=(\Id_A,0,\dots,0,\stackrel{\scriptsize\underbrace{m}}{D_1},0,\dots,0,\stackrel{\scriptsize\underbrace{2m}}{D_2},0,\dots,0,\stackrel{\scriptsize\underbrace{qm}}{D_q}, \stackrel{\scriptsize\underbrace{qm+1}}{\delta_1},\dots,\stackrel{\scriptsize\underbrace{n}}{\delta_r})$$ is a weakly $m$-sparse Hasse--Schmidt derivation of $A$ (over $k$) of length $n$ and the map $\Theta: \HS_k(A;q) \times \Der_k(A)^r \to \HS_k^{m-wsp}(A;n)$ is an isomorphism of groups. \end{enumerate} \end{proposicion} \begin{teorema} \label{th:main} Let $n\geq 1$ be an integer. The following assertions hold: \begin{enumerate} \item[1)] If $n$ is odd and $\Ider_k(A;q) = \Der_k(A)$, with $q= \frac{n+1}{2}$, then any $D\in\HS_k(A;n)$ with $D_1=0$ is $(n+1)$-integrable. \item[2)] If $n$ is even and $\Ider_k(A;p) = \Der_k(A)$, with $p=\left\lfloor \frac{n+1}{3}\right\rfloor$, then any $D\in\HS_k(A;n)$ with $D_1=0$ is $(n+1)$-integrable. \end{enumerate} \end{teorema} \begin{prueba} 1) Since $D_1=0$ we have $1\leq\ell(D)\leq n$. If $n=1$, then $D$ is the identity and the result is clear. Assume $n\geq 3$ and so $q\geq 2$. Let us proceed by decreasing induction on $\ell(D)$. If $\ell(D) =n$ then $D$ is the identity and the result is clear. Let $m$ be an integer with $1\leq m < n$ and suppose that any $D'\in\HS_k(A;n)$ with $m+1\leq \ell(D')$ is $(n+1)$-integrable. Let $D\in\HS_k(A;n)$ be a Hasse--Schmidt derivation with $\ell(D)=m$, i.e. $$ D=(\Id_A,0,\dots,0,D_{m+1},\dots,D_n)\quad\text{with}\ D_{m+1}\neq 0.$$ Since $\tau_{n,m+1}D$ is $(m+1)$-sparse, we can apply Proposition \ref{prop:sparse}, 2) and deduce that $D_{m+1}$ is a derivation and so, by hypothesis, it must be $q$-integrable. Let $E\in\HS_k(A;q)$ be a $q$-integral of $D_{m+1}$. 
We have that $q(m+1) \geq 2q =n+1$ and so $F=\tau_{q(m+1),n}(E[m+1])$ is $(n+1)$-integrable, an $(n+1)$-integral being $\tau_{q(m+1),n+1}(E[m+1])$, and has the form $$ F=(\Id_A,0,\dots,0,\stackrel{\scriptsize\underbrace{m+1}}{D_{m+1}},0,\dots,F_n). $$ It is clear that for $D' = F^{-1}{\scriptstyle \,\circ\,} D$ we have $D'_1=\dots=D'_{m+1}=0$, and so $\ell(D')\geq m+1$. The induction hypothesis implies that $D'$ is $(n+1)$-integrable and we conclude that $D=F{\scriptstyle \,\circ\,} D'$ is also $(n+1)$-integrable. \smallskip \noindent 2) If $n=2$, then $D=(\Id_A,0,D_2)$ and obviously $(\Id_A,0,D_2,0)$ is a $3$-integral of $D$. Assume that $n$ is even with $n\geq 4$, and let us write $n=2q$, $q\geq 2$, and $n+1=3p+r$ with $0\leq r < 3$, $p\geq 1$. Since $\tau_{n,3}D$ is weakly $2$-sparse, we deduce that $D_3$ must be a derivation (see Proposition \ref{prop:sparse}) and so, by hypothesis, it is $p$-integrable. Let $E^3\in\HS_k(A;p)$ be a $p$-integral of $D_3$. It is clear that (see Proposition \ref{prop:sparse}) $$ F^3=(\Id_A,0,0,\stackrel{\scriptsize\underbrace{3}}{E^3_1},0,0,\stackrel{\scriptsize\underbrace{6}}{E^3_2},0,\dots,0, \stackrel{\scriptsize\underbrace{3p}}{E^3_p},0,0)$$ is a $(3p+2)$-integral of $E^3[3]$, and since $3p+2\geq n+1$, $G^3=\tau_{3p+2,n}F^3$ is $(n+1)$-integrable and $(G^3)^{-1}{\scriptstyle \,\circ\,} D$ has the form $(\Id_A,0,D_2,0,\dots)$. Assume that we have found $G^3,G^5,\dots,G^{2s-1}\in\HS_k(A;n)$, all of them $(n+1)$-integrable, with $3\leq 2s-1 < n$, such that $(G^{2s-1})^{-1}{\scriptstyle \,\circ\,} \cdots {\scriptstyle \,\circ\,} (G^3)^{-1}{\scriptstyle \,\circ\,} D$ has the form $$ D'=(\Id_A,0,\stackrel{\scriptsize\underbrace{2}}{D'_2},0,\stackrel{\scriptsize\underbrace{4}}{D'_4},0,\dots,0, \stackrel{\scriptsize\underbrace{2s}}{D'_{2s}},D'_{2s+1},\dots,D'_n).$$ If $2s=n$, we already have what we are looking for. If $2s<n$, then $D'_{2s+1}$ is a derivation (see Proposition \ref{prop:sparse}) and so, by hypothesis, it is $p$-integrable.
Let $E^{2s+1}\in\HS_k(A;p)$ be a $p$-integral of $D'_{2s+1}$. Let us consider $F^{2s+1}=E^{2s+1}[2s+1]\in\HS_k(A;p(2s+1))$. Since $p(2s+1)\geq 5p \geq 3p+2\geq n+1$, $G^{2s+1}:= \tau_{p(2s+1),n}F^{2s+1}$ is $(n+1)$-integrable and $(G^{2s+1})^{-1}{\scriptstyle \,\circ\,} D'$ has the form $$ D''=(\Id_A,0,\stackrel{\scriptsize\underbrace{2}}{D''_2},0,\stackrel{\scriptsize\underbrace{4}}{D''_4},0,\dots,0, \stackrel{\scriptsize\underbrace{2s}}{D''_{2s}},0,\stackrel{\scriptsize\underbrace{2s+2}}{D''_{2s+2}},\dots,D''_n).$$ We conclude with the existence of $G^3,G^5,\dots,G^{n-1}\in\HS_k(A;n)$, all of them $(n+1)$-integrable, such that $ H = (G^{n-1})^{-1}{\scriptstyle \,\circ\,} (G^{n-3})^{-1}{\scriptstyle \,\circ\,} \cdots {\scriptstyle \,\circ\,} (G^3)^{-1}{\scriptstyle \,\circ\,} D\in\HS_k(A;n)$ ($n=2q$) is $2$-sparse. From Proposition \ref{prop:sparse} again we deduce that $H$ is $(n+1)$-integrable, and so $D$ is also $(n+1)$-integrable. \end{prueba} \begin{definicion} For each integer $n\geq 1$, let us define $$ \rho(n)= \left\{\begin{array}{ll} \frac{n+1}{2} & \text{if $n$ is odd}\\ \left\lfloor \frac{n+1}{3} \right\rfloor & \text{if $n$ is even.}\end{array} \right. $$ \end{definicion} Notice that $\rho(n) < n$ for all $n\geq 2$. \begin{corolario} \label{cor:main} Let $n\geq 1$ be an integer, and assume that $\Ider_k(A;\rho(n)) = \Der_k(A)$. Then, for any $n$-integrable derivation $\delta\in\Ider_k(A;n)$, the following properties are equivalent: \begin{enumerate} \item[(a)] Any $n$-integral of $\delta$ is $(n+1)$-integrable. \item[(b)] There is an $n$-integral of $\delta$ which is $(n+1)$-integrable. \end{enumerate} \end{corolario} \begin{prueba} Assume that $E\in\HS_k(A;n+1)$ is an $(n+1)$-integral of $\delta$ and let $D\in\HS_k(A;n)$ be any $n$-integral of $\delta$. The $1$-component of $F=D{\scriptstyle \,\circ\,}(\tau_{n+1,n}E)^{-1}$ vanishes and so, by Theorem \ref{th:main}, $F$ is $(n+1)$-integrable. We deduce that $D=F{\scriptstyle \,\circ\,} \tau_{n+1,n}E$ is also $(n+1)$-integrable.
\end{prueba} \subsection{Algorithms} \label{subsect:algo} Let $k$ be a ``computable'' base ring (for instance, any finitely generated extension of $\mathbb{Z}$, $\mathbb{Q}$ or of any finite field), $f_1,\dots,f_p\in A=k[x_1,\dots,x_d]$ and $I=(f_1,\dots,f_p)$. The starting point is the computation of a system of generators $\{\delta^1,\dots,\delta^q\}$ of $\Der_k(\log I)$. \medskip The following algorithm decides whether the equality $$\Der_k(\log I)\stackrel{\text{?}}{=}\Ider_k(\log I;2)\quad \left(\Leftrightarrow \Der_k(A/I)\stackrel{\text{?}}{=}\Ider_k(A/I;2)\right)$$ is true or not, and if yes, returns a $2$-integral for each generator of $\Der_k(\log I)$. \bigskip \noindent {\sc ALGORITHM--1:} \begin{description} \item[Step 1:] For each $j=1,\dots,q$, apply Corollary \ref{cor:algorit-2} as explained in remark \ref{nota:comput-1}, (1) to decide whether $\delta^j$ is $I$-logarithmically $2$-integrable or not, and if yes to compute an $I$-logarithmic $2$-integral $D^{j,2}$ of $\delta^j$. \item[Step 2:] (Y) If the answer in Step 1 is YES for all $j=1,\dots,q$, then save the $I$-logarithmic $2$-integrals $D^{1,2},\dots,D^{q,2}$ and answer ``THE EQUALITY $\Der_k(\log I)=\Ider_k(\log I;2)$ IS TRUE''.\\ (N) If the answer in Step 1 is NO for some $j=1,\dots,q$, then answer ``THE EQUALITY $\Der_k(\log I)=\Ider_k(\log I;2)$ IS FALSE''. \end{description} \medskip Assume that we have an {\sc ALGORITHM--(N-1)} to decide whether the equality $$\Der_k(\log I)\stackrel{\text{?}}{=}\Ider_k(\log I;N)\quad \left(\Leftrightarrow \Der_k(A/I)\stackrel{\text{?}}{=}\Ider_k(A/I;N)\right)$$ is true or not, and if yes, to compute an $N$-integral for each generator of $\Der_k(\log I)$.
\medskip \noindent {\sc ALGORITHM--N:} \begin{description} \item[Step 1:] Apply {\sc ALGORITHM--(N-1)}, and if the answer is NO, then STOP and answer ``THE EQUALITY $\Der_k(\log I)=\Ider_k(\log I;N+1)$ IS FALSE''.\\ If the answer to {\sc ALGORITHM--(N-1)} is YES, keep the computed $I$-logarithmic $N$-integrals $D^{1,N},\dots,D^{q,N}$ of $\delta^1,\dots,\delta^q$ and go to Step 2. \item[Step 2:] For each $j=1,\dots,q$, apply Corollary \ref{cor:algorit-2} as explained in remark \ref{nota:comput-1}, (1) to decide whether $D^{j,N}$ is $I$-logarithmically $(N+1)$-integrable or not, and if yes to compute an $I$-logarithmic $(N+1)$-integral $D^{j,N+1}$ of $D^{j,N}$. \item[Step 3:] (Y) If the answer in Step 2 is YES for all $j=1,\dots,q$, then save the $I$-logarithmic $(N+1)$-integrals $D^{1,N+1},\dots,D^{q,N+1}$ and answer ``THE EQUALITY $\Der_k(\log I)=\Ider_k(\log I;N+1)$ IS TRUE''.\\ (N) If the answer in Step 2 is NO for some $j=1,\dots,q$, then answer ``THE EQUALITY $\Der_k(\log I)=\Ider_k(\log I;N+1)$ IS FALSE''. \end{description} \medskip Corollary \ref{cor:main} is the key point for the correctness of Step 3, (N). \section{Examples and questions} We have used Macaulay 2 \cite{M2} for the preliminary computations needed in the following examples. \subsection{The cusp $x^2+y^3$ in characteristic $2$ or $3$} \label{subsect:cusp-2-3} Let $k$ be a base ring containing the field $\mathbb{F}_p$, $p>0$, and $f=x^2+y^3 \in R=k[x,y]$. Let $I=(f)$ and $A=k[x,y]/I$. The computation of $\Ider_k(A;\infty)$ has been treated in \cite{mat-intder-I}, example 5. Here we are interested in the computation of $\Ider_k(A;m)$, $m\geq 2$. \medskip Let us start with $p=2$. Then the Jacobian ideal of $f$ is $J=(y^2,f)=(x^2,y^2)$. \medskip The module $\Der_k(\log I)$ is free with basis $\{\dpar{x}, f\dpar{y}\}$. It is clear that $f\dpar{y}$ is $I$-logarithmically ($\infty$-)integrable. Let $g\in R$ be a polynomial.
From Corollary \ref{cor:algorit-2}, we have that $g\dpar{x}$ is $I$-logarithmically $2$-integrable if and only if $g^2\in J$. Since $\{g\in R\ |\ g^2 \in J\} = (x,y)$, we deduce that $\{x\dpar{x},y\dpar{x},f\dpar{y}\}$ is a system of generators of $\Ider_k(\log I;2)$. \medskip The derivation $x\dpar{x}$ is the Euler vector field for the weights $w(x)=3, w(y)=2$. From \ref{nume:traves} we know that $x\dpar{x}$ is $I$-logarithmically ($\infty$-)integrable. \medskip Let $c\in R$ be an arbitrary polynomial and $\delta=cy\dpar{x}$. An $I$-logarithmic $2$-integral of $\delta$ is determined by the $k$-algebra map $$ p(x,y)\in R \mapsto p(x+cyt,y+c^2t^2) + (t^3) \in R_3=R[[t]]/(t^3).$$ Since the coefficient of $t^3$ in $f(x+cyt,y+c^2t^2)$ is $0$, we deduce that $\delta$ is $I$-logarithmically $3$-integrable and so $\Ider_k(\log I;3)=\Ider_k(\log I;2)$. A generic $I$-logarithmic $2$-integral of $\delta$ is determined by the $k$-algebra map $$ p(x,y)\in R \mapsto p(x+cyt+dt^2,y+c^2t^2) + (t^3) \in R_3,$$ with $d\in R$, and a generic $I$-logarithmic $3$-integral of $\delta$ is determined by the $k$-algebra map $$ p(x,y)\in R \mapsto p(x+cyt+dt^2+e t^3,y+c^2t^2) + (t^4) \in R_4,$$ with $d,e\in R$. The coefficient of $t^4$ in $f(x+cyt+dt^2+e t^3,y+c^2t^2)$ is $d^2 + yc^4$, and so, the following conditions are equivalent: \begin{enumerate} \item[(a)] $\delta$ is $I$-logarithmically $4$-integrable. \item[(b)] There is a $d\in R$ such that $d^2 + yc^4\in J$. \end{enumerate} The proof of the following lemma is easy: \begin{lema} \label{lema:ej-nolineal} The set $\Gamma:=\{c\in R\ |\ \exists d\in R,\ d^2 + yc^4\in J\}$ is the ideal generated by $x,y$. \end{lema} As a consequence of the lemma we deduce that $\{x\dpar{x},y^2\dpar{x},f\dpar{y}\}$ is a system of generators of $\Ider_k(\log I;4)$.
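\medskip For the reader's convenience, here is a sketch of the proof of Lemma \ref{lema:ej-nolineal}, under the simplifying assumption that $k$ is reduced. Since squaring is additive in characteristic $2$, for $c=ax+by$ with $a,b\in R$ we have $c^2=a^2x^2+b^2y^2\in J$, hence $yc^4\in J$ and we may take $d=0$; this gives $(x,y)\subset\Gamma$. Conversely, if $d^2+yc^4\in J=(x^2,y^2)$, note that every monomial of $d^2$ and of $c^4$ has even partial degrees, so, modulo $(x^2,y^2)$, $$ d^2+yc^4 \equiv d(0,0)^2 + c(0,0)^4\, y, $$ whence $c(0,0)^4=0$ and, $k$ being reduced, $c(0,0)=0$, i.e. $c\in (x,y)$.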
But $y^2\dpar{x}$ is $I$-logarithmically ($\infty$-)integrable after Proposition \ref{prop:Jder-in-Ider}, and so \begin{eqnarray*} &\Der_k(A)= \langle \overline{\dpar{x}}\rangle \varsupsetneq \Ider_k(A;2) = \langle \overline{x\dpar{x}}, \overline{y\dpar{x}}\rangle = \Ider_k(A;3) \varsupsetneq &\\ &\Ider_k(A;4) = \langle \overline{x\dpar{x}}, \overline{y^2\dpar{x}}\rangle = \Ider_k(A;5) = \cdots =\Ider_k(A;\infty). \end{eqnarray*} In particular, we have \begin{eqnarray*} & \ann_A \left( \Der_k(A)/\Ider_k(A;2)\right) = ( \overline{x},\overline{y}) =\sqrt{\overline{J}} \varsupsetneq &\\ & \ann_A \left( \Der_k(A)/\Ider_k(A;\infty)\right) = (\overline{x},\overline{y}^2) \varsupsetneq \overline{J}=(\overline{x}^2,\overline{y}^2). \end{eqnarray*} \medskip Let us now compute the case $p=3$. The Jacobian ideal of $f$ is $J=(x,f)=(x,y^3)$. In a similar way to the preceding case, we obtain that: \begin{enumerate} \item[-)] $\Der_k(\log I) = \langle f\dpar{x},\dpar{y}\rangle$. \item[-)] Since $2$ is invertible in $k$ we have $\Der_k(\log I) =\Ider_k(\log I;2)$. \item[-)] $\Ider_k(\log I;3)=\langle x\dpar{y},y\dpar{y},f\dpar{x}\rangle$. \item[-)] $\Ider_k(\log I;3) = \Ider_k(\log I;\infty)$. \item[-)] $\Der_k(A)= \langle \overline{\dpar{y}}\rangle =\Ider_k(A;2) \varsupsetneq \Ider_k(A;3) = \langle \overline{x\dpar{y}}, \overline{y\dpar{y}}\rangle = \Ider_k(A;4) = \cdots =\Ider_k(A;\infty)$ and $\ann_A \left( \Der_k(A)/\Ider_k(A;\infty)\right) = (\overline{x},\overline{y}) =\sqrt{J_{A/k}}$. \end{enumerate} Let us notice that for the cusp in characteristics $\neq 2,3$ we can apply Proposition \ref{prop:isol-sing-general} and obtain that any derivation is integrable. \subsection{The cusp $x^2+y^3$ over the integers} \label{subsect:cusp-Z} Assume that $k=\mathbb{Z}$ and $f=x^2+y^3 \in R=\mathbb{Z}[x,y]$. Let $I=(f)$ and $A=\mathbb{Z}[x,y]/I$. The Jacobian ideal of $f$ is $J=(2x,3y^2,f)=(2x,3y^2,x^2,y^3)$. 
The $I$-logarithmic derivations of $R$ are generated by $\delta_1=3x\dpar{x}+2y\dpar{y}$, $\delta_2=3y^2\dpar{x}-2x\dpar{y}$, $f\dpar{x}$ and $f\dpar{y}$. The first derivation $\delta_1$ is the Euler vector field for the weights $w(x)=3, w(y)=2$. As in \ref{subsect:cusp-2-3}, $\delta_1$ is $I$-logarithmically integrable. For the second derivation $\delta_2$, we apply Proposition \ref{prop:Jder-in-Ider} and we deduce that it is also $I$-logarithmically integrable. So this is an example of a non-smooth $\mathbb{Z}$-algebra $A$ for which any derivation is integrable. \subsection{The cusp $3x^2+2y^3$ over the integers} \label{subsect:cusp2-Z} Assume that $k=\mathbb{Z}$ and $f=3x^2+2y^3 \in R=\mathbb{Z}[x,y]$. Let $I=(f)$ and $A=\mathbb{Z}[x,y]/I$. The Jacobian ideal of $f$ is $J=(6x,6y^2,f)=(6x,6y^2,3x^2,2y^3)$. The $I$-logarithmic derivations of $R$ are generated by $\delta_1=3x\dpar{x}+2y\dpar{y}$ and $\delta_2=-y^2\dpar{x}+x\dpar{y}$, which in fact form a basis (we can say that ``$f$ is a free divisor'' of $R$). As in \ref{subsect:cusp-2-3}, $\delta_1$ is the Euler vector field for the weights $w(x)=3, w(y)=2$ and so it is $I$-logarithmically integrable. \medskip Let us study the integrability of $a\delta_2$, $a\in R$. The coefficient of $t^2$ in $f(x-ay^2t,y+axt)$ is $a^2(3y^4+6x^2 y)$. Since $6x^2\in J$, this coefficient belongs to $J$ if and only if $3a^2y^4 \in J$, i.e. $a^2\in J:3y^4$. \begin{lema} \label{lema:ej-nolineal-2} \begin{enumerate} \item[(a)] $J:3y^4 = (2,x^2)$. \item[(b)] $\{a\in R\ |\ a^2 \in (2,x^2)\} = (2,x)$. \end{enumerate} \end{lema} \begin{corolario} The $R$-module $\Ider_\mathbb{Z}(\log I;2)$ is generated by $\{\delta_1,2\delta_2,x\delta_2\}$ and so $ \ann_A \left( \Der_\mathbb{Z}(A)/\Ider_\mathbb{Z}(A;2)\right) = (2,x)$. \end{corolario} Let us study the $3$-integrability of $$(2b+cx)\delta_2= -y^2(2b+cx)\dpar{x}+(2b+cx)x\dpar{y},\quad b,c\in R.$$ Let us write $a=2b+cx$. 
The coefficient of $t^2$ in $f(x-y^2(2b+cx)t,y+(2b+cx)xt)$ is $A(2y^3)+B(3x^2)$ with $A=6b(b+cx)y$, $B=c^2y^4 + 2a^2y$, which can be expressed as $$ (A-B)xf'_x+(A-B)yf'_y+(3B-2A)f.$$ Hence, the coefficient of $t^2$ in $$f(x-y^2(2b+cx)t+(B-A)xt^2,y+(2b+cx)xt+(B-A)yt^2)$$ is $(3B-2A)f$ and the reduction $\mod t^3$ of the $\mathbb{Z}$-algebra map $$\Psi^{(2)}: p(x,y)\in R \mapsto p(x-y^2(2b+cx)t+(B-A)xt^2,y+(2b+cx)xt+(B-A)yt^2) \in R[[t]]$$ is $I$-logarithmic and gives rise to an $I$-logarithmic $2$-integral of $a\delta_2$. So, the reduction $\mod t^3$ of the $\mathbb{Z}$-algebra map $\Psi_g^{(2)}:R\to R[[t]]$ given by \begin{eqnarray*} x & \mapsto &x-y^2(2b+cx)t+[(B-A)x+3dx-ey^2]t^2,\\ y & \mapsto &y+(2b+cx)xt+[(B-A)y+2dy+ex]t^2 \end{eqnarray*} is the map associated with a generic $I$-logarithmic $2$-integral of $a\delta_2$. Moreover, the coefficient of $t^2$ in $\Psi_g^{(2)}(f)$ is $(3B-2A+6d)f$. \medskip The coefficient of $t^3$ in $\Psi_g^{(2)}(f)$ is $6x^2y^6c^3+12xy^6bc^2+12x^4y^3c^3+36x^3y^3bc^2+2x^6c^3+36x^2y^3b^2c+12x^5bc^2+24xy^3b^3+24x^4b^2c+6xy^4ce+16x^3b^3+6x^2y^2cd+12y^4be+12x^3yce+12xy^2bd+24x^2ybe$, and it belongs to $J$ if and only if $$ 2x^6c^3+16x^3b^3\in J \Leftrightarrow x^3c^3+8b^3 \in (J:2x^3). $$ \begin{lema} \label{lema:ej-nolineal-3} With the above notations, the following assertions hold: \begin{enumerate} \item[(a)] $J:2x^3 = (3,y^3)$. \item[(b)] $ x^3c^3+8b^3 \in (J:2x^3)\Leftrightarrow a^3\in (J:2x^3) \Leftrightarrow a\in (3,y)$.
\end{enumerate} \end{lema} \begin{corolario} The $I$-logarithmic derivation $a\delta_2$ is $I$-logarithmically $3$-integrable if and only if $a\in (2,x)\cap (3,y) = (6,3x,2y,xy)$, and so the $R$-module $\Ider_\mathbb{Z}(\log I;3)$ is generated by $\{\delta_1,6\delta_2,3x\delta_2,2y\delta_2,xy\delta_2\}$ and \begin{eqnarray*} &\ann_A \left( \Der_\mathbb{Z}(A)/\Ider_\mathbb{Z}(A;3)\right) = (2,\overline{x})\cap (3,\overline{y}),&\\ & \ann_A \left( \Ider_\mathbb{Z}(A;2)/\Ider_\mathbb{Z}(A;3)\right) =(3,\overline{y}) . \end{eqnarray*} \end{corolario} The following lemma cannot be deduced directly from Proposition \ref{prop:Jder-in-Ider}. Its proof proceeds by induction and is left to the reader. \begin{lema} \label{lema:tecnico-Z-cusp} Let $a\in (2,x)\cap (3,y)$. There are sequences $a_i, b_i\in R$, $i\geq 2$, such that the $\mathbb{Z}$-algebra map $$\Psi: p(x,y)\in R \mapsto p\left(x-ay^2t+\sum_{i=2}^\infty a_i t^i,y+axt+\sum_{i=2}^\infty b_i t^i\right) \in R[[t]]$$ is $I$-logarithmic, i.e. $\Psi(f) \in R[[t]]f$. \end{lema} \begin{corolario} We have $$ \Ider_\mathbb{Z}(A;3) = \Ider_\mathbb{Z}(A;4)= \cdots = \Ider_\mathbb{Z}(A),$$ and so $$ \ann_A \left( \Der_\mathbb{Z}(A)/\Ider_\mathbb{Z}(A)\right) = (2,\overline{x})\cap (3,\overline{y}) \supsetneq \sqrt{J_{A/\mathbb{Z}}} = (3\overline{x},2\overline{y}).$$ \end{corolario} The following two examples have been proposed by Herwig Hauser. \subsection{The surface $x_3^2+x_1 (x_1+x_2)^2=0$ in characteristic $2$} Let $k$ be a field of characteristic $2$, $f=x_3^2+x_1 (x_1+x_2)^2 \in R=k[x_1,x_2,x_3]$, $I=(f)$ and $A=R/I$. The Jacobian ideal is $J=(\ell^2,f)= (\ell^2,x_3^2)$ with $\ell=x_1+x_2$, and $\sqrt{J}=(\ell,x_3)$. A system of generators of $\Der_k(\log I)$ mod. $f\Der_k(R)$ is $\{\dpar{2}, \dpar{3}\}$. \medskip \begin{lema} Let $\alpha,\beta\in R$ and $\delta=\alpha \dpar{2} + \beta \dpar{3}$. The following conditions are equivalent: \begin{enumerate} \item[(a)] $\delta$ is $I$-logarithmically $2$-integrable.
\item[(b)] $x_1\alpha^2+\beta^2\in J$. \end{enumerate} \end{lema} \begin{lema} The module $\{(\alpha,\beta)\in R^2\ |\ x_1\alpha^2+\beta^2\in J\}$ is generated by $(x_3,0), (\ell,0),(0,x_3), (0,\ell)$. \end{lema} \begin{corolario} A system of generators of $\Ider_k(\log I;2)$ $\mod f\Der_k(R)$ is $\{x_3 \dpar{2}, \ell \dpar{2}, x_3 \dpar{3}, \ell \dpar{3}\}$. \end{corolario} \begin{proposicion} $\Ider_k(A;2)=\Ider_k(A)$. \end{proposicion} \begin{prueba} We need to prove that $x_3 \dpar{2}, \ell \dpar{2}, x_3 \dpar{3}, \ell \dpar{3}$ are $I$-logarithmically integrable. \smallskip The derivation $x_3\dpar{3}$ is the Euler vector field for the weights $w(x_1)=w(x_2)=2, w(x_3)=3$. From \ref{nume:traves} we deduce that $x_3\dpar{3}$ is $I$-logarithmically integrable. \smallskip The derivation $\ell \dpar{3}$ is $I$-logarithmically integrable since $f(x_1+t^2,x_2+t^2,x_3 + \ell t) =\cdots= f\in R[t]\subset R[[t]]$ and so an $I$-logarithmic integral of $\ell \dpar{3}$ is given by the $k$-algebra map $R \to R[[t]]$ determined by $$ x_1 \mapsto x_1+t^2,\quad x_2 \mapsto x_2+t^2,\quad x_3\mapsto x_3 + \ell t.$$ \smallskip For the derivation $x_3 \dpar{2}$ let us write $W(t) = \frac{x_1^2 t^2}{1-x_1 t^2} \in (t^2)R[[t]]$ and consider the homomorphism of $k$-algebras $\Psi: R \to R[[t]]$ given by: $$x_1 \mapsto x_1+ W(t),\quad x_2 \mapsto x_2 + x_3 t + W(t),\quad x_3 \mapsto x_3.$$ We have $\Psi(f)=f(x_1+ W,x_2 + x_3 t + W,x_3)=\cdots= \left(\frac{1}{1-x_1 t^2}\right) f$ and so $\Psi$ gives rise to an $I$-logarithmic integral of $x_3 \dpar{2}$. \smallskip For the derivation $\ell \dpar{2}$ let us write $V(t) = \frac{x_1 t^2}{1-t^2} \in (t^2) R[[t]]$ and consider the homomorphism of $k$-algebras $\Psi: R \to R[[t]]$ given by: $$x_1 \mapsto x_1+ V(t),\quad x_2 \mapsto x_2 + \ell t + V(t),\quad x_3 \mapsto x_3.$$ We have $\Psi(f)=f(x_1+ V,x_2 + \ell t + V,x_3)=\cdots = f$ and so $\Psi$ gives rise to an $I$-logarithmic integral of $\ell \dpar{2}$.
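\smallskip For completeness, the elided computations above can be spelled out; everything takes place in characteristic $2$. For $\ell\dpar{3}$, the images of $\ell$ and $x_3$ are $\ell+2t^2=\ell$ and $x_3+\ell t$, so $$ f(x_1+t^2,x_2+t^2,x_3+\ell t)= (x_3+\ell t)^2 + (x_1+t^2)\ell^2 = x_3^2+\ell^2t^2+x_1\ell^2+\ell^2t^2 = f. $$ For $x_3\dpar{2}$, we have $\Psi(\ell)=\ell+x_3t$ and $x_1+W=\frac{x_1}{1-x_1t^2}$, so $$ \Psi(f)= x_3^2+(x_1+W)(\ell+x_3t)^2 = x_3^2+\frac{x_1(\ell^2+x_3^2t^2)}{1-x_1t^2} = \frac{x_3^2+x_1\ell^2}{1-x_1t^2} = \left(\frac{1}{1-x_1 t^2}\right) f. $$ For $\ell\dpar{2}$, we have $\Psi(\ell)=\ell(1+t)$ and $x_1+V=\frac{x_1}{1-t^2}$, so $$ \Psi(f)= x_3^2+\frac{x_1}{1-t^2}\,\ell^2(1+t)^2 = x_3^2+x_1\ell^2 = f, $$ since $(1+t)^2=1+t^2=1-t^2$ in characteristic $2$.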
\end{prueba} In this example the descending chain of modules of integrable derivations stabilizes from $N=2$: $$ \Der_k(A)=\Ider_k(A;1) \supset \Ider_k(A;2) =\Ider_k(A;3)=\cdots= \Ider_k(A;\infty)$$ and $$\ann_A \left( \Der_k(A)/\Ider_k(A;\infty)\right) = (\ell,x_3) = \sqrt{J}/I.$$ \subsection{The surface $x_3^2+x_1 x_2 (x_1+x_2)^2=0$ in characteristic $2$} Let $k$ be a field of characteristic $2$, $f=x_3^2+x_1 x_2 (x_1+x_2)^2 \in R=k[x_1,x_2,x_3]$, $I=(f)$ and $A=R/I$. The Jacobian ideal is $J=(x_2\ell^2,x_1\ell^2,f)= (x_2\ell^2,x_1\ell^2,x_3^2)$ with $\ell=x_1+x_2$. It is clear that $\sqrt{J}=(\ell,x_3)$. The module $\Der_k(\log I)$ is generated mod. $f\Der_k(R)$ by $\dpar{3}$, $\varepsilon = x_1\dpar{1} + x_2\dpar{2}$ and $\eta = x_1^2 \ell^2\dpar{1}+x_3^2 \dpar{2}$ ($\dpar{3}(f)=\varepsilon(f)=0, \eta(f)=x_1\ell^2f$). Since $\varepsilon$ is the Euler vector field for the weights $w(x_1)=w(x_2)=1, w(x_3)=2$, we deduce from \ref{nume:traves} that $\varepsilon$ is $I$-logarithmically integrable. From Proposition \ref{prop:Jder-in-Ider} we also deduce that $\eta$ is $I$-logarithmically integrable. \medskip To find a system of generators of $\Ider_k(\log I;2)$ we need the conditions on $a\in R$ which guarantee that $a\dpar{3}$ is $I$-logarithmically $2$-integrable. The coefficient of $t^2$ in $f(x_1,x_2,x_3+at)=f+a^2t^2$ is $a^2$, and so $a\dpar{3}$ is $I$-logarithmically $2$-integrable if and only if $a^2 \in J$. \begin{lema} $\{a\in R\ |\ a^2 \in J\} = (x_3,x_1\ell,x_2\ell)$. \end{lema} \begin{corolario} A system of generators of $\Ider_k(\log I;2)$ mod. $f\Der_k(R)$ is $\{x_3\dpar{3},x_1\ell\dpar{3},x_2\ell\dpar{3},\varepsilon,\eta\}$. In particular we have $$ \ann_A \left( \Der_k(A)/\Ider_k(A;2)\right) = (\overline{x_3},\overline{x_2} \overline{\ell},\overline{x_1}\overline{\ell}).$$ \end{corolario} The following lemma is a very particular case of a general result. \begin{lema} Any Hasse--Schmidt derivation $E\in\HS_k(A;2)$ is $3$-integrable.
\end{lema} \begin{prueba} Since $3$ is invertible in $k$, we can consider the differential operator $E_3= E_1 E_2 - \frac{1}{3}E_1^3$ and check that $(\Id_A,E_1,E_2,E_3)$ is a Hasse--Schmidt derivation. \end{prueba} As a consequence of the above lemma we have $\Ider_k(A;2)=\Ider_k(A;3)$. \smallskip Let us see the conditions for $a\dpar{3}$, with $a=\alpha x_3 +\beta x_1\ell+\gamma x_2\ell$, $\alpha,\beta,\gamma\in R$, to be $I$-logarithmically $4$-integrable. The algebra map associated with a general $I$-logarithmic $3$-integral of $a\dpar{3}$ is $\Psi^{(3)}:R \to R_3$ given by: \begin{eqnarray*} x_1 & \mapsto &x_1+(\alpha^2 x_1 +\gamma^2 x_2+B_1x_1+C_1x_1^2\ell^2) t^2+(B_2x_1+C_2x_1^2\ell^2)t^3,\\ x_2 & \mapsto &x_2 + (\beta^2 x_1+B_1 x_2+C_1 x_3^2)t^2+(B_2 x_2+C_2 x_3^2)t^3,\\ x_3 &\mapsto &x_3+(\alpha x_3 +\beta x_1\ell+\gamma x_2\ell)t + A_1 t^2+A_2 t^3 \end{eqnarray*} with $A_2,B_2,C_2\in R$, and let $\Psi_0^{(4)}:R\to R_4$ be the obvious lifting of $\Psi^{(3)}$. The coefficient $\mod J$ of $t^4$ in the expression of $\Psi_0^{(4)}(f)$ is $x_1 x_2^3(\alpha+\beta+\gamma)^4+A_1^2$. So, we have proved the following lemma. \begin{lema} With the above notations, the following assertions are equivalent: \begin{enumerate} \item[(a)] The logarithmic derivation $a\dpar{3}$, with $a=\alpha x_3 +\beta x_1\ell+\gamma x_2\ell$, is $I$-logarithmically $4$-integrable. \item[(b)] There is $A_1\in R$ such that $x_1 x_2^3(\alpha+\beta+\gamma)^4+A_1^2\in J$, or, equivalently, $x_1 x_2^3(\alpha+\beta+\gamma)^4 \in J + R^2$. \end{enumerate} \end{lema} \begin{lema} \label{lema:ej-nolineal-4} We have $\{\varphi\in R\ |\ x_1 x_2^3 \varphi^4 \in J + R^2\}=(x_3,\ell)$. \end{lema} \begin{prueba} Let us write $\mathfrak{A}=\{\varphi\in R\ |\ x_1 x_2^3 \varphi^4 \in J + R^2\}$. It is clear that $x_3,\ell\in \mathfrak{A}$, since $x_3^4\in J$ and $x_1 x_2^3 \ell^4\in J$. 
Let $\varphi$ be an element in $\mathfrak{A}$ and let us write $\varphi = q x_3 +\varphi_1(x_1,x_2)$, with $q\in R$ and $\varphi_1(x_1,x_2)\in \mathfrak{A}$. We have $$x_1 x_2^3 \varphi_1^4 = U(x_1,x_2) x_1 \ell^2 + V(x_1,x_2) x_2 \ell^2 + P(x_1,x_2)^2.$$ By taking derivatives with respect to $x_1$ we obtain $x_2^3 \varphi_1^4 = U'_{x_1} x_1 \ell^2 +U \ell^2 + V'_{x_1} x_2\ell^2$ and so $\ell$ divides $\varphi_1$. We conclude that $\mathfrak{A} = (x_3,\ell)$. \end{prueba} As a consequence of the above lemma and the fact that $(x_3,\ell)$ is a prime ideal, the condition $x_1 x_2^3(\alpha+\beta+\gamma)^4 \in J + R^2$ is equivalent to $\alpha+\beta+\gamma \in (x_3,\ell)$, i.e. to $\alpha = \alpha_1 x_3 + \alpha_2 \ell + \beta +\gamma$ and so $ a = \cdots = \alpha_1 x_3^2 + \alpha_2 x_3\ell + \beta(x_3+x_1\ell) +\gamma(x_3+x_2\ell)$. We conclude with the following corollary. \begin{corolario} A system of generators of $\Ider_k(\log I;4)$ $\mod f\Der_k(R)$ is $\{x_3^2\dpar{3},x_3\ell\dpar{3},(x_3+x_1\ell)\dpar{3},(x_3+x_2\ell)\dpar{3},\varepsilon,\eta\}$. In particular we have \begin{eqnarray*} & \ann_A \left( \Der_k(A)/\Ider_k(A;2)\right) = (\overline{x_3},\overline{x_2} \overline{\ell},\overline{x_1}\overline{\ell}),&\\ & \ann_A \left( \Der_k(A)/\Ider_k(A;4)\right) = (\overline{x_3}^2,\overline{x_3}\overline{\ell},\overline{x_3}+\overline{x_2}\overline{\ell},\overline{x_3}+\overline{x_1}\overline{\ell}),&\\ & \ann_A \left( \Ider_k(A;2)/\Ider_k(A;4)\right) = (\overline{x_3},\overline{\ell})& \end{eqnarray*} and all the inclusions $$ J_{A/k} \subset (\overline{x_3}^2,\overline{x_3}\overline{\ell},\overline{x_3}+\overline{x_2}\overline{\ell},\overline{x_3}+\overline{x_1}\overline{\ell}) \subset (\overline{x_3},\overline{x_2} \overline{\ell},\overline{x_1}\overline{\ell}) \subset (\overline{x_3},\overline{\ell})=\sqrt{J_{A/k}}$$ are strict. \end{corolario} From Proposition \ref{prop:Jder-in-Ider} we deduce that $x_3^2\dpar{3}$ is $I$-logarithmically integrable. 
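The claims above about the generators of $\Der_k(\log I)$ can also be confirmed mechanically. The following is a small sympy sketch (ours, not part of the original argument) checking over GF(2) that $\dpar{3}$ and $\varepsilon$ kill $f$ and that $\eta(f)=x_1\ell^2 f$:

```python
# Our own sanity check (assuming sympy is available): over GF(2),
# d/dx3 and eps = x1 d/dx1 + x2 d/dx2 kill f, while
# eta = x1^2 l^2 d/dx1 + x3^2 d/dx2 sends f to x1 l^2 f.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
l = x1 + x2
f = x3**2 + x1*x2*l**2

def zero_mod2(p):
    """True iff the polynomial p vanishes in characteristic 2."""
    return sp.Poly(sp.expand(p), x1, x2, x3, modulus=2).is_zero

ok_d3  = zero_mod2(sp.diff(f, x3))                         # d/dx3(f) = 0
ok_eps = zero_mod2(x1*sp.diff(f, x1) + x2*sp.diff(f, x2))  # eps(f) = 0
ok_eta = zero_mod2(x1**2*l**2*sp.diff(f, x1)
                   + x3**2*sp.diff(f, x2) - x1*l**2*f)     # eta(f) = x1 l^2 f
```

All three booleans come out true, in accordance with the statements above.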
\begin{lema} \label{lema:tecnico-xld3} The derivation $x_3\ell\dpar{3}$ is $I$-logarithmically integrable. \end{lema} \begin{prueba} Let us write $\delta=x_3\ell\dpar{3}$ and $D=(x_3\ell){\hspace{.1em}\scriptsize\bullet\hspace{.1em}} \Delta^{(3)}$. We have $\Phi_D(f)= f + (x_3\ell)^2 t^2$ and $(x_3\ell)^2 = f_{x_1}'f_{x_2}'+\ell^2 f = x_1 x_2 \ell^4 + \ell^2 f$. Let us also write $S=k[x_1,x_2]$ and $\mathfrak{b}=(f_{x_1}',f_{x_2}')=(x_2 \ell^2,x_1 \ell^2)\subset S$. We are going to construct inductively a sequence of differential operators $E^m_m\in\mathfrak{b}\diff_{S/k}$, $m\geq 1$, with $E^1_1=0$, $E^2_2(f)= x_1 x_2 \ell^4$, $E^m_m(f)=0$ for all $m\geq 3$ and such that $(\Id,E^1_1,E^2_2,E^3_3,\dots)$ is a Hasse--Schmidt derivation of length $\infty$. \smallskip For $m=2$, let us take $E^2_2 =f_{x_2}'\dpar{1}$. \smallskip Assume that we have already found a Hasse--Schmidt derivation $E^m=(\Id,E^1_1,\dots,E^m_m)\in\HS_k(S;m)$ with the required properties. Let us consider $F^m=\varepsilon(E^m)\in \HS_k(S;\infty)$. From Proposition \ref{prop:formulon}, 2) we deduce that $F^m_{m+1} \in \mathfrak{b}^2 \diff_{S/k}$ and so $F^m_{m+1}(f)\in \mathfrak{b}^2$. Hence, there are $\alpha,\beta \in \mathfrak{b}$ such that $F^m_{m+1}(f) = \alpha f_{x_1}'+\beta f_{x_2}'$ and consequently we can take $E^{m+1}_{m+1}=F^m_{m+1} - (\alpha \dpar{1}+\beta\dpar{2})$. \smallskip Once the Hasse--Schmidt derivation $E=(\Id,0,E^2_2,E^3_3,\dots)\in \HS_k(S;\infty)$ has been constructed, we extend it in the obvious way to the ring $R$ (we keep the same name $E$ for the extension). We have $\Phi_{D{\scriptstyle \,\circ\,} E}(f) = \widetilde{\Phi}_D\left( \Phi_E(f)\right) = \widetilde{\Phi}_D\left( f + x_1 x_2 \ell^4 t^2 \right) = \Phi_{D}(f) + \Phi_{D}(x_1 x_2 \ell^4)t^2 = f + (x_3\ell)^2 t^2 + x_1 x_2 \ell^4t^2 = (1+\ell^2 t^2)f$ and so $D{\scriptstyle \,\circ\,} E$ is an $I$-logarithmic integral of $\delta$. \end{prueba} The proof of the following lemma is due to M. M\'erida. 
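The two identities on which the proof above rests, $(x_3\ell)^2 = f_{x_1}'f_{x_2}'+\ell^2 f$ and $E^2_2(f)=f_{x_2}'f_{x_1}'=x_1 x_2\ell^4$, can be checked by machine as well; a small sympy sketch (ours):

```python
# Our own check (assuming sympy) of two identities used in the proof of the
# lemma above, working in characteristic 2.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
l = x1 + x2
f = x3**2 + x1*x2*l**2
fx1, fx2 = sp.diff(f, x1), sp.diff(f, x2)

def zero_mod2(p):
    """True iff the polynomial p vanishes in characteristic 2."""
    return sp.Poly(sp.expand(p), x1, x2, x3, modulus=2).is_zero

ok_square = zero_mod2((x3*l)**2 - (fx1*fx2 + l**2*f))  # (x3 l)^2 = f'_{x1} f'_{x2} + l^2 f
ok_E22    = zero_mod2(fx1*fx2 - x1*x2*l**4)            # E^2_2(f) = x1 x2 l^4
```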
\begin{lema} The derivations $(x_3+x_1\ell)\dpar{3}$ and $(x_3+x_2\ell)\dpar{3}$ are $I$-logarithmically integrable. \end{lema} \begin{prueba} By symmetry, it is enough to consider the case $(x_3+x_1\ell)\dpar{3}$, for which the logarithmic integrability is a consequence of the fact that the map $\Psi: R \to R[[t]]$ given by: \begin{eqnarray*} x_1 & \mapsto &x_1+x_1V,\\ x_2 & \mapsto &x_2+x_1V,\\ x_3 &\mapsto &x_3+(x_3+x_1\ell)t+x_3V, \end{eqnarray*} with $\displaystyle V = \sum_{i=1}^\infty t^{2^i}$, is $I$-logarithmic. Namely, since $t^2=V^2+V$, we have \begin{eqnarray*} & f(x_1+x_1V,x_2+x_1V,x_3+(x_3+x_1\ell)t+x_3V) =&\\ &(x_3+(x_3+x_1\ell)t+x_3V)^2 + (x_1+x_1V)(x_2+x_1V)\ell^2=&\\ &x_3^2 +(x_3^2+x_1^2 \ell^2) t^2 +x_3^2 V^2 + (x_1 x_2 + x_1^2 V + x_1 x_2 V + x_1^2 V^2)\ell^2=&\\ & x_3^2 +(x_3^2+x_1^2 \ell^2) t^2 +x_3^2 V^2 + (x_1 x_2 + x_1^2 t^2 + x_1 x_2 V)\ell^2 = &\\ & x_3^2 +x_3^2 t^2 +x_3^2 V^2 + (x_1 x_2 + x_1 x_2 V)\ell^2 = f + x_3^2 t^2 +x_3^2 V^2 + x_1 x_2 V\ell^2= &\\ & f + x_3^2 V + x_1 x_2 V\ell^2 = (1+V) f.& \end{eqnarray*} \end{prueba} \begin{corolario} $\Ider_k(A;4)=\Ider_k(A)$. \end{corolario} \subsection{Some questions} \begin{question} \label{cuestion:1} Assume that $R=k[x_1,\dots,x_d]$, $S\subset R$ is a multiplicative set and $A=S^{-1}R$ or $A=k[[x_1,\dots,x_d]]$. Let $I\subset A$ be an ideal, $m\geq 1$ an integer, $D\in\HS_k(\log I;m)$ and $E=\overline{D}\in\HS_k(A/I;m)$. Let us consider the following properties: \begin{enumerate} \item[(a)] $D$ is $I$-logarithmically $n$-integrable for all integers $n\geq m$ (or equivalently, $E$ is $n$-integrable for all integers $n\geq m$). \item[(b)] $D$ is $I$-logarithmically $\infty$-integrable (or equivalently $E$ is $\infty$-integrable). \end{enumerate} Under which hypotheses on $k$ and on $I$ are properties (a) and (b) equivalent for any $D\in\HS_k(\log I;m)$? Are they equivalent if $k$ is a field or the ring of integers and $I$ is arbitrary? 
\medskip Notice that this question is the same as asking whether the inclusion in \ref{eq:intersection-iderlog} (or in \ref{eq:intersection-ider} for $m=1$) is an equality or not. \end{question} \begin{question} The proofs of propositions \ref{prop:surjec-IHS-log-localiz-poly} and \ref{prop:surjec-IHS-localiz-fp} do not work for $m=\infty$ and, presumably, these propositions are not true for $m=\infty$ without additional finiteness hypotheses on $k$. Let us notice that if the maps in Proposition \ref{prop:surjec-IHS-localiz-fp} are surjective for $m=\infty$, then the localization conjecture for the Hasse--Schmidt algebra stated in \cite{traves-2003} is true. \end{question} \begin{question} For any finitely presented $k$-algebra $A$, find an algorithm for deciding whether \underline{a given} $\delta\in\Der_k(A)$ is $m$-integrable or not. \end{question} \begin{question} For any finitely presented $k$-algebra $A$, find an algorithm to obtain a system of generators of $\Ider_k(A;m)$, $m\geq 2$. \end{question} \begin{question} Assume that the base ring $k$ is a field of positive characteristic or $\mathbb{Z}$, or perhaps a more general noetherian ring, and $A$ a finitely generated $k$-algebra. Is there an integer $n\geq 1$ such that $\Ider_k(A;n)= \Ider_k(A;\infty)$? Or at least, is the descending chain of $A$-modules $ \Ider_k(A;1) \supset \Ider_k(A;2) \supset \Ider_k(A;3) \supset \cdots$ stationary? \end{question} \begin{question} Assume that the base ring $k$ is a field of positive characteristic or $\mathbb{Z}$, or perhaps a more general noetherian ring. Is there an integer $m\gg 1$, possibly depending on $d$ and $e$ or other numerical invariants, such that $$ \Ider_k(A;m)=\Der_k(A)\quad \Rightarrow\quad \Ider_k(A)=\Der_k(A)$$ for every quotient ring $A=k[x_1,\dots,x_d]/I$ with $\dim A= e$? 
\end{question} \begin{question} Assume that the base ring $k$ is a field of positive characteristic or $\mathbb{Z}$, or perhaps a more general noetherian ring, $A$ a local noetherian $k$-algebra and $\delta:A\to A$ a $k$-derivation. Under which hypotheses does the $m$-integrability of $\widehat{\delta}:\widehat{A} \to \widehat{A}$ imply the $m$-integrability of $\delta$? \end{question}
\section{Introduction.} One of the unsolved problems in the AdS/CFT correspondence \cite{1} (for an excellent review, see \cite{AGMOO}) is how to obtain a non-SUSY gauge theory with a typical running coupling on the boundary side. A related question concerns confinement in such a theory. It is desirable to answer these questions from the supergravity (SG) side, as it captures the strong coupling regime of the boundary quantum field theory (QFT). There are different proposals to get a running gauge coupling in a non-SUSY theory: using the Type 0 string theory approach \cite{2}, deforming ${\cal N}=4$ theory \cite{15} (also via AdS orbifolding \cite{17}), or making non-constant dilaton deformations of the ${\rm AdS}_5\times{\rm S}_5$ vacuum in IIB SG \cite{3,4,6,7,8,9,NO}. In the last case the non-constant dilaton breaks the conformal invariance and (a part of) the supersymmetry of the boundary ${\cal N}=4$ super YM theory. (In the presence of an axion (RR-scalar), a part of the supersymmetry may remain unbroken while the dilaton is still non-trivial \cite{14}.) Then the exponential of the dilaton describes the running gauge coupling, with a power-law behavior and a UV-stable fixed point. Within this picture an indication of the possibility of confinement is also found. The features of the running and of confinement depend on the axion \cite{7}, vectors \cite{8}, a worldvolume scalar \cite{9} or the curvature of the four-dimensional space \cite{NO}. On the other hand, it has also been realized that the planar ${\rm AdS}_5$ BH is dual to a thermal state of ${\cal N}=4$ super YM theory. The corresponding coupling constant dependence has been studied in refs.\cite{GKT,AAT}, based on the earlier study of the SG-side free energy in ref.\cite{GKP}. The spherical AdS BH exhibits a finite temperature phase transition \cite{HP}, which may be used to realize confinement in the large $N$ theory at low temperatures \cite{witten}. In this paper, we attempt to combine these two approaches, i.e. 
to find a deformation of the IIB SG ${\rm AdS}_5\times{\rm S}_5$ background with a non-trivial dilaton in which the AdS sector is described by a BH (hence, a temperature appears). The running gauge coupling of the gauge theory at non-zero temperature is then given by the exponential of the dilaton. We present a class of approximate solutions of IIB SG\footnote{These solutions presumably describe thermal states of the non-SUSY gauge theory which descends from ${\cal N}=4$ super YM after the breaking of SUSY and conformal invariance.} with such properties, in which the running coupling shows a power-like dependence on the temperature (in the expansion in the radius). The quark-antiquark potential for these solutions is also found, and the possibility of confinement at non-zero temperature is established. Corrections to the position of the horizon (in the near-horizon regime) and to the temperature are calculated. The thermodynamics of the obtained solutions is also investigated. The paper is organized as follows. In the next section we present an approximate solution of IIB supergravity. It represents a dilatonic perturbation of the zero mass hyperbolic AdS BH. The temperature dependence of the running gauge coupling (the exponential of the dilaton) and of the corresponding beta-function is derived in different regimes. The quark-antiquark potential, which is repulsive unlike in the constant dilaton case, is analyzed. Section 3 is devoted to the study of the same questions for the background representing a dilatonic deformation of the (non)planar non-zero mass AdS BH. The temperature dependence of the running gauge coupling is different from the situation in the previous section. Confinement is possible, as follows from the study of the quark-antiquark potential. In section 4, we investigate the thermodynamic properties of our AdS backgrounds. The free energy, mass and entropy are found, taking into account non-trivial temperature corrections due to the dilaton. This is compared with the leading behaviour of the free energy in ${\cal N}=4$ super Yang-Mills theory. Some outlook is given in the last section. 
\section{Perturbative solutions of IIB supergravity, running gauge coupling and potential: zero mass BH case} We start from the action of dilatonic gravity in $d+1$ dimensions: \begin{equation} \label{i} S=-{1 \over 16\pi G}\int d^{d+1}x \sqrt{-G}\left(R - \Lambda - \alpha G^{\mu\nu}\partial_\mu \phi \partial_\nu \phi \right)\ . \end{equation} In the following, we assume $\lambda^2\equiv -\Lambda$ and $\alpha$ to be positive. The action (\ref{i}) contains the effective action of type IIB string theory. In type IIB supergravity, which is the low energy effective theory of the type IIB string, we can consider a bosonic background where the anti-self-dual five-form is given by the Freund-Rubin-type ansatz and the topology is $M_5\times {\rm S}^5$, with the manifold $M_5$ asymptotically ${\rm AdS}_5$. If the dilaton depends only on the coordinates of $M_5$, then by integrating over the five coordinates of ${\rm S}^5$ we obtain an effective five-dimensional theory, which corresponds to the $d=4$, $\alpha={1 \over 2}$ case of (\ref{i}). This will be the case under consideration in this work. From the variation of the action (\ref{i}) with respect to the metric $G^{\mu\nu}$, we obtain\footnote{ The conventions of curvatures are given by \begin{eqnarray*} R&=&G^{\mu\nu}R_{\mu\nu} \\ R_{\mu\nu}&=& -\Gamma^\lambda_{\mu\lambda,\kappa} + \Gamma^\lambda_{\mu\kappa,\lambda} - \Gamma^\eta_{\mu\lambda}\Gamma^\lambda_{\kappa\eta} + \Gamma^\eta_{\mu\kappa}\Gamma^\lambda_{\lambda\eta} \\ \Gamma^\eta_{\mu\lambda}&=&{1 \over 2}G^{\eta\nu}\left( G_{\mu\nu,\lambda} + G_{\lambda\nu,\mu} - G_{\mu\lambda,\nu} \right)\ . 
\end{eqnarray*} } \begin{equation} \label{iit} 0=R_{\mu\nu}-{1 \over 2}G_{\mu\nu}R + {\Lambda \over 2}G_{\mu\nu} - \alpha \left(\partial_\mu\phi\partial_\nu\phi -{1 \over 2}G_{\mu\nu}G^{\rho\sigma}\partial_\rho \phi \partial_\sigma \phi \right) \end{equation} and from that of dilaton $\phi$ \begin{equation} \label{iiit} 0=\partial_\mu\left(\sqrt{-G}G^{\mu\nu}\partial_\nu\phi\right)\ . \end{equation} We now assume the $(d+1)$-dimensional metric is given by \begin{equation} \label{ii} ds^2=-{\rm e}^{2\rho}dt^2 + {\rm e}^{2\sigma}dr^2 + r^2 \sum_{i,j=1}^{d-1}g_{ij}dx^i dx^j\ . \end{equation} Here $g_{ij}$ does not depend on $r$ and is the metric of an Einstein manifold, defined by \begin{equation} \label{vat} \hat R_{ij}=kg_{ij}\ . \end{equation} Here $\hat R_{ij}$ is the Ricci tensor defined by $g_{ij}$ and $k$ is a constant; in particular $k>0$ for the sphere, $k=0$ for Minkowski space and $k<0$ for the hyperboloid. We also assume $\rho$, $\sigma$ and $\phi$ only depend on $r$. Then the $(\mu,\nu)=(t,t)$, $(r,r)$ and $(i,j)$ components of the equations (\ref{iit}) give, respectively, \begin{eqnarray} \label{iii} 0&=&{(d-1)k{\rm e}^{2\sigma} \over 2r^2} + {(d-1)\sigma' \over r} - {(d-1)(d-2) \over 2r^2} \nonumber \\ && + {\lambda^2 \over 2}{\rm e}^{2\sigma} - {\alpha \over 2}\left(\phi'\right)^2 \\ \label{iv} 0&=&-{(d-1)k{\rm e}^{2\sigma} \over 2r^2} + {(d-1)\rho' \over r} + {(d-1)(d-2) \over 2r^2} \nonumber \\ && - {\lambda^2 \over 2}{\rm e}^{2\sigma} - {\alpha \over 2}\left(\phi'\right)^2 \\ \label{v} 0&=&-{(d-3)k{\rm e}^{2\sigma} \over 2r^2} + \rho'' + \left(\rho'\right)^2 - \rho'\sigma' + {(d-2)\left(\rho'-\sigma'\right) \over r} + {(d-2)(d-3) \over 2r^2} \nonumber \\ && - {\lambda^2 \over 2}{\rm e}^{2\sigma} + {\alpha \over 2}\left(\phi'\right)^2 \ . \end{eqnarray} Here $'\equiv {d \over d r}$. The other components give identities. 
Eq.(\ref{iiit}) has the following form \begin{equation} \label{vi} 0=\left(r^{d-1}{\rm e}^{\rho - \sigma}\phi'\right)'\ , \end{equation} which can be integrated to give \begin{equation} \label{vii} r^{d-1}{\rm e}^{\rho - \sigma}\phi'=c\ . \end{equation} Combining (\ref{iii}) and (\ref{iv}) and substituting (\ref{vii}), we obtain \begin{eqnarray} \label{viii} 0&=&{(d-1)\left(\rho' + \sigma'\right) \over r} - {\alpha c^2 {\rm e}^{2\sigma - 2\rho} \over r^{2d-2}} \\ \label{ix} 0&=&{(d-1)k {\rm e}^{2\sigma} \over r^2} + {(d-1)\left(\sigma' - \rho' \right) \over r} - {(d-1)(d-2) \over r^2} + \lambda^2{\rm e}^{2\sigma}\ . \end{eqnarray} If we introduce new variables $U$ and $V$ by \begin{equation} \label{x} U\equiv {\rm e}^{\rho + \sigma}\ ,\quad V\equiv r^{d-2}{\rm e}^{\rho - \sigma}\ , \end{equation} Eqs.(\ref{viii}), (\ref{ix}) and (\ref{vii}) are rewritten as follows \begin{eqnarray} \label{xi} 0&=&(d-1)U'-{\alpha c^2 \over r V^2}U \\ \label{xii} 0&=&\left\{{(d-1)k \over r^2} + \lambda^2 \right\}U - {(d-1) \over r^{d-1}}V' \\ \label{xiib} \phi'&=&{c \over rV} \end{eqnarray} Deleting $U$ from (\ref{xi}) and (\ref{xii}), we obtain \begin{eqnarray} \label{xiic} 0&=&V'' + \left[-{d-3 \over r} - {2\lambda^2 r \over (d-1)k + \lambda^2 r^2}\right]V' \nonumber \\ &&- {\alpha c^2V' \over (d-1)rV^2} \ . \end{eqnarray} When $c=0$ the solution is given by \begin{eqnarray} \label{xviii} U&=&1 \nonumber \\ V&=&V_0 \nonumber \\ &\equiv& {kr^{d-2} \over d-2} + {\lambda^2 \over d(d-1)}r^d - \mu\ . \end{eqnarray} Here $\mu$ corresponds to the mass of the black hole; zero, positive or negative $k$ corresponds to the planar, spherical or hyperbolic AdS BH, respectively. Using (\ref{xviii}), Eq.(\ref{xii}) and (\ref{xiic}) can be rewritten as follows: \begin{eqnarray} \label{xviiib} U&=&{V' \over V_0'} \nonumber \\ \label{xviiic} 0&=&\left({V' \over V_0'}\right)'-{\alpha c^2V' \over (d-1)rV_0'V^2}\ . \end{eqnarray} When $\mu=0$, the solution is isometric to AdS. 
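That (\ref{xviii}) indeed solves (\ref{xii}) and (\ref{xiic}) at $c=0$ can be checked symbolically; the following sympy sketch (ours, written out for $d=4$) verifies it:

```python
# Symbolic verification (our sketch, assuming sympy) that U = 1 and V0 from
# Eq. (xviii) solve Eqs. (xii) and (xiic) with c = 0, for d = 4.
import sympy as sp

r, k, lam, mu = sp.symbols('r k lambda mu')
d  = 4
V0 = k*r**(d - 2)/(d - 2) + lam**2*r**d/(d*(d - 1)) - mu

# Eq. (xii) with U = 1:
eq_xii  = ((d - 1)*k/r**2 + lam**2) - (d - 1)/r**(d - 1)*sp.diff(V0, r)
# Eq. (xiic) with c = 0:
eq_xiic = (sp.diff(V0, r, 2)
           + (-(d - 3)/r - 2*lam**2*r/((d - 1)*k + lam**2*r**2))*sp.diff(V0, r))

ok_xii  = sp.simplify(eq_xii)  == 0
ok_xiic = sp.simplify(eq_xiic) == 0
```

Note that $\mu$ drops out of both equations, since only $V_0'$ and $V_0''$ enter.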
If we choose $k<0$, the metric has the following form: \begin{equation} \label{xix} ds^2=-{(r^2 - r_0^2) \over l^2}dt^2 + {l^2 \over (r^2 - r_0^2) }dr^2 + r^2 \sum_{i,j=1}^{d-1}g_{ij}dx^i dx^j\ . \end{equation} Here \begin{equation} \label{xx} l^2\equiv {d(d-1) \over \lambda^2}\ ,\quad r_0\equiv l \sqrt{-{k \over d-2}}\ . \end{equation} The obtained AdS metric has a horizon at $r=r_0$. When $r\sim r_0$, the metric behaves as \begin{equation} \label{xxi} ds^2 \sim -{2r_0 (r - r_0) \over l^2}dt^2 + {l^2 \over 2r_0 (r - r_0)}dr^2 + \cdots \ . \end{equation} Then if we define a new coordinate $\rho$ by \begin{equation} \label{xxii} \rho=l\sqrt{2(r-r_0) \over r_0} \end{equation} the metric has the following form: \begin{equation} \label{xxiii} ds^2 \sim -{r_0 \over l^4}\rho^2 dt^2 + d\rho^2 + \cdots \ . \end{equation} Therefore when we Wick-rotate $t$ by $t=i\tau$, $\tau$ has a period of ${2\pi l^2 \over r_0}$, whose inverse gives a temperature $T$: \begin{equation} \label{xxiv} T={r_0 \over 2\pi l^2}={1 \over 2\pi l}\sqrt{-k \over d-2}\ . \end{equation} We now consider the perturbation with respect to $c$. We will concentrate on the case of type IIB SG in $d=4$, by putting $\alpha={1 \over 2}$. Note that in this approximation the radius is away from the horizon. The near-horizon regime will be discussed separately. For the $\mu=0$ and $k<0$ case, the leading term for the dilaton $\phi$ is given by substituting $V_0$ in (\ref{xviii}) into (\ref{xiib}): \begin{eqnarray} \label{xxv} \phi&=&\phi_0 +cl^2\left\{{1 \over 2r_0^4}\ln\left(1 - {r_0^2 \over r^2}\right) + {1 \over 2r_0^2 r^2}\right\} \nonumber \\ &=& \phi_0 + c\left\{{1 \over 2l^6(2\pi T)^4}\ln\left(1 - {l^4 (2\pi T)^2 \over r^2}\right) + {1 \over 2l^2(2\pi T)^2 r^2}\right\}\ . \end{eqnarray} which gives the temperature-dependent running dilaton. We should note that there is a singularity in the dilaton field at the horizon $r=r_0=2\pi l^2 T$. 
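One may verify symbolically that (\ref{xxv}) integrates (\ref{xiib}); a short sympy sketch (ours, using $V_0=r^2(r^2-r_0^2)/l^2$, which is the form of (\ref{xviii}) for $d=4$, $\mu=0$, $k=-2r_0^2/l^2$):

```python
# Our sketch (assuming sympy): the profile (xxv) satisfies phi' = c/(r V0),
# i.e. Eq. (xiib), with V0 = r^2 (r^2 - r0^2)/l^2 for d = 4, mu = 0, k < 0.
import sympy as sp

r, r0, l, c = sp.symbols('r r_0 l c', positive=True)
V0  = r**2*(r**2 - r0**2)/l**2
phi = c*l**2*(sp.log(1 - r0**2/r**2)/(2*r0**4) + 1/(2*r0**2*r**2))

ok = sp.simplify(sp.diff(phi, r) - c/(r*V0)) == 0
```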
The fact that the dilaton may become singular in the IR has already been noted for the two-boundary AdS solution of IIB SG in ref.\cite{3}. It is also interesting that when $r$ is formally less than $r_0$ the dilaton (and also the running coupling) becomes imaginary. Since the string coupling is given by \begin{equation} \label{ci} g=g_s{\rm e}^\phi\ \quad (g_s\ :\ \mbox{constant})\ , \end{equation} we find the behaviour when $r$ is large and $c$ is small as \begin{equation} \label{cii} g\sim g_s\left\{1 + cl^2\left(-{1 \over 4r^4 } - {\left(2\pi l^2 T\right)^2 \over 6r^6} + {\cal O} \left(r^{-8}\right) \right) + {\cal O}(c^2)\right\}\ . \end{equation} Here $\phi_0$ has been absorbed into the redefinition of $g_s$. Since $r$ is the length scale corresponding to the radius of the boundary manifold, $r$ can be regarded as the energy scale of the field theory on the boundary \cite{10}. Therefore the beta-function is given by \begin{equation} \label{ciii} \beta(g)=r{dg \over dr}=-4\left(g-g_s\right) + {8 \over3}\left(2\pi T\right)^2l^3 g_s c \left( {g_s - g \over c g_s} \right)^{3 \over 2}\ . \end{equation} The first term is usual and universal \cite{4,7}. The second term defines the temperature dependence. Let us comment on the case of high $T$. Since we consider the behavior near the boundary, we first take $r$ to be large and only afterwards take $T$ large; then $r\gg Tl^2$ and we can consider the large $T$ case directly in the expression (\ref{ciii}). A problem might occur when $r\sim Tl^2$. In this case, we need to solve Eq.(\ref{xxv}) with respect to $r$ as a function of $T$ and $\phi$ or the coupling: $r=r(g,T)$. Then from (\ref{xxv}) and (\ref{ci}), we find the following expression for the beta-function: \begin{equation} \label{gTii} \beta(g) \sim \left.r{dg \over dr}\right|_{r=r(g,T)} ={g_s c l^2 \over r(g,T)^4 \left(1 - {l^4 (2\pi T)^2 \over r(g,T)^2}\right) }\ . \end{equation} When $r$ is large, the above expression reproduces (\ref{ciii}). 
We can also consider the case where the last term in (\ref{xxv}) is larger than the second term, which contains $\ln (\cdots)$. In this case, the coupling is given by \begin{equation} \label{gTib} g\sim g_s\left( 1 + {c \over 2l^2 \left(2\pi T\right)^2 r^2} \cdots \right)\ , \end{equation} which changes the leading behavior of the beta-function: \begin{equation} \label{gTiib} \beta(g)\sim - 2 \left(g-g_s\right) + \cdots\ . \end{equation} This beta-function presumably defines the strong coupling regime of the non-SUSY gauge theory at high temperature. It is interesting to note that in perturbative gauge theory at non-zero temperature the running gauge coupling contains not only standard logarithms of $T$ but also terms linear in $T$ (see ref.\cite{volodya} and references therein). Of course, in our case we do not have an asymptotically free theory, but one with a stable fixed point. Now we consider the correction for $V$ and $U$, writing them in the following form: \begin{equation} \label{xxvi} V=V_0+c^2 v\ ,\quad U=1+c^2 u\ . \end{equation} Substituting (\ref{xxvi}) and neglecting the higher orders in $c^2$, we obtain \begin{eqnarray} \label{xxvii} u&=&{v' \over V_0'} \nonumber \\ \label{xxviii} 0&=&\left({v' \over V_0'}\right)'-{1 \over 6rV_0^2}\ . \end{eqnarray} With $\mu=0$ and $k<0$ in the above equations one gets, \begin{eqnarray} \label{xxix} u&=&{4 \over 3 k^4 l^4}\left\{-{1 \over 2s^2} - {2 \over s} \right. \nonumber \\ && \left. -3\ln \left(1 - {1 \over s}\right) - {1 \over (s-1)} + c_1 \right\} \\ \label{xxx} v&=&{2 \over 3k^2 l^2}\left\{ -{1 \over 2}\left(3s^2 - 3s +1 \right)\ln \left(1 - {1 \over s}\right) \right. \nonumber \\ && \left. -{3s \over 2} + {3 \over 4} -{1 \over 4s} +{c_1 \over 2}\left(s^2 - s\right) + c_2\right\} \end{eqnarray} Here \begin{equation} \label{xxxi} s=-{2 r^2 \over kl^2} \end{equation} and $c_1$ and $c_2$ are constants of the integration, which should vanish if we require $u$, $v\rightarrow 0$ when $r\rightarrow \infty$. 
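The solutions (\ref{xxix}) and (\ref{xxx}) can be checked against (\ref{xxvii}) and (\ref{xxviii}); the following sympy sketch (ours) evaluates the corresponding differences at an exact sample point with $s>1$:

```python
# Our numerical consistency check (assuming sympy) that (xxix) and (xxx)
# satisfy (xxvii) and (xxviii), i.e. u = v'/V0' and u' = 1/(6 r V0^2),
# for mu = 0 and the integration constants c_1 = c_2 = 0.
import sympy as sp

r, l, k = sp.symbols('r l k')
s  = -2*r**2/(k*l**2)
V0 = r**4/l**2 + k*r**2/2
u  = 4/(3*k**4*l**4)*(-1/(2*s**2) - 2/s - 3*sp.log(1 - 1/s) - 1/(s - 1))
v  = 2/(3*k**2*l**2)*(-sp.Rational(1, 2)*(3*s**2 - 3*s + 1)*sp.log(1 - 1/s)
                      - 3*s/2 + sp.Rational(3, 4) - 1/(4*s))

d1 = u - sp.diff(v, r)/sp.diff(V0, r)       # Eq. (xxvii)
d2 = sp.diff(u, r) - 1/(6*r*V0**2)          # Eq. (xxviii)

point = {r: 2, l: 1, k: -1}                 # exact sample point, s = 8 > 1
ok1 = abs(sp.N(d1.subs(point), 30)) < 1e-20
ok2 = abs(sp.N(d2.subs(point), 30)) < 1e-20
```

Since the substitution is exact (rationals plus $\ln(7/8)$ evaluated at 30 digits), both differences vanish to working precision.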
From (\ref{xxix}) and (\ref{xxx}), we find that $U$ and $V$ or ${\rm e}^{2\rho}$ and ${\rm e}^{2\sigma}$ have a singularity at the unperturbed horizon, corresponding to $s=1$. Eq.(\ref{xxv}) shows that the dilaton field is also singular there. In other words, the expansion with respect to $c^2$ breaks down when $s\sim 1$. Therefore the singularity in $U$, $V$ would not be a real one. In order to investigate the behavior in the near-horizon regime we assume that the radius of the horizon is large and use the ${1 \over r}$ expansion: \begin{equation} \label{r1} V={r^4 \over l^2} + {kr^2 \over 2} + {a \over r^4} + {\cal O}\left(r^{-6}\right)\ . \end{equation} We set the constant term to zero, assuming that the black hole mass vanishes. The absence of the ${1 \over r^2}$ term can be found from (\ref{xviiib}). Eq.(\ref{xviiib}) also tells that \begin{equation} \label{r2} a={c^2l^2 \over 48} \end{equation} and ${\rm e}^\phi$, $V$ and $U$ have the following forms: \begin{eqnarray} \label{r3} {\rm e}^\phi&=&{\rm e}^{\phi_0}\left(1 - {cl^2 \over 4 r^4} + {\cal O}\left(r^{-6}\right)\right) \nonumber \\ V&=&{r^4 \over l^2} + {kr^2 \over 2} + {c^2l^2 \over 48 r^4} + {\cal O}\left(r^{-6}\right) \nonumber \\ U&=&1-{c^2l^4 \over 192 r^8} + {\cal O}\left(r^{-10}\right)\ . \end{eqnarray} From the equation $V=0$ we find the position of the horizon \begin{equation} \label{r4} r=r_h\equiv l\sqrt{-{k \over 2}}\left(1 - {c^2 \over 6 k^4 l^4}\right)\ , \end{equation} which gives the correction to the temperature: \begin{equation} \label{r5} T={1 \over 2\pi l}\sqrt{-{k \over 2}} - {c^2\left(-{k \over 2}\right)^{-{7 \over 2}} \over 192 l^5}\ . \end{equation} Let us turn now to the analysis of the potential between quark and anti-quark\cite{5}. We evaluate the following Nambu-Goto action \begin{equation} \label{rg5} S={1 \over 2\pi}\int d\tau d\sigma \sqrt{{\rm det\,}\left(g^s_{\mu\nu} \partial_\alpha x^\mu \partial_\beta x^\nu\right)}\ . 
\end{equation} with the ``string'' metric $g^s_{\mu\nu}$, which can be obtained by multiplying the metric tensor in (\ref{ii}) by a dilaton factor ${\rm e}^\phi$. We consider the static configuration $x^0=\tau$, $x^1\equiv x=\sigma$, $x^2=x^3=\cdots=x^{d-1}=0$ and $r=r(x)$. We choose the coordinates on the boundary manifold so that the line given by $x^0=$constant, $x^1\equiv x$ and $x^2=x^3=\cdots=x^{d-1}=0$ is a geodesic and $g_{11}=1$ along it. Substituting the configuration into (\ref{rg5}), we find \begin{equation} \label{rg7} S={{\cal T} \over 2\pi}\int dx {\rm e}^{\phi(r)}\sqrt{U(r)V(r)\left( {U(r) \over V(r)}\left(\partial_x r\right)^2 + 1 \right)}\ . \end{equation} Here ${\cal T}$ is the length of the interval of definition of $\tau$, and we choose $\phi_0=0$ for simplicity. The orbit of $r$ can be obtained by minimizing the action $S$ or solving the Euler-Lagrange equation ${\delta S \over \delta r}- \partial_x\left({\delta S \over \delta\left(\partial_x r\right)}\right)=0$. The Euler-Lagrange equation tells us that \begin{equation} \label{rg8} E_0={\rm e}^{\phi(r)}\sqrt{U(r)V(r) \over {U(r) \over V(r)}\left(\partial_x r\right)^2 + 1 } \end{equation} is a constant. If we assume $r$ has a finite minimum $r_{\rm min}$, where $\partial_x r|_{r=r_{\rm min}}=0$, $E_0$ is given by \begin{equation} \label{rg9b} E_0={\rm e}^{\phi(r_{\rm min})}\sqrt{U(r_{\rm min})V(r_{\rm min})} \ . \end{equation} Introducing a parameter $t$, we parametrize $r$ by \begin{equation} \label{rg9} r=r_{\rm min}\cosh t\ . \end{equation} Then we find \begin{eqnarray} \label{rg10} {dx \over dt}&=& {l \over r_{\rm min}\cosh^2t\left(\cosh^2t + 1 \right)^{1 \over 2}} \nonumber \\ && \times \left\{1 + {kl^2 \over 4r_{\rm min}^2} {\cosh^4 t - \cosh^2 t -1 \over \left(\cosh^2 t + 1 \right)\cosh^2 t } + {\cal O}\left(r_{\rm min}^{-4}\right) \right\}\ . 
\end{eqnarray} Taking $t\rightarrow +\infty$, we find the distance $L$ between ``quark'' and ``anti-quark'' \begin{eqnarray} \label{rg11} L&=& {lA \over r_{\rm min}} + {kl^3 B \over 4r_{\rm min}^3} + {\cal O}\left( r_{\rm min}^{-5}\right) \\ A&\equiv& \int_{-\infty}^\infty {dt \over \cosh^2t \left(\cosh^2t + 1\right)^{1 \over 2}} =1.19814... \nonumber \\ B&\equiv& \int_{-\infty}^\infty dt {\cosh^4t - \cosh^2t -1 \over \cosh^4t \left(\cosh^2t + 1\right)^{3 \over 2}} =-0.162061... \ .\nonumber \end{eqnarray} As one sees, the next-to-leading correction to the distance depends on the curvature of space-time \cite{NO}, i.e. on the temperature. Eq.(\ref{rg11}) can be solved with respect to $r_{\rm min}$ and we find \begin{equation} \label{rg12} r_{\rm min}={lA \over L} + {klBL \over 4A^2} + {\cal O}\left(L^3\right)\ . \end{equation} Using (\ref{rg8}), (\ref{rg9}) and (\ref{rg11}), we find the following expression for the action $S$ \begin{eqnarray} \label{rg13} S&=&{{\cal T} \over 2\pi}E(L) \\ E(L)&=&\int_{-\infty}^\infty dt {\cosh^2 t \over \left(\cosh^2 t + 1\right)^{1 \over 2}}\left\{1 + {kl^2 \over 4 r_{\rm min}^2} {1 \over \cosh^2 t \left(\cosh^2 t + 1\right)} + {\cal O}\left(r_{\rm min}^{-4}\right)\right\}\ . \nonumber \end{eqnarray} Here $E(L)$ expresses the total energy of the ``quark''-``anti-quark'' system. The energy $E(L)$ in (\ref{rg13}), however, contains a divergence due to the self-energies of the infinitely heavy ``quark'' and ``anti-quark''. The sum of their self-energies can be estimated by considering the configuration $x^0=\tau$, $x^1=x^2=x^3=\cdots =x^{d-1}=0$ and $r=r(\sigma)$ (note that $x_1$ vanishes here) and the minimum of $r$ is $r_D$, where branes would lie: $r_D\gg r_{\rm min}$. We divide the region of $r$ into two parts, $\infty>r>r_{\rm min}$ and $r_{\rm min}<r<r_D$. 
Using the parametrization of (\ref{rg9}) for the region $\infty>r>r_{\rm min}$, we find the following expression for the sum of self-energies: \begin{eqnarray} \label{rg14} E_{\rm self}=2r_{\rm min}\int_0^\infty dt\, \sinh t + 2\left(r_{\rm min} - r_D \right) + {\cal O}\left(r_{\rm min}^{-3}\right)\ . \end{eqnarray} Then the finite potential between ``quark'' and ``anti-quark'' is given by \begin{eqnarray} \label{rg15} E_{q\bar q}(L)&\equiv&E(L) - E_{\rm self} \nonumber \\ &=&r_{\rm min}\left(C + {kl^2D \over 4r_{\rm min}^2} + {\cal O}\left(r_{\rm min}^{-4}\right)\right) \nonumber \\ &=&{lAC \over L} + {kl \over 4}\left({BC \over A^2} + {D \over A} \right)L + {\cal O}\left(L^3\right) \\ &=&{lAC \over L} - {l^3 \left(2\pi T\right)^2 \over 2} \left({BC \over A^2} + {D \over A} \right)L + {\cal O}\left(L^3\right) \nonumber \\ C&=&2\int_0^\infty dt\,\left\{ {\cosh^2 t \over \left(\cosh^2 t + 1\right)^{1 \over 2}} -\sinh t\right\} -2 =-1.19814... \nonumber \\ D&=& 2\int_0^\infty {dt \over \left(\cosh^2 t + 1\right)^{3 \over 2}} =0.711959 \ .\nonumber \end{eqnarray} Here we have neglected the $r_{\rm min}$- or $L$-independent term. We should note that the next-to-leading term is linear in $L$, which might be relevant for confinement. For confinement, it is necessary that the quark-antiquark potential behaves as \begin{equation} \label{cnfpt} E_{q\bar q}\sim a L \end{equation} with some positive constant $a$ for large $L$. At high temperature, one usually expects a phase transition to the deconfinement phase, where the potential behaves as a Coulomb force, \begin{equation} \label{dcnfpt} E_{q\bar q} \sim {a' \over L}\ . \end{equation} Since ${BC \over A^2} + {D \over A}>0$ and $k<0$, the contribution of the next-to-leading term in the potential is repulsive. The leading term is also repulsive but shows Coulomb-like behavior. The next-to-leading term tells us that the repulsive force is of longer range than the Coulomb force. 
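The numerical constants $A$, $B$, $C$, $D$ quoted in (\ref{rg11}) and (\ref{rg15}) are easy to reproduce; the following numpy sketch (ours) evaluates the integrals with a composite Simpson rule:

```python
# Our numerical sketch (numpy only) reproducing the constants in (rg11) and
# (rg15).  For C the naive integrand cosh^2 t/sqrt(cosh^2 t + 1) - sinh t
# suffers catastrophic cancellation at large t, so it is rewritten in the
# algebraically equivalent form 1/(sqrt(cosh^2 t + 1)
# *(cosh^2 t + sinh t sqrt(cosh^2 t + 1))).
import numpy as np

def simpson(f, a, b, n=20001):
    """Composite Simpson rule with n (odd) equally spaced sample points."""
    t = np.linspace(a, b, n)
    y = f(t)
    h = (b - a)/(n - 1)
    return h/3*(y[0] + y[-1] + 4*y[1:-1:2].sum() + 2*y[2:-2:2].sum())

ch, sh, sq = np.cosh, np.sinh, np.sqrt
tmax = 30.0     # the integrands decay like exp(-3t); the tail is negligible

A = 2*simpson(lambda t: 1/(ch(t)**2*sq(ch(t)**2 + 1)), 0, tmax)
B = 2*simpson(lambda t: (ch(t)**4 - ch(t)**2 - 1)
                        /(ch(t)**4*(ch(t)**2 + 1)**1.5), 0, tmax)
C = 2*simpson(lambda t: 1/(sq(ch(t)**2 + 1)
                           *(ch(t)**2 + sh(t)*sq(ch(t)**2 + 1))), 0, tmax) - 2
D = 2*simpson(lambda t: (ch(t)**2 + 1)**(-1.5), 0, tmax)
```

The resulting values agree with the ones quoted above; note in particular that the numerics give $C\simeq -A$.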
The expression (\ref{rg15}) is correct even at high temperature if $L$ is small or $r_{\rm min}$ is large. If $r_{\rm min}$ is small and the orbit of the string approaches the horizon and/or enters inside the horizon, the expression would not be valid. Since the horizon is given by (\ref{xx}), the expression (\ref{rg15}) would be valid if \begin{equation} \label{vali} r_{\rm min}\gg r_0=l\sqrt{-{k \over 2}} \end{equation} or, using (\ref{xxiv}) and (\ref{rg12}), \begin{equation} \label{valii} L\ll A\sqrt{-{2 \over k}}={A \over 2\pi l T}\ . \end{equation} The above condition (\ref{valii}) makes it difficult to evaluate the potential quantitatively by analytic calculation when $L$ is large, so a numerical calculation would be necessary. In order to investigate the qualitative behavior of the potential when $L$ is large, we consider the background where the dilaton is constant, $\phi=\phi_0$, which reveals the effect of the horizon or finite temperature. As $c=0$ when the dilaton is constant, we can use the solution in (\ref{xviii}). Then, by a calculation similar to (\ref{rg15}) but without assuming that $L$ is small or $r_{\rm min}$ is large, we obtain the following expression for the quark-antiquark potential: \begin{eqnarray} \label{PotlL} E_{q\bar q}&=&r_{\rm min} \int_{-\infty}^\infty dt \sinh t \left\{ \left( 1 - {1 \over \cosh^2 t}\cdot {1-{r_0^2 \over r_{\rm min}^2} \over \cosh^2 t - {r_0^2 \over r_{\rm min}^2}} \right)^{-{1 \over 2}} -1 \right\} \nonumber \\ &&+ 2\left(r_D - r_{\rm min}\right)\ . \end{eqnarray} The constant $-1$ in $\{\ \}$ and the last term correspond to the subtraction of the self-energy. The integration in (\ref{PotlL}) converges, and the integrand is a monotonically decreasing function of ${1 \over r_{\rm min}}$ if $r_{\rm min}$ is larger than the radius of the horizon $r_0$: $r_{\rm min}>r_0$, and it vanishes in the limit $r_{\rm min}\rightarrow r_0$.
Therefore if $r_{\rm min}$ decreases and approaches $r_0$ when $L$ is large, which seems very natural, the potential $E_{q\bar q}$ approaches a constant, $E_{q\bar q}\rightarrow 2\left(r_D - r_0\right)$, and does not behave as a linear function of $L$. This tells us that the quark is not confined. This effect would correspond to the deconfining phase of QCD at finite temperature. We can also evaluate the potential between monopole and anti-monopole using the Nambu-Goto action for the $D$-string instead of (\ref{rg5}) (cf.\ ref.\cite{13}): \begin{equation} \label{rg5m} S={1 \over 2\pi}\int d\tau d\sigma {\rm e}^{-2\phi} \sqrt{{\rm det\,}\left(g^s_{\mu\nu} \partial_\alpha x^\mu \partial_\beta x^\nu\right)}\ . \end{equation} For the static configuration $x^0=\tau$, $x^1\equiv x=\sigma$, $x^2=x^3=\cdots=x^{d-1}=0$ and $y=y(x)$, we find, instead of (\ref{rg7}), \begin{equation} \label{rg7m} S={{\cal T} \over 2\pi}\int dx {\rm e}^{-\phi(r)}\sqrt{U(r)V(r)\left( {U(r) \over V(r)}\left(\partial_x r\right)^2 + 1 \right)}\ . \end{equation} Since $\phi$ is proportional to $c$, and $V$ and $U$ contain $c$ only in the form of its square $c^2$, the potential between monopole and anti-monopole is given by changing $c$ to $-c$ in the potential between quark and anti-quark. Since the expression (\ref{rg15}) does not contain $c$ in the given order, the potential $E_{m\bar m}(L)$ for monopole and anti-monopole is identical with that of quark and anti-quark in this order: \begin{equation} \label{rg8m} E_{m\bar m}(L)=E_{q\bar q}(L)\ . \end{equation} Hence, we showed that the non-constant dilaton deformation of the IIB SG vacuum changes the structure of the potential, and confinement becomes unrealistic.
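The two claims made below (\ref{PotlL}) -- that the bracketed factor of the integrand decreases monotonically in ${1 \over r_{\rm min}}$ and vanishes as $r_{\rm min}\rightarrow r_0$ -- can be checked numerically. The following sketch (an illustration added here, not part of the original text) writes $x=r_0/r_{\rm min}\in[0,1)$:

```python
# Check of the claims below (PotlL): for fixed t the bracketed factor
#   f(x) = (1 - (1/cosh^2 t) * (1 - x^2)/(cosh^2 t - x^2))^(-1/2) - 1,
# with x = r_0/r_min, decreases as x grows (i.e. as 1/r_min grows)
# and vanishes in the limit r_min -> r_0 (x -> 1).
import numpy as np

def f(x, t):
    c2 = np.cosh(t)**2
    return (1.0 - (1.0 / c2) * (1.0 - x**2) / (c2 - x**2))**-0.5 - 1.0

for t in (0.5, 1.0, 2.0):
    vals = [f(x, t) for x in np.linspace(0.0, 0.999, 50)]
    assert all(a >= b for a, b in zip(vals, vals[1:]))  # monotone decreasing
    assert f(0.999, t) < 1e-2 * f(0.0, t)               # vanishes as x -> 1
print("bracketed factor decreases in r_0/r_min and vanishes as r_min -> r_0")
```

This supports the conclusion that the potential saturates at $2(r_D-r_0)$ instead of growing linearly.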
\section{Running coupling and quark-antiquark potential at finite temperature: non-zero mass BH case} In this section we consider another interesting case, $k=0$ and $\mu\neq 0$, which corresponds to the throat limit of the D3-brane \cite{GKT,GKP}\footnote{ In ref.\cite{GKT} $\alpha'$-corrections to the leading term ($T^4$) of the free energy for the above AdS BH have been derived. The temperature was actually fixed. In the case under discussion we consider the dilatonic deformation of such an AdS BH using the tree level bosonic sector of IIB SG. Thus, we define the corrections (next-to-leading term in the temperature) to the solution (and free energy).} and $V_0$ has the following form: \begin{equation} \label{ki} V_0={r^4 \over l^2} - \mu \ . \end{equation} ${\rm e}^{2\rho}$ and ${\rm e}^{2\sigma}$ have the following form: \begin{equation} \label{kibb} {\rm e}^{2\rho}={\rm e}^{-2\sigma}={1 \over r^2}\left( {r^4 \over l^2} - \mu\right)\ . \end{equation} Therefore when $c=0$, the horizon is given by \cite{GKT} \begin{equation} \label{kib} r=\mu^{1 \over 4}l^{1 \over 2} \end{equation} and the black hole temperature is \begin{equation} \label{kii} T={\mu^{1 \over 4} \over \pi l^{3 \over 2}}\ . \end{equation} In a way similar to the $k<0$, $\mu=0$ case, we obtain \begin{eqnarray} \label{xxxii} \phi&=&\phi_0 + {c \over 4\mu}\ln\left(1 - {1 \over q^2}\right) \nonumber \\ u&=&-{12 \over \mu^2}\left\{ {1 \over q^2 - 1} + \ln\left(1 - {1 \over q^2}\right) + c_1'\right\} \nonumber \\ v&=&{1 \over 12\mu}\left\{ -q^2 \ln\left(1 - {1 \over q^2}\right) - 1 + {c_1' q^2 \over 2} + c_2'\right\}\ . \end{eqnarray} Here \begin{equation} \label{xxxiii} q\equiv {r^2 \over l\sqrt\mu} \ \end{equation} and $c_1'$ and $c_2'$ are constants of integration, which should vanish if we require $u$, $v\rightarrow 0$ when $r\rightarrow \infty$. The approximation valid when $r$ is far from the horizon is again employed.
Using (\ref{kii}) and (\ref{xxxii}), we find the behaviour of the string coupling (\ref{ci}) when $r$ is large and $c$ is small ($\phi_0$ is absorbed into the redefinition of $g_s$): \begin{equation} \label{civ} g=g_s\left\{1 + {cl^2 \over 4}\left(-{1 \over r^4} - {\left(\pi T\right)^4 l^8 \over r^8} + {\cal O} \left(r^{-12}\right)\right) + {\cal O}\left(c^2\right) \right\}\ . \end{equation} The behavior of the second term is characteristic for the $k=0$ case, since the second term behaves as ${\cal O}\left(r^{-6}\right)$ for $k\neq 0$.\footnote{ The case that the boundary is the Einstein manifold with $k\neq 0$ has been discussed in \cite{NO}.} Eq.(\ref{civ}) gives the following beta-function \begin{equation} \label{cv} \beta(g)=r{dg \over dr}=-4 \left(g - g_s\right) + {8 (\pi T )^4 l^6 \over g_s c} \left(g - g_s\right)^2 + \cdots \ . \end{equation} The first term is the universal one \cite{4,7}, but the behavior of the second, temperature-dependent term is characteristic for $k=0$. We now consider the high temperature and $r\sim Tl^2$ case. For this purpose, we write the coupling as follows: \begin{equation} \label{hTi} g=g_s\left(1 - {l^2\mu \over r^4}\right)^{c \over 4\mu}\ . \end{equation} Here we used (\ref{xxxii}). Eq.(\ref{hTi}) can be solved with respect to ${1 \over r^4}$: \begin{equation} \label{hTii} {l^2\mu \over r^4}=1-\left( {g \over g_s} \right)^{4\mu \over c}\ . \end{equation} On the other hand, Eq.(\ref{hTi}) gives \begin{equation} \label{hTiii} r{dg \over dr}={g_s c l^2 \over r^4}\left(1 - {l^2\mu \over r^4} \right)^{{c \over 4\mu} -1}\ . \end{equation} Substituting (\ref{hTii}) into (\ref{hTiii}), we obtain the following expression: \begin{equation} \label{hTiv} \beta(g)=r{dg \over dr} ={gc \over \mu}\left( {g \over g_s} \right)^{-{4\mu \over c}} \left( 1 - \left( {g \over g_s} \right)^{4\mu \over c}\right)\ .
\end{equation} Using (\ref{kii}), i.e., $\mu=l^6\left(\pi T\right)^4$, we find the following expression for the temperature dependent beta-function: \begin{equation} \label{hTv} \beta(g)={gc \over l^6\left(\pi T\right)^4} \left( {g \over g_s} \right)^{-{4 l^6\left(\pi T\right)^4 \over c}} \left( 1 - \left( {g \over g_s} \right)^{4 l^6\left(\pi T\right)^4 \over c}\right)\ . \end{equation} Since $T$ always appears in the combination ${c \over T^4}$, high temperature is consistent with small $c$. As one can see, the $T$-dependence is quite complicated. It is qualitatively different from the case of low temperature. In order to investigate the corrections to the position of the horizon and the temperature, we use the ${1 \over r}$ expansion as in the previous section, assuming the black hole is large. Then we find \begin{eqnarray} \label{rii1} {\rm e}^\phi&=&{\rm e}^{\phi_0}\left(1 - {cl^2 \over 4 r^4} + {\cal O}\left(r^{-8}\right)\right) \nonumber \\ U&=&1 - {c^2 l^4 \over 48 r^8} + {\cal O}\left(r^{-12}\right) \nonumber \\ V&=&{r^4 \over l^2} - \mu + {c^2 l^2 \over 48 r^4} + {\cal O}\left(r^{-8}\right) \ . \end{eqnarray} Then the corrections to the position of the horizon and the temperature are given by \begin{eqnarray} \label{rii2} r&=&r_h\equiv l^{1 \over 2}\mu^{1 \over 4}\left(1 - {c^2 \over 192 \mu^2}\right) \nonumber \\ T&=&{\mu^{1 \over 4} \over \pi l^{3 \over 2}}\left(1 - {5c^2 \over 192\mu^2}\right)\ . \end{eqnarray} The discussion of the potential between quark and anti-quark for the above case may be done similarly to the situation when $k<0$ and $\mu=0$.
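The chain from (\ref{hTi}) to (\ref{hTiv}) can be verified numerically by comparing a finite-difference derivative of the coupling with the closed-form beta-function; the parameter values in the sketch below are arbitrary sample points (this check is an illustration, not part of the original text):

```python
# Numerical check that the beta-function (hTiv) follows from the
# coupling (hTi): r dg/dr is compared with
#   (g c / mu) (g/g_s)^(-4 mu/c) (1 - (g/g_s)^(4 mu/c)).
# All parameter values are arbitrary sample points.
import math

g_s, l, mu, c = 2.0, 1.3, 0.7, 0.4

def g(r):
    # Eq. (hTi)
    return g_s * (1.0 - l**2 * mu / r**4)**(c / (4.0 * mu))

for r in (1.5, 2.0, 3.0):
    h = 1e-6
    beta_num = r * (g(r + h) - g(r - h)) / (2.0 * h)      # r dg/dr
    gr = g(r)
    ratio = (gr / g_s)**(4.0 * mu / c)                    # = 1 - l^2 mu / r^4
    beta_formula = (gr * c / mu) / ratio * (1.0 - ratio)  # Eq. (hTiv)
    assert abs(beta_num - beta_formula) < 1e-5 * abs(beta_formula)
print("beta-function (hTiv) reproduced from the coupling (hTi)")
```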
Instead of Eqs.(\ref{rg10}), (\ref{rg11}), (\ref{rg12}), we obtain \begin{eqnarray} \label{Org10} {dx \over dt}&=& {l \over r_{\rm min}\cosh^2t\left(\cosh^2t + 1 \right)^{1 \over 2}}\nonumber \\ && \times \left\{1 - {\mu l^2 \over 2r_{\rm min}^4} {\cosh^4 t -1 \over \cosh^4 t} - {c l^2 \over 4r_{\rm min}^4} {\cosh^4 t + 1 \over \cosh^4 t} + {\cal O}\left(r_{\rm min}^{-8}\right) \right\}\\ \label{Org11} L&=& {lA \over r_{\rm min}} - {\mu l^3 B_1 \over 2r_{\rm min}^5} - {c l^3 B_2 \over 4r_{\rm min}^5} + {\cal O}\left(r_{\rm min}^{-9}\right) \\ B_1&\equiv& \int_{-\infty}^\infty dt {\sinh^2 t \left(\cosh^2t + 1\right)^{1 \over 2} \over \cosh^6t} = 0.479256... \nonumber \\ B_2&\equiv& \int_{-\infty}^\infty dt {\cosh^4t + 1 \over \cosh^6t\left(\cosh^2t + 1\right)^{1 \over 2}} = 1.91702... \\ \label{Org12} r_{\rm min}&=&{lA \over L} - {\mu B_1 L^3 \over 2l A^4} - {c B_2 L^3 \over 4l A^4} + {\cal O}\left(L^7\right)\ . \end{eqnarray} Then we find that the finite potential between ``quark'' and ``anti-quark'' is given by \begin{eqnarray} \label{Org15} E_{q\bar q}(L) &=&r_{\rm min}\left\{C + {l^2 A \over r_{\rm min}^4}\left( {\mu \over 2} - {5c \over 12}\right) + {\cal O}\left(r_{\rm min}^{-8}\right)\right\} \\ &=&{lAC \over L} + {L^3 \over lA^3} \left\{\mu\left({A \over 2} - {C B_1 \over 2} \right) + c \left(-{5A \over 12} - {C B_2 \over 8} \right)\right\} + {\cal O}\left(L^7\right) \nonumber \\ &=&{lAC \over L} + {L^3 \over lA^3} \left\{l^6\left(\pi T\right)^4 \left({A \over 2} - {C B_1 \over 2} \right) + c \left(-{5A \over 12} - {C B_2 \over 8} \right)\right\} \nonumber \\ && + {\cal O}\left(L^7\right) \ .\nonumber \end{eqnarray} Here we choose $\phi_0=0$ and again neglect $r_{\rm min}$- and $L$-independent terms. The behavior of the potential is qualitatively identical with that in \cite{BISY}, except that the $L^3$-term in the potential (the next-to-leading term) contains the contribution from the dilaton. Since \begin{equation} \label{TTi} {A \over 2} - {CB_1 \over 2} = 0.886178...
\ ,\quad -{5A \over 12} - {CB_2 \over 2} = 0.649204... \ , \end{equation} the $L^3$ potential becomes attractive if $l^6\left(\pi T\right)^4 > \gamma c$ (high temperature or small dilaton) and repulsive if $l^6\left(\pi T\right)^4 < \gamma c$ (low temperature or large dilaton). Here \begin{equation} \label{TTii} \gamma \equiv {{5A \over 12} + {CB_2 \over 2} \over {A \over 2} - {CB_1 \over 2} }= -0.732589... \ . \end{equation} Hence, we demonstrated the possibility of confinement at finite temperature. The potential (\ref{Org15}) is valid if $r_{\rm min}$ is much larger than the radius of the horizon: \begin{equation} \label{TTiii} r_{\rm min}\gg \mu^{1 \over 4}l^{1 \over 2} \end{equation} or \begin{equation} \label{TTiv} L\ll {Al^{1 \over 2} \over \mu^{1 \over 4}}={A \over \pi l T}\ . \end{equation} The potential between monopole and anti-monopole is given by changing $c$ into $-c$ in (\ref{Org15}): \begin{eqnarray} \label{Org15m} E_{m\bar m}(L) &=&{lAC \over L} + {L^3 \over lA^3} \left\{l^6\left(\pi T\right)^4 \left({A \over 2} - {C B_1 \over 2} \right) - c \left(-{5A \over 12} - {C B_2 \over 8} \right)\right\} \nonumber \\ && + {\cal O}\left(L^7\right) \ .\nonumber \end{eqnarray} Therefore the $L^3$ potential becomes attractive if $l^6\left(\pi T\right)^4 > -\gamma c$ and repulsive if $l^6\left(\pi T\right)^4 < -\gamma c$. In other words, the behavior of the monopole-antimonopole potential is reversed relative to the quark-antiquark one. We now consider the more general case where neither $k$ nor $\mu$ is required to vanish. If we define $r_\pm^2$ by \begin{equation} \label{gi} r_\pm^2\equiv {kl^2 \over 4}\left( -1 \pm \sqrt{1 + {16\mu \over k^2 l^2}}\right)\ , \end{equation} $V_0$ has the following form: \begin{equation} \label{gii} V_0={1 \over l^2}\left( r^2 - r_+^2 \right) \left( r^2 - r_-^2 \right)\ . \end{equation} Since $\mu$ corresponds to the black hole mass, $\mu$ should not be negative. If $\mu>0$, $r_+^2$ is positive and $r_-^2$ is negative when $k>0$, while $r_-^2$ is positive and $r_+^2$ is negative when $k<0$.
Then $r=r_+$ corresponds to the horizon for $k>0$ in the $c=0$ case, and $r=r_-$ corresponds to the horizon for $k<0$. Therefore there is only one horizon when $\mu>0$. On the other hand, when $\mu<0$, although it might look unphysical, there are two horizons corresponding to $r=r_\pm$ when $k<0$. When $c=0$, the temperature corresponding to the horizon at $r=r_\pm$ is given by \begin{equation} \label{giib} T=\pm {r_+^2 - r_-^2 \over 2\pi r_\pm l^2}\ . \end{equation} When $c$ is small but does not vanish, the leading behavior of $\phi$ is given by \begin{eqnarray} \label{giii} \phi&=& \phi_0 + {cl^2 \over 2}\left\{ - {1 \over r_+^2 r_-^2}\ln r^2 \right. \nonumber \\ && \left. + {1 \over r_+^2 \left(r_+^2 - r_-^2 \right)} \ln \left(r^2 - r_+^2 \right) - {1 \over r_-^2 \left(r_+^2 - r_-^2 \right)} \ln \left(r^2 - r_-^2 \right)\right\}\ . \end{eqnarray} Then the behavior of the string coupling (\ref{ci}) when $r$ is large and $c$ is small is given by ($\phi_0$ is absorbed into the redefinition of $g_s$): \begin{equation} \label{cvi} g=g_s\left\{1 + {cl^2 \over 2}\left(-{1 \over 2r^4} - {r_+^2 + r_-^2 \over 3r^6} + {\cal O}\left(r^{-8}\right)\right) + {\cal O}\left(c^2\right)\right\}\ , \end{equation} and the beta-function is given by \begin{equation} \label{cvii} \beta(g)=r{dg \over dr}=-4\left(g-g_s\right) + {8 \over 3}\left(r_+^2 + r_-^2\right){cg_s \over l} \left({g-g_s \over cg_s}\right)^{3 \over 2}\ . \end{equation} Note that the second term vanishes when $k=0$, which is the reason why the behavior of the next-to-leading term for $k=0$ is different from that for $k\neq 0$. Eq.(\ref{giib}) gives the temperature dependence of the coupling (\ref{cvi}) and the beta-function (\ref{cvii}). The next-to-leading term again shows power-like behavior in $g$, as happened already in the IIB SG solutions of refs.\cite{3,4,6,7,8,9,NO} (no temperature) and in GUTs with large internal dimensions \cite{12}. Note that only two of the parameters $k$, $\mu$ and $T$ are independent.
If we consider the high temperature regime by fixing $k$, $\mu$ becomes large and the behavior approaches that of the $k=0$ case in (\ref{hTiv}). On the other hand, if we consider the high temperature regime by fixing $\mu$, $k$ is positive and becomes large, and the behavior approaches that of the $k<0$, $\mu=0$ case in (\ref{gTii}). One can also find the quark-antiquark potential, which looks very complicated, so we do not write it explicitly. The corrections to $U$ and $V$ coming from the non-trivial dilaton are given by \begin{eqnarray} \label{giv} u&=&c_1'' + {l^4 \over \left(r_+^2 - r_-^2 \right)} \left\{ \left({1 \over r_+^2} - {1 \over r_-^2} \right)^2 \ln r^2 \right. \nonumber \\ && - {1 \over r_+^2}{1 \over r^2 - r_+^2} - {3r_+^2 - r_-^2 \over r_+^4 \left(r_+^2 - r_-^2 \right)} \ln \left( r^2 - r_+^2 \right) \nonumber \\ && \left. - {1 \over r_-^2}{1 \over r^2 - r_-^2} + {3r_-^2 - r_+^2 \over r_-^4 \left(r_+^2 - r_-^2 \right)} \ln \left( r^2 - r_-^2 \right) \right\} \\ \label{gv} v&=&{1 \over l^2}\left[ c_2'' + c_1'' \left\{r^4 - \left(r_+^2 + r_-^2 \right)r^2\right\} + {l^4 \over \left(r_+^2 - r_-^2 \right)^2 }\left\{ - \left({1 \over r_+^2} + {1 \over r_-^2} \right) r^2 \right.\right. \nonumber \\ && + \left( -{3r_+^2 - r_-^2 \over r_+^4\left(r_+^2 - r_-^2 \right)}\left(r^2 - r_+^2 \right)^2 \right. \nonumber \\ && \left. - {3r_+^2 - r_-^2 \over r_+^4}\left(r^2 - r_+^2 \right) - {r_+^2 - r_-^2 \over r_+^2}\right) \ln \left(1 - {r_+^2 \over r^2} \right) \nonumber \\ && + \left( {3r_-^2 - r_+^2 \over r_-^4\left(r_+^2 - r_-^2 \right)}\left(r^2 - r_-^2 \right)^2 \right. \nonumber \\ && \left.\left.\left. - {3r_-^2 - r_+^2 \over r_-^4}\left(r^2 - r_-^2 \right) + {r_+^2 - r_-^2 \over r_-^2}\right) \ln \left(1 - {r_-^2 \over r^2}\right) \right\} \right] \ . \end{eqnarray} Here $c_1''$ and $c_2''$ are constants of integration, which should vanish if we require $u$, $v\rightarrow 0$ when $r\rightarrow \infty$.
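As a cross-check (an addition, not in the original text), the factorization (\ref{gii}) with the roots (\ref{gi}) can be verified symbolically; together they imply $V_0={r^4 \over l^2}+{kr^2 \over 2}-\mu$, which reduces to (\ref{ki}) for $k=0$:

```python
# Symbolic check of the factorization (gii): with r_+^2 and r_-^2 as in
# (gi), one has (r^2 - r_+^2)(r^2 - r_-^2)/l^2 = r^4/l^2 + k r^2/2 - mu,
# which reduces to V_0 in (ki) when k = 0.
import sympy as sp

r, l, mu = sp.symbols('r l mu', positive=True)
k = sp.symbols('k', real=True, nonzero=True)
root = sp.sqrt(1 + 16 * mu / (k**2 * l**2))
rp2 = k * l**2 / 4 * (-1 + root)   # r_+^2
rm2 = k * l**2 / 4 * (-1 - root)   # r_-^2

V0 = sp.expand((r**2 - rp2) * (r**2 - rm2) / l**2)
assert V0 == sp.expand(r**4 / l**2 + k * r**2 / 2 - mu)
print("factorization (gii) verified:", V0)
```

In particular $r_+^2+r_-^2=-{kl^2 \over 2}$ and $r_+^2r_-^2=-\mu l^2$, which is why the second term of (\ref{cvii}) vanishes when $k=0$.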
\section{Thermodynamics of approximate AdS backgrounds of IIB supergravity} In the present section we will be interested in thermodynamical quantities like the free energy. After Wick-rotating the time variable by $t\rightarrow i\tau$, the free energy $F$ can be obtained from the action $S$ in (\ref{i}), where the classical solution is substituted: \begin{equation} \label{F1} F={1 \over T}S\ . \end{equation} Multiplying the equation of motion in (\ref{iit}) by $G^{\mu\nu}$, we find \begin{equation} \label{F2} R - {1 \over 2} G^{\mu\nu}\partial_\mu \phi \partial_\nu \phi ={5 \over 3}\Lambda\ . \end{equation} Here we only consider the case of $d=4$ and $\alpha={1 \over 2}$. Substituting (\ref{F2}) into (\ref{i}), we find, after the Wick rotation, \begin{equation} \label{F3} S={1 \over 2\pi G l^2}\int d^5 x \sqrt{G}\ . \end{equation} Here we used (\ref{xx}). From the expressions of the metric $G_{\mu\nu}$ in (\ref{ii}) and (\ref{x}), Eq.(\ref{F3}) is rewritten as follows: \begin{equation} \label{F4} S={1 \over 2\pi G l^2}{V_3 \over T}\int_{r_h}^\infty dr r^3 U\ . \end{equation} Here $V_3$ is the volume of the 3d Einstein manifold, $r_h$ is the radius of the horizon, and we assume $\tau$ has a period of ${1 \over T}$. Since $U$ has a singularity at $r=r_h$ in the expansion with respect to $c$, we use the ${1 \over r}$ expansion. Furthermore, the expression for $S$ contains a divergence coming from large $r$. In order to subtract the divergence, we regularize $S$ in (\ref{F4}) by cutting off the integral at a large radius $r_{\rm max}$. After that we subtract the divergent part. In the case of $k<0$ and $\mu=0$, we subtract it by using the extremal solution with $c=0$ ($U=1$): \begin{equation} \label{F4b} S_{\rm reg}={1 \over 2\pi G l^2}{V_3 \over T}\left( \int_{r_h}^{r_{\rm max}} dr r^3 U - \sqrt{V(r=r_{\rm max}) \over V^{\rm ex}(r=r_{\rm max}) } \int_{r^{\rm ex}_h}^{r_{\rm max}} dr r^3 \right) \ .
\end{equation} Here \begin{equation} \label{F4c} V^{\rm ex}\equiv {1 \over l^2}\left(r^2 - {r^{\rm ex}_h}^2 \right)^2\ ,\quad r^{\rm ex}_h={l\sqrt{-k} \over 2}\ , \end{equation} which corresponds to the extremal solution (the solution has negative mass parameter $\mu=-{k^2 l^2 \over 4}$). The factor $\sqrt{V(r=r_{\rm max}) \over V^{\rm ex}(r=r_{\rm max}) }$ is chosen so that the proper length of the circle corresponding to the period ${1 \over T}$ in the Euclidean time at $r=r_{\rm max}$ coincides in the two solutions: the $\mu=0$, $k<0$ one and the extremal one in (\ref{F4c}). Then we obtain \begin{equation} \label{F5} F={V_3 \over 2\pi G l^2}\left(-{5k^2 l^4 \over 128} + {c^2 \over 48 k^2}\right)\ . \end{equation} With the help of (\ref{r5}), we find the following expression: \begin{equation} \label{F6} F=-{V_3 \over 2\pi G l^2}\left( { 5l^8\left(2\pi T\right)^4 \over 32} + {c^2 \over 768 l^4 \left(2\pi T\right)^4}\right)\ . \end{equation} In order to get the entropy ${\cal S}$, we need to know the $T$ dependence of $V_3$, although $V_3$ is infinite. Since $k$ is proportional to the curvature, $V_3$ would be proportional to $k^{-{3 \over 2}}$. Then we find \begin{eqnarray} \label{F7} {dV_3 \over dT}&=&{1 \over k}{dk \over dT}k{dV_3 \over dk} \nonumber \\ &=&-{3V_3 \over 2T}\left(1 - {c^2 \over 6l^{12} \left(2\pi T\right)^8} + \cdots \right)\ . \end{eqnarray} Therefore the entropy ${\cal S}$ and mass (energy) $E$ are given by \begin{eqnarray} \label{F8} {\cal S}&=&-{dF \over dT}={V_3 \over 2\pi G l^2 T }\left( { 25l^8\left(2\pi T\right)^4 \over 64} + {49c^2 \over 1536 l^4 \left(2\pi T\right)^4}\right) \nonumber \\ E&=&F+T{\cal S}={V_3 \over 2\pi G l^2}\left( { 15 l^8\left(2\pi T\right)^4 \over 64} + {47 c^2 \over 1536 l^4 \left(2\pi T\right)^4}\right)\ .
\end{eqnarray} In terms of the string theory correspondence\cite{GKT}, the parameters $G$ and $l$ are given by \begin{eqnarray} \label{spara} l^4&=&2g_{YM}^2 N{\alpha'}^2 \nonumber \\ Gl&=&{\pi g_{YM}^2{\alpha'}^2 \over N}\ . \end{eqnarray} Here the Yang-Mills coupling $g_{YM}$ is given by the string coupling $g_s$: $g_{YM}^2=2\pi g_s$, and $N$ is the number of coincident D3-branes. As $V_3$ is now dimensionless, we multiply $V_3$ by $l^3$: \begin{equation} \label{F10} \tilde V_3\equiv l^3 V_3\ . \end{equation} Then Eqs.(\ref{F6}) and (\ref{F8}) can be rewritten as follows: \begin{eqnarray} \label{F10b} F&=&-{\tilde V_3 \over 2\pi^2 }\left( { 5N^2 \left(2\pi T\right)^4 \over 16} + {c^2 \over 3072 g_{YM}^6 N{\alpha'}^6 \left(2\pi T\right)^4}\right)\nonumber \\ {\cal S}&=&{\tilde V_3 \over 2\pi^2 T }\left( { 25N^2\left(2\pi T\right)^4 \over 32} + {49 c^2 \over 6144 g_{YM}^6 N{\alpha'}^6 \left(2\pi T\right)^4}\right) \nonumber \\ E&=&{\tilde V_3 \over 2\pi^2}\left( { 15 N^2\left(2\pi T\right)^4 \over 32} + {47 c^2 \over 6144 g_{YM}^6 N{\alpha'}^6 \left(2\pi T\right)^4}\right)\ . \end{eqnarray} For the $k=0$, $\mu>0$ case, we can obtain the thermodynamical quantities in a similar way, using Eqs. (\ref{rii1}) and (\ref{rii2}) in the ${1 \over r}$ expansion. We regularize $S$ in (\ref{F4}) by subtracting the solution with $\mu=0$ and $c=0$ ($U=1$): \begin{equation} \label{F4bb} S_{\rm reg}={1 \over 2\pi G l^2}{V_3 \over T}\left( \int_{r_h}^{r_{\rm max}} dr r^3 U - \sqrt{V(r=r_{\rm max}) \over V(r=r_{\rm max}, \mu=0) } \int_0^{r_{\rm max}} dr r^3 \right) \ . \end{equation} We can assume here that $V_3$ does not depend on $T$ since $k$ is fixed to vanish.
Then we obtain for this case \begin{eqnarray} \label{F9} F&=&-{V_3 \over 4\pi G l^2}\left( { l^8\left(\pi T\right)^4 \over 4} + {5c^2 \over 192 l^4 \left(\pi T\right)^4}\right) \nonumber \\ {\cal S}&=&{V_3 \over 4\pi G l^2 T}\left( l^8\left(\pi T\right)^4 - {5c^2 \over 48 l^4 \left(\pi T\right)^4}\right)\nonumber \\ E&=&{V_3 \over 4\pi G l^2}\left( { 3l^8\left(\pi T\right)^4 \over 4} - {25c^2 \over 192 l^4 \left(\pi T\right)^4}\right)\ . \end{eqnarray} Note that the leading term in ${\cal S}$ is the volume of the 3d manifold at the horizon (${V_3 r_h^3 \over l^3}$) divided by $4G$. Then by using (\ref{spara}) and (\ref{F10}), we find \begin{eqnarray} \label{F11} F&=&-{\tilde V_3 \over 4\pi^2}\left( { N^2 \left(\pi T\right)^4 \over 2} + {5c^2 \over 768 g_{YM}^6 N{\alpha'}^6 \left(\pi T\right)^4}\right) \nonumber \\ {\cal S}&=&{\tilde V_3 \over 4\pi^2 T}\left( 2 N^2 \left(\pi T\right)^4 - {5c^2 \over 192 g_{YM}^6 N{\alpha'}^6 \left(\pi T\right)^4}\right)\nonumber \\ E&=&{\tilde V_3 \over 4\pi^2 }\left( { 3 N^2 \left(\pi T\right)^4 \over 2} - {25c^2 \over 768 g_{YM}^6 N{\alpha'}^6 \left(\pi T\right)^4}\right)\ . \end{eqnarray} The leading behaviours of $F$ and ${\cal S}$ are consistent with \cite{GKT}. As we used the ${1 \over r}$ expansion, the second terms in (\ref{F11}) become dominant when the radius of the horizon $r_h$ is large and the parameter $c$ specifying the non-trivial dilaton is not very small. Notice that in other temperature regimes (or using other schemes for approximate solutions of the gravitational equations) one can also get qualitatively different thermodynamical next-to-leading terms (in the temperature). One has to remark that the leading term in the above free energy describes the strong coupling regime free energy for ${\cal N}=4$ super YM theory with the usual mismatch factor $3/4$ when compared with the perturbative free energy (for a detailed discussion of this case, see \cite{GKT}).
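The translation from (\ref{F9}) to (\ref{F11}) via the dictionary (\ref{spara}) can be checked numerically; in the sketch below (an illustration, not part of the original text) the parameter values are arbitrary and $\alpha'$ is written as `ap`:

```python
# Numerical consistency check of (F9) -> (F11) under the dictionary
# (spara): l^4 = 2 g_{YM}^2 N alpha'^2, G l = pi g_{YM}^2 alpha'^2 / N,
# together with \tilde V_3 = l^3 V_3.  Parameter values are arbitrary.
import math

gym2, N, ap, T, c, V3 = 1.3, 5.0, 0.7, 0.9, 0.3, 2.0  # g_{YM}^2, N, alpha', T, c, V_3
l = (2.0 * gym2 * N * ap**2)**0.25
G = math.pi * gym2 * ap**2 / (N * l)
V3t = l**3 * V3                                       # \tilde V_3

piT4 = (math.pi * T)**4
# Free energy in supergravity variables, Eq. (F9):
F_sugra = -V3 / (4.0 * math.pi * G * l**2) * (l**8 * piT4 / 4.0
                                              + 5.0 * c**2 / (192.0 * l**4 * piT4))
# The same free energy in gauge theory variables, Eq. (F11):
F_gauge = -V3t / (4.0 * math.pi**2) * (N**2 * piT4 / 2.0
                                       + 5.0 * c**2 / (768.0 * gym2**3 * N * ap**6 * piT4))

assert abs(F_sugra - F_gauge) < 1e-9 * abs(F_gauge)
print("free energies (F9) and (F11) agree under (spara)")
```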
\section{Discussion} We studied the approximate (dilaton-perturbed) solutions of IIB SG near the BH-like ${\rm AdS}_5\times{\rm S}_5$ background. Thanks to the presence of the dilaton, the running gauge coupling of a non-SUSY gauge theory at finite temperature may be extracted from these solutions. It is interesting that the corresponding strong coupling regime beta-function may depend on the temperature in a complicated way (mainly, power-like behavior). We also estimated the quark-antiquark (and monopole-antimonopole) potential at finite temperature from the SG side. Its comparison with the potential of ${\cal N}=4$ super YM theory at finite temperature is also made. It is remarkable that confinement, depending on the features of the geometry and the dilaton, may occur. As our IIB SG solutions are approximate (actually, a large-radius expansion), it is clear that one is able to develop other schemes to search for similar solutions. Unfortunately, it is not yet clear how to identify explicitly the boundary non-SUSY thermal gauge theory corresponding to these solutions. One possibility is to calculate the free energy from the SG side (with non-trivial dilaton) and compare it with the free energy of perturbative thermal gauge theories with running gauge coupling. The last quantity is available in QFT. Note also that one can generalize our solutions via adding the RR-scalar (axion) to the bosonic sector of IIB SG, as discussed in refs.\cite{14}. As follows from the results of refs.\cite{7,NO}, the structure of the running gauge coupling changes drastically in this case. In particular, part of the supersymmetries may be unbroken \cite{14}, but asymptotic freedom may be realized in the strong coupling phase \cite{7,NO}. We expect that at finite temperature this property may survive. \ \noindent {\bf Acknowledgements} We thank J.~Ambj\o rn for his interest in this work.
\section{Introduction} \baselineskip .55 cm With extensive and manifold applications, fixed point theory has been one of the most influential research topics in various fields of engineering and science. The most remarkable result in this direction was stated by Banach and is known as the Banach contraction principle \cite{ba}. This result has been generalized and extended in various abstract spaces using different conditions. The prospect of fixed point theory has charmed many researchers, and so there is a vast literature available for readers \cite{Ch,De,DD,DM,Rk1,GRRS}. One of the most impressive generalizations of the notion of a metric is the concept of a fuzzy metric. Motivated by the definition of fuzzy metric spaces, Khojasteh et al. \cite{KKR} recently introduced the $\theta-$metric by replacing the triangle inequality with a more general inequality. In recent times, Khojasteh et al. \cite{KSR} introduced the notion of a $\mathcal{Z}$-contraction by using a new class of auxiliary functions called simulation functions. This kind of function has attracted much attention because it can express a great family of contractivity conditions that are well known in the field of fixed point theory. Later on, Olgun et al. \cite{OBA} provided a new class of Picard operators on complete metric spaces using the concept of a generalized $\mathcal{Z}-$contraction. In this exciting context, many developments have taken place in recent times \cite{NV,RKRM}. In this manuscript, we use $\mathcal{Z}-$contractions to obtain existence and uniqueness fixed point results on $\theta-$metric spaces. Also, we introduce the concept of modified $\mathcal{Z}-$contractions there and go on to derive a fixed point result using them in the said spaces. Our main results are equipped with competent examples. This document unfolds with a preliminaries section, where we review some definitions, examples and notable results that are involved in the sequel.
The main results section comprises some lemmas and fixed point results. These results extend, unify and generalize several results in the existing literature. Further, we furnish some non-trivial examples to illustrate the usability of the obtained theorems. \section{Preliminaries} At the outset, we note down some basic definitions and fundamental results here. In the rest of this paper, $\mathbb{N}$ will stand for the set of all non-negative integers and $\mathbb{R}$ will denote the set of all real numbers. Let $T: X\rightarrow X$ be a self-mapping. We say $x \in X$ is a fixed point of $T$ if $Tx = x$. The following notion of simulation functions was first introduced by Khojasteh et al. in \cite{KSR}. \begin{definition} \cite{KSR} \label{d1} Let $\zeta: [0, \infty) \times [0, \infty) \rightarrow \mathbb{R}$ be a mapping. Then $\zeta$ is called a simulation function if it satisfies: \begin{enumerate} \item [($\zeta1$)] $\zeta(0,0)=0 $, \item [($\zeta2$)] $\zeta(t,s)< s-t$ for all $s,t>0 $, \item [($\zeta3$)] if $\{t_n\},\{s_n\}$ are sequences in $(0, \infty)$ such that $\lim_{n \rightarrow \infty}t_n= \lim_{n \rightarrow \infty}s_n>0$, then \[\limsup_{n \rightarrow \infty} \zeta(t_n, s_n)< 0.\] \end{enumerate} \end{definition} The authors provided a wide range of examples of simulation functions to emphasize their promising applicability in the literature of fixed point theory. We list a few here. \begin{example} \cite{KSR} Let $\zeta_i: [0, \infty) \times [0, \infty) \rightarrow \mathbb{R}$, $i=1,2,3,$ be defined by: \begin{enumerate} \item $\zeta_1(t, s)= \frac{s}{s+1}-t$ for all $t,s \in [0, \infty).$ \item $\zeta_2(t, s)= \eta(s)-t$ for all $t,s \in [0, \infty)$, where $\eta: [0, \infty) \rightarrow [0, \infty)$ is an upper semi-continuous mapping such that $\eta(t)< t$ for all $t>0$ and $\eta(0)=0$. \item $\zeta_3(t, s)= s -\phi(s)-t$ for all $s, t\in [0,\infty)$, where $\phi: [0,\infty) \rightarrow [0,\infty)$ is a continuous function such that $\phi(t) = 0 \Leftrightarrow t = 0$. 
\end{enumerate} \end{example} The set of all simulation functions is denoted by $\mathcal{Z}.$ \begin{definition} \cite{KSR} \label{d2} Let $T: X \rightarrow X$ be a self-mapping and $\zeta \in \mathcal{Z}$. Then $T$ is called a $\mathcal{Z}-$contraction with respect to $\zeta$ if \[ \zeta(d(Tx,Ty),d(x,y)) \geq 0\] holds for all $x,y \in X.$ \end{definition} The Banach contraction is a perfect example of a $\mathcal{Z}-$contraction. It satisfies the above non-negativity restriction by taking $\zeta(t,s)= \lambda s- t$, where $\lambda \in [0,1),$ as the corresponding simulation function. Besides the above examples, there are several other examples of simulation functions and $\mathcal{Z}-$contractions, which can be found in \cite{KSR}. \begin{remark} [cf. \cite{KSR}] \label{r1} It follows easily from the definition of a simulation function that $\zeta(t,s)<0$ for all $t \geq s>0.$ So, if $T$ is a $\mathcal{Z}-$contraction with respect to $\zeta \in \mathcal{Z}$, then \[d(Tx,Ty)< d(x,y)\] whenever $x \neq y,$ for all $x,y \in X$. This leads us to the conclusion that every $\mathcal{Z}-$contraction is contractive and hence continuous. \end{remark} For our purposes, we need to enunciate the ideas of $B$-actions and $\theta$-metrics here. In 2013, Khojasteh et al. \cite{KKR} proposed the notion of the $\theta-$metric as a proper generalization of a metric. \begin{definition} \cite{KKR} \label{d3} Let $\theta : [0, \infty) \times [0, \infty) \rightarrow [0, \infty)$ be a continuous mapping with respect to both variables. Let Im$(\theta)=\{\theta(s,t): s\geq 0, t\geq 0\}$. 
The mapping $\theta$ is called a $B$-action if and only if it satisfies the following conditions: \begin{enumerate} \item[(B1)] $\theta(0,0)=0~~\mbox{and}~~\theta(s,t)= \theta(t,s)$ for all $ s,t \geq 0,$ \item[(B2)] \begin{eqnarray*} \theta(s,t) < \theta(u,v) & \Rightarrow & \begin{cases} \text{either} ~~s <u, t \leq v\\ \text{or} ~~s \leq u, t < v, \end{cases} \end{eqnarray*} \item[(B3)] for each $r \in Im(\theta)$ and for each $s \in [0,r],$ there exists $t \in [0,r]$ such that $\theta (t,s)=r,$ \item[(B4)] $\theta(s,0) \leq s,$ for all $s> 0.$ \end{enumerate} \end{definition} \begin{example} \cite{KKR} The following examples illustrate the definition. \begin{enumerate} \item $\theta_1(s,t)= \frac{ts}{1+ts}.$ \item $\theta_2(s,t)= t+s+ \sqrt{ts}.$ \end{enumerate} \end{example} The set of all $B$-actions is denoted by $Y$. The idea of a $B-$action has been instrumental in formulating the notion of $\theta-$metric spaces \cite{KKR}. We here recall the definition of the said spaces. \begin{definition} \cite{KKR} \label{d4} Let $X$ be a non-empty set. A mapping $d_\theta: X \times X \rightarrow [0,\infty)$ is called a $\theta$-metric on $X$ with respect to a $B-$action $\theta \in Y$ if $d_\theta$ satisfies the following: \begin{enumerate} \item[($\theta1$)] $ d_\theta(x,y)=0$ if and only if $x=y,$ \item[($\theta2$)] $ d_\theta(x,y)= d_\theta(y,x)$, for all $x,y \in X,$ \item[($\theta3$)] $ d_\theta(x,y) \leq \theta(d_\theta(x,z),d_\theta(z,y)),$ for all $x,y,z \in X.$ \end{enumerate} Then the pair $(X,d_\theta)$ is called a $\theta$-metric space. \end{definition} \begin{example} \cite{KKR} Here we provide a non-trivial example of a $\theta-$metric space. 
Let $X= \{ x,y,z\}$ and let $d_\theta: X \times X \rightarrow [0,\infty)$ be defined as: \[d_\theta(x,y)= 5, d_\theta(y,z)= 12, d_\theta(z,x)=13, d_\theta(x,y)=d_\theta(y,x),\] \[ d_\theta(y,z)=d_\theta(z,y), d_\theta(z,x)=d_\theta(x,z), d_\theta(x,x)=d_\theta(y,y)=d_\theta(z,z)=0.\] Taking $\theta(s,t)= \sqrt{s^2+t^2},$ the mapping $d_\theta$ forms a $\theta$-metric (note, for instance, that $d_\theta(z,x)=13= \sqrt{5^2+12^2}=\theta(d_\theta(z,y),d_\theta(y,x))$), and hence the pair $(X,d_\theta)$ is a $\theta$-metric space. \end{example} \begin{remark} [cf. \cite{KKR}] If $(X,d_\theta)$ is a $\theta$-metric space and $\theta(s,t)=s+t,$ then $(X,d_\theta)$ is an ordinary metric space. Conversely, every metric space is a $\theta$-metric space with respect to the $B$-action $\theta(s,t)=s+t$. \end{remark} For further terminology and derived results, we refer to \cite{KKR}. \section{Main Results} In this section, we prove some fixed point theorems for self-mappings via simulation functions in the setting of $\theta-$metric spaces, and we also give illustrative examples. We begin with the following lemmas, which are crucial to our main results. \begin{lemma} \label{l1} Let $(X,d_\theta)$ be a complete $\theta-$metric space and $T:X \rightarrow X$ be a $\mathcal{Z}-$contraction with respect to $\zeta \in \mathcal{Z}$. Then $T$ is an asymptotically regular mapping at every $x \in X.$ \end{lemma} \begin{proof} Let $x \in X$ be an arbitrary element. Without loss of generality, we may assume that $T^n x \neq T^{n+1}x$ for all $n \in \mathbb{N}.$ Taking into account Remark \ref{r1}, we have \begin{eqnarray*} d_\theta(T^{n+1} x, T^{n+2}x)=d_\theta(T(T^n x), T(T^{n+1}x)) < d_\theta(T^n x, T^{n+1}x) \end{eqnarray*} for all $n \in \mathbb{N}$. So $\{d_\theta(T^n x, T^{n+1} x)\}$ is a decreasing sequence of non-negative reals.
Thus there exists an $r\geq 0$ such that $\lim_{n\rightarrow \infty} d_\theta(T^n x, T^{n+1} x)= r.$ Our claim is that $r=0.$ Suppose $r>0.$ Since $T$ is a $\mathcal{Z}-$contraction with respect to $\zeta$ and both $d_\theta(T^{n+1} x,T^n x)$ and $d_\theta(T^n x,T^{n-1} x)$ converge to $r>0$, we have \begin{eqnarray*} 0 & \leq & \limsup_{n \rightarrow \infty}\zeta(d_\theta(T^{n+1} x,T^n x), d_\theta(T^n x,T^{n-1} x))\\ & < & 0. \end{eqnarray*} This contradiction proves that $r=0$ and hence $\lim_{n \rightarrow \infty}d_\theta(T^n x, T^{n+1} x)=0$. So $T$ is an asymptotically regular mapping at every $x \in X.$ \end{proof} \begin{lemma} \label{l2} Let $(X,d_\theta)$ be a complete $\theta-$metric space and $T:X \rightarrow X$ be a $\mathcal{Z}-$contraction with respect to $\zeta \in \mathcal{Z}$. If $T$ has a fixed point in $X$, then it is unique. \end{lemma} \begin{proof} Let $u \in X$ be a fixed point of $T$ and suppose that $v \in X$ is another fixed point of $T$ with $u \neq v$. Therefore, $Tu=u$ and $Tv=v.$ Now by using $(B4)$ and $(\theta3),$ we obtain \begin{eqnarray*} d_\theta(u,v) & =& d_\theta(Tu,Tv)\\ & \leq & \theta(d_\theta(Tu,u),d_\theta(u,Tv))\\ & = & \theta(d_\theta(u,u),d_\theta(u,Tv))\\ & \leq & d_\theta(u,Tv)\\ & \leq & \theta(d_\theta(u,v),d_\theta(v,Tv))\\ & = & \theta(d_\theta(u,v),d_\theta(v,v))\\ & \leq & d_\theta(u,v). \end{eqnarray*} Thus $d_\theta(u,v)=d_\theta(Tu,Tv).$ On the other hand, since $u \neq v,$ Remark \ref{r1} yields $d_\theta(Tu,Tv)< d_\theta(u,v),$ a contradiction. This proves the result. \end{proof} The first main result of this article is the following one. \begin{theorem}\label{t1} Let $(X,d_\theta)$ be a complete $\theta-$metric space and $T:X \rightarrow X$ be a $\mathcal{Z}-$contraction with respect to $\zeta \in \mathcal{Z}$. Then $T$ has a unique fixed point $u$ in $X$ and for every $x_0 \in X,$ the Picard sequence $\{x_n\}$ converges to the fixed point of $T$. \end{theorem} \begin{proof} Let $x_0$ be an arbitrary point and $\{x_n\}$ be the corresponding Picard sequence, i.e., $x_n=Tx_{n-1}$ for all $n \in \mathbb{N}$. We claim that the sequence $\{x_n\}$ is bounded.
Reasoning by contradiction, we assume that $\{x_n\}$ is unbounded. Then there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $n_1=1$ and for each $k \in \mathbb{N},$ $n_{k+1}$ is the minimal integer such that \[d_\theta(x_{n_{k+1}},x_{n_k}) > 1\] and \begin{eqnarray} d_\theta(x_m,x_{n_k}) & \leq & 1 \end{eqnarray} for $n_k \leq m \leq n_{k+1}-1.$ Now, using the triangle inequality $(\theta3)$ and $(3.1)$, we have \begin{eqnarray} 1 & < & d_\theta(x_{n_{k+1}},x_{n_k}) \nonumber\\ & \leq & \theta(d_\theta(x_{n_{k+1}},x_{n_{k+1}-1}),d_\theta(x_{n_{k+1}-1},x_{n_k})) \nonumber\\ & \leq & \theta(d_\theta(x_{n_{k+1}},x_{n_{k+1}-1}),1). \end{eqnarray} Letting $k \rightarrow \infty$ on both sides of $(3.2)$ and then using Lemma \ref{l1}, the continuity of $\theta$ and $(B4)$, we deduce that \[d_\theta(x_{n_{k+1}},x_{n_k}) \rightarrow 1.\] On the other hand, using Remark \ref{r1}, $(\theta3)$ and $(3.1)$, we derive that \begin{eqnarray*} 1 & < & d_\theta(x_{n_{k+1}},x_{n_k}) \\ & < & d_\theta(x_{n_{k+1}-1},x_{n_k-1})\\ & \leq & \theta(d_\theta(x_{n_{k+1}-1},x_{n_k}),d_\theta(x_{n_k},x_{n_k-1}))\\ & \leq & \theta(1,d_\theta(x_{n_k},x_{n_k-1})). \end{eqnarray*} So, as $k \rightarrow \infty$, we get \[d_\theta(x_{n_{k+1}-1},x_{n_k-1}) \rightarrow 1.\] Since $T$ is a $\mathcal{Z}-$contraction with respect to $\zeta \in \mathcal{Z}$ and both of the above sequences converge to $1>0,$ we derive that \begin{eqnarray*} 0 & \leq & \limsup_{k \rightarrow \infty} \zeta(d_\theta(Tx_{n_{k+1}-1},Tx_{n_k-1}),d_\theta(x_{n_{k+1}-1},x_{n_k-1}))\\ & = & \limsup_{k \rightarrow \infty} \zeta(d_\theta(x_{n_{k+1}},x_{n_k}),d_\theta(x_{n_{k+1}-1},x_{n_k-1}))\\ & < & 0, \end{eqnarray*} and we arrive at a contradiction. So, the Picard sequence $\{x_n\}$ is bounded. Now we will show that $\{x_n\}$ is Cauchy. For this, let \[C_n= \sup\{d_\theta(x_i,x_j): i,j \geq n\}.\] Note that $\{C_n\}$ is a decreasing sequence of non-negative reals.
Thus there exists a $C\geq 0$ such that $\lim_{n\rightarrow \infty} C_n=C.$ Our claim is that $C=0.$ Let us suppose that $C>0.$ Then by the definition of $C_n$, for every $k \in \mathbb{N},$ there exist $n_k, m_k$ such that $m_k> n_k \geq k$ and \begin{eqnarray*} C_k - \frac{1}{k} & < d_\theta(x_{m_k},x_{n_k}) & \leq C_k. \end{eqnarray*} Letting $k \rightarrow \infty$ in the above inequality, we get \begin{eqnarray*} \lim_{k \rightarrow \infty}d_\theta(x_{m_k},x_{n_k})=C. \end{eqnarray*} Now, by Remark \ref{r1} and $(\theta3)$, \begin{eqnarray*} d_\theta(x_{m_k},x_{n_k}) & \leq & d_\theta(x_{m_k-1},x_{n_k-1})\\ & \leq & \theta(d_\theta(x_{m_k-1},x_{m_k}),d_\theta(x_{m_k},x_{n_k-1}))\\ & \leq & \theta(d_\theta(x_{m_k-1},x_{m_k}),\theta(d_\theta(x_{m_k},x_{n_k}),d_\theta(x_{n_k},x_{n_k-1}))). \end{eqnarray*} Letting $k \rightarrow \infty$ in the previous inequality and using Lemma \ref{l1} together with the continuity of $\theta$, we derive \begin{eqnarray} C & \leq & \liminf_{k \rightarrow \infty}d_\theta(x_{m_k-1},x_{n_k-1}) \nonumber \\ & \leq & \limsup_{k \rightarrow \infty}d_\theta(x_{m_k-1},x_{n_k-1}) \nonumber \\ & \leq & \theta(0,\theta(C,0)). \end{eqnarray} Applying $(B1)$ and $(B4)$ twice in $(3.3)$, we get \begin{eqnarray*} C & \leq & \liminf_{k \rightarrow \infty}d_\theta(x_{m_k-1},x_{n_k-1}) \\ & \leq & \limsup_{k \rightarrow \infty}d_\theta(x_{m_k-1},x_{n_k-1})\\ & \leq & \theta(C,0)\\ & \leq & C. \end{eqnarray*} As a consequence, \[ \lim_{k \rightarrow \infty} d_\theta(x_{m_k-1},x_{n_k-1})=C.\] Now since $T$ is a $\mathcal{Z}-$contraction with respect to $\zeta \in \mathcal{Z}$ and both $d_\theta(x_{m_k},x_{n_k})$ and $d_\theta(x_{m_k-1},x_{n_k-1})$ converge to $C>0,$ we derive that \begin{eqnarray*} 0 & \leq & \limsup_{k \rightarrow \infty} \zeta(d_\theta(x_{m_k},x_{n_k}),d_\theta(x_{m_k-1},x_{n_k-1}))\\ & < & 0, \end{eqnarray*} which is a contradiction. Hence $C=0$ and, consequently, $\{x_n\}$ is Cauchy.
Since $(X,d_\theta)$ is complete, there exists some $z \in X$ such that $\lim_{n \rightarrow \infty} x_n=z.$ Now we show that $z$ is a fixed point of $T.$ Suppose, on the contrary, that $Tz \neq z.$ Then $d_\theta(z,Tz)>0.$ Using the fact that $\zeta(t,s)<s-t,$ we get \begin{eqnarray*} 0 & \leq & \limsup_{n \rightarrow \infty} \zeta(d_\theta(Tx_n,Tz),d_\theta(x_n,z))\\ & \leq & \limsup_{n \rightarrow \infty} [d_\theta(x_n,z)-d_\theta(x_{n+1},Tz)]\\ & = & -d_\theta(z,Tz). \end{eqnarray*} This contradiction proves that $d_\theta(z,Tz)=0,$ and hence $Tz=z.$ So we can conclude that $z$ is a fixed point of $T.$ Uniqueness is guaranteed by Lemma \ref{l2}. \end{proof} Now we validate our fixed point result by the following examples. \begin{example} Let $X= [0,1]$ be endowed with the Euclidean metric $d_\theta(x,y)=|x-y|$, and take $\theta(s,t)=s+t+st.$ We define a mapping $T: X \rightarrow X$ by $Tx = \frac{x}{a}+b,$ where $a > 1,$ $b \geq 0$ and $b+\frac{1}{a} <1$. So we have, \begin{eqnarray*} d_\theta(Tx,Ty) & = & |Tx-Ty|\\ &=& |\frac{x}{a}+b-\frac{y}{a}-b| \\ &=& \frac{1}{a}|x-y|. \end{eqnarray*} We claim that $T$ is a $\mathcal{Z}-$contraction with respect to the simulation function $\zeta(t,s)=\lambda s - t$, where $\frac{1}{a} < \lambda < 1,$ for all $t,s \in [0, \infty)$. Indeed, \begin{eqnarray*} \zeta(d_\theta(Tx,Ty),d_\theta(x,y)) & = & \lambda d_\theta(x,y)- d_\theta(Tx,Ty)\\ & = & \lambda |x-y|- \frac{1}{a}|x-y|\\ & = & (\lambda - \frac{1}{a}) |x-y|\\ & \geq & 0. \end{eqnarray*} Taking into account Theorem \ref{t1}, $T$ has a unique fixed point, namely $u=\frac{ab}{a-1}.$ Since $b+\frac{1}{a} <1,$ it is ensured that $u \in X;$ for instance, $a=2$ and $b=\frac{1}{4}$ give the fixed point $u=\frac{1}{2}.$ \end{example} \begin{example} Let $X= [0,1]$ be endowed with the Euclidean metric and $\theta(s,t)=s+t+st.$ We define a mapping $T: X \rightarrow X$ by $Tx = \frac{1}{1+x}, $ $ x \in X.$ Our claim is that $T$ is a $\mathcal{Z}-$contraction with respect to the simulation function $\zeta(t,s)=\frac{s}{s+1}-t,$ for all $t,s \in [0, \infty)$.
So we have, \begin{eqnarray*} \zeta(d_\theta(Tx,Ty),d_\theta(x,y))& = & \frac{d_\theta(x,y)}{d_\theta(x,y)+1}-d_\theta(Tx,Ty)\\ & = & \frac{|x-y|}{|x-y|+1}- |\frac{1}{x+1}-\frac{1}{y+1}|\\ & = & \frac{|x-y|}{|x-y|+1}- \frac{|x-y|}{(x+1)(y+1)}\\ & = & |x-y|\Bigl(\frac{1}{|x-y|+1}- \frac{1}{(x+1)(y+1)}\Bigr)\\ & \geq & 0, \end{eqnarray*} since $(x+1)(y+1)=1+x+y+xy \geq 1+|x-y|$ for all $x,y \in [0,1]$. Hence, applying Theorem \ref{t1}, $T$ has a unique fixed point, namely $u=\frac{\sqrt 5 -1}{2} \in X,$ the unique solution of $u=\frac{1}{1+u}$ in $[0,1]$. \end{example} Here we introduce the new class of modified $\mathcal{Z}-$contractions. \begin{definition} \label{d5} Let $T: X \rightarrow X$ be a mapping and $\zeta \in \mathcal{Z}$. Then $T$ is called a modified $\mathcal{Z}-$contraction with respect to $\zeta$, if it satisfies: \[ \zeta(d_\theta(Tx,Ty),M(x,y)) \geq 0\] for all $x,y \in X,$ where \begin{eqnarray*} M(x,y) & = & \max\{d_\theta(x,y),d_\theta(x,Tx),d_\theta(y,Ty)\}. \end{eqnarray*} \end{definition} \begin{example} Let $X= [0,1]$ be endowed with the Euclidean metric and $\theta(s,t)=s+t+st.$ We define a mapping $T: X \rightarrow X$ by \begin{eqnarray*} Tx & = & \begin{cases} \frac{1}{7},~~ x \in S_1=[0,\frac{1}{2}), \\ \frac{2}{7},~~ x \in S_2=[\frac{1}{2},1]. \end{cases} \end{eqnarray*} Then $T$ is a modified $\mathcal{Z}-$contraction with respect to the simulation function $\zeta(t,s)= \frac{7}{8}s - t.$ \end{example} Now we deliver one of our main results related to modified $\mathcal{Z}-$contractions in the context of $\theta-$metric spaces. This theorem guarantees the existence and uniqueness of the fixed point of a modified $\mathcal{Z}-$contraction. The subsequent lemma forms the basis for our result. \begin{lemma} \label{l3} Let $(X,d_\theta)$ be a complete $\theta-$metric space and $T:X \rightarrow X$ be a modified $\mathcal{Z}-$contraction with respect to $\zeta \in \mathcal{Z}$. If $T$ has a fixed point in $X$, it is unique. \end{lemma} \begin{proof} Let $u \in X$ be a fixed point of $T$ and suppose that $v \in X$ is another fixed point of $T$ with $u \neq v$.
This means that $Tu=u$ and $Tv=v.$ From Definition \ref{d5} and using the previous fact, we observe that \begin{eqnarray*} M(u,v) & = & \max\{d_\theta(u,v),d_\theta(u,Tu),d_\theta(v,Tv)\}\\ & = & \max\{d_\theta(u,v),d_\theta(u,u),d_\theta(v,v)\}\\ & = & d_\theta(u,v). \end{eqnarray*} Since $T$ is a modified $\mathcal{Z}-$contraction with respect to $\zeta \in \mathcal{Z},$ we attain that \begin{eqnarray*} 0 & \leq & \zeta(d_\theta(Tu,Tv),M(u,v))\\ & = & \zeta(d_\theta(Tu,Tv),d_\theta(u,v))\\ & = & \zeta(d_\theta(u,v),d_\theta(u,v)). \end{eqnarray*} Since $d_\theta(u,v)>0,$ Remark \ref{r1} (namely, the fact that $\zeta(t,s)<0$ for all $t \geq s>0$) shows that $\zeta(d_\theta(u,v),d_\theta(u,v))<0,$ a contradiction. This proves the result. \end{proof} Now, we are ready to state another main result. \begin{theorem}\label{t2} Let $(X,d_\theta)$ be a complete $\theta-$metric space and $T:X \rightarrow X$ be a modified $\mathcal{Z}-$contraction with respect to $\zeta \in \mathcal{Z}$. Then $T$ has a unique fixed point $u$ in $X$ and for every $x_0 \in X,$ the Picard sequence $\{x_n\}$ converges to the fixed point of $T$. \end{theorem} \begin{proof} Let $x_0$ be an arbitrary point and $\{x_n\}$ be the corresponding Picard sequence, i.e., $x_n=Tx_{n-1}$ for all $n \in \mathbb{N}$. We suppose that $d_\theta(x_n,x_{n+1})> 0$ for all $n \in \mathbb{N};$ otherwise, if there exists $n_p \in \mathbb{N}$ such that $x_{n_p}= x_{n_p+1}$, then $x_{n_p}$ is a fixed point of $T$ and we are done. Next we define $d_{\theta}^{n}=d_\theta(x_n, x_{n+1})$. Then \begin{eqnarray*} M(x_{n-1},x_n) & = & \max\{d_\theta(x_{n-1},x_n),d_\theta(x_{n-1},Tx_{n-1}),d_\theta(x_n,Tx_n)\}\\ & = & \max\{d_{\theta}^{n-1},d_{\theta}^{n}\}.
\end{eqnarray*} Since $T$ is a modified $\mathcal{Z}-$contraction with respect to $\zeta \in \mathcal{Z},$ we get \begin{eqnarray*} 0 & \leq & \zeta(d_\theta(Tx_{n-1},Tx_n),M(x_{n-1},x_n))\\ & = & \zeta(d_{\theta}^{n},\max\{d_{\theta}^{n-1},d_{\theta}^{n}\}). \end{eqnarray*} If $d_{\theta}^{n} \geq d_{\theta}^{n-1}$ for some $n \in \mathbb{N},$ the right-hand side equals $\zeta(d_{\theta}^{n},d_{\theta}^{n}),$ which is negative by Remark \ref{r1}, a contradiction. Hence $d_{\theta}^{n} < d_{\theta}^{n-1}$ for all $n \in \mathbb{N}$ and \[ 0 \leq \zeta(d_{\theta}^{n},d_{\theta}^{n-1}).\] Now $\{d_{\theta}^{n}\}$ is a decreasing sequence of non-negative real numbers and hence is convergent. Let $\lim_{n \rightarrow \infty}d_{\theta}^{n}= r.$ If $r> 0,$ we have \begin{eqnarray*} 0 & \leq & \limsup_{n \rightarrow \infty} \zeta(d_{\theta}^{n},d_{\theta}^{n-1})\\ & < & 0. \end{eqnarray*} We arrive at a contradiction, so $r =0;$ that is, $T$ is asymptotically regular at $x_0.$ We claim that the sequence $\{x_n\}$ is bounded. Reasoning by contradiction, we assume that $\{x_n\}$ is unbounded. Then there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $n_1=1$ and for each $k \in \mathbb{N},$ $n_{k+1}$ is the minimal integer such that \[d_\theta(x_{n_{k+1}},x_{n_k}) > 1\] and \[d_\theta(x_m,x_{n_k}) \leq 1\] for $n_k \leq m \leq n_{k+1}-1.$ Now, using the triangle inequality $(\theta3)$, we have \begin{eqnarray} 1 & < & d_\theta(x_{n_{k+1}},x_{n_k}) \nonumber \\ & \leq & \theta(d_\theta(x_{n_{k+1}},x_{n_{k+1}-1}),d_\theta(x_{n_{k+1}-1},x_{n_k})) \nonumber \\ & \leq & \theta(d_\theta(x_{n_{k+1}},x_{n_{k+1}-1}),1).
\end{eqnarray} By taking the limit as $k \rightarrow \infty$ on both sides of $(3.4)$ and using the asymptotic regularity established above, the continuity of $\theta$ and $(B4)$, we infer that \[d_\theta(x_{n_{k+1}},x_{n_k}) \rightarrow 1.\] Also, since the contractive condition together with the fact that $\zeta(t,s)<s-t$ gives $d_\theta(Tx,Ty)<M(x,y)$ whenever $M(x,y)>0,$ we have \begin{eqnarray*} 1 & < & d_\theta(x_{n_{k+1}},x_{n_k}) \\ & \leq & M(x_{n_{k+1}-1},x_{n_k-1})\\ & = & \max\{d_\theta(x_{n_{k+1}-1},x_{n_k-1}),d_\theta(x_{n_{k+1}-1},x_{n_{k+1}}),d_\theta(x_{n_k-1},x_{n_k})\}\\ & \leq & \max\{\theta(d_\theta(x_{n_{k+1}-1},x_{n_k}),d_\theta(x_{n_k},x_{n_k-1})),d_\theta(x_{n_{k+1}-1},x_{n_{k+1}}),d_\theta(x_{n_k-1},x_{n_k})\}\\ & \leq & \max\{\theta(1,d_\theta(x_{n_k},x_{n_k-1})),d_\theta(x_{n_{k+1}-1},x_{n_{k+1}}),d_\theta(x_{n_k-1},x_{n_k})\}. \end{eqnarray*} Letting $k \rightarrow \infty$ and using $(B4)$, we derive \[\lim_{k \rightarrow \infty}M(x_{n_{k+1}-1},x_{n_k-1})=1.\] As $T$ is a modified $\mathcal{Z}-$contraction with respect to $\zeta \in \mathcal{Z},$ we obtain \begin{eqnarray*} 0 & \leq & \limsup_{k \rightarrow \infty} \zeta(d_\theta(x_{n_{k+1}},x_{n_k}),M(x_{n_{k+1}-1},x_{n_k-1}))\\ & < & 0. \end{eqnarray*} This leads to a contradiction and hence $\{x_n\}$ is bounded. Now we will show that $\{x_n\}$ is Cauchy. For this, we consider the real sequence \[C_n= \sup\{d_\theta(x_i,x_j): i,j \geq n\}.\] Note that $\{C_n\}$ is a decreasing sequence of non-negative reals. Thus there exists $C\geq 0$ such that $\lim_{n\rightarrow \infty} C_n=C.$ Our claim is that $C=0.$ Let us suppose that $C>0.$ Then by the definition of $C_n$, for every $k \in \mathbb{N},$ there exist $n_k, m_k$ such that $m_k> n_k \geq k$ and \begin{eqnarray*} C_k - \frac{1}{k} & < d_\theta(x_{m_k},x_{n_k}) & \leq C_k. \end{eqnarray*} Letting $k \rightarrow \infty$ in the above inequality, we get \begin{eqnarray*} \lim_{k \rightarrow \infty}d_\theta(x_{m_k},x_{n_k})=C, \end{eqnarray*} and, since $m_k-1, n_k-1 \geq k-1$ implies $d_\theta(x_{m_k-1},x_{n_k-1}) \leq C_{k-1},$ also \begin{eqnarray*} \limsup_{k \rightarrow \infty}d_\theta(x_{m_k-1},x_{n_k-1}) \leq C.
\end{eqnarray*} Now, \begin{eqnarray*} d_\theta(x_{m_k},x_{n_k}) & \leq & M(x_{m_k-1},x_{n_k-1})\\ & = & \max\{d_\theta(x_{m_k-1},x_{n_k-1}),d_\theta(x_{m_k-1},x_{m_k}),d_\theta(x_{n_k-1},x_{n_k})\}. \end{eqnarray*} Since the last two terms of the maximum tend to $0$ while the first does not exceed $C_{k-1},$ and since $M(x_{m_k-1},x_{n_k-1}) \geq d_\theta(x_{m_k},x_{n_k}) \rightarrow C,$ taking $k \rightarrow \infty$ we get \[ \lim_{k \rightarrow \infty} M(x_{m_k-1},x_{n_k-1})=C.\] Now since $T$ is a modified $\mathcal{Z}-$contraction with respect to $\zeta \in \mathcal{Z}$ and both $d_\theta(x_{m_k},x_{n_k})$ and $M(x_{m_k-1},x_{n_k-1})$ converge to $C>0,$ we derive that \begin{eqnarray*} 0 & \leq & \limsup_{k \rightarrow \infty} \zeta(d_\theta(x_{m_k},x_{n_k}),M(x_{m_k-1},x_{n_k-1}))\\ & < & 0, \end{eqnarray*} which is a contradiction. As a result, $C=0$ and $\{x_n\}$ is Cauchy. Since $(X,d_\theta)$ is complete, there exists some $z \in X$ such that $\lim_{n \rightarrow \infty} x_n=z.$ Now we show that $z$ is a fixed point of $T.$ Suppose, on the contrary, that $Tz \neq z.$ Then $d_\theta(z,Tz)>0.$ Observe that $d_\theta(Tx_n,Tz)=d_\theta(x_{n+1},Tz) \rightarrow d_\theta(z,Tz)>0$ and \[M(x_n,z)= \max\{d_\theta(x_n,z),d_\theta(x_n,x_{n+1}),d_\theta(z,Tz)\} \rightarrow d_\theta(z,Tz)>0.\] Hence, by the property of simulation functions that $\limsup_{n \rightarrow \infty} \zeta(t_n,s_n)<0$ whenever $t_n,s_n \rightarrow \ell>0,$ we get \begin{eqnarray*} 0 & \leq & \limsup_{n \rightarrow \infty} \zeta(d_\theta(Tx_n,Tz),M(x_n,z))\\ & < & 0. \end{eqnarray*} This contradiction attests that $d_\theta(z,Tz)=0,$ and so $Tz=z.$ Thus $z$ is a fixed point of $T.$ Uniqueness is guaranteed by Lemma \ref{l3}. \end{proof} As an application of our earlier result, we furnish the next example, which illustrates Theorem \ref{t2}. \begin{example} Let $X= [0,1]$ be equipped with the usual Euclidean metric and $\theta(s,t)=s+t+st.$ We define a mapping $T: X \rightarrow X$ by \begin{eqnarray*} Tx & = & \begin{cases} \frac{2}{9},~~ x \in S_1=[0,\frac{1}{2}), \\ \frac{1}{9},~~ x \in S_2=[\frac{1}{2},1].
\end{cases} \end{eqnarray*} We argue that $T$ is a modified $\mathcal{Z}-$contraction with respect to the simulation function $\zeta(t,s)= \frac{1}{2}s - t.$ Here we have, $0 \leq d_\theta(Tx,Ty) \leq \frac{1}{9}$ for all $x,y \in X.$ Now, if both $x, y \in S_1$ or $S_2$, then $d_\theta(Tx,Ty)=0$ and we are done. Otherwise, let $x \in S_1$ and $y \in S_2$. We get $0 < d_\theta(x,y) \leq 1$. Also, $\frac{2}{9} \leq d_\theta(x,Tx) \leq \frac{5}{9}$ and $\frac{7}{18} \leq d_\theta(y,Ty) \leq \frac{8}{9}.$ Therefore $M(x,y) \geq \frac{7}{18}.$ From the calculation, it is clear that \[d_\theta(Tx,Ty) \leq \frac{1}{2}M(x,y).\] So we have, \[ \zeta(d_\theta(Tx,Ty),M(x,y))= \frac{1}{2}M(x,y)- d_\theta(Tx,Ty) \geq 0\] for all $x,y \in X.$ As a consequence $T$ is a modified $\mathcal{Z}-$contraction. Taking into account Theorem \ref{t2}, we can say that $T$ has a unique fixed point. Here $u= \frac{2}{9}$ is that required fixed point. \end{example} \begin{Acknowledgement} The first named author would like to convey his cordial thanks to DST-INSPIRE, New Delhi, India for their financial support under INSPIRE fellowship scheme. \end{Acknowledgement}
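The convergence of the Picard sequence asserted in Theorem \ref{t2} can be made fully explicit for the last example; the following observation is a simple direct computation.

\begin{remark} In the preceding example, $T(X)=\{\frac{1}{9},\frac{2}{9}\} \subset S_1$ and $Tx=\frac{2}{9}$ for every $x \in S_1.$ Hence, for any starting point $x_0 \in X,$ the Picard sequence satisfies $x_2=Tx_1=\frac{2}{9}=u,$ i.e., it reaches the unique fixed point of $T$ after at most two iterations. \end{remark}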
\section{Introduction} Recent theoretical studies of noisy entanglement resulted in discoveries of interesting phenomena that occur in higher-dimensional systems. One prominent example is provided by bound-entangled states that cannot be created by local operations and classical communication (LOCC), but at the same time cannot be brought into a pure maximally entangled form under LOCC constraints \cite{HoroHoroPRL98}. Another profound observation is that certain states can be used to generate the cryptographic key at rates that exceed distillable entanglement \cite{HoroHoroPRL05}. These effects reveal the highly complex nature of noisy entanglement in higher dimensions. Experimental studies of such entanglement are important for two main reasons. On the fundamental side, it is always vital to confirm theoretical predictions in actual physical systems. From a more practical perspective, quantum entanglement is an essential resource in implementing a wide range of quantum-enhanced protocols for communication, sensing, etc. However, its generation and distribution are usually affected by imperfections which deteriorate the quality of the available resource. Therefore one needs tools to characterize relevant properties of noisy entanglement and to utilize it for practical purposes in an optimal way. In this paper we present an experimental scheme that can be used to produce a wide range of noisy entangled multiphoton states. Our approach exploits the geometric symmetry of the popular scheme to generate polarization-entangled photon pairs via spontaneous parametric down-conversion (SPDC) realized in two type-I crystals \cite{KwiaWaksPRA99}. As we describe below, the axial symmetry of this scheme allows one to collect simultaneously several entangled photon pairs with little additional effort. 
It is worth noting that the symmetry of type-I SPDC has also been exploited in schemes for generating hyperentangled two-photon states \cite{Hyp1,Hyperentanglement} and creating high-dimensional path entanglement of two photons \cite{RossVallPRL09}. In order to prepare noisy entangled states in the polarization degree of freedom, the generated photons can be subjected to variable correlated birefringence. We have used this approach to produce two specific examples of four-photon mixed entangled states. One of them is the Smolin state \cite{SmolPRA00}, whose first experimental generation, carried out by Amselem and Bourennane \cite{AmseBourNPH09}, illustrated the delicate nature of the phenomenon of bound entanglement in the presence of experimental imperfections \cite{LavoKaltNPH2010}. This initial work was subsequently followed by the development of more robust preparation techniques \cite{KampBrusPRA10,LavoKaltPRL10}. Our results indicate that the problems encountered in Ref.~\cite{AmseBourNPH09} are rather generic and do not depend on the specifics of the optical setup. The second example presented here is a private state, for which the secure key content is strictly higher than the amount of entanglement distillable in the asymptotic regime. These information-theoretic properties have been verified experimentally for the first time in \cite{DobeKarpPRL11}. Here we report an experimental illustration of a recent theoretical result which links privacy to incompatibility with local realistic theories \cite{AuguCavaPRL10}. Let us note that non-trivial forms of noisy entanglement have recently been observed also in a system of trapped ions subjected to decoherence induced by spontaneous decay \cite{BarrSchiNPH10}. This paper is organized as follows. First, we present the experimental setup in Sec.~\ref{Sec:ExpSetup}. The examples of noisy entangled states along with details of their generation are described in Sec.~\ref{Sec:Generation}.
Results of their characterization are reviewed in Sec.~\ref{Sec:Characterisation}. Finally, Sec.~\ref{Sec:Conclusions} concludes the paper. \section{Experimental setup} \label{Sec:ExpSetup} Owing to a simple setup and room-temperature operation, SPDC has been the tool of choice to generate multiphoton entangled states. The basic ingredient is an arrangement to produce one maximally entangled photon pair. A very popular configuration is based on a single type-II crystal, with photon pairs collected from the intersection points of two emission cones with orthogonal polarizations \cite{KwiaMattPRL95}. Reflecting the pump pulse and sending it back to the same crystal allows one to generate two pairs. This approach, used to demonstrate quantum teleportation \cite{Teleportation}, has been highly refined in subsequent experiments \cite{PanDaniPRL01}. An alternative is to send the pump pulse through a sequence of type-II crystals \cite{LuZhouNPhysics07}, which enables generation of three or more pairs at once. Furthermore, the alignment can be optimized independently for each crystal. Another configuration to produce photon pairs in a maximally entangled polarization state utilizes SPDC in two type-I crystals whose optical axes are aligned in perpendicular planes, thus producing orthogonally polarized photons \cite{KwiaWaksPRA99}. Because the down-converted photons are generated as ordinary rays, they emerge on an axially symmetric cone. With a suitably adjusted polarization of the pump beam both the crystals produce photon pairs with the same probability. If the photons generated in the first and the second crystal are indistinguishable apart from their polarization, the source produces a maximally entangled polarization state. An attractive feature of this configuration is that owing to the axial symmetry of the type-I process, several entangled qubit pairs can be collected at once from different locations on the down-conversion cone. 
This provides a convenient source of multiple photon pairs from a single set of crystals without a need to redirect the pump beam. \begin{figure} \begin{center} \includegraphics[width=10cm]{figure1.eps} \end{center} \caption{(a) A schematic of the experimental setup to produce multiple polarization-entangled photon pairs. Boxes labelled with T are polarization analyzers detailed in (b). The box N represents polarization noise introduced by sets of wave plates shown for the Smolin state (c) and the private state (d). D, Soleil-Babinet compensator; XX, down-conversion crystals; IF, interference filter; SMF, single-mode fiber; QWP, quarter-wave plate; HWP, half-wave plate; PBS, polarizing beam splitter; APD, avalanche photodiode.} \label{Fig:Setup} \end{figure} We implemented the above idea in a setup shown schematically in Figure~\ref{Fig:Setup}. The pump beam was a $78~\mathrm{MHz}$ train of $180~\mathrm{fs}$ pulses with the spectrum centered at $390~\mathrm{nm}$, obtained by frequency doubling the output of a Ti:sapphire oscillator (Coherent Chameleon Ultra) in a $1~\mathrm{mm}$ long lithium triborate crystal, which resulted in $200~\mathrm{mW}$ average power. The pump beam was focused to a $70~\mu\mathrm{m}$ diameter spot in the pair of beta-barium borate crystals. The SPDC emission was collimated with a 20~cm focal length lens, sending the generated photons along parallel paths. The idea of this arrangement is similar to that used recently to generate multipath entanglement of photon pairs \cite{RossVallPRL09}. In the constructed setup, it was possible to introduce various types of polarization noise by inserting in the paths of one or two photons birefringent elements (half- and quarter-wave plates) with variable orientations. Two Soleil-Babinet compensators were placed in the pump beam and the path of one of the down-converted photons to control relative phases between the horizontal and vertical components for both the produced pairs.
In experiments where photon polarizations were measured individually, the photons were filtered through 10~nm interference filters and coupled into single-mode fibers, which delivered them to the polarization analyzers shown in Figure~\ref{Fig:Setup}(b). Each analyzer consisted of a quarter- and a half-wave plate followed by a Wollaston polarizing beam splitter, whose output ports were coupled into multimode fibers. These fibers guided the photons to avalanche photodiodes operated in the Geiger mode (Perkin-Elmer SPCM-AQR). The electronic signals from the detectors were finally processed using a coincidence circuit with a 6~ns window programmed in a field-programmable gate array (FPGA) board. All the wave plates were mounted on motorized rotation stages and the whole experiment was controlled using a dedicated LabView (National Instruments) application. Typical count rates in the setup were of the order of $10^5$~Hz for single counts, $10^4$~Hz for two-fold coincidences for detectors monitoring the same photon pair, and $2$~Hz for four-fold coincidences. \section{Generation of noisy states} \label{Sec:Generation} We employed the setup described in the preceding section to generate and characterize noisy four-qubit states that illustrate interesting phenomena occurring in the theory of high-dimensional entanglement. The first example was the Smolin state \cite{SmolPRA00} of four qubits $ABA'B'$. It is defined as an equally weighted statistical mixture of four components, each corresponding to both the pairs prepared in the same Bell state: \begin{multline} \hat{\varrho}_{\text{Smolin}} = \frac{1}{4} \bigl( \proj[AB]{\phi_+} \otimes \proj[A'B']{\phi_+} + \proj[AB]{\phi_-} \otimes \proj[A'B']{\phi_-} \\ + \proj[AB]{\psi_+} \otimes \proj[A'B']{\psi_+} + \proj[AB]{\psi_-} \otimes \proj[A'B']{\psi_-} \bigr).
\label{Eq:SmolinDef} \end{multline} We denoted here the Bell states as: \begin{eqnarray} \ket{\phi_\pm} & = \frac{1}{\sqrt{2}} \bigl( \ket{HH} \pm \ket{VV} \bigr) \nonumber \\ \ket{\psi_\pm} & = \frac{1}{\sqrt{2}} \bigl( \ket{HV} \pm \ket{VH} \bigr), \end{eqnarray} where $H$ and $V$ stand for the horizontal and the vertical polarization respectively. The Smolin state $\hat{\varrho}_{\text{Smolin}}$ is invariant with respect to any permutation of individual qubits, which can be seen most easily from its representation in terms of Pauli matrices $\hat{\sigma}^\mu$: \begin{equation} \hat{\varrho}_{\text{Smolin}} = \frac{1}{16} \left( \vphantom{\sum_{\mu=x,y,z}} \hat{\mathbbm 1}_{A} \otimes \hat{\mathbbm 1}_{B} \otimes \hat{\mathbbm 1}_{A'} \otimes \hat{\mathbbm 1}_{B'} + \sum_{\mu=x,y,z} \hat{\sigma}^{\mu}_{A} \otimes \hat{\sigma}^{\mu}_{B} \otimes \hat{\sigma}^{\mu}_{A'} \otimes \hat{\sigma}^{\mu}_{B'} \right), \end{equation} where we used the standard notation \begin{eqnarray} \hat\sigma^x & =\ket H \bra V+\ket V \bra H \nonumber \\ \hat\sigma^y & =\rmi \bigl(\ket V \bra H-\ket H \bra V \bigr) \nonumber \\ \hat\sigma^z & = \ket H \bra H-\ket V \bra V. \end{eqnarray} If we consider a partition of the four qubits into two subsystems, the first one comprising just one qubit and the second one remaining three, e.g.\ $A\mathop{:}BA'B'$, the Smolin state exhibits bipartite entanglement. Indeed, the Bell state measurement applied by the second party to qubits $A'B'$ allows her to bring the qubits $AB$ to a maximally entangled state without communication with the other party. On the other hand, the state is separable with respect to any partition into two pairs of qubits, which follows immediately from the construction of the state as a statistical mixture specified in Eq.~(\ref{Eq:SmolinDef}) and permutational invariance.
This implies that if each qubit is in hands of a separate party and all the parties are restricted to LOCC manipulations only, then it is impossible to distill any entanglement. Thus the Smolin state is an example of bound entanglement \cite{HoroHoroPRL98}. To generate the Smolin state in our experimental setup, we started from two maximally entangled pairs of photons $AB$ and $A'B'$. With suitable settings of the Soleil-Babinet compensators, both the pairs were prepared in the same state $\ket{\phi_+}$. This can be converted into the Smolin state by applying randomly to one qubit from each pair, chosen to be $B$ and $B'$, equiprobable transformations $\hat{\mathbbm 1}_B \otimes \hat{\mathbbm 1}_{B'}$, $\hat{\sigma}^{x}_{B} \otimes \hat{\sigma}^{x}_{B'}$, $\hat{\sigma}^{y}_{B} \otimes \hat{\sigma}^{y}_{B'}$, or $\hat{\sigma}^{z}_{B} \otimes \hat{\sigma}^{z}_{B'}$. Differently from Ref.~\cite{AmseBourNPH09}, we realized these transformations by sending the photons $B$ and $B'$ through the same set of three waveplates, as depicted in Figure~\ref{Fig:Setup}(c). Two quarter-wave plates are equivalent either to the identity transformation or a half-wave plate depending on whether their fast axes are mutually parallel or perpendicular. Further, a suitably oriented half-wave plate realizes $\hat\sigma^x$ or $\hat\sigma^z$. This allowed us to implement all four transformations required to generate the Smolin state, since $\hat{\sigma}^y = \rmi \hat\sigma^x \hat\sigma^z$. The second example of noisy entanglement which we produced experimentally was a four qubit state \begin{multline} \hat{\varrho}_{\text{private}} = \frac{1}{4} \bigl( \proj[AB]{\phi_-} \otimes \proj[A'B']{\psi_-} + \proj[AB]{\psi_+} \otimes \proj[A'B']{\phi_+} \\ + \proj[AB]{\psi_+} \otimes \proj[A'B']{\psi_+} + \proj[AB]{\psi_+} \otimes \proj[A'B']{\phi_-} \bigr). 
\label{Eq:PrivateDef} \end{multline} This state has nontrivial properties in the context of quantum key distribution and therefore we refer to it as the private state. Let us assume that Alice and Bob are respectively in possession of qubits $AA'$ and $BB'$. Suppose that they measure qubits $A$ and $B$ in the eigenbasis of the $\hat{\sigma}^y$ operator composed of two vectors $\ket{\bar{0}}$ and $\ket{\bar{1}}$ defined as \begin{equation} \ket{\bar{\upsilon}} = \frac{1}{\sqrt{2}} \bigl( \ket{H} + \rmi (-1)^\upsilon\ket{V}\bigr), \qquad \upsilon=0,1. \end{equation} The reduced density matrix of the qubits $A$ and $B$ expressed in this basis takes the form \begin{equation} \textrm{Tr}_{A'B'}(\hat{\varrho}_{\text{private}}) = \frac{1}{2} \bigl( \proj[AB]{\bar{0}\bar{0}} + \proj[AB]{\bar{1}\bar{1}}\bigr) - \frac{1}{4} \bigl( \ket[AB]{\bar{0}\bar{0}} \bra{\bar{1}\bar{1}} + \ket[AB]{\bar{1}\bar{1}} \bra{\bar{0}\bar{0}} \bigr). \label{Eq:TrABrhoprivate} \end{equation} It is clearly seen that measurements performed by Alice and Bob on the qubits $A$ and $B$ in the $\hat{\sigma}^y$ eigenbasis yield equiprobable and perfectly correlated results $0$ or $1$. One may ask whether these results form a secure cryptographic key. If Alice and Bob had access only to qubits $A$ and $B$ this would not be the case, as the magnitude of the off-diagonal elements in Eq.~(\ref{Eq:TrABrhoprivate}) is less than $\frac{1}{2}$, which means that $\textrm{Tr}_{A'B'}(\hat{\varrho}_{\text{private}})$ is not maximally entangled. However, the presence of qubits $A'$ and $B'$ turns out to guarantee perfect security. As discussed in \cite{HoroHoroPRL05}, these two additional qubits serve as the {\em shield subsystems} preventing an eavesdropper from accessing any information about measurement results. In this context, $A$ and $B$ are often referred to as {\em key subsystems}. 
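As a consistency check, which we add here using only the definitions above, Eq.~(\ref{Eq:TrABrhoprivate}) follows directly from Eq.~(\ref{Eq:PrivateDef}): tracing out the shield qubits $A'B'$ leaves the mixture $\frac{1}{4} \proj[AB]{\phi_-} + \frac{3}{4} \proj[AB]{\psi_+}$, and in the $\hat{\sigma}^y$ eigenbasis the two Bell states read

```latex
% Consistency check (ours): the reduced key state in the \sigma^y eigenbasis.
\begin{eqnarray}
\ket{\phi_-} & = \frac{1}{\sqrt{2}} \bigl( \ket{\bar{0}\bar{0}} + \ket{\bar{1}\bar{1}} \bigr) \nonumber \\
\ket{\psi_+} & = -\frac{\rmi}{\sqrt{2}} \bigl( \ket{\bar{0}\bar{0}} - \ket{\bar{1}\bar{1}} \bigr),
\end{eqnarray}
so that
\begin{equation}
\frac{1}{4} \proj[AB]{\phi_-} + \frac{3}{4} \proj[AB]{\psi_+}
= \frac{1}{2} \bigl( \proj[AB]{\bar{0}\bar{0}} + \proj[AB]{\bar{1}\bar{1}} \bigr)
- \frac{1}{4} \bigl( \ket[AB]{\bar{0}\bar{0}} \bra{\bar{1}\bar{1}}
+ \ket[AB]{\bar{1}\bar{1}} \bra{\bar{0}\bar{0}} \bigr).
\end{equation}
```

The diagonal terms add up to $\frac{1}{8}+\frac{3}{8}=\frac{1}{2}$ and the off-diagonal ones to $\frac{1}{8}-\frac{3}{8}=-\frac{1}{4}$, reproducing Eq.~(\ref{Eq:TrABrhoprivate}).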
Interestingly, the four-qubit state $\hat{\varrho}_{\text{private}}$ has distillable entanglement strictly less than one, which means that asymptotic conversion into a smaller number of maximally entangled singlet pairs is not the optimal way to generate a cryptographic key from $\hat{\varrho}_{\text{private}}$. The state $\hat{\varrho}_{\text{private}}$ can be generated with our source of multiple photon pairs using an arrangement of wave plates shown in Figure~\ref{Fig:Setup}(d). The photon $B$ is sent through a half-wave plate which realizes either $\hat{\sigma}^x$ or $\hat{\sigma}^z$, while the photon $B'$ travels through a set of three wave plates that can be set to implement identity or any Pauli operator. Applying these transformations in a correlated manner as $\hat{\sigma}^{z}_{B} \otimes \hat{\sigma}^{y}_{B'}$, $\hat{\sigma}^{x}_{B} \otimes \hat{\mathbbm 1}_{B'}$, $\hat{\sigma}^{x}_{B} \otimes \hat{\sigma}^{x}_{B'}$, and $\hat{\sigma}^{x}_{B} \otimes \hat{\sigma}^{z}_{B'}$ yields the private state. Its generation and an experimental analysis of its information-theoretic properties were first reported in \cite{DobeKarpPRL11}. \section{Experimental characterization} \label{Sec:Characterisation} We performed a full tomographic reconstruction of the four-qubit state by sending individual photons to polarization analyzers and measuring all $81$ combinations of projections in the eigenbases of the operators $\hat\sigma^x$, $\hat\sigma^y$, and $\hat\sigma^z$. For each combination, four-fold coincidences were recorded over an interval of approximately 1~hour, resulting in the total time of an experimental run equal to about 81 hours. The orientation of wave plates introducing polarization noise was changed at $30$~s intervals. The counting circuit was put on hold during the operation of motors rotating the wave plates, which took approximately $10$~s each time. In order to test the operation of the setup, we adopted a faster alternative procedure to collect data.
In this procedure, the FPGA board was programmed to record events composed of pairs of two-fold coincidences triggered between paths $AB$ and $A'B'$, but not necessarily within the time window of the same pulse. Specifically, after registering a two-fold coincidence for one combination of paths (either $AB$ or $A'B'$) the counting circuit waited for a two-fold coincidence between detectors monitoring the other combination, and recorded the result as a four-fold event. Additional two-fold coincidences involving the first combination of paths that occurred during the waiting window were ignored. This procedure allowed us to collect four-fold events at a rate that was approximately half the rate of producing single photon pairs, reducing the overall measurement time to approximately $3$ hours. In this case, the limiting factor was the speed of the motorized rotation stages. Data collected using the fast procedure were used to verify the properties of the generated state before a full experimental run. In Figure~\ref{Fig:Rho} we present density matrices for the Smolin state and the private state reconstructed from the experimental data using the maximum likelihood method \cite{HradPRA97,BanaDAriPRA99,JameKwiaPRA01}. These experimental results can be compared against the idealized states by calculating the corresponding fidelities, defined in general for two states $\hat{\varrho}$ and $\hat{\varrho}'$ as \begin{equation} F(\hat{\varrho}, \hat{\varrho}') = \textrm{Tr} \left( \sqrt{\sqrt{\hat{\varrho}}\hat{\varrho}'\sqrt{\hat{\varrho}}}\right). \end{equation} We obtained values $F=0.923 \pm 0.002$ for the Smolin state and $F=0.971 \pm 0.001$ for the private state. The experimental Smolin state fidelity is comparable to that obtained in two other experiments aiming at generating $\hat{\varrho}_{\text{Smolin}}$ in a four-photon system \cite{AmseBourNPH09,LavoKaltPRL10}, which suggests that it may be at the limit of what can be achieved in a typical realization with standard optical elements.
The higher fidelity for the private state may be attributed to the fact that in this case the polarization noise was introduced with fewer birefringent elements, thus reducing the effects of their imperfections. \begin{figure} \begin{center} \includegraphics[width=10cm,bb = 90 10 470 375]{figure2a.eps} \vspace{0.125in} \includegraphics[width=10cm,bb = 55 10 450 385]{figure2b.eps} \end{center} \caption{Absolute values of the elements of reconstructed density matrices for (a) the Smolin state and (b) the private state. } \label{Fig:Rho} \end{figure} For the Smolin state, we used polarization measurements on individual photons to determine the expectation value of an entanglement witness given by \cite{AmseBourNPH09} \begin{equation} \hat{W} = \hat{\mathbbm 1}^{\otimes 4} - ( \hat{\sigma}^x )^{\otimes 4} - ( \hat{\sigma}^y )^{\otimes 4} - ( \hat{\sigma}^z )^{\otimes 4} \end{equation} which was found to be equal to $\langle \hat{W} \rangle = -1.43 \pm 0.02$, verifying the nonclassical character of the generated state; for the ideal Smolin state, each of the three Pauli terms has expectation value one, giving $\langle \hat{W} \rangle = -2$. We also calculated eigenvalues of the partially transposed density matrix with respect to three possible partitions into two pairs of qubits. The results are presented in Table~\ref{Tab:Eigenvalues}. It is seen that for each partition some of the eigenvalues are negative. A similar feature was also present in the first experimental generation of the Smolin state reported in \cite{AmseBourNPH09}. The reason behind the occurrence of negative eigenvalues is that the theoretical Smolin state $\hat{\varrho}_{\text{Smolin}}$ is located exactly on the boundary of positive partial transposition states. Non-ideal implementation of the polarization noise, imperfect alignment of the polarization analyzers, and the statistical uncertainty of the measured density matrix may therefore easily produce residual entanglement in the reconstructed data for any partition.
This problem can be solved by generating a mixture of $\hat{\varrho}_{\text{Smolin}}$ and a completely mixed four-qubit state \cite{KampBrusPRA10,LavoKaltPRL10} which for a suitable choice of relative weights demonstrates the phenomenon of bound entanglement in an experimentally robust way. \begin{table} \caption{Eigenvalues of the partially transposed density matrix characterizing the experimentally generated Smolin state, obtained for three possible partitions into two pairs of qubits. } \label{Tab:Eigenvalues} \begin{center} \begin{tabular}{ c c c c } \hline\hline Theory & $AB\mathop{:}A'B'$ & $AB'\mathop{:}A'B$ & $AA'\mathop{:}BB'$ \\ \hline $0.250$ & $0.229$ & $0.228$ & $0.229$ \\ $0.250$ & $0.216$ & $0.216$ & $0.217$ \\ $0.250$ & $0.214$ & $0.215$ & $0.213$ \\ $0.250$ & $0.202$ & $0.202$ & $0.204$ \\ $0.000$ & $0.036$ & $0.034$ & $0.034$ \\ $0.000$ & $0.026$ & $0.029$ & $0.029$ \\ $0.000$ & $0.024$ & $0.025$ & $0.023$ \\ $0.000$ & $0.022$ & $0.023$ & $0.022$ \\ $0.000$ & $0.016$ & $0.016$ & $0.015$ \\ $0.000$ & $0.011$ & $0.011$ & $0.012$ \\ $0.000$ & $0.008$ & $0.009$ & $0.009$ \\ $0.000$ & $-0.005$ & $-0.007$ & $-0.006$ \\ $0.000$ & $-0.003$ & $0.005$ & $0.004$ \\ $0.000$ & $0.003$ & $-0.004$ & $-0.003$ \\ $0.000$ & $0.001$ & $-0.002$ & $-0.002$ \\ $0.000$ & $0.000$ & $0.001$ & $0.001$ \\ \hline\hline \end{tabular} \end{center} \end{table} Experimental characterization of the privacy properties of the state $\hat{\varrho}_{\text{private}}$ has been described in Ref.~\cite{DobeKarpPRL11}. The reconstruction of information-theoretic quantities from experimental data was found to be very sensitive to statistical uncertainties due to highly non-linear dependence on the elements of the density matrix. The statistical distributions for the quantities of interest were obtained by evaluating them on individual density matrices that formed an ensemble consistent with experimental data. 
The results demonstrated a statistically significant separation between the distillable entanglement and the key contents for the generated state. Private states exhibit other specifically nonclassical properties. In particular, Augusiak {\em et al.} \cite{AuguCavaPRL10} have recently presented a general theoretical proof that perfect privacy implies incompatibility with local realistic theories. This motivated us to test whether the experimentally generated state, despite its non-ideal privacy, can be used to demonstrate a violation of Bell's inequalities. The proof presented in Ref.~\cite{AuguCavaPRL10} is based on an observation that any private state can be brought by local operations (without classical communication, so as to satisfy the locality requirement) to a form in which the key subsystems exhibit correlations violating local realism. In our case, it is easy to see that for the ideal state $\hat{\varrho}_{\text{private}}$ defined in Eq.~(\ref{Eq:PrivateDef}) no local operations are necessary. This is because the reduced density matrix of the qubits $AB$, given explicitly by \begin{equation} \label{Eq:rhoab} \textrm{Tr}_{A'B'} (\hat{\varrho}_{\text{private}}) = \frac{1}{4} \proj[AB]{\phi_-} + \frac{3}{4} \proj[AB]{\psi_+}, \end{equation} is a statistical mixture of two Bell states with unequal weights. With a suitable choice of projective measurements, any such mixture violates the Clauser-Horne-Shimony-Holt (CHSH) inequality, which follows from the set of necessary and sufficient conditions derived in \cite{HoroHoroPLA95}. In order to test the CHSH inequality we performed polarization measurements on the key photons in coincidence with detectors monitoring the shield photons.
The CHSH inequality can be written as \begin{equation} -2 \le {\cal B} \le 2, \end{equation} where \begin{equation} {\cal B} = C({\bf a}; {\bf b}) + C({\bf a}'; {\bf b}) + C({\bf a}; {\bf b}') - C({\bf a}'; {\bf b}') \end{equation} is a combination of four correlation functions \begin{equation} C({\bf a}; {\bf b}) = \bigl\langle ({\bf a}\cdot \hat{\boldsymbol\sigma} )\otimes ({\bf b}\cdot \hat{\boldsymbol\sigma} ) \bigr\rangle \end{equation} for Bloch vectors ${\bf a}$, ${\bf a}'$, ${\bf b}$, and ${\bf b}'$ that define the measurement bases. For the specific state given in Eq.~(\ref{Eq:rhoab}), the maximum violation of the CHSH inequality is obtained for the choice of vectors: \begin{equation} {\bf a} = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \quad {\bf a}' = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}, \quad {\bf b} = \begin{pmatrix} 0 \\ \frac{2}{\sqrt{5}} \\ - \frac{1}{\sqrt{5}} \end{pmatrix}, \quad {\bf b}' = \begin{pmatrix} 0 \\ \frac{2}{\sqrt{5}} \\ \frac{1}{\sqrt{5}} \end{pmatrix}, \end{equation} resulting in the value of the CHSH combination equal to ${\cal B} = \sqrt{5} \approx 2.236$. Polarization measurements performed on the key photons in these bases yielded the result ${\cal B} = 2.12 \pm 0.01$, which is clearly above the limit permitted by local realistic theories. It is interesting to note that the reduced density matrix of the key qubits $AB$ has been found in Ref.~\cite{DobeKarpPRL11} to contain no distillable key due to experimental imperfections. Thus the violation of the CHSH inequality turns out to be a more robust way to detect quantum correlations contained in this two-qubit state. This is understandable, since the CHSH combination can be written as a single quantum mechanical observable, while the calculation of the key contents requires highly nonlinear processing of the reconstructed density matrix. 
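As a cross-check, which we add here, the quoted value follows from the diagonal correlation tensor of the state in Eq.~(\ref{Eq:rhoab}). Using the standard Bell-state tensors $T_{\phi_-} = \mathrm{diag}(-1,1,1)$ and $T_{\psi_+} = \mathrm{diag}(1,1,-1)$, the mixture has $T = \frac{1}{4} T_{\phi_-} + \frac{3}{4} T_{\psi_+} = \mathrm{diag}\bigl(\frac{1}{2}, 1, -\frac{1}{2}\bigr)$, and evaluating $C({\bf a};{\bf b}) = \sum_{\mu} T_{\mu\mu} a_\mu b_\mu$ for the Bloch vectors above gives

```latex
% Cross-check (ours) of the predicted CHSH violation for Eq. (\ref{Eq:rhoab}).
\begin{eqnarray}
C({\bf a};{\bf b}) = C({\bf a};{\bf b}') & = \frac{2}{\sqrt{5}}, \qquad
C({\bf a}';{\bf b}) = -C({\bf a}';{\bf b}') = \frac{1}{2\sqrt{5}}, \nonumber \\
{\cal B} & = \frac{2}{\sqrt{5}} + \frac{1}{2\sqrt{5}} + \frac{2}{\sqrt{5}}
+ \frac{1}{2\sqrt{5}} = \sqrt{5},
\end{eqnarray}
```

consistent with the criterion of \cite{HoroHoroPLA95}, which gives ${\cal B}_{\max} = 2\sqrt{t_1^2+t_2^2} = 2\sqrt{1+\frac{1}{4}} = \sqrt{5}$ for the two largest singular values $t_1 = 1$ and $t_2 = \frac{1}{2}$ of $T$.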
This problem may be alleviated by the development of more efficient methods to characterize the amount of distillable key based on a single or a few observables, such as those recently presented in \cite{BanaHoroXXX11}. \section{Conclusions and outlook} \label{Sec:Conclusions} In conclusion, we presented an arrangement to collect photon pairs in maximally entangled polarization states from a single set of two type-I down-conversion crystals. Application of correlated noise introduced using rotating wave plates enabled us to produce noisy entangled four-photon states that illustrated fundamental results of the entanglement theory. \begin{figure} \begin{center} \includegraphics[width=10cm]{figure3.eps} \end{center} \caption{The Hong-Ou-Mandel interference dip between heralded photons from two independent pairs generated in a single beta-barium borate crystal. The interfered photons were transmitted through $2~\mathrm{nm}$ interference filters. The delay was introduced by translating one of the collimators coupling photons into a single-mode fiber. The experimental data (squares with error bars) are fitted with a Gaussian profile (solid line).} \label{fig:hom} \end{figure} Characterization of information-theoretic properties of the generated states was performed by measuring polarizations of individual photons in suitably selected bases. If measurements are restricted to this class, the produced photon pairs can be correlated in other degrees of freedom, e.g.\ frequency, provided that the modal structure of the pair is independent of the polarization state \cite{URenBanaQIC03,AlfredLasPhys}. For the configuration based on two type-I crystals, this condition is satisfied over relatively large bandwidths of the generated photons, which follows from the symmetry of the type-I down-conversion process \cite{DragPRA04}. 
This allows one to avoid heavy spectral filtering and consequently offers increased four-fold coincidence rates, which are notoriously low in most multiphoton experiments. However, if independently generated photons are to be interfered, it is necessary to ensure their spectral indistinguishability. This requirement can be verified with the help of the Hong-Ou-Mandel two-photon interference effect \cite{HongOuPRL85}. We carried out a preliminary test by producing two pairs in a single crystal and interfering photons from different pairs in an event-ready manner \cite{ZukoZeilPRL93} using a single-mode fiber optic directional coupler with a $50\mathop{:}50$ splitting ratio. When the interfered photons were transmitted through 2~nm bandwidth interference filters, $79\%$ depth of the Hong-Ou-Mandel dip was observed, as shown in Figure~\ref{fig:hom}. This allows for some optimism about using the described source in more sophisticated experiments utilizing multiphoton interference effects. \ack We wish to acknowledge insightful discussions with Czes{\l}aw Radzewicz and Wojciech Wasilewski. This research was supported by FP7 FET projects CORNER (contract no.\ 213681) and Q-ESSENCE (contract no.\ 248095), and the Foundation for Polish Science TEAM project cofinanced by the EU European Regional Development Fund. PH is partially supported by Polish Ministry of Science and Higher Education grant no. N N202 261938. \section*{References}
\section{Active Learning and Proofreading} \label{sec:active} AL aims to train a model with minimal user input by selecting small subsets of examples that are the most informative. Formally, our algorithm starts with a small set of labeled edges $S_{0}$. We then repeat the following steps: At iteration $t$, we use the annotated set of edges $S_{t-1}$ to train classifier $C_{t-1}$ and select one or more edges to be labeled by the user and added to $S_{t-1}$ to form $S_{t}$. The edge(s) we select are those that maximize the criterion $\Delta c$ of Eq.~\ref{eq:weightMetric}. By contrast, proofreading occurs \emph{after} the classifier has been trained and a complete delineation has been produced. At this point, the main concern is not to further improve the classifier, but simply to correct potential mistakes. Therefore, the most crucial edges are those that are misclassified and whose presence or absence most affects the topology of the delineation. To find them, we again compute the $\Delta c$ value for each edge. However, some edges could have a high $\Delta c$ because they are misclassified, even though they do not influence the topology of the final delineation. To focus on potential mistakes that do affect the topology strongly, we rely on the DIADEM score~\cite{Ascoli10}, which captures the topological differences between trees, such as connectivity changes and missing or spurious branches. It ranges from 0 to 1; the larger the score, the more similar the two trees are. More specifically, let $\bR^*$ be the optimal tree given the edge weights, and let $\bR'_i$ be the tree we obtain when changing the weight of edge $e_i$ from $w_i$ to $w'_i$, as described in Section~\ref{sec:weights}. To measure the importance of each edge, we compute the score \begin{equation} s_i = \frac{\Delta c_i}{\mathrm{DIADEM}(\bR^*,\bR'_i)} \label{eq:diadem} \end{equation} and ask the user to check the highest-scoring one. 
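One iteration of the selection step above can be sketched as follows. The helpers \texttt{solve}, \texttt{diadem} and \texttt{flip} (names are ours) are placeholders for the delineation solver of Section~\ref{sec:mip}, the DIADEM score, and the weight change of Section~\ref{sec:weights}:

```python
def select_edge_to_check(edges, weights, solve, diadem, flip, eps=1e-9):
    """Pick the edge maximizing s_i = Delta c_i / DIADEM(R*, R'_i) (Eq. 2).

    `solve(weights)` returns the optimal edge subset R* for the given weights,
    `diadem(t1, t2)` returns a similarity in [0, 1] (1 = identical topology),
    `flip(w)` implements the weight change of Section "Changing the Weights".
    All three are stand-ins for the components described in the paper.
    """
    base = solve(weights)
    base_cost = sum(weights[e] for e in base)
    best_edge, best_score = None, float("-inf")
    for e in edges:
        changed = dict(weights)
        changed[e] = flip(weights[e])
        new = solve(changed)
        # Delta c_i = c(R*, W) - c(R'_i, W')
        delta_c = base_cost - sum(changed[x] for x in new)
        sim = max(diadem(base, new), eps)  # guard: a DIADEM score of 0 would divide by zero
        score = delta_c / sim
        if score > best_score:
            best_edge, best_score = e, score
    return best_edge
```

The returned edge is the one shown to the annotator; the delineation is then re-solved with the user-corrected weight, exactly as described above.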
The edge is assigned a weight equal to $A$ or $B$ from Section~\ref{sec:weights} according to the user's response. We then recompute $\bR^*$ and repeat the process. Note that this is very different from traditional proofreading approaches, which require the user to visually inspect the whole image. By contrast, our user only has to give an opinion about one edge at a time, which is automatically selected and presented to them. \vspace{1cm} \section{Conclusions}\label{sec:conclusion} We have presented an attention scheme that significantly reduces the annotation effort involved both in creating training data for supervised Machine Learning and in proofreading results for delineation tasks. It does so by detecting possibly misclassified samples and considering their influence on the topology of the reconstruction. We showed that our method outperforms baselines on a variety of microscopy image stacks and can be used in interactive applications thanks to its efficient formulation. \section{Attention Mechanism} \label{sec:focus} \subsection{Graph-Based Delineation} \label{sec:delin} Delineation algorithms usually start by computing a tubularity measure \cite{Law08,Turetken13c,Sironi16a}, which quantifies the likelihood that a tubular structure is present at a given image location. Next, they extract either high-tubularity superpixels likely to be tubular structure fragments~\cite{Santamaria-Pang15,Montoya14} or longer paths connecting points likely to be on the centerline of such structures~\cite{Gonzalez08,Breitenreicher13,Neher15,Turetken16a}. Each superpixel or path is treated as an edge $e_i$ of an over-complete spatial graph $\bG$ (see Fig.~\ref{fig:graph}(a)) and is characterized by an image-based feature vector $\bx_i$. Let $\bE$ be the set of all such edges, which is expected to be a superset of the set $\bR$ of edges defining the true curvilinear structure, as shown in Fig.~\ref{fig:graph}(d).
If the events of each edge $e_i$ being present in the reconstruction are assumed to be independent (conditional on the image evidence $\bx_i$), then the most likely subset $\bR^*$ is the one minimizing \begin{equation} c(\bR) = \sum_{e_i\in \bR} w_i, \mbox{ with } w_i=-\log\frac{p(y_i = 1|\bx_i)}{p(y_i = 0|\bx_i)} \; , \label{eq:ObjF} \end{equation} where $w_i \in \mathbb{R}$ is the {\it weight} assigned to edge $e_i$ and $y_i$ is a binary class label denoting whether $e_i$ belongs to the final reconstruction or not. This optimization is subject to certain geometric constraints; for example, a state-of-the-art method presented in~\cite{Turetken16a} solves a more complex Mixed Integer Program (\QMIP{}), which uses linear constraints to force the reconstruction to form a connected network (or a tree). As described in Section~1 of the supplementary material, we were able to reformulate the original optimization scheme and obtain major speedups which make it practical even when delineations must be recomputed often. There, we also show that it yields better results than a more basic method, Minimum Spanning Tree with Pruning~\cite{Gonzalez08}, while also being able to handle non-tree networks. Let us remark that finding the minimizing $\bR$ is trivial to parallelize. The probabilities appearing in Eq.~\ref{eq:ObjF} can be estimated in many ways. A simple and effective one is to train a discriminative classifier for this purpose~\cite{Breitenreicher13,Montoya14,Turetken16a}. However, the performance critically depends on how well-trained the classifier is. A few misclassified edges can produce drastic topology changes, affecting the whole reconstruction, as shown in Fig.~\ref{fig:mistakes}. In this paper we address both issues with a single generic criterion. \subsection{Error Detection} \label{sec:error} The key to both fast proofreading and efficient AL is to quickly find potential mistakes, especially those that are critical for the topology.
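Recall that each weight in Eq.~(\ref{eq:ObjF}) is just the negative log-odds of the classifier's probability estimate; a minimal sketch (the clamping constant is ours, to avoid infinities at $p \in \{0, 1\}$):

```python
import math

def edge_weight(p_edge, eps=1e-12):
    """w_i = -log[ p(y_i=1|x_i) / p(y_i=0|x_i) ], as in Eq. (eq:ObjF).

    Negative weight: the edge is likely part of the true structure, so
    including it lowers the cost; a positive weight penalizes inclusion.
    """
    p = min(max(p_edge, eps), 1.0 - eps)  # clamp away from 0 and 1
    return -math.log(p / (1.0 - p))
```

A classifier output of $0.5$ maps to weight zero, so such edges neither help nor hurt the objective, which is exactly why near-zero weights are the likely mistakes discussed next.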
In this work, we take {\it critical mistakes} to mean erroneous edge weights $w_i$ that result in major changes to the cost $c(\bR^*,\bW)$ of the reconstruction. In other words, if changing a specific weight can significantly influence the delineation, we must ensure that the weight is correct. We therefore measure this influence, alter the edge weights accordingly, and recompute the delineation. \subsubsection{Delineation-Change Metric} \label{sec:metric} We denote by $\bR^*$ the edge subset minimizing the objective (cost) function $c(\bR, \bW) = \sum_{e_i \in \bR} w_i$ given a particular set $\bW$ of weights assigned to edges in $\bG$. Changing the weight $w_i$ of edge $e_i$ to $w_i'$ will lead to a new graph with optimal edge subset $\bR'_i$. We can thus define a delineation-change metric, which evaluates the cost of changing the weight of an edge $e_i \in \bE$: \begin{equation} \Delta c_i = c(\bR^*,\bW) - c(\bR'_i,\bW') \; . \label{eq:weightMetric} \end{equation} If $\Delta c_i > 0$, the cost has decreased; we can conjecture that the overall reconstruction benefits from this weight change and therefore the weight value may be worth investigating by the annotator as a potential mistake. The converse is true if $\Delta c_i < 0$. In other words, this very simple metric gives us a way to gauge the influence of an edge weight on the overall reconstruction. \subsubsection{Changing the Weights} \label{sec:weights} \begin{figure*}[] \centering \subfloat[]{\includegraphics[height=0.35\textwidth]{fig/supplementary/beforeConversion}} \subfloat[]{\includegraphics[height=0.35\textwidth]{fig/supplementary/afterConversion}} \caption{(a) Two Gaussian distributions corresponding to positive (green) and negative (red) classes of edges. (b) The effect of weight transformation; the original distributions are drawn with solid lines, while the corresponding distributions after the transformation are drawn with dashed lines. 
The described transformation causes ``swapping'' of the distributions corresponding to the two classes.} \label{fig:WeightDistribution} \end{figure*} For our cost change criterion to have practical value, we must alter weights in such a way that $\Delta c_i$ is largest for edges which require the opinion of an annotator. In practice, the weights of positive-class edges tend to follow a Gaussian distribution with negative mean and a variance such that few of them are positive values, as shown in Fig.~\ref{fig:WeightDistribution}(a). Similarly, negative edges follow a Gaussian distribution with positive mean, few of them being negative. As a result, most of the mistaken edges have $|w_i| \approx 0$. In order for our delineation-change metric to be informative, we must ensure that attention-worthy edges (probable mistakes) have high values of $\Delta c_i$. To achieve this, we must not only flip the sign of the weight (implying assigning it to the opposite class), but also increase the absolute value of likely mistakes. Without this, many of the mistakes with $|w_i| \approx 0$ could be omitted due to smaller values of $\Delta c_i$ compared to edges with weights of higher absolute value, which are much less likely to be mistakes. The above requirements can be satisfied with the following transformation: \begin{equation} w'_i = \begin{cases} A + w_i & \mbox{if } w_i > 0, \\ B + w_i & \mbox{if } w_i < 0. \end{cases} \label{eq:weightChange} \end{equation} It is equivalent to swapping the distributions corresponding to positive and negative edges, as shown in Fig.~\ref{fig:WeightDistribution}(b). We take $A$ and $B$ to be the 10\% and 90\% quantiles of the weight distribution (for robustness to outliers). These are near-extreme values of the weights for the positive and negative classes respectively, which we use as attractors for $w'_i$: for small positive $w_i$ we want $w'_i$ to be close to $A$, and for negative ones to $B$ instead.
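This transformation of Eq.~(\ref{eq:weightChange}) can be sketched as follows, with a small linearly interpolated quantile helper of our own:

```python
def quantile(values, q):
    """Linearly interpolated q-quantile (helper of ours, q in [0, 1])."""
    xs = sorted(values)
    pos = q * (len(xs) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])

def swap_weights(weights):
    """Eq. (eq:weightChange): w' = A + w if w > 0, else B + w, with A and B
    the 10% and 90% quantiles of the weight distribution. Small-magnitude
    weights (likely mistakes) are mapped near the opposite class's extreme."""
    A = quantile(weights, 0.10)
    B = quantile(weights, 0.90)
    return [A + w if w > 0 else B + w for w in weights]
```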
The weight change is therefore likely to yield a significant $\Delta c_i$ for probable mistakes. Finally, for edges whose weight is negative but which nevertheless do not belong to the reconstruction, we take $\Delta c_i$ to be $w'_i$ to ensure that it is positive and that more uncertain edges are assigned higher $\Delta c_i$. \section{Introduction} Complex and extensive curvilinear structures include blood vessels, pulmonary bronchi, nerve fibers and neuronal networks among others. Many state-of-the-art approaches to automatically delineating them rely on supervised Machine Learning techniques. For training purposes, they require \textit{annotated} ground-truth data in large quantities to cover a wide range of potential variations due to imaging artifacts and changes in acquisition protocols. For optimal performance, these variations must be featured in the training data, as they can produce drastic changes in appearance. Furthermore, no matter how well-trained the algorithms are, they will continue to make mistakes, which must be caught by the user and corrected. This is known as {\it proofreading} -- a slow, tedious and expensive process when large amounts of image data or 3D image stacks are involved, to the point that it is considered a major bottleneck for applications such as neuron reconstruction~\cite{Peng11c}. \input{fig/graph} In other words, human intervention is required both to create training data before running the delineation algorithm and to correct its output thereafter. Current approaches to making this less tedious focus on providing better visualization and editing tools~\cite{Dercksen14,Peng11c}. While undoubtedly useful, this is not enough. We therefore propose an Active Learning (AL)~\cite{Settles10} approach to direct the annotator's attention to the most critical samples. It takes into account the expected change in reconstruction that can result from labeling specific paths.
It can be used both for fast annotation purposes and, later, to detect potential mistakes in machine-generated delineations. More specifically, consider an algorithm such as those of~\cite{Santamaria-Pang15,Montoya14,Turetken16a,Neher15,Peng11b}, whose workflow is depicted by Fig.~\ref{fig:graph}. It first builds a graph whose nodes are points likely to lie on the linear structures and whose edges represent paths connecting them. Then it assigns a weight to each edge based on the output of a discriminative classifier. Since the result is critically dependent on the weights, it is important that the classifier is trained well. Finally, the reconstruction algorithm finds a subgraph that maximizes an objective (cost) function dependent on the edge weights, subject to certain constraints. However, even very small mistakes can result in very different delineations, as shown in Fig.~\ref{fig:mistakes}. Our main insight is that the decision about which edges to annotate or proofread should be based on their influence on the cost of the network. Earlier methods either ignore the network topology altogether~\cite{Freytag14} or only take it into consideration locally~\cite{Mosinska16}, whereas we consider it globally. Our contribution is therefore a cost- and topology-based criterion for detecting attention-worthy edges. We demonstrate that this can be used for both AL and proofreading, allowing us to drastically reduce the required amount of human intervention when used in conjunction with the algorithm of~\cite{Turetken16a}. To make it practical for interactive applications, we also reformulate the latter to speed it up considerably -- it runs nearly in real-time and it can handle much larger graphs than~\cite{Turetken16a}. The remainder of this paper is organized as follows. First, in Section~\ref{sec:focus}, we describe our attention mechanism for selecting important edges in the delineation. 
In Section~\ref{sec:active} we explain how this mechanism can be used for Active Learning and proofreading purposes. Then, in Section~\ref{sec:mip}, we introduce a new, more efficient formulation of the state-of-the-art Mixed Integer Programming delineation algorithm that ensures fast and reliable reconstruction. Finally, in Section~\ref{sec:results}, we compare the performance of our algorithm against conventional techniques. \section{Fast Reconstruction of Curvilinear Structures} \label{sec:mip} To delineate networks of curvilinear structures, we rely on the algorithm of~\cite{Turetken16a}, which involves solving the following problem: \begin{framed} \begin{center}\textbf{Min-Weight Tree Containing $r$ (MinTree)} \end{center} \begin{description} \item[\textnormal{\emph{Given:}}]A graph $\bG = (V,\bE)$, a root vertex $r \in V$, weights on edges $w : \bE \to \mathbb{R}$. Weights may be negative. \item[\textnormal{\emph{Find:}}] A tree $\bR \subseteq \bG$ containing the vertex $r$, minimizing the sum of weights of picked edges $\sum_{e \in \bR} w(e)$. \end{description} \end{framed} In our approach, MinTree is used when we expect the ground-truth image to be a tree. If such an assumption is not realistic (loopy networks, such as blood vessels), then we are instead interested in the following problem MinSubgraph: \begin{framed} \begin{center}\textbf{Min-Weight Connected Subgraph Containing $r$ (MinSubgraph)} \end{center} \begin{description} \item[\textnormal{\emph{Given:}}]A graph $\bG = (V,\bE)$, a root vertex $r \in V$, weights on edges $w : \bE \to \mathbb{R}$. Weights may be negative. \item[\textnormal{\emph{Find:}}] A connected subgraph $\bR \subseteq \bG$ which contains the vertex $r$ (and is not necessarily a tree), minimizing the sum of weights of picked edges $\sum_{e \in \bR} w(e)$. 
\end{description} \end{framed} Both problems are significantly harder than the Minimum Spanning Tree problem, because $\bR$ does not need to connect the entire graph and also the weights may be negative. In fact, both problems are NP-complete; we demonstrate this later in Proposition~\ref{prop:nphard}. In both~\cite{Turetken16a} and our approach they are solved using a Mixed Integer Programming (\QMIP{}) formulation, which is given as input to the Gurobi solver.\footnote{\cite{Turetken16a} also introduce a more advanced algorithm, which uses a formulation with quadratic weights, i.e., weights on pairs of adjacent edges, rather than a linear weight function; this makes the computational burden even heavier.} However, the previously considered formulation (see the model Arbor-IP in \cite{Turetken16a} and also the model M-DG in~\cite{BlumCalvo15}) has $|V||\bE|$ variables and as many constraints. This makes solving it costly for small graphs and impossible for larger ones. Our contribution is a new, linear-size \QMIP{} model for this problem. In Section~\ref{sec:our_formulation} we introduce our formulation and argue for its correctness. In Section~\ref{sec:hardness} we prove the NP-hardness of the considered problems. The major running time improvements that the new formulation brings about are measured in Section~\ref{sec:running_time}. Let us mention in passing that Blum and Calvo \cite{BlumCalvo15} also propose a ``matheuristic'' approach to solving MinTree -- although with no optimality guarantees. \subsection{Our Formulation} \label{sec:our_formulation} First, we describe how to obtain a \QMIP{} for MinTree. We replace each undirected edge with two directed edges, so as to work with a directed graph. Our objective is to find a directed tree in which every edge is directed away from the root $r$ (a so-called $r$-arborescence).
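Before turning to the formulation itself, a brute-force reference solver makes the MinTree statement concrete and is handy for validating any \QMIP{} output on tiny graphs (a sketch of our own, exponential time, for sanity checks only):

```python
from itertools import combinations

def is_tree_containing_root(subset, r):
    """True iff the undirected edge set is a tree whose vertices include r
    (the empty set stands for the trivial tree {r})."""
    if not subset:
        return True
    verts = set()
    for u, v, _ in subset:
        verts.update((u, v))
    if r not in verts or len(subset) != len(verts) - 1:
        return False  # r not spanned, or wrong edge count for a tree
    adj = {x: [] for x in verts}
    for u, v, _ in subset:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {r}, [r]
    while stack:
        for y in adj[stack.pop()]:
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return seen == verts  # connected and |E| = |V| - 1  =>  tree

def min_tree_brute_force(edges, r):
    """MinTree by exhaustive search over edge subsets; edges = [(u, v, w)]."""
    best_cost, best = 0.0, ()  # the empty tree {r} has cost 0
    for k in range(1, len(edges) + 1):
        for subset in combinations(edges, k):
            if is_tree_containing_root(subset, r):
                cost = sum(w for _, _, w in subset)
                if cost < best_cost:
                    best_cost, best = cost, subset
    return best_cost, set(best)
```

Note how negative weights matter: the empty tree is optimal unless some subtree through $r$ has negative total weight.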
We associate a binary variable $x_{uv} \in \{0,1\}$ with each directed edge $(u,v) \in \bE$, denoting the presence of the edge in the solution $\bR$. The first two linear constraints to consider are: \begin{itemize} \item any vertex $v$ has at most one incoming edge ($r$ has none) (see equations (\ref{eq:at_most_one_incoming}--\ref{eq:at_most_one_incoming_root}) below), \item an edge $(u,v)$ can be in the solution only if $u$ has an incoming edge in the solution (or $u = r$) \eqref{eq:only_if_has_incoming}. \end{itemize} These conditions almost require the solution to be an $r$-arborescence, but not quite; namely, there can still appear directed cycles (possibly with some adjoined trees). One way to deal with this issue is to enforce that every non-isolated vertex is connected to the root; this can be done using network flows. The constraints in the previous formulation require that, for every $v$ with an incoming edge, there should exist a flow $\{f_e^v\}_{e \in \bE}$ of value $1$ from $r$ to $v$. However, this leads to a large program ($|V||\bE|$ variables). Our way around this is to instead require the existence of a single flow $\{f_e\}_{e \in \bE}$ from the source vertex $r$ to some set of sinks. The main constraints are that: \begin{itemize} \item for every vertex $v \ne r$, if $v$ has an incoming edge (i.e., $v$ is not an isolated vertex in the solution, but is spanned by $\bR$), then the inflow into $v$ is at least $1$ more than the outflow (otherwise it is greater or equal to the outflow) \eqref{eq:flow_conservation}, \item $f$ is supported only on the support of $x$ (that is, the flow $f$ only uses edges which are used by the solution $\bR$) \eqref{eq:flow_only_on_x}. \end{itemize} Since $x$ has no edges into the root, neither does $f$. Thus $f$ is indeed a flow (within the $x$-subgraph) from the source $r$ to the sink set being the set of all active vertices. We write down our \QMIP{} formulation below. 
We use the following notation: $x(F) = \sum_{e \in F} x(e)$ for a subset $F \subseteq \bE$, $\delta^+(v)$ is the set of (directed) edges outgoing from vertex $v$, and $\delta^-(v)$ is the set of (directed) edges incoming into vertex $v$. Thus e.g. $f(\delta^+(v))$ is the total $f$-flow outgoing from vertex $v$. \begin{framed} \begin{alignat}{3} \text{minimize} & & \sum_{(u,v) \in \bE} & w(u,v) x_{uv} & & \nonumber \\ \text{subject to} & & x_{uv} &\in \{0,1\} & & \qquad \forall (u,v) \in \bE \nonumber \\ & & x(\delta^-(v)) &\le 1 & & \qquad \forall v \in V \setminus \{ r \} \label{eq:at_most_one_incoming} \\ & & x(\delta^-(r)) &= 0 & & \label{eq:at_most_one_incoming_root} \\ & & x_{uv} &\le x(\delta^-(u)) & & \qquad \forall (u,v) \in \bE, u \ne r \label{eq:only_if_has_incoming} \\ & & f(\delta^-(v)) - f(\delta^+(v)) &\ge x(\delta^-(v)) & & \qquad \forall v \in V \setminus \{r\} \label{eq:flow_conservation} \\ & & f_{uv} &\ge 0 & & \qquad \forall (u,v) \in \bE \nonumber \\ & & f_{uv} &\le (|V|-1) \cdot x_{uv} & & \qquad \forall (u,v) \in \bE. \label{eq:flow_only_on_x} \end{alignat} \end{framed} The following proposition explains the correctness of our formulation. \begin{proposition} \label{prop:soundness} For any $\bR \subseteq \bE$, the corresponding vector $x \in \{0,1\}^{\bE}$ is feasible for the \QMIP{} formulation\footnote{More precisely, there exists $f \in \mathbb{R}_+^{\bE}$ such that $(x,f)$ is feasible for the \QMIP{} formulation, where $x$ is obtained from $\bR$ by directing all edges to point away from~$r$.} iff $\bR$ is a tree containing the root $r$. \end{proposition} \begin{proof} ($\Longrightarrow$) By \eqref{eq:at_most_one_incoming}, edges $(u,v)$ with $x_{uv} = 1$ form a (directed) subgraph where every vertex has indegree at most 1. 
It is not hard to see that each connected component of such a graph is either a tree or a cycle (possibly with adjoined trees); the cycle case is impossible if the component contains $r$ (by \eqref{eq:at_most_one_incoming_root}). We show that actually there is no connected component except the one containing $r$. Towards a contradiction suppose that $S \subseteq V \setminus \{r\}$ is such a component; we will show that the flow conservation constraints \eqref{eq:flow_conservation} must be violated. Denote by $\delta^+(S) = \{ (u,v) \in \bE : u \in S, v \not \in S \}$ the outgoing edges of $S$, and by $\delta^-(S)$ the incoming edges. We have $x(\delta^+(S)) = x(\delta^-(S)) = 0$ and thus, by \eqref{eq:flow_only_on_x}, $f(\delta^+(S)) = f(\delta^-(S)) = 0$. However, by summing up \eqref{eq:flow_conservation} over $v \in S$ we get $f(\delta^-(S)) - f(\delta^+(S)) \ge \sum_{v \in S} x(\delta^-(v))$; the left side is $0$ but the right side is positive, a contradiction.\footnote{The observant reader will notice that the constraint \eqref{eq:only_if_has_incoming} is redundant. However, we keep it for clarity of exposition and because it makes solving the program faster in practice.} ($\Longleftarrow$) It is easy to see that constraints (\ref{eq:at_most_one_incoming}--\ref{eq:only_if_has_incoming}) are satisfied by $x$. To obtain the flow, we begin with $f = 0$. Then, for each vertex $v$ with $x(\delta^-(v)) = 1$, we route $1$ unit of flow from $r$ to $v$ inside $\bR$ (that is, we only use edges $e$ with $x_e = 1$) and add that flow to $f$. (This is possible since $\bR$ is connected.) This way we will satisfy \eqref{eq:flow_conservation}. Since the number of such vertices is at most $|V|-1$, any edge will hold at most $|V| - 1$ units of flow, thus satisfying \eqref{eq:flow_only_on_x}. \end{proof} So far we have discussed MinTree. 
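The linear-size formulation above can be prototyped with an off-the-shelf MILP library. The sketch below is illustrative only: it uses SciPy's HiGHS-based \texttt{milp} solver rather than Gurobi, and the function name, graph encoding, and test instance are our own assumptions, not part of the actual pipeline.

```python
# Sketch of the linear-size MinTree MIP (constraints (1)-(6)), solved with
# SciPy's HiGHS backend instead of Gurobi. All names are illustrative.
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

def min_tree(n, undirected_edges, r=0):
    """undirected_edges: list of (u, v, weight); returns (value, chosen arcs)."""
    arcs, w = [], []
    for u, v, wt in undirected_edges:       # replace each edge by two arcs
        arcs += [(u, v), (v, u)]
        w += [wt, wt]
    m = len(arcs)
    # variables: x_0..x_{m-1} (binary arc indicators), then f_0..f_{m-1} (flow)
    c = np.concatenate([np.array(w, float), np.zeros(m)])
    inc = {v: [i for i, a in enumerate(arcs) if a[1] == v] for v in range(n)}
    out = {v: [i for i, a in enumerate(arcs) if a[0] == v] for v in range(n)}
    A, lo, hi = [], [], []
    for v in range(n):
        row = np.zeros(2 * m)
        if v == r:                          # x(delta^-(r)) = 0
            row[inc[r]] = 1.0
            A.append(row); lo.append(0.0); hi.append(0.0)
            continue
        row[inc[v]] = 1.0                   # at most one incoming arc
        A.append(row); lo.append(-np.inf); hi.append(1.0)
        row = np.zeros(2 * m)               # inflow - outflow >= x(delta^-(v))
        row[[m + i for i in inc[v]]] = 1.0
        row[[m + i for i in out[v]]] = -1.0
        row[inc[v]] -= 1.0
        A.append(row); lo.append(0.0); hi.append(np.inf)
    for i, (u, v) in enumerate(arcs):
        if u != r:                          # x_uv <= x(delta^-(u))
            row = np.zeros(2 * m)
            row[i] = 1.0; row[inc[u]] -= 1.0
            A.append(row); lo.append(-np.inf); hi.append(0.0)
        row = np.zeros(2 * m)               # f_uv <= (|V|-1) x_uv
        row[m + i] = 1.0; row[i] = -(n - 1)
        A.append(row); lo.append(-np.inf); hi.append(0.0)
    res = milp(c=c,
               constraints=LinearConstraint(np.array(A), lo, hi),
               integrality=np.concatenate([np.ones(m), np.zeros(m)]),
               bounds=Bounds(np.zeros(2 * m),
                             np.concatenate([np.ones(m), np.full(m, np.inf)])))
    return res.fun, [arcs[i] for i in range(m) if res.x[i] > 0.5]
```

On a toy path graph with two negative-weight edges hanging off the root, the solver picks exactly those two arcs, matching a brute-force check of the MinTree objective.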
To get a formulation for MinSubgraph, one only needs to omit the constraint \eqref{eq:at_most_one_incoming} and adjust the constraint \eqref{eq:flow_only_on_x} to become $f_{uv} \le |\bE| \cdot x_{uv}$. Then $x$ is obtained from $\bR$ by choosing any spanning tree of $\bR$ and orienting tree edges to point away from $r$ and non-tree edges arbitrarily. In the proof of Proposition~\ref{prop:soundness} we route $x(\delta^-(v))$ units of flow (rather than $1$ unit) for each $v$ (now any edge holds at most $|\bE|$ units of flow). These are the only changes. \subsection{Hardness} \label{sec:hardness} In this section we argue that our problems are extremely unlikely to be solvable in polynomial time. This makes solving \QMIP{} formulations using state-of-the-art solvers one of the most natural and efficient methods available. \begin{proposition} \label{prop:nphard} The problems MinTree and MinSubgraph are NP-complete. \end{proposition} \begin{proof} Clearly both are in NP. We will show an NP-hardness reduction from the Steiner tree problem in graphs (STP), which is a well-known NP-hard problem. An instance of STP consists of a graph $G = (V,E)$ with weights on edges $w : E \to \mathbb{R}_+$ and a set of terminal vertices $T \subseteq V$. The objective is to find a minimum-weight tree in $G$ which connects the set $T$. To obtain an instance of MinTree (or MinSubgraph) from STP, we do the following for each $t \in T$: adjoin a new vertex $t'$ to $t$ using a new edge $(t,t')$ of weight $-M$, where $M$ is a very large weight (say $M = 1 + \sum_{e \in E} |w(e)|$). Then set the root $r$ to be any of these new vertices. To see that an optimal solution of the MinTree instance corresponds to an optimal solution of the STP instance, note that the former must necessarily contain all the new edges (as we set their weight to be so low that it makes sense to select them even if it requires us to also select many positive-weight edges). 
Since the MinTree solution must be connected, it will therefore connect all the terminal vertices; removing the new edges from the MinTree solution gives an optimal STP solution. (The same reduction also works for MinSubgraph, since the weights of all original edges are positive and thus the optimal solution for MinSubgraph is the same as the optimal solution for MinTree.) \end{proof} \section{Results} \label{sec:results} \begin{figure*}[] \centering \subfloat[]{\includegraphics[height=0.23\textwidth]{fig/BVdouble}} \hspace{0.05cm} \subfloat[]{\includegraphics[height=0.23\textwidth]{fig/NeuronimgData}} \hspace{0.05cm} \subfloat[]{\includegraphics[height=0.23\textwidth]{fig/BFNeuronimgData}} \hspace{0.05cm} \subfloat[]{\includegraphics[height=0.23\textwidth]{fig/OPF_graph}} \vspace{-4mm} \caption{Dataset images with the over-complete graphs overlaid. (a) \textit{Blood Vessels.} (b) \textit{Axons.} (c) \textit{Brightfield Neurons}. (d) \textit{Olfactory Projection Fibers.}} \label{fig:datasets} \end{figure*} We tested our approach on 3-D image stacks depicting retinal blood vessels, rat brain axons and dendrites, and Drosophila olfactory projection fibers obtained using either 2-photon or brightfield microscopes, shown in Fig.~\ref{fig:datasets}. We rely on the algorithm of~\cite{Turetken16a} for the initial overcomplete graphs, the corresponding edge features and the final delineations. To classify edges as likely or unlikely to be part of an extended linear structure on the basis of local image evidence, we use Gradient Boosted Decision Trees~\cite{Becker13b}. \subsection{Fast Reconstruction} \label{sec:running_time} The runtimes of our formulation compared to the one presented in~\cite{Turetken16a} are shown in Table~\ref{table:speed-up}. The optimization was executed on a 2x Intel E5-2680 v2 system (20 cores).
Our formulation can be solved in under 6 seconds for all real-world graph examples we have tried; the maximum for the formulation of~\cite{Turetken16a} is over 6 minutes. \begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \textit{Axons1} & \textit{Axons2} & \textit{Axons3} & \textit{Axons4} & \textit{Axons5} & \textit{Axons6}\\ \hline \# edges & 164 & 223 & 224 & 265 & 932 & 2638 \\ \hline \QMIP{} \cite{Turetken16a} & 0.91 & 1.04 & 1.19 & 1.45 & 78.3 & 393.7\\ \hline \QMIP{} ours & 0.03 & 0.10 & 0.04 & 0.23 & 0.10 & 5.23 \\ \hline speedup & 26.1x & 10.1x & 27.3x & 6.3x & 743.5x & 75.2x \\ \hline \end{tabular} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \textit{BFNeuron1} & \textit{BFNeuron2} &\textit{OPF1}& \textit{OPF2}& \textit{BFNeuron3} & \textit{BFNeuron4} \\ \hline \# edges & 120 & 338 & 363 & 380 & 645 & 2826 \\ \hline \QMIP{} \cite{Turetken16a} & 0.48 & 2.25 &1.53 & 1.65& 2.13 & 308.23 \\ \hline \QMIP{} ours & 0.02 & 0.12 & 0.05 & 0.08& 0.26 & 2.30 \\ \hline speedup & 18.2x & 17.7x & 29.4x & 19.9x& 8.1x & 134.0x \\ \hline \end{tabular} \caption{Per-reconstruction runtimes (in seconds) of the \QMIP{} formulation of~\cite{Turetken16a} and ours for the proofreading task.} \label{table:speed-up} \end{table} We also compared the runtimes on randomly generated graphs of various sizes -- see Table~\ref{table:random}. The speed-ups remain similar. In Table~\ref{table:random2} we collect runtimes of our method on larger randomly generated graphs. If we assume (more or less arbitrarily) 2 seconds to be the threshold of what is practical in an interactive setting (given that this optimization needs to be run multiple times), then the method of~\cite{Turetken16a} can deal with graphs of size at most 300, whereas our method copes with graphs having around 2000 edges.
\begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \# edges & 99 & 132 & 220 & 330 & 440 & 660 & 924 & 1320 & 1540 \\ \hline \QMIP{} \cite{Turetken16a} & 0.16 & 0.30 & 1.13 & 3.39 & 8.35 & 29.35 & 73.16 & 112.59 & 149.01 \\ \hline \QMIP{} ours & 0.03 & 0.04 & 0.06 & 0.12 & 0.15 & 0.29 & 0.36 & 0.67 & 0.42 \\ \hline speedup & 6.1x & 7.5x & 17.9x & 29.4x & 53.9x & 102.8x & 201.8x & 167.8x & 348.1x \\ \hline \end{tabular} \caption{Per-reconstruction runtimes (in seconds) of the \QMIP{} formulation of~\cite{Turetken16a} and ours on random graphs.} \label{table:random} \end{table} \begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline \# edges & 1760 & 2420 & 3520 & 4400 & 5720 & 9900 \\ \hline \QMIP{} ours & 1.60 & 2.71 & 6.59 & 9.57 & 15.52 & 81.55 \\ \hline \end{tabular} \caption{Per-reconstruction runtimes (in seconds) of our \QMIP{} formulation on random graphs.} \label{table:random2} \end{table} One further practical method for speeding up the solver is to initialize it with a nonzero feasible solution. In cases where we needed to explore a large number of reconstructions resulting from altering just one weight at a time (which was the setting of our paper), we initialized the new solution to the current optimal solution. Note that this scenario makes performance considerations especially relevant, as $|\bE|$ reconstructions need to be made; even though they can be run in parallel, a high running time of a single \QMIP{} solution would make the approach impractical. \subsection{Active Learning} \begin{figure*}[t!] \centering \subfloat[]{\includegraphics[width=0.42\textwidth]{fig/active_learning/AL_BV}} \subfloat[]{\includegraphics[width=0.42\textwidth]{fig/active_learning/AL_neuron}} \\ \vspace{-0.4cm} \subfloat[]{\includegraphics[width=0.42\textwidth]{fig/active_learning/AL_BFNeuron}} \subfloat[]{\includegraphics[width=0.42\textwidth]{fig/active_learning/AL_OPF}} \caption{Active Learning. 
Accuracy as a function of the number of annotated samples. (a)~\textit{Blood vessels.} (b)~\textit{Axons.} (c) \textit{Brightfield neurons}. (d) \textit{Olfactory Projection Fibers.} The red curve denoting our approach is always above the others, except on the right-hand side of (d): because this is a comparatively easy case, the delineation stops changing after some time and error-based queries are no longer informative.} \label{fig:ALresults} \end{figure*} For each image, we start with an overcomplete graph. The initial classifier is trained using 10 randomly sampled examples. Then, we query four edges at a time, as discussed in Section~\ref{sec:active}, which allows us to update the classifier often enough while decreasing the computational cost. We report results averaged over 30 trials in Fig.~\ref{fig:ALresults}. Our approach outperforms both naive methods such as Uncertainty Sampling (\US{}) and more sophisticated recent ones such as \DPS{}~\cite{Mosinska16} and \EMOC{}~\cite{Freytag14}. \DPS{} is designed specifically for delineation and also relies on uncertainty sampling, but only takes local topology into account when evaluating this uncertainty. \EMOC{} is a more generic method that aims at selecting samples that have the greatest potential to change the output. In Fig.~\ref{fig:MSTvsQMIP} we can see that using \QMIP{} formulations indeed helps improve the AL results, compared to the more basic Minimum Spanning Tree with Pruning method~\cite{Gonzalez08} (\MSTP{}), as it produces more accurate reconstructions and thus we can more reliably detect mistakes. This is especially visible in the case of \textit{Blood Vessels}, which in reality can contain loops. These can be reconstructed using the MinSubgraph \QMIP{}, but not with \MSTP{}.
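The batch query loop just described can be summarized in a few lines. This is a schematic sketch only: \texttt{oracle}, \texttt{train} and \texttt{predict\_proba} are hypothetical callables, and plain uncertainty sampling stands in for the actual query criterion of Section~\ref{sec:active}, which also exploits the \QMIP{} reconstruction.

```python
# Schematic batch Active-Learning loop: 10 random seed labels, then 4
# queries per round until the annotation budget is spent. The callables
# (oracle, train, predict_proba) are hypothetical stand-ins.
import random

def active_learning(edges, features, oracle, train, predict_proba,
                    seed_size=10, batch=4, budget=50):
    random.seed(0)                                   # reproducible seed set
    labeled = {e: oracle(e) for e in random.sample(edges, seed_size)}
    model = train({e: (features[e], y) for e, y in labeled.items()})
    while len(labeled) < budget:
        pool = [e for e in edges if e not in labeled]
        # query the `batch` edges whose predicted probability is closest to 0.5
        pool.sort(key=lambda e: abs(predict_proba(model, features[e]) - 0.5))
        for e in pool[:batch]:
            labeled[e] = oracle(e)
        model = train({e: (features[e], y) for e, y in labeled.items()})
    return model, labeled
```

Retraining after every batch of four keeps the classifier current without paying the cost of retraining after every single label.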
\begin{figure*}[] \centering \subfloat[]{\includegraphics[height=0.33\textwidth]{fig/supplementary/MvQ_BV}} \subfloat[]{\includegraphics[height=0.33\textwidth]{fig/supplementary/MvQ_BFNeuron}} \caption{Comparison of our AL strategy when using \MSTP{} and \QMIP{}. (a) \textit{Blood Vessels.} (b) \textit{Brightfield Neurons.}} \label{fig:MSTvsQMIP} \end{figure*} \subsection{Proofreading} \begin{figure*}[] \centering \subfloat[]{\includegraphics[height=0.33\textwidth]{fig/ALproof/proof_neuron}} \subfloat[]{\includegraphics[height=0.33\textwidth]{fig/ALproof/proof_BFNeuron}} \\ \vspace{-0.4cm} \subfloat[]{\includegraphics[height=0.33\textwidth]{fig/ALproof/proof_OPF}} \subfloat[]{\includegraphics[height=0.33\textwidth]{fig/ALproof/ALproof_neuron}} \caption{Focused proofreading. DIADEM score as a function of the number of paths examined by the annotator. (a) \textit{Axons.} (b) \textit{Brightfield Neuron}. (c) \textit{Olfactory Projection Fibers}. (d) Combined AL and proofreading for \textit{Axons}.} \label{fig:proof} \end{figure*} \begin{figure} \centering \includegraphics[width=0.99\textwidth]{fig/BFNeuron_change} \caption{Proofreading. From left to right: initial delineation, delineations after 10 and 20 corrections, and ground truth.} \label{fig:proofSeq} \end{figure} For each test image, we compute an overcomplete graph and classify its edges using a classifier trained on 20000 samples. We then find four edges with the highest values of the score $s_i$ of Eq.~\ref{eq:diadem} and present them to the user for verification. Their feedback is then used to update the delineation. The red curves of Fig.~\ref{fig:proof}(a-c) depict the increase in DIADEM score. Rapid improvement can be seen after as few as 15 corrections. Fig.~\ref{fig:proofSeq} shows how the reconstruction evolves in a specific case. 
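The focused-proofreading loop can be sketched as follows. The callbacks are hypothetical stand-ins: \texttt{score} for the criterion of Eq.~\ref{eq:diadem}, \texttt{user\_verify} for the human annotator, and \texttt{redelineate} for a re-run of the \QMIP{} solver; the weight clamping is one illustrative way of encoding the user's feedback.

```python
# Schematic focused-proofreading loop. `score`, `user_verify` and
# `redelineate` are hypothetical stand-ins for the Eq. (diadem) criterion,
# the human annotator, and a re-run of the QMIP solver.
def proofread(edges, weights, score, user_verify, redelineate,
              batch=4, rounds=5):
    delineation = redelineate(weights)
    for _ in range(rounds):
        ranked = sorted(edges, key=lambda e: score(e, delineation, weights),
                        reverse=True)
        for e in ranked[:batch]:           # show top-scoring edges to the user
            verdict = user_verify(e)       # True: edge belongs in ground truth
            # clamp the weight so the solver is forced to (de)select the edge
            weights[e] = -1e6 if verdict else 1e6
        delineation = redelineate(weights)
    return delineation
```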
For analysis purposes, we also reran the experiment using the $\Delta c$ criterion of Eq.~\ref{eq:weightMetric} (cost-only) instead of the more sophisticated one of Eq.~\ref{eq:diadem} (cost and topology) to choose the paths to be examined. The green curves in Fig.~\ref{fig:proof}(a-c) depict the results. They are not as good, particularly in the case of Fig.~\ref{fig:proof}(c), because the highest-scoring paths often remain in the \QMIP{} reconstruction both before and after the correction. It is therefore only by combining both cost and topology that we increase the chances that a potential correction of the selected edge will improve the reconstruction. By contrast, paths chosen by \RS{} and \US{} are not necessarily erroneous or in the immediate neighborhood of the tree. As a result, investigating them often yields no improvement. \subsection{Complete Pipeline} \label{sec:complete} In a working system, we would integrate AL and proofreading into a single pipeline. To gauge its potential efficiency, we selected 50 edges to train our classifier using the AL strategy of Section~\ref{sec:active}. We then computed a delineation in a test image and proofread it by selecting 35 edges. For comparison purposes, we used either our approach as described in Section~\ref{sec:active}, \RS{}, or \US{} to pick the edges for training and then for verification. In Fig.~\ref{fig:proof}(d) we plot the performance (the DIADEM score of the final delineation with respect to the ground truth) as a function of the total number of edges the user needed to label manually.
\section{Introduction} Finding and studying large samples of distant luminous and evolved galaxies is fundamental to provide a deeper insight into the formation of massive galaxies, a process that is commonly perceived as a challenging test for cosmological models of structure formation and evolution. For this reason, in the recent past, the study of early-type galaxies at the highest observable redshifts made use of a considerable fraction of large telescope time and occupied a substantial part of the astronomical literature. The search for passively evolving systems at high redshift began with the so-called Extremely Red Objects (EROs; see also \cite{rieke,cimatti,mcarthy}) which reproduce the colours of ellipticals at $z\sim 1$. EROs are a relatively ``new'' class of objects, recognized as such only around 1990, when Near InfraRed (NIR) detectors first became available. Elston, Rieke \& Rieke (1988) found the first two EROs in a survey of 10 sq. arcmin., as resolved galaxies with $K\sim 16.5$ and $R-K\sim 5$. After the optical spectroscopic identifications, the two objects turned out to be an evolved galaxy at $z=0.8$ and a dusty starburst at $z=1.44$, named HR10. It was clear from this survey and subsequent investigations that the ERO population is heterogeneous in its main properties (star formation, mass, age, extinction, etc.). At present, there are various techniques to find evolved galaxies at high-$z$. \cite{cimatti99} utilised the criterion $R-K\ge5$(Vega), effective in the redshift interval $0.8\le z\le 1.8$ for $Ks\le 20$. \cite{caputi} and \cite{abraham} used a similar selection, $I-K\ge4$(Vega), to select red galaxies with $1\le z\le 2$. \cite{pm00} suggested a two-colour criterion ($I-K$ vs $J-K$) to separate ellipticals from dusty starbursts at $1\le z\le 2$, which could be extended to higher redshift ($2.0\le z\le 2.5$) using redder colours ($J-K$ vs $H-K$).
\cite{franx} proposed a simple pure infrared criterion, $J-K\ge2.3$(Vega), for $z\ge 2.0$. In a similar way, \cite{saracco04} selected 3 galaxies with $J-Ks\ge 3$(Vega) in the HDFS at $z\ge 2.5$, plausible candidates for high-$z$ massive galaxies, though the statistics are very limited. Recently, \cite{bzk} suggested isolating early-type galaxies according to the BzK criterion [$(z-K)_{AB}-(B-z)_{AB}\le -0.2$ and $(z-K)_{AB}\ge 2.5$], efficient at $1.4\le z\le 2.5$, with an extension to $2.5\le z\le 4.0$ using the RJL colour combination. \cite{yan} proposed a new class of objects, the high-$z$ EROs (called IEROs), with $f_\nu(3.6\mu)/f_\nu(850nm)\ge 20$ to select red galaxies at $1.5\le z\le 3.0$ using MIR data. The physical properties of massive galaxies at high-$z$ were also investigated by \cite{saracco05} through spectroscopy of a limited sample of massive, evolved galaxies with relatively bright magnitudes ($K\le 18.4$) at $1.3\le z\le 1.7$ in the MUNICS survey. A different approach has been adopted for the COMBO-17 survey, in which the rest-frame intrinsic colour $(U-V)$ is utilised to isolate galaxies belonging to the Red Sequence: \cite{combo} used the relation $(U-V)_{\rm rest}\ge 1.40-0.31\cdot z$, efficient at $0.2\le z\le 1.1$ according to simulations with a spectral synthesis code. Finally, \cite{giallongo} adopted a slightly different approach: the bi-modality in $(U-V)_{\rm rest}$ is empirically fitted to the observations and could be extended up to $z\sim 3$. In this paper we focus on the so-called Distant Red Galaxies (DRGs; \cite{franx}). These galaxies are selected through a $J-K>2.3$(Vega) criterion, designed to be sensitive to galaxies with a large 4000 \AA~ break at $z\ge 2$. \cite{franx} used this technique in the FIRES survey (\cite{labbe03}), selecting 14 DRGs in the HDFS, down to faint $Ks$ magnitudes ($Ks\le 24.5$ in AB mag). By using ultra-deep spectroscopy, \cite{vandokkum} provided evidence that the brighter DRGs are indeed galaxies at $z\sim 2$.
Although these observations settled the evidence for the existence of old and massive galaxies, the lack of a statistically significant sample of DRGs has hampered the detailed study of many of their properties, in particular their number density, their clustering and their physical properties such as mass, star formation, age and spectral energy distribution (SED). Recently, \cite{papovich05} have derived a sample of 153 DRGs from the GOODS South down to a shallower limit of $Ks=23.2(AB)$, with the aim of studying in detail the specific star formation rate (star formation rate per unit stellar mass) of DRGs. They found that the bulk of the star formation in massive galaxies ($M\ge 10^{11} M_{\odot}$) occurs at early cosmic epochs and is largely complete by $z\sim 1.5$. Analogously to \cite{papovich05}, we use the extraordinary dataset provided by the GOODS survey to extend these studies. In particular, we will adopt the GOODS-MUSIC sample, a $Ks$-selected sample with an extended wavelength range (from the U to the $8.0\mu$m band) that we compiled using publicly available data in the Chandra Deep Field South region and described at length in \cite{grazian}. With this complete sample of DRGs, we can define in detail their general properties and refine previous investigations by \cite{franx}, which used only 14 objects in the FIRES survey, though at a fainter magnitude limit. The structure of the paper is as follows. In \S\ 2 we describe the data used to analyse DRG properties. In \S\ 3 we select DRGs according to the selection criterion defined by \cite{franx} and provide their number density and redshift distribution. In \S\ 4 we study the clustering properties of DRGs selected in the GOODS-South field and in \S\ 5 we discuss the link between the DRG population at $z\sim 2$ and the local ellipticals. All magnitudes, unless otherwise stated, are given in the AB system.
A concordance $\Lambda CDM$ cosmological model ($\Omega_M =0.3$, $\Omega_\Lambda =0.7$ and $H_0 = 70$ km s$^{-1}$Mpc$^{-1}$) has been adopted throughout the paper. \section{The Data} We use in this paper the data from the Chandra Deep Field South (CDFS; \cite{giacconi}), obtained within the GOODS survey. This is a collaboration between STScI and ESO (\cite{renzini}) that produced an unprecedented dataset of images, covering 135 sq. arcmin. from 0.3 to 8.0$\mu$m down to relatively faint magnitude limits (\cite{giavalisco}). In particular, we used the ACS images (release V1.0, \cite{giavalisco}), the ISAAC database (release V1.0, \cite{vandame}) and the IRAC dataset (release V1.0 enhanced, \cite{dickinsonirac}), together with U band photometry from WFI@2.2m ESO-MPI and VIMOS reduced by our group. Using this public dataset, we have produced a high-quality multicolour catalog of galaxies in the GOODS-South, that we have named GOODS-MUSIC: details about the procedure adopted are discussed in \cite{grazian}. We briefly recall that we have used all the publicly available images from U to 8.0 $\mu$m ($U,B,V,i,z,J,H,Ks,3.6\mu,4.5\mu,5.8\mu,8.0\mu$), in a contiguous area of 135 sq. arcmin., totalling 14847 objects. In particular, to isolate a complete sample of DRGs, we use here the $Ks$-selected sample, which consists of $2931$ galaxies. The GOODS survey has a complex, inhomogeneous exposure map in the $Ks$ band. To properly derive the statistical properties of galaxies in this field, the sample has been divided into 6 sub-areas of different magnitude limits, as described in detail in \cite{grazian} and in Tab. \ref{khisttab}. This information is used in this work when the DRG statistical properties are studied, such as their number density or clustering properties. The typical magnitude limit for most of the sample is about $Ks=23.5$, and extends down to 23.8 in a limited area. In \cite{grazian} we included spectroscopic information for 668 galaxies.
Recently, \cite{vanzrun2} have released further spectroscopic redshifts in the GOODS South region. We used this new release to compile a revised sample of 973 galaxies with good spectroscopic identification. Out of this number, 815 are in the $Ks$-selected sample ($\simeq 28 \%$ of the total). For the remaining sources, we derived a photometric redshift, as described in \cite{grazian}: the redshift accuracy in the range $0<z<6$, as shown in Fig.\ref{zszp}, on this enlarged spectroscopic sample is $\sigma_z=0.06$, which is the same value previously found in \cite{grazian}. If we restrict the analysis to the 340 galaxies with red colours ($J-Ks\ge 0.7$), as shown in Fig.\ref{drgzszp}, the redshift accuracy is $\sigma_z=0.08$ in the redshift range $0<z<4$. \begin{figure} \includegraphics[width=9cm]{grazianf01.ps} \caption{The spectroscopic vs photometric redshifts for 973 galaxies in the GOODS-MUSIC sample. The accuracy is $\sigma_z=0.06$ and $\frac{\sigma_z}{1+z}=0.03$ in the redshift range $0<z<6$. } \label{zszp} \end{figure} \begin{figure} \includegraphics[width=9cm]{grazianf02.ps} \caption{The spectroscopic vs photometric redshifts for 340 red galaxies with $J-Ks\ge 0.7$ in the GOODS-MUSIC sample. The accuracy is $\sigma_z=0.08$ and $\frac{\sigma_z}{1+z}=0.05$ in the redshift range $0<z<4$. There are only 13 galaxies with $J-Ks\ge 1.3$ and spectroscopic redshifts (red crosses). } \label{drgzszp} \end{figure} Rest-frame physical quantities (such as luminosities, mass, age, SFR) are derived by using the synthetic library of \cite{bc03} (hereafter BC03), at the spectroscopic redshift, adopting the same technique already described in several previous papers (see Fontana et al. 2004 for more details). \section{Selection of DRGs} \subsection{The number density of DRGs} We have selected DRGs according to the criterion defined by \cite{franx} ($J-Ks\ge 1.3$ in the AB system, as obtained using the transmission curves of the $J$ and $Ks$ filters of ISAAC), which is efficient at $z\ge 2$.
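As a concrete illustration, the colour cut amounts to a one-line filter on AB magnitudes. The sketch below is ours; the dictionary-based catalogue structure is an assumption for illustration, not the actual GOODS-MUSIC format.

```python
# Illustrative DRG cut: J - Ks >= 1.3 in AB magnitudes, equivalent to the
# original J - K >= 2.3 Vega criterion for the ISAAC filters.
def select_drgs(catalog, cut=1.3):
    """catalog: iterable of dicts with AB magnitudes under keys 'J' and 'Ks'."""
    return [src for src in catalog if src['J'] - src['Ks'] >= cut]
```

In practice, sources undetected in $J$ (upper limits) need separate handling, since the cut can only be verified down to the $J$-band depth.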
Fig. \ref{JKvK} shows the effect of this DRG selection criterion applied to the objects of the GOODS-MUSIC sample. \begin{figure} \includegraphics[width=9cm]{grazianf03.ps} \caption{Selection of $J-Ks\ge 1.3$ objects in the GOODS-South Field. Upper limits in the $J$ band are shown as vertical arrows. The horizontal line shows the selection criterion for DRGs in the GOODS-South area, while the dashed line indicates the completeness limit of the DRG selection due to the depth of the $J$ band (26.8 AB at $S/N=1$). } \label{JKvK} \end{figure} In the GOODS-South region we find 179 galaxies having $J-Ks\ge 1.3$. For the reasons described above, the completeness limit of the survey is not homogeneous, with a typical value of $Ks=23.5$. We use this sample of DRGs to study in particular their number density and their spatial distribution (clustering). The number density of DRGs in the GOODS South field is derived through the classical $\log N-\log S$ distribution, or the number of objects per sq. arcmin. and per magnitude bin in the $Ks$ band. This last quantity is obtained by following the recipe of \cite{avni}: \begin{equation} n(Ks)=\frac{1}{\Delta Ks}\sum_{i=1}^{N_{\rm obj}} \left[ \sum_{j=1}^{N_{\rm field}} Area_j^{\rm max} \right]^{-1} \ , \end{equation} where the outer sum runs over the $N_{\rm obj}$ objects and the inner sum over the $N_{\rm field}$ surveys (here, the 6 areas with different magnitude limits described in \cite{grazian} and in Tab.\ref{khisttab}); $Area_j^{\rm max}$ represents the accessible area of the $j$-th survey (this is equivalent to the maximum accessible volume when the luminosity function is derived). The DRG counts have been computed in bins of $\Delta Ks=0.5$ magnitude. Fig. \ref{logns} shows the surface density of DRGs in the GOODS South field and compares it with the results derived in the HDFS by the FIRES survey (\cite{labbe03}).
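Eq. (1) can be sketched in a few lines: each object contributes the inverse of the total area over which it would have been detected, normalized by the bin width. The \texttt{(area, magnitude limit)} description of the sub-fields and all names below are illustrative assumptions.

```python
# Sketch of the coverage-corrected counts of Eq. (1): each DRG contributes
# 1 / (total area of the sub-fields deep enough to detect it) to its
# 0.5-mag bin. Sub-field encoding is an assumption for illustration.
def surface_density(mags, subfields, bin_width=0.5):
    """mags: Ks magnitudes of the DRGs; subfields: list of (area, Ks limit)."""
    density = {}
    for ks in mags:
        accessible = sum(area for area, lim in subfields if ks <= lim)
        if accessible == 0:
            continue                       # fainter than every sub-field limit
        b = int(ks / bin_width)            # index of the 0.5-mag bin
        density[b] = density.get(b, 0.0) + 1.0 / (accessible * bin_width)
    return density                         # objects per arcmin^2 per mag
```

A bright object seen over the full 135 sq. arcmin. thus counts far less than a faint one accessible only in the deepest sub-area, which is what corrects the counts for the inhomogeneous exposure map.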
Even though the area of the HDFS is smaller than that of the GOODS survey, the DRG number densities in these two independent fields are comparable (see also Table \ref{khisttab}). Notice, however, that different values for the number density of DRGs have been derived using the data of the HDFN (\cite{lanzetta,fontana,dickinson}), in which one DRG is found at $Ks\le 23.0$, and a limited number at $23\le Ks\le 24$ with upper limits in the $J$ band. The sample variance between HDFN and HDFS is due to the limited area investigated and stresses the necessity of deriving a firm measurement for the number density of DRGs in a large and deep survey such as the GOODS-CDFS field. \begin{figure} \includegraphics[width=9cm]{grazianf04.ps} \caption{The surface density of selected DRGs in the GOODS-South Field (triangles), compared with the estimate obtained in the HDFS (squares, \cite{labbe03}).} \label{logns} \end{figure} \begin{table} \caption[]{Number density $\Sigma$ of DRGs in the $Ks$ band for GOODS-South and HDFS fields} \begin{tabular}{lccccc} \hline \hline $bin$ & $N$ & $\log(\Sigma)$ & $\bar{z}$ & $\bar{Ks}$ & {\small AREA} \\ & & mean $+1\sigma$ $-1\sigma$ & & & $arcmin^2$ \\ \hline 20.25 & 6 & -1.13 -0.91 -1.38 & 1.05 & 20.34 & 135.372 \\ 20.75 & 14 & -0.68 -0.58 -0.82 & 1.25 & 20.81 & 135.372 \\ 21.25 & 16 & -0.63 -0.53 -0.75 & 1.42 & 21.26 & 135.372 \\ 21.75 & 22 & -0.47 -0.39 -0.57 & 1.96 & 21.79 & 129.692 \\ 22.25 & 29 & -0.34 -0.27 -0.43 & 2.04 & 22.27 & 128.273 \\ 22.75 & 50 & -0.11 -0.05 -0.17 & 2.45 & 22.78 & 127.935 \\ 23.25 & 32 & -0.10 -0.03 -0.19 & 2.80 & 23.20 & 81.272 \\ 23.75 & 10 & 0.10 0.26 -0.06 & 2.75 & 23.58 & 12.585 \\ \hline 22.50 & 4 & -0.13 0.07 -0.30 & 2.75 & 22.36 & 4.500 \\ 23.50 & 7 & 0.10 0.26 -0.06 & 2.75 & 23.36 & 4.500 \\ 24.50 & 8 & 0.24 0.38 0.04 & 2.75 & 24.36 & 4.500 \\ 25.50 & 18 & 0.54 0.65 0.41 & 2.75 & 25.36 & 4.500 \\ \hline \end{tabular} \label{khisttab} \begin{list}{}{} \item a) the number density $\Sigma$ is in units of
$arcmin^{-2} mag^{-1}$ \item b) $bin$ represents the central bin magnitude in $Ks$ \item c) $\bar{z}$ and $\bar{Ks}$ are the mean values of redshift and observed $Ks$ magnitude for each magnitude bin \item d) the number density in the second half of the table derives from the FIRES survey in the HDFS (\cite{labbe03}) \end{list} \end{table} \subsection{Redshift distribution of DRGs} \begin{figure} \includegraphics[width=9cm]{grazianf05.ps} \caption{{\em Upper panel}: the distribution of spectroscopic (only 13 objects; short-dashed line) and photometric (solid line) redshifts of selected DRGs in the GOODS-South Field. The dotted curve is the redshift distribution obtained for the DRGs using the redshift probability function of each object, as derived by the photometric redshift code; it agrees with the distribution obtained using the best-estimate photometric redshifts. The long-dashed line represents the redshift distribution for the HDFS (\cite{labbe03}), peaked at $z\sim 3$. It is markedly different from the redshift distribution of the GOODS field, since DRGs in the HDFS have fainter $Ks$ magnitudes. The redshift distribution of the HDFS is comparable to the redshift distribution of DRGs in the GOODS field at $Ks>23$ magnitude, as shown in the lower panel. {\em Lower panel}: the photometric redshift distribution for bright ($Ks<22$; long-dashed line) and faint ($Ks>22$; solid line) DRGs. Deep pencil beam surveys (HDFs) preferentially select objects at $z\sim 2$, while large area surveys are biased towards lower-redshift ($z\le 2$) and bright ($Ks<22$) DRGs (short-dashed line).} \label{zhist} \end{figure} \begin{figure} \includegraphics[width=9cm]{grazianf06.ps} \caption{The $J-Ks$ colour of objects in the GOODS-South field as a function of their (spectroscopic or photometric) redshift. Upper limits in the $J$ band are displayed as vertical arrows. The long-dashed horizontal line shows the selection criterion adopted for DRGs in this paper.
The two blue solid lines show the $J-Ks$ colour for passively evolving galaxies formed at $z=20$ and with an e-folding star formation rate with timescale $\tau=0.1$ and $\tau=1$ Gyr (upper and lower curves, respectively). The red short-dashed lines show the same colour for a star-forming galaxy with $E(B-V)=1.1$ and $E(B-V)=0.5$ (upper and lower curves, respectively). } \label{JKvred} \end{figure} The large number of DRGs in the GOODS field makes it possible to test the selection criterion and to define the window function in redshift for DRGs. Fig. \ref{zhist} shows the distribution of the photometric redshifts of DRGs: the spectroscopic sample of DRGs is very limited both in redshift and in $Ks$ magnitude (only 13 galaxies with $19.7\le Ks\le 22.9$ and $0.65\le z\le 3.04$). The redshift distribution of GOODS DRGs is slightly different from that drawn for the HDFS by \cite{franx} and \cite{daddi}, which covers the interval $2\le z\le 4$ with a prominent peak at $z\sim 3$, and is in reasonable agreement with the similar analysis of \cite{papovich05}. In our GOODS-MUSIC sample there are DRGs at lower redshifts ($1\le z\le 2$) with bright apparent $Ks$ magnitudes ($Ks\le 22$), which are in practice absent in small and deep pencil beam surveys, like the HDFS. The redshift distribution clearly shows that a considerable fraction (77 out of 179, i.e. 43$\%$) of the objects satisfying the $J-Ks$ selection lies at low redshift ($z\le 1.9$). With a typical colour $J-Ks\sim 1.5$, they cannot be the result of photometric errors, since these should be negligible for relatively bright objects: in fact at $Ks\sim 21.5$ the typical error in magnitude is $\sigma=0.03$. The SEDs of these low-redshift DRGs are dominated by power-law spectra with a tilt at $\lambda\sim 6\mu m$, and are mostly fitted by relatively young galaxies (${\rm age}/\tau\le 1$) with a substantial amount of extinction ($E_{B-V}\sim 0.5-1.0$, see Fig. 8 of \cite{papovich05}). Fig.
\ref{JKvred} may help in understanding this result, which is due to the complex selection effects that operate in this colour criterion. In Fig. \ref{JKvred} we compare the observed $J-Ks$ colour as a function of redshift with the expected $J-Ks$ of a few selected templates computed with the BC03 models. Two of these models are computed adopting exponentially declining star-formation histories, both started at very high redshift ($z_{\rm form}=20$), with solar metallicity. The values adopted for the e-folding timescale ($\tau=0.1$ and $\tau=1$ Gyr) both produce the same colour at low redshift and show that the $J-Ks>1.3$ threshold is effective in selecting galaxies at $z>2$ that formed their stars in a short starburst ($\tau\le 1$ Gyr). At the same time, large $J-Ks$ colours may be obtained by star-forming, dusty models down to lower redshift $z\simeq 1$. This highlights why the DRG population is not a homogeneous class of $z>2$ objects, but is contaminated by dusty starbursts at $z\sim 1.5$, whose strong dust absorption is responsible for their red infrared colours. The low-redshift DRG sub-sample is at the limit of the $J-Ks$ selection, and can be explained by dust reddening of $z\sim 1.5$ star-forming galaxies, as shown in Fig. \ref{JKvred}. If a more drastic $J-Ks$ colour cut were applied (e.g. $J-Ks\ge 1.8$), it would ensure a much more efficient selection of galaxies with $z\ge 2$, but the sample would be strongly reduced, from 179 to only 51 galaxies. \begin{figure} \includegraphics[width=9cm]{grazianf07.ps} \caption{The distribution of the ratio between the age of DRGs and the characteristic timescale $\tau$ of the exponentially declining SFR, according to the BC03 spectral synthesis model.
The solid histogram refers to the distribution of low-$z$ DRGs, dominated by relatively young objects (age/$\tau\le 3$) which are typically dusty starbursts, while the dashed histogram shows the ratio for $2\le z\le 4$ DRGs, where a considerable fraction (30\%) of old and passively evolving galaxies arises.} \label{ttau} \end{figure} The difference between low-$z$ and high-$z$ DRGs has been extensively discussed in a recent paper by \cite{papovich05}. They argue that lower-$z$ DRGs are dominated by dusty starbursts, while the higher-$z$ objects are made of a more complex stellar population, likely a mixture of star-forming, heavily extincted and older, passively evolving stellar components, with a minority of galaxies that are likely genuinely passively evolving. In our preliminary, simplified analysis (the most important difference with respect to \cite{papovich05} is that we do not use models with two-component stellar populations and we do not include the $24\,\mu m$ data in the analysis) we also find evidence of the same distinction. This is shown in Fig.\ref{ttau}, where we report the distribution of the ratio between the fitted age and the fitted star-formation e-folding timescale $\tau$ (such a ratio is in practice the inverse of the Scalo parameter). As shown, all low-$z$ DRGs are dominated by actively star-forming, relatively young objects, while higher-$z$ DRGs have a broader distribution of age$/\tau$, including several objects (30\% of the high-$z$ DRG sample) that are fitted by passively evolving models. The average luminosities in the rest--frame I band (Vega system) that we infer from the spectral fitting of our sample are $<M_I>=-22.3$ and $<M_I>=-23.2$ at $<z>=1.5$ and $<z>=2.7$, respectively, and the average stellar masses are $<M_*>=8.15\cdot 10^{10} M_\odot$ and $<M_*>=9.90\cdot 10^{10} M_\odot$ (10.76 and 10.88 if we compute $<\log(M)>$), respectively. It is tempting to speculate on the possible spectral evolution of these objects.
A lower limit to their local luminosity can be obtained by assuming that they enter a passive evolution phase soon after we observe them. In this case, assuming a truncated star-formation history with solar metallicity, the BC03 code predicts in the rest-frame I band a fading from $<z>=1.5$ and $<z>=2.7$ to $z\simeq 0$ of 2.2 and 2.45 magnitudes, respectively. However, we have to take into account that DRGs are typically dusty objects, so we should probably normalise this fading to their unobscured luminosity. Assuming that the typical reddening of DRGs is $E(B-V)\simeq 0.75\pm0.25$ with a Calzetti extinction curve, and that they evolve to present-day objects with little dust extinction, we find that the typical change in rest-frame magnitude $\Delta M_I= M_I(z)-M_I(z=0)$ is $0.26\pm 0.65$ at $<z>=1.5$ and $-0.49\pm 0.65$ at $<z>=2.7$. Given the average rest-frame luminosities described above, this would imply that the descendants of DRGs in this simple model have rest-frame luminosities of about $M_I(z=0)=-22.56$ and $-22.71$. The typical $M^*$ magnitude in the I band for local galaxies in the SDSS is $M_i=-22.48$ (\cite{blanton01}), which increases to $-23.2$ if one considers only the reddest galaxies ($g-r\ge 0.74$). Considering that it is obviously implausible that all DRGs are observed at the end of their star--forming phase, and that they will therefore end up in more luminous and massive objects than predicted by this exercise, one can conclude that both the low-$z$ and high-$z$ DRGs are consistent with being the progenitors of local massive galaxies. The analysis of clustering will help to clarify this conclusion. \section{Spatial distribution of DRGs: the clustering properties} It is already known that DRGs are not uniformly distributed on the sky, but are clustered on scales of several Mpc.
The analysis of the HDFS shows that the DRGs are predominantly concentrated in one quadrant of the WFPC, while in the HDFN there is only one DRG, suggesting that this population could be strongly clustered and affected by cosmic variance (\cite{vanzhdfs,franx,daddi}), such that the small area covered by surveys like the HDFN or FIRES prevents a robust measurement of their clustering properties and their redshift evolution. We therefore present in the following a detailed analysis of the clustering properties of our GOODS-MUSIC DRG sample. Thanks to the available statistics, we consider both the overall sample, as already done in previous works, and we also divide the sample into two different sub-groups: the first one, containing objects with $1<z<2$, where the dusty starburst population is expected to be the dominant component; the second one, containing objects with $2<z<4$, where relatively evolved galaxies are also represented in the sample. \begin{figure} \includegraphics[width=9cm]{grazianf08.ps} \caption{The angular distribution of selected DRGs in the GOODS-South Field. The symbols are coded according to the redshift: DRGs at $z\ge 2$ and at $z\le 2$ are shown by red triangles and blue circles, respectively; black dots refer to normal galaxies at all redshifts. For comparison, the size of the HDF is also shown.
The DRGs are clustered and not uniformly distributed over areas larger than the HDFs: this shows that the cosmic variance for DRGs is dramatic at small scales.} \label{radec} \end{figure} \subsection{Angular Two Point Correlation Function} We have used the Landy-Szalay estimator (\cite{ls}) to obtain the two-point correlation function (TPCF) in the angular coordinates ($\alpha$,$\delta$) for DRGs in the GOODS Field: \begin{equation} w(\theta)=\frac{GG(\theta)-2GR(\theta)+RR(\theta)}{RR(\theta)} \ , \end{equation} where $GG(\theta)$ is the number of observed galaxy pairs at distance between $\theta$ and $\theta+d\theta$, $GR(\theta)$ is the number of observed-random pairs and $RR(\theta)$ is the number of random-random pairs. We compute $GR$ and $RR$ as mean values of 1,000 simulated random catalogs. The random sample of galaxies is obtained by randomly generating the coordinates ($\alpha$,$\delta$) in the GOODS-CDFS field. Each random galaxy is then retained or rejected according to the magnitude limit at the selected position. This ensures that the selection function of observed DRGs is correctly reproduced, even in the presence of a complicated exposure map, like that of the GOODS survey (\cite{grazian}). Finally we correct the observed $w(\theta)$ taking into account the bias arising from the finite boundary of the sample (see the details in Appendix A). Errors on the angular correlation function, $\sigma_w$, are determined by Poisson statistics, through the relation \begin{equation} \sigma_w=\sqrt{\frac{1+w(\theta)}{GG(\theta)}} \ . \end{equation} We fit the angular correlation function (computed in annuli of increasing $\theta$) by a power-law relation, $w(\theta)=(\theta/\theta_0)^{-\delta}$, fixing $\delta=0.8$. Following \cite{croft} (see also \cite{croom,aerqs3}) the fit is carried out by using a Maximum Likelihood Estimator (MLE) based on Poisson statistics and unbinned data. A detailed description of the MLE can be found in Appendix A.
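A toy Python sketch of the Landy-Szalay estimator above (flat-sky approximation with a single random catalogue, no exposure-map masking; all inputs are illustrative):

```python
import math

def sep_hist(pts_a, pts_b, edges, auto=False):
    """Histogram of pair separations (flat-sky approximation)."""
    counts = [0] * (len(edges) - 1)
    for i, (xa, ya) in enumerate(pts_a):
        for j, (xb, yb) in enumerate(pts_b):
            if auto and j <= i:   # count each pair of the same catalogue once
                continue
            d = math.hypot(xa - xb, ya - yb)
            for k in range(len(counts)):
                if edges[k] <= d < edges[k + 1]:
                    counts[k] += 1
                    break
    return counts

def landy_szalay(gal, rnd, edges):
    """w(theta) = (GG - 2 GR + RR) / RR, counts normalised to total pairs."""
    ng, nr = len(gal), len(rnd)
    gg = sep_hist(gal, gal, edges, auto=True)
    gr = sep_hist(gal, rnd, edges)
    rr = sep_hist(rnd, rnd, edges, auto=True)
    w = []
    for k in range(len(edges) - 1):
        GG = gg[k] / (ng * (ng - 1) / 2.0)
        GR = gr[k] / (ng * nr)
        RR = rr[k] / (nr * (nr - 1) / 2.0)
        w.append((GG - 2.0 * GR + RR) / RR if RR > 0 else float("nan"))
    return w

# Toy input: 3 "galaxies" and 4 "randoms" on a unit grid
print(landy_szalay([(0, 0), (1, 0), (0, 1)],
                   [(0, 0), (2, 0), (0, 2), (2, 2)],
                   edges=[0.5, 1.5, 3.0]))
```

For this toy configuration $RR$ vanishes in the first bin (hence {\tt nan}) and $w=-1/6$ in the second; a real analysis would average $GR$ and $RR$ over many random catalogues, as done in the text.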
The results for the DRG TPCF are presented in Fig. \ref{clust} (large quadrant), together with the MLE fit and the corresponding 1$\sigma$ confidence intervals. Considering the interval $1\le\theta\le 100$ arcsec, we find a clustering scale of $\theta_0=3.19^{+2.48}_{-1.90}$ arcsec. The mean redshift and absolute magnitude for the clustered galaxies are $z_{\xi}=2.1$ and $M_I=-22.8$, respectively. The small quadrant of Fig. \ref{clust} shows the TPCF integrated in circles of increasing apertures $\theta$. We do not use this quantity to fit the best value for $\theta_0$, since errors are correlated in different bins of angular separation. However, we can obtain from its value an indication of the clustering strength: at $\theta=12$ arcsec we observe 23 pairs, while simulations of random distributions predict only 12 pairs, which is a detection at about 3$\sigma$; at $\theta=6$ arcsec we derive an excess of 7 pairs over 3 random, which represents a 4$\sigma$ detection. By looking at the integrated angular TPCF shown in the small quadrant of Fig. \ref{clust}, we notice that it is still significantly non-zero even at large scales ($\theta \approx 50-60$ arcsec), which are comparable to the angular size of the HDFs. This result confirms that the difference in the DRG number density found in previous surveys is due to both the cosmic variance and their strong clustering, whose effects can become dramatic when considering deep pencil beam surveys, which are conducted over small areas, like the HDFs or the Hubble Ultra Deep Field (HUDF, \cite{udf}).
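The quoted significances can be checked with simple Poisson statistics, taking the fluctuation of the expected random pairs as $\sqrt{RR}$ (here we read ``an excess of 7 pairs over 3 random'' as 10 observed pairs):

```python
import math

# (observed pairs, expected random pairs) at theta = 12" and theta = 6"
for obs, rand in [(23, 12), (10, 3)]:
    excess = obs - rand
    sigma = excess / math.sqrt(rand)   # Poisson fluctuation of the randoms
    print(f"{excess} excess pairs -> {sigma:.1f} sigma")
# -> 11 excess pairs -> 3.2 sigma
# -> 7 excess pairs -> 4.0 sigma
```

Both values are consistent with the $\sim$3$\sigma$ and 4$\sigma$ detections quoted in the text.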
To investigate the redshift evolution of the DRG clustering properties, we compute the correlation scale of the low- and high-redshift samples separately, and find clear evidence of strong evolution: we estimate a correlation scale of $\theta_0=3.69_{-3.35}^{+5.03}$ arcsec ($z_\xi=1.5$ and $M_I=-22.30$) for the low-redshift sample (76 galaxies), and $\theta_0=13.68_{-6.29}^{+7.84}$ arcsec ($z_\xi=2.7$, $M_I=-23.20$) for the high-redshift one (88 galaxies). It is well known that at small scales ($\theta\le 10$ arcsec) the TPCF is dominated by substructures, produced by the existence of multiple galaxies inside massive halos (see, e.g. \cite{lee}). This effect is also evident in Fig. \ref{radec}, where the presence of close-by galaxy pairs or triplets is clearly visible. To measure the clustering properties of the dark matter halos (DMHs) hosting DRGs, it is necessary to avoid the smallest scales, where the halo occupation distribution (HOD) is plausibly larger than unity. Using the total DRG sample, we obtain a correlation length of $\theta_0=5.89^{+3.74}_{-3.10}$ arcsec for $\theta\le 10$ arcsec, while in the interval $10\le \theta\le 100$ arcsec the TPCF is significantly weaker, with a MLE fit of $\theta_0=1.67^{+2.17}_{-1.50}$ arcsec (see the long-dashed lines in Fig. \ref{clust}). It is important to remark, however, that the redshift evolution is clearly detected at both scales, although the uncertainties become obviously much larger. At $\theta\le 10$ arcsec, indeed, the correlation length is $\theta_0=3.84^{+7.15}_{-3.46}$ arcsec at $1<z<2$ and $\theta_0=15.52^{+9.28}_{-7.60}$ arcsec at $2<z<4$. At $10\le \theta\le 100$ arcsec, the correlation length is $\theta_0=2.89^{+3.90}_{-2.65}$ arcsec at $1<z<2$ and $\theta_0=8.48^{+13.20}_{-6.72}$ arcsec at $2<z<4$.
Notice that in the following discussion we will use the clustering length obtained by the fit over the global range $1\le\theta\le 100$ arcsec, since it is a robust compromise against boundary effects at the largest scales and against the HOD effect at the smallest scales. \begin{figure} \includegraphics[width=9cm]{grazianf09.ps} \caption{{\em Large panel}: the differential angular TPCF (in a logarithmic scale) for the DRGs in the GOODS field (filled circles with 1$\sigma$ error bars). We also plot the MLE best fit power-law relation (solid line) with its 1$\sigma$ confidence interval (short-dashed line), as computed in the interval $1\le\theta\le 100$ arcsec: the corresponding correlation scale of DRGs is $\theta_0=3.19$ arcsec. The two dotted lines refer to the MLE best fits on limited intervals: $\theta\le 10$ arcsec and $\theta\ge 10$ arcsec. Note that at small scales the TPCF is enhanced by the presence of multiple galaxies in the same DMH, while at large scales the boundary effect may become critical. {\em Small quadrant}: the angular TPCF integrated over circles of increasing radii (filled circles with 1$\sigma$ error bars).} \label{clust} \end{figure} \subsection{Spatial clustering} To convert the angular correlation length to physical units we can invert the angular TPCF through the Limber equation (\cite{limber}), adopting the DRG redshift distribution presented in Fig. \ref{zhist}. Leaving the detailed calculations to Appendix A, we have: \begin{equation} w(\theta)=\frac{r_0^\gamma \theta^{1-\gamma} I(\gamma) \int_0^{\infty} (\frac{dN}{dz})^2 r(z)^{1-\gamma}(\frac{dz}{dr}) dz}{N^2_{\rm obj}} \ , \end{equation} where $I(\gamma)=3.67909$ when $\gamma=1.8$ is assumed. Using the value $\theta_0=3.19^{+2.48}_{-1.90}$ arcsec derived through the MLE fit to the angular TPCF, for the complete DRG sample we obtain a correlation length of $r_0=9.78^{+2.85}_{-3.24} h^{-1}$ Mpc.
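Numerically, the Limber inversion reduces to one-dimensional integrals over the redshift distribution. The Python sketch below uses a flat $\Lambda$CDM cosmology with illustrative parameters and a toy Gaussian $dN/dz$ in place of the observed distribution; it also uses the standard Limber kernel constant $I(\gamma)=\Gamma(1/2)\Gamma((\gamma-1)/2)/\Gamma(\gamma/2)$, which gives $I(1.8)\simeq 3.67909$ as quoted above:

```python
import math

H0, OM, OL, C = 70.0, 0.3, 0.7, 299792.458   # km/s/Mpc, flat LCDM, c in km/s

def hubble(z):
    return H0 * math.sqrt(OM * (1 + z) ** 3 + OL)

def comoving_distance(z, n=1000):
    """r(z) = c * int_0^z dz'/H(z'), midpoint rule, in Mpc."""
    dz = z / n
    return C * sum(dz / hubble((i + 0.5) * dz) for i in range(n))

def w_amplitude(r0, gamma, dndz, zmin, zmax, n=200):
    """Limber projection: A such that w(theta) = A * theta^(1-gamma), theta in rad."""
    i_gamma = math.gamma(0.5) * math.gamma((gamma - 1) / 2) / math.gamma(gamma / 2)
    dz = (zmax - zmin) / n
    num = norm = 0.0
    for i in range(n):
        z = zmin + (i + 0.5) * dz
        r = comoving_distance(z)
        # integrand (dN/dz)^2 r^(1-gamma) (dz/dr), with dz/dr = H(z)/c
        num += dndz(z) ** 2 * r ** (1 - gamma) * (hubble(z) / C) * dz
        norm += dndz(z) * dz   # N_obj for this (unnormalised) dN/dz
    return r0 ** gamma * i_gamma * num / norm ** 2

dndz = lambda z: math.exp(-0.5 * ((z - 2.1) / 0.6) ** 2)   # toy N(z)
print(w_amplitude(9.78 / 0.7, 1.8, dndz, 1.0, 4.0))        # r0 in Mpc for h = 0.7
```

With the actual redshift distribution of Fig. \ref{zhist} in place of the toy $dN/dz$, solving $A$ for the measured $\theta_0$ yields the quoted $r_0$.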
Using the same Limber equation, the corresponding comoving correlation lengths are $r_0=7.41^{+3.45}_{-4.84} h^{-1}$ Mpc and $r_0=13.36^{+2.99}_{-3.20} h^{-1}$ Mpc for the sub-samples at $1<z<2$ and $2<z<4$, respectively. We note that the TPCF for the higher-redshift sub-sample differs from the value obtained by \cite{daddi} for DRGs in the HDFS, although it is still marginally consistent because of the relatively large error budget. Note, however, that for their analysis they applied a bluer colour selection criterion ($J-Ks\ge 0.7$) than the one adopted here ($J-Ks\ge 1.3$). We verified that, by selecting in the GOODS region DRGs at $2\le z\le 4$ with their same colour cut, we obtain a sample of 232 galaxies, with a typical redshift of $z_{\xi}=2.9$, having a correlation length of $r_0=8.8\pm1.7 h^{-1}$ Mpc, which is comparable to the value provided by \cite{daddi} ($8.3\pm1.2 h^{-1}$ Mpc). A redder cut ($J-Ks\ge 1.3$) applied to DRGs in the HDFS actually results in a larger correlation length of $r_0=14.5^{+3.1}_{-3.7} h^{-1}$ Mpc (\cite{daddi}), which is consistent with our estimate. As a further comment, we also notice that the error associated with our estimate of $r_0$ for the whole sample ($\sim 3 h^{-1}$ Mpc) is slightly higher than the value quoted by \cite{daddi} for the DRGs in the HDFS, even if the samples have a different number of objects (197 DRGs in GOODS against 49 in the HDFS). This is due to the fact that we include in the error budget the effects of cosmic variance, which is the dominant effect in this kind of study and is not included in the error bars quoted for DRGs in the HDFS. It is interesting to compare these results with other estimates of the clustering strength for other related classes of objects. In order to avoid the dependence of the scale length $r_0$ on the power-law index $\gamma$, it can be useful to present the results in a non-parametric form.
This can be done by using the quantity $\bar{\xi}$, defined as the correlation function $\xi(r)=(r/r_0)^{-\gamma}$ integrated over a sphere of a given radius $r_{\rm max}$: \begin{equation} \bar{\xi}(r_{\rm max})=\frac{3}{r_{\rm max}^3}\int_0^{r_{\rm max}} \xi(x)x^2{\rm d}x \ . \end{equation} In general, the larger the scale on which the clustering is measured, the easier the comparison with the linear theory of structure evolution. Since in the following we want to compare our results with those obtained for different values of $\gamma$, we prefer to quote clustering amplitudes within $20 h^{-1}$ Mpc, a scale on which linearity is expected to hold to better than a few per cent. Choosing a large radius also reduces the effects of small-scale peculiar velocities and redshift measurement errors, which can be a function of redshift. Fig. \ref{xiz} compares the values of $\bar\xi(20h^{-1})$ that we obtained for DRGs in the GOODS field (summarised in Table \ref{clusttab}) to the corresponding estimates for other classes of objects, both at low and high redshift. It is immediately clear that the high-redshift ($z>2$) sample of DRGs is drawn from a remarkably highly clustered population, most likely more clustered than the $z<2$ DRG population. At $z=0$, the only galaxies having correlation lengths as large as 10-11 $h^{-1}$ Mpc (corresponding to $\bar\xi(20h^{-1})\sim 1$) are morphologically-selected giant ellipticals or radio-galaxies. \cite{guzzo} estimate $r_0=8.35\pm0.76h^{-1}$ Mpc for early-type galaxies with $M_B\le -19.5+5log(h)$ in the Pisces-Perseus super-cluster survey, while \cite{adami} derive a significantly smaller value ($r_0=7 h^{-1}$ Mpc with $\gamma =-1.79$) from the SSRS2 redshift survey. The discrepancy between these two measurements probably originates from the presence of the super-cluster in the first survey, which enhances the correlation function.
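For the power-law $\xi(r)=(r/r_0)^{-\gamma}$, the integral above has the closed form $\bar{\xi}(r_{\rm max})=[3/(3-\gamma)]\,(r_{\rm max}/r_0)^{-\gamma}$, which reproduces the $\bar\xi(20h^{-1})$ values quoted in Table \ref{clusttab}:

```python
def xi_bar(r0, gamma=1.8, rmax=20.0):
    """Volume-averaged correlation for a power-law xi(r) = (r/r0)^-gamma."""
    return 3.0 / (3.0 - gamma) * (rmax / r0) ** (-gamma)

# r0 in h^-1 Mpc, from the Limber fits quoted in the text
for label, r0 in [("all DRGs", 9.78), ("low-z DRGs", 7.41), ("high-z DRGs", 13.36)]:
    print(f"{label}: xi_bar(20/h Mpc) = {xi_bar(r0):.3f}")
# -> all DRGs: 0.690, low-z DRGs: 0.419, high-z DRGs: 1.209
```

The closed form follows directly from $\int_0^{r_{\rm max}} x^{2-\gamma}{\rm d}x = r_{\rm max}^{3-\gamma}/(3-\gamma)$.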
\cite{overzier} and \cite{rottgering} find that local radio-galaxies have large clustering lengths (see also \cite{peakcock}) and that the high degree of correlation between hosting ellipticals and luminous radio-sources suggests an interesting possible comparison for distant samples. For small groups of galaxies in the local Universe, the typical value of $\bar\xi$ has been measured by \cite{girardi,zandivarez,padilla}, and is again shown in Fig.\ref{xiz}. \cite{collins} report the results of the spatial two-point correlation function for the ROSAT-ESO Flux-Limited X-ray (REFLEX) galaxy cluster survey, finding $\bar\xi(20h^{-1})=2.29\pm0.50$ for rich clusters at $z\le 0.3$. More ordinary elliptical galaxies show a range of clustering strengths, strongly dependent on the absolute magnitude. We reproduce in Fig. \ref{xiz} the range corresponding to local elliptical galaxies from $M_B=-17$ to $M_B=-21$, taken from \cite{2dfgrs} and \cite{sdss}. At high redshifts, we also display the values observed for EROs (\cite{daddieros}) and for bluer DRGs (\cite{daddi}). This compilation of clustering strengths for a wide range of objects shows that DRGs are among the most clustered objects on galactic scales, and suggests that they might be related to the progenitors of similarly clustered objects at lower redshifts, such as EROs or local massive ellipticals. Unfortunately, a firm conclusion in this context is not straightforward, since we do not know the evolution of the bias parameter for this class of high-redshift objects. This point will be better discussed in the final section. \begin{figure} \includegraphics[width=9cm]{grazianf10.ps} \caption{ The integrated clustering strength $\bar\xi(20h^{-1}Mpc)$ as a function of redshift for different objects: DRGs, EROs, powerful radio-galaxies, ellipticals and galaxy groups/clusters. Filled squares show the results for low- and high-$z$ DRGs in the GOODS region, while the open square represents the whole DRG sample.
The solid lines show the predicted evolution of the clustering according to the {\em object-conserving} model, tuned to the DRGs at low- and high-$z$, while the dashed lines reproduce the clustering evolution according to the {\em merging model}. The plot suggests that high-redshift DRGs can be the progenitors of local ellipticals, but may evolve into more massive objects, like EROs at $z\sim 1$ and groups/clusters of galaxies in the local universe. The horizontal error bars show the redshift intervals for the DRGs in this work. Filled triangles show the values of the correlation strength for DRGs with $J-Ks\ge 0.7(AB)$ as estimated in the HDFS (\cite{daddi}) at $z=3.1$ and in the GOODS region at $z=2.9$ (this work).} \label{xiz} \end{figure} \begin{table} \caption[]{Clustering properties of DRGs} \begin{tabular}{lccccc} \hline \hline $Type$ & $r_0$ & $\gamma$ & $\bar\xi(20h^{-1})$ & $z$ & $M_I$ \\ & ($h^{-1}$ Mpc) & & & \\ \hline DRGs & $9.78^{+2.85}_{-3.24}$ & 1.8 & $0.690^{+0.403}_{-0.356}$ & 2.1 & -22.8 \\ low-$z$ DRGs & $7.41^{+3.45}_{-4.84}$ & 1.8 & $0.419^{+0.414}_{-0.357}$ & 1.5 & -22.3 \\ high-$z$ DRGs & $13.4^{+2.99}_{-3.20}$ & 1.8 & $1.209^{+0.530}_{-0.470}$ & 2.7 & -23.2 \\ $J-Ks\ge 0.7$ & $8.77^{+1.62}_{-1.70}$ & 1.8 & $0.657^{+0.112}_{-0.272}$ & 2.9 & -23.0 \\ \hline \hline \end{tabular} \label{clusttab} \end{table} \section{Summary and Discussion} In this paper we have presented an analysis of Distant Red Galaxies (DRGs) selected in the GOODS-South region. In particular, we have used the GOODS-MUSIC sample, compiled from a unique dataset that comprises accurate multi-wavelength coverage (14 bands from $0.3$ to $8 \mu m$) of a $Ks$-complete sample of $\sim$3000 galaxies, with accurate estimates of the photometric redshifts for {\em all} galaxies in the field.
From the GOODS-MUSIC sample, we have selected 179 DRGs according to the criterion proposed by \cite{franx}, $J-Ks\ge 1.3$, at a typical magnitude limit of $Ks=23.5(AB)$ and down to $Ks=23.8$ in a limited area. The wide and deep covered area (135 sq. arcmin), together with the extended SED information and the precision in photometric redshifts ($\sigma_z=0.06$), allows us to study the statistical properties of DRGs, like the redshift distribution, number density and clustering properties, at an unprecedented level. The derived number density is consistent with that found by \cite{labbe03}, with approximately 1 DRG per sq. arcmin. at $Ks=23.5$. The redshift distribution shows a smooth peak around $z\sim 2$, with extended tails both to $z=1$ and $z=4$. Bright DRGs ($Ks\le 22$) tend to dominate the $z\sim 1$ region, while apparently faint DRGs ($Ks > 22$) are distributed widely around $z\sim 2.0-3.5$. The two populations also have different intrinsic properties: low-redshift DRGs are slightly less luminous than their higher-$z$ counterparts ($<M_I>=-22.3$ and $<M_I>=-23.2$, respectively), and possibly slightly less massive ($<M_{star}>=8.15\cdot 10^{10} M_\odot$ and $<M_{star}>=9.90\cdot 10^{10} M_\odot$, respectively). In particular, we investigated the spatial distribution of DRGs through the Two-Point Correlation Function (TPCF) analysis. We find that DRGs from the overall sample are significantly clustered (4$\sigma$ detection), with a typical correlation length of $\theta_0=3.19^{+2.48}_{-1.90}$ arcsec, corresponding, through the Limber equation and the observed redshift distribution, to $r_0=9.78^{+2.85}_{-3.24}h^{-1}$ Mpc. We also find that the clustering strength of DRGs increases with the $J-Ks$ colour cut used for selection. Using the relatively large sample of DRGs provided by the GOODS-MUSIC sample, we divided the DRG sample into two redshift sub-groups, one with $1\le z\le 2$ and the other with $2\le z\le 4$.
The clustering of low-$z$ DRGs is significantly lower than that of the high-$z$ DRGs, with $r_0=7.41^{+3.45}_{-4.84}h^{-1}$ Mpc and $13.4^{+2.99}_{-3.20}h^{-1}$ Mpc, respectively. It is useful to stress here that this behaviour is not due to a physical evolution of the DRG population. It is the result of a selection criterion which provides a heterogeneous group of dusty starbursts and massive/evolved galaxies with different redshift distributions. Unfortunately, a direct comparison of the clustering properties of DRGs with those of other objects can be misleading, since the connection between these classes is not known a priori. However, it is possible to constrain the clustering evolution of the descendants of the DRG population using two extreme, simplified models, as proposed by \cite{matarrese} and \cite{moscardini} for the merging of galaxies in a $\Lambda$CDM hierarchical clustering scenario. In one case, named the {\em object-conserving model}, we assume that the observed DRGs do not undergo any subsequent phase of merging with other objects, including those of lower mass. This model, which is conceptually close to a sort of ``passive evolution'' scenario, assumes that the galaxies form at some characteristic redshift by some non-linear process which induces a bias parameter at that epoch, and that their subsequent motion is purely caused by gravity, following the continuity equation. An obvious consequence of this model is that the bias factor will not be constant for all time, but will tend to unity as time goes on, because the galaxies will be dragged around by the surrounding density fluctuations, populated by less clustered objects. This scenario, which corresponds to an extremely long merging or disruption time, provides an upper limit to the evolution of the clustering properties of DRG descendants, and is shown as thick solid lines in Fig.\ref{xiz}, after normalisation to the DRG values obtained in this paper.
On the other side, we use a {\em merging model}, where the - even more extreme - assumption is that galaxies continue the merging process down to the lowest redshifts, with the same (high) merger rate of their parent halos. This deliberately extreme model provides a lower limit on the evolution of the clustering properties of DRG descendants, and is shown as dashed lines in Fig.\ref{xiz}. These theoretical predictions have been obtained adopting the standard $\Lambda CDM$ power spectrum, normalised to reproduce the local cluster abundance ($\sigma_8=0.9$). Although the error budget on the estimate of $\bar\xi(20h^{-1})$ for the two DRG samples is still relatively large, we can use these two limiting theoretical predictions to attempt a physical interpretation of our results. First, the observed value of $\bar\xi(20h^{-1})$ for the low-$z$ DRGs is outside the range predicted for the evolution of the higher-$z$ sample: this suggests that it is unlikely that the two samples are drawn from the same population observed at two different stages of evolution. The low-redshift range predicted for the DRG evolution suggests that high-redshift DRGs (i.e. those typically selected at $Ks>22$, see Fig.\ref{zhist}) likely represent the progenitors of the more massive galaxies in the local Universe, i.e. the more luminous ellipticals, and might mark the regions that will later evolve into structures of intermediate mass, like groups or small clusters. On the other hand, low-redshift DRGs (i.e. those typically selected at $Ks<22$) will likely evolve into slightly less massive field galaxies, approximately around the characteristic luminosity $L^*$ of local ellipticals. Our observations provide further evidence for the so-called ``downsizing'' scenario, which has emerged in many different aspects of high-redshift galaxy studies and indicates that more massive galaxies formed preferentially at higher redshifts than less massive ones.
Here we find the same trend, since high-redshift DRGs are more clustered, more luminous, and most likely to evolve into more massive galaxies than their lower-$z$ counterparts. \begin{acknowledgements} It is a pleasure to thank the GOODS Team for providing all the imaging material available worldwide. Observations were carried out using the Very Large Telescope at the ESO Paranal Observatory under Program IDs LP168.A-0485 and ID 170.A-0788. We are grateful to the referee for useful and constructive comments. \end{acknowledgements}
\section{Introduction} \label{sec:intro} Since Bell's paper \cite{Bell}, entanglement has been studied and explored in depth. It would not be an exaggeration to say that the field of quantum information emerged from extensive studies of the phenomenon of entanglement. Entanglement has been used in many information-processing applications in which it either yields an advantage over the classical setting, e.g., in communication complexity \cite{Buhrman-rev}, or where a classical counterpart simply does not exist, e.g., in quantum key distribution (QKD) \cite{Gisin-etal-QC-review}, its device-independent variant (DIQKD) \cite{acin-2007-98}, teleportation, superdense coding \cite{Nielsen-Chuang}, or Pseudo-Telepathy (PT) \cite{Brassard-pseudo, Bras_pt_survey}. Although quantum theory allows for violations of Bell inequalities (BI), in certain cases the violations cannot reach the maximal algebraically possible value. Tsirelson was the first to find such upper bounds on Bell values for quantum theory \cite{Tsirelson-bound} and to relate them to Grothendieck's inequality. Much research has been done to explain why quantum mechanics does not lead to ``algebraic'' violations of Bell inequalities \cite{PR,ravi_qmnsbox}. In \cite{Oppenheim-Wehner}, Wehner and Oppenheim argue that the trade-off between steerability and uncertainty determines how non-local a theory is. In \cite{Cleve}, Cleve et al. gave an upper bound on the winning probability for XOR games in the quantum setting; their bound depends on the classical winning probability and Grothendieck's constant. Note that an XOR game is a non-local game and that non-local games form a subset of general Bell inequalities \cite{nonlocal_rev}. The approach to bounding quantum violations via a Grothendieck-type constant is now quite common and reasonably well understood. It leads to estimates for Bell values of the form $\beta_{qm} \leq K_G \beta_{cl}$ \cite{junge_ubviol}. 
In this work we develop a different strategy, in which the Bell value for a given inequality is bounded in terms of the difference between the maximal algebraic value ($\beta^{max}_{alg}$) and the maximal deterministic value ($\beta^{max}_{det}$) of the inequality in question. Specifically, we study quantitatively Bell inequalities with $2\times n$ inputs (henceforth $2\times n$ BI) and give a {\it universal bound} on the quantum Bell values of these inequalities. To find this bound for $2\times n$ \,BI, we introduce the notion of the {\it fraction of determinism} (FOD) and show that it depends only on the number of outcomes Alice and Bob have at their sites. We claim that the presence of FOD prevents the quantum Bell value from attaining the maximal algebraic value of a Bell-type inequality. Our paper is inspired by Gisin et al. \cite{GisinMS2006-pseudo}, who studied certain Bell inequalities (Pseudo-Telepathy) for which quantum resources achieve the algebraic violation. They show that to achieve such violations a minimum of $3\times 3$ inputs is required. In other words, there is no $2\times n$ BI for which quantum theory attains the algebraic violation. Here we uncover the heart of this effect -- the fraction of determinism -- and are able to give a quantitative bound for it. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{tri_thm.pdf} \caption{Pictorial representation of different bounds of $||\sigma-\rho||_1$. Triangle inequality gives an upper bound $2-\epsilon$, whereas {\it reverse triangle inequalities} give lower bounds $2-2\sqrt{2\epsilon}$ for general quantum states and $2-2\epsilon$ for classical (or commuting) states. \label{fig:trian_conj}} \end{figure} While looking for a lower bound for FOD, we proved a fundamental property of quantum states which is interesting in its own right. Namely, if $\rho_1$ and $\rho_2$ are far from $\sigma$, then any convex mixture of them is also far from $\sigma$. 
More precisely, if $\Delta_1=||\rho_1-\sigma||_1\ge 2-\epsilon$ and $\Delta_2=||\rho_2-\sigma||_1\ge 2-\epsilon$ for some $\epsilon \ge 0$, then, for all $p \in [0,1]$, \begin{equation} \Delta=||p\rho_1+(1-p)\rho_2-\sigma||_1\ge 2-\mathcal{O}(\sqrt{\epsilon}), \end{equation} where $||\rho||_1 \overset{\text{def}}{=}{\rm Tr}\sqrt{\rho^{\dagger}\rho}$. This inequality is in a sense dual to the triangle inequality, since it bounds the trace distance between $\rho$ and $\sigma$ from below. Accordingly, we call it the ``reverse triangle inequality'' (RTI). Interestingly, it turns out that for classical states (commuting density matrices) one can find a lower bound on $\Delta$ with a defect term depending linearly on $\epsilon$, while for non-commuting quantum states one cannot in general have a dependence better than $\mathcal{O}(\sqrt{\epsilon})$. The second fundamental property which we use here is related to so-called steering \cite{wiseman_steering}. Namely, by making a measurement on one site of an entangled state, one can create only those ensembles which give rise to the same density matrix, namely the reduced state of the entangled state. This implies that if we consider two such ensembles, there must be at least two elements (one from each ensemble) that are not perfectly distinguishable. Apparently, it has not been studied to what extent they have to be indistinguishable. Here, by using the reverse triangle inequality, we are able to give a robust quantitative bound (Lemma \ref{lemma-x}), which is independent of the dimension of the underlying Hilbert space. We shall use it further to give a bound for FOD, which in turn will allow us to bound quantum violations of $2\times n$ Bell inequalities. The paper is organized in the following manner. In section \ref{sec:prelim}, we introduce necessary definitions and the concept of FOD. 
In sections \ref{sec:summary} and \ref{sec:FODinQM}, we present respectively a summary of our main results and sketches of their derivations. A special case when Bob has two inputs at his site with binary outcomes is analyzed in section \ref{sec:binary-bob}. For this special case, we have explicitly calculated a bound for FOD and for the classical fraction. Finally, we conclude our work in section \ref{sec:conclusions}. Details of most proofs are relegated to the Appendices. \section{Preliminaries} \label{sec:prelim} \subsection{Definitions} {\bf Box:} Consider two distant parties, Alice and Bob, sharing a physical system. Each of them performs measurements labeled as $x$ and $y$, respectively. Their corresponding outcomes are labeled as $a$ and $b$. Then, a box is defined as a family of joint probability distributions $p(a,b|x,y)$, i.e., $P=\{p(a,b|x,y)\}$. By a {\it non-signalling box} (NS-box) we mean a box which satisfies the following conditions: \begin{flalign} p(b|y)=\sum_a p(a,b|x,y)=\sum_a p(a,b|x',y) \hspace{1mm} \forall b,x,x' \hspace{1mm} \text{and} \hspace{1mm}y \\ \nonumber p(a|x)=\sum_b p(a,b|x,y)=\sum_b p(a,b|x,y') \hspace{1mm}\forall a,x,y' \hspace{1mm} \text{and} \hspace{1mm}y \end{flalign} A {\it local box} is defined as a box whose joint probabilities can be expressed as \begin{equation} p(a,b|x,y)=\int_{\Lambda} q(\lambda)p(a|x,\lambda) p(b|y,\lambda) d\lambda , \end{equation} where the hidden variable $\lambda$ is distributed according to some probability density $q(\lambda)$. Such boxes satisfy (by definition, see below) every Bell inequality. We say that a box is a {\it quantum box} (QM box) when its conditional probabilities are realized as $p(a,b|x,y)={\rm Tr}(M^x_a \otimes N^y_b \,\rho_{AB})$, where $\rho_{AB}$ is a quantum state shared between parties A and B, and $M^x_a$ and $N^y_b$ are measurements for A and B respectively. Note that, for each input $x$ and $y$, $\{M^x_a\}$ and $\{N^y_b\}$ are POVMs. In this work, we consider only NS-boxes. 
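As a concrete sanity check of these definitions (an illustration added here, not part of the formal development), one can verify the non-signalling conditions for the Popescu--Rohrlich (PR) box \cite{PR}, defined for binary inputs and outputs by $p(a,b|x,y)=1/2$ if $a\oplus b = xy$ and $0$ otherwise. A minimal Python sketch:

```python
# Verify the non-signalling conditions for the PR box:
# p(a,b|x,y) = 1/2 if a XOR b == x*y, else 0 (binary inputs and outputs).

def pr_box(a, b, x, y):
    return 0.5 if (a ^ b) == (x & y) else 0.0

def marginal_alice(a, x, y):
    # p(a|x), computed for a fixed y; non-signalling => independent of y
    return sum(pr_box(a, b, x, y) for b in (0, 1))

def marginal_bob(b, y, x):
    # p(b|y), computed for a fixed x; non-signalling => independent of x
    return sum(pr_box(a, b, x, y) for a in (0, 1))

# Alice's marginal p(a|x) must not depend on Bob's input y, and vice versa.
for a in (0, 1):
    for x in (0, 1):
        assert marginal_alice(a, x, 0) == marginal_alice(a, x, 1) == 0.5
for b in (0, 1):
    for y in (0, 1):
        assert marginal_bob(b, y, 0) == marginal_bob(b, y, 1) == 0.5
print("PR box is non-signalling; all marginals equal 1/2")
```

Both marginals equal $1/2$ independently of the other party's input, so the PR box is a valid NS-box (and, as recalled below, a maximally non-local one).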
{\bf Bell Inequalities:} Let $\mathcal{S}\equiv \{s^{a,b}_{x,y}\}$ be a real vector and let $P=\{p(a,b|x,y)\}$ be a box. Then $\mathcal{S}\cdot P \leq \beta_T$ is called a {\it Bell inequality} if it is satisfied by every local box $P$ \cite{nonlocal_rev}. Note that we can rescale the inequality so that the entries of $\mathcal{S}$ are nonnegative reals. {\bf Fraction of determinism (FOD):} Consider a non-signalling box $P$. One can always express it as a convex combination $P=(1-c)X+c D$, where $X$ is an NS-box and $D$ is a deterministic box. The {\it fraction of determinism} of $P$ is defined as \begin{equation} FOD:= \max_{D, X} \{c \,|\, P=(1-c)X+cD \} \end{equation} {\bf Classical Fraction (CF):} A non-signalling box $P$ can always be expressed as a convex combination $P=(1-\sum_i c_i) X+ \sum_i c_i D_i$, where $X$ is an NS-box and the $D_i$ are deterministic boxes. Let $c_{cf}=\sum_i c_i$; then the classical fraction of a box $P$ is obtained by taking the maximum of $c_{cf}$ over all possible decompositions of the above form, i.e., \begin{equation} CF(P)= \max_{\{D_i\}, X} \{c_{cf}| P=(1-\sum_i c_i) X+ \sum_i c_i D_i\} \end{equation} Note that FOD, CF and the cost of non-locality $c_{nl}$ \cite{Brunneretal2011} satisfy the following relations: \begin{equation} FOD\leq CF=1-c_{nl} \end{equation} \subsection{The Role of the Fraction of Determinism} In the classical theory, FOD is always nonzero. Indeed, consider the maximally mixed (uniformly random) box, which has the smallest fraction of determinism: it is $1/k_A k_B$, where $k_A$ is the number of Alice's outcomes and $k_B$ is the number of Bob's outcomes (assuming that the number of outcomes is the same for all observables). In the quantum case, the set of states is larger, hence we might in principle have states with zero fraction of determinism. However, this is not the case, as shown here. On the other hand, PR-boxes \cite{PR} are completely noiseless and they do not have any fraction of determinism. 
The latter is equivalent to saying that they provide perfectly secure correlations. Indeed, the fraction of determinism is at the same time the fraction that can be known by a third party; equivalently, one can say that the FOD in a given theory restricts the Bell value from reaching its maximal algebraic value. We present the following proposition, which captures this idea: if a box has some fraction $c$ of ``determinism'', then this fraction implies a bound on the maximal value of any linear function of the box (in particular, of Bell-type expressions). \begin{prop} \label{prop-c} Consider a box $P=\{p(a,b|x,y)\}$ with inputs $x\in \{x_1,\ldots,x_n\}$ on Alice's side and $y\in \{y_1,\ldots,y_m\}$ on Bob's side. Suppose that we can find indices $a^{(1)}_0, \ldots, a^{(n)}_0$, $b^{(1)}_0, \ldots, b^{(m)}_0$ such that \begin{equation} \forall i, j \hspace{5mm} p(a^{(i)}_0,b^{(j)}_0|x_i,y_j)\geq c \label{eq:cond} \end{equation} Then for any linear function $\beta$ of the box, we have \begin{equation} \label{eq:lin_func} \beta(P) \leq \beta_{alg}^{\max} - c(\beta_{alg}^{\max} -\beta_{det}^{\max}) \end{equation} where $\beta_{alg}^{\max}$ is the maximum value over all boxes, while $\beta_{det}^{\max}$ is the maximum over all classical deterministic boxes. \end{prop} This follows from the fact that any such box can be expressed as a convex combination of a deterministic box and some other box, i.e., $P=cD+(1-c)X$, and from simply taking the maximal value of $\beta$. \section{Summary of the Results} \label{sec:summary} We give a {\it universal bound} on $2\times n$ Bell inequalities. Specifically, our main result is a bound on FOD for the $2\times n$ BI scenario, showing that it depends only on the number of outcomes of the two parties. From Proposition \ref{prop-c} we know that this gives a universal bound for any linear function. A summary of our main results is as follows. 
\begin{thm}\label{thm:main} For input $2\times n$, the fraction of determinism of a QM box is bounded by the following quantity: \begin{equation} c\geq {0.1134\over 2k\hspace{1mm} l \hspace{1mm}l_1\hspace{1mm} l_2} \end{equation} Here, $k=\max\{|x_1|,\ldots,|x_n|\}$ and $l=\max\{l_1=|y|,l_2=|y'|\}$, where $\{x_1,\ldots,x_n\}$ are the inputs on Alice's side while $\{y, y'\}$ are the inputs on Bob's side. \end{thm} Here, by $|z|$ we denote the number of outcomes an observable $z$ takes. To prove the above theorem we need the following fundamental property of quantum states. \begin{thm} \label{thm:szarek} {\bf (Reverse Triangle Inequality)} Let $\epsilon \geq 0$ and assume that the states $\rho_i, \sigma$ satisfy \begin{equation} \label{eq:condtn} \|\rho_i -\sigma\|\geq 2 - \epsilon \end{equation} for $i=1,\ldots,l$. Then, for any probability distribution $\{p_i\}_{i=1}^l$, \begin{itemize} \item[1)] For any states $\rho_i$, $\sigma$ satisfying \eqref{eq:condtn} \begin{equation} \label{eq:szarek} \|\sum_{i=1}^l p_i \rho_i - \sigma\| \geq 2 - 2\sqrt{l\epsilon} \end{equation} \item[2)] For commuting states $\rho_i$, $\sigma$ satisfying \eqref{eq:condtn} \begin{equation} \|\sum_{i=1}^l p_i \rho_i - \sigma\| \geq 2 - l\epsilon \end{equation} \item[3)] There exist three non-commuting states $\rho_1, \rho_2$ and $\sigma$ satisfying \eqref{eq:condtn} such that \begin{equation} \| {\rho_1+ \rho_2 \over 2} - \sigma\| \leq 2-\sqrt{2\epsilon} \end{equation} \end{itemize} \end{thm} {\bf Remark}: The third assertion says that, in the non-commuting case, $2-\sqrt{2\epsilon}$ is essentially the best possible bound one can hope to achieve. Hence, one cannot have a lower bound better than $2-\mathcal{O}(\sqrt{\epsilon})$. Using the above results, one can find a lower bound for FOD in the CHSH \cite{nonlocal_rev} case ($k=l=2$, $\beta_{alg}^{\max}=4$ and $\beta_{det}^{\max}=2$): $c \geq 3.5438 \times 10^{-3}$. This results in a bound on the CHSH value for quantum theory. 
\begin{flalign} \label{beta_thm} \nonumber \beta_{CHSH}^{qm} &\leq \beta_{alg}^{\max} - c(\beta_{alg}^{\max} -\beta_{det}^{\max}) &\\ &\leq 4-3.5438 \times 10^{-3}(4-2)=3.9929 \end{flalign} A more direct approach gives improved bounds in terms of FOD and CF: \begin{equation} \beta_{FOD}(P)\le 4-\frac{0.1096\times 2}{4}=3.9452 \end{equation} \begin{equation} \beta_{CF}(P)\le 4-\frac{0.1123\times 2}{4}=3.9439 \end{equation} This is elaborated in section \ref{sec:binary-bob}. It is interesting to note that we can also use \eqref{eq:lin_func} to roughly upper-bound $\beta$ in the classical theory. For CHSH, in the case of the maximally mixed box ($c={1\over k_A k_B}={1\over4}$), we get \begin{equation} \beta_{CHSH}^{cl} \leq 4-{1\over 4}(4-2)=3{1\over 2} \end{equation} We realize that these are weak bounds, but the importance of this study lies in their generality: they are valid for {\em any} Bell inequality. In the following section we shall find a bound on $c$ for quantum theory and derive our main results. Most of the proofs are relegated to Appendix \ref{app:FODinQM}. We assume that Bob has 2 observables $\{y,y'\}$, i.e., $m=2$. \section{Fraction of Determinism in QM} \label{sec:FODinQM} We start with a proposition in which we redefine FOD more explicitly for QM boxes; this will lead to a lower bound that can be used in Proposition \ref{prop-c}. 
\begin{prop} For a QM-box with $2\times n$ inputs, the following quantity $c_0$ satisfies \eqref{eq:cond} \begin{equation} \label{classicbound} c_0=\inf_{\xi,\xi',\{X_r\}} \max_{r,i,j} \min\{ p_i {\rm Tr}(X_r \rho_i), q_j{\rm Tr}(X_r \sigma_j)\} \end{equation} where the infimum is taken over all ensembles $\xi=\{(p_i,\rho_i)\}_{i=1}^{|y|}$, $\xi'=\{(q_j,\sigma_j)\}_{j=1}^{|y'|}$ satisfying \begin{equation} \label{eq:ensemble} \sum_ip_i \rho_i = \sum_j q_j \sigma_j \end{equation} and over all POVMs $\{X_r\}_{r=1}^k$, i.e., sets of operators satisfying $\sum_r X_r=I$, $X_r\geq 0$, with $k=\max\{|x_1|,\ldots,|x_n|\}$. \end{prop} {\bf Proof}: By hypothesis, our quantum box is realized via POVMs $\{M^x_a\}$ (with $x\in \{x_1,\ldots,x_n\}$) on Alice's side, two POVMs $\{N^y_b, N^{y'}_{b'}\}$ on Bob's side, and a shared quantum state $\rho_{AB}$. Depending on Bob's measurement choice ($y$ or $y'$), an ensemble $\{p(b|y),\rho_b\}_{b=1}^{|y|}$ or $\{p(b'|y'),\sigma_{b'}\}_{b'=1}^{|y'|}$ is created at Alice's site, where $p(b|y)$ and $p(b'|y')$ are marginal conditional probabilities. More specifically, $p(b|y)\rho_b = {\rm Tr}_B\big((I\otimes N^y_b) \rho_{AB}\big)$ and similarly $p(b'|y')\sigma_{b'} = {\rm Tr}_B\big((I\otimes N^{y'}_{b'}) \rho_{AB}\big)$. These ensembles satisfy \begin{equation} {\rm Tr}_B(\rho_{AB})=\sum_b p(b|y)\rho_b =\sum_{b'} p(b'|y')\sigma_{b'} , \end{equation} i.e., a condition of the type \eqref{eq:ensemble}. If now $\{X_r\}$ is any of Alice's POVMs (say, $\{M^x_a\}$), it is apparent that the expressions $p_i {\rm Tr}(X_r \rho_i), q_j{\rm Tr}(X_r \sigma_j)$ coincide with the conditional probabilities $p(a,b|x,y), p(a,b'|x,y')$ appearing in \eqref{eq:cond}. Now pick a triplet $(r, i, j)$ such that the probabilities of the corresponding outcomes are maximal; one can see that these indices lead to the choices of $a, b$ that yield \eqref{eq:cond} with $c=c_0$. $\blacksquare$ Next, we will give an estimate on this quantity. 
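The steering construction used in the proof above can be checked numerically. The following sketch (assuming numpy; the Bell state and the $Z$/$X$ bases are illustrative choices, not taken from the paper) verifies that Bob's two measurement choices create different ensembles on Alice's side which nevertheless average to the same reduced state, as in \eqref{eq:ensemble}:

```python
import numpy as np

# Steering check for |Phi+> = (|00> + |11>)/sqrt(2): Bob's choice of
# measurement basis (Z or X) creates different ensembles on Alice's side,
# but both must average to the same reduced state Tr_B(rho_AB).

ket0 = np.array([1.0, 0.0]); ket1 = np.array([0.0, 1.0])
phi_plus = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho_AB = np.outer(phi_plus, phi_plus.conj())

def alice_ensemble(bob_basis):
    """Unnormalised Alice states p(b|y)*rho_b = Tr_B[(I x N_b) rho_AB]."""
    out = []
    for v in bob_basis:
        N = np.outer(v, v.conj())
        M = np.kron(np.eye(2), N) @ rho_AB
        # partial trace over B: view the 4x4 matrix as indices (iA,iB,jA,jB)
        T = M.reshape(2, 2, 2, 2)
        out.append(np.trace(T, axis1=1, axis2=3))
    return out

z_basis = [ket0, ket1]
x_basis = [(ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)]

avg_z = sum(alice_ensemble(z_basis))   # ensemble created by measuring y
avg_x = sum(alice_ensemble(x_basis))   # ensemble created by measuring y'
assert np.allclose(avg_z, avg_x)           # same average state
assert np.allclose(avg_z, np.eye(2) / 2)   # the reduced state I/2
```

The two ensembles ($\{|0\rangle,|1\rangle\}$ vs.\ $\{|+\rangle,|-\rangle\}$, each with weight $1/2$) are different, yet both reproduce the reduced state $I/2$.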
In this way we shall obtain a universal quantum bound for any $2\times n$ input inequality, in terms of the difference between the classical bound and the maximal algebraic bound (\ref{eq:lin_func}). In general, $c$ might be zero. But we show that for $2\times n$ input boxes it is indeed bounded away from zero. To show this, one needs to prove, for some choice of $i,j$ and for any POVM element $X_r$, that ${\rm Tr}(X_r \rho_i)$ and ${\rm Tr}(X_r \sigma_j)$ are bounded away from zero. Note that this indeed happens when the POVM cannot distinguish the two states $\rho_{i}$ and $\sigma_j$ too well. We elaborate this point through the following lemma. \begin{lemma} Suppose that $||\rho-\sigma||\leq 2- 2 k \epsilon$. Then for any POVM $\{X_r\}_{r=1}^k$ there exists an outcome $r_0$ such that \begin{equation} {\rm Tr} (X_{r_0} \rho) \geq \epsilon \quad \hbox{and} \quad {\rm Tr} (X_{r_0} \sigma) \geq \epsilon \label{eq:error} \end{equation} \label{lem:error} \end{lemma} Note that using this lemma we can replace the conditional probabilities by $\epsilon$ and get rid of the maximization over $r$ and the optimization over $\{X_r\}$. The above lemma asserts that there exists at least one outcome $r$ for each input and each pair $(\rho_i,\sigma_j)$ such that the corresponding probabilities are lower-bounded by $\epsilon_{ij}$, i.e., ${\rm Tr} (X_{r} \rho_i) \geq \epsilon_{ij}$ and ${\rm Tr} (X_{r} \sigma_j) \geq \epsilon_{ij}$. Therefore one can simplify the expression for FOD as follows. \begin{align} \nonumber &c_0\ge c_1=\inf_{\xi,\xi',\{X_r\}} \max_{r,i,j} \min\{ p_i \epsilon_{ij}, q_j\epsilon_{ij}\} &\\ &c_1\ge \frac{1}{2k} \inf_{\xi,\xi'}\max_{i,j} \min\{p_i (2- ||\rho_i-\sigma_j||), q_j (2- ||\rho_i-\sigma_j||)\}& \label{eq:c1} \end{align} where we assume $||\rho_i-\sigma_j||\leq 2- 2 k \epsilon_{ij}$. Having simplified FOD, we will now state and apply a theorem which is both vital for our results and important in its own right. 
\addtocounter{thm}{-1} \begin{thm} {\it (Restatement)} Let $\epsilon \geq 0$ and assume that the states $\rho_i, \sigma$ satisfy \begin{equation} \label{eq:condtn1} \|\rho_i -\sigma\|\geq 2 - \epsilon \end{equation} for $i=1,\ldots,l$. Then, for any probability distribution $\{p_i\}_{i=1}^l$, \begin{itemize} \item[1)] For any states $\rho_i$, $\sigma$ satisfying \eqref{eq:condtn1} \begin{equation} \|\sum_{i=1}^l p_i \rho_i - \sigma\| \geq 2 - 2\sqrt{l\epsilon} \end{equation} \item[2)] For commuting states $\rho_i$, $\sigma$ satisfying \eqref{eq:condtn1} \begin{equation} \|\sum_{i=1}^l p_i \rho_i - \sigma\| \geq 2 - l\epsilon \end{equation} \item[3)] There exist three non-commuting states $\rho_1, \rho_2$ and $\sigma$ satisfying \eqref{eq:condtn1} such that \begin{equation} \| {\rho_1+ \rho_2 \over 2} - \sigma\| \leq 2-\sqrt{2\epsilon} \end{equation} \end{itemize} \end{thm} We relegate the proof of the above theorem to Appendix \ref{app:fid_prof}. Using this theorem, we argue that for two ensembles (\ref{eq:ensemble}) which give rise to the same density matrix, $||\rho_{i_0}-\sigma_{j_0}||$ must be bounded away from 2 for some $i_0,j_0$. In general, we have the following lemma. \begin{lemma}\label{lemma-x} For two ensembles $\{(p_i,\rho_i)\}_{i=1}^{|y|}$, $\{(q_j,\sigma_j)\}_{j=1}^{|y'|}$ satisfying \begin{equation} ||\sum_ip_i \rho_i - \sum_j q_j \sigma_j||\leq x \end{equation} there exist $i_0$ and $j_0$ such that \begin{equation} ||\rho_{i_0}-\sigma_{j_0}||\leq 2 - \epsilon \end{equation} where $\epsilon$ is the solution of the following equation \begin{equation}\label{eq-x} 2 - 2\sqrt{l_1l_2\epsilon}=x \end{equation} with $|y|=l_1$ and $|y'|=l_2$. \end{lemma} We are now almost done. However, it may still happen that, for the chosen pair of indices, the probabilities $p_{i_0}$, $q_{j_0}$ are small, and then we will not have a bound for the whole quantity in \eqref{eq:c1}. 
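Part 1) of the reverse triangle inequality can be sanity-checked numerically. In the sketch below (assuming numpy; the nearly orthogonal pure states are illustrative choices, not from the paper), several states far from $\sigma$ are mixed at random and the bound $2-2\sqrt{l\epsilon}$ is verified:

```python
import numpy as np

rng = np.random.default_rng(0)

def trace_norm(M):
    # ||M||_1 = sum of |eigenvalues| for a Hermitian matrix M
    return np.abs(np.linalg.eigvalsh(M)).sum()

sigma = np.diag([1.0, 0.0])          # sigma = |0><0|

# Random pure qubit states with small overlap with |0>, so that each
# ||rho_i - sigma||_1 is close to the maximal value 2.
l = 3
rhos, eps = [], 0.0
for _ in range(l):
    t = 0.05 * rng.random()          # small amplitude on |0>
    psi = np.array([np.sqrt(t), np.sqrt(1 - t)])
    rho = np.outer(psi, psi)
    rhos.append(rho)
    eps = max(eps, 2.0 - trace_norm(rho - sigma))   # smallest valid epsilon

p = rng.dirichlet(np.ones(l))        # random mixing probabilities
mix = sum(pi * r for pi, r in zip(p, rhos))

lhs = trace_norm(mix - sigma)
assert lhs >= 2.0 - 2.0 * np.sqrt(l * eps)   # part 1) of the theorem
print(f"||mix - sigma||_1 = {lhs:.4f} >= {2 - 2*np.sqrt(l*eps):.4f}")
```

The mixture stays far from $\sigma$, as the theorem guarantees; repeating with other random seeds exercises the same bound.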
Therefore we need to truncate the ensembles so that the minimal probability is bounded away from zero. Such truncated ensembles no longer give rise to the same density matrix; however, their density matrices are still close, provided we have not truncated too much. \begin{lemma} Suppose we are given two ensembles \begin{equation} \ecal_1= \{p_i,\rho_i\}_{i=1}^{l_1},\quad \ecal_2= \{q_j,\sigma_j\}_{j=1}^{l_2} \label{eq:ens2} \end{equation} which give rise to the same density matrix. Let $p_i$ and $q_j$ be arranged in nonincreasing order. Let us denote \begin{equation} \delta_1= 1- \sum_{i=1}^{\tilde{l_1}} p_i,\quad \delta_2= 1- \sum_{j=1}^{\tilde{l_2}} q_j \end{equation} Consider the new ensembles \begin{equation} \tilde\ecal_1= \{\tilde p_i,\rho_i\}_{i=1}^{\tilde{l_1}},\quad \tilde\ecal_2= \{\tilde q_j,\sigma_j\}_{j=1}^{\tilde{l_2}} \label{eq:ens2t} \end{equation} where $\tilde p_i = p_i/(1-\delta_1)$, $\tilde q_j = q_j/(1-\delta_2)$. Then the new ensembles satisfy \begin{equation} \|\sum_{i=1}^{\tilde{l_1}}\tilde p_i \rho_i - \sum_{j=1}^{\tilde{l_2}}\tilde q_j \sigma_j\| \leq \frac{2 \max\{\delta_1,\delta_2\}}{1 - \min\{\delta_1,\delta_2\}} \end{equation} \label{lem:cut} \end{lemma} Thus we can use the new ensembles to show that there exists a pair of states $\rho_{i_0}$ and $\sigma_{j_0}$ with $||\rho_{i_0}-\sigma_{j_0}||$ bounded away from 2, and that at the same time the weights of the states satisfy $p_{i_0} \geq p_{\tilde{l_1}}$, $q_{j_0} \geq q_{\tilde{l_2}}$. Thus, adjusting $\tilde{l_1}$ and $\tilde{l_2}$ properly, we can simultaneously secure a bound on both the weights and the norm. \medskip We can now prove our final result. 
\addtocounter{thm}{-2} \begin{thm} {\it (Restatement)} For input $2\times n$, the fraction of determinism of a QM box is bounded by the following quantity \begin{equation} \nonumber c\geq {0.1134\over 2k\hspace{1mm} l \hspace{1mm}l_1\hspace{1mm} l_2} \end{equation} Here, $k=\max\{|x_1|,\ldots,|x_n|\}$ and $l=\max\{l_1=|y|,l_2=|y'|\}$, where $\{x_1,\ldots,x_n\}$ are the inputs on Alice's side while $\{y, y'\}$ are the inputs on Bob's side. \end{thm} {\bf Proof.} First, we truncate the ensembles appropriately; we use the notation of Lemma \ref{lem:cut}. Let $\mu>2$ be a parameter. Let us choose the largest $\tilde{l_1}$ and $\tilde{l_2}$ such that $p_{\tilde{l_1}} > \frac{1}{l\mu}$ and $q_{\tilde{l_2}}>\frac{1}{l\mu}$, where $l=\max\{l_1,l_2\}$. Then $\delta_1$ and $\delta_2$ are not larger than $l \times\frac{1}{l\mu}=\frac{1}{\mu}$. Consequently, we get the following estimate for the truncated ensembles: \begin{equation} \|\sum_{i=1}^{\tilde{l_1}} \tilde p_i \rho_i - \sum_{j=1}^{\tilde{l_2}}\tilde q_j \sigma_j\| \leq \frac{2/\mu}{1-1/\mu}= \frac{2}{\mu-1}. \end{equation} From equation \eqref{eq:c1} and from Lemma \ref{lemma-x} it follows that \begin{equation} c\ge\frac{1}{2k}\frac{\epsilon}{l\mu}, \end{equation} where $\epsilon$ satisfies equation (\ref{eq-x}) with $x=\frac{2}{\mu-1}$. After some simplifications, we get \begin{equation} \label{eq:ep_sol} \epsilon\ge \left(\frac{(\mu-2)}{l(\mu-1)}\right)^{2}. \end{equation} Substituting this value of $\epsilon$ in the preceding equation, we are led to \begin{equation} c\geq {1\over 2kl^3} \left({(\mu-2)\over (\mu-1)}\right)^2{1\over \mu} . \end{equation} We now note that the function $f(\mu)=\frac{1}{\mu}\left(\frac{\mu-2}{\mu-1}\right)^2$ reaches its maximum at $\mu_0=(5+\sqrt{17})/2$, which completes the proof. $\blacksquare$ {\bf Example}: Consider the CHSH case, where $k=2$ and $l=2$; substituting these values we find $FOD \ge 7.0875 \times 10^{-3}$. 
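The final optimization over $\mu$ is elementary calculus; as a quick numerical check (pure Python, added here as an illustration), one can confirm the maximizer $\mu_0=(5+\sqrt{17})/2$ and the constant $0.1134$ appearing in the theorem:

```python
import math

# Maximize f(mu) = (1/mu) * ((mu-2)/(mu-1))**2 over mu > 2, as in the proof.
def f(mu):
    return ((mu - 2.0) / (mu - 1.0)) ** 2 / mu

mu0 = (5.0 + math.sqrt(17.0)) / 2.0          # claimed maximizer, ~4.5616

# A coarse grid search confirms mu0 is (numerically) the argmax ...
grid = [2.0 + 0.001 * i for i in range(1, 100000)]
mu_best = max(grid, key=f)
assert abs(mu_best - mu0) < 1e-2

# ... and the maximal value is the constant in the theorem, ~0.1134.
assert abs(f(mu0) - 0.1134) < 1e-4
print(f"mu0 = {mu0:.4f}, f(mu0) = {f(mu0):.4f}")
```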
Consequently \begin{equation} \beta_{CHSH}^{qm} \le \beta_{alg}-c (\beta_{alg}-\beta_{det})=3.9858. \end{equation} In the next section, bounds for FOD and CF are calculated for the simple case of $2\times n$ inputs with binary outcomes on Bob's side. One can find these bounds using some of the lemmas and propositions described in section \ref{sec:summary}, which in turn gives even better bounds than the ones obtained from the general result of Theorem \ref{thm:main}. \section{FOD and CF for Binary outcomes on Bob's side} \label{sec:binary-bob} \begin{figure} \begin{center} \includegraphics[width=0.42\textwidth]{det_box.pdf} \end{center} \caption{The box $P=\{p(r,b|X_r, y)\}$ of Alice and Bob. $D_1$ and $D_2$ are two orthogonal deterministic boxes with fractions $c_1$ and $c_2$, respectively. These can be subtracted from $P$. } \label{det_box} \end{figure} Using structural properties of boxes, Lemmas \ref{lem:error} and \ref{lem:cut}, and Theorem \ref{thm:szarek} from section \ref{sec:FODinQM}, one can explicitly compute bounds for FOD and CF in the case when Bob has binary outcomes. Technically, we look for structures of deterministic boxes within the structure of the quantum box. The maximum fraction of these deterministic boxes bounds the FOD of the quantum box. This technique is explained below and in Fig.~{\ref{det_box}}. Bob can create the ensemble $\{p_i\rho_i\}^1_{i=0}$ or $\{q_j\sigma_j\}^1_{j=0}$ at Alice's site by measuring $y$ or $y'$, respectively, on his part of the shared quantum state. 
Lemma \ref{lem:error} asserts that for all pairs $\rho_i$, $\sigma_j$ and for every POVM $\{X_r\}$ \begin{flalign} \label{eqn:confuse} \exists \hspace{5mm} \epsilon_{ij} \ge 0, X_{r_0}, X_{r_1}, X_{r_2}, X_{r_3} \hspace{3mm}\hbox{such that} \\ \nonumber {\rm Tr}(X_{r_0}\rho_0)\ge \epsilon_{00} , \hspace{3mm}\text{and} \hspace{3mm} {\rm Tr}(X_{r_0}\sigma_0)\ge \epsilon_{00} \\ \nonumber {\rm Tr}(X_{r_1}\rho_0)\ge \epsilon_{01} , \hspace{3mm}\text{and} \hspace{3mm} {\rm Tr}(X_{r_1}\sigma_1)\ge \epsilon_{01} \\ \nonumber {\rm Tr}(X_{r_2}\rho_1)\ge \epsilon_{10} , \hspace{3mm}\text{and} \hspace{3mm} {\rm Tr}(X_{r_2}\sigma_0)\ge \epsilon_{10} \\ \nonumber {\rm Tr}(X_{r_3}\rho_1)\ge \epsilon_{11} , \hspace{3mm}\text{and} \hspace{3mm} {\rm Tr}(X_{r_3}\sigma_1)\ge \epsilon_{11}\ , \end{flalign} where $\epsilon_{ij} \le {1\over 2k}(2-||\rho_i-\sigma_j||)$. This means that when Bob obtains outcomes $(b,b')$ for inputs $(y,y')$, then for any POVM of Alice there exists at least one outcome, call it a {\it confusing outcome}, on her side such that once she obtains it, she cannot determine with certainty whether Bob chose $y$ or $y'$. For example, in the first pair of inequalities in \eqref{eqn:confuse} above, the outcome $r_0$ of some POVM cannot tell apart with certainty $\rho_0$ from $\sigma_0$. There are four pairs $(b, b')$, hence there is a confusing outcome corresponding to each of these four cases. Consider the particular case when Bob obtains $(0,0)$ when he measures $(y,y')$, and let us say $r_0$ is a confusing outcome for Alice when she chooses to measure the POVM $\{X_r\}$. Since Bob obtains $(0,0)$, the marginals satisfy $p_0 > 0$ and $q_0>0$. Lemma \ref{lem:error} asserts that for any measurement choice we have ${\rm Tr}(X_{r_0}\rho_0)\ge \epsilon_{00}$ and ${\rm Tr}(X_{r_0}\sigma_0)\ge \epsilon_{00}$. Hence for every POVM, there is at least one confusing outcome on Alice's side. 
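The guarantee of Lemma \ref{lem:error} can be illustrated numerically. In the sketch below (assuming numpy; the two states and the random POVM are illustrative choices, not from the paper), at least one outcome assigns probability at least $\epsilon$ to both states:

```python
import numpy as np

rng = np.random.default_rng(1)

def trace_norm(M):
    return np.abs(np.linalg.eigvalsh(M)).sum()

# Two qubit states that no measurement can distinguish perfectly
rho   = np.array([[0.7, 0.2], [0.2, 0.3]])
sigma = np.array([[0.4, -0.1], [-0.1, 0.6]])

k = 4                                 # number of POVM outcomes
# Choose eps so that ||rho - sigma||_1 = 2 - 2*k*eps, the lemma's hypothesis
eps = (2.0 - trace_norm(rho - sigma)) / (2.0 * k)

# Random k-outcome POVM: X_r = S^{-1/2} A_r S^{-1/2}, A_r >= 0, S = sum_r A_r
As = [a.T @ a for a in rng.normal(size=(k, 2, 2))]
w, V = np.linalg.eigh(sum(As))
S_inv_half = V @ np.diag(w ** -0.5) @ V.T
povm = [S_inv_half @ A @ S_inv_half for A in As]
assert np.allclose(sum(povm), np.eye(2))   # valid POVM: sums to identity

# Lemma: at least one "confusing" outcome has probability >= eps on BOTH states
confusing = [r for r in range(k)
             if np.trace(povm[r] @ rho) >= eps
             and np.trace(povm[r] @ sigma) >= eps]
assert confusing, "the lemma guarantees at least one such outcome"
```

Rerunning with other random POVMs always produces a non-empty list of confusing outcomes, as the lemma requires.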
Therefore, in the quantum box we can replace the probabilities corresponding to each of these confusing outcomes, for every measurement choice of Alice, with $c_{00}:= \min\{p_0\epsilon_{00}, q_0\epsilon_{00}\}$. One can now see that by this construction we can create a deterministic box (say $D_{00}$) with fraction equal to $c_{00}$. In other words, every quantum box $P_Q$ satisfies the relation $P_Q\ge (1-c_{00})X+c_{00}D_{00}$. In such a way, we can create four separate deterministic boxes ($\{D_{ij}\}^1_{i,j=0}$) corresponding to the outcome pairs $(b,b')$ of Bob. There is a possibility that there may exist a measurement setting for Alice such that she obtains a single confusing outcome for two or more different cases; e.g., when she obtains a confusing outcome $r_0$, she is unable to distinguish between the measurement choices of Bob not only in the case when Bob obtains $(0, 0)$ but also in the case when he obtains $(1, 0)$. So, in the worst case, for some measurement choices there may be just one confusing outcome on Alice's side for all four different cases, as in the last row of the box in Fig. \ref{clas_frac}. In that case, the quantum box does not satisfy $P_Q\ge (1-\sum c_{ij})X+\sum^1_{i,j=0} c_{ij} D_{ij}$, because this would require us to use some probabilities twice. Therefore, in general, one can use only orthogonal pairs of deterministic boxes to resolve this issue, i.e., either $P_Q\ge (1-c_{00}-c_{11})X+c_{00} D_{00}+c_{11} D_{11}$ or $P_Q\ge (1-c_{01}-c_{10})X+c_{01} D_{01}+c_{10} D_{10}$. The maximum fraction of such deterministic boxes bounds the FOD of the QM box under consideration from below. The sum of these fractions bounds CF. 
So to calculate FOD and CF for a fixed ensemble, we need to find \begin{flalign} FOD=\frac{1}{2k}\max\{&\min\{p_0 \epsilon_{00},q_0 \epsilon_{00}\},\min\{p_1 \epsilon_{11}, q_1 \epsilon_{11}\}, &\\ \nonumber &\min\{p_0 \epsilon_{01},q_1 \epsilon_{01}\},\min\{p_1 \epsilon_{10},q_0 \epsilon_{10}\}\}& \end{flalign} and \begin{flalign} CF=\frac{1}{2k}\max\{&\min\{p_0 \epsilon_{00},q_0 \epsilon_{00}\}+\min\{p_1 \epsilon_{11}, q_1 \epsilon_{11}\}, &\\ \nonumber &\min\{p_0 \epsilon_{01},q_1 \epsilon_{01}\}+\min\{p_1 \epsilon_{10},q_0 \epsilon_{10}\}\}& \end{flalign} To calculate these values, w.l.o.g.\ we can assume $p_0\le q_0\le q_1\le p_1$. Using Lemma \ref{lem:cut} and optimizing over the $p$'s and $q$'s, we finally get the following values (the appendix contains the detailed calculations). Using Theorem \ref{thm:szarek} we find $FOD=\frac{0.1096}{2k}$ and $CF=\frac{0.1123}{2k}$, and for $k=2$ \begin{equation} \beta_{FOD}(P)\le 4-\frac{0.1096\times 2}{4}=3.9452 \end{equation} \begin{equation} \beta_{CF}(P)\le 4-\frac{0.1123\times 2}{4}=3.9439 \end{equation} These bounds are very weak, but since they hold for any $2\times n$ Bell-type inequality, they presumably cannot be much better than this. \begin{figure} \begin{center} \includegraphics[width=0.42\textwidth]{cls_frac.pdf} \end{center} \caption{The box $\{p(r,b|X_r, y)\}$ of Alice and Bob. Dashed lines represent which pairs are being confused and their lower bounds. Note that the $r_i$'s are independent of which input Alice chooses. Some or all of the $r_i$'s may coincide with each other for some inputs of Alice.} \label{clas_frac} \end{figure} \section{Conclusion} \label{sec:conclusions} Here we have given a quantitative {\it universal bound} for $2\times n$ input Bell inequalities, which is independent of the number $n$ of inputs. Specifically, we show that this universal bound depends on the number of outputs of the two parties and on the difference between the maximal algebraic value and the maximal deterministic value of the inequality. 
We show that the presence of FOD in $2\times n$ BI prevents quantum Bell values from achieving the maximal algebraic value. Hence this result is also a quantitative version of the theorem shown by Gisin et al. in \cite{GisinMS2006-pseudo}, which states that there exists no $2\times n$ input Pseudo-Telepathy game. Although these bounds are not tight, one can improve them by considering the classical fraction and generalizing the result using it. We have analyzed a simple case where the classical fraction gives a better bound than taking into account merely FOD. To obtain the above results, we established a {\it reverse triangle inequality}, which is a result of independent interest. The triangle inequality gives upper bounds on the trace distance between two states, whereas the RTI bounds the trace distance from below. We have determined that this bound is different for non-commuting states than for commuting states. The bound in the commuting case is sharp, and the one in the non-commuting case is close to being sharp. \begin{acknowledgments} We thank Aram Harrow for a suggestion that led to a simpler proof of our geometric result and to a better constant. PJ thanks P. Mazurek for useful discussions. This work was supported by ERC QOLAPS, EC IP QESSENCE, EC grant RAQUEL, MNiSW grant IdP2011 000361 and NCBiR-CHIST-ERA Project QUASAR. PJ was also supported by grant MPD/2009-3/4 from Foundation for Polish Science. SJS was partially supported by grants from the National Science Foundation (U.S.A.) and by the grant 2011-BS01-008-02 from ANR (France). TS was partially supported by the National Science Centre of Poland, grant number DEC-2012/07/B/ST1/03320. Part of this work was done at the National Quantum Information Centre of Gda{\'n}sk. \end{acknowledgments}
\section{Introduction} The canonical formalism allows one to derive the classical equations of motion and the conserved currents associated with the global symmetries of a Lagrangian. Unfortunately, this formalism runs into problems in the presence of a gauge symmetry. In particular, the canonical energy-momentum tensor obtained following the standard procedure turns out to be gauge dependent. Since physical observables are gauge invariant, there is a widespread belief that the canonical quantities are not really physical. Moreover, the gauge dependence of the canonical variables makes the quantization procedure particularly non-trivial. Similar problems arise in metric theories such as general relativity. Different strategies have been adopted to deal with these issues. The oldest one goes back to Dirac \cite{Dirac:1955uv}, who proposed to reformulate QED in terms of gauge-invariant fields known as \emph{Dirac variables}. These variables are constructed by adjoining phase factors to the original fields, and have been rediscovered and generalized several times under different names and in different contexts \cite{DeWitt:1962mg,Mandelstam:1962mi,Mandelstam:1968hz,Mandelstam:1968ud,BialynickiBirula:1963,Steinmann:1983ar,Steinmann:1985id,Skachkov:1985cz,Haagensen:1997pi,Horan:1998im,Masson:2010vx,Fournel:2012cr,Chen:2012vg}. We refer to \cite{Pervushin:2001kq} for a review of the subject and to \cite{Lantsman:2006ry} for a comparison with the more standard Faddeev-Popov approach. Schwinger \cite{Schwinger:1962zz,Schwinger:1962fg}, followed by Arnowitt and Fickler \cite{Arnowitt:1962cv}, adopted a different strategy. They proposed to separate explicitly the physical degrees of freedom from the unphysical ones in the gauge potential.
The same idea has also been considered and generalized several times \emph{e.g.} in Refs.~\cite{Goto:1966,Treat:1973yc,Treat:1975dz,Duan:1979,Duan:1984cb,Duan:1998um,Duan:2002vh,Fulp:1983bt,Kashiwa:1996rs,Kashiwa:1996hp}, and reappeared more recently in the context of the gauge-invariant decomposition of the proton spin \cite{Chen:2008ag,Chen:2011zzh,Wakamatsu:2010qj,Wakamatsu:2010cb,Lorce:2012rr}. Interestingly, this approach shares many features with the background field method introduced by DeWitt \cite{DeWitt:1967ub,DeWitt:1967uc,DeWitt:1980jv,'tHooft:1975vy,Grisaru:1975ei,Boulware:1980av,Abbott:1980hw}. The latter has been extensively used in gravity and supergravity \cite{'tHooft:1973us,'tHooft:1974bx,Deser:1977nt,Abbott:1981ff,Petrov:2007xva,Petrov:2012qn}, and in both continuum and lattice gauge theories \cite{Dashen:1980vm,Abbott:1982jh,Luscher:1995vs,Freire:2000bq,Binosi:2009qm,Binosi:2012st}. A nice introduction to the background field method can be found in \cite{Abbott:1981ke}. To the best of our knowledge, the Schwinger approach is usually adopted either in the path-integral formalism or as an \emph{ad hoc} procedure \emph{after} the application of the standard (gauge non-invariant) canonical formalism. Surprisingly, it has never been used to develop directly a canonical formalism consistent with the gauge symmetry. In this letter, we aim to fill this gap. We show how the explicit separation of physical and gauge degrees of freedom naturally leads to a covariant form of the Euler-Lagrange equations and of Noether's theorem. In section \ref{sec2}, we briefly review the textbook approach to the canonical formalism. Then we present its covariant form in the presence of external or non-dynamical gauge fields, defining covariant functional derivatives along the way. In section \ref{sec3}, we develop the gauge-covariant canonical formalism based on the decomposition of the gauge field into physical and pure-gauge parts.
Applying this new gauge-covariant canonical formalism to QCD, we recover the gauge-invariant decomposition of the linear and angular momentum operators constructed by Chen \emph{et al.}~\cite{Chen:2008ag,Chen:2011zzh,Wakamatsu:2010cb}. This naturally explains why their operators are the generators of translations and Lorentz transformations for the physical fields. We also comment on the lack of uniqueness of this approach and its physical relevance. In section \ref{sec4}, we discuss Dirac's gauge-invariant canonical formalism, and demonstrate its formal equivalence with the gauge-covariant canonical formalism we propose. Finally, we conclude this letter with section \ref{sec5}. While we confine our discussion here to gauge theories, the same approach can easily be adapted to general relativity and other metric theories. \section{Canonical formalism}\label{sec2} We start with a short reminder of the standard derivation of the Euler-Lagrange equations and of Noether's theorem. Then we explain how one reconciles the standard approach with the gauge symmetry in the presence of external gauge fields. \subsection{Standard approach}\label{standardapproach} In standard textbooks such as \cite{Ryder:1985wq}, the Lagrangian is usually thought of as a function of a generic set of fields $\phi(x)$ and their ordinary derivatives\footnote{One can also add an explicit dependence on the space-time coordinates $x$, but this does not significantly affect our discussion.} \begin{equation} \mathcal{L}(x)=f[\phi(x),\partial_\mu\phi(x)].
\end{equation} Setting to zero the variation of the action $\delta S=\int_\Omega\mathrm{d}^4x\,\delta\mathcal{L}(x)=0$ under arbitrary (infinitesimal) variations of the fields \begin{equation} \phi(x)\mapsto\phi'(x)=\phi(x)+\delta\phi(x) \end{equation} that satisfy the condition $\delta\phi=0$ on the space-time boundary $\partial\Omega$, one obtains the Euler-Lagrange equations \begin{equation} \partial_\mu\frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)}-\frac{\partial\mathcal{L}}{\partial\phi}=0. \end{equation} In Noether's theorem \cite{Noether:1918zz}, one considers more general variations of the fields \begin{equation}\label{fullvar} \begin{split} \phi(x)\mapsto\phi'(x')&=\phi(x)+\Delta\phi(x)\\ &=\phi(x)+\delta\phi(x)+\partial_\mu\phi(x)\,\delta x^\mu, \end{split} \end{equation} where $\delta\phi(x)$ represents again an intrinsic change in the functional form of the fields, $\partial_\mu\phi(x)\,\delta x^\mu$ represents a change coming from the fact that the fields are evaluated at a slightly displaced point $x'=x+\delta x$, and $\Delta\phi(x)$ is the total variation. By definition, continuous symmetries leave the action invariant\footnote{We adopt the passive point of view, so that the space-time volume is not affected.} \begin{equation} \delta S=\int_\Omega\mathrm{d}^4x'\,\mathcal{L}'(x')-\int_\Omega\mathrm{d}^4x\,\mathcal{L}(x)=0, \end{equation} where $\mathcal{L}'(x')\equiv f[\phi'(x'),\partial'_\mu\phi'(x')]$. Using the Euler-Lagrange equations, one concludes that the following (infinitesimal) currents are conserved \begin{equation}\label{current} \mathcal J^\mu=\frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)}\,\Delta\phi-\left[\frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)}\,\partial_\nu\phi-\delta^\mu_\nu \mathcal{L}\right]\delta x^\nu. \end{equation} When the Lagrangian is invariant under a gauge symmetry, the Euler-Lagrange equations turn out to be gauge covariant.
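As a concrete illustration of Eq.~\eqref{current} — a symbolic sketch we add here for the reader, not part of the derivation above — one can check for a free real scalar field in $1+1$ dimensions that the divergence of the translation current equals the Euler-Lagrange expression times $\partial_\nu\phi$, and hence vanishes on-shell:

```python
import sympy as sp

t, x, m = sp.symbols('t x m')
phi = sp.Function('phi')(t, x)
coords = (t, x)

# Free real scalar field: L = (phi_t^2 - phi_x^2 - m^2 phi^2) / 2
L = (sp.diff(phi, t)**2 - sp.diff(phi, x)**2 - m**2 * phi**2) / 2

# Conjugate momenta Pi^mu = dL/d(d_mu phi)
Pi = [sp.diff(L, sp.diff(phi, c)) for c in coords]

# Euler-Lagrange expression (vanishes on-shell): d_mu Pi^mu - dL/dphi
eom = sum(sp.diff(Pi[mu], coords[mu]) for mu in range(2)) - sp.diff(L, phi)

# Canonical EMT with mixed indices: T^mu_nu = Pi^mu d_nu phi - delta^mu_nu L
for nu in range(2):
    T = [Pi[mu] * sp.diff(phi, coords[nu]) - (L if mu == nu else 0)
         for mu in range(2)]
    div = sum(sp.diff(T[mu], coords[mu]) for mu in range(2))
    # Off-shell identity: d_mu T^mu_nu = (Euler-Lagrange expression) * d_nu phi
    assert sp.simplify(div - eom * sp.diff(phi, coords[nu])) == 0
```

The same off-shell identity holds for any Lagrangian of the form $f[\phi,\partial_\mu\phi]$; only the free-scalar choice is specific to this check.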
Note however that the terms $\partial_\mu\frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)}$ and $\frac{\partial\mathcal{L}}{\partial\phi}$ are not in general separately gauge covariant. More troublesome is the fact that the currents associated with the Poincar\'{e} transformations are also not gauge invariant. It is then often claimed that the canonical linear and angular momentum operator densities have no physical significance. \subsection{Gauge-covariant approach} When the gauge field is treated as an \emph{external} or \emph{background} field, it is actually possible to reconcile the gauge symmetry with Noether's theorem \cite{Ray:1968,Jackiw:1978ar,Levitsky:1981rv,Levitsky:1982mr,Hamamoto:1983fv,Barnich:1994cq}. The origin of the problem comes from the fact that the standard approach deals with quantities that are not gauge covariant and are therefore ill-defined from the geometrical point of view. To keep the presentation simple, consider that the fields $\phi(x)$ transform as internal vectors under gauge transformations \begin{equation} \phi(x)\mapsto\tilde\phi(x)=U(x)\phi(x). \end{equation} All the following expressions can of course easily be adapted to any kind of internal-tensor transformation law. Since the original and transformed fields are evaluated at the same point, it follows that the intrinsic variation of the fields $\delta\phi(x)$ also transforms as an internal vector\footnote{Even when the transformation of the fields involves non-tensorial terms, the latter are cancelled in the intrinsic variation.} under gauge transformations. The action being gauge invariant, one can deduce from \begin{equation} \delta S=\int_\Omega\mathrm{d}^4x\left[\frac{\partial\mathcal{L}}{\partial\phi}-\partial_\mu\frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)}\right]\delta\phi \end{equation} that the Euler-Lagrange equations are automatically gauge covariant.
In a gauge theory, the ordinary partial derivatives are not the natural geometric objects but usually appear as part of covariant derivatives $D_\mu=\partial_\mu-igA_\mu$. It is therefore more natural to consider the Lagrangian as a function of the fields and their covariant derivatives \begin{equation} \mathcal{L}(x)=f'[\phi(x),D_\mu\phi(x)]. \end{equation} Simple algebra shows that \cite{Lewis:2009na} \begin{subequations} \begin{align} \frac{\partial f}{\partial\phi}&=\frac{\partial f'}{\partial\phi}-\frac{\partial f'}{\partial(D_\mu\phi)}\,igA_\mu,\\ \frac{\partial f}{\partial(\partial_\mu\phi)}&=\frac{\partial f'}{\partial(D_\mu\phi)}. \end{align} \end{subequations} When the Lagrangian is expressed only in terms of gauge-covariant variables, the corresponding functional derivatives are automatically gauge covariant. We therefore propose to define \emph{covariant functional derivatives} and conjugate fields as follows \begin{subequations}\label{covdef} \begin{align} \mathcal{L}\!\stackrel{\leftarrow}{D}\!\!\!\!\!\phantom{\partial}_\phi&\equiv\frac{\partial f}{\partial\phi}+\frac{\partial f}{\partial(\partial_\mu\phi)}\,igA_\mu=\frac{\partial f'}{\partial\phi},\\ \Pi^\mu_\phi&\equiv\frac{\partial f}{\partial(\partial_\mu\phi)}=\frac{\partial f'}{\partial(D_\mu\phi)}. \end{align} \end{subequations} The Euler-Lagrange equations can then be rewritten as \begin{equation} \frac{\partial\mathcal{L}}{\partial(D_\mu\phi)}\!\stackrel{\leftarrow}{D}_\mu-\frac{\partial\mathcal{L}}{\partial\phi}=0 \end{equation} with the covariant derivative in the conjugate fundamental representation given by $\stackrel{\leftarrow}{D}_\mu=\stackrel{\leftarrow}{\partial}_\mu\!+igA_\mu$, or more compactly using the definitions \eqref{covdef} \begin{equation} \Pi^\mu_\phi\!\stackrel{\leftarrow}{D}_\mu\!-\mathcal{L}\!\stackrel{\leftarrow}{D}_\phi=0. \end{equation} In this form, the individual terms of the Euler-Lagrange equations are now gauge covariant. 
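The two relations between $f$ and $f'$ can be verified mechanically in a formal abelian toy model, with $\phi$, $\partial\phi$ and $A$ treated as commuting scalar symbols in a single dimension (a sketch under these simplifying assumptions only; the non-abelian case involves ordered matrix products):

```python
import sympy as sp

g, A, phi, dphi, X, m = sp.symbols('g A phi dphi X m')

# Toy Lagrangian written in terms of the covariant derivative X = D phi:
# f'(phi, X) = X^2/2 - m^2 phi^2/2 (formal model, no conjugate fields)
fprime = X**2 / 2 - m**2 * phi**2 / 2
Dphi = dphi - sp.I * g * A * phi      # D phi = (d - i g A) phi

# The same Lagrangian viewed as a function of (phi, dphi):
f = fprime.subs(X, Dphi)

# df/d(dphi) = df'/d(D phi), evaluated at X = D phi
assert sp.expand(sp.diff(f, dphi) - sp.diff(fprime, X).subs(X, Dphi)) == 0

# df/dphi = df'/dphi - df'/d(D phi) * i g A, evaluated at X = D phi
rhs = (sp.diff(fprime, phi) - sp.diff(fprime, X) * sp.I * g * A).subs(X, Dphi)
assert sp.expand(sp.diff(f, phi) - rhs) == 0
```

Both assertions pass identically in the symbols, confirming that the conjugate field and the covariant functional derivative defined above are consistent with the ordinary functional derivatives.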
The problem with Noether's theorem in the standard approach boils down to Eq. \eqref{fullvar}. When the fields transform \emph{e.g.} as internal vectors under gauge transformations, this expression no longer makes sense from the geometrical point of view. Indeed, the fields $\phi'(x')$ and $\phi(x)$ live in different copies of the internal space, since they are evaluated at different space-time points. It then follows that, contrary to the intrinsic variation $\delta\phi(x)$, the total variation $\Delta\phi(x)$ does not transform covariantly. The consistent expression is \cite{DeWitt:2003pm,Frankel:1997ec} \begin{equation} \phi'(x')=\mathcal W(x+\delta x,x)\left[\phi(x)+\Delta_c\phi(x)\right], \end{equation} where the infinitesimal Wilson line and the covariant total variation are respectively defined as \begin{subequations} \begin{align} \mathcal W(x+\delta x,x)&=1+igA_\mu(x)\,\delta x^\mu,\\ \Delta_c\phi(x)&=\delta\phi(x)+D_\mu\phi(x)\,\delta x^\mu. \end{align} \end{subequations} From the gauge transformation of the $A_\mu(x)$ field \begin{equation}\label{Agauge} A_\mu(x)\mapsto\tilde A_\mu(x)=U(x)\left[A_\mu(x)+\frac{i}{g}\,\partial_\mu\right]U^{-1}(x), \end{equation} it is easy to see that the infinitesimal Wilson line transforms in a simple way \begin{align} \mathcal W(x+\delta x,x)\mapsto&\tilde{\mathcal W}(x+\delta x,x)\nonumber\\ &\hspace{-1cm}=U(x+\delta x)\mathcal W(x+\delta x,x)U^{-1}(x), \end{align} and allows one to parallel transport $\phi'(x')$ to $\phi'_\parallel(x)=\phi(x)+\Delta_c\phi(x)$, which lives in the same copy of the internal space as $\phi(x)$. The consistent expression for the conserved current is then \begin{equation}\label{covcurrent} \mathcal J^\mu=\frac{\partial\mathcal{L}}{\partial(D_\mu\phi)}\,\Delta_c\phi-\left[\frac{\partial\mathcal{L}}{\partial(D_\mu\phi)}\,D_\nu\phi-\delta^\mu_\nu \mathcal{L}\right]\delta x^\nu. \end{equation} We stress that this approach is fine as long as the gauge field is external.
But once $A_\mu(x)$ is treated as a dynamical field, the gauge-covariant formalism presented in this section can no longer be applied, simply because the gauge field does not transform as an internal tensor under gauge transformations. To the best of our knowledge, no consistent gauge-covariant canonical formalism with a dynamical gauge field has been developed so far. We fill this gap in the next section. \section{Gauge-covariant canonical formalism}\label{sec3} We propose in this section, for the first time, a canonical formalism that is consistent with the gauge symmetry. Note that the gauge field plays essentially two roles. On the one hand, it is used to form a gauge-covariant derivative and to render the Lagrangian gauge invariant. On the other hand, it gives rise to a non-vanishing field strength $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu-ig[A_\mu,A_\nu]$ and provides the coupling with the source fields. The first aspect is somewhat unphysical, as it concerns only the gauge symmetry, which is not observable. By contrast, the second aspect is physical, as the field strength (or curvature) affects the trajectories of particles and is therefore observable. We adopt here Schwinger's strategy of separating these two aspects explicitly. \subsection{Decomposition of the gauge field} Using the notation introduced by Chen \emph{et al.}, we decompose the gauge field as follows \cite{Chen:2008ag,Chen:2011zzh,Wakamatsu:2010cb} \begin{equation}\label{Adecomp} A_\mu(x)=A^\text{pure}_\mu(x)+A^\text{phys}_\mu(x), \end{equation} where $A^\text{pure}_\mu(x)$ and $A^\text{phys}_\mu(x)$ contain only gauge and physical degrees of freedom, respectively. By definition, the pure-gauge field is unphysical and therefore cannot contribute to the field strength \begin{equation}\label{Apuredef} F^\text{pure}_{\mu\nu}=\partial_\mu A^\text{pure}_\nu-\partial_\nu A^\text{pure}_\mu-ig\left[A^\text{pure}_\mu,A^\text{pure}_\nu\right]=0.
\end{equation} It can then be written in the form \begin{equation} A_\mu^\text{pure}(x)=\frac{i}{g}\,U_\text{pure}(x)\partial_\mu U^{-1}_\text{pure}(x), \end{equation} where $U_\text{pure}(x)$ is some unitary matrix. From the gauge transformation law of this matrix \begin{equation}\label{Ugauge} U_\text{pure}(x)\mapsto\tilde U_\text{pure}(x)=U(x)U_\text{pure}(x) \end{equation} and Eq. \eqref{Agauge}, it is easy to obtain the gauge transformation laws of the pure-gauge and physical terms \begin{align} A_\mu^\text{pure}(x)&\mapsto\tilde A_\mu^\text{pure}(x)=\nonumber\\ &\hspace{1cm}U(x)\left[A_\mu^\text{pure}(x)+\frac{i}{g}\,\partial_\mu\right]U^{-1}(x),\label{Apureg}\\ A_\mu^\text{phys}(x)&\mapsto\tilde A_\mu^\text{phys}(x)=U(x)A_\mu^\text{phys}(x)U^{-1}(x).\label{Aphysg} \end{align} In particular, note that $A^\text{phys}_\mu(x)$ transforms as an internal tensor just like any other physical fields. It is therefore natural to treat the physical term as a dynamical field, \emph{i.e.} as part of the generic set of fields $\phi(x)$, and to treat the pure-gauge term as an external field. Note also that rewriting the decomposition \eqref{Adecomp} in a more explicit form \begin{align} (-igA_\mu)^a_{\phantom{a}b}&=(U_\text{pure})^a_{\phantom{a}a'}\partial_\mu (U^{-1}_\text{pure})^{a'}_{\phantom{a}b}\nonumber\\ &\qquad+(U_\text{pure})^a_{\phantom{a}a'}(-ig\hat A^\text{phys}_\mu)^{a'}_{\phantom{a}b'} (U^{-1}_\text{pure})^{b'}_{\phantom{b}b} \end{align} with $\hat A^\text{phys}_\mu\equiv U^{-1}_\text{pure}A^\text{phys}_\mu U_\text{pure}$, is reminiscent of the tetrad formalism in general relativity \cite{deFelice:1990hu,Unzicker:2005in} \begin{equation} \Gamma^\lambda_{\mu\nu}=e^\lambda_a\partial_\mu e^a_\nu+e^\lambda_a\omega^a_{\mu b}e^b_\nu. 
\end{equation} In some sense, the fields $-igA_\mu(x)$, $U_\text{pure}(x)$ and $-ig\hat A^\text{phys}_\mu(x)$ can be thought of as the analogs of the Christoffel symbol $\Gamma_\mu(x)$, the vierbein $e(x)$ and the spin connection $\omega_\mu(x)$, respectively. \subsection{Gauge-covariant approach revisited} Once more, consider for simplicity that the fields $\phi(x)$ transform as internal vectors under gauge transformations. We propose to think of the Lagrangian as a function of the gauge-covariant physical fields and their pure-gauge covariant derivatives\footnote{Note in particular that $D^\text{pure}_\mu U_\text{pure}(x)=0$.} $D^\text{pure}_\mu=\partial_\mu-igA^\text{pure}_\mu$ \begin{equation}\label{noncovariantform} \mathcal{L}(x)=f''[\phi(x),D^\text{pure}_\mu\phi(x)]. \end{equation} Such a rewriting is always possible since a gauge-invariant Lagrangian involves the field $A_\mu(x)$ only through covariant derivatives and field strengths \begin{align} D_\mu&=D^\text{pure}_\mu-igA^\text{phys}_\mu,\\ F_{\mu\nu}&=\mathcal D^\text{pure}_\mu \! A^\text{phys}_\nu-\mathcal D^\text{pure}_\nu \! A^\text{phys}_\mu-ig\left[A^\text{phys}_\mu,A^\text{phys}_\nu\right], \end{align} where the pure-gauge covariant derivative in the adjoint representation is given by $\mathcal D^\text{pure}_\mu=\partial_\mu-ig[A^\text{pure}_\mu,\quad]$. Note that, owing to Eq. \eqref{Apuredef}, the pure-gauge covariant derivatives commute. They are therefore the most natural gauge-invariant extensions of the ordinary partial derivatives $\partial_\mu$.
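As a minimal illustration of Eq.~\eqref{Apuredef} — an abelian sketch with the parametrization $U_\text{pure}(x)=e^{ig\chi(x)}$, for which $A^\text{pure}_\mu=\partial_\mu\chi$ — the vanishing of $F^\text{pure}_{\mu\nu}$, and hence the commutativity of the pure-gauge covariant derivatives, reduces to the symmetry of second derivatives:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = (t, x, y, z)

# Abelian pure gauge: U_pure = exp(i g chi)  =>  A^pure_mu = d_mu chi
chi = sp.Function('chi')(*coords)
A_pure = [sp.diff(chi, c) for c in coords]

# F^pure_{mu nu} = d_mu A^pure_nu - d_nu A^pure_mu (the commutator drops in U(1))
for mu in range(4):
    for nu in range(4):
        F = sp.diff(A_pure[nu], coords[mu]) - sp.diff(A_pure[mu], coords[nu])
        assert sp.simplify(F) == 0
```

In the non-abelian case the commutator term in Eq.~\eqref{Apuredef} is precisely what compensates the non-vanishing curl of $U_\text{pure}\partial_\mu U^{-1}_\text{pure}$; the abelian check only illustrates the mechanism.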
In this approach the geometrically consistent expression for the variation of the fields is obviously \begin{equation} \phi'(x')=\mathcal W_\text{pure}(x+\delta x,x)\left[\phi(x)+\Delta_\text{pure}\phi(x)\right], \end{equation} where the pure-gauge infinitesimal Wilson line and the pure-gauge covariant total variation are respectively defined as \begin{subequations} \begin{align} \mathcal W_\text{pure}(x+\delta x,x)&=1+igA^\text{pure}_\mu(x)\,\delta x^\mu,\\ \Delta_\text{pure}\phi(x)&=\delta\phi(x)+D^\text{pure}_\mu\phi(x)\,\delta x^\mu. \end{align} \end{subequations} The Euler-Lagrange equations and conserved Noether currents then take the form \begin{align} 0&=\frac{\partial\mathcal{L}}{\partial(D^\text{pure}_\mu\phi)}\!\stackrel{\leftarrow}{D}\!\!\!\!\!\phantom{\partial}^\text{pure}_\mu-\frac{\partial\mathcal{L}}{\partial\phi},\label{ELeq}\\ \mathcal J^\mu&=\frac{\partial\mathcal{L}}{\partial(D^\text{pure}_\mu\phi)}\,\Delta_\text{pure}\phi\nonumber\\ &\quad-\left[\frac{\partial\mathcal{L}}{\partial(D^\text{pure}_\mu\phi)}\,D^\text{pure}_\nu\phi-\delta^\mu_\nu \mathcal{L}\right]\delta x^\nu. \end{align} Considering infinitesimal translations of the space-time coordinates with the fields physically unchanged \begin{equation} \delta x^\mu=\varepsilon^\mu,\qquad\Delta_\text{pure}\phi(x)=0, \end{equation} we obtain from the corresponding (infinitesimal) Noether current $\mathcal J^\mu=T^{\mu\nu}\varepsilon_\nu$ the gauge-invariant canonical energy-momentum tensor \begin{equation}\label{giEM} T^{\mu\nu}=\frac{\partial\mathcal{L}}{\partial(D^\text{pure}_\mu\phi)}\,D^\nu_\text{pure}\phi-g^{\mu\nu} \mathcal{L}. 
\end{equation} Considering now infinitesimal Lorentz transformations $\Lambda^\mu_{\phantom{\mu}\nu}=\delta^\mu_\nu-\omega^\mu_{\phantom{\mu}\nu}$ with $\omega_{\mu\nu}=-\omega_{\nu\mu}$, the coordinate and field variations are given by \begin{equation} \delta x^\mu=\omega^\mu_{\phantom{\mu}\nu}x^\nu,\qquad\Delta_\text{pure}\phi(x)=-\frac{i}{2}\,\omega_{\alpha\beta}S^{\alpha\beta}\phi(x), \end{equation} where $S^{\alpha\beta}$ is the appropriate (antisymmetric) spin matrix. From the corresponding (infinitesimal) Noether current $\mathcal J^\mu=\frac{1}{2}\,M^{\mu\nu\rho}\omega_{\nu\rho}$, we obtain the gauge-invariant canonical generalized angular-momentum tensor \begin{equation}\label{giGAM} M^{\mu\nu\rho}=-i\,\frac{\partial\mathcal{L}}{\partial(D^\text{pure}_\mu\phi)}\,S^{\nu\rho}\phi(x)+(x^\nu T^{\mu\rho}-x^\rho T^{\mu\nu}). \end{equation} The crucial difference with the standard treatment is simply the use of the geometrically consistent total variation $\Delta_\text{pure}\phi(x)$ instead of $\Delta\phi(x)$. There is therefore a simple rule of thumb for reconciling the standard canonical formalism with the gauge symmetry: it suffices to replace formally $A_\mu$ by $A^\text{phys}_\mu$ and $\partial_\mu$ by the appropriate pure-gauge covariant derivative in any gauge-dependent expression. \subsection{Application to QCD} Using the decomposition \eqref{Adecomp}, the QCD Lagrangian can be thought of as made of three gauge-invariant terms $\mathcal{L}_\text{QCD}=\mathcal{L}_\text{D}+\mathcal{L}_\text{YM}+\mathcal{L}_\text{int}$, where the so-called Dirac, Yang-Mills and interaction terms are given by \begin{subequations} \begin{align} \mathcal{L}_\text{D}&=\overline{\psi}(i\!\stackrel{\leftrightarrow}{\,/\!\!\! \!D}\!\!\!\!\!\phantom{\partial}^\text{\,pure}-m)\psi,\\ \mathcal{L}_\text{YM}&=-\frac{1}{2}\,\mathrm{Tr}[F^{\alpha\beta}F_{\alpha\beta}],\\ \mathcal{L}_\text{int}&=g\overline{\psi}/\!\!\! A^\text{phys}\psi\label{Lint}. 
\end{align} \end{subequations} We used for convenience the notation $\stackrel{\leftrightarrow}{a}\,=\frac{1}{2}\,(\stackrel{\rightarrow}{a}-\stackrel{\leftarrow}{a})$. From the Euler-Lagrange equations \eqref{ELeq}, we recover the standard QCD equations of motion \begin{subequations} \begin{align} 0&=\frac{\partial\mathcal{L}}{\partial(\stackrel{\rightarrow}{D}\!\!\!\!\!\phantom{\partial}^\text{pure}_\mu\psi)}\!\stackrel{\leftarrow}{D}\!\!\!\!\!\phantom{\partial}^\text{pure}_\mu-\frac{\partial\mathcal{L}}{\partial\psi}\nonumber\\ &=\overline{\psi} (i\!\stackrel{\leftarrow}{\,/\!\!\!\!D}\!+m),\\ 0&=\stackrel{\rightarrow}{D}\!\!\!\!\!\phantom{\partial}^\text{pure}_\mu\frac{\partial\mathcal{L}}{\partial(\overline{\psi}\!\stackrel{\leftarrow}{D}\!\!\!\!\!\phantom{\partial}^\text{pure}_\mu)}-\frac{\partial\mathcal{L}}{\partial\overline{\psi}}\nonumber\\ &=(i\!\stackrel{\rightarrow}{\,/\!\!\!\!D}\!-m)\psi,\\ 0&=\mathcal D^\text{pure}_\nu\frac{\partial\mathcal{L}}{\partial(\mathcal D_\nu^\text{pure}\!A^\text{phys}_\mu)}-\frac{\partial\mathcal{L}}{\partial A^\text{phys}_\mu}\nonumber\\ &=2(\mathcal D_\nu F^{\nu\mu})^a_{\phantom{a}b}+g\overline{\psi}_b\gamma^\mu\psi^a, \end{align} \end{subequations} where $a,b$ are internal-space indices. For the gauge-invariant canonical energy-momentum tensor \eqref{giEM}, we obtain \begin{equation} T^{\mu\nu}=T^{\mu\nu}_q+T^{\mu\nu}_g-g^{\mu\nu}\mathcal{L}, \end{equation} where the quark and gluon contributions are given by \begin{subequations}\label{QCDEM} \begin{align} T^{\mu\nu}_q&=i\,\overline{\psi} \gamma^\mu\!\!\stackrel{\leftrightarrow}{D}\!\!\!\!\!\phantom{\partial}^\nu_\text{pure}\psi,\\ T^{\mu\nu}_g&=-2\,\mathrm{Tr}[F^{\mu\alpha}\mathcal D^\nu_\text{pure} A^\text{phys}_\alpha]. 
\end{align} \end{subequations} Similarly, for the gauge-invariant canonical generalized angular-momentum tensor \eqref{giGAM}, we obtain \begin{equation} \begin{split} M^{\mu\nu\rho}&=M^{\mu\nu\rho}_{q,\text{spin}}+M^{\mu\nu\rho}_{q,\text{OAM}}+M^{\mu\nu\rho}_{g,\text{spin}}+M^{\mu\nu\rho}_{g,\text{OAM}}\\ &\quad -x^{[\nu}g^{\rho]\mu}\mathcal{L}, \end{split} \end{equation} where the spin and orbital angular momentum (OAM) contributions of quarks and gluons are given by \begin{subequations}\label{QCDGAM} \begin{align} M^{\mu\nu\rho}_{q,\text{spin}}&=\frac{1}{2}\,\overline{\psi}\{\gamma^\mu,\Sigma^{\nu\rho}\}\psi=\frac{1}{2}\,\epsilon^{\mu\nu\rho\sigma}\,\overline{\psi}\gamma_\sigma\gamma_5\psi,\\ M^{\mu\nu\rho}_{q,\text{OAM}}&=i\,\overline{\psi} \gamma^\mu x^{[\nu}\!\!\stackrel{\leftrightarrow}{D}\!\!\!\!\!\phantom{\partial}^{\rho]}_\text{pure}\psi,\\ M^{\mu\nu\rho}_{g,\text{spin}}&=-2\,\mathrm{Tr}[F^{\mu[\nu} A^{\rho]}_\text{phys}],\label{Gspin}\\ M^{\mu\nu\rho}_{g,\text{OAM}}&=-2\,\mathrm{Tr}[F^{\mu\alpha}x^{[\nu} \mathcal D^{\rho]}_\text{pure}A^\text{phys}_\alpha], \end{align} \end{subequations} with $\epsilon_{0123}=+1$ and the notation $a^{[\mu}b^{\nu]}=a^\mu b^\nu-a^\nu b^\mu$. The expressions \eqref{QCDEM} and \eqref{QCDGAM} coincide with the gauge-invariant canonical decompositions of the proton momentum and spin originally proposed by Chen \emph{et al.} \cite{Chen:2008ag,Chen:2011zzh}, and put in a Lorentz-covariant form by Wakamatsu \cite{Wakamatsu:2010cb,Lorce:2012rr}. We have therefore demonstrated that the \emph{ad hoc} expressions of Chen \emph{et al.} can actually be derived from the canonical formalism and Noether's theorem, once these are reconciled with the gauge symmetry. Note that they can also be derived from the standard Noether's theorem when non-standard Lorentz transformation laws for the fields are considered \cite{Guo:2012wv,Guo:2013jia}.
\subsection{Stueckelberg symmetry} By construction, the decomposition \eqref{Adecomp} is gauge invariant, \emph{i.e.} one has $\tilde A_\mu(x)=\tilde A_\mu^\text{pure}(x)+\tilde A_\mu^\text{phys}(x)$. It is also consistent with the Lorentz symmetry, as discussed in detail in Ref.~\cite{Lorce:2012rr}. However, it is not unique since we still have some freedom in defining exactly what we mean by `pure-gauge' and `physical'. The reason is that decomposing the gauge field into two parts automatically introduces an additional local symmetry to the Lagrangian. The new symmetry has the same group structure as the gauge symmetry, but acts only on the pure-gauge and physical parts of the gauge field \begin{subequations}\label{stueck} \begin{align} A^\text{pure}_\mu(x)&\mapsto A^{\text{pure},g}_\mu(x)=\nonumber\\ &\hspace{-0.75cm}A^\text{pure}_\mu(x)+\frac{i}{g}\,U_\text{pure}(x)U_0^{-1}(x)\left[\partial_\mu U_0(x)\right]U^{-1}_\text{pure}(x),\\ A^\text{phys}_\mu(x)&\mapsto A^{\text{phys},g}_\mu(x)=\nonumber\\ &\hspace{-0.75cm}A^\text{phys}_\mu(x)-\frac{i}{g}\,U_\text{pure}(x)U_0^{-1}(x)\left[\partial_\mu U_0(x)\right]U^{-1}_\text{pure}(x), \end{align} \end{subequations} where $U_0(x)$ is a Stueckelberg unitary matrix. At the level of the matrices $U_\text{pure}(x)$, this transformation reads \begin{equation}\label{Ustueck} U_\text{pure}(x)\mapsto U^g_\text{pure}(x)=U_\text{pure}(x)U^{-1}_0(x). \end{equation} While the ordinary gauge transformation acts on the left of $U_\text{pure}(x)$ as in Eq.~\eqref{Ugauge}, this new transformation acts on the right. It is therefore important to distinguish them. Since the pure-gauge term $A^\text{pure}_\mu(x)$ plays a role somewhat similar to the derivative of the Stueckelberg field, we refer to this transformation as the Stueckelberg (gauge) transformation \cite{Lorce:2012rr}. 
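For orientation, in the abelian case — with the parametrizations $U_\text{pure}=e^{ig\chi}$ and $U_0=e^{ig\lambda}$, chosen here purely for illustration — the transformation \eqref{stueck} reduces to shifting the split by an arbitrary gradient,
\begin{equation*}
A^\text{pure}_\mu\mapsto A^\text{pure}_\mu-\partial_\mu\lambda,\qquad A^\text{phys}_\mu\mapsto A^\text{phys}_\mu+\partial_\mu\lambda,
\end{equation*}
so that the total gauge field $A_\mu$, and therefore $F_{\mu\nu}$, is untouched while the pure-gauge/physical split is not.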
This symmetry is reminiscent of the local Lorentz symmetry in the tetrad formalism of general relativity and the dual symmetry in gauge theories \cite{Chan:1997cv,Chan:2011tc,Baker:2011dg}. Note also that the Stueckelberg symmetry has no global counterpart, as one can see from Eq.~\eqref{stueck}. This means that the decomposition of the gauge field into pure-gauge and physical contributions does not introduce new conserved currents in the theory \cite{AlKuwari:1990db}. The Stueckelberg symmetry is a bit problematic in the sense that one can in principle write (infinitely) many Lagrangians equivalent to the original one, just by changing the explicit expressions for $A^\text{pure}_\mu(x)$ and $A^\text{phys}_\mu(x)$. To single out a particular Lagrangian in practice, one can add a gauge-invariant term that breaks the Stueckelberg symmetry. This amounts to constraining further $A^\text{phys}_\mu(x)$. One can use, for example, the light-front constraint $A^+_\text{phys}(x)=0$, the Coulomb constraint $\vec{\mathcal D}^\text{pure}\cdot\vec A^\text{phys}(x)=0$, the Fock-Schwinger constraint $x\cdot A^\text{phys}(x)=0$, or any other physical constraint that specifies the two physical degrees of freedom. It is important to keep in mind that, despite appearances, this procedure does not fix the gauge since $A^\text{pure}_\mu(x)$ also contributes to $A_\mu(x)$. For a more detailed discussion concerning the relation between Stueckelberg and gauge transformations, see Ref. \cite{Lorce:2013bja}. Explicit realizations of the decomposition \eqref{Adecomp} clarify the physical meaning of the Stueckelberg symmetry. These realizations are essentially non-local expressions of the gauge potential $A_\mu$. The gauge symmetry is preserved in these non-local expressions thanks to compensating phase factors. In many cases, these phase factors combine into a Wilson line whose path dependence is at the origin of the Stueckelberg dependence \cite{Lorce:2012ce}.
In other words, explicitly breaking the Stueckelberg symmetry amounts in many cases to determining the path of the Wilson lines. This path dependence has physical relevance, as demonstrated \emph{e.g.} by the Aharonov-Bohm effect \cite{Aharonov:1959fk} and the possibility of accessing the transverse canonical momentum of partons in the TMD factorization framework, see \emph{e.g.} \cite{Collins:2011zzd} and references therein. Note however that Stueckelberg dependence is more general than mere path dependence, because in certain explicit realizations, such as with the Coulomb constraint $\vec{\mathcal D}^\text{pure}\cdot\vec A^\text{phys}(x)=0$, the phase factors cannot be combined into a simple (path-dependent) Wilson line. In this sense, the approach based on the Coulomb constraint is path independent, though still Stueckelberg dependent. Note also that the Coulomb gauge is plagued by the issue of Gribov ambiguities in non-abelian gauge theories \cite{Gribov:1977wm}. By contrast, the so-called contour gauges, which include the light-front and axial gauges, are known to be free of these Gribov ambiguities \cite{Bassetto:1983rq,Ivanov:1985np}, but suffer from other pathologies already at the perturbative level, such as the presence of divergences and/or the existence of preferred frames mirroring the effects of ghosts in covariant gauges \cite{Burnel:2008zz}. At the non-perturbative level, these issues may have an impact similar to that of the Gribov copies. These remarks are naturally expected to apply to the Stueckelberg fixing procedure as well. In practice, the Chen \emph{et al.} approach is better regarded as a perturbative construction. It is the actual physical process that determines the phase factors or the shape of the Wilson lines, and in turn which constraint on $A^\text{phys}_\mu(x)$ to use.
Phase factors are necessary to preserve the gauge invariance, but favor a particular gauge constraint, the one in which they reduce to the identity or, equivalently, the one in which $A^\text{pure}_\mu(x)$ vanishes. Any gauge will of course give the same numerical answer for the physical observable, but the \emph{physical interpretation} of the latter will be the simplest in the gauge favored by the phase factor. The archetypal example is deep-inelastic scattering, where the factorization theorem forces the Wilson lines entering the definition of the parton distribution functions to run along the light-front direction. At leading twist, these parton distribution functions can be interpreted as linear combinations of parton probabilities in the light-front gauge $A^+(x)=0$. The decomposition \eqref{Adecomp} simply allows one to extend this interpretation to any gauge, provided that one defines the physical term by the constraint $A^+_\text{phys}(x)=0$. \section{Gauge-invariant canonical formalism}\label{sec4} We present in this section an alternative to the gauge-covariant canonical formalism, which we refer to as the gauge-invariant canonical formalism. We show that these two formalisms are formally equivalent. \subsection{Dirac variables} Dirac soon realized that one of the main obstacles in the quantization of a gauge theory is the gauge dependence of the fields. He therefore built from the old gauge-variant fields new gauge-invariant fields, which play the role of the dynamical variables in the canonical formalism. In this spirit, we make use of the matrices $U_\text{pure}(x)$ to construct the field variables \begin{equation}\label{generalizedDiracvariable} \hat\phi(x)=U^{-1}_\text{pure}(x)\phi(x). \end{equation} For simplicity, we consider once more that the fields $\phi(x)$ transform as internal vectors under gauge transformations, but this expression can easily be adapted to any sort of internal tensor.
Similarly, for the fields like $A_\mu(x)$ that do not transform as internal tensors under gauge transformations, we define \begin{equation}\label{generalizedDiracvariable2} \hat A_\mu(x)=U^{-1}_\text{pure}(x)\left[A_\mu(x)+\frac{i}{g}\,\partial_\mu\right]U_\text{pure}(x). \end{equation} From the gauge transformation law \eqref{Ugauge} of $U_\text{pure}(x)$, we see that the fields $\hat\phi(x)$ and $\hat A_\mu(x)$ are by construction gauge invariant. We refer to them as generalized Dirac variables\footnote{The original Dirac variables were constructed in the context of QED with the Coulomb constraint $\vec\nabla\cdot\vec{\hat A}(x)=0$.}. We stress that, despite appearances, Eqs.~\eqref{generalizedDiracvariable} and~\eqref{generalizedDiracvariable2} do not represent gauge transformations. In practice, the matrices $U_\text{pure}(x)$ can be expressed in terms of the gauge field $A_\mu(x)$ \cite{Lorce:2012ce}, and can then be thought of as dressing fields. From a geometrical point of view, $U_\text{pure}(x)$ simply determines a reference configuration in the internal space. The gauge-invariant fields $\hat\phi(x)$ then represent ``physical'' deviations from this reference configuration. \subsection{Gauge-invariant approach} In the gauge-invariant canonical formalism, the Lagrangian is thought of as a function of the generalized Dirac variables $\hat\phi(x)$ (including from now on $\hat A_\mu(x)$ as well) and their ordinary derivatives \begin{equation} \mathcal{L}(x)=f'''[\hat\phi(x),\partial_\mu\hat\phi(x)]. \end{equation} Since the generalized Dirac variables are gauge invariant, we can simply apply the standard approach of Section \ref{standardapproach}. The rule of thumb is particularly simple: it suffices to add a hat to the fields $\phi(x)$ wherever they appear. The obtained Euler-Lagrange equations and conserved Noether currents are then automatically gauge-invariant. 
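For concreteness, the gauge invariance of the generalized Dirac variables can be checked in one line. Writing the transformation laws as $\phi(x)\mapsto U(x)\phi(x)$ and, from Eq.~\eqref{Ugauge}, $U_\text{pure}(x)\mapsto U(x)U_\text{pure}(x)$ (our reading of the conventions above), one finds

```latex
% One-line check that \hat\phi(x) is gauge invariant,
% assuming \phi -> U\phi and U_pure -> U U_pure:
\begin{equation*}
\hat\phi(x)\;\mapsto\;\bigl[U(x)U_\text{pure}(x)\bigr]^{-1}U(x)\phi(x)
=U^{-1}_\text{pure}(x)\,U^{-1}(x)U(x)\,\phi(x)
=U^{-1}_\text{pure}(x)\,\phi(x)=\hat\phi(x).
\end{equation*}
```

The same cancellation works for $\hat A_\mu(x)$ with the standard transformation law of $A_\mu(x)$, the inhomogeneous $\frac{i}{g}\partial_\mu$ term of Eq.~\eqref{generalizedDiracvariable2} absorbing the derivative of $U(x)$.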
\subsection{Equivalence with the gauge-covariant approach} The gauge-covariant and gauge-invariant canonical formalisms are essentially equivalent. Indeed, noting that\footnote{For the gauge field, we have $\hat A_\mu(x)=U^{-1}_\text{pure}(x)A^\text{phys}_\mu(x)U_\text{pure}(x)$ and $\partial_\nu\hat A_\mu(x)=U^{-1}_\text{pure}(x)[\mathcal D^\text{pure}_\nu A^\text{phys}_\mu(x)]U_\text{pure}(x)$.} \begin{equation} \partial_\mu\hat\phi(x)=U^{-1}_\text{pure}(x)D^\text{pure}_\mu\phi(x), \end{equation} the Lagrangian can be rewritten in the following form \begin{equation} \mathcal{L}(x)=f'''[U^{-1}_\text{pure}(x)\phi(x),U^{-1}_\text{pure}(x)D^\text{pure}_\mu\phi(x)]. \end{equation} Then, thanks to the gauge symmetry of the Lagrangian, we are assured that all the matrices $U_\text{pure}(x)$ disappear in the final expression, so that we can write the Lagrangian in the form of Eq.~\eqref{noncovariantform}. In other words, one can switch between the gauge-covariant and gauge-invariant canonical formalisms by a mere change of variables. Nonetheless, because of the similarity between the Schwinger approach in gauge theories and the background field method in general relativity (and other metric theories), our new gauge-covariant canonical formalism appears more suited in these contexts. This also means that the issue of uniqueness raised by the Stueckelberg symmetry affects the gauge-invariant canonical formalism as well. Indeed, we see from Eq.~\eqref{Ustueck} that, even if the generalized Dirac variables are gauge invariant, they are not Stueckelberg invariant \begin{equation} \hat\psi(x)\mapsto\hat\psi^g(x)=U_0(x)\hat\psi(x). \end{equation} There is the same freedom in defining $\hat\phi(x)$ precisely as in defining $A^\text{phys}_\mu(x)$. The existence of an entire class of such composite fields was already pointed out by Dirac and Steinmann \cite{Dirac:1955uv,Steinmann:1983ar,Steinmann:1985id}. 
In particular, we note that Dirac's gauge-invariant formulation of QED is equivalent to the Chen \emph{et al.} approach, since both make use of the explicit dressing matrices $U_\text{pure}(x)=e^{ie\frac{\vec\nabla\cdot\vec A}{\vec\nabla^2}(x)}$ leading to the Stueckelberg-fixing constraint $\vec\nabla\cdot\vec A^\text{phys}(x)=0$. This particular choice makes the Coulomb gauge $\vec\nabla\cdot\vec A(x)=0$ special. In that gauge, the gauge-fixed fields coincide with the gauge-invariant ones $\phi(x)|_{\vec\nabla\cdot\vec A(x)=0}=\hat\phi(x)$. For this reason, the gauge-covariant and gauge-invariant canonical formalisms can be interpreted as gauge-invariant extensions of the standard canonical formalism. \section{Conclusion}\label{sec5} In this letter, we showed that explicitly separating the physical and unphysical degrees of freedom in the gauge potential allows one to reconcile in a natural way the Euler-Lagrange equations and Noether's theorem of the standard canonical formalism with the gauge symmetry. Applying this formalism to QCD, we derived canonically the gauge-invariant operators proposed earlier by Chen \emph{et al.} in the context of the proton spin decomposition. Finally, we demonstrated the formal equivalence between our formalism and Dirac's gauge-invariant canonical formalism. Because of the similarity between the approach adopted here and the background field method, we believe that the formalism developed in this letter should also be relevant to general relativity and, more generally, to any metric theory. \section*{Acknowledgements} In this study, I greatly benefited from numerous discussions with E. Leader, M. Wakamatsu and F. Wang on former works. I am also grateful to A. Courtoy for drawing my attention to the concept of Dirac variables. This work was supported by the P2I (``Physique des deux Infinis'') network.
\section{#1}} \renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \newcommand{\begin{equation}}{\begin{equation}} \newcommand{\end{equation}}{\end{equation}} \newcommand{\begin{displaymath}}{\begin{displaymath}} \newcommand{\end{displaymath}}{\end{displaymath}} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\begin{eqnarray*}}{\begin{eqnarray*}} \newcommand{\end{eqnarray*}}{\end{eqnarray*}} \renewcommand{\k}{\kappa} \newcommand{\mu}{\mu} \newcommand{\nu}{\nu} \newcommand{\phi}{\phi} \renewcommand{\r}{\rho} \newcommand{\varrho}{\varrho} \newcommand{{\cal H}}{{\cal H}} \newcommand{{\cal N}}{{\cal N}} \newcommand{{\cal D}}{{\cal D}} \newcommand{{\cal O}}{{\cal O}} \newcommand{{\cal L}}{{\cal L}} \newcommand{\scriptscriptstyle}{\scriptscriptstyle} \newcommand{\labell}[1]{\label{#1}\qquad_{#1}} \newcommand{\reef}[1]{(\ref{#1})} \newcommand{\nonumber}{\nonumber} \newcommand{\partial}{\partial} \newcommand{\ti}[1]{\tilde{#1}} \newcommand{\textrm{d}}{\textrm{d}} \newcommand{\mt}[1]{\textrm{\tiny #1}} \newcommand{{\cal F}}{{\cal F}} \newcommand{{\cal K}}{{\cal K}} \newcommand{{\cal P}}{{\cal P}} \newcommand{{\cal A}}{{\cal A}} \newcommand{{\cal B}}{{\cal B}} \newcommand{\ell_s}{\ell_s} \newcommand{\zD}{\ensuremath{z_{D7}}} \newcommand{\tzD}{\ensuremath{\zeta_m}} \newcommand{\ensuremath{\tilde{\rho}}}{\ensuremath{\tilde{\rho}}} \newcommand{\ensuremath{\tilde{z}}}{\ensuremath{\tilde{z}}} \newcommand{\Rl}{\ensuremath{(R/l_s)^2} \newcommand{\tc}{\ensuremath{\sqrt{g_s N}}} \newcommand{\stc}{\ensuremath{(g_sN)^{\frac{1}{4}}}} \newcommand{\mq}{\ensuremath{m_q}} \newcommand{\ensuremath{\mbox{\small eff.}}}{\ensuremath{\mbox{\small eff.}}} \newcommand{\mbox{${\cal N}$}}{\mbox{${\cal N}$}} \newcommand{\ensuremath{{\cal Y}}}{\ensuremath{{\cal Y}}} \newcommand{\ensuremath{{\cal Y}^{\ell,\pm}}}{\ensuremath{{\cal Y}^{\ell,\pm}}} \newcommand{\ensuremath{\frac{\ell}{2}}}{\ensuremath{\frac{\ell}{2}}} 
\newcommand{\ensuremath{SU(2)_R\times SU(2)_L}}{\ensuremath{SU(2)_R\times SU(2)_L}} \newcommand{\ensuremath{\bar{\rho}}}{\ensuremath{\bar{\rho}}} \newcommand{\ensuremath{\frac{1}{2}}}{\ensuremath{\frac{1}{2}}} \newcommand{\ensuremath{M_{\pi}}}{\ensuremath{M_{\pi}}} \newcommand{\ensuremath{\Lambda_{\mbox{\small QCD}}}}{\ensuremath{\Lambda_{\mbox{\small QCD}}}} \newcommand{\ensuremath{\chi+i\,e^{-\phi}}}{\ensuremath{\chi+i\,e^{-\phi}}} \newcommand{\ensuremath{SL(2,\bbz{})}}{\ensuremath{SL(2,\bbz{})}} \newcommand{\ensuremath{{\mathcal Im}}}{\ensuremath{{\mathcal Im}}} \newcommand{\ensuremath{\bar{1}}}{\ensuremath{\bar{1}}} \newcommand{\ensuremath{\bar{2}}}{\ensuremath{\bar{2}}} \newcommand{\ensuremath{\bar{\imath}}}{\ensuremath{\bar{\imath}}} \newcommand{\ensuremath{\bar{\jmath}}}{\ensuremath{\bar{\jmath}}} \newcommand{\ensuremath{\bar{k}}}{\ensuremath{\bar{k}}} \newcommand{\ensuremath{\bar{l}}}{\ensuremath{\bar{l}}} \newcommand{\ensuremath{\bar{a}}}{\ensuremath{\bar{a}}} \newcommand{\ensuremath{\bar{b}}}{\ensuremath{\bar{b}}} \newcommand{\ensuremath{\bar{c}}}{\ensuremath{\bar{c}}} \newcommand{\ensuremath{\bar{d}}}{\ensuremath{\bar{d}}} \newcommand{\ensuremath{\bar{z}}}{\ensuremath{\bar{z}}} \newcommand{\ensuremath{\bar{w}}}{\ensuremath{\bar{w}}} \newcommand{\ensuremath{\bar{\zeta}}}{\ensuremath{\bar{\zeta}}} \newcommand{\ensuremath{\bar{\tau}}}{\ensuremath{\bar{\tau}}} \newcommand{\ensuremath{\bar{A}}}{\ensuremath{\bar{A}}} \newcommand{\ensuremath{\bar{B}}}{\ensuremath{\bar{B}}} \newcommand{\ensuremath{\bar{C}}}{\ensuremath{\bar{C}}} \newcommand{\ensuremath{\bar{D}}}{\ensuremath{\bar{D}}} \newcommand{\N}[1]{\ensuremath{{\cal N}=#1}} \newcommand{\ensuremath{\tilde{K}}}{\ensuremath{\tilde{K}}} \newcommand{{\bf Ai}}{{\bf Ai}} \newcommand{{\bf I}}{{\bf I}} \newcommand{{\bf J}}{{\bf J}} \newcommand{{\bf K}}{{\bf K}} \newcommand{\ensuremath{\tilde{\eta}}}{\ensuremath{\tilde{\eta}}} \newcommand{\ensuremath{\bar{\partial}}}{\ensuremath{\bar{\partial}}} \def\tilde{\lambda} 
{\tilde{\lambda}} \def\tilde{r} {\tilde{r}} \def\tilde{\rho} {\tilde{\rho}} \def r_\mt{vac}{r_\mt{vac}} \newcommand{\ensuremath{\vec{n}}}{\ensuremath{\vec{n}}} \newcommand{\ensuremath{\tilde{\lambda}}}{\ensuremath{\tilde{\lambda}}} \newcommand{\ensuremath{\cos\theta}}{\ensuremath{\cos\theta}} \newcommand{\ensuremath{\sin\theta}}{\ensuremath{\sin\theta}} \newcommand{\ensuremath{\partial_\sigma}}{\ensuremath{\partial_\sigma}} \newcommand{\ensuremath{\dot{\theta}}}{\ensuremath{\dot{\theta}}} \newcommand{\ensuremath{\dot{\varphi}}}{\ensuremath{\dot{\varphi}}} \newcommand{\ensuremath{\varphi}}{\ensuremath{\varphi}} \newcommand{\ensuremath{\partial_t}}{\ensuremath{\partial_t}} \newcommand{\ensuremath{\partial_{\tau}}}{\ensuremath{\partial_{\tau}}} \newcommand{\ensuremath{\tilde{\sigma}}}{\ensuremath{\tilde{\sigma}}} \newcommand{\ensuremath{\varepsilon_i}}{\ensuremath{\varepsilon_i}} \newcommand{\ensuremath{\sigma_0}}{\ensuremath{\sigma_0}} \newcommand{\ensuremath{\mathrm{N}}}{\ensuremath{\mathrm{N}}} \newcommand{\ensuremath{\NC^{rs}_{mn}}}{\ensuremath{\ensuremath{\mathrm{N}}^{rs}_{mn}}} \newcommand{\ensuremath{\NC^{rs}_{mn}(\ei,\sz)}}{\ensuremath{\ensuremath{\mathrm{N}}^{rs}_{mn}(\ensuremath{\varepsilon_i},\ensuremath{\sigma_0})}} \newcommand{\ensuremath{1+\ensuremath{\sin\frac{\sz}{2}}}}{\ensuremath{1+\ensuremath{\sin\frac{\sz}{2}}}} \newcommand{\ensuremath{\sin\frac{\sz}{2}}}{\ensuremath{\sin\frac{\ensuremath{\sigma_0}}{2}}} \newcommand{\ensuremath{\cos\frac{\sz}{2}}}{\ensuremath{\cos\frac{\ensuremath{\sigma_0}}{2}}} \newcommand{\ensuremath{\mathrm{P}^l_m(\cos\sz)}}{\ensuremath{\mathrm{P}^l_m(\cos\ensuremath{\sigma_0})}} \newcommand{\ensuremath{\mathrm{sign}}}{\ensuremath{\mathrm{sign}}} \newcommand{\ensuremath{\hat{P}}}{\ensuremath{\hat{P}}} \newcommand{\ensuremath{\mathbb{I}}}{\ensuremath{\mathbb{I}}} \newcommand{{\cal E }}{{\cal E }} \newcommand{\ensuremath{\mbox{arccosh}}}{\ensuremath{\mbox{arccosh}}} 
\newcommand{\ensuremath{\mbox{cotan}}}{\ensuremath{\mbox{cotan}}} \newcommand{\ensuremath{\mathcal{U}}}{\ensuremath{\mathcal{U}}} \renewcommand{\Re}{\ensuremath{\mathrm{Re}}} \renewcommand{\Im}{\ensuremath{\mathrm{Im}}} \begin{document} \title{\Large \bf Non-analyticity in Holographic Complexity near Critical points} \author{ Uday Sood$^1$, Martin Kruczenski$^{1,2}$\thanks{E-mail: \texttt{usood@purdue.edu, markru@purdue.edu.}} \\ $^1$ Dep. of Physics and Astronomy, and \\ $^2$ Purdue Quantum Science and Engineering Institute \\ Purdue University, W. Lafayette, IN, USA. } \maketitle \begin{abstract} The region near a critical point is studied using holographic models of second-order phase transitions. In a previous paper, we argued that the quantum circuit complexity of the vacuum ($C_0$) is the largest at the critical point. When deforming away from the critical point by a term $\int d^d x \, \tau \, {\cal O}_\Delta$, the complexity $C(\tau)$ has a piece non-analytic in $\tau$, namely $C_0-C(\tau) \sim |\tau-\tau_c|^{\nu(d-1)} + \mathrm{analytic}$. Here, as usual, $\nu=\frac{1}{d-\Delta}$, $\xi$ is the correlation length $\xi\sim |\tau-\tau_c|^{-\nu}$, and there are possible logarithmic corrections to this expression. That result was derived using numerical results for the Bose-Hubbard model and general scaling considerations. In this paper, we show that the same holds for holographic complexity, providing evidence that the results are universal and, at the same time, supporting the holographic computations of complexity. \end{abstract} \clearpage \tableofcontents \newpage \section{Introduction} In a previous work \cite{Sood:2021cuz}, we studied the circuit complexity of the ground state of systems near quantum critical points using numerical and field-theoretical methods. 
The notion of complexity used in that work followed from identifying optimal circuits with geodesics on the space of circuits, similar to \cite{Nielsen2006AGA, nielsen2006quantum, Dowling2008TheGO} in the context of quantum computing and \cite{jefferson2017circuit, Chapman:2017rqy, Khan:2018rzm, Hackl:2018ptj, Guo:2018kzl, Chapman:2018hou, Bhattacharyya:2018bbv, Jiang:2018nzg} in the context of quantum field theory. Related calculations of complexity in many body systems near quantum phase transitions can be found in the recent works \cite{Afrasiar:2022efk, Pal:2022ptv} for the LMG (Lipkin-Meshkov-Glick) model, \cite{Meng:2021wmz} for the Proca theory, and the closely related work \cite{Huang:2021xjl} on the $N$-site Bose-Hubbard model. In this work, we continue our study of complexity, this time through the lens of known holographic conjectures. The first such conjecture is that the complexity should be dual to the volume of the extremal codimension-one bulk hypersurface which meets the asymptotic boundary on the time slice where the boundary state is defined \cite{Stanford:2014jda}. The second conjecture states that complexity should be dual to the gravitational action evaluated on the Wheeler-DeWitt (WDW) patch of the spacetime \cite{Brown:2015bva, Brown:2015lvg}. The WDW patch is the region of spacetime enclosed by past and future light sheets sent into the bulk from the time slice on the boundary. These proposals have led to numerous insights, see, e.g., \cite{Stanford:2014jda, Carmi:2017jqz, Susskind:2020gnl, Susskind:2020wwe, Hashemi:2019aop}. Using these conjectures, we study the critical behavior of the complexity in holographic RG flows \cite{Skenderis:2002wp, Bianchi:2001de}. \\ A nice feature of the holographic correspondence is that it allows us to study RG flows of strongly coupled field theories using weakly coupled dual gravitational theories. The geometries that will be of interest to us are asymptotically AdS geometries. 
The interpretation in the field theory is that of a perturbed CFT that undergoes a renormalization group flow. In the holographic correspondence, the radial coordinate of the AdS geometry is interpreted in terms of the energy scale in the field theory; therefore, a dependence of a bulk field on the radial coordinate represents an RG flow. On the field theory side, perturbations are introduced by adding a source term to the CFT Lagrangian or by giving a vacuum expectation value to a certain operator. These two types of perturbations correspond to the non-normalizable and normalizable modes of the dual bulk fields, respectively \cite{PhysRevD.59.104021}. \\ In our previous study, we found that the complexity is the largest at the critical point $\tau=\tau_c$ and, as $\tau \rightarrow \tau_c$, has a non-analytic piece that behaves as $|\tau - \tau_c|^{\nu (d-1)}$ for a $d$-dimensional spacetime field theory. We also found that this term is independent of the UV cutoff. We find all these features in both the volume and the action calculations near holographic critical points. This is the main result of this paper. In addition, we found that the analytic terms in $C(\tau)$ were all regularization-dependent and hence ambiguous even after subtracting $C_0$. This continues to hold in the case of holographic complexity. \\ The organization of the paper is as follows: we start with an explanation of the general gravitational setup that we use for RG flow geometries in Section \ref{bulk gravity}. We include expressions for the general form that the volume and action calculations take for these geometries. In Section \ref{non-analytic terms}, we show that all such expressions contain a term that is in general non-analytic in the deformation coupling $\tau$ by deriving a scheme to extract such a term, which we denote by $v_0$ and $i_0$ for the volume and action, respectively. Section \ref{n=1 flow} looks at an example where the volume and action can be computed analytically. 
This is the case of the ${\cal N} = 1$ flow geometry. Next, in Section \ref{ambiguity}, we show that the complexity defined using the volume or action conjectures has various ambiguities, as it is regulator-dependent. We look at a few such ambiguities in the volume and action. We find that the non-analytic term discussed previously is universal and independent of the cutoff. This is similar to the nature of complexity found in \cite{Sood:2021cuz}, where we found a non-universal complexity but a universal non-analytic term in the field theory calculation for the complexity near the $O(2)$ fixed point in a Gaussian approximation. Finally, we summarize our results with the discussion in Section \ref{conclusions}. \section{Bulk Gravity Setup} \label{bulk gravity} We want to study holographic complexity near a critical point that defines a UV CFT dual to AdS. Upon deforming it by an IR-relevant operator, the theory may flow in the IR to another CFT or to a gapped phase. In the dual gravitational theory, we consider an asymptotically AdS background of the form \begin{align} ds^2 = \frac{L^2}{z^2} (\eta_{\mu\nu} dx^{\mu} dx^{\nu} + \frac{dz^2}{f(z)}) \label{general metric} \end{align} We require that the function $f(z)$ have a scale $\xi$ and be of the form $f(z/\xi)$ such that $f(z)\simeq 1$ when $z\ll \xi$. Then, $\xi$ defines a correlation length such that for length scales smaller than $\xi$ the theory is described by a CFT. For larger length scales we can have a flow to another CFT or to a gapped phase based on how $f(z)$ behaves in the region $z>\xi$. The boundary deformation by an IR-relevant operator ${\cal O}$ is modeled by turning on in the bulk a scalar field $\Phi$ whose mass is related to the conformal dimension of ${\cal O}$. 
Consider then the bulk action \begin{align} I_{bulk} &=\frac{1}{16\pi G_N} \int d^{D}x \sqrt{-g} \left( R-\frac{1}{2} g^{AB} \partial_A \Phi \partial_B \Phi - V(\Phi) \right) \label{gravity action} \end{align} where $g_{AB}$ is the spacetime metric and $V$ is the scalar potential, which has a critical point at $\Phi =0$, satisfying $V(0) = -\frac{d(d-1)}{L^2}$. \\ When we have $f(z) \rightarrow \frac{L^2}{L^2_{0}} > 1$ as $z \rightarrow \infty$, the geometry again becomes AdS with radius $L_0$. For consistency, the potential again has a critical point at the $z \rightarrow \infty$ value of the scalars $V_0 = -\frac{d(d-1)}{L_0^2}$. Such geometries are known as domain wall geometries in the literature \cite{Girardello:1999bd, Freedman:1999gp}.\\ On the other hand, flows to a gapped phase involve an $f(z)$ that keeps growing in the interior and typically blows up with some positive power of $z$ as $z \rightarrow \infty$ \cite{Girardello:1998pd}. The spacetime is singular here but typically becomes non-singular when extra dimensions are taken into account, and generically the spacetime caps off at a finite proper distance from the boundary.\\ In addition to how the metric behaves in the interior, the RG flow geometries are also distinguished based on whether the leading deformation is source-like or vev-like. The leading behavior of the scalar $\Phi$ dual to the relevant deformation ($\Delta < d$) of the field theory determines whether we have a source-like deformation \begin{align} \Phi = \Phi_{(s)} z^{d-\Delta} + ... \end{align} or a vev-like deformation \begin{align} \Phi = \Phi_{(v)} z^{\Delta} + ... \end{align} in the standard quantization framework. \\ The equations of motion following from the action in Eq. 
$\ref{gravity action}$ are \begin{align} \label{equations of motion} R_{AB} &= \frac{1}{2}\partial_A \Phi \partial_B \Phi + \frac{1}{d-1}g_{AB} V(\Phi) \\ \frac{1}{\sqrt{-g}} &\partial_A (\sqrt{-g}g^{AB}\partial_B\Phi) - \frac{\delta V}{\delta \Phi} = 0 \end{align} Near the boundary, the equation for $R_{zz} + R_{tt}$ gives \begin{align} zf'(z) = \frac{(d-\Delta)^2 \Phi_{(s)}^2}{d-1}z^{2(d-\Delta)} \end{align} Integrating, we find that the leading correction to $f(z)$ is positive, \begin{align} f(z) = 1 + \frac{(d-\Delta)\Phi_{(s)}^2}{2(d-1)} z^{2(d-\Delta)} + ... \end{align} The critical exponent $\nu$ determines the scaling of $\Phi_{(s)}$ with the correlation length in the dual field theory \footnote{$\nu$ is defined in the standard way via $\xi \sim \Phi_{(s)}^{-\nu}$. Since $\int d^{d}x \Phi_{(s)} O_{\Delta}$ is a term in the action, $\frac{1}{\nu} = d-\Delta$.}. Thus, the above equation is of the form $f(z) = 1 + (z/\xi)^{2(d-\Delta)} + ...$ \footnote{If $\Delta = d/2$, then the leading-order term in $f(z)$ is $(z/\xi)^{d} (\log(z/\xi))^2$. This also follows from a near boundary analysis of Einstein equations and the fact that even the asymptotic leading order behavior of the scalar has a $z^{d/2}\log(z)$ term.}. A similar near-boundary analysis can be done in the case of vev deformations, which yields an asymptotic form of $f(z)$ given by \begin{align} f(z) = 1+ \frac{\Delta}{2(d-1)}z^{2\Delta}\, \Phi_{(v)}^2 + ... \end{align} The positivity of the leading correction to $f(z)$ near the boundary follows more generally from the null energy condition and the Einstein equations, which imply that $f(z)$ is monotonically increasing, {\it i.e.}, $\partial_z f > 0$ \cite{Freedman:1999gp, Liu_2013}. 
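As a quick cross-check of the integration step above: since $f'(z)\propto z^{2(d-\Delta)-1}$ integrates to $z^{2(d-\Delta)}/[2(d-\Delta)]$, the quoted expansion coefficient must satisfy $\frac{(d-\Delta)^2}{d-1}\cdot\frac{1}{2(d-\Delta)}=\frac{d-\Delta}{2(d-1)}$. A minimal sketch in exact rational arithmetic (the sample integer values of $d$ and $\Delta$ are ours, chosen only for illustration):

```python
from fractions import Fraction as F

def coeff_from_eom(d, Delta):
    # z f'(z) = (d-Delta)^2 Phi^2/(d-1) z^{2(d-Delta)}  =>  f' = C z^{2a-1}, a = d-Delta,
    # so integrating gives a z^{2a} coefficient of C / (2a)
    a = d - Delta
    return F((d - Delta) ** 2, d - 1) / (2 * a)

def coeff_claimed(d, Delta):
    # quoted near-boundary expansion: f = 1 + (d-Delta)/(2(d-1)) Phi^2 z^{2(d-Delta)}
    return F(d - Delta, 2 * (d - 1))

# the two coefficients agree for every (d, Delta) with Delta < d
for d in (3, 4, 5):
    for Delta in range(1, d):
        assert coeff_from_eom(d, Delta) == coeff_claimed(d, Delta)
```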
\commentout{When there is a single scalar, the leading contribution apart from the cosmological constant term to the stress-energy tensor \begin{align} T_{AB} &= -\frac{2}{\sqrt{-g}}\frac{\delta I_{scalar}}{\delta g^{AB}} \end{align} near the boundary comes from the derivatives of the $\Phi_{(s)}$ or $\Phi_{(v)}$ terms along with the mass term in the potential. Using the mass-dimension relation $m^2L^2 = \Delta (\Delta-d)$, we have \begin{align} T_{zz} &= \frac{d}{4\kappa^2} (d-\Delta) \Phi_{(s)}^2 z^{2(d-1-\Delta)} + ... \\ T_{\mu \nu} &= \frac{1}{2\kappa^2}\eta_{\mu \nu} (d-\Delta) (\Delta - \frac{d}{2}) \Phi_{(s)}^{2} z^{2(d-1-\Delta)} + ... \end{align} for the source deformations. The quantity $T_{tt} + T_{zz}$ is manifestly positive. This can be explicitly checked for this case but it is also a consequence of the null energy condition applied near the boundary. This implies that the leading correction to $f(z)$, in this case, is positive. \begin{align} f(z) = 1 + c_d z^{2(d-\Delta)} + ... \end{align} with $c_d > 0$. In fact, $c_d \propto \Phi_{(s)}^2$ with the proportionality constant depending on d and $\Delta$. } The maximal-volume computation is straightforward for the class of metrics in eq. $\ref{general metric}$. The maximal volume slices are fixed-$t$ slices and the maximal volume is invariant under boundary time translations. Thus we can fix the boundary time at which to evaluate the maximal volume to be $t=0$. Then, the maximal slice satisfies $t=0$ for all $z$. The volume of such a slice is \begin{align} V_{\Sigma}[f] = \sigma_{d-1} L^d \int_{z_0}^{Z_0} dz \frac{1}{z^d \sqrt{f(z)}} \end{align} The volume of these slices would be infinite for $z_0 \rightarrow 0$ because proper distances near $z=0$ diverge and these slices extend to $z=0$. These UV divergences are expected in the field theory definitions of complexity as well \cite{Sood:2021cuz, jefferson2017circuit}. So, we use a regulated version of AdS space with a cutoff surface at $z=z_0$. 
Also, $\sigma_{d-1}$ is the volume of the spatial field theory background, which is finite after making the spatial coordinates of the boundary theory periodic. In principle, such periodic identification is singular at the Poincar\'e horizon $z \rightarrow \infty$, which can be avoided by using a large-$z$ cutoff $Z_0$. However, this is unnecessary here since the complexity we compute is not singular in the limit $Z_0 \rightarrow \infty$. It would be useful to note that for $d\geq2$ a fixed-$t$ AdS slice has a volume given by \begin{align} V_{\Sigma}(AdS_{d+1}) = \frac{\sigma_{d-1}L^d}{(d-1)\epsilon^{d-1}} \end{align} where $z_0 = \epsilon$. \\ We also compute the action of the scalar-gravity system on the WDW patch of spacetimes given by the class of metrics in eq. $\ref{general metric}$. This is yet another coordinate-invariant object in the bulk. The full action on the WDW patch requires an accounting of boundary terms in addition to $I_{bulk}$ \cite{Chapman:2016hwi}. These terms come from codimension-one boundary segments as well as codimension-two joints formed by the intersection of these segments. Any spacelike/timelike segments require the addition of the Gibbons-Hawking-York term \cite{York:1972sj, Gibbons:1976ue} and any joints formed by these surfaces require additional terms \cite{Hayward:1993my, Brill:1994mb} for the variational principle to be well defined. Similarly, null segments require a boundary term \cite{Parattu:2015gga, Lehner:2016vdi} and so do the joints formed by the intersection of null segments with spacelike/timelike boundary segments. 
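The regulated fixed-$t$ slice volume quoted above can also be checked numerically: for pure AdS ($f=1$) the integral $\int_\epsilon^{Z_0} dz/z^d$ equals $\frac{1}{d-1}\left(\epsilon^{1-d}-Z_0^{1-d}\right)$. A small numerical sketch (the quadrature routine and parameter values are our own, not from the paper):

```python
import math

def slice_volume(d, eps, z_max, n=4000):
    # composite Simpson rule for \int_eps^{z_max} dz / z^d, using z = e^u
    # so the integrand becomes exp((1-d) u), smooth over the whole range
    a, b = math.log(eps), math.log(z_max)
    h = (b - a) / n
    s = 0.0
    for i in range(n + 1):
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * math.exp((1 - d) * (a + i * h))
    return s * h / 3.0

# compare against the closed-form regulated volume for several dimensions
for d in (2, 3, 4):
    eps, z_max = 0.1, 1.0e3
    exact = (eps ** (1 - d) - z_max ** (1 - d)) / (d - 1)
    assert abs(slice_volume(d, eps, z_max) - exact) / exact < 1e-8
```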
\begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{ads_space_wdw.png} \caption{WDW region for AdS spacetime} \label{ads_wdw} \end{figure} \begin{align} I = I_{bulk}(W) &+ I_{GHY}(\partial W_1) + I_{null}(\partial W_2) + I_{jnts} (J) \end{align} In the above equation, $\partial W_1$ refers to any spacelike/timelike boundary segments of the WDW patch, $\partial W_2$ to null boundary segments, and $J$ to any joints formed between boundary segments. Naively, it may seem that the definition of the WDW patch should imply that one only has null boundary segments. However, this may not be the case, as some regularization schemes may introduce non-null surfaces, see Fig.~\ref{ads_wdw}. \\ The precise formula for $I_{bulk}$ involves evaluating eq. $\ref{gravity action}$ for the specific choice of functions $f$ and $\Phi$ on the regulated WDW patch. The second term is the usual Gibbons-Hawking-York term for spacelike/timelike boundaries $\partial W_1$. It is \begin{align} I_{GHY}(\partial W_1) = \frac{1}{8 \pi G_N} \int d^{d}x \sqrt{|h|} K \end{align} with $h$ being the induced metric on $\partial W_1$ and $K$ the trace of the extrinsic curvature.\footnote{All normals are taken to point outwards w.r.t.\ $W$.} The third term is the contribution from null boundaries $\partial W_2$. It is given by \begin{align} I_{null}(\partial W_2) = -\frac{1}{8 \pi G_N} \int d\lambda d^{d-1}\theta \sqrt{|\gamma|} \kappa \end{align} This term is required for the variational principle to be well-defined whenever a spacetime has null boundaries. This piece depends on the parametrization \footnote{ The variation and the equations of motion do not.} through $\kappa$, which measures the non-affinity in $\lambda$. $\gamma$ is the $(d-1)$-dimensional metric on the $\theta$ coordinates. 
A choice of $\lambda$ gives a null normal $k^A$ to the surface, which satisfies \begin{align} k^A \grad_A k_B = \kappa k_B \end{align} A choice of $\lambda$ can set this term to zero, and we make this choice in this section and the next. In Sec.~\ref{ambiguity}, we consider a different choice and check whether that changes this term in the action. Finally, the term evaluated on joints is \begin{align} I_{jnts}(J) = \frac{1}{8 \pi G_N} \int d^{d-1}x \sqrt{\sigma} a \end{align} where for timelike-null joints $a = -\ensuremath{\mathrm{sign}}(k.s)\ensuremath{\mathrm{sign}}(k.\hat{t}) \log |k.s|$, where $k$ is the null normal and $s$ is the one-form spacelike unit normal to the timelike surface. $\hat{t}$ is a tangent vector in the tangent space of the timelike surface, orthogonal to the junction and again pointing outwards as shown in Fig.~\ref{timelike-null joint fig}. \begin{figure}[h] \centering \begin{subfigure}{0.4\textwidth} \centering \includegraphics[width = \textwidth]{joint1.png} \caption{Timelike-null joints} \label{timelike-null joint fig} \end{subfigure} \hfill \begin{subfigure}{0.4\textwidth} \centering \includegraphics[width = \textwidth]{joint2.png} \caption{Null-null joints} \label{null-null joint fig} \end{subfigure} \caption{Joints in AdS WDW spacetime} \end{figure} \commentout{ \\ The second term is \begin{align} \int_{\partial W_2} d\lambda d^{d-1}\theta \sqrt{\gamma} \tau \end{align} where $\gamma$ is the induced metric on the (d-1)-dimensional space with $\theta$ coordinates. $\lambda$ is a parameter along the null generator. $\tau$ is the non-affinity coefficient for the $\lambda$ parametrization. Given a parametrization, we can define the tangent vector $k^{\mu} = \frac{\partial x^{\mu}}{\partial \lambda}$ and then $\tau$ measures the non-affinity of the parametrization as it is zero for the affine choice. 
\begin{align} k^{\mu} \grad_{\mu}k^{\nu} = \tau k^{\nu} \end{align} This is the first ambiguity that we encounter in the definition of the action for the WDW patch as this term depends on a choice of the parametrization of the null surface. It turns out that this dependence goes away when we look at the variation of the action. For evaluating the action on-shell, one choice that can be made is to set $\tau = 0$ by choosing an affine $\lambda$. Then this term does not contribute to the action. \\ The third term is the joint term between segments at least one of which is null. We have a few such joints in our calculations and therefore this joint term requires a careful examination. $\sigma$ is the determinant of the induced metric on the $(d-1)$-dimensional joint sub-manifold. We encounter joints that are either spacelike-null or timelike-null and for these cases, $a$ is given by $- \ensuremath{\mathrm{sign}}(k.t)\ensuremath{\mathrm{sign}}(k.\hat{s}) \log \abs{k.t}$ and $- \ensuremath{\mathrm{sign}}(k.\hat{t})\ensuremath{\mathrm{sign}}(k.s) \log \abs{k.s}$ respectively where $s$, $t$ are one-forms that are normal to the surface and point outwards and $\hat{s}$, $\hat{t}$ are unit tangent vectors in the spacelike/timelike segments which are orthogonal to the junction and also point outwards. The above term also introduces ambiguities in the action.\\} We compute the action for $AdS_{D}$ as a warmup. Any scalars are turned off in this case and therefore we only have the cosmological constant term from the scalar. For the $AdS_{D}$ space, we have $R = - \frac{d(d+1)}{L^2}$ which gives \begin{align} I_{bulk} (AdS_{D}) = -\frac{dL^{d-1}}{\kappa^2}\int_{WDW}d^{D}x \frac{1}{z^{d+1}} \end{align} In this case, the WDW patch is enclosed by $t = \pm z$ light sheets. 
Including regulators at small $z=\epsilon$ and large $z=Z_0$, we have \begin{align} I_{bulk} (AdS_{D}) = -\frac{d\sigma_{d-1} L^{d-1}}{(d-1)4\pi G_N}\left( \frac{1}{\epsilon^{d-1}} - \frac{1}{Z_0^{d-1}} \right) \end{align} We find that $I_{bulk} (AdS_{D})$ is negative and that we can take the $Z_0 \rightarrow \infty$ limit. \commentout{ giving specifically for $AdS_5$ \begin{align} I_{bulk} (AdS_5) = -\frac{\sigma_3 L^{3}}{3\pi G_N}\frac{1}{\epsilon^{3}} \end{align}} For the GHY term, we find that the surface at $z=Z_0$ again gives a vanishing contribution as $Z_0 \rightarrow \infty$. For general $d \geq 2$, one obtains \begin{align} I_{GHY}(AdS_{D}) = \frac{d\sigma_{d-1}L^{d-1}}{4 \pi G_N} \left( \frac{1}{\epsilon^{d-1}} - \frac{1}{Z_0^{d-1}} \right) \end{align} \commentout{For the $AdS_5$ case with $Z_0 \rightarrow \infty$, this contribution is $\sigma_3L^3/\pi G_N \epsilon^3$.} As mentioned above, we can affinely parametrize the null generators and make any contributions from the null boundaries vanish. The WDW patch for vacuum AdS has four joints all of which are null-timelike joints. The outward null normals corresponding to affine parametrizations are $dt - dz$ and $-dt-dz$ respectively. The joint contributions give \begin{align} I_{jnts}(AdS_D) = -\frac{L^{d-1}\sigma_{d-1}}{4 \pi G_N}\left( \frac{\log{\epsilon/L}}{\epsilon^{d-1}} - \frac{\log{Z_0/L}}{Z_0^{d-1}} \right) \end{align} \commentout{to get $-L^3\sigma_3 \log{(\epsilon/L)}/4 \pi G_N\epsilon^3$ for $AdS_5$.} We can again drop the $Z_0$ terms. Adding up all these contributions, the full action for $AdS_{d+1}$ is \begin{align} I(AdS_{d+1}) = \frac{\sigma_{d-1} L^{d-1}}{4 \pi G_N\epsilon^{d-1}} \left( \frac{d(d-2)}{d-1} + \log{L/\epsilon} \right) \end{align} We find that the total action is positive here, even though the bulk contribution was negative. Hence, we find that the boundary terms are important for the action to be a valid measure of complexity. 
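The coefficient in the final expression can be verified by adding the bulk and GHY pieces, $-\frac{d}{d-1}+d=\frac{d(d-2)}{d-1}$, with the joint term supplying the $\log(L/\epsilon)$ piece with unit coefficient. A one-line check in exact arithmetic (a sketch of ours, with the common factor $\frac{\sigma_{d-1}L^{d-1}}{4\pi G_N\epsilon^{d-1}}$ stripped off):

```python
from fractions import Fraction as F

def total_coeff(d):
    # coefficient of sigma_{d-1} L^{d-1} / (4 pi G_N eps^{d-1}) from each piece;
    # the joint term carries the log(L/eps) contribution and is tracked separately
    bulk = -F(d, d - 1)  # bulk action contribution
    ghy = F(d)           # Gibbons-Hawking-York contribution
    return bulk + ghy

# matches the quoted d(d-2)/(d-1) for all d >= 2 (vanishing for d = 2)
for d in range(2, 8):
    assert total_coeff(d) == F(d * (d - 2), d - 1)
```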
\section{Non-analyticity near the critical point} \label{non-analytic terms} This section shows that near the critical point, the volume and action have a piece non-analytic in the deformation coupling $\tau$, and we derive the general form of this piece. \\ The volume of interest is $V_{\Sigma}[f]$. To isolate the non-analytic piece, we use the variable $u = z/\xi$ and take $Z_0 \rightarrow \infty$ as in the previous section. \begin{align} V_{\Sigma}[f] = \frac{\sigma_{d-1} L^d}{\xi^{d-1}} \int_{z_0/\xi}^{\infty} du \frac{1}{u^d \sqrt{f(u)}} \end{align} where, as discussed later in sec.\ref{ambiguity}, we introduce a value for $z_0$ defined through \begin{equation} \epsilon = \xi \int_0^{z_0/\xi}\!\!\! \frac{du}{\sqrt{f(u)}} \label{cutoff} \end{equation} with $\epsilon$ a fixed UV cut-off. The function $f(u)$ is increasing with $u$ and $f(0)=1$. We start with some general considerations to show that the complexity from the volume prescription decreases as we move away from the critical point, {\it i.e.}\ $d V_\Sigma[f]/d\tau<0$ for $\tau>0$. Indeed \begin{equation} \frac{d V_\Sigma[f]}{d\xi} = \sigma_{d-1} L^d \left\{ -(d-1) \frac{1}{\xi^{d}}\int_{z_0/\xi}^{\infty} du \frac{1}{u^d \sqrt{f(u)}} + \frac{\epsilon}{z_0^d\xi} \right\} \label{derV} \end{equation} where we used, from eq. \ref{cutoff}, that \begin{equation} \frac{d}{d\xi} \left(\frac{z_0}{\xi}\right) = -\frac{\epsilon}{\xi^2}\sqrt{f(z_0/\xi)} \end{equation} Eq.
\ref{derV} can be rewritten as \begin{equation} \frac{d V_\Sigma[f]}{d\xi} = \sigma_{d-1} L^d \frac{(d-1)}{\xi^d} \int_{z_0/\xi}^\infty \frac{du}{u^d}\, \left(\frac{\epsilon}{z_0}- \frac{1}{\sqrt{f(u)}} \right) \end{equation} Now, due to the fact that $f(u)$ is increasing with $u$ we obtain from eq.\ref{cutoff} \begin{equation} \frac{\epsilon}{z_0} > \frac{1}{\sqrt{f(z_0/\xi)}} > \frac{1}{\sqrt{f(u)}}, \ \ \ \mbox{if} \ \ \ u>\frac{z_0}{\xi} \end{equation} implying that $ \frac{d V_\Sigma[f]}{d\xi}>0$, and, in view of $\xi\sim \tau^{-\nu}$ with $\nu>0$: \begin{equation} \frac{d V_\Sigma[f]}{d\tau} < 0 \end{equation} showing that the complexity from the volume prescription indeed has a peak at the transition $\tau=0$. To show that this peak has a non-analytic part we start by recalling that near the boundary $z \ll \xi$, \begin{align} f(z) = 1 + \sum_{m=2}^{\infty} c_m \left(\frac{z}{\xi}\right)^{m \alpha} + {\cal O}\left((z/\xi)^{2\Delta}\right) \label{powerseriesf} \end{align} where $\alpha = d- \Delta$ for a source deformation and $c_2 = 1$ \cite{Hung:2011ta,Liu_2013}. We consider only source-like deformations here but the calculation in this section can be generalized to the vev case as well. To see a power series solution for the scalar-gravity action of this form, see Appendix \ref{power series solution}. The second series of terms comes from the subleading scalar terms discussed in the previous section. Next, we introduce a small parameter $\delta \ll 1$ and use it to break up the integral. \begin{align} V_{\Sigma}[f] = \frac{\sigma_{d-1} L^d}{\xi^{d-1}} \left[ \int_{z_0/\xi}^{\delta} du \frac{1}{u^d \sqrt{f(u)}} + \int_{\delta}^{\infty} du \frac{1}{u^d \sqrt{f(u)}} \right] \end{align} Since $\delta$ is small, we can expand the function $f$ using the small $u$ expansion in the first integral.
\begin{align} f(u)^{-1/2} = 1 - \frac{1}{2} \sum_{m=2}^{\infty} \tilde{c}_m u^{m \alpha} + {\cal O}\left(u^{2\Delta}\right) \end{align} where $\tilde{c}_2=1$ and, since we are interested in separating the divergences, the last ${\cal O}(u^{2\Delta})$ term can be ignored: it gives only a finite contribution to the integral because $2\Delta>d$. After performing the first integral and replacing $\xi=\tau^{-\nu}$, with $\nu = 1/\alpha$ for a source deformation, we obtain \begin{align} V_{\Sigma}[f] = \sigma_{d-1} L^d \left[ \frac{1}{(d-1)z_0^{d-1}} + \frac{1}{2} \sum_{m=2}^{\infty} \tilde{c}_m \frac{z_0^{m\alpha - d + 1}}{m\alpha - d + 1} \tau^m + \ldots + \tau^{\nu(d-1)}v_0 \right] \label{C1a} \end{align} with $v_0$ given by \begin{align} v_0 =\lim_{\delta \rightarrow 0} \left( \int^{\infty}_{\delta} du \frac{1}{u^d \sqrt{f(u)}} - \frac{1}{(d-1)\delta^{d-1}} - \frac{1}{2}\sum_{m=2}^{\infty} \tilde{c}_m \frac{\delta^{m\alpha - d + 1}}{m\alpha - d + 1} + \ldots \right) \label{v0} \end{align} This limit is finite since we subtracted all the infinite pieces. The ellipsis ($\ldots$) in eq. $\ref{C1a}$ refers to other analytic terms similar to $\tau^{m}$ and in eq. $\ref{v0}$ to other divergent pieces in the limit $\delta \rightarrow 0$. The result \eqref{C1a} shows that the divergent part is analytic in $\tau$ but there is a non-analytic contribution proportional to $\tau^{\nu(d-1)}$ independent of the cut-off $z_0$. Notice that, if for some integer $m_0$ we have $m_0 \alpha = (d-1)$ then the complexity will have a (non-analytic) logarithmic term $\ln \xi$. On the other hand, we will have $\tau^{\nu(d-1)}=\tau^{m_0}$, and that contribution will be analytic but still independent of the cut-off. For some toy model calculations of $v_0$, see Appendix \ref{toy model}. A similar analysis gives the non-analytic piece $i_0$ for the CA prescription of the complexity \footnote{The action has a purely gravitational part considered here and a part coming from the scalar.
The sum of both should have a peak at $\tau=0$ as we show later in particular examples.}. This term comes from the bulk part of the action. Since the gravitational terms are always present in the bulk action, we restrict to these terms here and show that they contain a piece non-analytic in $\tau$. \begin{align} I_{bulk} &=\frac{1}{16\pi G_N} \int d^{D}x \sqrt{-g} \left( R - V(0) \right) \\ &= \frac{d L^{d-1}\sigma_{d-1}}{8\pi G_N} \int_{z_0}^{\infty} \frac{dz}{z^{d+1}\sqrt{f(z)}} \big( d-1 + z f'(z) - (d+1)f(z)\big) \int_{0}^{z} \frac{dy}{\sqrt{f(y)}} \nonumber \end{align} Changing variables to $u = z/\xi$ and $w = y/\xi$ gives \begin{align} I_{bulk} = \frac{dL^{d-1}\sigma_{d-1}}{8\pi G_N \xi^{d-1}} \int_{z_0/\xi}^{\infty} \frac{du}{u^{d+1}\sqrt{f(u)}}\big( d - 1 + u f'(u) - (d+1)f(u) \big) \int_{0}^{u}\frac{dw}{\sqrt{f(w)}} \end{align} Again introducing a scale $\delta \ll 1$ between $z_0/\xi$ and $\infty$ as before, \begin{align} I_{bulk} = - \frac{dL^{d-1}\sigma_{d-1}}{8\pi G_N} &\left( \frac{2}{(d-1)z_0^{d-1}} + \sum_{m=2}^{\infty} f_m \tau^m z_0^{m\alpha - d + 1} \right) + L^{d-1}\sigma_{d-1}\tau^{\nu(d-1)}i_0 \end{align} with $i_0$ given by \begin{align} i_0 = \frac{d}{8 \pi G_N}\lim_{\delta \rightarrow 0} &\left( \int_{\delta}^{\infty} \frac{du}{u^{d+1}\sqrt{f(u)}} \big( d - 1 + u f'(u) - (d+1)f(u) \big) \int_{0}^{u} \frac{dw}{\sqrt{f(w)}} \right. \\ \nonumber &\left. + \frac{2}{(d-1)\delta^{d-1}} + \sum_{m=2}^{\infty}f_m\delta^{m\alpha-d+1} \right) \end{align} Here, the coefficients $f_m$ can be obtained from the coefficients $c_m$, and the $I_{GHY}, I_{null}, I_{J}$ parts of the gravitational action do not give any contributions to $i_0$ since they are analytic. We show this in Appendix \ref{non-analytic boundary calculation}. \section{$\mathcal{N}=1$ Gapped Flow} \label{n=1 flow} The metric is known analytically for the flow of $\mathcal{N}=4$ SYM theory to a confining theory under a mass-like source deformation with dimension $\Delta = 3$ in the UV \cite{Girardello:1998pd}. The metric is asymptotically $AdS_{5}$ with $f(z)$ and $\Phi(z)$ given by \footnote{Note that $\xi$ differs here from the definition in previous sections by a constant factor of $\sqrt{2}$ for ease of notation.} \begin{align} &f(z) = (1+z^2/\xi^2)^2 \\ &\Phi(z) = \frac{\sqrt{3}}{2} \log{\frac{\sqrt{\xi^2+z^2}+z}{\sqrt{\xi^2+z^2}-z}} \end{align} The potential $V(\Phi)$ in the action for the scalar is \begin{align} V(\Phi) = -\frac{3}{2L^2} \left( 3 + 4 \cosh{(2\Phi/\sqrt{3})} + \cosh^2{(2\Phi/\sqrt{3})} \right) \end{align} The near-boundary asymptotics $z \ll \xi$ for the scalar and metric are consistent with a source deformation with $\nu = 1$. \begin{align} &f(z) = 1+\frac{2z^2}{\xi^2} +... \label{fPhi} \\ &\Phi(z) = \frac{\sqrt{3}z}{\xi} + ...
\end{align} Note that since $\nu (d-1) = 3$ takes a special value, i.e.\ an integer, $\tau^{\nu(d-1)}=\tau^3$ is analytic in $\tau$. However, this term is still distinguished because it is the only term in the series with an odd power of $\tau$. In that sense it is natural to use the variable $\tilde{\tau}=\tau^2$, in which case $\tau^3 = \tilde{\tau}^{3/2}$ would be the non-analytic term near $\tilde{\tau}=0$. \\ Of course, the $\mathcal{N} = 1$ geometry just becomes the AdS geometry when $\xi \rightarrow \infty$ or when the scalar source is turned off. Therefore, we can expand the difference in maximal volumes or actions in powers of $z_0/\xi$. $V_{\Sigma}$ can be computed analytically and is given by \begin{align} \frac{V_{\Sigma}(\xi)}{\sigma_3L^4} &= \frac{1}{3z_0^3} - \frac{1}{\xi^2 z_0} + \frac{\pi}{2\xi^3} - \frac{1}{\xi^3} \tan^{-1}{\frac{z_0}{\xi}} . \end{align} Following up on the discussion above eq.\ref{fPhi}, the function on the right-hand side is even under $\xi\rightarrow -\xi$ except for the term $\frac{\pi}{2\xi^3}$ that leads to the $\tau^3$ term that we interpret as the universal non-analytic term in this case. For $z_0/\xi$ small, replacing $\xi=\tau^{-1}$ and writing explicitly the terms that do not vanish as $z_0\rightarrow 0$ we obtain \begin{align} \frac{V_{\Sigma}(\tau)}{\sigma_3L^4} &=\frac{1}{3z_0^3} - \frac{\tau^2}{z_0} + \frac{\pi}{2}\, \tau^3 + {\cal O}(z_0) \end{align} We find regulator-dependent leading terms, a regulator-independent sub-leading term in $V_{\Sigma}$ that contains the only odd power of $\tau$, and then terms that go to 0 as $z_0 \rightarrow 0$ and that only contain even powers of $\tau$. \\ For the WDW action in the $\mathcal{N}=1$ flow case, the potential evaluated on the solution of the scalar is \begin{align} V = -\frac{6}{L^2} \left( 2+3u^2+u^4 \right) \end{align} with $u = z/\xi$.
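This on-shell form follows directly from the solution above: writing $A_\pm = \sqrt{\xi^2+z^2}\pm z$, one has $A_+A_- = \xi^2$ and $A_+^2+A_-^2 = 2\xi^2(1+2u^2)$, so \begin{align} \cosh{(2\Phi/\sqrt{3})} = \frac{A_+^2+A_-^2}{2A_+A_-} = 1+2u^2 , \qquad V = -\frac{3}{2L^2}\left( 3 + 4(1+2u^2) + (1+2u^2)^2 \right) = -\frac{6}{L^2}\left( 2+3u^2+u^4 \right) \end{align}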
As in the vacuum AdS case, we have to regulate the geometry with a small-$z$ cutoff $z_0$ and a large-$z$ cutoff at $Z_0$ to regulate the singularity at $z \rightarrow \infty$. To label the WDW region, it is useful to define light-cone coordinates $v_1$ and $v_2$ using \begin{align} dv_1 &= dt-dz/\sqrt{f} \label{dv_1} \\ dv_2 &= -(dt+dz/\sqrt{f} ) \label{dv_2} \end{align} Integrating, one has $v_1 = t - \xi \tan^{-1} z/\xi$ and $v_2 = - (t + \xi \tan^{-1} z/\xi)$. Then the region enclosed by $v_1=0$ and $v_2=0$ in the regulated geometry is the WDW patch. These coordinates are also useful as eqs. $\ref{dv_1}$, $\ref{dv_2}$ give null normals $k_1 = k_{1\mu}dx^{\mu} = dv_1$ and $k_2 = k_{2\mu}dx^{\mu} = dv_2$ that satisfy the geodesic equation with $\kappa = 0$ and point outwards. In terms of the familiar Poincar\'e coordinates, the WDW patch is bounded between the rays $t=\pm \xi \arctan{z/\xi}$ on the $t-z$ plane. For this spacetime, \begin{align} I_{bulk} = \frac{\sigma_3L^3}{8\pi G_N\xi^3} \int_{z_0/\xi}^{Z_0/\xi} du \frac{\tan^{-1}{u}}{u^5}\left( \frac{u^2}{2} - 8 \right) \end{align} We can again take $Z_0 \rightarrow \infty$ in this integral without encountering any difficulties. The integral gives \begin{align} I_{bulk} = \frac{\sigma_3L^3}{8\pi G_N \xi^3} \left( -\frac{2}{3u_0^3} - \frac{2}{u_0^4}\left(1-\frac{u_0^2}{8}-\frac{9u_0^4}{8}\right)\tan^{-1}u_0 + \frac{9}{4u_0}-\frac{9\pi}{8} \right) \end{align} with $u_0 = z_0/\xi$. The terms in the bracket can be expanded for small $u_0$ to get the leading behavior in the deformation away from the critical point, \begin{align} I_{bulk}(\tau) = \frac{\sigma_3L^3}{8 \pi G_N} \left( -\frac{8}{3z_0^3} + \frac{19}{6 z_0}\, \tau^2 - \frac{9\pi}{8}\, \tau^3 + {\cal O}(z_0) \right) \nonumber \end{align} The first term in the expression is the only one that survives when $\tau \rightarrow 0$ and is the familiar AdS contribution. The sub-leading universal piece again goes like $\tau^3$, the only odd power of $\tau$ in the expansion.
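The $u$-integral above can be checked against the explicit antiderivative \begin{align} \int du\, \frac{\tan^{-1}{u}}{u^5}\left( \frac{u^2}{2} - 8 \right) = \tan^{-1}u \left( \frac{2}{u^4} - \frac{1}{4u^2} - \frac{9}{4} \right) + \frac{2}{3u^3} - \frac{9}{4u} \end{align} which tends to $-9\pi/8$ as $u \rightarrow \infty$; expanding the lower-limit contribution with $\tan^{-1}u_0 = u_0 - u_0^3/3 + {\cal O}(u_0^5)$ reproduces the small-$u_0$ behavior quoted above.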
The coefficients of this term are different for the two different prescriptions, which we called $v_0$ and $i_0$ in Sec.\ \ref{non-analytic terms}.\\ We next compute the GHY term for the timelike surfaces at $z=z_0$ and $z=Z_0$. These give \begin{align} I_{GHY} = \frac{L^3\sigma_3}{\pi G_N}\frac{\xi \tan^{-1}{z_0/\xi}}{z_0^4}\left( 1+z_0^2/\xi^2 \right) \end{align} as the $z=Z_0$ surface gives zero as we take $Z_0$ to the Poincar\'e horizon. For small $u_0$ \begin{align} I_{GHY}(\tau) = \frac{L^3\sigma_3}{\pi G_N} \left( \frac{1}{z_0^3} + \frac{2}{3 z_0}\, \tau^2 + {\cal O}(z_0) \right) \end{align} Here we find, as expected, that the GHY piece does not contribute to the universal piece in the WDW action. Next, we compute the contributions from the joints in the WDW region. As in the AdS case, there are four joints, each of them formed by null-timelike intersections. The two joints connected to the $z=Z_0$ surface do not contribute, while those at $z=z_0$ give \begin{align} I_{joints} = -\frac{\sigma_3L^3}{4 \pi G_N z_0^3}\log{z_0/L} \end{align} This quantity also does not contribute to the universal term in the action. Adding up all the action contributions, we get \begin{align} I(\tau) = \frac{\sigma_3 L^3}{8 \pi G_N} \left( \frac{1}{3z_0^3}(16+6\log{L/z_0}) + \frac{17}{2z_0}\, \tau^2 - \frac{9\pi}{8}\, \tau^3 + {\cal O}(z_0) \right) \end{align} Here again, the total action is positive as in the AdS case. Thus, the coefficients $v_0$ and $i_0$ for this flow are \begin{align} v_0 &= \frac{\pi}{2} \\ i_0 &= -\frac{9}{64 G_N} \end{align} \section{Holographic Complexity and Ambiguities} \label{ambiguity} In general, the holographic complexity defined using either the CV or the CA prescriptions is an ambiguous quantity even after choosing one prescription (see {\it e.g.}\ \cite{Chapman:2016hwi}). However, in this section, by considering the various types of ambiguities we show that the non-analytic term calculated in the previous sections is free from such ambiguities and therefore continues to be meaningful. Still, there is a difference in the coefficient of the universal term as computed by the two prescriptions, and how that matches different definitions of complexity in the field theory remains to be seen. \\ The quantities calculated in the previous sections, $V_{\Sigma}$ and $I$, have been interpreted as representing the complexity of the quantum state dual to the geometry according to \begin{align} C_V &= \frac{V_{\Sigma}}{G_N l} \end{align} for CV and \begin{align} C_A &= \frac{I}{\pi \hbar} \end{align} for the CA conjecture \footnote{The reference state with respect to which these quantities should be considered as complexities and the length scale $l$ in the CV prescription are both ambiguous. We take $l=L$ in this paper.}. \\ In this way, the previous sections allow us to compute $C_V$ and $C_A$ for the ${\cal N}=1$ flow geometry. We see that both $C_V$ and $C_A$ for this geometry start with the AdS contributions and then have corrections coming from the deformation $\tau$.
So we define the subtracted quantities $\delta V_{\Sigma}$ and $\delta I$ for the $\mathcal{N} = 1$ geometry by looking at the difference \begin{align} \delta V_{\Sigma} &= V_{\Sigma} (AdS_{5}) - V_{\Sigma} (\mathcal{N} = 1) \\ \delta I &= I(AdS_5) - I({\cal N} = 1) \end{align} These are the analogs of the complexity of formation for black hole spacetimes where the deformation is in the temperature. In earlier holographic calculations, this quantity was found to be free of several ambiguities that may arise in the calculation of the volume or the action \cite{Chapman:2016hwi}. In this section, we are comparing the action and volumes in two different asymptotically AdS geometries. We use a systematic way of applying the cutoff in the Fefferman-Graham coordinates $y = \epsilon$ where the metric is of the form \begin{align} ds^2 = \frac{L^2}{y^2}(dy^2 + g_{\mu\nu}(x,y)dx^{\mu}dx^{\nu}) \end{align} The quantity $z_0(\epsilon)$ then has an expansion for general $f(z)$ of the form \ref{powerseriesf} \begin{align} z_0 (\epsilon) = \epsilon \big( 1 + \sum_{m=2}^{\infty} \frac{d_m }{2(m\alpha+1)}\big(\frac{\epsilon}{\xi}\big)^{m\alpha} + {\cal O}\left((\epsilon/\xi)^{2\Delta}\right) \big) \end{align} with $d_2 = 1$. We find that $z_0$ is analytic in $\tau$ up to the ${\cal O}\left((\epsilon/\xi)^{2\Delta}\right)$ terms in the above expansion. 
The small-$z$ cutoff in the case of the ${\cal N}=1$ flow is \begin{align} z_0(\epsilon) = \frac{\epsilon}{\sqrt{1-\tau^2 \epsilon^2}} \end{align} The pure AdS cutoff and the $\mathcal{N} = 1$ flow cutoff differ by \begin{align} z_0(\epsilon) - \epsilon = \frac{\tau^2 \epsilon^3}{2} + O(\epsilon^{5}) \end{align} Using this result, we write down the expressions for $\delta C_V$ and $\delta C_A$ \begin{align} \delta C_V = \frac{\sigma_3 L^3}{G_N} \left( \frac{3 \tau^2}{2 \epsilon} - \frac{\pi \tau^3}{2} \right) \end{align} and \begin{align} \delta C_A = \frac{\sigma_3L^3}{16\pi^2G_N \hbar} \left( \frac{\tau^2}{\epsilon}(1+ 6\log{L/\epsilon}) + \frac{9\pi \tau^3}{4} \right) \end{align} In both prescriptions, we find that $\delta C > 0$ for $\tau > 0$, ${\it i.e.}$ the complexity is largest at the critical point and decreases as we move away. We showed that this holds generally in sec. \ref{non-analytic terms} for the volume case, but here we find that this is also true for the action prescription in this particular example. \subsection{Choice of cutoff} We find that the subtracted quantity $\delta C$ defined using either the CV or the CA proposal is still dependent on the choice of the cutoff $\epsilon$. This is the first ambiguity that we encounter. \subsection{Null-null joint at the cutoff surfaces} Instead of regulating the patch as we did in the previous section, one could regulate the patch in such a way that the null normals meet at the $z = z_0$ surface as in Fig \ref{null-null joint fig}. Then instead of two null-timelike joints, we have one null-null joint. This procedure also shifts the null boundaries infinitesimally so that they are now given by \begin{align} t = \xi \tan^{-1} z/\xi - \xi \tan^{-1} z_0/\xi \\ t = \xi \tan^{-1} z_0/\xi - \xi \tan^{-1} z/\xi \end{align} for the $\mathcal{N} = 1$ case and \begin{align} t = z -\epsilon \\ t = \epsilon - z \end{align} for the AdS case. Due to this infinitesimal shift, $I_{bulk}$ changes.
With this choice, the new $I_{bulk}$ expressions are \begin{align} I_{bulk} (AdS_5) &= - \frac{\sigma_3 L^3}{12 \pi G_N \epsilon^3} \\ I_{bulk}(\mathcal{N} = 1) &= I_{bulk} (AdS_5) + \frac{\sigma_3 L^3}{8 \pi G_N} \left( \frac{13}{4\epsilon}\, \tau^2 - \frac{9 \pi}{8}\, \tau^3 \right) \end{align} up to terms that vanish when $\epsilon \rightarrow 0$. In this new scheme, the Gibbons-Hawking-York terms are 0 for both geometries because the timelike surface at the cutoff is no longer present. The null piece can also be taken to be zero in both cases by choosing $\kappa = 0$. The character of the joint terms changes as we now have a null-null joint. For this joint, $a =- \ensuremath{\mathrm{sign}}(k.\tilde{k})\ensuremath{\mathrm{sign}}(\hat{k}.\tilde{k}) \log{(k.\tilde{k}/2)}$ where $\hat{k}$ is in the tangent space of the boundary region which has the normal $k$ $(\hat{k}.k = 0)$, null and pointing outwards and away from the joint while $\tilde{k}$ is the normal of the other null region. With this prescription, the AdS joint term is \begin{align} I_{J} (AdS_5) = -\frac{\sigma_3 L^3}{4 \pi G_N \epsilon^3} \log{\epsilon /L} \end{align} and the $\mathcal{N} = 1$ term is \begin{align} I_{J} (\mathcal{N} = 1) = -\frac{\sigma_3 L^3}{4 \pi G_N z_0^3} \log{z_0 /L} \end{align} which is the same as the joint term in the case with joints as in Fig. \ref{timelike-null joint fig}. Thus, the difference $\delta I_{J}$ is the same. However, $\delta I_{bulk}$ and $\delta I_{GHY}$ are different. The quantity $\delta C_A$ still continues to be positive and is given by \begin{align} \delta C_A = \frac{3\sigma_3 L^3}{16 \pi^2 G_N \hbar} \left( \frac{\tau^2}{\epsilon}(2\log{L/\epsilon} - 3/2) + \frac{3\pi}{4}\, \tau^3 + {\cal O}(\epsilon) \right) \end{align} \subsection{Null-normal normalization and parametrization} Another ambiguity that can be kept track of is in the normalization of the null normals. We introduce $c > 0$ so that $k_1.\hat{t} = c$ instead of 1. 
In this case, \begin{align} I_{J} (AdS_5) &= -\frac{\sigma_3 L^3}{4 \pi G_N \epsilon^3} \log{(c \epsilon/L)} \\ I_{J} (\mathcal{N} = 1) &= -\frac{\sigma_3 L^3}{4 \pi G_N z_0(\epsilon)^3} \log{(c z_0(\epsilon)/L)} \\ \delta I_{J} &= \frac{\sigma_3 L^3 \tau^2}{8 \pi G_N \epsilon} \left( 1 + 3\log{L/c \epsilon} \right) \end{align} We find that $\delta I$ and thus $\delta C_A$ has some $c$ dependence in the leading term. \\ Next, we consider a different parametrization $\tilde{\lambda} (\lambda)$ from the previously chosen affine case $\lambda$ for the null surfaces. This induces new normals $\tilde{k_1}$ and $\tilde{k_2}$ and hence we have to recalculate both the null piece and the joint piece in the action. For simplicity, we consider whether $\delta I$ changes when $\kappa$ changes from zero to a constant not equal to zero. A different $\delta C_A$ would suggest that the complexity has another ambiguity that comes from the parametrization chosen for the boundary regions and therefore the existing definition requires an additional choice of parametrization to be specified. An affine parametrization $\lambda$ satisfies \begin{align} d\lambda = -\frac{L^2}{z^2}\frac{dz}{\sqrt{f}} \label{affine} \end{align} This gives the null normals $k_1$ and $k_2$ discussed earlier that satisfy $\kappa = 0$. We want to make a change $\lambda \rightarrow \tilde{\lambda} (\lambda)$ such that $\tilde{\kappa}$ is a constant different from zero. We consider a change of parametrization of the form \begin{align} \frac{d\tilde{\lambda}}{d\lambda} = e^{-\beta} \end{align} Then, the new normals $\tilde{k}^A$ satisfy $\tilde{k}^A = e^{\beta} k^{A}$. Then $\tilde{\kappa} = \frac{d}{d\lambda} e^{\beta}$ or equivalently $\tilde{\kappa} = \frac{d}{d\tilde{\lambda}} \beta$ so that $e^{\beta} = 1 + \tilde{\kappa} \lambda$. The quantity in the integrand $\tilde{\kappa} d\tilde{\lambda} = d\beta = \frac{\tilde{\kappa}d\lambda}{1+\tilde{\kappa}\lambda}$ can be written as an integral over $z$ using eq. 
$\ref{affine}$. For the AdS case, \begin{align} e^{\beta} = 1 + \frac{\tilde{\kappa}L^2}{z} \end{align} while for the $\mathcal{N} = 1$ case, \begin{align} e^{\beta} = 1 + \frac{\tilde{\kappa}L^2}{z} + \frac{\tilde{\kappa}L^2}{\xi} \tan^{-1} z/\xi \end{align} The integrals for $I_{null}$ are \begin{align} I_{null}(AdS_5) &= \frac{\tilde{\kappa}\sigma_3 L^5}{4 \pi G_N} \int_{\epsilon}^{\infty} \frac{dz}{z^4(z+\tilde{\kappa}L^2)} \\ I_{null} (\mathcal{N} = 1) &= \frac{\tilde{\kappa}\sigma_3 L^5 \xi^2}{4 \pi G_N} \int_{z_0}^{\infty} \frac{dz}{z^5(z^2 + \xi^2)(1+\tilde{\kappa}L^2/z + (\tilde{\kappa}L^2/\xi )\tan^{-1} z/\xi)} \end{align} Using the notation $\tilde{\kappa} L^2 = k$, the AdS integral is given by \begin{align} I_{null}(AdS_5) = \frac{\sigma_3 L^3}{4 \pi G_N} \left( \frac{1}{3\epsilon^3} - \frac{1}{2k\epsilon^2} + \frac{1}{k^2 \epsilon} + \frac{1}{k^3} \log{\epsilon/k} \right) + {\cal O}(\epsilon) \end{align} Keeping terms up to $\tau^5$, the ${\cal N}=1$ integral gives \begin{align} I_{null}({\cal N} = 1) = \frac{\sigma_3 L^3}{4 \pi G_N} \left( A_0 + A_2 \tau^2 + A_4 \tau^4 + A_5 \tau^5 + {\cal O}(\tau^6) \right) \end{align} with \begin{align} A_0 &= \frac{1}{3 \epsilon^3} - \frac{1}{2 k \epsilon^2} + \frac{1}{k^2 \epsilon} + \frac{1}{k^3} \log{\epsilon/k} \\ A_2 &= - \frac{5}{2\epsilon} - \frac{1}{2k} - \frac{3}{k} \log{\epsilon/k} \\ A_4 &= \frac{k}{6} \left( -11 + 6 \log{\tau k} \right) \\ A_5 &= \frac{k^2}{12} \left( -22 + \pi + 6 \pi \log{2} \right) \end{align} where we again drop terms that go to $0$ as $\epsilon \rightarrow 0$.
The sign in the new $I_{J}$ is unchanged since $e^{\beta}$ is positive definite, so only the term in the $\log$ is modified. The modification is \begin{align} I_{J} (AdS_5) &= -\frac{\sigma_3 L^3}{4 \pi G_N \epsilon^3} \log{((\epsilon + k)/L)} \\ I_{J} (\mathcal{N} = 1) &= -\frac{\sigma_3 L^3}{4 \pi G_N z_0^3} \log{((z_0 + k + \tau z_0 k \tan^{-1}\tau z_0)/L)} \end{align} so that \begin{align} \delta I_{null} &= \frac{\sigma_3 L^3}{4\pi G_N} \left( A_2 \tau^2 + A_4 \tau^4 + A_5 \tau^5 + {\cal O}(\tau^6) \right) \\ \delta I_{J} &= \frac{3\sigma_3 L^3 \tau^2}{16 \pi G_N \epsilon} \left( 1+2\log{L/2\epsilon} \right) \end{align} \subsection{Functional redefinitions of the null surface} Finally, we have functional redefinitions of the null surface. We look at shifts in the quantity $a$ by constant $a_0$ at all the joints.
\begin{align} &I_{J} (AdS_5) = \frac{\sigma_3 L^3}{4 \pi G_N \epsilon^3}(a_0 - \log{\epsilon/L } ) \\ &I_{J} (\mathcal{N} = 1) = \frac{\sigma_3 L^3}{4 \pi G_N z_0^3} (a_0 - \log{z_0/L}) \\ &\delta I_{J} = \frac{\sigma_3 L^3 \tau^2}{8 \pi G_N \epsilon} (1+3(a_0 - \log{\epsilon/L})) \end{align} \subsection{Scalar boundary term} Following \cite{Bernamonti:2020bcf}, we consider the case when the scalar action is modified by an extra term \begin{align} I_{s}(\partial W_1) = \frac{1}{16\pi G_N} \int d^{d}x \sqrt{|h|} \frac{1}{2} \Phi s^{A} \partial_A \Phi \end{align} Here, the unit normal $s$ and the region $\partial W_1$ are defined in sec. \ref{bulk gravity}. We find that, just as in the case of boundary terms for the gravitational action, this term is analytic in $\tau$ for the $\mathcal{N} = 1$ case and does not affect the universal piece of the action complexity calculation. Since it is zero for the pure AdS case when $\tau =0$, the quantity $\delta I_{s} (\partial W_1)$ is given by \begin{align} \delta I_{s} (\partial W_1) = \frac{3 \sigma_3 L^3}{16 \pi G_N} \left( \frac{\tau^2}{\epsilon} + {\cal O}(\epsilon) \right) \end{align} \\ Thus we find that the term that goes like $\xi^{-3}$ or $\tau^3$ is independent of $\epsilon$ and therefore universal, ${\it i.e.}$ it does not change under the ambiguities discussed above, while the $\epsilon$-dependent terms are not universal. Analogous quantities to $C_V$ and $C_A$ can also be computed for situations when the temperature $T$ is the only deformation away from the critical point. $\delta V$ or $\delta I$ is then proportional to the complexity of formation for the thermal state, based on whether we use the CV or the CA proposal. For the black hole geometry dual to the thermal state, one finds that in both these cases, the quantity $\delta C(T) = C(T) - C(T=0)$ is independent of the regularization $\epsilon$.
Not only is the complexity of formation free from the ambiguity of being regulator-dependent, but it is also free from the other ambiguities that we looked at above \cite{Chapman:2016hwi}. Therefore, these complexities of formation $\delta C$ appear to be meaningful in that case, but we find that this does not carry over to this RG flow example.\\ \subsection{Some comments on ambiguity in the field-theoretic complexity} We thus see that any field-theoretic definition of complexity dual to the holographic one should accommodate such ambiguities in RG flow computations of complexity. Such a possibility may exist, as it can be shown that the quantity $\delta C$ can depend on the cutoff in existing definitions of complexity for discretized Gaussian field theories. As an example, consider $\kappa$-complexities for an $h$-dimensional spatial lattice \cite{jefferson2017circuit} \begin{equation} C_\kappa = \frac{1}{2^{\kappa-1}} \sum_p \left|\ln\left(\frac{\sqrt{p^2+m^2}}{\omega_0}\right)\right|^\kappa = \frac{1}{2^{\kappa-1}}\frac{V\Omega_{h-1}}{(2\pi)^{h}} \int_0^\Lambda p^{h-1} dp \left|\ln\left(\frac{\sqrt{p^2+m^2}}{\omega_0}\right)\right|^\kappa \end{equation} where $V$ is the volume of the system and $\Omega_{h-1}= \frac{2\pi^{h/2}}{\Gamma(h/2)}$ is the volume of the unit sphere $S^{h-1}$, with momentum cut-off $\Lambda=\sqrt{\omega_0^2-m^2}$ where the logarithm vanishes. For $\omega_0\gg m$ we can take $\omega_0\sim \frac{1}{a}$ where $a$ is the lattice spacing. In the quantum field theory, we take $\omega_0$ as a UV cut-off.
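As a numerical cross-check of the $h=2$, $\kappa=1$ case worked out below (a sketch of ours, not part of the derivation; function names are our own), the momentum integral can be compared directly against the closed form for $c_1^{h=2}$:

```python
# Numerical sanity check: for h = 2, kappa = 1, compare the momentum integral
# defining the complexity density c_1 with the closed form
# c_1^{h=2} = (1/8pi) (m^2 ln(m^2/w0^2) - m^2 + w0^2).
import math

def c1_h2_integral(m, w0, n=200_000):
    # c_1 density for h = 2: prefactor Omega_1/(2 pi)^2 = 1/(2 pi), with
    # integrand p * ln(w0 / sqrt(p^2 + m^2)) up to Lambda = sqrt(w0^2 - m^2),
    # where the logarithm vanishes.
    lam = math.sqrt(w0 * w0 - m * m)
    dp = lam / n
    total = 0.0
    for i in range(n):
        p = (i + 0.5) * dp  # midpoint rule
        total += p * math.log(w0 / math.sqrt(p * p + m * m))
    return total * dp / (2 * math.pi)

def c1_h2_closed(m, w0):
    return (m * m * math.log(m * m / (w0 * w0)) - m * m + w0 * w0) / (8 * math.pi)

m, w0 = 0.3, 5.0
assert abs(c1_h2_integral(m, w0) - c1_h2_closed(m, w0)) < 1e-6
```

The agreement also confirms that only the region $p < \Lambda$, where the logarithm is negative, contributes with the absolute value removed.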
We get \begin{equation} c_\kappa = \frac{C_\kappa}{V} = \frac{1}{2^{\kappa-1}}\frac{\Omega_{h-1}}{(2\pi)^{h}} \int_0^{\sqrt{\omega_0^2-m^2}} p^{h-1} dp \left|\ln\left(\frac{\sqrt{p^2+m^2}}{\omega_0}\right)\right|^\kappa \end{equation} A direct computation of $c_1$ gives \begin{eqnarray} c_1 = \frac{\Omega_{h-1}}{(2\pi)^h m^2}\frac{(\omega_0^2-m^2)^{1+h/2}}{h(h+2)} {}_2 F_{1}[1,1+h/2,2+h/2,1-\frac{\omega_0^2}{m^2}] \end{eqnarray} For $h=2$: \begin{eqnarray} c^{h=2}_1 &=& \frac{1}{8\pi} \left\{m^2\ln\frac{m^2}{\omega_0^2}-m^2+\omega_0^2\right\} \\ c^{h=2}_2 &=& \frac{1}{16\pi} \left\{-\frac{m^2}{2}\ln^2\frac{m^2}{\omega_0^2} + m^2\ln\frac{m^2}{\omega_0^2}-m^2+\omega_0^2\right\} \end{eqnarray} This shows that the quantity $\delta c = c(m=0) - c(m)$ still depends on the UV cutoff. For example, \begin{equation} \delta c_1^{h=2} = \frac{m^2}{4\pi} \log{\omega_0/m} + \frac{m^2}{8\pi} \end{equation} Thus, here again in the context of a free theory near a Gaussian fixed point $(m = 0)$, we find that $\delta c$ exhibits regulator dependence. How the other ambiguities should be accommodated or interpreted on the field theory side is still unclear. However, once the cost function (or $\kappa$) and the reference state are fixed, the term $\tau^{\nu h}$ is again independent of the cutoff and therefore universal in that sense. \commentout{In this way, the previous sections allow us to compute $C_V$ and $C_A$ for the ${\cal N}=1$ flow geometry. 
\begin{align} C_V = \frac{\sigma_3 L^3}{G_N} \left( \frac{1}{3\epsilon^3} - \frac{3}{2\xi^2 \epsilon} + \frac{\pi}{2\xi^3} \right) \end{align} and \begin{align} C_A = \frac{\sigma_3L^3}{8\pi^2G_N \hbar} \left( \frac{16}{3\epsilon^3} + \frac{2}{\epsilon^3}\log{L/\epsilon} - \frac{1}{2\epsilon \xi^2}(1+ 6\log{L/\epsilon}) - \frac{9\pi}{8\xi^3} \right) \end{align} with a cutoff $\epsilon$ in the corresponding Fefferman-Graham coordinates.\\} \section{Conclusions} \label{conclusions} In a previous study, we showed that field-theoretic complexity exhibits non-analytic behavior, up to logarithmic terms, in the vicinity of critical points. We demonstrated this using explicit lattice calculations as well as general scaling arguments. This term scales like $\tau^{\nu(d-1)}$, where $\nu$ is the standard critical exponent governing the correlation length $\xi$ as a function of the reduced coupling $\tau$: $\xi\sim \tau^{-\nu}$. In this work, we show, both for a generic renormalization group flow geometry and in a specific known example, that the same is true when one studies holographic complexity using the volume and action prescriptions. We also find that even though holographic complexity, like field-theoretic circuit complexity, has various ambiguities, the non-analytic term is free from such ambiguities. \section{Acknowledgements} We are very grateful to Chen-Lung Hung, Sergei Khlebnikov, Nima Lashkari and Qi Zhou for comments and discussions. In addition, we are very grateful to the DOE, which supported this work in part through grant DE-SC0007884 and the DOE QuantISED program of the theory consortium ``Intersections of QIS and Theoretical Particle Physics'' at Fermilab, as well as to the Keck Foundation, which also provided partial support for this work.
\section{Introduction} Heegaard Floer homology is a package of invariants associated to 3- and 4-manifolds introduced by Ozsv\'{a}th and Szab\'{o} \cite{OSDisks} \cite{OSTriangles}. To a connected, closed, oriented 3-manifold $Y$ with a $\Spin^c$ structure $\frs$, they construct $\Z[U]$-modules \[\HF^-(Y,\frs), \qquad \HF^\infty(Y,\frs) \qquad \text{and} \qquad \HF^+(Y,\frs),\] as well as a $\Z$-module $\hat{\HF}(Y,\frs).$ If $Y_1$ and $Y_2$ are 3-manifolds and $W$ is a cobordism from $Y_1$ to $Y_2$ which is equipped with a $\Spin^c$ structure $\frt$, they construct a cobordism map \[F_{W,\frt}^\circ:\Lambda^*(H_1(W;\Z)/\Tors)\otimes_{\Z} \HF^\circ(Y_1,\frt|_{Y_1})\to \HF^\circ(Y_2,\frt|_{Y_2}),\] for $\circ\in \{+,-,\infty,\wedge\}$. If $X$ is a smooth, closed, oriented 4-manifold with a $\Spin^c$ structure $\frt$, by combining the information contained in the cobordism maps associated to the $+$ and $-$ flavors, Ozsv\'{a}th and Szab\'{o} define a mixed invariant \[\Phi_{X,\frt}: \Lambda^*(H_1(X;\Z)/\Tors) \otimes_{\Z}\Z[U]\to \Z/{\pm 1}.\] Throughout this paper, we will work over the ground field $\bF_2:=\Z/2\Z$, instead of $\Z$. \subsection{Heegaard Floer mixed invariants of mapping tori} If $V=V_0\oplus V_1$ is a finite dimensional, relatively graded vector space over a field $k$, and $F:V\to V$ is a map which preserves the relative grading, the Lefschetz number of $F$ is defined as \[\Lef(F:V\to V):=\tr(F|_{V_0})-\tr(F|_{V_1})\in k.\] Whenever $\frs$ is a $\Spin^c$ structure on $Y^3$ with $c_1(\frs)$ non-torsion, the group $\HF^+(Y,\frs)$ is a finitely generated vector space over $\bF_2$. In this paper, we prove the following result about 4-manifolds which admit a non-separating cut: \begin{thm}\label{thm:mixedinvariantmappingtorus}Suppose that $X^4$ is a closed, oriented 4-manifold with $b_2^+(X)> 1$ and $Y^3\subset X$ is a closed, oriented, non-separating 3-dimensional submanifold. Write $W$ for the cobordism obtained by cutting $X$ along $Y$. 
Suppose $\frs\in \Spin^c(W)$ is a $\Spin^c$ structure whose restrictions to both copies of $Y$ in $\d W$ agree and are non-torsion, and $\xi\in \Lambda^*(H_1(W;\Z)/\Tors)\otimes \bF_2[U]$. Then the mixed invariants of $X$ satisfy \[\Lef\big(F^+_{W,\frs}(\xi\otimes -):\HF^+(Y,\frs|_{Y})\to \HF^+(Y,\frs|_{Y})\big)=\sum_{\substack{\frt\in \Spin^c(X)\\ \frt|_W=\frs}}\Phi_{X,\frt}(\xi).\] \end{thm} A motivating example of a 4-manifold which admits a non-separating cut is a mapping torus. If $Y$ is a closed, oriented 3-manifold and $\phi:Y\to Y$ is an orientation preserving diffeomorphism, the mapping torus $X_\phi$ of the pair $(Y,\phi)$ is defined as \[X_\phi:=\frac{Y\times [0,1]}{(y,1)\sim (\phi(y),0)}.\] By specializing Theorem \ref{thm:mixedinvariantmappingtorus}, we obtain the following: \begin{cor}\label{cor:mixedinvariantofactualmappingtori}Suppose $Y^3$ is a closed, oriented 3-manifold and $\phi:Y\to Y$ is an orientation preserving diffeomorphism such that the mapping torus $X_\phi$ has $b_2^+(X_\phi)>1$. If $\frs\in \Spin^c(Y)$ is non-torsion and $\phi_*(\frs)=\frs$, then the mixed invariants of $X_\phi$ satisfy \[\Lef\big(\phi_*:\HF^+(Y,\frs)\to \HF^+(Y,\frs)\big)=\sum_{\substack{\frt\in \Spin^c(X_\phi)\\ \frt|_Y=\frs}}\Phi_{X_\phi,\frt}(1).\] \end{cor} The simplest example of a mapping torus is the manifold $Y\times S^1$. For this manifold, we can say a bit more about the mixed invariants. The projection map $\pi:Y\times S^1\to Y$ induces a map \[\pi^*:\Spin^c(Y)\to \Spin^c(Y\times S^1).\] We say that a $\Spin^c$ structure on $Y\times S^1$ is \textit{$S^1$-invariant} if it is in the image of $\pi^*$. 
As with the analogous situation in Seiberg-Witten theory \cite{BaldridgeSWCircleActions}, the adjunction inequality gives the following: \begin{prop}\label{prop:invariantsofYxS1}If $Y^3$ has $b_1(Y)>1$ and $\frs\in \Spin^c(Y)$ is non-torsion, then \[\Phi_{Y\times S^1,\pi^*(\frs)}(1)=\chi(\HF^+(Y,\frs)).\] Furthermore, if $\frt\in \Spin^c(Y\times S^1)$ is not $S^1$-invariant, then \[\Phi_{Y\times S^1,\frt}(1)=0.\] \end{prop} \begin{rem}Theorem~\ref{thm:mixedinvariantmappingtorus} and the subsequent corollaries fail if $\frs|_Y$ is torsion. For example, consider $\bT^4\iso \bT^3\times S^1$. Since $\bT^4$ is symplectic, it follows that $\Phi_{\bT^4,\frt}(1)=1$ if $c_1(\frt)=0$ and $\Phi_{\bT^4,\frt}(1)=0$ otherwise \cite{OSSymplectic}*{Theorem~1.1}. On the other hand, $\HF^+(\bT^3,\frs)$ is not finitely generated over $\bF_2$ for the torsion $\Spin^c$ structure on $\bT^3$, and $\HF^+_{\red}(\bT^3,\frs)$ vanishes for all $\Spin^c$ structures on $\bT^3$. By extending the techniques of this paper using twisted coefficients, one can prove a refinement of Theorem~\ref{thm:mixedinvariantmappingtorus} which applies to torsion $\Spin^c$ structures on $Y$ and allows one to separate out the mixed invariants of different $\Spin^c$ structures on $X$ which restrict to $\frs$ on $W$. We will not consider twisted coefficients in this paper, to simplify the notation. \end{rem} \subsection{Trace and cotrace cobordisms and the graph TQFT} The main technical input required for Theorem~\ref{thm:mixedinvariantmappingtorus} is a result about the graph TQFT for Heegaard Floer homology, which is constructed in \cite{ZemGraphTQFT}. The graph TQFT provides a construction of cobordism maps between disconnected 3-manifolds (a situation where the maps from \cite{OSTriangles} are not defined). Constructing such maps comes at the expense of incorporating basepoints more explicitly into the TQFT structure. 
If $(Y,\ws)$ consists of a 3-manifold $Y$ (possibly disconnected, or empty) together with a collection of basepoints $\ws\subset Y$, there are $\bF_2[U]$-modules \[\HF^-(Y,\ws,\frs), \qquad \HF^\infty(Y,\ws,\frs) \qquad \text{and} \qquad \HF^+(Y,\ws,\frs),\] as well as a vector space $\hat{\HF}(Y,\ws,\frs)$ over $\bF_2$. The construction of multi-pointed Heegaard Floer complexes for connected 3-manifolds is due to Ozsv\'{a}th and Szab\'{o} \cite{OSLinks}. Implicitly, the homology groups discussed earlier were for connected 3-manifolds with a single basepoint. We use the following notion of cobordism between multi-pointed 3-manifolds (from \cite{ZemGraphTQFT}): \begin{define} We say a pair $(W,\Gamma)$ is a \textbf{ribbon graph cobordism} from $(Y_1,\ws_1)$ to $(Y_2,\ws_2)$ if the following hold: \begin{enumerate} \item $W$ is a cobordism from $Y_1$ to $Y_2$ and $\Gamma\subset W$ is an embedded graph. \item $\Gamma\cap \d W=\ws_1\cup \ws_2$. \item Each basepoint in $\ws_1\cup \ws_2$ has valence 1 in $\Gamma$, and $\Gamma$ has no valence 0 vertices. \item Each vertex of $\Gamma$ is decorated with a cyclic ordering of the edges adjacent to it. \end{enumerate} \end{define} In \cite{ZemGraphTQFT}, the author describes functorial cobordism maps associated to ribbon graph cobordisms. Some background on the construction is provided in Section~\ref{sec:graphTQFT}. If $(W,\Gamma):(Y_1,w_1)\to (Y_2,w_2)$ is a graph cobordism between two singly pointed, connected 3-manifolds, and $\Gamma$ is a path from $w_1$ to $w_2$, then the graph cobordism map for $(W,\Gamma)$ agrees with the map defined by Ozsv\'{a}th and Szab\'{o} in \cite{OSTriangles}. The dependence on the path $\Gamma$ is also explored in \cite{ZemGraphTQFT}. According to \cite{ZemGraphTQFT}*{Theorem~G}, the cobordism maps from \cite{OSTriangles} are independent of the choice of path on the $+,-$ and $\infty$ flavors, but not always on the $\wedge$ flavor. 
In \cite{OSProperties}, Ozsv\'{a}th and Szab\'{o} describe the effect of orientation reversal on the Heegaard Floer complexes. Writing $-Y$ for $Y$ with its opposite orientation, there is a natural chain isomorphism \begin{equation}\CF^-(-Y,\ws,\bar{\frs})\iso \CF^-(Y,\ws,\frs)^\vee:= \Hom_{\bF_2[U]}(\CF^-(Y,\ws,\frs),\bF_2[U]),\label{eq:multipointeddualityiso}\end{equation} which is defined on the level of Heegaard diagrams (see \cite{OSProperties}*{Proposition~2.5}). Phrased another way, there is a natural pairing between $\CF^-(Y,\ws,\frs)$ and $\CF^-(-Y,\ws,\bar{\frs})$, which we call the trace pairing. Following an influential paper of Witten \cite{WittenTQFT}, Atiyah describes an axiomatic framework for TQFTs \cite{AtiyahTQFT}, which features a duality axiom concerning orientation reversal. According to Atiyah's duality axiom, one should expect the duality isomorphism from Equation \eqref{eq:multipointeddualityiso} to have an interpretation in terms of cobordism maps. If $(Y,\ws)$ is a multi-pointed 3-manifold, the 4-manifold with embedded graph $(Y\times [0,1],\ws\times [0,1])$ can be viewed as a graph cobordism in three ways, depending on which ends we identify as incoming and outgoing. We suggestively call these the \textbf{identity cobordism}, the \textbf{trace cobordism}, and the \textbf{cotrace cobordism}. We illustrate the three configurations in Figure~\ref{fig::55}. The graph TQFT from \cite{ZemGraphTQFT} assigns maps to the trace and cotrace cobordisms, though it is not at all obvious that they are the trace and cotrace maps with respect to the duality isomorphism from Equation \eqref{eq:multipointeddualityiso}. In this paper we prove the following: \begin{figure}[ht!] 
\centering \input{fig55.pdf_tex} \caption{\textbf{The identity, trace and cotrace graph cobordisms.} All are equal to $(Y\times [0,1], \ws\times [0,1])$, but have different ends identified as incoming or outgoing.\label{fig::55}} \end{figure} \begin{thm}\label{thm:dualityv1}If $(Y,\ws)$ is a multi-pointed 3-manifold, the trace graph cobordism $(Y\times [0,1], \ws\times [0,1]): (Y\sqcup -Y,\ws\sqcup \ws)\to \varnothing$ induces the trace map \[\tr:\CF^-(Y,\ws,\frs)\otimes_{\bF_2[U]} \CF^-(-Y,\ws,\bar{\frs})\to \bF_2[U].\] Similarly, the cotrace graph cobordism $(Y\times [0,1],\ws\times [0,1]):\varnothing\to (Y\sqcup -Y,\ws\sqcup \ws)$ induces the cotrace map \[\cotr: \bF_2[U]\to \CF^-(Y,\ws,\frs)\otimes_{\bF_2[U]} \CF^-(-Y,\ws,\bar{\frs}).\] \end{thm} \begin{rem} If $C$ is a finitely generated, free module over a ring $\cR$, then there are canonical isomorphisms \[\Hom_{\cR}(C,C)\iso \Hom_{\cR}(C\otimes_{\cR} C^\vee, \cR)\iso \Hom_{\cR}(\cR,C^\vee\otimes_{\cR} C).\] Under the above isomorphisms, the identity map, the trace map and the cotrace map are all identified. Hence Theorem~\ref{thm:dualityv1} implies that the cobordism map for $(Y\times[0,1],\ws\times [0,1])$ is independent of which ends of $Y\times [0,1]$ are identified as incoming, and which ends are identified as outgoing. \end{rem} \begin{rem} The trace and cotrace formulas from Theorem~\ref{thm:dualityv1} follow from a more general computation of the graph cobordism map for a three ended 4-manifold obtained from a Heegaard triple. See Theorem~\ref{thm:triplesandgraphcobordismmaps} for more on this. \end{rem} We note that Ozsv\'{a}th and Szab\'{o} proved several similar, but slightly weaker results about duality in Heegaard Floer homology. Firstly, they proved the isomorphism in Equation \eqref{eq:multipointeddualityiso}. Secondly, they described the effect of turning around cobordisms. 
If $W:Y_1\to Y_2$ is a cobordism between two connected 3-manifolds (implicitly with a choice of basepoints in $Y_1$ and $Y_2$, and a path in $W$ connecting the basepoints) and $W^\vee:-Y_2\to -Y_1$ is the cobordism obtained by turning around $W$, they showed in \cite{OSTriangles}*{Theorem~3.5} that \begin{equation}F_{W^\vee,\frs}^-=(F_{W,\frs}^-)^{\vee}.\label{eq:OSturningaroundcob}\end{equation} It is a straightforward algebraic exercise to show that Equation~\eqref{eq:OSturningaroundcob} can be derived from Theorem~\ref{thm:dualityv1}. \subsection{Lefschetz numbers on torsion complexes over \texorpdfstring{$\bF_2[[U]]$}{F2[[U]]}} We now describe a simple algebraic result about computing the Lefschetz number of a map on a chain complex over $\bF_2[[U]]$. Although essentially elementary, this result is central to the proof of Theorem~\ref{thm:mixedinvariantmappingtorus}. For algebraic reasons, the formula is easiest to state if we work over the power series ring $\bF_2[[U]]$, instead of the polynomial ring $\bF_2[U]$. If $C$ is a relatively graded, finitely generated chain complex over a field $k$ and $F:C\to C$ is a chain map which preserves the relative grading, then \begin{equation}(\tr\circ(F\otimes \id)\circ \cotr)(1)=\Lef(F_*:H_*(C)\to H_*(C))\in k\label{eq:Eulercharacteristicoverfield}.\end{equation} Note that if $k$ does not have characteristic 2, the trace map in Equation \eqref{eq:Eulercharacteristicoverfield} must be replaced by a signed version of the trace map, sometimes called the supertrace. It is clear that if $C$ is instead a chain complex over $\bF_2[[U]]$, the composition on the left side of Equation~\eqref{eq:Eulercharacteristicoverfield} does not give the Lefschetz number of $F_*$ on $H_*(C)$ as a vector space over $\bF_2$. A key step in the proof of Theorem~\ref{thm:mixedinvariantmappingtorus} is to formulate the correct analog of Equation \eqref{eq:Eulercharacteristicoverfield} for chain complexes over the ring $\bF_2[[U]]$. 
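Equation \eqref{eq:Eulercharacteristicoverfield} can be verified directly on a small example. The sketch below is ours and purely illustrative, not a Floer-theoretic computation; the complex, the chain map $F$, and all names are chosen for convenience. Over $\bF_2$ signs are irrelevant, so $\tr\circ(F\otimes\id)\circ\cotr$ computes the chain-level trace of $F$, which by the Hopf trace formula agrees with the Lefschetz number on homology.

```python
# Toy check (ours) of tr∘(F⊗id)∘cotr = Lef(F_*) over F_2.
# Complex: C_1 = <x>, C_0 = <y, z>, with d(x) = y + z; the chain map F
# fixes x and swaps y <-> z.  Basis order: (x, y, z); entries mod 2.
import numpy as np

d = np.array([[0, 0, 0],   # d(x) = y + z, d(y) = d(z) = 0
              [1, 0, 0],
              [1, 0, 0]])
F = np.array([[1, 0, 0],   # F(x) = x, F(y) = z, F(z) = y
              [0, 0, 1],
              [0, 1, 0]])
assert np.array_equal((F @ d) % 2, (d @ F) % 2)   # F is a chain map

# cotr(1) = sum_i e_i ⊗ e_i^v, so tr((F⊗id)(cotr(1))) = sum_i e_i^v(F e_i),
# i.e. the trace of F mod 2 (no signs in characteristic 2).
lef_chain = int(np.trace(F)) % 2

# Homology by hand: H_1 = ker(d)|_{C_1} = 0, since d(x) != 0; H_0 is
# <y, z>/<y + z>, one-dimensional with [y] = [z], and F_*[y] = [z] = [y].
# Hence Lef(F_*) = 1, matching the chain-level computation.
assert lef_chain == 1
```

The same bookkeeping, with signed traces, recovers the usual Lefschetz number over fields of characteristic other than 2.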
Suppose $C$ is a free, finitely generated chain complex over $\bF_2[[U]]$. We can take the formal derivative, with respect to $U$, of the differential on $C$ to get a $+1$ graded chain map \[\Phi:C\to C.\] More explicitly, the map $\Phi$ is obtained by taking the differential $\d$ on $C$, writing it as a matrix in terms of a basis of $C$, and then differentiating each entry of the matrix. The map $\Phi$ is independent of the choice of basis, up to chain homotopy. We define the chain complexes \[C^-:=C, \qquad C^\infty:=C\otimes_{\bF_2[U]} \bF_2[U,U^{-1}],\qquad \text{and} \qquad C^+:=(C\otimes_{\bF_2[U]} \bF_2[U,U^{-1}])/ C,\] and write $H^-(C),H^\infty(C)$ and $H^+(C)$ for the respective homology groups. The short exact sequence \[0\to C^-\to C^\infty\to C^+\to 0\] induces a long exact sequence relating the homology groups $H^-(C),H^\infty(C)$ and $H^+(C)$. We let \[\delta:H^+(C)\to H^-(C)\] denote the connecting homomorphism. Since we are working over the power series ring $\bF_2[[U]]$, it is not hard to see that whenever $H^+(C)$ is finitely generated over $\bF_2$, the module $H^\infty(C)$ vanishes, implying that $\delta$ is an isomorphism. We can now state our formula for the Lefschetz number of an endomorphism of a chain complex over $\bF_2[[U]]$ with torsion homology groups: \begin{prop}\label{prop:algebraicmappingtorus} Suppose that $C$ is a free, finitely generated chain complex over $\bF_2[[U]]$ such that $H^+(C)$ is finite dimensional over $\bF_2$. If $F:C\to C$ is a chain map which preserves the relative grading and commutes with the action of $U$, then $\Lef\big(F_*:H^+(C)\to H^+(C)\big)$ is equal to the coefficient of $U^{-1}$ in the expression \[(\tr \circ \delta^{-1}\circ (F\otimes \Phi^\vee) \circ \cotr)(1).\] \end{prop} In the above proposition, $\Phi^\vee$ denotes the dual of the map $\Phi:C\to C$, and $\delta$ denotes the connecting homomorphism $\delta: H^+(C\otimes C^\vee)\to H^-(C\otimes C^\vee)$. 
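To illustrate Proposition~\ref{prop:algebraicmappingtorus} on a toy example of our own (not taken from the paper), let $C$ be free over $\bF_2[[U]]$ with generators $a$ in degree $1$ and $b$ in degree $0$, differential $\d a = U^k b$ for $k$ odd, and $F=\id$. Then $H^+(C)$ is spanned over $\bF_2$ by $U^{-1}a,\dots,U^{-k}a$, so $\Lef(F_*)=k \bmod 2 = 1$, and the formula recovers this:

```latex
% Toy example (ours): C free over F_2[[U]] with deg(a) = 1, deg(b) = 0,
% differential d(a) = U^k b for k odd, and F = id.
\begin{align*}
 \Phi(a) &= k\,U^{k-1}b = U^{k-1}b, &
 \cotr(1) &= a\otimes a^\vee + b\otimes b^\vee,\\
 (F\otimes \Phi^\vee)(\cotr(1)) &= b\otimes U^{k-1}a^\vee, &
 \delta^{-1}\big[b\otimes U^{k-1}a^\vee\big] &= \big[U^{-1}\, a\otimes a^\vee\big],
\end{align*}
% where the last step uses d(U^{-k} a \otimes U^{k-1} a^\vee) = b \otimes U^{k-1} a^\vee.
```

Here $\Phi^\vee(b^\vee)=U^{k-1}a^\vee$ and $\Phi^\vee(a^\vee)=0$. Applying $\tr$ to the last class gives $U^{-1}$, whose coefficient of $U^{-1}$ is $1 = k \bmod 2 = \Lef(F_*)$, as the proposition predicts.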
\begin{rem}The above algebraic result is stated over $\bF_2$, since the Heegaard Floer complexes we work with have coefficients in $\bF_2[U]$. Nonetheless, the formula can also be stated for complexes over $\Q[[U]]$; however, the maps $\tr$ and $\Phi^\vee$ must then be replaced by signed versions. See Remark~\ref{rem:signassignments}. \end{rem} In the context of Heegaard Floer complexes, the formal derivative map $\Phi$ appearing in Proposition~\ref{prop:algebraicmappingtorus} is the map induced by the ``broken path'' graph cobordism shown in Figure~\ref{fig::42} (see Lemma~\ref{lem:phi=brokenpathcobordism}, below). When dealing with the Heegaard Floer complexes, we will usually write $\Phi_w$ for the broken path graph cobordism map, where $w\in Y$ is the basepoint corresponding to the broken path. For a quick summary of how the expression from Proposition~\ref{prop:algebraicmappingtorus} appears when computing the mixed invariants from Theorem~\ref{thm:mixedinvariantmappingtorus}, see Figure~\ref{fig::4}. \begin{figure}[ht!] \centering \input{fig42.pdf_tex} \caption{\textbf{The ``broken path'' graph cobordism inducing the map $\Phi_w:\CF^-(Y,w,\frs)\to \CF^-(Y,w,\frs)$.} This graph cobordism induces the map which features algebraically in Proposition~\ref{prop:algebraicmappingtorus}. The underlying 4-manifold is $Y\times [0,1]$.\label{fig::42}} \end{figure} The map $\Phi_w$ and some analogs have been studied in other contexts. An early appearance was in \cite{SarkarMovingBasepoints}, where Sarkar proved a formula for a mapping class group action that involved an analogous map on link Floer homology. The map $\Phi_w$ appears in the formula for the $\pi_1(Y,w)$-action on $\CF^-(Y,w,\frs)$, proven by the author in \cite{ZemGraphTQFT}. Analogous maps appear in the link Floer TQFT \cite{JCob} \cite{ZemCFLTQFT}, where they have a similar graphical interpretation in terms of certain dividing sets on cylindrical link cobordisms. 
The link Floer homology analogs of $\Phi_w$ also feature in a connected sum formula for involutive knot Floer homology \cite{ZemKnotConnectedSums}. \subsection{Comparison to Seiberg-Witten theory} There are isomorphisms between the 3-dimensional Heegaard Floer invariants from \cite{OSDisks} and the 3-dimensional Seiberg-Witten Floer invariants appearing in \cite{KMMonopole}. This result has been established by Kutluhan-Lee-Taubes (\cite{KLTHF=HM1}, \cite{KLTHF=HM2}, \cite{KLTHF=HM3}, \cite{KLTHF=HM4}, \cite{KLTHF=HM5}) and independently by Colin-Ghiggini-Honda (\cite{CGHHF=ECH0}, \cite{CGHHF=ECH1}, \cite{CGHHF=ECH2}, \cite{CGHHF=ECH3}), the latter using \cite{TaubesECH=SW1}. A proof of the equivalence between the 4-dimensional theories is expected, though it has not yet appeared in the literature. Several results similar to Theorem~\ref{thm:mixedinvariantmappingtorus} have already been proven using Seiberg-Witten theory. An analog of Theorem~\ref{thm:mixedinvariantmappingtorus} was proven by Fr\o yshov using a version of Monopole Floer homology \cite{FroyshovMonopoleFloerHomology}*{Theorem~7}. Another example is Baldridge's computation of the Seiberg-Witten invariants of 4-manifolds with a free circle action \cite{BaldridgeSWCircleActions}. If $X$ is a 4-manifold which has the homology of $S^1\times S^3$, Mrowka, Ruberman, and Saveliev construct a Seiberg-Witten invariant $\lambda_{SW}(X)$ as a corrected count of irreducible monopoles \cite{MRSEndPeriodic} (note that our Theorem~\ref{thm:mixedinvariantmappingtorus} does not apply to such manifolds). The invariant has been further studied by Lin, Ruberman and Saveliev in \cite{LRShomologyS1S3}, and under the assumption that a generator of $H_3(X;\Z)$ can be realized by a rational homology sphere $Y\subset X$, it is shown in \cite{LRShomologyS1S3} that $\lambda_{SW}(X)$ is related to the Lefschetz number of the map induced by the cobordism obtained by cutting $X$ along $Y$. 
\subsection{Organization} In Section~\ref{sec:background} we provide some background on Heegaard Floer homology. Section~\ref{sec:provelefschetznumberformula} is purely algebraic and covers the proof of Proposition~\ref{prop:algebraicmappingtorus}. Section~\ref{sec:graphTQFT} covers some background and preliminary results about the graph TQFT for Heegaard Floer homology. In Section~\ref{sec:handledecomposition} we describe a handle decomposition of the trace cobordism. In Sections~\ref{sec:generalized1--handleand3--handlemaps} and \ref{sec:doubleddiagrams}, we describe two technical tools which will be useful later in the paper: the generalized 1- and 3-handle maps, and doubled Heegaard diagrams. In Section~\ref{sec:connectedsumsandgraphTQFT} we describe the behavior of the graph TQFT with respect to connected sums, and prove that the maps Ozsv\'{a}th and Szab\'{o} used to prove the K\"{u}nneth theorem are in fact graph cobordism maps. In Section~\ref{sec:Heegaardtriplesandgraphcobordisms} we show that the graph cobordism map for the 4-manifold obtained from a Heegaard triple is simply the holomorphic triangle map on that Heegaard triple. In Section~\ref{sec:traceandcotrace} we compute the cobordism maps for the trace and cotrace graph cobordisms, proving Theorem~\ref{thm:dualityv1}. Finally, in Section~\ref{sec:mixedinvariants}, we combine all of the results of the paper and prove the Lefschetz number formula for the mixed invariants of mapping tori, Theorem~\ref{thm:mixedinvariantmappingtorus}. \subsection{Acknowledgments} I would like to thank Ciprian Manolescu, my Ph.D. advisor, for suggesting the mapping torus problem as a motivating problem for research, as well as providing valuable suggestions and direction along the way. I would also like to thank Andr\'{a}s Juh\'{a}sz, Jianfeng Lin, Robert Lipshitz, Thomas Mark and Peter Ozsv\'{a}th for valuable discussions and suggestions. 
In addition, the author is indebted to Robert Lipshitz, Peter Ozsv\'{a}th and Dylan Thurston for providing some suggestions for proving Theorem~\ref{thm:triplesandgraphcobordismmaps}. \section{Background on Heegaard Floer homology} \label{sec:background} \subsection{Multi-pointed Heegaard diagrams} To define Heegaard Floer homology for multi-pointed 3-manifolds, we use the following definition: \begin{define}\label{def:multipointedheegaarddiagram}If $(Y,\ws)$ is a connected, closed 3-manifold with a nonempty collection of basepoints $\ws$, we say that $\cH=(\Sigma,\as,\bs,\ws)$ is a \textbf{multi-pointed Heegaard diagram for} $(Y,\ws)$ if the following are satisfied: \begin{enumerate} \item\label{def:mphd1} $\Sigma\subset Y$ is an embedded surface containing the points $\ws$, which splits $Y$ into two handlebodies, $U_{\as}$ and $U_{\bs}$, oriented so that $\Sigma=\d U_{\as}=-\d U_{\bs}$. \item\label{def:mphd2} $\as=\{\alpha_1,\dots, \alpha_{g(\Sigma)+|\ws|-1}\}$ and $\bs=\{\beta_1,\dots, \beta_{g(\Sigma)+|\ws|-1}\}$ each consist of $g(\Sigma)+|\ws|-1$ pairwise disjoint, simple closed curves on $\Sigma$. \item\label{def:mphd3} The collection $\as$ bounds pairwise disjoint compressing disks in $U_{\as}$, and $\bs$ bounds pairwise disjoint compressing disks in $U_{\bs}$. \item\label{def:mphd4} Each of the sets $\as$ and $\bs$ is homologically independent in $H_1(\Sigma\setminus \ws; \Z)$. \end{enumerate} \end{define} It follows from the above definition that if $(\Sigma,\as,\bs,\ws)$ is a multi-pointed Heegaard diagram, then each connected component of $\Sigma\setminus \as$ and of $\Sigma\setminus \bs$ is homeomorphic to a sphere with some number of disks removed, and contains exactly one basepoint of $\ws$. 
\subsection{The Heegaard Floer complexes} If $(Y,\ws)$ is a connected, closed, oriented 3-manifold with basepoints $\ws$ and a $\Spin^c$ structure $\frs$, Ozsv\'{a}th and Szab\'{o} define $\bF_2[U]$-modules \[\HF^-(Y,\ws,\frs), \qquad \HF^\infty(Y,\ws,\frs), \qquad \HF^+(Y,\ws,\frs), \qquad \text{and} \qquad \hat{\HF}(Y,\ws,\frs)\] in \cite{OSDisks} (when $|\ws|=1$) and \cite{OSLinks} (when $|\ws|>1$). We view $U$ as acting by zero on $\hat{\HF}(Y,\ws,\frs)$. To define these homology groups, one first picks a multi-pointed Heegaard diagram $\cH=(\Sigma,\as,\bs,\ws)$ for $(Y,\ws)$. There are two tori, \[\bT_{\as}:=\alpha_1\times \dots \times \alpha_n \qquad \text{and} \qquad \bT_{\bs}:=\beta_1\times \dots \times \beta_n,\] inside of the symmetric product $\Sym^n(\Sigma)$, where $n:=|\as|=|\bs|=g(\Sigma)+|\ws|-1$. There is a map \[\frs_{\ws}:\bT_{\as}\cap \bT_{\bs}\to \Spin^c(Y),\] defined in \cite{OSDisks}*{Section~2.6}. The chain complex $\CF^-(\cH,\frs)$ is generated over $\bF_2[U]$ by intersection points $\xs\in \bT_{\as}\cap \bT_{\bs}$ with $\frs_{\ws}(\xs)=\frs$. If $\phi$ is a homology class of disks in $\Sym^n(\Sigma)$, with boundary on $\bT_{\as}\cup \bT_{\bs}$, and $\mu(\phi)=1$, then the moduli space $\cM(\phi)$ is generically 1-dimensional, and has a free action of $\R$. We let $\hat{\cM}(\phi)$ denote the quotient \[\hat{\cM}(\phi):=\cM(\phi)/\R.\] The differential on $\CF^-(\cH,\frs)$ counts Maslov index 1 holomorphic strips in $\Sym^n(\Sigma)$ via the formula \[\d(\xs)=\sum_{\ys\in \bT_{\as}\cap \bT_{\bs}} \sum_{\substack{\phi\in \pi_2(\xs,\ys)\\ \mu(\phi)=1}} \# \hat{\cM}(\phi) U^{n_{\ws}(\phi)} \cdot \ys.\] Under the strong $\frs$-admissibility assumption on the diagram $\cH$ (see \cite{OSDisks}*{Section~4.2.2}), the total number of holomorphic disks contributing to $\d(\xs)$ is finite. In the above expression, $n_{\ws}(\phi)$ denotes the total multiplicity of the homology class $\phi$ over the $\ws$ basepoints. 
The complexes $\CF^\infty(\cH,\frs)$, $\CF^+(\cH,\frs)$ and $\hat{\CF}(\cH,\frs)$ are obtained algebraically from $\CF^-(\cH,\frs)$ by the formulas \[\CF^\infty:=\CF^-\otimes_{\bF_2[U]} \bF_2[U,U^{-1}],\qquad \CF^+:=\CF^\infty/\CF^-\qquad \text{and}\]\[ \hat{\CF}:=\CF^-\otimes_{\bF_2[U]} \bF_2[U]/(U).\] The short exact sequence $0\to \CF^-(\cH,\frs)\to \CF^\infty(\cH,\frs)\to \CF^+(\cH,\frs)\to 0$ yields a long exact sequence on homology. We let \[\delta:\HF^+(\cH,\frs)\to \HF^-(\cH,\frs)\] denote the connecting homomorphism. Finally, the groups $\HF_{\red}^{\pm}(\cH,\frs)$ are defined by \[\HF_{\red}^-(\cH,\frs):=\ker\big(\HF^-(\cH,\frs)\to \HF^\infty(\cH,\frs)\big)\] and \[\HF^+_{\red}(\cH,\frs):=\coker\big(\HF^\infty(\cH,\frs)\to \HF^+(\cH,\frs)\big).\] The connecting homomorphism $\delta$ induces an isomorphism from $\HF_{\red}^+(\cH,\frs)$ to $\HF_{\red}^-(\cH,\frs)$. We note that, unlike $\HF^-$ or $\HF^+$, the module $\HF_{\red}^{\pm}$ is always finitely generated over $\bF_2$. \begin{comment} One can consider a more general situation, where each basepoint $w\in \ws$ is allowed to have its own variable $U_w$ (this was the situation considered in \cite{ZemGraphTQFT}). Using the terminology from \cite{ZemGraphTQFT}, in this paper we consider colorings of the basepoints where all basepoints are given the same color. \end{comment} We need the following naturality result: \begin{prop} If $\cH_1$ and $\cH_2$ are two strongly $\frs$-admissible diagrams for $(Y,\ws)$, then there is a transition map \[\Psi_{\cH_1\to \cH_2}: \CF^-(\cH_1,\frs)\to \CF^-(\cH_2,\frs),\] which is well defined up to chain homotopy. Furthermore, \[\Psi_{\cH\to \cH}\simeq \id_{\CF^-(\cH,\frs)}.\] If $\cH_1,\cH_2$ and $\cH_3$ are three $\frs$-admissible Heegaard diagrams, then \[\Psi_{\cH_1\to \cH_3}\simeq \Psi_{\cH_2\to \cH_3}\circ \Psi_{\cH_1\to \cH_2}.\] \end{prop} We refer the reader to \cite{JTNaturality} for more about the problem of naturality; however, we make a few remarks here. 
If $\cH_1$ and $\cH_2$ are two admissible Heegaard diagrams for $(Y,\ws)$, one can always connect $\cH_1$ and $\cH_2$ by a sequence of elementary Heegaard moves. Using this fact, Ozsv\'{a}th and Szab\'{o} construct a transition map $\Psi_{\cH_1\to \cH_2}$ from $\HF^-(\cH_1,\frs)$ to $\HF^-(\cH_2,\frs)$ in \cite{OSDisks}. They show that $\Psi_{\cH_1\to \cH_2}$ is a quasi-isomorphism, though it is not obviously independent of the sequence of intermediate Heegaard diagrams between $\cH_1$ and $\cH_2$. The main result of \cite{JTNaturality} is that the map $\Psi_{\cH_1\to \cH_2}$ is independent (on homology) of the choice of Heegaard moves from $\cH_1$ to $\cH_2$. Using this, it is possible to define a single $\bF_2[U]$-module $\HF^-(Y,\ws,\frs)$ as the transitive limit of the groups $\HF^-(\cH,\frs)$ (see \cite{JTNaturality}*{Definition~1.1}). In fact, using some additional results proven by Lipshitz (see \cite{LipshitzCylindrical}*{Proposition~11.4}), one can show that $\Psi_{\cH_1\to \cH_2}$ is a chain homotopy equivalence, and that $\Psi_{\cH_1\to \cH_2}$ is well defined up to $\bF_2[U]$-equivariant chain homotopy. We refer the reader to \cite{HMInvolutive}*{Proposition~2.3} for an overview of this last fact. For some of the neck-stretching arguments in this paper, an important tool will be Lipshitz's cylindrical reformulation of Heegaard Floer homology \cite{LipshitzCylindrical}. Lipshitz constructs a chain complex, generated by intersection points $\xs\in\bT_{\as}\cap \bT_{\bs}$, as above, but with a differential that counts holomorphic curves in $\Sigma\times [0,1]\times \R$, with boundary on $\bs\times \{0\}\times \R$ and $\as\times \{1\}\times \R$. We describe some additional technical details about this approach in Section~\ref{section:analyticalaspects}. 
The cylindrical reformulation is motivated by the ``tautological correspondence'' between $(\frj_{\bD}, \Sym^{n}(\frj_\Sigma))$-holomorphic maps $u: \bD\to \Sym^{n}(\Sigma)$ and $(\frj_S, \frj_{\Sigma}\times \frj_{\bD})$-holomorphic maps $u':S\to \Sigma\times \bD,$ where $S$ is a Riemann surface and $\pi_{\bD}\circ u'$ is an $n$-fold branched cover of $\bD$. Here $\Sym^n(\frj_{\Sigma})$ and $\frj_{\Sigma}\times \frj_{\bD}$ denote the product almost complex structures. See \cite{OSDisks}*{Lemma~3.6} and \cite{LipshitzCylindrical}*{Section~13} for more details about the tautological correspondence and the equivalence between the two constructions. \subsection{Duality and the Heegaard Floer complexes} \label{sec:dualityofcomplexes} If $(\Sigma,\as,\bs,\ws)$ is a diagram for $(Y,\ws)$, then $(\Sigma,\bs,\as,\ws)$ is a diagram for $(-Y,\ws)$. In \cite{OSTriangles}*{Section~5}, Ozsv\'{a}th and Szab\'{o} define a pairing map \[\langle ,\rangle_{\bF_2} :\CF^\infty(Y,\ws,\frs)\otimes_{\bF_2} \CF^\infty(-Y,\ws,\bar{\frs})\to \bF_2,\] by the formula \begin{equation}\langle U^i\cdot \xs, U^j\cdot \ys \rangle_{\bF_2}=\begin{cases}1& \text{ if }i+j=-1 \text{ and } \xs=\ys\\ 0& \text{ otherwise.} \end{cases}\label{eq:F2pairingdefinition}\end{equation} We use the notation $\langle, \rangle_{\bF_2}$ instead of Ozsv\'{a}th and Szab\'{o}'s notation $\langle,\rangle$, to emphasize that the pairing $\langle, \rangle_{\bF_2}$ is not a map of $\bF_2[U]$ modules. As such, the pairing map $\langle,\rangle_{\bF_2}$ cannot have an interpretation in terms of the cobordism maps. 
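The last point can be made concrete directly from Equation \eqref{eq:F2pairingdefinition}. The sketch below is ours and purely illustrative (generators are labelled by strings; no Floer-theoretic input is used): the pairing is balanced, in the sense that $\langle U\cdot \alpha,\beta\rangle_{\bF_2}=\langle \alpha, U\cdot \beta\rangle_{\bF_2}$, but it is not $\bF_2[U]$-linear, since $U$ acts by zero on $\bF_2$ while the pairing does not annihilate $U$-multiples.

```python
# Pairing from the definition: <U^i x, U^j y> = 1 iff i + j = -1 and x = y.
# (An illustration of the algebra only, not a Floer computation.)
def pair_F2(i, x, j, y):
    return 1 if (i + j == -1 and x == y) else 0

# Balanced: shifting a power of U from one slot to the other changes nothing.
for i in range(-3, 3):
    assert pair_F2(i + 1, 'a', -2, 'a') == pair_F2(i, 'a', -1, 'a')

# Not F_2[U]-linear: U acts as 0 on F_2, so linearity would force the pairing
# to vanish whenever one slot is a U-multiple -- but it does not:
assert pair_F2(1, 'a', -2, 'a') == 1   # <U*a, U^{-2} a> = 1, while U*<a, U^{-2} a> = 0
```

The $\bF_2[U]$-valued trace pairing introduced below repairs exactly this defect, since its target carries a $U$-action.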
Using the pairing $\langle, \rangle_{\bF_2}$, Ozsv\'{a}th and Szab\'{o} show in \cite{OSProperties}*{Proposition~2.5} (see also \cite{OSTriangles}*{Section~5.1}) that there is a chain isomorphism over $\bF_2[U]$ between \begin{equation}\CF^-(-Y,\ws,\bar{\frs})\iso \Hom_{\bF_2}(\CF^+(Y,\ws,\frs),\bF_2).\label{eq:OSdualcomplex}\end{equation} As described in the introduction, there is also a trace pairing, which will have an interpretation in terms of cobordisms. The trace map \[\tr: \CF^-(Y,\ws,\frs)\otimes_{\bF_2[U]} \CF^-(-Y, \ws,\bar{\frs})\to \bF_2[U],\] is defined by the formula \[\tr( U^i \cdot \xs, U^j\cdot \ys )=\begin{cases}U^{i+j}& \text{ if }\xs=\ys\\ 0& \text{ otherwise}, \end{cases}\] using the two diagrams $(\Sigma,\as,\bs)$ for $Y$ and $(\Sigma,\bs,\as)$ for $-Y$. Since there is a bijection between flowlines from $\xs$ to $\ys$ on $(\Sigma,\as,\bs)$ and flowlines from $\ys$ to $\xs$ on $(\Sigma,\bs,\as)$, it follows that \[\tr(\d (\xs),\ys) =\tr( \xs, \d^\vee (\ys)).\] Hence there is a chain isomorphism \begin{equation}\CF^-(-Y,\ws,\bar{\frs})\iso \Hom_{\bF_2[U]}(\CF^-(Y,\ws,\frs), \bF_2[U]).\label{eq:F[U]duality}\end{equation} Despite its slightly different appearance, the duality isomorphism from Equation \eqref{eq:F[U]duality} is equivalent to the isomorphism proven by Ozsv\'{a}th and Szab\'{o} in Equation \eqref{eq:OSdualcomplex}, as we explain in the following lemma: \begin{lem}If $C$ is a free, finitely generated chain complex over $\bF_2[U]$, then \[\Hom_{\bF_2}(C^+, \bF_2)\iso \Hom_{\bF_2[U]}(C,\bF_2[U]),\] where $C^+:=(C\otimes \bF_2[U,U^{-1}])/C$. \end{lem} \begin{proof} Note that we can write $C^+$ as $C\otimes_{\bF_2[U]} (\bF_2[U,U^{-1}]/\bF_2[U])$. 
Using tensor-hom adjunction, we have \begin{align*}\Hom_{\bF_2}(C^+,\bF_2) &\iso \Hom_{\bF_2}(C\otimes_{\bF_2[U]} (\bF_2[U,U^{-1}]/\bF_2[U]),\bF_2)\\ &\iso \Hom_{\bF_2[U]}(C, \Hom_{\bF_2}(\bF_2[U,U^{-1}]/\bF_2[U],\bF_2)).\end{align*} We note that it is easy to construct an isomorphism \[\Hom_{\bF_2}(\bF_2[U,U^{-1}]/\bF_2[U],\bF_2)\iso \bF_2[U] \] of $\bF_2[U]$-modules. The main claim now follows. \end{proof} There is also a cotrace map \[\cotr:\bF_2[U]\to \CF^-(Y,\ws,\frs)\otimes_{\bF_2[U]} \CF^-(-Y,\ws, \bar{\frs}),\] which we can define as the dual of the trace map with domain $\CF^-(-Y,\ws,\bar{\frs})\otimes_{\bF_2[U]} \CF^-(Y,\ws,\frs)$. On the level of generators, the cotrace map takes the form \[\cotr(1)=\sum_{i=1}^n \xs_i \otimes \xs_i^\vee,\] for a basis $\xs_1,\dots, \xs_n$ of $\CF^-(Y,\ws,\frs)$. \subsection{Heegaard Floer mixed invariants} To a closed, oriented 4-manifold $X$ with $b_2^+(X)>1$, Ozsv\'{a}th and Szab\'{o} define a mixed invariant $\Phi_{X,\frt}$ \cite{OSTriangles}, which is a map \[\Phi_{X,\frt}:\Lambda^*( H_1(X;\Z)/\Tors)\otimes_{\bF_2} \bF_2[U]\to \bF_2.\] In this section, we describe Ozsv\'{a}th and Szab\'{o}'s construction, and state some basic properties. For notational reasons, we will focus on $\Phi_{X,\frt}(1)$. An important component of the construction of the mixed invariant is the following definition: \begin{define}An \textbf{admissible cut} of a 4-manifold $X$ is a closed, connected 3-manifold $N^3\subset X$, which separates $X$ into two connected submanifolds, $X_1$ and $X_2$, meeting along $N$, such that $b_2^+(X_i)>0$ and such that the restriction map \[H^2(X;\Z)\to H^2(X_1;\Z)\oplus H^2(X_2;\Z)\] is an injection. \end{define} Given an admissible cut $N\subset X$, we construct a cobordism $W_1:S^3\to N$ by removing a 4-ball from $X_1$. We construct a cobordism $W_2:N\to S^3$ similarly. 
The condition that $b_2^+(X_i)>0$ ensures that both \[F_{W_1, \frt|_{W_1}}^\infty:\Lambda^*( H_1(W_1;\Z)/\Tors) \otimes \HF^\infty(S^3)\to \HF^\infty(N, \frt|_{N}) \] and \[F_{W_2, \frt|_{W_2}}^\infty: \Lambda^*( H_1(W_2;\Z)/\Tors)\otimes \HF^\infty(N, \frt|_{N})\to \HF^\infty(S^3)\] vanish \cite{OSTriangles}*{Lemma~8.2}. It follows that if $N$ is an admissible cut, the maps $F_{W_1,\frt|_{W_1}}^-$ and $F_{W_2,\frt|_{W_2}}^+$ factor through $\HF_{\red}^{\pm}$, as in the following diagram: \[\begin{tikzcd}\, &&& \HF^-(S^3)\arrow{d}{F_{W_1,\frt|_{W_1}}^-}\arrow[dashed]{dl}\\ \HF^+(N,\frt|_N)\arrow{r}\arrow[swap]{d}{F^+_{W_2,\frt|_{W_2}}}& \HF^+_{\red}(N,\frt|_N) \arrow[dashed]{dl}\arrow{r}{\delta}[swap]{\iso}&\HF^-_{\red}(N,\frt|_{N})\arrow{r}& \HF^-(N,\frt|_{N})\\ \HF^+(S^3)&&& \end{tikzcd}.\] The mixed invariant $\Phi_{X,\frt}$ is then defined as \[\Phi_{X,\frt}(1)=\langle (F_{W_2,\frt|_{W_2}}^+\circ \delta^{-1}\circ F_{W_1,\frt|_{W_1}}^-)(1), 1\rangle_{\bF_2},\] where $1\in \HF^-(S^3)$ denotes the top degree generator, and $\langle,\rangle_{\bF_2}: \HF^+(S^3)\otimes_{\bF_2} \HF^-(S^3)\to \bF_2$ is the pairing map from Equation \eqref{eq:F2pairingdefinition}. Phrased another way, the invariant $\Phi_{X,\frt}(1)$ is defined as the coefficient of $U^{-1}$ in the expression $(F_{W_2,\frt|_{W_2}}^+\circ \delta^{-1}\circ F_{W_1,\frt|_{W_1}}^-)(1)$. More generally, if $\xi_1\in \bF_2[U]\otimes \Lambda^*(H_1(X_1;\Z)/\Tors)$ and $\xi_2\in \bF_2[U]\otimes \Lambda^* (H_1(X_2;\Z)/\Tors)$, then the invariant $\Phi_{X,\frt}(\xi_1\wedge \xi_2)\in \bF_2$ is defined as the coefficient of $U^{-1}$ in the expression \begin{equation}F_{W_2, \frt|_{W_2}}^+(\xi_2\otimes \delta^{-1}( F_{W_1, \frt|_{W_1}}^{-}(\xi_1))).\label{eq:mixedinvariantwithhomologyaction}\end{equation} Computing the mixed invariants can sometimes be done in the presence of a non-admissible cut $N\subset X$ with a $\Spin^c$ structure $\frt$ on $N$ which is non-torsion. 
This situation is described in \cite{OSTrianglesandSymplectic}*{Section~2}. Suppose that $X$ is a closed 4-manifold which has a (not necessarily admissible) cut $N$ dividing $X$ into two pieces $X_1$ and $X_2$. Suppose further that we pick $\Spin^c$ structures $\frs_1$ and $\frs_2$ on $X_1$ and $X_2$, respectively, such that $\frs_1$ and $\frs_2$ restrict to the same $\Spin^c$ structure $\frt\in \Spin^c(N)$, and that $\frt$ is non-torsion. In the above situation, the fact that $\frt$ is non-torsion implies that $\HF^-(N,\frt), \HF^\infty(N,\frt)$ and $\HF^+(N,\frt)$ are all torsion modules over $\bF_2[U]$ (i.e. there is a nonzero element in $\bF_2[U]$ which annihilates them). In fact, according to \cite{OSTrianglesandSymplectic}*{Lemma~2.3}, for sufficiently large $m$ and $\ell$, the action of $(1+U^{m!})^\ell$ annihilates $\HF^\infty(N,\frt)$ and is the identity on $\HF^-_{\red}(N,\frt)$. For sufficiently large $m$ and $\ell$, the map obtained by multiplication by $(1+U^{m!})^{\ell}$ is independent of $m$ and $\ell$. Furthermore, the map $\HF^\infty(N,\frt)\to \HF^+(N,\frt)$ is trivial \cite{OSTrianglesandSymplectic}*{Corollary~2.4}, so that $\HF_{\red}^+(N,\frt)=\HF^+(N,\frt)$. Hence, composing the action of $(1+U^{m!})^\ell$ with the inverse of the connecting homomorphism $\delta$ yields a map \[\Pi_N^{\red}:\HF^-(N,\frt)\to \HF^+(N,\frt).\] We can then define the invariant \[\Phi_{X,N,\frs_1,\frs_2}(1):=\langle (F^+_{W_2,\frs_2}\circ \Pi_N^{\red}\circ F_{W_1,\frs_1}^-)(1),1\rangle_{\bF_2}.\] More generally, if $\xi\in \Lambda^*(H_1(X;\Z))\otimes \bF_2[U]$ is an arbitrary element, we can define $\Phi_{X,N,\frs_1, \frs_2}(\xi)$ by adapting the expression in Equation \eqref{eq:mixedinvariantwithhomologyaction}. 
\begin{comment} If $\xi\in \Lambda^*(H_1(X;\Z))\otimes \bF_2[U]$ is an element that can be written as $\xi=\xi_1\wedge \xi_2$ for $\xi_1\in \Lambda^*(H_1(X_1;\Z))\otimes \bF_2[U]$ and $\xi_2\in \bF_2[U]\otimes \Lambda^*(H_1(X_2;\Z))$, we can define \[\Phi_{X,N,\frs_1,\frs_2}(\xi):=\langle F^+_{W_2,\frs_2}(\xi_2\otimes \Pi_Y^{\red}( F_{W_1,\frs_1}^-(\xi_1))),1\rangle_{\bF_2}.\] \end{comment} To simplify the algebra in the definition of $\Phi_{X,N,\frs_1,\frs_2}$, it is convenient to work with completed versions of the chain complexes over the ring $\bF_2[[U]]$. We define complexes $\boldCF^-,\boldCF^\infty$ and $\boldCF^+$ by tensoring with $\bF_2[[U]]$, the power series ring in the variable $U$. When $\frt$ is non-torsion, the module $\boldHF^\infty(N,\frt)$ vanishes and we have \[\boldHF^-(N,\frt)=\HF_{\red}^-(N,\frt) \qquad \text{and} \qquad \boldHF^+(N,\frt)=\HF^+(N,\frt)=\HF^+_{\red}(N,\frt).\] It follows that the connecting homomorphism \[\delta:\boldHF^+(N,\frt)\to \boldHF^-(N,\frt)\] is an isomorphism. We can define a completed version of the mixed invariant \begin{equation}\ve{\Phi}_{X,N,\frs_1,\frs_2}(1):=\langle (F_{W_2,\frs_2}^+\circ \delta^{-1}\circ F_{W_1,\frs_1}^-)(1), 1\rangle_{\bF_2}.\label{eq:definitionmixedinvariant}\end{equation} More generally, we can define $\ve{\Phi}_{X,N,\frs_1,\frs_2}(\xi)$ for $\xi\in \Lambda^*( H_1(X;\Z)/\Tors)\otimes \bF_2[U]$, as in Equation \eqref{eq:mixedinvariantwithhomologyaction}. \begin{prop}\label{prop:mixedinvariantofnontorisioncutcomplaw}Suppose that $X$ is a closed 4-manifold with $b_2^+(X)>1$ which has a connected cut $N$ which separates $X$ into two cobordisms, $X_1$ and $X_2$. 
If $\frs_1\in \Spin^c(X_1)$ and $\frs_2\in \Spin^c(X_2)$ are two $\Spin^c$ structures which restrict to the same non-torsion $\Spin^c$ structure $\frt\in \Spin^c(N)$, then the mixed invariants defined above satisfy the relation \[\ve{\Phi}_{X,N,\frs_1,\frs_2}(\xi)=\Phi_{X,N,\frs_1,\frs_2}(\xi)=\sum_{\substack{\frs\in \Spin^c(X)\\\frs|_{X_i}=\frs_i}} \Phi_{X,\frs}(\xi).\] \end{prop} \begin{proof}Let us focus on $\xi=1$, for notational simplicity. The second equality follows from \cite{OSTrianglesandSymplectic}*{Proposition~2.5}. The first equality follows since there are natural maps $\CF^\circ\to\boldCF^\circ$, which commute with the maps in the long exact sequence for $\HF^-,\HF^\infty$ and $\HF^+,$ and also commute with the cobordism maps. This identifies $\Phi_{X,N,\frs_1,\frs_2}(1)$ with the $U^{-1}$ coefficient of the element \begin{equation}(F_{W_2,\frs_2}^+\circ \delta^{-1}\circ (1+U^{m!})^{\ell}\circ F_{W_1,\frs_1}^-)(1)\in \boldHF^+(S^3),\label{eq:mixedinvaraintcompleted}\end{equation} for sufficiently large $m$ and $\ell$. However $(1+U^{m!})^{\ell}$ acts by the identity on $\boldHF^-(N,\frt)$, for large $m$, since $\boldHF^\infty(N,\frt)$ vanishes, so Equation \eqref{eq:mixedinvaraintcompleted} reduces to Equation \eqref{eq:definitionmixedinvariant}. \end{proof} \subsection{Almost complex structures, moduli spaces and transversality} \label{section:analyticalaspects} We now describe some technical details about the almost complex structures and holomorphic curves we consider in this paper, and state some transversality results which will be helpful for some gluing arguments that appear in Section~\ref{sec:generalized1--handleand3--handlemaps}. 
If $(\Sigma,\as,\bs,\ws)$ is a multi-pointed Heegaard diagram we will primarily be interested in almost complex structures on the cylindrical 4-manifold $\Sigma\times [0,1]\times \R$ which satisfy the following axioms (taken from \cite{LipshitzCylindrical}): \begin{enumerate}[label=($J$\arabic*)] \item\label{def:J1} $J$ is tamed by the product symplectic form; \item\label{def:J2} $J$ is split (i.e. equal to $\frj_\Sigma\times \frj_{\bD}$) in a cylindrical neighborhood of $\ws\times [0,1]\times \R$. \item\label{def:J3} $J$ is translation invariant in the $\R$ factor. \item\label{def:J4} $J(\d/\d s)=\d/\d t$. \item\label{def:J5} $J$ preserves the 2-planes $T(\Sigma\times \{(s,t)\})$ for all $(s,t)\in [0,1]\times \R$. \end{enumerate} For the purposes of a gluing argument in Section~\ref{sec:generalized1--handleand3--handlemaps}, these will not be generic enough, so we state an alternate fifth axiom (also from \cite{LipshitzCylindrical}): \begin{enumerate}[label=($J$\arabic*$'$)] \setcounter{enumi}{4} \item \label{def:J5'} There is a 2-plane distribution $\xi$ on $\Sigma\times [0,1]$ such that the restriction of $\omega$ to $\xi$ is non-degenerate, $J$ preserves $\xi$ and the restriction of $J$ to $\xi$ is compatible with $\omega$. We further assume that $\xi$ is tangent to $\Sigma\times \{pt\}$ near $(\as\cup \bs)\times [0,1]\times \R$ and near $\Sigma\times \{0,1\}\times \R$. \end{enumerate} As in \cite{LipshitzCylindrical}, we are interested in holomorphic curves $u:S\to \Sigma\times [0,1]\times \R$, such that $S$ is a Riemann surface with boundary and $n:=g(\Sigma)+|\ws|-1$ positive punctures $p_1,\dots, p_n$ and $n$ negative punctures $q_1,\dots, q_n$, and such that the following are satisfied: \begin{enumerate}[label=($M$\arabic*)] \item\label{def:M1} $S$ is a smooth (not nodal) Riemann surface. \item\label{def:M2} $u(\d S)\subset (\as\times \{1\}\times \R)\cup (\bs\times \{0\}\times \R)$. 
\item\label{def:M4} $\lim_{z\to p_i} (\pi_{\R}\circ u)(z)=-\infty$ and $\lim_{z\to q_i} (\pi_{\R}\circ u)(z)=\infty$. \item\label{def:M5} $u$ has finite energy. \item\label{def:M3} $\pi_{\bD}\circ u$ is locally non-constant. \item\label{def:M6} $u$ is an embedding. \end{enumerate} We will also need to consider a weaker version of the \ref{def:M3} axiom: \begin{enumerate}[label=($M$\arabic*$'$)] \setcounter{enumi}{4} \item \label{def:M3'} There is no non-empty open subset $U\subset S$ such that $\pi_{\bD}\circ u|_U$ is constant, and takes value near $\{0,1\}\times \R$ (in the sense of \ref{def:J5'}). \end{enumerate} It is important for our purposes to have a precise formula for the dimension of the moduli spaces $\cM(\phi)$. The Maslov index $\mu(\phi)$ provides the expected dimension, though in general the actual dimension of the moduli space $\cM(\phi)$ may differ from the Maslov index when transversality is not achieved, or if there are non-embedded curves in the moduli space. To deal with the presence of curves which are potentially non-embedded, it is helpful to consider a refinement of the moduli space $\cM(\phi)$ which takes into account the topological source curve $S$. If $S$ is a topological source curve and $\phi$ is a homology class, we can consider the moduli space \[\cM(S,\phi)\] of holomorphic curves $u:S\to \Sigma\times [0,1]\times \R$ representing the homology class $\phi$, which satisfy \ref{def:M1}--\ref{def:M3}. Near a holomorphic curve $u$ where $D\bar{\d}$ achieves transversality, $\cM(S,\phi)$ will be a smooth manifold of dimension equal to the Fredholm index of $D\bar{\d}$ at $u$. We note that it is reasonable to consider these refinements of the moduli spaces $\cM(\phi)$ according to the topology of the source curve $S$, since according to \cite{LipshitzCylindrical}*{Equation~6}, the Fredholm index at a curve $u:S\to \Sigma\times [0,1]\times \R$ is determined by the Euler characteristic of $S$ and the homology class of $\phi$. 
It follows from \cite{LipshitzCylindrical}*{Corollary~4.3} that if $u:S\to \Sigma\times [0,1]\times \R$ is a holomorphic curve which is an embedding, then the Fredholm index agrees with the Maslov index, so the expected dimension of $\cM(S,\phi)$ is $\mu(\phi)$. More generally, the Fredholm index satisfies \[\ind(u)=\mu(\phi)-2\sing(u),\] where $\sing(u)$ is the number of double points of $u$, counted with appropriate multiplicity (see \cite{LipshitzCylindricalErrata}*{Proposition~4.2'}). Given a submanifold $X\subset \Sym^n(\bD)$ and a point $p\in \Sigma\setminus (\as\cup \bs)$, we will need to consider the matched moduli space \[\cM(S,\phi,X):=\{u\in \cM(S,\phi): \rho^p(u)\in X\},\] where $n=n_p(\phi)$ and $\rho^p:\cM(S,\phi)\to \Sym^n(\bD)$ is the map \begin{equation}\rho^p(u):=(u\circ \pi_{\bD})\big((u\circ \pi_{\Sigma})^{-1}(p)\big).\label{eq:rhopdefinition}\end{equation} We will be exclusively interested in subsets $X\subset \Sym^n(\bD)$ which avoid the fat diagonal in $\Sym^n(\bD)$, i.e., the codimension 2 subset consisting of tuples with at least one repeated entry. We need the following transversality result: \begin{prop}\label{prop:transversalitydisks}Suppose $J$ is a generic almost complex structure on $\Sigma\times [0,1]\times \R$ satisfying \textup{\ref{def:J1}--\ref{def:J5}}. 
Then near any holomorphic curve $u:S\to \Sigma\times [0,1]\times \R$, satisfying \textup{\ref{def:M1}--\ref{def:M3}}, the moduli space $\cM(S,\phi)$ is a transversely cut out smooth manifold of dimension \[\ind(u)=\mu(\phi)-2\sing(u).\] Similarly if $X\subset \Sym^n(\bD)$ is a submanifold which avoids the fat diagonal, then near any curve $u\in \cM(S,\phi,X)$ satisfying \textup{\ref{def:M1}--\ref{def:M3}}, the space $\cM(S,\phi,X)$ is a transversely cut out smooth manifold of dimension \[\ind(u)=\mu(\phi)-2\sing(u)-\codim(X).\] If $J$ is a generic almost complex structure on $\Sigma\times [0,1]\times \R$ which satisfies \textup{\ref{def:J1}--\ref{def:J4}} and \textup{\ref{def:J5'}}, then the same statements hold at any holomorphic curve $u:S\to \Sigma\times [0,1]\times \R$ which satisfies \textup{\ref{def:M1}}, \textup{\ref{def:M2}}, \textup{\ref{def:M4}}, \textup{\ref{def:M5}} and \textup{\ref{def:M3'}}, with no multiply covered closed components, and with no components $S_0$ such that $\pi_{\bD}\circ u|_{S_0}$ is constant and takes on a value near $\{0,1\}\times \R$ (in the sense of \textup{\ref{def:J5'}}). \end{prop} The proof of the statements involving the unmatched moduli spaces $\cM(S,\phi)$ can be found in \cite{LipshitzCylindrical}*{Sections~3, 4} (see also \cite{LipshitzCylindricalErrata}). The statement about the matched moduli spaces is handled by adapting \cite{MS04:HolomorphicCurvesSymplecticTopology}*{Theorem~3.4.1}. In analogy to the situation in \cite{MS04:HolomorphicCurvesSymplecticTopology}, the proof that $\cM(S,\phi,X)$ is transversely cut out is substantially simplified by assuming that $X$ avoids the fat diagonal in $\Sym^n(\bD)$. We note that the condition that $X$ avoids the fat diagonal also implies that there are no multiply covered closed components. We refer the reader to \cite{JTNaturality}*{Section~9.3} for a somewhat more detailed account of the proof of the statement about the matched moduli spaces $\cM(S,\phi,X)$. 
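As an illustration of the dimension formula above, suppose for instance that $X=\{x\}\subset \Sym^n(\bD)$ is a single point which avoids the fat diagonal, so that $\codim(X)=2n$. An embedded curve $u$ (so $\sing(u)=0$) is then expected to be rigid in the matched moduli space $\cM(S,\phi,X)$ exactly when \[\mu(\phi)-2\sing(u)-\codim(X)=\mu(\phi)-2n=0,\] i.e. when $\mu(\phi)=2n$. 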
Analogously, we need to describe the almost complex structures which we use to define the holomorphic triangle maps. We let $\Delta$ denote a triangular region in the complex plane, which has three boundary components, and three cylindrical ends, each identified with $[0,1]\times [0,\infty)$. As in \cite{LipshitzCylindrical}, we will primarily consider almost complex structures on $\Sigma\times \Delta$ which satisfy the following axioms: \begin{enumerate}[label=($J'$\arabic*)] \item\label{def:J'1} $J$ is tamed by the split symplectic form on $\Sigma\times \Delta$. \item \label{def:J'2}There is a finite collection of points $P\subset \Sigma\setminus(\as\cup\bs\cup\gs)$ with at least one point in each component of $\Sigma\setminus (\as\cup \bs\cup \gs)$ such that $J$ is split on a product neighborhood of $P\times \Delta$. \item\label{def:J'3} In the cylindrical ends of $\Delta$, $J$ is equal to a cylindrical almost complex structure satisfying \ref{def:J1}--\ref{def:J5}. \item\label{def:J'4} The projection map $\pi_\Delta:\Sigma\times \Delta\to \Delta$ is holomorphic and the tangent space of each fiber of $\pi_\Sigma$ is a complex line. \end{enumerate} There will be some instances when we need to consider a more generic set of almost complex structures on $\Sigma\times \Delta$. We need the following alternate axioms for almost complex structures on $\Sigma\times \Delta$: \begin{enumerate}[label=($J'$\arabic*$'$)] \setcounter{enumi}{2} \item\label{def:J'3'} In the cylindrical ends of $\Delta$, $J$ agrees with cylindrical almost complex structures satisfying \ref{def:J1}--\ref{def:J4} and \ref{def:J5'}, above. \item\label{def:J'4'} The 2-planes of $T(\{p\}\times \Delta)$ are complex lines of $J$ for all $p\in \Sigma$. \item\label{def:J'5'} The 2-planes $T(\Sigma\times \{d\})$, for $d\in \Delta$, are complex lines for $J$ near $(\as\cup \bs\cup \gs)\times \Delta$ and on $\Sigma\times U$ for an open subset $U\subset \Delta$ containing the three components of $\d \Delta$. 
\end{enumerate} \begin{rem}We view $\Delta$ as having three cylindrical ends, so in \ref{def:J'5'}, we pick a neighborhood of $\d \Delta$ which does not contain the vertices of $\Delta$, but instead should be viewed as containing a cylindrical neighborhood of $\{0,1\}\times [0,\infty)$ in the cylindrical ends of $\Delta$. \end{rem} Given a source surface $S$ and a homology class of triangles $\psi$, we can again consider the moduli space of curves $\cM(S,\psi)$ satisfying the obvious analogs of \ref{def:M1}--\ref{def:M5}, for triangles. If $p\in \Sigma\setminus (\as\cup \bs)$ is a point, we can also consider a map $\rho^p:\cM(S,\psi)\to \Sym^{n_p(\psi)}(\Delta)$, defined analogously to Equation \eqref{eq:rhopdefinition}. If $X\subset \Sym^n(\Delta)$ is a subset (which we will always assume avoids the fat diagonal), then we can consider the matched moduli space $\cM(S,\psi,X)$, as before. In analogy to Proposition~\ref{prop:transversalitydisks}, we state a useful transversality result for holomorphic curves mapping into $\Sigma\times \Delta$: \begin{prop}\label{prop:transversalitytriangles} Suppose that $J$ is a generic almost complex structure on $\Sigma\times \Delta$ which satisfies \textup{\ref{def:J'1}--\ref{def:J'4}}. 
Then near any holomorphic curve $u:S\to \Sigma\times \Delta$, satisfying the analogs of \textup{\ref{def:M1}--\ref{def:M5}} for triangles, the moduli space $\cM(S,\psi)$ is a transversely cut out smooth manifold of dimension \[\ind(u)=\mu(\psi)-2\sing(u).\] Similarly if $X\subset \Sym^n(\Delta)$ is a submanifold which avoids the fat diagonal, then near any curve $u\in \cM(S,\psi,X)$ satisfying the analogs of \textup{\ref{def:M1}--\ref{def:M5}}, the space $\cM(S,\psi,X)$ is a transversely cut out smooth manifold of dimension \[\ind(u)=\mu(\psi)-2\sing(u)-\codim(X).\] If $J$ is a generic almost complex structure on $\Sigma\times \Delta$ which satisfies \textup{\ref{def:J'1}}, \textup{\ref{def:J'2}}, \textup{\ref{def:J'3'}}, \textup{\ref{def:J'4'}} and \textup{\ref{def:J'5'}}, then the same statements hold at any holomorphic curve $u:S\to \Sigma\times \Delta$ which satisfies the analogs of \textup{\ref{def:M1}}, \textup{\ref{def:M2}}, \textup{\ref{def:M4}}, \textup{\ref{def:M5}} and \textup{\ref{def:M3'}} for triangles, with no multiply covered closed components, and with no components $S_0$ such that $\pi_{\Delta}\circ u|_{S_0}$ is constant and takes on a value near $\d \Delta$ (in the sense of \textup{\ref{def:J'5'}}). \end{prop} A sketch of the proof can be found in \cite{JTNaturality}*{Section~9.3}. The argument follows from adapting \cite{LipshitzCylindrical}*{Sections 3, 4} for the unmatched moduli spaces and \cite{MS04:HolomorphicCurvesSymplecticTopology}*{Theorem~3.4.1} for the matched moduli spaces. \section{Lefschetz numbers and torsion complexes over \texorpdfstring{$\bF_2[[U]]$}{F2[[U]]}} \label{sec:provelefschetznumberformula} In this section, we prove Proposition~\ref{prop:algebraicmappingtorus}, a formula for the Lefschetz number of an endomorphism of a chain complex whose homology is torsion over the power series ring $\bF_2[[U]]$. 
\subsection{Background on chain complexes over \texorpdfstring{$\bF_2[U]$}{F2[U]}} We assume that all chain complexes have a relative $\Z_2$ grading, which is lowered by $\d$, and which is preserved by the action of $U$. If $C$ is such a complex, which is also free and finitely generated over $\bF_2[U]$, we define chain complexes $C^-, C^\infty$ and $C^+$ by the formulas \[C^-:=C, \qquad C^\infty:=C^-\otimes_{\bF_2[U]} \bF_2[U,U^{-1}]\qquad\text{and} \qquad C^+:=C^\infty/C^-.\] We write $H^\circ(C)$ for the homology group $H_*(C^\circ)$, for $\circ\in \{+,-,\infty\}$. The short exact sequence \[0\to C^-\to C^\infty\to C^+\to 0\] induces a long exact sequence on homology \[\cdots \to H^+(C)\xrightarrow{\delta} H^-(C)\to H^\infty(C)\to H^+(C)\to \cdots,\] where $\delta$ denotes the connecting homomorphism. We denote by $H_{\red}^{\pm}(C)$ the modules \[H_{\red}^-(C):=\ker(H^-(C)\to H^\infty(C) )\qquad \text{and} \qquad H_{\red}^+(C):=\coker(H^\infty(C)\to H^+(C)).\] The connecting homomorphism $\delta$ induces an isomorphism from $H_{\red}^+(C)$ to $H_{\red}^-(C)$. We note that the above constructions work if $C$ is a free, finitely generated chain complex over the power series ring $\bF_2[[U]]$, and we will use the same notation. There is a classification theorem for free, finitely generated chain complexes over a PID, similar to the classification theorem for finitely generated modules over a PID. We state the following version, specialized to the ring $\bF_2[[U]]$: \begin{lem}\label{lem:classificationfgPID}If $C$ is a free, finitely generated chain complex over $\bF_2[[U]]$, then $C$ is chain isomorphic to a direct sum of 1-step complexes (i.e. a complex with a single generator over $\bF_2[[U]]$, and vanishing differential) and 2-step complexes of the form $\ve{a}\xrightarrow{U^n} \ve{b}$ (i.e. a complex with two generators over $\bF_2[[U]]$, $\ve{a}$ and $\ve{b}$, with $\d(\ve{a})=U^n \cdot \ve{b}$). 
\end{lem} \begin{proof}The classification theorem for finitely generated chain complexes over a PID (see, e.g., \cite{HMZConnectedSum}*{Lemma~6.1}) says that any free, finitely generated chain complex over a PID $\cR$ can be written as a direct sum of 1-step complexes, and 2-step complexes of the form $\ve{a}\xrightarrow{p} \ve{b}$, for various $p\in \cR$. In our case, $\cR=\bF_2[[U]]$, and we need to check that for any 2-step complex which appears, the element $p$ can be taken to be a nonnegative power of $U$. Write $p=U^n(1+Uq(U))$ for some $q(U)\in \bF_2[[U]]$. Since $1+Uq(U)$ is a unit in $\bF_2[[U]]$, the complex \[\ve{a}\xrightarrow{U^n(1+Uq(U))} \ve{b}\] is chain isomorphic to the complex \[\ve{a}'\xrightarrow{U^n} \ve{b}'\] under the map $\ve{a}\mapsto \ve{a}'$ and $\ve{b}\mapsto (1+Uq(U))^{-1}\ve{b}'$. \end{proof} Another fact that will be useful to us (and our motivation for using the ring $\bF_2[[U]]$ instead of $\bF_2[U]$) is the following: \begin{lem}\label{lem:finitelygenerated=>inftyvanishes} If $C$ is a finitely generated, free chain complex over $\bF_2[[U]]$ and $H^+(C)$ is finite dimensional over $\bF_2$, then $H^\infty(C)=\{0\}$. \end{lem} \begin{proof}By Lemma~\ref{lem:classificationfgPID}, since $H^+(C)$ is finite dimensional, we know that $C$ is homotopy equivalent over $\bF_2[[U]]$ to a sum of complexes of the form $\ve{a}\xrightarrow{U^n} \ve{b}$. However, the $\infty$-flavor homology of such a chain complex clearly vanishes. \end{proof} There is a natural trace map \begin{equation}\tr: C\otimes_{\bF_2[U]} C^\vee\to \bF_2[U]\label{eq:trace}\end{equation} defined by the formula $\tr(\xs\otimes \ys)=\ys(\xs)$. Recalling that $C$ is finitely generated and free, there is a cotrace map \[\cotr: \bF_2[U]\to C\otimes_{\bF_2[U]} C^\vee,\] which can be defined as the dual of the trace map. 
If $\xs_1,\dots, \xs_n$ is a basis for $C$, then the cotrace map takes the form \[\cotr(1)=\sum_{i=1}^n \xs_i\otimes \xs_i^\vee.\] The trace and cotrace maps are easily seen to be chain maps, and can of course also be defined over $\bF_2[[U]]$. \subsection{The differentiated differential endomorphism $\Phi$} We now describe the map $\Phi:C\to C$, obtained by formally differentiating the differential. Suppose that $C$ is a free, finitely generated chain complex over $\bF_2[U]$, with a chosen basis $B=\{\xs_1,\dots, \xs_n\}$. We note that the construction works equally well over $\bF_2[[U]]$. We can write \begin{equation}\d(\ve{x}_i)=\sum_{j=1}^n P_{ij}\cdot \ve{x}_j,\label{eq:Pijdef}\end{equation} for $P_{ij}\in \bF_2[U]$. Let $P_{ij}'$ denote the derivative of $P_{ij}$ with respect to $U$. We then define the map \[\Phi_B:C\to C\] by the formula \[\Phi_B(\xs_i)=\sum_{j=1}^n P_{ij}' \cdot \xs_j.\] Viewing $\d$ as a matrix over the basis $B$, we can differentiate the expression $\d^2=0$ using the Leibniz rule to see that $\Phi_B$ is a chain map. Similarly, if $x\in C$, applying the Leibniz rule to the expression $\d(x)$ (viewed as a product of a matrix and a column vector) implies the relation \[\Phi_B=\d\circ \frac{d}{d U}\bigg|_B+\frac{d}{d U}\bigg|_B \circ \d.\] The map $d/d U|_B$ is defined by writing an element $x\in C$ in terms of the basis $B$, and then differentiating the coefficients of the basis elements. The map $d/d U|_B$ of course does not commute with the action of $U$. The map $\Phi_B$ is independent of the chosen basis, up to chain homotopy, in the following sense: \begin{lem}\label{lem:Phiwelldefineduptochainhomotopy}If $B_1$ and $B_2$ are two bases of $C$ over $\bF_2[U]$, then the maps $\Phi_{B_1}$ and $\Phi_{B_2}$ are chain homotopic over $\bF_2[U]$. 
\end{lem} \begin{proof}Consider the more general situation, where $(C_1,\d_1)$ and $(C_2,\d_2)$ are chain complexes over $\bF_2[U]$ with bases $B_1$ and $B_2$ and $F:C_1\to C_2$ is an $\bF_2[U]$-equivariant chain map. Differentiating the matrix equation (written in terms of the bases $B_1$ and $B_2$) \[F\circ \d_1+\d_2\circ F=0,\] we get that \[F\circ\Phi_{B_1}+\Phi_{B_2}\circ F\simeq 0.\] By specializing to the case that $(C_1,\d_1)=(C_2,\d_2)$ and $F:C_1\to C_2$ is the identity map, but the bases $B_1$ and $B_2$ are different, we obtain the lemma statement. \end{proof} \subsection{The Lefschetz number formula} In this section, we prove Proposition \ref{prop:algebraicmappingtorus}, our formula for the Lefschetz number of a map on a torsion chain complex over $\bF_2[[U]]$. If $C$ is a free, finitely generated complex over $\bF_2[[U]]$ and $H^+(C)$ is finite dimensional over $\bF_2$, then by Lemma~\ref{lem:classificationfgPID}, $H^+(C\otimes C^\vee)$ is also finite dimensional over $\bF_2$, so by Lemma~\ref{lem:finitelygenerated=>inftyvanishes}, we know $H^\infty(C\otimes C^\vee)$ vanishes, and hence \[H_{\red}^{\pm}(C\otimes C^{\vee})=H^{\pm}(C\otimes C^{\vee}).\] Furthermore, the connecting homomorphism $\delta$ is an isomorphism from $H^+(C\otimes C^{\vee})$ to $H^-(C\otimes C^{\vee})$. 
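Before proceeding, we record a sample computation of the map $\Phi$ from the previous subsection, which follows directly from the definition. If $C$ is the 2-step complex $\ve{a}\xrightarrow{U^n} \ve{b}$, with basis $B=\{\ve{a},\ve{b}\}$, then $\d(\ve{a})=U^n\cdot \ve{b}$, so \[\Phi_B(\ve{a})=nU^{n-1}\cdot \ve{b}\qquad \text{and}\qquad \Phi_B(\ve{b})=0.\] Dually, $\Phi_B^\vee(\ve{b}^\vee)=nU^{n-1}\cdot \ve{a}^\vee$ and $\Phi_B^\vee(\ve{a}^\vee)=0$. 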
If $C$ is a free, finitely generated chain complex such that $H^\infty(C)=\{0\}$ and $F:C\to C$ is a grading preserving chain map, we define the quantity $\Delta(C,F)$ to be the coefficient of $U^{-1}$ in the expression \begin{equation}(\tr\circ (F\otimes \Phi^\vee)\circ \delta^{-1} \circ \cotr)(1).\label{eq:DeltaCFdefinition}\end{equation} The following observation is fundamental to the proof of Theorem~\ref{thm:mixedinvariantmappingtorus}, though the proof is essentially a straightforward computation: \begin{customprop}{\ref{prop:algebraicmappingtorus}} Suppose that $C$ is a free, finitely generated chain complex over $\bF_2[[U]]$ such that $H^+(C)$ is finite dimensional over $\bF_2$. If $F:C\to C$ is a chain map which preserves the relative grading and commutes with the action of $U$, then \[\Delta(C,F)=\Lef\big(F_*:H^+(C)\to H^+(C)\big).\] \end{customprop} \begin{proof} Applying Lemma~\ref{lem:classificationfgPID} shows that $C$ is chain homotopy equivalent to a sum of complexes of the form $\ve{a}\xrightarrow{U^n} \ve{b}$. Let us first verify the case that $C$ consists of a single 2-step complex, and $F:C\to C$ is the identity map. In this case \[\Lef\big(F_*:H^+(C)\to H^+(C)\big)=\chi(H^+(C))=n.\] The dual complex $C^\vee$ is the 2-step complex $\ve{b}^\vee\xrightarrow{U^n} \ve{a}^\vee$. 
The complex $(C\otimes C^\vee)^-$ is shown below: \[(C\otimes C^\vee)^-=\begin{tikzcd}& \ve{a}\ve{b}^\vee\arrow[swap]{dl}{U^n} \arrow{dr}{U^n}&\\ \ve{bb}^\vee\arrow[swap]{dr}{U^n} && \ve{aa}^\vee\arrow{dl}{U^n}\\ & \ve{ba}^\vee& \end{tikzcd}.\] With this notation, the map $1\otimes \Phi^\vee$ takes the form \[(1\otimes \Phi^\vee)=\begin{tikzcd}& \ve{ab}^\vee \arrow{dr}{nU^{n-1}}&\\ \ve{bb}^\vee\arrow{dr}{nU^{n-1}} && \ve{aa}^\vee\\ & \ve{ba}^\vee& \end{tikzcd}.\] The homology group $H^-(C\otimes C^\vee)$ is equal to \[H^-(C\otimes C^\vee)=\{[U^i(\ve{aa}^\vee+\ve{bb}^\vee)], [U^i(\ve{ba}^\vee)]:\, 0\le i\le n-1\}.\] It is easy to compute that \[H^+(C\otimes C^\vee)=\{[U^i \ve{ab}^\vee], [U^i \ve{aa}^\vee], [U^i \ve{bb}^\vee]: -n\le i\le -1\}/([U^i \ve{aa}^\vee]= [U^i \ve{bb}^\vee]).\] The connecting homomorphism $\delta$ is computed to satisfy \[\delta([U^i \ve{a}\ve{b}^\vee])=[U^{i+n}(\ve{aa}^\vee+\ve{bb}^\vee)]\qquad \text{and} \qquad \delta([U^i\ve{aa}^\vee])=[U^{i+n} \ve{ba}^\vee].\] It is now an easy matter to compute \[(\tr\circ (\id\otimes\Phi^\vee)\circ \delta^{-1}\circ \cotr) (1)=nU^{-1},\] which verifies the claim in this case, since $n=\chi(H^+(C))=\Lef(\id_{H^+(C)}:H^+(C)\to H^+(C))$. We now consider the case that $C$ is still the 2-step complex $\ve{a}\xrightarrow{U^n} \ve{b}$, but $F:C\to C$ is an arbitrary chain map which preserves the relative grading. Since $F$ preserves the relative grading, it follows that $F(\ve{a})=p(U) \ve{a}$ and $F(\ve{b})=q(U)\ve{b}$ for some $p(U)$ and $q(U)$. Since $F$ is a chain map, it follows that $p(U)=q(U)$. Hence $F$ is equal to multiplication by $p(U)$, for some $p(U)\in \bF_2[[U]]$. Write \[p(U)=k+U p_0(U),\] where $k\in \bF_2$ and $p_0(U)\in \bF_2[[U]]$. Clearly \[\Lef\big(F_*:H^+(C)\to H^+(C)\big)=k n.\] On the other hand, it is easy to compute that \[(\tr\circ (F\otimes\Phi^\vee)\circ \delta^{-1}\circ \cotr)(1)=knU^{-1},\] so that $\Delta(C,F)=\Lef\big(F_*:H^+(C)\to H^+(C)\big)$. 
Finally, having verified the claim for 2-step complexes, we need to verify it for direct sums of 2-step complexes. Let us write $C=C_1\oplus \cdots \oplus C_n$, where each $C_i$ is a 2-step complex. We have \[H^+(C)=H^+(C_1)\oplus \cdots \oplus H^+(C_n),\] and we can write \[C\otimes C^{\vee}=\sum_{i,j} C_i\otimes C_j^\vee.\] We can decompose $F$ and $\Phi^\vee$ as \[F=\sum_{i,j} F_{ij},\qquad\text{and} \qquad \Phi^\vee=\sum_{k=1}^n \Phi_k^\vee \] where $F_{ij}=\pi_j\circ F\circ \pi_i$ and $\pi_i:C\to C$ is projection onto the $C_i$ summand, and $\Phi_i^\vee$ is defined similarly. The trace and cotrace maps on $C\otimes C^\vee$ decompose as \[\cotr=\sum_{i=1}^n \cotr_i, \qquad \text{and} \qquad \tr=\sum_{i=1}^n \tr_i,\] where $\tr_i$ and $\cotr_i$ are the trace and cotrace maps on $C_i\otimes C_i^\vee$. We can write \[\delta=\sum_{i,j}\delta_{i}^j\] where $\delta_{i}^j$ is the connecting homomorphism for the complex $C_i\otimes C_j^\vee$. Finally, it is a simple matter to compute $\Delta(C,F)$ from the definition: \begin{align*}&(\tr\circ (F\otimes \Phi^\vee)\circ \delta^{-1}\circ \cotr)(1)\\ =&\Bigg(\bigg(\sum_{i=1}^n \tr_i\bigg)\circ \bigg(\sum_{i,j,k} F_{ij}\otimes \Phi_k^\vee\bigg)\circ \bigg(\sum_{i,j} (\delta_i^j)^{-1}\bigg)\circ \bigg(\sum_{i=1}^n \cotr_i\bigg) \Bigg)(1)\\ =&\sum_{i=1}^n (\tr_i \circ (F_{ii}\otimes \Phi_i^\vee)\circ (\delta_{i}^{i})^{-1}\circ \cotr_i)(1). \end{align*} By our result for 2-step complexes, the above is equal to \[\sum_{i=1}^n \Lef\big((F_{ii})_*:H^+(C_i)\to H^+(C_i)\big)\cdot U^{-1},\] which is clearly $\Lef\big(F_*:H^+(C)\to H^+(C) \big)\cdot U^{-1}$, completing the proof. \end{proof} \begin{rem}\label{rem:signassignments} The above result can also be stated over $\Q[[U]]$, though we must modify the maps $\tr$ and $\Phi$ to take into account signs.
Writing $\epsilon(\xs)=(-1)^{\gr(\xs)}$, the graded version of the map $\tr$ is defined by \[\tr(\ve{x}\otimes \ve{y}):=\epsilon(\xs) \ve{y}(\ve{x}).\] Similarly, a graded version of the map $\Phi$ can be defined as \[\Phi(\ve{x}_i)=\sum_{j=1}^n \epsilon(\ve{x}_i) P_{ij}'\cdot \ve{x}_j,\] where $P_{ij}$ are as in Equation \eqref{eq:Pijdef}. For the 2-step complex $\ve{a}\xrightarrow{U^n} \ve{b}$, an easy computation shows that $\Delta(C,\id_C)=\epsilon(\ve{b})\cdot n$, which is the Euler characteristic of $H^+(C)$, over $\Q$. \end{rem} \section{The graph TQFT for Heegaard Floer homology} \label{sec:graphTQFT} In this section, we provide an overview of the graph TQFT for Heegaard Floer homology, constructed by the author \cite{ZemGraphTQFT}, and prove some properties which are relevant to this paper. The graph TQFT uses the following notion of cobordism between multi-pointed 3-manifolds: \begin{define} \begin{enumerate} \item A \textbf{ribbon graph} is a graph with no valence zero vertices, together with a choice of cyclic ordering of the edges adjacent to each vertex. \item A \textbf{ribbon graph cobordism} $(W,\Gamma):(Y_1,\ve{w}_1)\to (Y_2,\ve{w}_2)$ between two multi-pointed 3-manifolds is a pair consisting of a 4-manifold $W$ with $\d W=-Y_1\sqcup Y_2$ as well as a ribbon graph $\Gamma\subset W$ such that $\Gamma\cap Y_i=\ve{w}_i$ and each basepoint of $\ve{w}_i$ has valence 1 in $\Gamma$. \end{enumerate} \end{define} To a ribbon graph cobordism $(W,\Gamma):(Y_1,\ve{w}_1)\to (Y_2,\ve{w}_2)$, equipped with a $\Spin^c$ structure $\frs$ on $W$, the author constructs a cobordism map \[F_{W,\Gamma,\frs}^A: \CF^-(Y_1,\ve{w}_1,\frs|_{Y_1})\to \CF^-(Y_2,\ve{w}_2,\frs|_{Y_2}),\] in \cite{ZemGraphTQFT}. The graph cobordism maps from \cite{ZemGraphTQFT} extend the construction of cobordism maps due to Ozsv\'{a}th and Szab\'{o} in \cite{OSTriangles} (which are implicitly for cobordisms equipped with a path from the basepoint in $Y_1$ to the basepoint in $Y_2$).
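For readers who wish to experiment with ribbon structures computationally, the following minimal sketch (our own illustrative model, not taken from \cite{ZemGraphTQFT}; all names are ours) records the cyclic ordering at each vertex as a tuple, normalized up to rotation. Reversing every ordering models the operation $\Gamma\mapsto\bar{\Gamma}$ which appears in Lemma~\ref{lem:reversecyclicordering} below.

```python
# Illustrative model of a ribbon structure: at each vertex we store the
# cyclic ordering of incident edges.  Two orderings are identified when
# they differ by a rotation, so we normalize to the least rotation.

def normalize(cyclic):
    """Return the lexicographically least rotation of a tuple."""
    n = len(cyclic)
    rotations = [cyclic[i:] + cyclic[:i] for i in range(n)]
    return min(rotations)

def same_cyclic_order(c1, c2):
    return normalize(tuple(c1)) == normalize(tuple(c2))

def reverse_ribbon(vertex_orders):
    """Model of Gamma -> bar-Gamma: reverse the cyclic order at every vertex."""
    return {v: normalize(tuple(reversed(order)))
            for v, order in vertex_orders.items()}

# A trivalent interior vertex with incident edges e1, e2, e3:
order = ('e1', 'e2', 'e3')
assert same_cyclic_order(order, ('e2', 'e3', 'e1'))      # rotations agree
assert not same_cyclic_order(order, ('e1', 'e3', 'e2'))  # a transposition does not
```

Since the reversal of a tuple of length at most two is one of its rotations, $\Gamma$ and $\bar{\Gamma}$ can only differ at vertices of valence at least three.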
We note that the map $F_{W,\Gamma,\frs}^A$ is denoted $F_{W,\Gamma,\frs}$ in \cite{ZemGraphTQFT}. There is a variation, written $F_{W,\Gamma,\frs}^B$, which we will also consider in this paper. In Sections~\ref{sec:mapsandrelations} and \ref{sec:outlineofconstruction}, we will provide more details about the construction of the graph cobordism maps, and explain the distinction between the type-$A$ maps and the type-$B$ maps. In Sections~\ref{sec:furtherrelationsofGraphTQFT}, \ref{sec:graphsandhomologyactions} and \ref{sec:relativehomologyandTriangles} we will describe some relations which are satisfied by the graph cobordism maps. \subsection{Ingredients of the graph TQFT} \label{sec:mapsandrelations} Before summarizing the construction of the graph cobordism maps from \cite{ZemGraphTQFT}, we will describe some maps which feature in the construction, and some important relations for these maps. The following maps are used in the construction: \begin{enumerate} \item \label{graphmap1} Handle attachment maps for 1-, 2-, and 3-handles, attached away from the basepoints. \item \label{graphmap2}Handle attachment maps for 0- and 4-handles, which add or remove a copy of $S^3$, with a single basepoint. \item \label{graphmap3}\textit{Free-stabilization maps} for adding or removing basepoints in a 3-manifold. \item \label{graphmap4}\textit{Relative homology maps} associated to paths between two basepoints in a multi-pointed 3-manifold. \end{enumerate} The original cobordism maps from \cite{OSTriangles} are built as a composition of maps of type \eqref{graphmap1} in the above list (handle attachment maps for handles of index 1, 2 and 3). We now describe the maps of type \eqref{graphmap2}, \eqref{graphmap3} and \eqref{graphmap4}, which are new to the construction in \cite{ZemGraphTQFT}. 
The 0-handle and 4-handle maps are defined using the canonical isomorphism \[\CF^-(Y\sqcup S^3, \ws\cup \{w_0\},\frs\sqcup \frs_0)\iso \CF^-(Y,\ws,\frs)\otimes_{\bF_2[U]} \CF^-(S^3,w_0,\frs_0).\] Under this isomorphism, the 0-handle map $F_0$ and the 4-handle map $F_4$ take the form \[F_0(\ve{x})=\ve{x}\times \ve{c}_0\qquad \text{and} \qquad F_4(\ve{x}\times \ve{c}_0)=\ve{x},\] where $\ve{c}_0\in \CF^-(S^3,w_0,\frs_0)$ is a cycle which generates the homology group $\HF^-(S^3,w_0,\frs_0)\iso \bF_2[U]$. The free-stabilization maps \[S_{w}^+:\CF^-(Y,\ve{w},\frs)\to \CF^-(Y,\ve{w}\cup \{w\},\frs)\] and \[S_w^-:\CF^-(Y,\ve{w}\cup \{w\},\frs)\to \CF^-(Y,\ve{w},\frs)\] are constructed somewhat analogously to the 1-handle and 3-handle maps defined by Ozsv\'{a}th and Szab\'{o}. One picks a diagram $(\Sigma,\as,\bs,\ws)$ for $(Y,\ws)$ such that $w\in \Sigma\setminus (\as\cup \bs)$. A diagram $(\Sigma,\as\cup \{\alpha_0\},\bs\cup \{\beta_0\},\ws\cup \{w\})$ for $(Y,\ws\cup \{w\})$ is then constructed by adding the basepoint $w$, as well as two new curves, $\alpha_0$ and $\beta_0$, both contained in a disk on $\Sigma$ which is disjoint from $\as\cup \bs\cup \ws$, such that $|\alpha_0\cap \beta_0|=2$. Writing $\theta^+$ and $\theta^-$ for the higher and lower graded intersection points of $\alpha_0\cap \beta_0$, the free-stabilization maps are defined by the formulas \[S_w^+(\ve{x})=\ve{x}\times \theta^+,\] \[S_w^-(\ve{x}\times \theta^+)=0\qquad \text{and} \qquad S_w^-(\ve{x}\times \theta^-)=\ve{x}.\] For an appropriately stretched almost complex structure, these are chain maps, which commute with the transition maps for changing the Heegaard diagram $(\Sigma,\as,\bs)$. See \cite{ZemGraphTQFT}*{Section~6} for more details on the construction. On the level of graph cobordisms, they are the maps induced by the graphs inside of $Y\times [0,1]$ on the left side of Figure~\ref{fig::29}.
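Since the free-stabilization maps are given by completely explicit formulas, the most basic compositions can be checked mechanically. The following toy sketch (our own notation, purely illustrative: coefficients are taken mod 2 and no differential is modeled, so only the displayed formulas themselves are being exercised) verifies that $S_w^-\circ S_w^+=0$, as in Relation~\ref{rel:R1} below, and that the composition $S_w^+\circ S_w^-$ squares to zero in this model.

```python
# Toy model of the free-stabilization formulas over F_2:
#   S+ :  x               |->  x (x) theta+
#   S- :  x (x) theta+    |->  0,     x (x) theta-  |->  x
# Elements are dicts {generator: coefficient mod 2}; stabilized
# generators are pairs (x, 'theta+') or (x, 'theta-').

def s_plus(elt):
    """Model of S_w^+: send x to x tensor theta^+."""
    return {(x, 'theta+'): c for x, c in elt.items() if c % 2}

def s_minus(elt):
    """Model of S_w^-: kill theta^+, strip theta^-."""
    out = {}
    for (x, theta), c in elt.items():
        if theta == 'theta-':
            out[x] = (out.get(x, 0) + c) % 2
    return {x: c for x, c in out.items() if c}

elt = {'x1': 1, 'x2': 1}
assert s_minus(s_plus(elt)) == {}          # S- S+ = 0, on the nose in this model

# The composite S+ S- (the model of Phi_w, cf. Relation (R2) below):
phi_model = lambda e: s_plus(s_minus(e))
assert phi_model({('x1', 'theta-'): 1}) == {('x1', 'theta+'): 1}
assert phi_model(phi_model({('x1', 'theta-'): 1})) == {}   # squares to zero
```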
We note that the formulas for the free-stabilization maps resemble the formulas for the 1-handle and 3-handle maps from \cite{OSTriangles}. This is, of course, no accident. We can write $S_w^+$ as the composition of a 0-handle map (which adds a copy of $S^3$ and the basepoint $w$), followed by a canceling 1-handle map (which cancels the 0-handle topologically, but leaves the basepoint $w$ in $Y$). Analogously, the map $S_w^-$ can be written as a composition of a 3-handle map (with attaching 2-sphere bounding a small ball containing the basepoint $w$), followed by the 4-handle map. Another map which appears in the graph TQFT is the map $\Phi_w$, which is an endomorphism of the complex $\CF^-(Y,\ve{w},\frs)$ when $w\in \ve{w}$. It can be defined on $\CF^-(Y,\ws,\frs)$ by the formula \[\Phi_w(\ve{x})=U^{-1}\sum_{\ve{y}\in \bT_{\as}\cap \bT_{\bs}} \sum_{\substack{\phi\in \pi_2(\ve{x},\ve{y})\\ \mu(\phi)=1}}n_w(\phi) \# \hat{\cM}(\phi) U^{n_{\ve{w}}(\phi)}\cdot \ve{y}.\] In Lemma~\ref{lem:phi=brokenpathcobordism}, we prove that the broken graph cobordism $(Y\times [0,1], \Gamma_w)$ on the right side of Figure~\ref{fig::29} induces the map $\Phi_w$. As the cobordism for $\Phi_w$ in Figure~\ref{fig::29} suggests, the map $\Phi_w$ satisfies $\Phi_w\simeq S_w^+S_w^-$ when $w$ is not the only basepoint (so that $S_w^+$ and $S_w^-$ can be defined). \begin{rem} If $w$ is the only basepoint on $Y$, then the map $\Phi_w$ coincides with the formal derivative of the differential on $\CF^-(Y,w,\frs)$, as defined in Section~\ref{sec:provelefschetznumberformula}. If $\ws=\{w_1,\dots, w_n\}$ is a collection of basepoints on $Y$, then the formal derivative map of $\CF^-(Y,\ws,\frs)$ is instead chain homotopic to $\Phi_{w_1}+\cdots +\Phi_{w_n}$. \end{rem} \begin{figure}[ht!]
\centering \input{fig29.pdf_tex} \caption{\textbf{The graph cobordism inducing the free-stabilization maps $S_w^+$ and $S_w^-$ (on the left) and the broken path graph cobordism $(Y\times [0,1], \Gamma_w)$ inducing the map $\Phi_w$ (on the right).} The maps $S_w^+$ and $S_w^-$ are only defined in the presence of an additional basepoint (so that both ends have at least one basepoint), whereas the map $\Phi_w$ is defined regardless of whether there are additional basepoints or not. \label{fig::29}} \end{figure} The next maps which feature in the construction of the graph TQFT are the relative homology maps. Suppose $(\Sigma,\as,\bs,\ve{w})$ is a multi-pointed Heegaard diagram, and $\lambda$ is a path on $\Sigma$ between two basepoints $w_1$ and $w_2$. If $\phi\in \pi_2(\xs,\ys)$, we can define a quantity $a(\lambda,\phi)\in \bF_2$ by summing the changes in multiplicity of $\phi$ across each of the $\as$ curves as one travels along the path $\lambda$. Using the quantities $a(\lambda,\phi)$, we can define a $-1$ graded endomorphism \[A_{\lambda}:\CF^-(\Sigma,\as,\bs,\ws,\frs)\to \CF^-(\Sigma,\as,\bs,\ws,\frs)\] using the formula \[A_{\lambda}(\ve{x})=\sum_{\ve{y}\in \bT_{\as}\cap \bT_{\bs}}\sum_{\substack{\phi\in \pi_2(\xs,\ys)\\ \mu(\phi)=1}} a(\lambda,\phi) \# \hat{\cM}(\phi)U^{n_{\ws}(\phi)}\cdot \ys.\] By counting the ends of 2-dimensional moduli spaces, one obtains the equality \[\d \circ A_\lambda+A_\lambda\circ \d=0.\] Note that one can consider more general Heegaard Floer complexes than the ones considered in this paper, where one associates a variable to each basepoint. In this more general case, if $\lambda$ is a path from $w_1$ to $w_2$, then $\d \circ A_{\lambda}+A_{\lambda}\circ \d=U_{w_1}+U_{w_2}$.
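The remark above, that for a single basepoint $\Phi_w$ is the formal $U$-derivative of the differential, can be illustrated numerically. Differentiating $\d^2=0$ over $\bF_2[U]$ with the Leibniz rule gives $\d'\circ \d+\d\circ \d'=0$ mod 2, so the entrywise derivative of the matrix of $\d$ is automatically a chain map. The sketch below (the differential is a toy example of our own choosing, not a Heegaard Floer differential) checks this for a small matrix, encoding $\bF_2[U]$-polynomials as bitmasks.

```python
# F_2[U]-polynomials encoded as ints: bit i is the coefficient of U^i.
# Addition is XOR; multiplication is carry-less (mod-2) multiplication.

def pmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pderiv(p):
    """Formal derivative mod 2: U^i -> i*U^(i-1), so only odd i survive."""
    r, i = 0, 1
    while p >> i:
        if (p >> i) & 1 and i % 2 == 1:
            r |= 1 << (i - 1)
        i += 1
    return r

def matmul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] ^= pmul(A[i][k], B[k][j])
    return C

# Toy differential on generators (x, y, z, w):
#   dx = U*y + U^2*z,  dy = U^2*w,  dz = U*w,  dw = 0.
U1, U2 = 0b10, 0b100
D = [[0,  0,  0,  0],
     [U1, 0,  0,  0],
     [U2, 0,  0,  0],
     [0,  U2, U1, 0]]
Phi = [[pderiv(e) for e in row] for row in D]  # entrywise U-derivative

ZERO = [[0] * 4 for _ in range(4)]
assert matmul(D, D) == ZERO                    # d^2 = 0 mod 2
anti = [[a ^ b for a, b in zip(r1, r2)]
        for r1, r2 in zip(matmul(D, Phi), matmul(Phi, D))]
assert anti == ZERO                            # d Phi + Phi d = 0 mod 2
```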
Instead of including the factor $a(\lambda,\phi)$, which counts the sums of changes in the multiplicities of $\phi$ across the $\as$ curves, in the definition of $A_\lambda$, one could instead include a factor obtained by counting changes across only the $\bs$ curves, which we will denote by $b(\lambda,\phi)$. Doing so yields a map $B_\lambda$, which is also a chain map. For a path $\lambda$ with ends on two basepoints, $w_1$ and $w_2$, the maps $A_\lambda$ and $B_{\lambda}$ are in general not equal, or even chain homotopic. Instead, since the quantities $a(\lambda,\phi)$ and $b(\lambda,\phi)$ satisfy \[a(\lambda,\phi)+b(\lambda,\phi)=n_{w_1}(\phi)-n_{w_2}(\phi),\] it follows that $A_{\lambda}$ and $B_{\lambda}$ satisfy the relation \begin{equation}A_{\lambda}+B_{\lambda}=U\Phi_{w_1}+U\Phi_{w_2}.\label{eq:Alambda+Blambda}\end{equation} For an immersed, closed curve $\gamma$ in $\Sigma$, one can define maps $A_{\gamma}$ and $B_{\gamma}$, using the same formula as the maps $A_{\lambda}$ and $B_{\lambda}$. When $\gamma$ is a closed curve in $\Sigma$, the maps $A_{\gamma}$ and $B_{\gamma}$ are chain maps. In contrast to Equation \eqref{eq:Alambda+Blambda}, since $a(\gamma,\phi)=b(\gamma,\phi)$ when $\gamma$ is a closed loop, we have that \[A_\gamma=B_{\gamma}.\] Indeed it is not hard to see that $A_\gamma$ is the map induced by the action of $H_1(Y;\Z)/\Tors$ described in \cite{OSDisks}, using the homology class induced by $\gamma$ under the inclusion $\Sigma\hookrightarrow Y$. We now list some useful relations between the maps described above, which will be important for this paper: \begin{enumerate}[label=($R$\arabic*), widest=10] \item \label{rel:R1}$S_{w}^-S_{w}^+\simeq 0$. \item \label{rel:R2}$S_{w}^+S_{w}^-\simeq \Phi_w$. \item \label{rel:R3'} $S_{w'}^{\pm} \Phi_w\simeq \Phi_w S_{w'}^{\pm}$ if $w\neq w'$. \item\label{rel:R3} $A_{\lambda_1}A_{\lambda_2}+A_{\lambda_2}A_{\lambda_1}\simeq \#((\d \lambda_1)\cap (\d \lambda_2))\cdot U$. 
\item\label{rel:R4} If $\lambda$ is a path from $w_1$ to $w_2$ and $\lambda'$ is a path from $w_2$ to $w_3$ then $A_{\lambda}+A_{\lambda'}=A_{\lambda'*\lambda}$, where $*$ denotes concatenation. \item\label{rel:R6} If $\lambda$ is a path from $w_1$ to $w_2$ (and $w_1\neq w_2$) then $A_{\lambda}^2\simeq U$. \item\label{rel:R6'} If $\lambda$ is a path from $w$ to another basepoint, then $\Phi_w A_{\lambda}+A_{\lambda}\Phi_w\simeq \id$. \item \label{rel:R7} $S_{w}^{\pm}S_{w'}^{\pm'}\simeq S_{w'}^{\pm'}S_{w}^{\pm}$. \item \label{rel:R7'} If $e$ is an edge which we can write as the concatenation of two edges, $e_1$ and $e_2$, whose intersection consists of a single vertex $v$, then $A_e\simeq S_v^- A_{e_2}A_{e_1} S_v^+$. \item \label{rel:R8} If $\lambda$ is a path from $w$ to another basepoint, then $S_{w}^-A_{\lambda}S_{w}^+\simeq \id$. \item \label{rel:R9} If $\lambda$ is a path from $w_1$ to $w_2$ (and $w_1\neq w_2$) then $S_{w_1}^- A_{\lambda} S_{w_2}^+\simeq \phi_*$, where $\phi$ is a diffeomorphism of $Y$ which moves $w_1$ to $w_2$ along $\lambda$, and is supported in a neighborhood of $\lambda$. \end{enumerate} Most of the above relations are proven in \cite{ZemGraphTQFT}. Relation~\ref{rel:R1} is immediate from the formulas for the maps. Relation~\ref{rel:R2} is \cite{ZemGraphTQFT}*{Lemma~14.16}. Relation~\ref{rel:R3'} can be proven from the fact that the maps $S_{w'}^{\pm}$ are chain maps on the complexes which have distinct variables for each basepoint, by adapting the proof of Lemma~\ref{lem:Phiwelldefineduptochainhomotopy}. Relation~\ref{rel:R3} is \cite{ZemGraphTQFT}*{Lemma~5.10}. Relation~\ref{rel:R4} follows from the fact that $a(\lambda*\lambda',\phi)=a(\lambda,\phi)+a(\lambda',\phi)$. Relation~\ref{rel:R6} is \cite{ZemGraphTQFT}*{Lemma~5.12}. Relation~\ref{rel:R6'} is \cite{ZemGraphTQFT}*{Lemma~14.7}. Relation~\ref{rel:R7} is \cite{ZemGraphTQFT}*{Proposition~6.23}. Relation~\ref{rel:R7'} is \cite{ZemGraphTQFT}*{Lemma~7.8}.
Relation~\ref{rel:R8} is \cite{ZemGraphTQFT}*{Lemma~14.12}. Relation~\ref{rel:R9} is \cite{ZemGraphTQFT}*{Theorem~14.13}. \subsection{Outline of the construction of the graph cobordism maps} \label{sec:outlineofconstruction} We now briefly summarize the construction of the graph cobordism maps, in terms of the maps described in the last section. The first step toward constructing the cobordism maps for a graph cobordism $(W,\Gamma)$ is to remove a collection of 4-balls from $W$, to ensure that all connected components of $W$ have at least one incoming and one outgoing boundary end. A single arc is added to $\Gamma$ for each 4-ball we remove. Each arc has one endpoint on a new boundary 3-sphere, and the other endpoint at a point on $\Gamma$. Writing $(W',\Gamma')$ for any graph cobordism obtained by puncturing $W$ and adding strands to $\Gamma$ in the above manner, the map $F_{W,\Gamma,\frs}^A$ is then defined as the composition of $F_{W',\Gamma',\frs|_{W'}}^A$ together with 0-handle and 4-handle maps for the 4-balls we removed. Using \cite{ZemGraphTQFT}*{Lemma~13.1}, it follows that the induced cobordism maps are independent of which 4-balls we removed, and which arcs we pick to connect them to $\Gamma$ (and in fact the maps are invariant under additional puncturing). We henceforth assume that each component of $W$ has non-empty incoming and outgoing ends. For a cobordism obtained by attaching 1-, 2- or 3-handles, the maps are essentially the same as the ones described by Ozsv\'{a}th and Szab\'{o} in \cite{OSTriangles}. For graph cobordisms of the form $(Y\times [0,1],\Gamma):(Y,\ws_1)\to (Y,\ws_2)$, the graph cobordism maps can be described as a composition of the free-stabilization maps, and the relative homology maps, as we now describe. It is more convenient to project the graph $\Gamma\subset Y\times [0,1]$ into $Y$, and define a map for a ribbon graph embedded in $Y$, which has designated incoming and outgoing vertices.
We call such a graph, embedded in $Y$, a \textbf{ribbon flow graph}, and write $\Gamma:\ws_1\to \ws_2$. For a ribbon flow graph $\Gamma:\ws_1\to \ws_2$ in $Y$, a map \[A_{\Gamma}: \CF^-(Y,\ws_1,\frs)\to \CF^-(Y,\ws_2,\frs)\] is constructed in \cite{ZemGraphTQFT}*{Section~7}, which we call the \textbf{type-$A$ graph action map}. The type-$A$ graph cobordism map for a graph cobordism $(Y\times [0,1],\Gamma)$ is equal to the type-$A$ graph action map for the graph obtained by projecting $\Gamma$ into $Y$. Given a flow graph $\Gamma:\ws_1\to \ws_2$ in $Y$, to construct the graph action map, one decomposes the graph $\Gamma$ into a sequence of \textit{elementary flow graphs} (possibly subdividing edges by adding vertices). In \cite{ZemGraphTQFT}, such a decomposition is described as a \textit{Cerf decomposition} of a flow graph. There are 3 types of elementary flow graphs (see \cite{ZemGraphTQFT}*{Definition~7.3}), as follows: \begin{enumerate}[label= Type (\arabic*)] \item\label{elementarygraphtype1} $|\ws_1|=|\ws_2|$ and each edge of $\Gamma$ connects a vertex in $\ws_1$ to a vertex in $\ws_2$. \item\label{elementarygraphtype2} There is a single vertex $v_0$ of $\Gamma$ which is not in $\ws_1$ or $\ws_2$, and all edges of $\Gamma$ connect either $\ws_1$ to $\ws_2$, or connect a point of $\ws_1$ or $\ws_2$ to $v_0$. \item\label{elementarygraphtype3} $|\ws_1|=|\ws_2|\pm 2$, and all edges of $\Gamma$, except for a single edge $e$, connect $\ws_1$ to $\ws_2$. Furthermore $e$ connects two vertices of $\ws_1$ together, or connects two vertices of $\ws_2$ together. \end{enumerate} Examples of elementary flow graphs are shown in Figure~\ref{fig::56}. \begin{figure}[ht!] \centering \input{fig56.pdf_tex} \caption{\textbf{Examples of the three types of elementary flow graphs.} These graphs are embedded in a fixed 3-manifold $Y$.
\label{fig::56}} \end{figure} The graph action map $A_\Gamma$ for an elementary flow graph $\Gamma:\ws_1\to \ws_2$ of \ref{elementarygraphtype1} is equal to the composition of $| E(\Gamma)|$ terms of the form \begin{equation}S_{w_1}^- A_{e} S_{w_2}^+,\label{eq:elemengraphtype1}\end{equation} where $e$ is an edge of $\Gamma$ and $w_1$ and $w_2$ are the incoming and outgoing ends of $e$. Note that by \ref{rel:R9}, the induced map $A_{\Gamma}$ is the diffeomorphism map obtained by moving $\ws_1$ to $\ws_2$ along the edges of $\Gamma$. Also, we note that it's easy to use Relations~\ref{rel:R1}--\ref{rel:R9} to see that the map is independent of the ordering of the terms of the form $S_{w_1}^- A_e S_{w_2}^+$. The graph action map $A_{\Gamma}$ for an elementary flow graph $\Gamma:\ws_1\to \ws_2$ of \ref{elementarygraphtype2} is defined as follows. Let $v_0$ be the interior vertex, and let $e_1,\dots, e_n$ denote the edges adjacent to $v_0$, ordered in any way which is compatible with the cyclic ordering. The graph action map is defined as a composition of expressions of the form shown in Equation \eqref{eq:elemengraphtype1}, for the edges $e$ which are not incident to $v_0$, as well as the map \begin{equation}S_{v_0}^- A_{e_n}\cdots A_{e_1} S_{v_0}^+.\label{eq:elemengraphtype2}\end{equation} The map appearing in Equation \eqref{eq:elemengraphtype2} turns out to be invariant under cyclic permutation of the edges $e_1,\dots, e_n$, and hence only depends on their cyclic order \cite{ZemGraphTQFT}*{Lemma~7.12}. Finally, the graph action map $A_{\Gamma}$ for an elementary flow graph of \ref{elementarygraphtype3} is defined as follows. Let $e$ denote the edge of $\Gamma$ which does not have a vertex in both $\ws_1$ and $\ws_2$, and write $v_1$ and $v_2$ for the two vertices of $e$.
If $v_1,v_2\in \ws_1$, the map $A_{\Gamma}$ is defined as a composition of terms as in Equation \eqref{eq:elemengraphtype1}, for the edges of $\Gamma$ which have an end in both $\ws_1$ and $\ws_2$, as well as a single term of the form \[S^-_{v_1}S^-_{v_2} A_e.\] If instead $v_1,v_2\in \ws_2$, the map is defined by replacing the above expression with $A_e S_{v_1}^+S_{v_2}^+$. The map $A_{\Gamma}$ is independent, up to chain homotopy, of the choice of Cerf decomposition of the flow graph $\Gamma$, and is also invariant under subdivision of the edges of $\Gamma$ \cite{ZemGraphTQFT}*{Theorem~B}. We note that the map $A_{\Gamma}$ is defined somewhat asymmetrically, since we chose to use the $A_{\lambda}$ maps, which count changes across the $\as$ curves. We could instead define a graph action map $B_{\Gamma}$, by using the construction described above and replacing each instance of $A_{\lambda}$ with $B_{\lambda}$. The type-$A$ graph cobordism map is defined as a composition of the 0-, 1-, 2-, 3- and 4-handle maps, as well as the type-$A$ graph action map. The type-$B$ graph cobordism map is defined similarly, but using the type-$B$ graph action map. The type-$A$ and type-$B$ versions of the graph cobordism maps are related as follows: \begin{lem}[\cite{HMZConnectedSum}*{Lemma~5.9}] \label{lem:reversecyclicordering} The type-$A$ and type-$B$ graph cobordism maps satisfy the relation \[F_{W,\Gamma,\frs}^A\simeq F_{W,\bar{\Gamma},\frs}^B,\] where $\bar{\Gamma}$ is the ribbon graph obtained by reversing the cyclic orderings on $\Gamma$. \end{lem} \subsection{Further relations in the graph TQFT} \label{sec:furtherrelationsofGraphTQFT} In this section, we discuss two useful relations. The first is the vertex breaking relation, which computes the effect of changing the cyclic ordering at a vertex, and the second is the computation of the broken path cobordism map.
\begin{lem}\label{lem:vertexbreakingrelation}Suppose that $(W,\Gamma)$ is a graph cobordism, and $v_0$ is a vertex in the interior of $\Gamma$, and $e_1$ and $e_2$ are two edges incident to $v_0$, which are adjacent in the cyclic ordering. Let $\Gamma'$ denote the ribbon graph obtained by switching the relative ordering of $e_1$ and $e_2$. Let $\Gamma''$ denote the graph obtained by removing a connected subarc from the interiors of each of the edges $e_1$ and $e_2$ (as in Figure~\ref{fig::39}). Then \[F_{W,\Gamma,\frs}^A+F_{W,\Gamma',\frs}^A\simeq U\cdot F_{W,\Gamma'',\frs}^A.\] The same relation holds with all three type-$A$ maps replaced by type-$B$ maps. \end{lem} \begin{figure}[ht!] \centering \input{fig39.pdf_tex} \caption{\textbf{The vertex breaking relation for changing the relative ordering of two edges.} Note that it's not necessary to change the embedding of the graph in the middle diagram (we are actually just changing the cyclic ordering), though we do so for notational clarity. \label{fig::39}} \end{figure} \begin{proof} It is sufficient to show the analogous relation for the graph action map. Suppose that $\Gamma:\ws_1\to \ws_2$ is a ribbon flow graph, embedded in $Y$. Since the graph action map is defined as a composition of the graph action maps for elementary flow graphs, only one of which will contain the vertex $v_0$, it is sufficient to show the claim for an elementary flow graph of \ref{elementarygraphtype2}, which contains the vertex $v_0$ as the interior vertex. Let $e_1,\dots, e_n$ be the edges adjacent to $v_0$, indexed compatibly with their cyclic ordering. Let $\ws_1'$ denote the set of vertices in $\ws_1$ which are adjacent to the edges in $e_1,\dots, e_n$, and let $\ws_2'$ denote the vertices in $\ws_2$ which are adjacent to an edge in $e_1,\dots, e_n$.
Write $S_{\ws_1'}^-$ for the composition of the maps $S_v^-$ for $v\in \ws_1'$ (note that by Relation~\ref{rel:R7} the order of the vertices in $\ws_1'$ does not affect the composition). By definition, the graph action map is a composition of maps as in Equation \eqref{eq:elemengraphtype1}, for edges not adjacent to $v_0$, as well as a term of the form \[S_{\ws_1'}^- A_{e_n}\cdots A_{e_2}A_{e_1}S_{\ws_2'}^+.\] Using Relation~\ref{rel:R3}, we obtain the relation \[S_{\ws_1'}^- A_{e_n}\cdots A_{e_2}A_{e_1}S_{\ws_2'}^++S_{\ws_1'}^- A_{e_n}\cdots A_{e_1}A_{e_2}S_{\ws_2'}^+\simeq U \cdot S_{\ws_1'}^- A_{e_n}\cdots A_{e_3} S_{\ws_2'}^+.\] By definition, the expression $S_{\ws_1'}^- A_{e_n}\cdots A_{e_1}A_{e_2}S_{\ws_2'}^+$ is the graph action map $A_{\Gamma'}$ (technically there are other terms in the composition for edges $e$ which go from $\ws_1$ to $\ws_2$, which we are not writing; however, these coincide for all three maps). We now claim that the third expression $S_{\ws_1'}^- A_{e_n}\cdots A_{e_3} S_{\ws_2'}^+$ is chain homotopic to the graph action map $A_{\Gamma''}$. Note that this is not quite obvious, since it is not the expression induced by a Cerf decomposition of the graph $\Gamma''$; however, by adding trivial strands using Relation~\ref{rel:R8}, and subdividing edges using Relation~\ref{rel:R7'}, it is straightforward to manipulate the expression so that it is the map induced by a Cerf decomposition for the graph $\Gamma''$. \end{proof} We now consider the broken path cobordism $(Y\times [0,1], \Gamma_w)$, shown in Figure~\ref{fig::29}. \begin{lem}\label{lem:phi=brokenpathcobordism}If $(Y,\ws)$ is a multi-pointed 3-manifold (possibly with only one basepoint) and $(Y\times [0,1],\Gamma_w)$ is the broken path cobordism shown in Figure~\ref{fig::29}, then \[F_{Y\times [0,1],\Gamma_w,\frs}\simeq \Phi_w.\] The statement holds regardless of whether $|\ws|=1$ or $|\ws|>1$, and for both the $A$ and $B$ versions of the maps.
\end{lem} \begin{proof}If $|\ws|>1$, the result follows from Relation~\ref{rel:R2} since $(Y\times [0,1],\Gamma_w)$ is a composition of a free-destabilization cobordism and a free-stabilization cobordism. In the case when $\ws$ consists of a single point $w$, we use invariance of the graph cobordism maps under isotopies of the graph. We pick a new basepoint $w'\not\in \ws$, as well as a path $\lambda$ from $w$ to $w'$. There is a \textit{basepoint swapping} graph cobordism $(Y\times [0,1],\Gamma_{\lambda}^X)$ from $(Y,\{w,w'\})$ to $(Y, \{w,w'\})$ which swaps the basepoints $w$ and $w'$ along the path $\lambda$. This is obtained by picking a path $\lambda'$ which is isotopic to $\lambda$ in $Y$, but intersects $\lambda$ only at $\d \lambda$. The graph $\Gamma_{\lambda}^X$ is obtained by simultaneously moving $w$ to $w'$ along $\lambda$, and moving $w'$ to $w$ along $\lambda'$. The graph $\Gamma_{\lambda}^X\subset Y\times [0,1]$ is well defined up to isotopy, since $Y\times [0,1]$ is a 4-manifold (unlike the analogous situation in 3-manifolds, where there are two non-isotopic choices of crossings). We can decompose the broken path cobordism $(Y\times [0,1],\Gamma_w)$ into a composition of a free-stabilization graph cobordism at $w'$, followed by a basepoint swapping cobordism $(Y\times [0,1],\Gamma_{\lambda}^X)$, followed by a free-destabilization cobordism at $w'$. The configuration is shown in Figure~\ref{fig::44}. \begin{figure}[ht!] \centering \input{fig44.pdf_tex} \caption{\textbf{Manipulating the broken path cobordism $(Y\times [0,1],\Gamma_w)$ so that a basepoint swapping cobordism appears in the middle.} Unlike in 3 dimensions, the basepoint swapping graph cobordism $(W,\Gamma_{\lambda}^X)$ is well defined up to an isotopy of the graph, and the crossing shown in our picture carries no meaning. 
\label{fig::44}} \end{figure} By \cite{ZemGraphTQFT}*{Proposition~14.27}, the cobordism $(Y\times [0,1],\Gamma_{\lambda}^X)$ induces the map \[F_{Y\times [0,1],\Gamma_{\lambda}^X,\frs}^A\simeq \Phi_{w} A_{\lambda}+A_{\lambda} \Phi_{w'}.\] Using this, we compute that \begin{align*}F_{Y\times [0,1],\Gamma,\frs}^A&\simeq S_{w'}^-(\Phi_{w} A_{\lambda}+A_{\lambda} \Phi_{w'}) S_{w'}^+&&\\ &\simeq S_{w'}^- \Phi_{w} A_{\lambda} S_{w'}^+&& \text{\ref{rel:R1}, \ref{rel:R2}}\\ & \simeq \Phi_w S_{w'}^- A_{\lambda} S_{w'}^+&&\text{\ref{rel:R3'}}\\ & \simeq \Phi_w&& \text{\ref{rel:R8}}. \end{align*} Replacing the $A_\lambda$ maps with $B_{\lambda}$ yields the result for the type-$B$ maps, as well. \end{proof} \subsection{Graphs and the actions of \texorpdfstring{$U$}{U} and \texorpdfstring{$H_1(Y;\Z)/\Tors$}{H1(Y;Z)/Tors}} \label{sec:graphsandhomologyactions} In this section, we describe how the $\bF_2[U]$-module action and the action of $H_1(Y;\Z)/\Tors$ are encoded into the graph cobordism maps. We consider the graph cobordisms $(Y\times[0,1],\Gamma_\gamma)$ and $(Y\times [0,1], \Gamma_U)$ shown in Figure~\ref{fig::26}. \begin{figure}[ht!] \centering \input{fig26.pdf_tex} \caption{\textbf{Graph cobordisms for the action of $[\gamma]\in H_1(Y;\Z)/\Tors$ and for the action of $U$.} The two loops on the cobordism for the $U$ map are both null-homologous. \label{fig::26}} \end{figure} \begin{prop}\label{prop:spliceinloopsforUandH_1} Suppose $\gamma$ is a closed, embedded loop in $Y$ which passes through $w$, and let $\Gamma_\gamma\subset Y\times [0,1]$ be the graph $\Gamma_{\gamma}:=(\{w\}\times [0,1])\cup (\gamma\times \{\tfrac{1}{2}\})$, shown in Figure~\ref{fig::26}. Then \[F_{Y\times [0,1],\Gamma_\gamma,\frs}^A\simeq F_{Y\times [0,1],\Gamma_\gamma,\frs}^B\simeq A_{\gamma}\] where $A_\gamma$ denotes the action of $H_1(Y;\Z)/\Tors$. Let $\Gamma_U\subset Y\times [0,1]$ denote the graph on the right side of Figure~\ref{fig::26}.
Then \[F_{Y\times[0,1],\Gamma_U,\frs}^A\simeq F_{Y\times[0,1],\Gamma_U,\frs}^B\simeq U.\] \end{prop} It's convenient to break the computation into manageable pieces. If $\lambda$ is a path from $w$ to $w'$ in $Y$, then let $\Gamma^{\curlyvee}_{\lambda}$ and $\Gamma^{\curlywedge}_{\lambda}$ denote the graphs in $Y\times [0,1]$, with a trivalent vertex, shown in Figure~\ref{fig::48}. \begin{figure}[ht!] \centering \input{fig48.pdf_tex} \caption{\textbf{The graph cobordisms $(Y\times [0,1], \Gamma^\curlyvee_{\lambda})$ and $(Y\times [0,1], \Gamma^{\curlywedge}_{\lambda})$ considered in Lemma~\ref{lem:computationofYshapedgraphcobordisms}.} These depend on a choice of path, $\lambda$, from $w$ to $w'$ in $Y$.\label{fig::48}} \end{figure} \begin{lem}\label{lem:computationofYshapedgraphcobordisms}The graph cobordism maps for $(Y\times [0,1], \Gamma_{\lambda}^{\curlyvee})$ and $(Y\times [0,1], \Gamma_{\lambda}^\curlywedge)$ satisfy \[F_{Y\times [0,1], \Gamma_{\lambda}^{\curlyvee},\frs}^A \simeq B_{\lambda} S_{w'}^+ \qquad \text{and} \qquad F_{Y\times [0,1], \Gamma_{\lambda}^\curlywedge,\frs}^A\simeq S_{w'}^-B_{\lambda}.\] If $\bar{\Gamma}_{\lambda}^{\curlyvee}$ and $\bar{\Gamma}_{\lambda}^{\curlywedge}$ denote the graphs with the opposite cyclic order, then \[F_{Y\times [0,1], \bar{\Gamma}_{\lambda}^{\curlyvee},\frs}^A \simeq A_{\lambda} S_{w'}^+ \qquad \text{and} \qquad F_{Y\times [0,1], \bar{\Gamma}_{\lambda}^\curlywedge,\frs}^A\simeq S_{w'}^-A_{\lambda}.\] \end{lem} \begin{proof}In \cite{HMZConnectedSum}*{Lemmas~5.5 and 5.6} it is computed (directly from the definition of the graph action map) that \[F_{Y\times [0,1], \Gamma_{\lambda}^{\curlyvee},\frs}^A \simeq (A_{\lambda}+U\Phi_{w}) S_{w'}^+ \qquad \text{and} \qquad F_{Y\times [0,1], \Gamma_{\lambda}^\curlywedge,\frs}^A\simeq S_{w'}^-(A_\lambda+U\Phi_{w}).\] By using Equation \eqref{eq:Alambda+Blambda} and Relations~\ref{rel:R1} and \ref{rel:R2}, we see that \[(A_{\lambda}+U\Phi_{w}) S_{w'}^+\simeq
(B_{\lambda}+U\Phi_{w'})S_{w'}^+\simeq B_{\lambda}S_{w'}^+,\] and similarly \[S_{w'}^-(A_{\lambda}+U\Phi_{w}) \simeq S_{w'}^-(B_{\lambda}+U\Phi_{w'})\simeq S_{w'}^-B_{\lambda}.\] To prove the statements about the graph cobordisms with the opposite cyclic orders, we use Lemma~\ref{lem:reversecyclicordering}, which shows that the effect of switching the cyclic ordering is to replace the $A_{\lambda}$ maps with the $B_{\lambda}$ maps, and vice-versa. \end{proof} We can now compute the graph cobordism maps for $(Y\times [0,1], \Gamma_\gamma)$ and $(Y\times [0,1],\Gamma_U)$: \begin{proof}[Proof of Proposition~\ref{prop:spliceinloopsforUandH_1}]Let us first consider the cobordism $(Y\times [0,1], \Gamma_\gamma)$, shown on the left side of Figure~\ref{fig::26}. Write $\gamma$ as a concatenation of two arcs, $\lambda_1$ and $\lambda_2$, which both have one endpoint at $w$, as well as another endpoint at a new basepoint, $w'$. We first claim that the graph cobordism map for $(Y\times [0,1], \Gamma_\gamma)$ is invariant under splitting the single vertex in the interior of the graph into two vertices connected by an edge, as shown in Figure~\ref{fig::49}. One way of establishing this equality would be to compute directly from the definition. This is not hard, though somewhat tedious. For convenience, we will instead appeal to the link cobordism interpretation of the graph cobordism maps \cite{ZemCFLTQFT}*{Section~14}, from which it follows that ribbon-equivalent graphs induce the same map (see \cite{ZemCFLTQFT}*{Theorem~C}). After splitting this vertex into two vertices, the resulting graph cobordism can be written as the composition of the two graph cobordisms, $(Y\times [0,1], \Gamma_{\lambda_2}^{\curlywedge})$ and $(Y\times [0,1], \Gamma_{\lambda_1}^{\curlyvee})$. \begin{figure}[ht!] \centering \input{fig49.pdf_tex} \caption{\textbf{Computing the cobordism map for $(Y\times[0,1], \Gamma_\gamma)$.} On the left is the graph cobordism $(Y\times [0,1], \Gamma_\gamma)$.
In the middle is a ribbon equivalent graph cobordism. On the right is the composition of $(Y\times [0,1], \Gamma_{\lambda_2}^{\curlywedge})$ and $(Y\times [0,1], \Gamma_{\lambda_1}^{\curlyvee})$. The concatenation $\lambda_2* \lambda_1$ is equal to the closed curve $\gamma$. In Proposition~\ref{prop:spliceinloopsforUandH_1} we show that the induced map is the $H_1(Y;\Z)/\Tors$ action map, $A_\gamma$.\label{fig::49}} \end{figure} Lemma~\ref{lem:computationofYshapedgraphcobordisms} describes the maps induced by $(Y\times [0,1], \Gamma_{\lambda_2}^{\curlywedge})$ and $(Y\times [0,1], \Gamma_{\lambda_1}^{\curlyvee})$. Using that computation, we see that \[F_{Y\times [0,1],\Gamma_\gamma,\frs}\simeq S_{w'}^- B_{\lambda_2} B_{\lambda_1} S_{w'}^+.\] By Relation~\ref{rel:R7'}, this is chain homotopic to $B_{\gamma}$. We note also that $B_{\gamma}=A_{\gamma}$, since the quantities $a(\gamma,\phi)$ and $b(\gamma,\phi)$ agree for any homology class $\phi$, when $\gamma$ is a closed curve, completing the proof of the formula for the cobordism map for $(Y\times [0,1],\Gamma_\gamma)$. We now compute the map for the graph cobordism $(Y\times[0,1],\Gamma_U)$ from Figure~\ref{fig::26}. We use the previous result for $(Y\times [0,1],\Gamma_\gamma)$, as well as the vertex breaking relation from Lemma~\ref{lem:vertexbreakingrelation}. It is convenient to consider the more general case that the loops in $\Gamma_U$ are not necessarily null-homologous, and instead have homology classes $\gamma_1$ and $\gamma_2$. As shown in Figure~\ref{fig::28}, by using the vertex breaking relation, and the computation of the induced map of the left cobordism in Figure~\ref{fig::26}, we have that the cobordism on the right of Figure~\ref{fig::26} has induced map \[U+A_{\gamma_1}A_{\gamma_2}.\] The graph $\Gamma_U$ from the proposition statement is obtained by picking $\gamma_1$ and $\gamma_2$ to be null-homologous in $H_1(Y;\Z)$, so that $A_{\gamma_1}$ and $A_{\gamma_2}$ vanish.
The remaining term in the above formula is $U$, completing the proof. \end{proof} \begin{figure}[ht!] \centering \input{fig28.pdf_tex} \caption{\textbf{Computing the graph cobordism on the left by using the vertex breaking relation.} The vertex breaking relation is proven in Lemma~\ref{lem:vertexbreakingrelation}. The cyclic orders are counterclockwise, with respect to the page.\label{fig::28}} \end{figure} \subsection{Relative homology maps and holomorphic triangle maps} \label{sec:relativehomologyandTriangles} In this section, we describe the interaction of the relative homology maps with the holomorphic triangle maps. Suppose that $(\Sigma,\ve{\alpha},\ve{\beta},\ve{\gamma},\ve{w})$ is a multi-pointed Heegaard triple. In \cite{OSDisks}*{Section~8}, Ozsv\'{a}th and Szab\'{o} construct a 4-manifold $X_{\as,\bs,\gs}$, as well as a map \[\frs_{\ws}:\pi_2(\xs,\ys,\zs)\to \Spin^c(X_{\as,\bs,\gs}).\] The holomorphic triangle map \[F_{\as,\bs,\gs,\frs}:\CF^-(\Sigma,\ve{\alpha},\ve{\beta},\frs_{\as,\bs})\otimes_{\bF_2[U]} \CF^-(\Sigma,\ve{\beta},\ve{\gamma},\frs_{\bs,\gs})\to \CF^-(\Sigma,\ve{\alpha},\ve{\gamma},\frs_{\as,\gs})\] is defined by counting holomorphic triangles representing Maslov index 0 homology classes of triangles $\psi$ with $\frs_{\ws}(\psi)=\frs$. If $\lambda$ is a path between two basepoints on the Heegaard surface $\Sigma$, and $\phi$ is a homology class of disks, let $a(\lambda,\phi)$, $b(\lambda,\phi)$ and $c(\lambda,\phi)$ denote the sum of the changes of multiplicities of $\phi$, across only the $\ve{\alpha},\ve{\beta}$ or $\ve{\gamma}$ curves, respectively. Let $A_{\lambda},B_\lambda$ and $C_\lambda$ denote the maps which count holomorphic disks with an extra factor of $a(\lambda,\phi),b(\lambda,\phi)$ or $c(\lambda,\phi)$, respectively (on any of the three complexes involved in the triple).
We have the following: \begin{lem}\label{lem:graphactionandtriangles} If $\frs\in \Spin^c(X_{\as,\bs,\gs})$, then the holomorphic triangle map $F_{\as,\bs,\gs,\frs}$ satisfies the relations \begin{align*}F_{\as,\bs,\gs,\frs}(A_\lambda\otimes \id)+A_\lambda \circ F_{\as,\bs,\gs,\frs}(\id\otimes \id)&\simeq 0\\ F_{\as,\bs,\gs,\frs}(B_\lambda\otimes \id)+F_{\as,\bs,\gs,\frs}(\id\otimes B_\lambda)&\simeq 0\\ F_{\as,\bs,\gs,\frs}(\id\otimes C_\lambda)+C_\lambda\circ F_{\as,\bs,\gs,\frs}(\id\otimes \id)&\simeq 0.\end{align*} \end{lem} \begin{proof} Let us consider the first relation, as the other two follow from nearly identical arguments. We count the ends of Maslov index 1 triangles. The ends of the space of index 1 holomorphic triangles consists of pairs of an index 0 holomorphic triangle together with an index 1 holomorphic disk. If $\ve{x},\ve{y}$ and $\ve{z}$ are fixed intersection points, and $\psi\in \pi_2(\ve{x},\ve{y},\ve{z})$ is a Maslov index 1 homology class of triangles, the total number of ends of $\cM(\psi)$ is zero. Hence, summing over all such classes, for fixed $\xs,\ys$ and $\zs$, we get \[0=\sum_{\substack{\psi\in \pi_2(\ve{x},\ve{y},\ve{z})\\ \mu(\psi)=1\\ \frs_{\ve{w}}(\psi)=\frs}} a(\lambda,\psi)\# \d \cM(\psi)U^{n_{\ve{w}}(\psi)}.\] Furthermore, if $\psi$ is a class of homology triangles and $\phi$ is a class of homology disks, we have $a(\lambda,\psi+\phi)=a(\lambda,\psi)+a(\lambda,\phi)$. Also, if $\phi$ is a homology class of disks on $(\Sigma,\ve{\beta},\ve{\gamma})$ then $a(\lambda,\phi)=0$. Hence \[F_{\as,\bs,\gs,\frs}(A_\lambda\otimes \id)+A_\lambda F_{\as,\bs,\gs,\frs}(\id\otimes \id)=\d_{\as,\gs} H_{\as,\bs,\gs,\frs}^{A}+H_{\as,\bs,\gs,\frs}^{A}(\d_{\as,\bs}\otimes \id+\id\otimes \d_{\bs,\gs}),\] where $H_{\as,\bs,\gs,\frs}^{A}$ counts holomorphic triangles with $\frs_{\ve{w}}(\psi)=\frs$ with an additional factor of $a(\lambda,\psi)$. The other chain homotopies are constructed similarly. 
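Explicitly, unwinding the description above, the homotopy $H_{\as,\bs,\gs,\frs}^{A}$ is the weighted triangle count \[H_{\as,\bs,\gs,\frs}^{A}(\xs\otimes \ys)=\sum_{\zs\in \bT_{\as}\cap \bT_{\gs}}\sum_{\substack{\psi\in \pi_2(\xs,\ys,\zs)\\ \mu(\psi)=0\\ \frs_{\ve{w}}(\psi)=\frs}} a(\lambda,\psi)\, \#\cM(\psi)\, U^{n_{\ve{w}}(\psi)}\cdot \zs,\] and the homotopies for the remaining two relations are obtained by weighting the count with $b(\lambda,\psi)$ or $c(\lambda,\psi)$ instead.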
\end{proof} \section{Morse theory on the identity cobordism} \label{sec:handledecomposition} In this section, we describe a handle decomposition of the trace cobordism $Y\times [0,1]:-Y\sqcup Y\to \varnothing$, and also describe the singular homology of mapping tori. \subsection{A handle decomposition of the trace cobordism} \label{sec:handledecomptracecobordism} In this section, we describe how a Heegaard diagram for $Y$ induces a handle decomposition of the trace cobordism. Here it's more convenient to view the trace cobordism as $Y\times [-1,1]$, instead of $Y\times [0,1]$. Let $f:Y\to [1,3]$ be a Morse function which induces the diagram $(\Sigma,\as,\bs,\ws)$, i.e., $f$ is a Morse function which admits a gradient-like vector field such that the following hold: \begin{enumerate} \item $f$ has $|\ws|$ index 0 critical points, all with critical value 1. \item $f$ has $g(\Sigma)+|\ws|-1$ index 1 critical points, all with critical values in $(1,2)$, whose ascending manifolds intersect $\Sigma$ along $\as$. \item $f^{-1}(2)=\Sigma$. \item $f$ has $g(\Sigma)+|\ws|-1$ index 2 critical points, all with critical values in $(2,3)$ whose descending manifolds intersect $\Sigma$ along $\bs$. \item $f$ has $|\ws|$ index 3 critical points, all with critical value $3$. \end{enumerate} We construct a Morse function $F:Y\times [-1,1]\to [0,3]$ by the formula \[F(y,s)=(1-s^2)\cdot f(y).\] It is easy to see that the critical set of $F$ is equal to $\Crit(f)\times \{0\}\subset Y\times [-1,1]$. Furthermore, if $p$ is a critical point of $f$, then \[\ind_{(p,0)}(F)=\ind_p(f)+1.\] (Indeed, at $(p,0)$ the Hessian of $F$ is the direct sum of the Hessian of $f$ at $p$ with the $1\times 1$ block $-2f(p)$, which is negative since $f\ge 1$.) For our purposes, it is important to precisely describe the attaching spheres of the handles. To this end, we define the following submanifolds: \[W_t:=F^{-1}([0,t]),\qquad M_t:=F^{-1}(t),\qquad Y_t:= f^{-1}([t,3]),\qquad \text{and}\qquad \Sigma_t :=f^{-1}(t).
\] We now prove a useful lemma for describing the level sets of the trace cobordism: \begin{lem}\label{lem:levelsetdescription}Suppose that $t\in [1,3]$ is a regular value of $f$. The projection map $\pi_Y:Y\times [-1,1]\to Y$ restricts to a homeomorphism between $M_t\cap (Y\times [-1,0])$ and $Y_t$. The map $\pi_Y$ also restricts to a homeomorphism between $M_t\cap (Y\times [0,1])$ and $Y_t$. On each of the above sets, the map $\pi_Y$ is a diffeomorphism away from $Y\times \{0\}$. Putting these maps together yields a homeomorphism between $M_t$ and $Y_t\cup_{\Sigma_t} -Y_t$, which is a diffeomorphism away from $M_t\cap (Y\times \{0\})$. \end{lem} \begin{proof}To see that $\pi_Y$ induces a homeomorphism on the stated spaces, it is sufficient to show that it is bijective, and maps between the stated subsets, which is an easy exercise from the definitions of the maps $f$ and $F$. To see that $\pi_Y$ induces a local diffeomorphism on each of $M_t\cap (Y\times [-1,0))$ and $M_t\cap (Y\times (0,1])$, one simply needs to show that $\d/\d s\not \in T_{(y,s)}M_t$. However it is easily checked that $\d/\d s\in T_{(y,s)} M_t$ iff $s=0$. \end{proof} Using Lemma~\ref{lem:levelsetdescription}, we obtain the following description of the handles and their attaching spheres for $Y\times [-1,1]$: \begin{enumerate}[label=(Index \arabic*)] \item $F$ has $|\ws|$ index 1 critical points, corresponding to the index 0 critical points of $f$. All have critical value equal to 1. The attaching 0-sphere in $Y\sqcup -Y$ of each of these critical points is equal to the union of the corresponding index 0 critical point of $f$ in $Y$, together with its image in $-Y$. \item $F$ has $g(\Sigma)+|\ws|-1$ index 2 critical points, which have attaching sphere equal to the union of the descending flow lines of the index 1 critical points of $f$ in $Y$ (which have ends on the attaching 0-spheres of the 1-handles described above), concatenated with their images in $-Y$. 
They have critical values in $(1,2)$. The framings are discussed in more detail below. \item $F$ has $g(\Sigma)+|\ws|-1$ index 3 critical points. We can view the attaching spheres as being in $M_2=F^{-1}(2)$, which is homeomorphic to $U_{\bs}\cup_{\Sigma} -U_{\bs}$ by Lemma~\ref{lem:levelsetdescription}. The descending manifolds of the index $2$ critical points of $f$ are 2-dimensional disks in $U_{\bs}\subset Y$, which meet $\Sigma$ along the $\bs$ curves. The union of these disks with their images in $-U_{\bs}\subset -Y$ are spheres in $M_2=U_{\bs}\cup_{\Sigma} -U_{\bs}$, and these spheres are the attaching spheres of the index 3 critical points of $F$. They have critical values in $(2,3)$. \item $F$ has $|\ws|$ index 4 critical points, corresponding to the $|\ws|$ index 3 critical points of $f$. They all have critical value equal to $3$. \end{enumerate} We now discuss the framings of the index 2 critical points in somewhat more detail. Note that the exact framing of the attaching spheres of the 2-handles in $M_{1+\epsilon}\iso Y\, \#_{\ws} -Y$ (where $Y\, \#_{\ws} -Y$ denotes the manifold obtained by adding a connected sum tube near each point in $\ws$) depends on some additional data (such as a choice of gradient-like vector field); however, the framing of the portion of the link in $Y$ can be taken to be the mirror of the framing of the portion in $-Y$. Up to isotopy, a framing is uniquely determined by this property. \begin{rem}A handle decomposition of the cotrace cobordism can be obtained by turning around the above decomposition for the trace cobordism. \end{rem} \subsection{Singular homology of mapping tori} Given a diffeomorphism $\phi:Y^3\to Y^3$, we now describe the singular homology of $X_\phi$. First, let us recall the algebraic mapping cone construction.
If $(C_1,\d_1)$ and $(C_2,\d_2)$ are two chain complexes, and $F:C_1\to C_2$ is a degree zero chain map, the \textbf{mapping cone of $F$}, written as \[\Cone(C_1\xrightarrow{F} C_2),\] is defined to be the complex \[\Cone(C_1\xrightarrow{F} C_2):=C_1[1]\oplus C_2,\] with differential \[\d=\begin{pmatrix}-\d_1& 0\\ F& \d_2 \end{pmatrix}.\] \begin{prop}\label{prop:CWhomologyofmappingtori}Suppose that $X^4$ is a smooth, oriented 4-manifold and $Y^3\subset X^4$ is a smooth, oriented, non-separating cut. Let $W^4$ be the result of cutting $X$ along $Y$, and let $\iota_0$ and $\iota_1$ denote the two inclusions of $Y$ into $W$ (corresponding to the two copies of $Y$ in $\d W$). Then $C_*^{CW}(X;\Z)$ is quasi-isomorphic to \[\Cone(C_*^{CW}(Y;\Z)\xrightarrow{(\iota_0)_*-(\iota_1)_*} C_*^{CW}(W;\Z)).\] \end{prop} \begin{rem}Note that the above proposition specializes in the case of a mapping torus to show that $C_*^{CW}(X_\phi;\Z)$ is quasi-isomorphic to $\Cone(C_*^{CW}(Y;\Z)\xrightarrow{\id_*-\phi_*} C_*^{CW}(Y;\Z))$. \end{rem} \begin{proof}[Proof of Proposition~\ref{prop:CWhomologyofmappingtori}] The proof is by explicit construction of a $CW$ decomposition of $X$ whose homology is that of the mapping cone. Pick a $CW$ decomposition of $Y$, and pick a $CW$ decomposition of $W$ which extends this fixed decomposition (on both boundary components). The $CW$ decomposition of $Y$ naturally yields a $CW$ decomposition of $Y\times [0,1]$, via the product construction. If $e_i$ is a cell of dimension $i$ in our decomposition for $Y$, then there are three cells in our decomposition for $Y\times [0,1]$, namely $e_i\times \{0\}, e_i\times \{1\}$ and $e_i\times [0,1]$. Furthermore \[\d(e_i\times [0,1])=e_i\times \{1\}-e_i\times \{0\}.\] We can construct a $CW$ decomposition of $X$, by taking our $CW$ decomposition for $W$, and adding in the cells of the form $e_i\times [0,1]$, where $e_i$ is a cell in $Y$. 
Manifestly, we have an isomorphism of groups \[C_*^{CW}(X;\Z)\iso C_*^{CW}(Y;\Z)[1]\oplus C_*^{CW}(W;\Z),\] and the differential on $C_*^{CW}(X;\Z)$ is given by \[\d=\begin{pmatrix}-\d_Y&0\\ (\iota_1)_*-(\iota_0)_*& \d_W \end{pmatrix},\] which is the mapping cone. \end{proof} \section{Generalized 1-handle and 3-handle maps} \label{sec:generalized1--handleand3--handlemaps} In \cite{OSTriangles}, Ozsv\'{a}th and Szab\'{o} define cobordism maps associated to attaching a 4-dimensional 1-handle or 3-handle. For our purposes, it is useful to define a map associated to attaching many 1-handles or 3-handles simultaneously. In this section we describe such a map, and prove some key properties which will be useful for our analysis of the trace and cotrace cobordisms. \subsection{Definition of the generalized 1- and 3-handle maps} \label{section:defgeneralized1-handlemaps} Suppose that $\cH=(\Sigma,\as,\bs,\ws)$ is a multi-pointed Heegaard diagram, and that $(\Sigma_0,\xis,\zetas,\ws_0)$ is a diagram for $(S^1\times S^2)^{\# g(\Sigma_0)}$. Using \cite{OSDisks}*{Lemma~9.1} (when $|\ws_0|=1$) and \cite{OSLinks}*{Proposition~6.5} (when $|\ws_0|>1$), we see that there are relatively graded isomorphisms \[\HF^-(\Sigma_0,\xis,\zetas,\ws_0)\iso V^{g(\Sigma_0)+|\ws_0|-1}\otimes_{\bF_2} \bF_2[U],\qquad \text{and} \qquad \hat{\HF}(\Sigma_0,\xis,\zetas,\ws_0)\iso V^{g(\Sigma_0)+|\ws_0|-1},\] where $V\iso H^1(S^1;\bF_2)$. In particular, there is a well defined top degree element on homology. We also assume that $|\zeta_i\cap \xi_j|=2 \delta_{ij}$, where $\delta_{ij}$ denotes the Kronecker delta. This last assumption implies that the top degree element of homology is realized as a top degree intersection point $\Theta_{\xis,\zetas}^+\in \bT_{\xis}\cap \bT_{\zetas}$. There is also a well defined bottom degree intersection point $\Theta^-_{\xis,\zetas}$. Note that we don't require $\xis$ to be small isotopies of the $\zetas$ curves; however, this is the simplest example.
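To make the simplest example concrete: if $g(\Sigma_0)=1$ and $\zetas=\{\zeta\}$ is a small Hamiltonian translate of $\xis=\{\xi\}$ meeting $\xi$ in two points, then $(\Sigma_0,\xis,\zetas,\ws_0)$ is the standard genus 1 diagram for $S^1\times S^2$. The two intersection points of $\xi$ and $\zeta$ are $\Theta^+_{\xis,\zetas}$ and $\Theta^-_{\xis,\zetas}$, and the two index 1 bigons from $\Theta^+_{\xis,\zetas}$ to $\Theta^-_{\xis,\zetas}$ miss the basepoint and cancel modulo 2, so the differential vanishes and \[\HF^-(\Sigma_0,\xis,\zetas,\ws_0)\iso \bF_2[U]\cdot \Theta^+_{\xis,\zetas}\oplus \bF_2[U]\cdot \Theta^-_{\xis,\zetas}\iso V\otimes_{\bF_2} \bF_2[U],\] with $\Theta^+_{\xis,\zetas}$ representing the top degree element.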
Suppose we have an embedding $f:\ws_0\to \Sigma\setminus (\as\cup \bs\cup \ws)$. Write $\ps\subset \Sigma$ for the image of $\ws_0$ under $f$. We join the diagrams $(\Sigma,\as,\bs,\ws)$ and $(\Sigma_0,\xis,\zetas,\ws_0)$ by adding a connected sum tube at each pair of points of $\ws_0$ and $\ps$ which are identified by $f$. We remove the basepoints $\ws_0$, and obtain a diagram \[(\Sigma\, \#_f \Sigma_0, \as\cup \xis, \bs\cup \zetas, \ws).\] An example is shown in Figure~\ref{fig::31}. We define the generalized 1-handle map \[F_1^{\xis,\zetas}:\CF^-(\Sigma,\as,\bs,\ve{w})\to \CF^-(\Sigma\, \#_f \Sigma_0, \as\cup \xis, \bs\cup \zetas, \ve{w})\] by the formula \begin{equation}F_1^{\xis,\zetas}(\ve{x})=\ve{x}\otimes \Theta^+_{\xis,\zetas}.\label{eq:formula1-handlemap}\end{equation} In the opposite direction, we define the generalized 3-handle map via the formula \[F_3^{\xis,\zetas}(\ve{x}\times \theta)=\ve{x}\cdot \langle \theta, \Theta^-_{\xis,\zetas} \rangle,\] where $\langle , \rangle:\CF^-(\Sigma_0, \xis,\zetas)\otimes_{\bF_2[U]} \CF^-(\Sigma_0, \xis,\zetas)\to \bF_2[U]$ is the map given by \begin{equation}\langle U^i\cdot \ve{x}, U^j\cdot \ve{y} \rangle=\begin{cases} U^{i+j}& \text{if } \xs=\ys,\\ 0& \text{otherwise.} \end{cases}\label{eq:pairingdefinition}\end{equation} \begin{figure}[ht!] \centering \input{fig31.pdf_tex} \caption{\textbf{An example of the generalized 1-handle operation.} The connected sum is taken at the points $\ps\subset \Sigma$ and $\ws_0\subset \Sigma_0$, using the identification given by $f$. Only a small portion of the Heegaard surface $\Sigma$ is shown.\label{fig::31}} \end{figure} \subsection{Holomorphic disks and the generalized 1-handle and 3-handle maps} We now show that the generalized 1-handle and 3-handle maps defined in Section~\ref{section:defgeneralized1-handlemaps} are chain maps, for appropriate choices of almost complex structures.
Write $\d_0$ for the differential on $\CF^-(\Sigma,\as,\bs)$ for an almost complex structure $J$, which is split on a cylindrical neighborhood of $\ve{p}\times [0,1]\times \R$. Write $\d_{J(T)}$ for the differential on $\CF^-(\Sigma\,\#_f \Sigma_0,\as\cup \xis,\bs\cup \zetas, \ws)$, for an almost complex structure $J(T)$ which is obtained by taking the connected sum of $J$ with an almost complex structure on $\Sigma_0\times [0,1]\times \R$, and inserting a neck of length $T$ into each connected sum tube. We will prove the following (compare \cite{OSProperties}*{Proposition~6.4}): \begin{prop}\label{prop:differentialcomp}Suppose that $(\Sigma,\as,\bs,\ws)$ is a Heegaard diagram with collection of points $\ps\subset \Sigma\setminus (\as\cup \bs\cup \ws)$. Further suppose that $(\Sigma_0,\xis,\zetas,\ws_0)$ is a diagram for $(S^1\times S^2)^{\# g(\Sigma_0)}$, satisfying the assumptions described in the previous section, and that $f:\ws_0\to \ps$ is a fixed bijection, so that the generalized 1-handle and 3-handle maps are defined. For sufficiently large $T$, the differential $\d_{J(T)}$ satisfies \[\d_{J(T)}\circ F_1^{\xis,\zetas}=F_1^{\xis,\zetas}\circ \d_0\qquad \text{and}\qquad \d_0\circ F_3^{\xis,\zetas}=F_3^{\xis,\zetas}\circ \d_{J(T)}.\] \end{prop} \begin{proof} The first relation can be restated as \[\langle \d_{J(T)}(\xs\times \Theta^+_{\xis,\zetas}), \ys\times \theta\rangle=\langle \d_0(\xs),\ys\rangle\cdot \langle \Theta^+_{\xis,\zetas}, \theta\rangle,\] where $\langle,\rangle$ denotes the pairing from Equation \eqref{eq:pairingdefinition}. The second relation can be restated as \[\langle \d_{J(T)}(\xs\times \theta), \ys\times \Theta^-_{\xis,\zetas} \rangle=\langle \d_0(\xs), \ys \rangle \cdot \langle \theta,\Theta^-_{\zetas,\xis} \rangle.\] Let us consider first the claim about $F_1^{\xis,\zetas}$. 
If $ \phi_0\in \pi_2( \theta,\ \theta')$ is a class on $(\Sigma_0,\xis,\zetas)$, then by the definition of the Maslov grading on $\CF^-(\Sigma_0,\xis,\zetas,\ws_0)$, one has \begin{equation} \mu(\phi_0)=2n_{\ve{w}_0}(\phi_0)+\gr(\theta,\theta'), \label{eq:Maslovindexgeneralized1-handle} \end{equation} where $\gr(\theta,\theta')$ is the drop in grading from $\theta$ to $\theta'$. Combining this with the Maslov index formula from \cite{LipshitzCylindrical}*{Equation~(6)} implies that if $\phi\# \phi_0\in \pi_2(\xs\times \theta,\ys\times \theta')$ is a class on $(\Sigma\#_f \Sigma_0, \as\cup \xis,\bs\cup \zetas,\ws),$ then \begin{equation} \label{eq:Maslovindexgeneralized1--handledisk} \begin{split}\mu(\phi\# \phi_0)&=\mu(\phi)+\mu(\phi_0)-n_{\ve{p}}(\phi)-n_{\ve{w}_0}(\phi_0) \\&=\mu(\phi)+\gr(\theta,\theta'). \end{split} \end{equation} Furthermore, if $\phi\#\phi_0$ has a representative for all large $T$, then we can extract a limit to a broken representative of both $\phi$ and $\phi_0$. In particular, by transversality of holomorphic curves of index 1 or less on $(\Sigma,\as,\bs)$, the existence of holomorphic representatives for arbitrarily large $T$ implies that $\mu(\phi)\ge 0$, with equality to zero iff $\phi$ is the constant class. Since we are computing $\d_{J(T)}(\ve{x}\times \Theta_{\xis,\zetas}^+)$, we only consider classes where $\theta=\Theta^+_{\xis,\zetas}$. Since $\Theta^+_{\xis,\zetas}$ is the top graded intersection point, we have that $\gr(\Theta_{\xis,\zetas}^+,\theta')\ge 0$ for any intersection point $\theta'\in \bT_{\xis}\cap \bT_{\zetas}$. In light of Equation \eqref{eq:Maslovindexgeneralized1--handledisk}, there are two cases which can arise: \begin{enumerate} \item $\mu(\phi)=0\qquad\text{and} \qquad \gr(\Theta_{\xis,\zetas}^+,\theta')=1$; \item $\mu(\phi)=1\qquad\text{and} \qquad \gr(\Theta_{\xis,\zetas}^+,\theta')=0$. 
\end{enumerate} In the first case, $\phi$ must be a constant disk, since it has Maslov index zero and admits a broken holomorphic representative for a cylindrical almost complex structure. Hence $\phi_0\in \pi_2(\Theta^+_{\xis,\zetas},\theta')$ is an index 1 disk which has zero multiplicity over $\ws_0$. As $\Theta_{\xis,\zetas}^+$ is a cycle in $\hat{\CF}(\Sigma_0,\xis,\zetas,\ws_0)$, all holomorphic disks of this form cancel, modulo 2, and hence (all together) such disks make no contribution to the differential $\d_{J(T)}(\ve{x}\otimes \Theta_{\xis,\zetas}^+)$. In the second case, we have $\theta=\theta'=\Theta_{\xis,\zetas}^+$. If $\phi\#\phi_0$ admits holomorphic representatives for arbitrarily long neck lengths, then we can extract (potentially broken) limiting curves $U$ and $U_0$ representing $\phi$ and $\phi_0$, respectively. Since $\mu(\phi)=1$, the broken curve $U$ consists of a single, non-broken holomorphic strip, $u$. Write $\ws_0=\{w_1,\dots, w_k\}$ and $\ps=\{p_1,\dots, p_k\}$, where the map $f:\ws_0\to \ps$ satisfies $f(w_i)=p_i$. There must be a holomorphic strip $u_0$ in the broken curve $U_0$ which matches $u$, i.e., which satisfies \[\rho^{\ve{p}}(u)=\rho^{\ws_0}(u_0),\] where \[\rho^{\ps}: \cM(\phi)\to \Sym^{n_1}(\bD)\times \cdots \times \Sym^{n_k}(\bD)\] is the map defined by the formula \[\rho^{\ps}(u)=\big((u\circ \pi_\bD)((u\circ \pi_\Sigma)^{-1}(p_1)),\dots,(u\circ \pi_\bD)((u\circ \pi_\Sigma)^{-1}(p_k))\big).\] Here $n_i$ is the integer \[n_i:=n_{p_i}(\phi)=n_{w_i}(\phi_0),\] and the map $\rho^{\ws_0}$ is defined analogously to $\rho^{\ps}$. The argument now diverges slightly, depending on whether $|\ws_0|=1$ or $|\ws_0|>1$. In the case that $|\ws_0|>1$, we can consider almost complex structures satisfying \ref{def:J1}--\ref{def:J5}, whereas when $|\ws_0|=1$, we will have to consider slightly more generic almost complex structures. We first consider the case that $|\ws_0|>1$, as this case is slightly simpler. 
We claim that the broken curve $U_0$, described above, consists of exactly the unbroken holomorphic strip $u_0$ (and no other curves). This follows from expected dimension counts and transversality, as we now describe. Write $\phi_0'$ for the homology class of $u_0$. For a generic choice of almost complex structure on $\Sigma\times [0,1]\times \R$, the set $\rho^{\ps}(u)$ will be disjoint from the fat diagonal whenever $u$ has Maslov index 1. In the case that $|\ws_0|>1$, it is not hard to see that the curve $u_0$ will satisfy conditions \ref{def:M1}--\ref{def:M3} (the important point being that since $\rho^{\ws_0}(u_0)$ is not in the fat diagonal, the curve $u_0$ must satisfy \ref{def:M3}). Using Proposition~\ref{prop:transversalitydisks} we see that for generic almost complex structure on $\Sigma_0\times [0,1]\times \R$, if $S_0$ denotes the source curve of $u_0$, and if $X\subset \Sym^{n_1}(\bD)\times \cdots \times \Sym^{n_k}(\bD)$ is a smooth submanifold avoiding the fat diagonal, then $\cM(S_0,\phi_0',X)$ is a smooth manifold of dimension $\mu(\phi_0')-\codim(X)-2\sing(u_0)$ near $u_0$. We consider \[X(\phi):=\{\rho^{\ps}(u): u\in \cM(\phi)\}\subset \Sym^{n_1}(\bD)\times \cdots \times \Sym^{n_k}(\bD),\] which has codimension $2(n_1+\cdots +n_k)-1$. Near $u_0$, we have \[\dim \cM(S_0,\phi_0',X(\phi))=\mu(\phi_0')-\codim(X(\phi))-2\sing(u_0)\le 1,\] with equality iff $u_0$ is embedded and $\mu(\phi_0')=2(n_1+\cdots+n_k)$. It follows that $u_0$ is an embedding and has Maslov index $2(n_1+\cdots +n_k)$. There can't be any remaining curves of $U_0$, since they would have Maslov index at least 1 by transversality, and hence would raise the Maslov index of $\phi_0$ above $2(n_1+\cdots+ n_k)$, contradicting our assumption. Hence $\phi_0'=\phi_0$.
Hence any sequence of curves representing $\phi\# \phi_0$ for a sequence of almost complex structures $J(T_i)$, with $T_i$ approaching $\infty$, limits to a pair $(u,u_0)\in \cM(\phi)\times \cM(\phi_0)$ which satisfies $\rho^{\ps}(u)=\rho^{\ws_0}(u_0)$. We call such a pair $(u,u_0)$ a \textit{prematched} disk. We now wish to use gluing results about holomorphic curves to describe a neighborhood of a prematched disk in the compactification of the space of holomorphic curves on $(\Sigma\, \#_f \Sigma_0)\times [0,1]\times \R$. Since $\phi$ has index 1, the space $\cM(\phi)/\R$ is a finite set, so we can use a ``Morse-type'' gluing lemma, where the asymptotics of the curves we are gluing are fixed. The relevant gluing theorem for our purposes is \cite{LipshitzCylindrical}*{Proposition~A.2}. If we were considering $\phi$ with higher Maslov index (so that both of the factors of the fibered product were positive dimensional manifolds), then we would need harder ``Morse-Bott'' gluing results like those described in \cite{BourgeoisMorseBott}. In our case, if the almost complex structures achieve transversality at $u$ and $u_0$, then it follows from \cite{LipshitzCylindrical}*{Proposition~A.2} that there is a neighborhood $\cU$ of $(u,u_0)$ in the compactification of the space of holomorphic disks on $(\Sigma\, \#_f\Sigma_0)\times [0,1]\times \R$ such that \[\cU\cap\bigg(\bigcup_{T>0}\hat{\cM}_{J(T)}(\phi\# \phi_0)\cup \{(u,u_0)\}\bigg)\iso (0,1].\] For a $\ve{d}\in \Sym^{n_1}(\bD)\times \cdots \times \Sym^{n_k}(\bD)$ which avoids the fat diagonal, we consider the space \[\cM(\phi_0,\ve{d}):=\{u\in \cM(\phi_0): \rho^{\ve{w}_0}(u)=\ve{d}\}.\] If $\ve{d}$ is a point which is not in the fat diagonal, then by Proposition~\ref{prop:transversalitydisks}, for a generic choice of almost complex structures, the space $\cM(\phi_0,\ve{d})$ is transversely cut out and a 0-dimensional manifold.
We will show that if we fix a $\ve{d}$ as above, then for a generic choice of almost complex structure we in fact have \begin{equation}\sum_{\substack{\phi_0\in \pi_2(\Theta_{\xis,\zetas}^+,\Theta_{\xis,\zetas}^+)\\ n_{w_i}(\phi_0)=n_i}} \#\cM(\phi_0,\ve{d})\equiv 1 \pmod{2}.\label{eq:maincountofdisks}\end{equation} Note that assuming Equation~\eqref{eq:maincountofdisks}, the main statement follows, since using the argument described thus far, we see that \begin{equation*}\begin{split}\d_{J(T)}(\ve{x}\times \Theta_{\xis,\zetas}^+)&=\sum_{\substack{\theta'\in \bT_{\as}\cap \bT_{\bs}\\ \phi\# \phi_0\in \pi_2(\xs\times \Theta^+_{\xis,\zetas}, \ys\times \theta')\\ \mu(\phi\# \phi_0)=1}}\# \hat{\cM}(\phi\# \phi_0) U^{n_{\ws}(\phi)}\cdot \ys\times \theta'\\ &=\sum_{\substack{\phi\in \pi_2(\xs,\ys)\\\mu(\phi)=1}} \sum_{\substack{\phi_0\in \pi_2(\Theta_{\xis,\zetas}^+, \Theta_{\xis,\zetas}^+)\\ n_{w_i}(\phi_0)=n_{p_i}(\phi)}} \#\hat{\cM}(\phi\# \phi_0) U^{n_{\ws}(\phi)} \cdot \ys\times \Theta_{\xis,\zetas}^+\\ &=\sum_{\substack{\phi\in \pi_2(\xs,\ys)\\\mu(\phi)=1}} \sum_{u \in \hat{\cM}(\phi)} \bigg(\sum_{\substack{\phi_0\in \pi_2(\Theta_{\xis,\zetas}^+, \Theta_{\xis,\zetas}^+)\\ n_{w_i}(\phi_0)=n_{p_i}(\phi)}} \# \cM(\phi_0, \rho^{\ps}(u))\bigg) U^{n_{\ws}(\phi)}\cdot \ys\times \Theta^+_{\xis,\zetas}\\ &=\sum_{\substack{\phi\in \pi_2(\xs,\ys)\\ \mu(\phi)=1}} \sum_{u\in \hat{\cM}(\phi)} U^{n_{\ws}(\phi)}\cdot \ys\times \Theta_{\xis,\zetas}^+\\ &=\d_0(\xs)\otimes \Theta_{\xis,\zetas}^+. \end{split}\end{equation*} Hence it remains to establish the count from Equation \eqref{eq:maincountofdisks}. To prove it, one first shows independence from $\ve{d}$. To do this, one picks a path $\ve{d}(t)$, avoiding the fat diagonal, between two points $\ve{d}(0)$ and $\ve{d}(1)$. Let us write $\ve{d}_t$ for the image of the path $\ve{d}(t)$.
One then considers the moduli space \[\cM(\ve{d}_t):=\bigcup_{\substack{\phi_0\in \pi_2(\Theta_{\xis,\zetas}^+,\Theta_{\xis,\zetas}^+)\\ \mu(\phi_0)=2(n_1+\cdots+n_k)}} \cM(\phi_0,\ve{d}_t).\] For generically chosen almost complex structure, the space $\cM(\ve{d}_t)$ is a 1-manifold, with ends which correspond to curves which match $\ve{d}(0)$ or $\ve{d}(1)$, as well as ends which correspond to Maslov index 1 holomorphic strips breaking off (note that by the dimension counts from Proposition~\ref{prop:transversalitydisks}, further degenerations do not occur for a generically chosen almost complex structure). The total number of ends corresponding to strip breaking is zero (mod 2), since $\hat{\d}(\Theta_{\xis,\zetas}^+)=0$. Having now established independence from $\ve{d}$, one considers a path $\ve{d}(t)$, where $t$ ranges over $[1,\infty)$, in $\Sym^{n_1}(\bD)\times \cdots \times \Sym^{n_k}(\bD)$, of $n_1+\cdots +n_k$ points which become spaced out more and more in $\bD$, and approach the $\bs$ boundary of $\bD$. The count is then reduced to counting the number of index 2 $\bs$ boundary degenerations which match a point in the upper half plane. According to \cite{OSLinks}*{Theorem~5.5}, the count of such curves is 1, modulo 2. Hence, by gluing $n_1+\dots+n_k$ curves together, we establish Equation \eqref{eq:maincountofdisks}, with $\ve{d}=\ve{d}_t$, for some sufficiently large $t$. We now consider the case that $|\ws_0|=1$. In this case, the conditions \ref{def:J1}--\ref{def:J5} don't prevent curves $u_0$ appearing which don't satisfy \ref{def:M3}, since for example a closed copy of $\Sigma_0\times \{(s,t)\}$ could appear. 
As in the proof of stabilization invariance from \cite{LipshitzCylindrical}*{Section~12}, one solution to this problem is to consider almost complex structures satisfying \ref{def:J1}--\ref{def:J4} and \ref{def:J5'}, which achieve transversality at curves satisfying \ref{def:M1}--\ref{def:M4} and \ref{def:M3'}, and have no multiply covered components. In this case, the assumption that $\ve{d}$ avoids the fat diagonal implies that no curve in $\cM(\phi_0,\ve{d})$ has a multiply covered component. Hence, for a generic choice of almost complex structure, the space $\cM(\phi_0,\ve{d})$ will be transversely cut out by Proposition~\ref{prop:transversalitydisks}. We now establish the count appearing in Equation \eqref{eq:maincountofdisks}. The argument proceeds in a familiar fashion. By considering holomorphic disks matching a point along a path $\ve{d}(t)$ avoiding the fat diagonal, we can establish that the count $\# \cM(\phi_0,\ve{d})$ is independent of $\ve{d}$. However, we cannot degenerate a collection $\ve{d}$ towards the boundary to reduce to the count of boundary degenerations since the almost complex structures we consider are not harshly perturbed near $\Sigma\times \{0,1\}\times \R$. Nevertheless, we describe a trick, which is (upon some reflection) morally equivalent. First, we note that by degenerating the set $\ve{d}$ into $|\ve{d}|$ points, spaced farther and farther apart in $[0,1]\times \R$, but not approaching $\{0,1\}\times \R$, we can reduce the count to the case when $\ve{d}$ consists of a single point $d\in [0,1]\times \R$. In this case, we wish to count $\cM(\phi_0,d)$ when $\phi_0$ is an index 2 class with domain $[\Sigma_0]$. To count $\cM(\phi_0,d)$ in this case, we argue by the following trick: we perform a free-stabilization at the point $w_0$ on the diagram $(\Sigma_0,\xis,\zetas,w_0)$.
There is an index 1 class, formed by taking the connected sum of the bigon $B$ on $(S^2,\alpha_0,\beta_0,w,w_0)$ which has multiplicity 1 in the connected sum region, together with the class $\phi_0$, as shown in Figure~\ref{fig::32}. For sufficiently stretched neck, by our previous argument, the count $\#\hat{\cM}(B\# \phi_0)$ is equal to $\# \cM(B)\cdot\#\cM(\phi_0,d)=\# \cM(\phi_0,d)$. On the other hand, we can splice in a new bigon in the stabilization region to $B\# \phi_0$, as in Figure~\ref{fig::32}. This yields a Maslov index 2 class. For sufficiently stretched almost complex structure, there are exactly two ends. One corresponds to the index 2 class breaking into a bigon, as well as a representative of $B\# \phi_0$. This strip breaking contributes $\# \cM(\phi_0,d)$ total ends to the moduli space. There is another end corresponding to an index 2 $\as$ boundary degeneration, which has 1 representative modulo 2 by \cite{OSLinks}*{Theorem~5.5}. As there are no other ends, we conclude that \[\#\cM(\phi_0,d)\equiv 1\pmod{2}.\] This completes the proof in the case that $|\ve{w}_0|=1$. The first equation in the proposition statement now follows. A straightforward modification proves the relation $\d_0\circ F_3^{\xis,\zetas}=F_3^{\xis,\zetas}\circ \d_{J(T)}$, completing the proof. \end{proof} \begin{figure}[ht!] \centering \input{fig32.pdf_tex} \caption{\textbf{Counting $\#\cM(\phi_0,d)$ when $|\ws_0|=1$.} The domain of the index 2 class $\phi_0$ is shown on top. A gluing argument shows that $\# \cM(\phi_0,d)$ is equal to the number of representatives of the class $B\# \phi_0$, shown on the lower left, on a free-stabilization of $(\Sigma_0,\xis,\zetas)$. To count the number of representatives of this class, we splice in another bigon, and count the ends of the index 2 moduli space on the bottom right. There are 2 ends, one corresponding to strip breaking (i.e.
a holomorphic representative of the bigon breaking off), and another corresponding to an $\as$ boundary degeneration forming. \label{fig::32}} \end{figure} \subsection{Holomorphic triangles and generalized 1-handle and 3-handle maps} We now address the interaction of the holomorphic triangle maps with the generalized 1-handle and 3-handle maps. The result should be thought of as a stronger version of the holomorphic triangle map computation used to show the well-definedness of the 1-handle map (\cite{OSTriangles}*{Theorem~4.10}). Suppose that $(\Sigma,\as,\bs,\gs,\ws)$ and $(\Sigma_0,\xis,\zetas,\taus,\ws_0)$ are Heegaard triples. Suppose further that $(\Sigma_0,\xis,\zetas,\taus,\ws_0)$ satisfies the following: \begin{enumerate}[label= ($T$\arabic*),ref= ($T$\arabic*)] \item\label{cond:triple1} The Heegaard triple $(\Sigma_0,\xis,\zetas,\taus,\ws_0)$ is related by a sequence of handleslides and isotopies to a triple where all three sets of attaching circles are equal. \item\label{cond:triple2} The collections $\xis,\zetas,\taus$ can be ordered so that $|\xi_i\cap \zeta_j|=|\xi_i\cap \tau_j|=|\zeta_i\cap \tau_j|=2\delta_{ij}$, where $\delta_{ij}$ denotes the Kronecker delta function. \end{enumerate} Condition~\ref{cond:triple1} allows us to interpret the triangle counts on $(\Sigma_0,\xis,\zetas,\taus,\ws_0)$ as being associated to a sequence of Heegaard moves on a diagram for $(S^1\times S^2)^{\# g(\Sigma_0)}$. Together, Conditions~\ref{cond:triple1} and \ref{cond:triple2} imply that there are well defined top degree intersection points \[\Theta_{\xis,\zetas}^+\in \bT_{\xis}\cap \bT_{\zetas},\qquad \Theta^+_{\zetas,\taus}\in \bT_{\zetas}\cap \bT_{\taus} ,\qquad \text{and} \qquad \Theta^+_{\xis,\taus}\in \bT_{\xis}\cap \bT_{\taus}.\] Similarly there are well defined bottom degree intersection points $\Theta_{\xis,\zetas}^-,\Theta^-_{\xis,\taus},$ and $\Theta^-_{\zetas,\taus}$. 
As in the previous section, if we pick an embedding $f:\ws_0\to \Sigma\setminus (\as\cup \bs\cup \gs\cup \ws)$, we can form the connected sum of the two Heegaard triples at the points identified by $f$, and obtain a Heegaard triple \[(\Sigma\, \#_f \Sigma_0, \as\cup \xis,\bs\cup \zetas,\gs\cup \taus,\ws).\] In \cite{OSDisks}*{Section~8} Ozsv\'{a}th and Szab\'{o} describe a 4-manifold $X_{\as,\bs,\gs}$ associated to the Heegaard triple $(\Sigma,\as,\bs,\gs)$. We review the construction presently. Let $\Delta$ denote a triangle, with edges labeled $e_{\as},e_{\bs}$ and $e_{\gs}$ (in that order, clockwise). We let $U_{\as}, U_{\bs}$ and $U_{\gs}$ denote the 3-dimensional handlebodies with boundary $\Sigma$ determined by the curves $\as,\bs,$ and $\gs$, respectively. The 4-manifold $X_{\as,\bs,\gs}$ is defined by \begin{equation}X_{\as,\bs,\gs}:=\big((\Sigma\times \Delta)\sqcup (U_{\as}\times e_{\as})\sqcup (U_{\bs}\times e_{\bs})\sqcup (U_{\gs}\times e_{\gs})\big)/{\sim}\label{eq:Xabgdef}\end{equation} where $\sim$ is the relation determined by gluing $U_{\sigmas} \times e_{\sigmas}$ to $\Sigma\times \Delta$ along $\Sigma \times e_{\sigmas}$ for each $\sigmas\in \{\as,\bs,\gs\}$, using the natural identification. We begin with the following topological lemma about 4-dimensional $\Spin^c$ structures: \begin{lem}\label{lem:connectedsumspincstructures}Suppose $(\Sigma,\as,\bs,\gs,\ws)$ and $(\Sigma_0,\xis,\zetas,\taus,\ws_0)$ are arbitrary Heegaard triples, with a chosen embedding $f:\ws_0\to \Sigma\setminus (\as\cup \bs\cup \gs\cup \ws)$, with which we take the connected sum.
Writing $X_{\as\cup \xis, \bs\cup \zetas,\gs\cup \taus}$ for the 4-manifold constructed from the triple $(\Sigma\, \#_{f} \Sigma_0, \as\cup \xis,\bs\cup \zetas,\gs\cup \taus,\ws)$, there is a natural isomorphism \[\Spin^c(X_{\as\cup \xis,\bs\cup \zetas,\gs\cup \taus})\iso \Spin^c(X_{\as,\bs,\gs})\times \Spin^c(X_{\xis,\zetas,\taus}).\] \end{lem} \begin{proof}The claim is proven by analyzing two Mayer-Vietoris exact sequences. We will define a map from $\Spin^c(X_{\as\cup \xis,\bs\cup \zetas,\gs\cup \taus})$ to $\Spin^c(X_{\as,\bs,\gs})\times \Spin^c(X_{\xis,\zetas,\taus})$ as a composition of a restriction map, and the inverse of another restriction map, as we now describe. For notational clarity, let $X_{\Sigma,\as,\bs,\gs}$ denote the 4-manifold constructed from the Heegaard triple $(\Sigma,\as,\bs,\gs)$. Note that topologically $X_{\Sigma\, \#_{f} \Sigma_0,\as\cup \xis,\bs\cup \zetas,\gs\cup \taus}$ is obtained by gluing $X_{\Sigma\setminus N(\ps),\as,\bs,\gs}$ to $X_{\Sigma_0\setminus N(\ws_0), \xis,\zetas,\taus}$ along $|\ws_0|$ thrice punctured copies of $S^3$. We leave it to the reader to analyze the Mayer-Vietoris exact sequence for cohomology and verify that the map \[\Spin^c(X_{\Sigma\, \#_f \Sigma_0,\as\cup \xis,\bs\cup \zetas,\gs\cup \taus})\to \Spin^c(X_{\Sigma\setminus N(\ps),\as,\bs,\gs})\times \Spin^c(X_{\Sigma_0\setminus N(\ws_0),\xis,\zetas,\taus})\] is an isomorphism. Finally, it is not hard to verify that \[X_{\Sigma,\as,\bs,\gs}\setminus X_{\Sigma\setminus N(\ps),\as,\bs,\gs}\] is topologically a 4-ball, so a similar argument shows that the restriction map \[\Spin^c(X_{\Sigma,\as,\bs,\gs})\to \Spin^c( X_{\Sigma\setminus N(\ps),\as,\bs,\gs})\] is also an isomorphism. \end{proof} Note that Condition~\ref{cond:triple1} implies that the 4-manifold $X_{\xis,\zetas,\taus}$ is diffeomorphic to the 4-manifold $X_{\xis,\xis,\xis}$. Upon filling in each end with 3-handles and 4-handles we obtain $(S^1\times S^3)^{\# g(\Sigma_0)}$.
In particular, there is a unique $\Spin^c$ structure $\frs_0\in \Spin^c(X_{\xis,\zetas,\taus})$ which restricts to the torsion $\Spin^c$ structure on each end of $X_{\xis,\zetas,\taus}$. Using Lemma~\ref{lem:connectedsumspincstructures}, it follows that if $\frs\in \Spin^c(X_{\as,\bs,\gs})$, then there is a well defined $\Spin^c$ structure \[\frs\# \frs_0\in \Spin^c(X_{\as\cup \xis,\bs\cup \zetas,\gs\cup \taus}).\] \begin{prop}\label{prop:generalized1-handlesandtriangles}Suppose that $(\Sigma, \ve{\alpha},\ve{\beta},\ve{\gamma},\ve{w})$ and $(\Sigma_0,\xis,\zetas,\taus,\ve{w}_0)$ are Heegaard triples with a fixed embedding $f:\ws_0\to \Sigma\setminus(\as\cup \bs\cup \gs\cup \ws)$, and consider the triple $(\Sigma\#_f \Sigma_0,\as\cup \xis,\bs\cup \zetas,\gs\cup \taus, \ws)$. Furthermore, suppose that $(\Sigma_0,\xis,\zetas,\taus,\ws_0)$ satisfies Conditions~\ref{cond:triple1} and~\ref{cond:triple2}, above. If $\frs\in \Spin^c(X_{\as,\bs,\gs})$, then for an almost complex structure sufficiently stretched on the connected sum necks, we have the relations \begin{align*}F_{\as\cup \xis,\bs\cup \zetas,\gs\cup \taus,\frs\# \frs_0}(F_{1}^{\xis,\zetas}(\xs), F_1^{\zetas,\taus}(\ys))&=F_1^{\xis,\taus}( F_{\as,\bs,\gs,\frs}(\xs,\ys))\\ F_3^{\xis,\taus}( F_{\as\cup \xis,\bs\cup \zetas,\gs\cup \taus,\frs\# \frs_0}(F_{1}^{\xis,\zetas}(\xs), \ys\times \theta))&= F_{\as,\bs,\gs,\frs} (\xs, F_3^{\zetas,\taus}(\ys\times\theta))\\ F_3^{\xis,\taus}( F_{\as\cup \xis,\bs\cup \zetas,\gs\cup \taus,\frs\# \frs_0}(\xs\times \theta, F_1^{\zetas,\taus}(\ys)))&= F_{\as,\bs,\gs,\frs} (F_3^{\xis,\zetas}(\xs\times \theta), \ys) \end{align*} \end{prop} \begin{proof}Writing out the definitions of the above maps, we wish to show \begin{align*}F_{\as\cup \xis,\bs\cup \zetas,\gs\cup \taus,\frs\# \frs_0}(\ve{x}\times \Theta_{\xis,\zetas}^+, \ve{y}\times \Theta_{\zetas,\taus}^+)&=F_{\as,\bs,\gs,\frs}(\ve{x},\ve{y})\otimes \Theta_{\xis,\taus}^+\\ \langle F_{\as\cup \xis,\bs\cup \zetas,\gs\cup
\taus,\frs\# \frs_0}(\ve{x}\times \Theta_{\xis,\zetas}^+, \ve{y}\times \theta),\ve{z}\times \Theta_{\xis,\taus}^-\rangle &=\langle F_{\as,\bs,\gs,\frs}(\ve{x},\ve{y}),\ve{z}\rangle\cdot \langle \theta, \Theta_{\zetas,\taus}^-\rangle\\ \langle F_{\as\cup \xis,\bs\cup \zetas,\gs\cup \taus,\frs\# \frs_0}(\ve{x}\times \theta, \ve{y}\times \Theta^+_{\zetas,\taus}),\ve{z}\times \Theta_{\xis,\taus}^-\rangle &= \langle F_{\as,\bs,\gs,\frs}(\ve{x},\ve{y}),\ve{z}\rangle\cdot \langle \theta, \Theta_{\xis,\zetas}^- \rangle , \end{align*} where $\langle \cdot, \cdot \rangle$ denotes the pairing map in Equation \eqref{eq:pairingdefinition}. We will focus on the first formula involving the generalized 1-handle map. The two formulas involving the generalized 3-handle map follow from a straightforward adaptation of the following argument. We first claim that if $\psi_0\in \pi_2(\theta_1,\theta_2,\theta_3)$ is a homology triangle on $(\Sigma_0,\xis,\zetas,\taus,\ws_0)$ which restricts to the torsion $\Spin^c$ structure on each end of the cobordism $X_{\xis,\zetas,\taus}$, then \begin{equation}\mu(\psi_0)=-\gr(\Theta_{\xis,\zetas}^+,\theta_1)-\gr(\Theta_{\zetas,\taus}^+,\theta_2)+\gr(\Theta_{\xis,\taus}^+,\theta_3)+2\sum_{w\in \ws_0}n_{w}(\psi_0).\label{eq:Maslovindexisotopictriple}\end{equation} One simple way to verify this formula is to first observe that it is true for some class of triangles, since invariance of $\hat{\HF}((S^1\times S^2)^{\# g(\Sigma_0)}, \ws_0)$ shows that the triangle map on $\hat{\CF}$ (which counts index 0 triangles that have zero multiplicity on all of the basepoints) maps the element $\Theta_{\xis,\zetas}^+\otimes \Theta_{\zetas,\taus}^+$ to $\Theta_{\xis,\taus}^+$. As there is a unique $\Spin^c$ structure on $X_{\xis,\zetas,\taus}$ which restricts to the torsion $\Spin^c$ structure on each end, it follows that if $\psi_0'$ is another triangle in $\pi_2(\theta_1,\theta_2,\theta_3)$, then the difference $\psi_0-\psi_0'$ is a sum of doubly periodic domains.
Hence it is sufficient to show that the formula respects splicing disks into a homology class of triangles. To see that the formula respects this, we can now apply the index formula for disks from Equation \eqref{eq:Maslovindexgeneralized1-handle}. Now if $\psi\# \psi_0$ is a class of triangles in $\pi_2(\ve{x}\times \theta_1, \ve{y}\times \theta_2, \ve{z}\times \theta_3)$, then the formula for the index from \cite{SMaslov} implies that \[\mu(\psi\# \psi_0)=\mu(\psi)+\mu(\psi_0)-2\sum_{w\in \ws_0} n_{w}(\psi_0).\] Combining this with the expression from Equation \eqref{eq:Maslovindexisotopictriple}, we see that \begin{equation}\mu(\psi\# \psi_0)=\mu(\psi)-\gr(\Theta_{\xis,\zetas}^+,\theta_1)-\gr(\Theta_{\zetas,\taus}^+,\theta_2)+\gr(\Theta_{\xis,\taus}^+,\theta_3).\label{eq:indexforgeneratriangles}\end{equation} Given a triangle $\psi\# \psi_0\in \pi_2(\ve{x}\times \Theta^+_{\xis,\zetas}, \ve{y}\times \Theta_{\zetas,\taus}^+, \ve{z}\times \theta_3)$, Equation \eqref{eq:indexforgeneratriangles} specializes to the formula \begin{equation}\mu(\psi\# \psi_0)=\mu(\psi)+\gr(\Theta^+_{\xis,\taus},\theta_3).\label{eq:indexforrelevanttriangles}\end{equation} From here, the argument proceeds in a similar manner to the proof of Proposition~\ref{prop:differentialcomp}. Given a sequence of holomorphic triangles in the homology class $\psi\# \psi_0$ for a sequence of almost complex structures $J(T_i)$, with necks of length $T_i$ inserted along the connected sum tubes, where $T_i\to \infty$, we can extract a limit consisting of a broken holomorphic triangle $U$ representing $\psi$, and a broken holomorphic triangle $U_0$ representing $\psi_0$. From the index formula appearing in Equation \eqref{eq:indexforrelevanttriangles}, it follows that $\mu(\psi)=0$ and $\gr(\Theta_{\xis,\taus}^+, \theta_3)=0$. It follows that $U$ consists of a single index 0 holomorphic triangle and that $\theta_3=\Theta_{\xis,\taus}^+$.
As in Proposition~\ref{prop:differentialcomp}, the next step is to analyze the curves appearing in $U_0$. Write $\ps=\{p_1,\dots, p_k\}$ and $\ws_0=\{w_1,\dots, w_k\}$, where $f(w_i)=p_i$. As with the case of disks, we consider the map \[\rho^{\ws_0}: \cM(\psi_0)\to \Sym^{n_1}(\Delta)\times \cdots \times \Sym^{n_k}(\Delta)\] defined by the formula \[\rho^{\ws_0}(u):=\big((u\circ \pi_\Delta)((u\circ \pi_\Sigma)^{-1}(w_1)),\dots,(u\circ \pi_\Delta)((u\circ \pi_\Sigma)^{-1}(w_k))\big).\] Here the integers $n_i$ are defined as $n_i:=n_{p_i}(\psi)=n_{w_i}(\psi_0)$. If $\ve{d}\in \Sym^{n_1}(\Delta)\times \cdots \times \Sym^{n_k}(\Delta)$, we can consider the matched moduli space \[\cM(\psi_0,\ve{d}):=\{u_0\in \cM(\psi_0): \rho^{\ws_0}(u_0)=\ve{d}\}.\] We will show that for generic $\ve{d}$ (i.e. not in the fat diagonal), and an appropriately generic almost complex structure, the matched moduli space $\cM(\psi_0,\ve{d})$ is a transversally cut out manifold of dimension $\mu(\psi_0)-\codim(\ve{d})$, and we will also establish the count \begin{equation}\sum_{\substack{\psi_0\in \pi_2(\Theta_{\xis,\zetas}^+,\Theta_{\zetas,\taus}^+,\Theta_{\xis,\taus}^+)\\ n_{w_i}(\psi_0)=n_i}} \# \cM(\psi_0,\ve{d})\equiv 1\pmod{2}.\label{eq:matchedholomorphictrianglecount}\end{equation} Assuming this, one then applies the gluing result \cite{LipshitzCylindrical}*{Proposition~A.2} to show that if $\mu(\psi)=0$, and $\psi_0\in \pi_2(\Theta_{\xis,\zetas}^+,\Theta_{\zetas,\taus}^+,\Theta_{\xis,\taus}^+)$, then for sufficiently stretched almost complex structure \[\# \cM(\psi\# \psi_0)=\sum_{u\in\cM(\psi)} \# \cM(\psi_0, \rho^{\ps}(u)).\] Using this, the main result follows from the following manipulation: \begin{align*} &\qquad F_{\as\cup \xis,\bs\cup \zetas,\gs\cup \taus,\frs\# \frs_0}(\ve{x}\times \Theta_{\xis,\zetas}^+, \ve{y}\times \Theta_{\zetas,\taus}^+)\\ &=\sum_{\theta_3\in \bT_{\xis}\cap \bT_{\taus}}\sum_{\substack{ \psi\#\psi_0\in \pi_2(\xs\times \Theta_{\xis,\zetas}^+,\ys\times
\Theta_{\zetas,\taus}^+,\zs\times \theta_3)\\\mu(\psi\# \psi_0)=0\\\frs_{\ws}(\psi\#\psi_0)=\frs\#\frs_0}} \# \cM(\psi\# \psi_0) U^{n_{\ws}(\psi\# \psi_0)}\cdot \zs\times \theta_3\\ &=\sum_{\substack{\psi\in \pi_2(\xs,\ys,\zs)\\ \mu(\psi)=0\\\frs_{\ws}(\psi)=\frs}} \sum_{\psi_0\in \pi_2(\Theta_{\xis,\zetas}^+, \Theta_{\zetas,\taus}^+,\Theta_{\xis,\taus}^+)} \# \cM(\psi\# \psi_0) U^{n_{\ws}(\psi)} \cdot \zs\times \Theta_{\xis,\taus}^+\\ &=\sum_{\substack{\psi\in \pi_2(\xs,\ys,\zs)\\ \mu(\psi)=0\\\frs_{\ws}(\psi)=\frs}} \sum_{\psi_0\in \pi_2(\Theta_{\xis,\zetas}^+, \Theta_{\zetas,\taus}^+,\Theta_{\xis,\taus}^+)} \sum_{u\in\cM(\psi)} \# \cM(\psi_0, \rho^{\ps}(u)) U^{n_{\ws}(\psi)} \cdot \zs\times \Theta_{\xis,\taus}^+\\ &=\sum_{\substack{\psi\in \pi_2(\xs,\ys,\zs)\\ \mu(\psi)=0\\\frs_{\ws}(\psi)=\frs}} \sum_{u\in\cM(\psi)}U^{n_{\ws}(\psi)}\cdot \zs\times \Theta_{\xis,\taus}^+\\ &=F_1^{\xis,\taus}(F_{\as,\bs,\gs,\frs}(\xs,\ys)). \end{align*} Hence it remains to show that the moduli spaces $\cM(\psi_0,\ve{d})$ are transversely cut out, and to establish the count from Equation \eqref{eq:matchedholomorphictrianglecount}. As in Proposition~\ref{prop:differentialcomp}, the argument proceeds slightly differently, depending on whether $|\ws_0|>1$ or $|\ws_0|=1$. We first consider the case that $|\ws_0|>1$. In this case, we can use almost complex structures satisfying \ref{def:J'1}--\ref{def:J'4}. If $\ve{d}$ is not in the fat diagonal (i.e. has no repeated entries) and $u_0\in \cM(\psi_0,\ve{d})$ satisfies the analogs of \ref{def:M1}, \ref{def:M2}, \ref{def:M4} and \ref{def:M5}, then it is easy to see that it also satisfies \ref{def:M3}. Proposition~\ref{prop:transversalitytriangles} shows that $\cM(\psi_0,\ve{d})$ is a 0-manifold which is transversely cut out.
To obtain the count in Equation \eqref{eq:matchedholomorphictrianglecount}, one proceeds as in Proposition~\ref{prop:differentialcomp} by first showing that the quantity on the left side is independent of $\ve{d}$ using a Gromov compactness argument, then showing that the count in Equation \eqref{eq:matchedholomorphictrianglecount} is equal to 1 (modulo 2) by degenerating $\ve{d}$. To degenerate $\ve{d}$, one considers a path $\ve{d}(t)$, with $t\in [1,\infty)$, consisting of $|\ve{d}|$ points in $\Delta$, which travel into one of the cylindrical ends of $\Delta$, such that the spacing between any two points is at least $t$, and that the points individually approach the edge $e_\alpha$ of $\d \Delta$. In the limit, one obtains $|\ve{d}|$ cylindrical $\as$ boundary degenerations, each of which matches a single point in the upper half plane, as well as a single holomorphic triangle $u_0'$ representing a class in $\pi_2(\Theta_{\xis,\zetas}^+,\Theta_{\zetas,\taus}^+,\Theta_{\xis,\taus}^+)$ which has zero multiplicity on the points in $\ws_0$. Using Equation \eqref{eq:Maslovindexisotopictriple}, we see that $u_0'$ has Maslov index 0. Condition~\ref{cond:triple1} on the Heegaard triple $(\Sigma_0,\xis,\zetas,\taus,\ws_0)$, as well as invariance of $\hat{\HF}$, then implies that the total number of holomorphic triangles representing index 0 classes in $\pi_2(\Theta_{\xis,\zetas}^+,\Theta_{\zetas,\taus}^+,\Theta_{\xis,\taus}^+)$ with multiplicity zero on $\ws_0$, is 1, since the triangle map can be interpreted as a transition map. The total count of index 2 $\as$ boundary degenerations which match a fixed point in the upper half plane is 1, by \cite{OSLinks}*{Theorem~5.5}. By gluing index 0 triangles to the $|\ve{d}|$ index 2 boundary degenerations, using \cite{LipshitzCylindrical}*{Proposition~A.2}, we can establish Equation \eqref{eq:matchedholomorphictrianglecount} for $\ve{d}(t)$, when $t$ is sufficiently large. We now consider the case that $|\ws_0|=1$.
In this case, we must consider more generic almost complex structures satisfying \ref{def:J'1}, \ref{def:J'2}, \ref{def:J'3'}--\ref{def:J'5'} to achieve transversality, since the almost complex structures we considered for $|\ws_0|>1$ do not achieve transversality at curves with image $\Sigma_0\times \{d\}$. The argument proceeds with the same strategy as before. The count in Equation \eqref{eq:matchedholomorphictrianglecount} is shown to be independent of $\ve{d}$, for $\ve{d}$ avoiding the fat diagonal, using the same argument. To establish the stated count, we degenerate the points $\ve{d}$; however, we must use a slightly different degeneration than we used when $|\ws_0|>1$. We pick a path $\ve{d}(t)$ (for $t\in [1,\infty)$), avoiding the fat diagonal, consisting of $|\ve{d}|$ points all traveling into the $\alpha$-$\beta$ end of $\Delta$, and with spacing between points which increases without bound. However, we assume that the points do not approach $\d \Delta$, in the cylindrical end of $\Delta$. We do this because condition \ref{def:J'5'} requires that the almost complex structure not be harshly perturbed near $\d \Delta$ (so we cannot assume the matched moduli spaces achieve transversality when we let $\ve{d}$ contain points near $\d \Delta$). In the limit, we obtain an index 0 holomorphic triangle representing a class in $\pi_2(\Theta_{\xis,\zetas}^+,\Theta_{\zetas,\taus}^+,\Theta_{\xis,\taus}^+)$ with zero multiplicity over $\ws_0$, as well as $|\ve{d}|$ index 2 holomorphic strips on $(\Sigma_0,\xis,\zetas,\ws_0)$, representing classes in $\pi_2(\Theta_{\xis,\zetas}^+,\Theta_{\xis,\zetas}^+)$, which each match a single point $d\in [0,1]\times \R$.
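As a consistency check on this limit (our own bookkeeping, recorded for the reader's convenience): applying Equation \eqref{eq:Maslovindexisotopictriple} with $\theta_1=\Theta^+_{\xis,\zetas}$, $\theta_2=\Theta^+_{\zetas,\taus}$ and $\theta_3=\Theta^+_{\xis,\taus}$, the $\gr$ terms vanish, so

```latex
\[
\mu(\psi_0)=2\sum_{w\in \ws_0} n_{w}(\psi_0)=2|\ve{d}|,
\]
```

which matches the total index of the limit configuration: one index 0 triangle with zero multiplicity on $\ws_0$, together with $|\ve{d}|$ index 2 strips, each matching a single point.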
The total count of index 0 triangles with zero multiplicity on $\ws_0$ is 1, by invariance of $\hat{\HF}$, and the total count of holomorphic disks in index 2 classes in $\pi_2(\Theta_{\xis,\zetas}^+,\Theta_{\xis,\zetas}^+)$ matching a single point $d\in [0,1]\times \R$ is 1, as shown in the proof of Proposition~\ref{prop:differentialcomp}. A gluing argument then establishes Equation \eqref{eq:matchedholomorphictrianglecount} at $\ve{d}(t)$, for sufficiently large $t$. \end{proof} \subsection{Further remarks on the generalized 1-handle and 3-handle maps} We now describe some special cases of the generalized 1-handle and 3-handle maps which will be used later in this paper. As a first example, the 1-handle map from \cite{OSTriangles} is equal to a generalized 1-handle map with $(\Sigma_0,\xis,\zetas,\ws_0)$ taken to be a singly based diagram for $S^1\times S^2$, with Heegaard surface a torus, and with two isotopic attaching curves. Similarly, the 1-handle map from \cite{ZemGraphTQFT} is a generalized 1-handle map with $(\Sigma_0,\xis,\zetas,\ws_0)$ a doubly based diagram for $S^3$, with two isotopic attaching curves and a Heegaard surface equal to $S^2$. We can also view the free-stabilization map $S_w^+$ (see Section~\ref{sec:mapsandrelations}) as a special case of the generalized 1-handle map, as we now describe. Suppose that $(\Sigma,\as,\bs,\ws)$ is an arbitrary Heegaard diagram, and suppose that we wish to perform a free-stabilization at a point $w\in \Sigma\setminus (\as\cup \bs\cup \ws)$. The free-stabilization operation adds two new curves, $\alpha_0$ and $\beta_0$, each of which bounds a disk containing $w$, and which intersect at two points, $\theta^+$ and $\theta^-$. The map $S_w^+$ is defined by the formula $S_w^+(\ve{x})=\ve{x}\times \theta^+$.
We can alternatively view the map $S_w^+$ as the composition of a zero handle map, which adds a copy of the diagram $(S^2,w)$ (with no attaching curves), followed by a 1-handle map connecting the copy of $S^2$ with $\Sigma$. In particular, the holomorphic disk and triangle counts of Propositions~\ref{prop:differentialcomp} and \ref{prop:generalized1-handlesandtriangles} can be applied to free-stabilizations by first taking the disjoint union of the diagrams $(S^2,w)$ and $(\Sigma,\as,\bs,\ws)$, and then connecting them with a 1-handle. More generally, we make the following remark: \begin{rem}\label{rem:extrabasepoints}In a similar manner to the free-stabilization maps, the generalized 1-handle and 3-handle maps can be defined when we wish to attach a diagram $(\Sigma_0,\xis,\zetas,\ws_0\cup \ws_1)$ for $(S^1\times S^2)^{\# g(\Sigma_0)}$ to a diagram $(\Sigma,\as,\bs,\ws)$, using an embedding $f:\ws_0\to \Sigma\setminus (\as\cup \bs\cup \ws)$. To do this, we first add a copy of $(S^2,w)$ to $(\Sigma,\as,\bs,\ws)$ for each basepoint $w\in \ws_1$, and then use the generalized 1-handle map for joining $(\Sigma_0,\xis,\zetas,\ws_0\cup \ws_1)$ to $(\Sigma,\as,\bs,\ws)\cup \coprod_{w\in \ws_1}(S^2,w)$ using the obvious extension of $f$ to all of $\ws_0\cup \ws_1$. In this case, the generalized 1-handle map has domain $\CF^-(\Sigma,\as,\bs,\ws)$, and codomain $\CF^-(\Sigma\#_f \Sigma_0, \as\cup \xis,\bs\cup \zetas,\ws\cup \ws_1)$. Similarly, the holomorphic triangle counts from Proposition~\ref{prop:generalized1-handlesandtriangles} can be applied when we are given two Heegaard triples, $(\Sigma,\as,\bs,\gs,\ws)$ and $(\Sigma_0,\xis,\zetas,\taus,\ws_0\cup \ws_1)$, and a map $f:\ws_0\to \Sigma\setminus (\as\cup \bs\cup \gs\cup \ws)$ used to form their connected sum. \end{rem} \section{Doubling a Heegaard diagram} \label{sec:doubleddiagrams} Suppose that $\cH=(\Sigma, \ve{\alpha},\ve{\beta},\ws)$ is a multi-pointed Heegaard diagram for $(Y,\ws)$.
In this section we describe two natural diagrams for $(Y,\ws)$, \[D_{\as}(\cH) \qquad \text{and} \qquad D_{\bs}(\cH),\] which can be constructed from the diagram $\cH$. We will call these diagrams the \textbf{doubled Heegaard diagrams of } $\cH$. They will appear when we compute the trace and cotrace cobordism maps. \subsection{Construction of the doubled diagrams} \label{sec:doubleddiagram} We first describe the construction of the diagram $D_{\as}(\cH)$. Pick a regular neighborhood $N(\Sigma)\iso \Sigma\times [-1,1]$ of $\Sigma$ in $Y$, such that $\Sigma$ is embedded as $\Sigma\times \{0\}$. Let $N'(\ws)$ denote a collection of $|\ws|$ pairwise disjoint closed disks in $\Sigma\setminus (\as\cup \bs)$, each containing a single basepoint of $\ws$ in its boundary (i.e. the disks $N'(\ws)$ are obtained by translating a regular neighborhood $N(\ws)$ slightly so that $\ws\subset \d N'(\ws)$). We remove $(\Int N'(\ws))\times [-1,1]$ from $N(\Sigma)$ to obtain a handlebody of genus $2g(\Sigma)+|\ws|-1$, which we denote by $U_\Sigma$. We will write $\Sigma\, \#_{\ws} \bar{\Sigma}$ for $-\d U_\Sigma$, and we note that $\Sigma \, \#_{\ws} \bar{\Sigma}$ is a Heegaard splitting of $Y$ with $\ws\subset \Sigma\#_{\ws} \bar{\Sigma}$. We have oriented $\Sigma\, \#_{\ws} \bar{\Sigma}$ so that $Y\setminus U_\Sigma$ is the $\alpha$-handlebody, and $U_\Sigma$ is the $\beta$-handlebody. We pick a set of compressing curves $\Ds$ on $\Sigma\, \#_{\ws} \bar{\Sigma}$ for the handlebody $U_\Sigma$. Let $\as\subset \Sigma\, \#_{\ws} \bar{\Sigma}$ denote the images of the original $\as$ curves on $\Sigma\setminus N'(\ws)$, and let $\bar{\bs}\subset \Sigma\,\#_{\ws} \bar{\Sigma}$ denote the images of the original $\bs$ curves on $\bar{\Sigma}\setminus N'(\ws)$. Note that the curves $\as\cup \bar{\bs}\subset\Sigma\, \#_{\ws} \bar{\Sigma}$ bound compressing disks in $Y\setminus U_\Sigma$.
The diagram $D_{\as}(\cH)$ is defined as \[D_{\as}(\cH):=(\Sigma\, \#_{\ws} \bar{\Sigma},\ve{\alpha}\cup \bar{\ve{\beta}},\Ds,\ve{w}).\] If we instead want $Y\setminus U_\Sigma$ to play the role of the $\beta$-handlebody, we can construct the diagram \[D_{\bs}(\cH):=(\bar{\Sigma}\, \#_{\ws} \Sigma, \Ds,\bar{\ve{\alpha}}\cup \ve{\beta},\ve{w}).\] An example of a neighborhood of a basepoint in a doubled Heegaard diagram is shown in Figure~\ref{fig::10}. \begin{figure}[ht!] \centering \input{fig10.pdf_tex} \caption{\textbf{A neighborhood of a basepoint $w\in \ws$ in a Heegaard diagram $\cH$ and its double $D_{\as}(\cH)$.} The red and blue shaded strips denote portions of compressing disks attached to the Heegaard surface. On the right, a single $\Ds$ curve is shown, between $\Sigma$ and $\bar{\Sigma}$.\label{fig::10}} \end{figure} There is a natural way to construct the compressing curves $\Ds$ on $\Sigma\, \#_{\ws} \bar{\Sigma}$, as we now describe. Define the subsurface \[\Sigma(\ws):=\Sigma\setminus (\Int N'(\ws)),\] where $N'(\ws)$ is a collection of disks described above. Pick a collection of closed arcs \[A\subset (\d N'(\ws))\setminus \ws\] such that the set $A$ contains one arc per component of $\d \Sigma(\ws)$. We then form the surface $\Sigma(\ws)\,\natural_A \bar{\Sigma}(\ws)$, where $\natural_A$ denotes the boundary connected sum along $A$. The surface $\Sigma(\ws)\,\natural_A \bar{\Sigma}(\ws)$ has one puncture per basepoint of $\ws$, and is homeomorphic to $(\Sigma\, \#_{\ws} \bar{\Sigma})\setminus N(\ws)$. We now pick a collection of pairwise disjoint, properly embedded arcs $d_1,\dots, d_{2n}$ on $\Sigma(\ws)$, which have endpoints on $A$, and which form a basis of $H_1(\Sigma(\ws), A;\Z)$ (here $n=|\as|=|\bs|=g(\Sigma)+|\ws|-1$). We take the arcs $d_i$ on $\Sigma(\ws)$, and concatenate them with their mirrors on $\bar{\Sigma}(\ws)$ to form a collection of $2n$ simple closed curves $\delta_1,\dots, \delta_{2n}$ on $\Sigma\, \#_{\ws} \bar{\Sigma}$. 
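As a sanity check on the number of curves produced by this construction (our own routine arithmetic, recorded here for convenience): the handlebody $U_\Sigma$ has genus $2g(\Sigma)+|\ws|-1$, while the construction produces $2n$ curves with $n=g(\Sigma)+|\ws|-1$, and indeed

```latex
\[
2n=2g(\Sigma)+2|\ws|-2=\big(2g(\Sigma)+|\ws|-1\big)+\big(|\ws|-1\big)=g\big(\Sigma\, \#_{\ws} \bar{\Sigma}\big)+|\ws|-1,
\]
```

which is precisely the number of attaching curves required of a set of attaching circles on a diagram with $|\ws|$ basepoints (cf. Definition~\ref{def:multipointedheegaarddiagram}).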
In the definition of the doubled Heegaard diagram, above, we can take $\Ds=\{\delta_1,\dots, \delta_{2n}\}$ as our choice of compressing curves for the handlebody $U_\Sigma$. We call this construction the \textit{doubling procedure}. We now show that any set of curves $\Ds$, constructed using the doubling procedure above, form a valid set of attaching curves, in the sense of Definition~\ref{def:multipointedheegaarddiagram}. They clearly satisfy requirements \eqref{def:mphd1}--\eqref{def:mphd3}. It remains to show the following: \begin{lem}\label{lem:doublingvalid}The curves $\Ds$ are homologically independent in $ (\Sigma\,\, \#_{\ws} \bar{\Sigma})\setminus \ws$. \end{lem} \begin{proof}As described above, we have a diffeomorphism $(\Sigma\, \#_{\ws} \bar{\Sigma})\setminus N(\ws)\iso \Sigma(\ws)\,\natural_A\bar{\Sigma}(\ws)$. Consider the composition \[H_1(\Sigma(\ws)\,\natural_A \bar{\Sigma}(\ws);\Z)\to H_1(\Sigma(\ws)\,\natural_A \bar{\Sigma}(\ws), \bar{\Sigma}(\ws); \Z)\to H_1(\Sigma(\ws), A;\Z).\] The first map is the natural map, and the second map is the inverse of the excision isomorphism. The composition sends $\delta_i$ to $d_i$. Since the $d_i$ are linearly independent by assumption, it follows that the $\delta_i$ are as well. \end{proof} \subsection{Computing the transition map from \texorpdfstring{$\cH$}{H} to \texorpdfstring{$D_{\as}(\cH)$}{Da(H)}} It will be important for our purposes to have a simple formula for the transition maps $\Psi_{\cH\to D_{\as}(\cH)}$ and $\Psi_{D_{\as}(\cH)\to \cH}$. In this section, we describe a candidate map and prove that it is the transition map. We note that if $\bs$ is any set of attaching curves on $\Sigma$, then $(\Sigma\, \#_{\ws} \bar{\Sigma},\bs\cup \bar{\bs},\Ds,\ws)$ is a diagram for $(S^1\times S^2)^{\# g(\Sigma)}$ with $|\ws|$ basepoints, because it's the double of the diagram $(\Sigma,\bs,\bs,\ws)$. 
Hence $\HF^-(\Sigma\, \#_{\ws} \bar{\Sigma},\bs\cup \bar{\bs},\Ds,\ws)$ admits a top degree generator $\Theta_{\bs\cup \bar{\bs},\Ds}^+$, which is well defined on the level of homology. Note also that there is a generalized 1-handle map \[F_1^{\bar{\bs},\bar{\bs}}: \CF^-(\Sigma,\as,\bs,\ws)\to \CF^-(\Sigma\, \#_{\ws} \bar{\Sigma}, \as\cup \bar{\bs}, \bs\cup \bar{\bs}, \ws),\] as described in Section~\ref{sec:generalized1--handleand3--handlemaps}. The main result of this section is the following: \begin{prop}\label{prop:changeofdiagramsmapcomp}If $\cH=(\Sigma,\as,\bs,\ws)$ is a multi-pointed Heegaard diagram, and $\Ds$ is any set of attaching curves obtained by the doubling procedure described in Section~\ref{sec:doubleddiagram}, then the transition map $\Psi_{\cH\to D_{\as}(\cH)}$ satisfies the formula \[\Psi_{\cH\to D_{\as}(\cH)}\simeq F_{\as\cup \bar{\bs}, \bs\cup \bar{\bs}, \Ds}( F_1^{\bar{\bs},\bar{\bs}}, \Theta_{\bs\cup \bar{\bs},\Ds}^+).\] \end{prop} \begin{rem} In Proposition~\ref{prop:changeofdiagramsmapcomp}, we have omitted a $\Spin^c$ structure in the triangle map $F_{\as\cup \bar{\bs}, \bs\cup \bar{\bs}, \Ds}$. We will see that the triple $(\Sigma\, \#_{\ws} \bar{\Sigma}, \as\cup \bar{\bs}, \bs\cup \bar{\bs},\Ds)$ represents surgery on a link embedded in $Y\#(S^1\times S^2)^{\# |\bs|}$ which topologically cancels the summands of $S^1\times S^2$ added by the generalized 1-handle map. Thus, by attaching 3-handles and 4-handles to $X_{\as\cup \bar{\bs}, \bs\cup \bar{\bs}, \Ds}$ we obtain the identity cobordism $Y\times [0,1]$, so there is a unique $\Spin^c$ structure on $X_{\as\cup \bar{\bs}, \bs\cup \bar{\bs}, \Ds}$ which extends to $\frs$ on $Y\times [0,1]$.
If the reader is satisfied with this level of reasoning, they can safely skip the remainder of the proof. For the undeterred reader, we now embark upon providing a proper proof of Proposition~\ref{prop:changeofdiagramsmapcomp}. The first step is to specify a collection of 0-spheres and a framed link. Let $D_{\beta_i}\subset Y$ denote a choice of compressing disk for $\beta_i\in \bs$. Inside of the handlebody $U_{\bs}$, we pick regular neighborhoods $N(\Sigma)$ and $N(D_{\beta_i})$ of $\Sigma$ and the compressing disks $D_{\beta_i}$, respectively. Further, we pick trivializing diffeomorphisms \[\tau_i:N(D_{\beta_i})\to D_{\beta_i}\times [-1,1] \qquad \text{and} \qquad \tau':N(\Sigma)\to \Sigma\times [0,1],\] such that $\tau_i(D_{\beta_i})=D_{\beta_i}\times \{0\}$ and $\tau'(\Sigma)=\Sigma\times \{0\}$. Using the maps $\tau_i$ and $\tau'$, we can specify 0-spheres $S_1,\dots, S_n$ and a framed link $\bL\subset Y(S_1,\dots, S_n)$. We define the 0-sphere $S_i\subset Y$ to be \begin{equation}S_i:=\left\{\left(0, \tfrac{1}{2}\right),\left(0, -\tfrac{1}{2}\right)\right\}\subset D_{\beta_i}\times [-1,1],\label{eq:defframed0spheres}\end{equation} and we define the link component $\ell_i\subset Y(S_1,\dots, S_n)$ of $\bL$ to be \begin{equation}\ell_i:=\left\{(0,t):t\in \left[-\tfrac{1}{2}, \tfrac{1}{2}\right]\right\}\subset D_{\beta_i}\times [-1,1].\label{eq:defframedlink}\end{equation} A neighborhood of the disk $D_{\beta_i}$ and the nearby 0-sphere $S_i$ and link component $\ell_i$ are shown in Figure~\ref{fig::52}. The trivialization $\tau_i$ of $N(D_{\beta_i})$ determines a framing of the link component $\ell_i$, which is given as a vector field along $\ell_i$ that projects to a single vector in $T_0 D_{\beta_i}$ under the projection map $D_{\beta_i}\times [-1,1]\to D_{\beta_i}$. \begin{figure}[ht!] 
\centering \input{fig52.pdf_tex} \caption{\textbf{The Heegaard surface $\Sigma'\subset Y(S_1,\dots, S_n)$ inside the neighborhood $N(D_{\beta_i})$ of the compressing disk $D_{\beta_i}$.} A neighborhood $N(\Sigma)\subset U_{\bs}$ of the original Heegaard surface $\Sigma$ is identified with $\Sigma\times [0,1]$. The original Heegaard surface $\Sigma$ is identified with $\Sigma\times \{0\}$. The surface $\Sigma'$ is the union of $\Sigma\times\{0\}$, a portion of $\Sigma\times \{1\}$, and the two annuli $A_-$ and $A_+$. Surgery on the knot $\ell_i$ cancels surgery on the 0-sphere $S_i$. \label{fig::52}} \end{figure} We can specify a Heegaard surface \[\Sigma'\subset Y(S_1,\dots, S_n),\] as follows. Outside of the union of the neighborhoods $N(D_{\beta_i})$, the surface $\Sigma'$ is equal to $\Sigma\,\#_{\ws} \bar{\Sigma}= \d ((\Sigma\setminus N'(\ws))\times [0,1])$. Inside of $N(D_{\beta_i})$, we define the surface $\Sigma'$ by the formula \[\Sigma'\cap N(D_{\beta_i}):=\big((\Sigma\times \{0\})\cap N(D_{\beta_i})\big)\cup \left((\Sigma\times \{1\})\cap \left(D_{\beta_i}\times \left(\left[-1,-\tfrac{1}{2}\right]\cup \left[\tfrac{1}{2},1\right]\right)\right)\right)\cup A_-\cup A_+,\] where $A_{-}$ and $A_{+}$ are two annular subsets of $D_{\beta_i}\times \left\{-\tfrac{1}{2}\right\}$ and $D_{\beta_i} \times \left\{\tfrac{1}{2}\right\}$, respectively. The surface $\Sigma'$ is shown in Figure~\ref{fig::52}. It is not hard to see that the diffeomorphisms $\tau_i$ and $\tau'$ also determine a diffeomorphism \begin{equation}\phi:\Sigma\, \#_{\ws} \bar{\Sigma}\to \Sigma',\label{eq:embeddingphidef}\end{equation} up to isotopy (examine Figure~\ref{fig::52}). For each 0-sphere $S_i$, there is a 1-handle map $F_{S_i}$, defined in \cite{ZemGraphTQFT}*{Section~8}. The definition is similar to, but not identical to, the definition of the 1-handle map from \cite{OSTriangles}.
The definition of the map $F_{S_i}$ from \cite{ZemGraphTQFT} agrees with the generalized 1-handle map from Section~\ref{section:defgeneralized1-handlemaps} when $(\Sigma_0,\xis,\zetas,\ws)$ is a doubly based diagram for $S^3$, with Heegaard surface $\Sigma_0=S^2$, and $\xis$ and $\zetas$ both consisting of a single curve, intersecting at two points. The next step is to relate the generalized 1-handle map $F_1^{\bar{\bs},\bar{\bs}}$ appearing in Proposition~\ref{prop:changeofdiagramsmapcomp} with a cobordism map: \begin{lem}\label{lem:generalized1-handlemapiscompof1-handles}Suppose $(\Sigma,\as,\bs,\ws)$ is a Heegaard diagram for $Y$, and let $S_1,\dots, S_{n}$ be the 0-spheres in $U_{\bs}$, described above. Let $\phi:\Sigma\, \#_{\ws} \bar{\Sigma}\to Y(S_1,\dots, S_n)$ denote the embedding described above. The generalized 1-handle map $F_1^{\bar{\bs},\bar{\bs}}$ agrees with the composition of the 1-handle maps for the 0-spheres $S_1,\dots, S_n\subset Y$ in the following sense: \[\phi_* \circ F_1^{\bar{\bs},\bar{\bs}}\simeq F_{S_n}\circ F_{S_{n-1}}\circ \cdots \circ F_{S_1}.\] \end{lem} \begin{proof} If we ignore almost complex structures, the result is obvious, since the two maps are both defined by the formula $\ve{x}\mapsto \ve{x}\times \Theta_{\bar{\bs},\bar{\bs}}^+$. However, the two maps have different requirements concerning how the almost complex structure on $(\Sigma\,\#_{\ws} \bar{\Sigma})\times [0,1]\times \R$ must be stretched. By definition, an almost complex structure can be used to compute the map $F_{S_i}$ if the change of almost complex structure map associated to additional stretching along the two necks parallel to the two new $\as$ and $\bs$ curves preserves the intersection points of the form $\ve{x}\times \theta^+$ (see \cite{ZemGraphTQFT}*{Definition~6.9}).
By analyzing the proof that 1-handle maps commute with each other, it follows that we can pick a single almost complex structure to compute the composition $F_{S_n}\circ \cdots \circ F_{S_1}$, which has been stretched on $2|\bar{\bs}|$ necks simultaneously (see \cite{ZemGraphTQFT}*{Proposition~6.23} for a proof of this fact, though this will also follow from the argument we present below). On the other hand, an almost complex structure for $F_1^{\bar{\bs},\bar{\bs}}$ must be stretched on the $|\ws|$ connected sum necks between $\Sigma$ and $\bar{\Sigma}$. Schematically, the two almost complex structures are shown in Figure~\ref{fig::33}. It is not \textit{a priori} obvious that a single almost complex structure can be chosen to compute both maps. \begin{figure}[ht!] \centering \input{fig33.pdf_tex} \caption{\textbf{A schematic of the almost complex structures used to compute the maps $F_1^{\bar{\bs},\bar{\bs}}$ and $F_{S_n}\circ \cdots \circ F_{S_1}$.} The dashed lines indicate where we stretch the almost complex structures for the two maps. Shown is the subset of $\Sigma\, \#_{\ws} \bar{\Sigma}$ corresponding to $\bar{\Sigma}$. \label{fig::33}} \end{figure} To show that $F_1^{\bar{\bs},\bar{\bs}}$ and $F_{S_n}\circ \cdots \circ F_{S_1}$ agree, it is sufficient to show that if $J_1$ can be used to compute $F_1^{\bar{\bs},\bar{\bs}}$ and $J_2$ can be used to compute the composition $F_{S_n}\circ \cdots \circ F_{S_1}$, then the change of almost complex structures map $\Psi_{J_1\to J_2}$ satisfies \begin{equation}\Psi_{J_1\to J_2}(\xs\times \Theta_{\bar{\bs},\bar{\bs}}^+)=\xs \times \Theta_{\bar{\bs}, \bar{\bs}}^+.\label{eq:stretchallthenecks}\end{equation} The strategy is to stretch along all $2|\bar{\bs}|+|\ws|$ necks simultaneously. Fix an almost complex structure $J$ on $(\Sigma\,\#_{\ws} \bar{\Sigma})\times [0,1]\times \R$.
If $\ve{T}=(T_1,\dots, T_k)$ is a tuple of positive numbers with $k=2|\bar{\bs}|+|\ws|$, let us write $J(\ve{T})$ for the almost complex structure on $(\Sigma\,\#_{\ws} \bar{\Sigma})\times [0,1]\times \R$ obtained from $J$ by inserting necks of length $T_1,\dots, T_k$ along the $|\ws|$ connected sum tubes of $\Sigma\,\#_{\ws} \bar{\Sigma}$, and along the $2|\bar{\bs}|$ curves which are parallel to the $\bar{\bs}$ curves on $\bar{\Sigma}$ (as in Figure~\ref{fig::33}). We claim the following: If $\ve{T}$ and $\ve{T}'$ are two tuples of neck lengths, and all of the components of $\ve{T}$ and $\ve{T}'$ are sufficiently large, then \begin{equation}\Psi_{J(\ve{T})\to J(\ve{T}')}(\ve{x}\times \Theta_{\bar{\bs},\bar{\bs}}^+)=\ve{x}\times \Theta_{\bar{\bs},\bar{\bs}}^+.\label{eq:cxstr1-handle=generalized1-handle}\end{equation} Importantly, we don't assume anything about the relative sizes of the components of $\ve{T}$ and $\ve{T}'$. We note that the main lemma statement follows immediately from Equation \eqref{eq:cxstr1-handle=generalized1-handle}: using it, we can choose a single almost complex structure which can be used to compute both $F_{1}^{\bar{\bs},\bar{\bs}}$ and $F_{S_n}\circ \cdots \circ F_{S_1}$, since the change of almost complex structure map associated to additional stretching along any of the necks preserves the element $\ve{x}\times \Theta_{\bar{\bs},\bar{\bs}}^+$. To establish Equation \eqref{eq:cxstr1-handle=generalized1-handle}, note that the change of almost complex structure map $\Psi_{J(\ve{T})\to J(\ve{T}')}$ can be computed by counting Maslov index 0 holomorphic strips for a non-cylindrical almost complex structure which agrees with $J(\ve{T})$ on $(\Sigma\, \#_{\ws} \bar{\Sigma})\times [0,1]\times (-\infty,-1]$ and agrees with $J(\ve{T}')$ on $(\Sigma\, \#_{\ws} \bar{\Sigma})\times [0,1]\times [1,\infty)$.
Suppose we are given two sequences of tuples, $\ve{T}_i$ and $\ve{T}_i'$, such that each component of each tuple individually approaches $+\infty$. We can pick a sequence of interpolating almost complex structures $\hat{J}_i$ between $J(\ve{T}_i)$ and $J(\ve{T}'_i)$ on $(\Sigma\, \#_{\ws} \bar{\Sigma}) \times [0,1]\times \R$ such that the almost complex manifold $((\Sigma\, \#_{\ws} \bar{\Sigma})\times [0,1]\times \R,\hat{J}_i)$ contains the almost complex submanifold \[\big((\Sigma\setminus N_i(\ws))\times [0,1]\times \R,\, J_0|_{(\Sigma\setminus N_i(\ws))\times [0,1]\times \R}\big),\] where $J_0$ is a fixed, cylindrical almost complex structure on $\Sigma\times [0,1]\times \R$ and $ N_i(\ws)$ is a sequence of regular neighborhoods of $\ws$ such that $N_{i+1}(\ws)\subset N_i(\ws)$ and $\bigcap_i N_i(\ws)=\ws$. Suppose now that we are given a sequence of $\hat{J}_i$-holomorphic curves $u_i$ on $(\Sigma\, \#_{\ws} \bar{\Sigma})\times [0,1]\times \R$ which represent the Maslov index 0 class $\phi\# \phi_0\in \pi_2(\xs\times \Theta_{\bar{\bs},\bar{\bs}}^+, \ys\times \theta)$, where $\phi$ is a class on $(\Sigma,\as,\bs)$ and $\phi_0$ is a class on $(\bar{\Sigma},\bar{\bs},\bar{\bs})$. The index formula from Equation \eqref{eq:Maslovindexgeneralized1--handledisk} shows that \begin{equation}\mu(\phi\#\phi_0)=\mu(\phi)+\gr(\Theta_{\bar{\bs},\bar{\bs}}^+,\theta).\label{eq:indexformuladyamicdisks}\end{equation} Adapting \cite{LipshitzCylindrical}*{Sublemma A.12}, by letting $i\to \infty$, we can extract a (potentially broken) limiting curve on $\Sigma\times [0,1]\times \R$, for the cylindrical almost complex structure $J_0$, representing the class $\phi$. Since $J_0$ is cylindrical, by transversality we must have $\mu(\phi)\ge 0$. Using Equation \eqref{eq:indexformuladyamicdisks} and the fact that the change of almost complex structure map counts index 0 holomorphic disks, we conclude that $\mu(\phi)=0$ and $\gr(\Theta_{\bar{\bs},\bar{\bs}}^+,\theta)=0$.
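The conclusion uses one additional positivity fact, worth recording explicitly: since $\Theta_{\bar{\bs},\bar{\bs}}^+$ is the top degree generator, we have $\gr(\Theta_{\bar{\bs},\bar{\bs}}^+,\theta)\ge 0$ for any generator $\theta$. Hence Equation \eqref{eq:indexformuladyamicdisks} yields \[0=\mu(\phi\# \phi_0)=\mu(\phi)+\gr(\Theta_{\bar{\bs},\bar{\bs}}^+,\theta), \qquad \mu(\phi)\ge 0, \qquad \gr(\Theta_{\bar{\bs},\bar{\bs}}^+,\theta)\ge 0,\] so both summands must vanish.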
Transversality implies that $\phi$ is the constant homology class. Since $\gr(\Theta_{\bar{\bs},\bar{\bs}}^+,\theta)=0$, it follows that $\Theta_{\bar{\bs},\bar{\bs}}^+=\theta$. It is straightforward to examine the diagram $(\bar{\Sigma},\bar{\bs},\bar{\bs})$ and observe that the only nonnegative homology class in $\pi_2(\Theta_{\bar{\bs},\bar{\bs}}^+, \Theta_{\bar{\bs},\bar{\bs}}^+)$ which has zero multiplicity on the basepoints is the constant class. Hence, if $\ve{T}$ and $\ve{T}'$ have components which are all sufficiently large, the map $\Psi_{J(\ve{T})\to J(\ve{T}')}$, when applied to $\ve{x}\times \Theta_{\bar{\bs},\bar{\bs}}^+$, counts only the constant homology classes. This establishes Equation \eqref{eq:cxstr1-handle=generalized1-handle}, completing the proof. \end{proof} We now focus our attention on the triangle map $F_{\as\cup \bar{\bs}, \bs\cup \bar{\bs}, \Ds}(-,\Theta_{\bs\cup \bar{\bs},\Ds}^+)$ appearing in the statement of Proposition~\ref{prop:changeofdiagramsmapcomp}. In Equation~\eqref{eq:defframedlink}, we described a framed link $\bL\subset Y(S_1,\dots, S_n)$ which topologically cancels the 0-spheres $S_1,\dots, S_n\subset Y$, defined in Equation~\eqref{eq:defframed0spheres}. Recall that the framing of $\bL$, as well as the embedding of $\Sigma\,\#_{\ws} \bar{\Sigma}$ into $Y(S_1,\dots, S_n)$ depended on our choice of trivializations of $N(\Sigma)$ and $N(D_{\beta_i})$. We give the following alternate description of the framings of $\bL$: \begin{lem}\label{lem:welldefinedframing}Let $\phi:\Sigma\, \#_{\ws} \bar{\Sigma}\to Y(S_1,\dots, S_n)$ denote the embedding defined above (which depends on the trivializations $\tau_i$ and $\tau'$ of $N(D_{\beta_i})$ and $N(\Sigma)$). Suppose that $\ell_i\in \bL$ is the link component corresponding to the curve $\beta_i\in \bs$.
\begin{enumerate} \item Suppose $b_i\subset \Sigma\setminus N'(\ws)$ is a properly embedded arc which intersects $\beta_i$ once, intersects none of the other $\beta_j$ curves, and which intersects the boundary of $ \Sigma\setminus N'(\ws)$ at two points. Then $\ell_i$ is isotopic to the knot obtained by doubling $b_i$ onto $\Sigma\, \#_{\ws} \bar{\Sigma}$. \item The framing on $\ell_i$ given by the trivializations of $N(\Sigma)$ and $N(D_{\beta_i})$, described above, agrees with the framing induced by a normal vector to the tangent space of $\Sigma\, \#_{\ws} \bar{\Sigma}$. \end{enumerate} \end{lem} \begin{proof}Both claims are easily verified by drawing a picture (examine Figure~\ref{fig::52}). \end{proof} We note that Proposition~\ref{prop:changeofdiagramsmapcomp} is stated in terms of an arbitrary $\Ds$, constructed using the doubling procedure from Section~\ref{sec:doubleddiagram}. We describe a choice of $\Ds$ which is particularly convenient for our purposes. Pick a collection $b_1,\dots, b_n$ of pairwise disjoint, properly embedded arcs on $\Sigma(\ws)$, which have both endpoints on $A$, and which are dual to the curves $\beta_1,\dots, \beta_n$ in the sense that $|\beta_i\cap b_j|=\delta_{ij}$, where $\delta_{ij}$ denotes the Kronecker delta. Additionally, we construct arcs $b_1',\dots, b_n'$ on $\Sigma(\ws)$ by isotoping $\beta_i$ (which intersects $b_i$ in a single point) along $b_i$ (in either direction), until it intersects $\d \Sigma(\ws)$ at two points. We let $\Ds$ be the collection of curves determined by doubling $b_1,\dots, b_n,b_1',\dots, b_n'$ onto $\Sigma\, \#_{\ws} \bar{\Sigma}$. An example of the arcs $b_1,\dots, b_n,b_1',\dots, b_n'$ is shown in Figure~\ref{fig::34}. \begin{figure}[ht!] \centering \input{fig34.pdf_tex} \caption{\textbf{The arcs $b_i$ and $b_i'$ on $\Sigma(\ws):=\Sigma\setminus \Int N'(\ws)$.} On the left are the curves $\bs\subset \Sigma$.
On the right, the surface $\Sigma(\ws)$ is shown, as well as the arcs $A\subset \d N'(\ws)$ (shown in bold), and the arcs $b_i$ and $b_i'$ (both shown as dashed lines). \label{fig::34}} \end{figure} We now show that the $\Ds$ curves constructed by doubling the arcs $b_1,\dots, b_n,b_1',\dots, b_n'$ form a valid set of attaching curves. By Lemma~\ref{lem:doublingvalid}, this amounts to proving the following: \begin{lem}\label{lem:particularbasisisarealbasis}If $b_1,\dots, b_n,b_1',\dots, b_n'$ denote the arcs described above, then the classes $[b_1],\dots, [b_n],$ $[b_1'],\dots, [b_n']$ form a basis of $H_1(\Sigma(\ws), A;\Z)$. \end{lem} \begin{proof}As $[b_i']$ is homologous to $[\beta_i]$, it is sufficient to show that the classes $[b_1],\dots, [b_n], [\beta_1],\dots, [\beta_n]$ form a basis of $H_1(\Sigma(\ws), A;\Z)$. The claim can then be proven by induction on the number of $\bs$ curves. In the case that $\bs$ is empty, the surface $\Sigma(\ws)$ is a collection of disks, and $H_1(\Sigma(\ws),A;\Z)$ vanishes. Assuming the claim holds for any diagram where $|\bs|=k-1$, we can prove that the claim also holds for diagrams with $|\bs|=k$ by surgering out a curve $\beta_k\in \bs$ and considering the effect on $H_1(\Sigma(\ws), A;\Z)$. Using a Mayer-Vietoris exact sequence for the subspaces $N(\beta_k)$ and $\Sigma(\ws)\setminus \beta_k$, it's easy to see that \[\rank H_1(\Sigma(\ws),A;\Z)=\rank H_1(\Sigma(\ws)(\beta_k), A;\Z)+2.\] Furthermore, a basis of $H_1(\Sigma(\ws),A;\Z)$ is obtained from a basis of $H_1(\Sigma(\ws)(\beta_k),A;\Z)$ by adding the two generators $[\beta_k]$ and $[b_k]$, where $b_k$ is a dual arc to $\beta_k$. \end{proof} \begin{lem}\label{lem:randomtrianglemapis2-handlemap} Let $\Sigma\, \#_{\ws} \bar{\Sigma}$ be embedded in $Y(S_1,\dots, S_n)$ as described above (using fixed trivializations of $N(\Sigma)$ and $N(D_{\beta_i})$).
With the choice of $\Ds$ considered in Lemma~\ref{lem:particularbasisisarealbasis}, the map $F_{\as\cup \bar{\bs}, \bs\cup \bar{\bs},\Ds}(-,\Theta_{\bs\cup \bar{\bs},\Ds}^+)$ is the 2-handle map, for surgery on the framed link $\bL\subset Y(S_1,\dots, S_n)$, with the framing induced by the trivializations of $N(\Sigma)$ and $N(D_{\beta_i})$, described above. \end{lem} \begin{proof}Let us write $\ds=\{\delta_1,\dots, \delta_n\}$ for the curves obtained by doubling $b_1,\dots, b_n$ and let us write $\ds'=\{\delta_1',\dots, \delta_n'\}$ for the curves obtained by doubling $b_1',\dots, b_n'$. By definition \[\Ds=\ds'\cup \ds.\] The 2-handle map is defined by picking a Heegaard triple which is subordinate to a bouquet for the framed 1-dimensional attaching spheres of the 2-handles. In this case, the triple $(\Sigma\, \#_{\ws} \bar{\Sigma},\as\cup \bar{\bs},\bs\cup \bar{\bs},\Ds)$ is not quite a triple subordinate to a bouquet of the framed link $\bL$; however, we will show that it is related to such a triple by a sequence of handleslides, which is sufficient for the lemma statement, since the associativity relations for holomorphic triangles can then be used to show that the triangle map for the triple $(\Sigma\,\#_{\ws} \bar{\Sigma}, \as\cup \bar{\bs}, \bs\cup \bar{\bs}, \Ds)$ is chain homotopic to the 2-handle map. First handleslide each $\bs$ curve across the corresponding curve in $\bar{\bs}$. Let $\hat{\bs}$ denote the resulting curves, and let us now consider the triple $(\Sigma\, \#_{\ws} \bar{\Sigma}, \as\cup \bar{\bs}, \hat{\bs}\cup \bar{\bs}, \Ds)$. Note that to handleslide the curve $\beta_i$ over $\bar{\beta}_i$, we need to pick a path from $\beta_i$ to $\bar{\beta}_i$. The arc $b_i$ (which intersects $\beta_i$ exactly once) provides two choices of path from $\beta_i$ to $\bar{\beta}_i$, and we choose the one which is consistent with our choice for the arcs $b_i'$ in the construction of the curves $\Ds$.
It follows that $\hat{\beta}_i$ is isotopic to $\delta_i'$. Clearly we can pick $\hat{\beta}_i$ so that \[|\hat{\beta}_i\cap \delta_j'|=\begin{cases} 2 & \text{if } i=j,\\ 0& \text{otherwise},\end{cases}\] and $|\hat{\beta}_i\cap \delta_j|=0$ for all $i$ and $j$. According to Lemma~\ref{lem:welldefinedframing}, the link component $\ell_i\in \bL$ corresponding to the curve $\beta_i$ is isotopic to the knot obtained by pushing $\delta_i$ off of $\Sigma \, \#_{\ws} \bar{\Sigma}$. The framing from Lemma~\ref{lem:welldefinedframing} is the one which is tangent to $\Sigma\, \#_{\ws} \bar{\Sigma}$. Hence $\delta_i$ is a longitude of $\ell_i$. Furthermore, $\bar{\beta}_i$ is a meridian of $\ell_i$. The curves $\delta_i$ and $\bar{\beta}_i$ are dual, in the sense that \[|\bar{\beta}_i\cap \delta_j|=\begin{cases} 1 & \text{if } i=j,\\ 0& \text{otherwise}.\end{cases}\] Since $\bar{\beta}_i\cup \delta_i$ is also disjoint from the $\hat{\beta}_j$ and $\delta_j'$ curves, it follows that a regular neighborhood of $\bar{\beta}_i\cup \delta_i$ is diffeomorphic to a once punctured torus $F_i$, which doesn't intersect any of the other $\bar{\beta}_j$ or $\delta_j$ curves, or any of the $\hat{\beta}_j$ or $\delta'_j$ curves. We note that \[(\Sigma\, \#_{\ws} \bar{\Sigma}) \setminus \bigcup_{i=1}^n(F_i\cup \hat{\beta}_i)\] is homeomorphic to a collection of punctured disks, each containing exactly one $\ws$ basepoint. There are thus disjoint, embedded arcs on $(\Sigma\, \#_{\ws} \bar{\Sigma})$ (avoiding $\hat{\bs},$ $\bar{\bs}$ and $\ds$ and $\ds'$, except at their endpoints) connecting each $\delta_i$ to one of the basepoints. The union of these arcs (pushed slightly off of $\Sigma\, \#_{\ws} \bar{\Sigma}$) is a bouquet for $\bL$, and the triple $(\Sigma\, \#_{\ws} \bar{\Sigma}, \as\cup \bar{\bs}, \hat{\bs}\cup \bar{\bs}, \Ds)$ is, by definition, subordinate to this bouquet for $\bL$.
\end{proof} We can now prove Proposition~\ref{prop:changeofdiagramsmapcomp}, by showing that \[\Psi_{\cH\to D_{\as}(\cH)}\simeq F_{\as\cup \bar{\bs}, \bs\cup \bar{\bs}, \Ds}( F_1^{\bar{\bs},\bar{\bs}}, \Theta_{\bs\cup \bar{\bs},\Ds}^+).\] \begin{proof}[Proof of Proposition~\ref{prop:changeofdiagramsmapcomp}] The composition appearing in the proposition statement is unchanged (up to chain homotopy) by isotopies or handleslides of the $\Ds$ curves amongst each other. As any two sets of attaching curves for the fixed handlebody $U_\Sigma$ are related by a sequence of handleslides and isotopies, it is sufficient to show the claim for the $\Ds$ curves considered in Lemma~\ref{lem:randomtrianglemapis2-handlemap}. The main proposition statement is now a consequence of Lemmas~\ref{lem:generalized1-handlemapiscompof1-handles} and \ref{lem:randomtrianglemapis2-handlemap}, as well as invariance of the cobordism maps under 4-dimensional handle cancellations. \end{proof} Analogously, the proof of Proposition~\ref{prop:changeofdiagramsmapcomp} adapts to compute the transition map in the opposite direction: \begin{prop}\label{prop:changeofdiagramsmapcompundouble}If $\cH=(\Sigma,\as,\bs,\ws)$ is a multi-pointed Heegaard diagram, then the transition map $\Psi_{ D_{\as}(\cH)\to \cH}$ satisfies \[\Psi_{ D_{\as}(\cH)\to \cH}\simeq F_3^{\bar{\bs},\bar{\bs}}(F_{\as\cup \bar{\bs}, \Ds, \bs\cup \bar{\bs}}(-, \Theta^+_{\bs\cup \bar{\bs}, \Ds})).\] \end{prop} \section{Connected sums and graph cobordisms} \label{sec:connectedsumsandgraphTQFT} If $(\Sigma_1,\as_1,\bs_1,w_1)$ and $(\Sigma_2,\as_2,\bs_2,w_2)$ are two singly pointed Heegaard diagrams for $(Y_1,w_1)$ and $(Y_2,w_2)$, then the diagram $(\Sigma_1\# \Sigma_2,\as_1\cup \as_2,\bs_1\cup \bs_2,w)$ is a diagram for $(Y_1\# Y_2,w)$, where the connected sum is taken near $w_1$ and $w_2$, and $w$ is a basepoint in the connected sum region of $\Sigma_1\# \Sigma_2$.
In \cite{OSProperties}*{Theorem~1.5}, Ozsv\'{a}th and Szab\'{o} construct a map from \[\CF^-(\Sigma_1,\as_1,\bs_1,w_1)\otimes_{\bF_2[U]} \CF^-(\Sigma_2,\as_2,\bs_2,w_2)\qquad \text{to} \qquad \CF^-(\Sigma_1\# \Sigma_2,\as_1\cup \as_2,\bs_1\cup \bs_2,w)\] and prove that it is a quasi-isomorphism. In \cite{HMZConnectedSum}, the graph cobordism maps from \cite{ZemGraphTQFT} are studied in relation to connected sums, and it is proven that there are natural graph cobordisms between $Y_1\sqcup Y_2$ and $Y_1\# Y_2$ which induce chain homotopy equivalences. In this section, we prove that the map used by Ozsv\'{a}th and Szab\'{o} to prove the connected sum formula is in fact equal to the graph cobordism maps considered in \cite{HMZConnectedSum}. We will write $\cE_1$ for the map which Ozsv\'{a}th and Szab\'{o} considered in \cite{OSProperties}*{Theorem~1.5}. It is defined as a composition of two generalized 1-handle maps, and a triangle map, via the formula \begin{equation}\cE_1(\xs,\ys):=F_{\as_1\cup \as_2, \bs_1\cup \as_2, \bs_1\cup \bs_2}(F_1^{\as_2,\as_2}(\xs)\otimes F_1^{\bs_1,\bs_1}(\ys)).\label{eq:cE_1def}\end{equation} We note that the formula for $\cE_1$ is not symmetric in $Y_1$ and $Y_2$. By switching the roles of $Y_1$ and $Y_2$, we can define a (potentially different) map $\cE_2$ by the formula \begin{equation}\cE_2(\xs,\ys):=F_{\as_1\cup \as_2, \as_1\cup \bs_2, \bs_1\cup \bs_2}(F_1^{\as_1,\as_1}(\ys)\otimes F_1^{\bs_2,\bs_2}(\xs)).\label{eq:cE_2def}\end{equation} We will often refer to $\cE_1$ and $\cE_2$ as the \textit{intertwining maps}. An alternate way of constructing a map is to construct a graph cobordism \[(W,\Gamma):(Y_1\sqcup Y_2,\{w_1,w_2\})\to (Y_1\# Y_2, w),\] and consider the induced graph cobordism map.
We will take the 4-manifold $W$ to be obtained by attaching a 1-handle to $Y_1\sqcup Y_2$ at $w_1$ and $w_2$, i.e., gluing a copy of $D^3\times [-1,1]$ to $(Y_1\sqcup Y_2)\times [0,1]$ along regular neighborhoods of $(w_1,1)$ and $(w_2,1)$ in $(Y_1\sqcup Y_2)\times \{1\}$. We view $D^3$ as the unit ball in $\R^3$ and pick a point $v\in S^2=\d D^3$. We take the basepoint in $Y_1\# Y_2$ to be $w:= (v,0)\in S^2\times [-1,1]$, which sits inside of the connected sum region of $Y_1\# Y_2$. We define the graph $\Gamma$ to be \[\Gamma:=\big(\{w_1,w_2\}\times [0,1]\big)\cup \big(\{(0,0,0)\}\times [-1,1]\big)\cup \{(t v,0): t\in [0,1]\}.\] We give $\Gamma$ the ribbon structure induced by the cyclic ordering of the boundary manifolds $Y_1,Y_2,Y_1\# Y_2$ (read left to right). This is shown in Figure~\ref{fig::38}. \begin{figure}[ht!] \centering \input{fig38.pdf_tex} \caption{\textbf{The graph cobordisms $(W,\Gamma)$ and $(W',\Gamma')$ for connected sums used to define the maps $E_1^A$ and $ E_1^B$ (left) as well as $G_1^A$ and $G_1^B$ (right).} In the case that $Y_1$ and $Y_2$ have more basepoints, the cobordisms have additional 1-handles or 3-handles, and additional graph components. The cyclic orders on the trivalent vertices are shown.\label{fig::38}} \end{figure} Define graph cobordism maps $E_1^A$ and $E_2^A$ by \begin{equation}E_1^A:=F_{W,\Gamma,\frt}^A\qquad \text{and} \qquad E_2^A:=F_{W,\bar{\Gamma},\frt}^A,\label{eq:EiAmapsdefinition}\end{equation} where $\bar{\Gamma}$ is the ribbon graph obtained by taking $\Gamma$ and reversing the cyclic orderings. Also, $\frt$ is the unique $\Spin^c$ structure on $W$ which extends $\frs_1\sqcup \frs_2$. We define maps $E_1^B$ and $E_2^B$ analogously, by using the type-$B$ graph cobordism maps. Note that \[E_1^A\simeq E_2^B \qquad \text{and} \qquad E_2^A\simeq E_1^B,\] by Lemma~\ref{lem:reversecyclicordering}. We can also define a graph cobordism $(W',\Gamma')$ from $(Y_1\# Y_2,w)$ to $(Y_1\sqcup Y_2, \{w_1,w_2\})$.
We define $W'$ to be the cobordism obtained by turning around and reversing the orientation of $W$, and we take $\Gamma'$ to be the graph $\Gamma$, with reversed cyclic ordering. We define maps $G_1^A$ and $G_2^A$ by the formulas \begin{equation}G_1^A:=F_{W',\Gamma',\frt}^A \qquad \text{and} \qquad G_2^A:=F_{W',\bar{\Gamma}',\frt}^A,\label{eq:GiABdefinition}\end{equation} where $\frt$ is the unique 4-dimensional $\Spin^c$ structure extending $\frs_1\sqcup \frs_2$ on $Y_1\sqcup Y_2$. We define maps $G_1^B$ and $G_2^B$ analogously. We will prove the following in this section: \begin{prop}\label{prop:OSmapsaregraphcobmaps}The connected sum maps $\cE_i$, $E_i^A$ and $E_i^B$ satisfy the relations \[\cE_1\simeq E_1^B\simeq E_2^A\qquad \text{and} \qquad\cE_2\simeq E_1^A\simeq E_2^B.\] \end{prop} As a first hint that the intertwining maps $\cE_i$ and the graph cobordism maps $E_i^A$ and $E_i^B$ are related, we note that in \cite{HMZConnectedSum}*{Proposition~5.4} it is shown that the maps $E_i^A$ and $E_i^B$ are chain homotopy equivalences, and hence give an alternate proof of the connected sum formula \cite{OSProperties}*{Theorem~1.5}, which Ozsv\'{a}th and Szab\'{o} proved by considering the map $\cE_1$. \begin{rem}\label{rem:morebasepoints}The connected sum maps $\cE_i,$ $E_i^A$ and $E_i^B$ can all be defined when $(Y_1,\ws_1)$ and $(Y_2,\ws_2)$ are multi-pointed manifolds, as long as a bijection $f:\ws_1\to \ws_2$ is specified, as we now describe. If $(Y_1,\ws_1)$ and $(Y_2,\ws_2)$ are two multi-pointed 3-manifolds, with a bijection $f:\ws_1\to \ws_2$, one can add a connected sum tube between $Y_1$ and $Y_2$ for each pair of basepoints in $\ws_1$ and $\ws_2$. In each connected sum tube, we add a basepoint. We write $\ws$ for the new basepoints, and $Y_1\, \#_{\ws} Y_2$ for the manifold obtained by this connected sum operation.
In the case that $Y_1$ and $Y_2$ are connected, we of course have (non-canonically) $Y_1\, \#_{\ws} Y_2\iso Y_1\# Y_2\# (S^1\times S^2)^{\#(|\ws|-1)}$. For multi-pointed 3-manifolds, one can define intertwining connected sum maps $\cE_i$ essentially as in Equation \eqref{eq:cE_1def}, using the generalized 1-handle operation from Section~\ref{sec:generalized1--handleand3--handlemaps} followed by a triangle map. A graph cobordism $(W,\Gamma)$ can be defined from $(Y_1\sqcup Y_2,\ws_1\cup \ws_2)$ to $(Y_1\, \#_{\ws} Y_2,\ws)$ by attaching $|\ws|$ 1-handles, and constructing a graph with $|\ws|$ components, each with 3 edges and a single trivalent vertex. There are $2^{|\ws|}$ potential choices of cyclic orderings on this graph. For definiteness, we pick the cyclic order on $\Gamma$ induced by the cyclic ordering of the boundaries of $W$ given by $Y_1,$ $Y_2,$ $Y_1\, \#_{\ws}Y_2$ (read left to right). Proposition~\ref{prop:OSmapsaregraphcobmaps}, as well as the proof we give, applies in this more general context of multi-pointed 3-manifolds. \end{rem} \subsection{Properties of the connected sum graph cobordism maps} \label{sec:descconnsumgraphcobs} In this section we describe some properties of the connected sum graph cobordism maps $E_i^A,$ $E_i^B,$ $G_i^A$ and $G_i^B$, defined in Equations~\eqref{eq:EiAmapsdefinition} and \eqref{eq:GiABdefinition}. We will mostly focus on taking connected sums of singly pointed 3-manifolds, but all the statements we make can be generalized in a straightforward manner to the more general context of multi-pointed 3-manifolds, as in Remark~\ref{rem:morebasepoints} and the discussion which followed. It will be useful for our purposes to have a more explicit description of the maps $E_1^A$ and $E_1^B$: \begin{lem}\label{lem:connsumgraphcobcomp}Suppose $(Y_1,w_1)$ and $(Y_2,w_2)$ are two singly pointed 3-manifolds, and $(Y_1\# Y_2, w)$ is their connected sum, with the connected sum taken at $w_1\in Y_1$ and $w_2\in Y_2$.
The maps $E_1^A$ and $E_1^B$ from Equation \eqref{eq:EiAmapsdefinition} satisfy \[E_1^A\simeq \phi_* S_{\psi(w_2)}^- A_\lambda F_1\psi_*, \qquad \text{and} \qquad E_1^B\simeq \phi_*S_{\psi(w_2)}^- B_\lambda F_1\psi_*,\] where \begin{itemize} \item $\psi$ is a diffeomorphism of $Y_1\sqcup Y_2$, which is supported in a neighborhood of $\{w_1,w_2\}$ and moves $w_1$ and $w_2$ along paths in $Y_1$ and $Y_2$ to points outside of the region where the 1-handle is attached; \item $F_1$ is the 1-handle map for attaching a 1-handle with feet centered at $w_1$ and $w_2$; \item $\lambda$ is a path from $\psi(w_1)$ to $\psi(w_2)$ in $Y_1\# Y_2$, obtained by concatenating the paths used to construct $\psi$ with a path across the connected sum region; \item $\phi$ is a diffeomorphism of $Y_1\# Y_2$ which is supported in a neighborhood of the path $\lambda$ and moves $\psi(w_1)$ to the point $w$. \end{itemize} If $(Y_1,\ws_1)$ and $(Y_2,\ws_2)$ are two multi-pointed 3-manifolds, with a chosen bijection $f:\ws_1\to \ws_2$, then the connected sum maps $E_1^A$ and $E_1^B$ are a composition of $|\ws|$ maps, each of the above form. \end{lem} \begin{rem} It might appear overly fastidious to keep track of the diffeomorphisms $\psi$ and $\phi$. For many applications, this is not necessary, however it will be necessary for the proof of Proposition~\ref{prop:OSmapsaregraphcobmaps}, which is our main application of Lemma~\ref{lem:connsumgraphcobcomp}. \end{rem} \begin{proof}[Proof of Lemma~\ref{lem:connsumgraphcobcomp}]The key is to manipulate the graph, so that it has the configuration shown in Figure~\ref{fig::37}. More precisely, first one moves the basepoints $w_1$ and $w_2$ away from the feet of the 1-handle, using a diffeomorphism $\psi$, then one attaches the 1-handle. Next, one inserts a trivalent graph which deletes $\psi(w_2)$ and doesn't move $\psi(w_1)$, as considered in Lemma~\ref{lem:computationofYshapedgraphcobordisms}.
Finally one moves $\psi(w_1)$ into the connected sum region using the diffeomorphism $\phi$. Using the computation from Lemma~\ref{lem:computationofYshapedgraphcobordisms} for a cylindrical cobordism containing a trivalent graph, the entire graph cobordism map becomes \[E_1^A\simeq \phi_* S_{\psi(w_2)}^-A_\lambda F_1\psi_*.\] The formula for $E_1^B$ follows from a simple modification of the above argument. The claim about the formulas for $E_1^A$ and $E_1^B$ when $Y_1$ and $Y_2$ have several basepoints follows by applying the above argument at each pair of basepoints. \end{proof} \begin{figure}[ht!] \centering \input{fig37.pdf_tex} \caption{\textbf{Computing $E_1^A$ by manipulating the graph inside the connected sum graph cobordism.} One first moves the basepoints $w_1$ and $w_2$ slightly away from the feet of the 1-handle (corresponding to the diffeomorphism $\psi$). Then one attaches a 1-handle at $w_1$ and $w_2$. This is followed by a copy of $(Y_1\# Y_2)\times [0,1]$, containing a trivalent graph. Finally one moves the basepoint $\psi(w_1)$ into the connected sum region, using the diffeomorphism $\phi$. \label{fig::37}} \end{figure} Analogously to Lemma~\ref{lem:connsumgraphcobcomp}, we have the following: \begin{lem}\label{lem:connsumgraphcobcomp2}Suppose that $(Y_1,w_1)$ and $(Y_2,w_2)$ are two singly pointed 3-manifolds, and $G_1^A$ and $G_1^B$ are the cobordism maps for the graph cobordism $(W',\Gamma')$, which consists of a 3-handle cobordism containing a graph with three edges and a single trivalent vertex (described above and shown in Figure~\ref{fig::38}).
Then \[G_1^A\simeq \phi_* F_3 A_\lambda S_{w_2'}^+\psi_* \qquad \text{and} \qquad G_1^B\simeq \phi_* F_3 B_\lambda S_{w_2'}^+\psi_*,\] where \begin{itemize} \item $\psi$ is a diffeomorphism of $Y_1\# Y_2$, supported in a neighborhood of the connected sum region, which pushes $w$ slightly into $Y_1$; \item $w_2'$ is a new basepoint in the $Y_2$ side of $Y_1\#Y_2$, near the connected sum region; \item $\lambda$ is a path from $\psi(w)$ to $w_2'$ crossing over the connected sum region; \item $\phi$ is a diffeomorphism of $Y_1\sqcup Y_2$ which is supported in a neighborhood of $w_1$ and $w_2$ and moves $\psi(w)$ to $w_1$ and moves $w_2'$ to $w_2$. \end{itemize} If $(Y_1,\ws_1)$ and $(Y_2,\ws_2)$ are two multi-pointed 3-manifolds, with a chosen bijection $f:\ws_1\to \ws_2$, then the maps $G_1^A$ and $G_1^B$ are a composition of $|\ws_i|$ maps, each given by one of the above formulas. \end{lem} \begin{proof}The proof is identical to the proof of Lemma~\ref{lem:connsumgraphcobcomp}. \end{proof} An important property of the maps $E_i^A,E_i^B, G_i^A$ and $G_i^B$ is that they are chain homotopy inverses of each other: \begin{prop}\label{prop:connectedsummapsarehomotopyinverses}The maps $E_i^A$ and $G_i^A$ satisfy the relations \[E_i^A\circ G_i^A\simeq \id \qquad \text{and} \qquad G_i^A\circ E_i^A\simeq \id,\] for $i=1,2$. The same relations hold for the type-$B$ maps $E_i^B$ and $G_i^B$. \end{prop} A direct proof of Proposition~\ref{prop:connectedsummapsarehomotopyinverses} can be found in \cite{HMZConnectedSum}*{Proposition~5.2}, in the context of 3-manifolds with a single basepoint. The proof extends without change to the case when $Y_1$ and $Y_2$ each have extra basepoints which are not involved in the connected sum (i.e. when we take two multi-pointed 3-manifolds, $(Y_1,\ws_1)$ and $(Y_2,\ws_2)$, and take their connected sum at just one pair of basepoints $w_1\in \ws_1$ and $w_2\in \ws_2$).
The full version of Proposition~\ref{prop:connectedsummapsarehomotopyinverses}, where the cobordisms for $E_i^A$ and $G_i^A$ involve $|\ws_i|$ 1-handles or 3-handles, with each handle containing a trivalent vertex, can then be proven by applying the composition law. The idea of the proof of \cite{HMZConnectedSum}*{Proposition~5.2} was to consider a doubly pointed diagram for $(Y_1\# Y_2,\{w_1,w_2\})$ which was obtained by taking diagrams for $(Y_1,w_1)$ and $(Y_2,w_2)$ and attaching a 1-handle with feet near $w_1$ and $w_2$. By degenerating the almost complex structure appropriately, one can compute the holomorphic disks counted by the differential on the diagram for $(Y_1\# Y_2, \{w_1,w_2\})$, and verify the relations \[A_\lambda F_1 F_3 A_\lambda\simeq A_\lambda+U F_1F_3 \qquad \text{and} \qquad F_3A_\lambda F_1\simeq \id.\] Using these two relations, it is straightforward to manipulate the expressions for $G_1^A \circ E_1^A$ and $E_1^A\circ G_1^A$ obtained by combining Lemmas~\ref{lem:connsumgraphcobcomp} and~\ref{lem:connsumgraphcobcomp2}. \begin{rem}Analogously to Proposition~\ref{prop:connectedsummapsarehomotopyinverses}, there is a proof of the K\"{u}nneth theorem for knot Floer homology \cite{OSKnots}*{Theorem~7.1} using link cobordisms for connected sums \cite{ZemKnotConnectedSums}*{Proposition~5.1}. The argument uses the link cobordism maps from \cite{ZemCFLTQFT}. In fact, Proposition~\ref{prop:connectedsummapsarehomotopyinverses} can be derived from \cite{ZemKnotConnectedSums}*{Proposition~5.1}, since the graph cobordism maps from \cite{ZemGraphTQFT} are an algebraic reduction of the link cobordism maps from \cite{ZemCFLTQFT}. See \cite{ZemCFLTQFT}*{Theorem~C and Section~14} for more on the connection between the graph cobordism maps from \cite{ZemGraphTQFT} and the link cobordism maps from \cite{ZemCFLTQFT}.
\end{rem} \subsection{Proof of Proposition~\ref{prop:OSmapsaregraphcobmaps}: \texorpdfstring {$\cE_i$}{curlyE\_i} and \texorpdfstring{$E_i^B$}{E\_i B} are equal} \begin{proof}[Proof of Proposition~\ref{prop:OSmapsaregraphcobmaps}] We will show that $\cE_1\simeq E_1^B$. Once we establish this, the relation $\cE_2\simeq E_2^B$ will also follow, since both $\cE_2$ and $E_2^B$ are simply the maps obtained by switching the roles of $Y_1$ and $Y_2$ in the definitions of $\cE_1$ and $E_1^B$. We will also only consider the case that $Y_1$ and $Y_2$ are singly pointed. The case that they are multi-pointed follows from a straightforward modification of the argument we present. As $E_1^B$ and $G_1^B$ are chain homotopy inverses of each other by Proposition~\ref{prop:connectedsummapsarehomotopyinverses}, it is sufficient to show that \[G_1^B\circ \cE_1\simeq \id.\] By Lemma~\ref{lem:connsumgraphcobcomp2}, this amounts to showing that \begin{equation}\phi_* F_3 B_\lambda S_{w_2'}^+\psi_* \cE_1\simeq \id_{\CF^-(\Sigma_1,\as_1,\bs_1)\otimes \CF^-(\Sigma_2,\as_2,\bs_2)},\label{eq:OS=GraphTQFT1}\end{equation} where $\psi$ is a diffeomorphism pushing $w$ into $Y_1$ slightly, $w_2'$ is a new basepoint in the $Y_2$ side of $Y_1\# Y_2$, $\lambda$ is a path from $\psi(w)$ to $w_2'$ crossing the connected sum region, and $\phi$ is a diffeomorphism of $Y_1\sqcup Y_2$ which moves $\psi(w)$ to $w_1$ and $w_2'$ to $w_2$. The remainder of the proof establishes Equation~\eqref{eq:OS=GraphTQFT1}. We warn the reader that the proof is quite involved, though the strategy is relatively straightforward to summarize. 
Using properties of the graph TQFT and the holomorphic triangle counts used to show the well-definedness of the generalized 1-handle and 3-handle maps, we will manipulate the expression in Equation~\eqref{eq:OS=GraphTQFT1} until it becomes a holomorphic triangle count on $\Sigma_1\sqcup \Sigma_2$ which counts the same holomorphic triangles as appear in the transition map associated to a small isotopy of the attaching curves on each diagram. Pick diagrams $(\Sigma_1,\as_1,\bs_1,w_1)$ and $(\Sigma_2,\as_2,\bs_2,w_2)$ for $(Y_1,w_1)$ and $(Y_2,w_2)$, respectively. We can form a diagram for $(Y_1\# Y_2,w)$ by taking the connected sum of the two diagrams at $w_1$ and $w_2$, and placing a single basepoint $w$ in the connected sum region. We also need to consider doubly pointed diagrams for $(Y_1\#Y_2,\{\psi(w),w_2'\})$, where $\psi$ and $w_2'$ are as above. To get such a diagram, we need to add an additional pair of attaching curves to $(\Sigma_1\# \Sigma_2,\as_1\cup \as_2,\bs_1\cup \bs_2)$. There are two convenient choices, which are shown in Figure~\ref{fig::36} and are labeled by $\zeta$ and $\tau$. The curves marked with $\tau$ are chosen so that they can be used to compute the free-stabilization maps at $w_2'$, and the curves marked with $\zeta$ can be used to compute the 3-handle map (we are abusing notation and writing $\zeta$ or $\tau$ for both a curve and a small Hamiltonian translate). We will write $F_3^{\zeta,\zeta}$ for $F_3$, for emphasis. \begin{figure}[ht!] \centering \input{fig36.pdf_tex} \caption{\textbf{The pairs of attaching curves labeled $\tau$ and $\zeta$ in the connected sum region of $\Sigma_1\# \Sigma_2$.} The $\zeta$ curves can be used to compute the 3-handle map $F_3=F_3^{\zeta,\zeta}$, while the $\tau$ curves can be used to compute the free-stabilization map $S_{w_2'}^+$. The path $\lambda$ is also shown (dashed).
\label{fig::36}} \end{figure} Rewriting Equation \eqref{eq:OS=GraphTQFT1} using the definition of $\cE_1$, we wish to show that \begin{equation}\phi_*F_3^{\zeta,\zeta} B_\lambda S^+_{w_2'}\psi_* F_{\as_1\cup\as_2,\bs_1\cup \as_2,\bs_1\cup\bs_2} (F_1^{\as_2,\as_2}(-), F_1^{\bs_1,\bs_1}(-)) \label{eq:OS=GraphTQFT2'}\end{equation} \[\simeq \id_{\CF^-(\Sigma_1,\as_1,\bs_1)}(-)\otimes \id_{\CF^-(\Sigma_2,\as_2,\bs_2)}(-).\] Noting that $\psi$ can be chosen to fix the Heegaard surface $\Sigma_1\# \Sigma_2\subset Y_1\# Y_2$, as well as the curves $\as_i$ and $\bs_i$, we can bring $\psi_*$ inside the triangle map so that the composition on the left side of Equation \eqref{eq:OS=GraphTQFT2'} becomes \begin{equation}\phi_*F_3^{\zeta,\zeta} B_\lambda S^+_{w_2'} F_{\as_1\cup\as_2,\bs_1\cup \as_2,\bs_1\cup\bs_2} (\psi_*F_1^{\as_2,\as_2}(-), \psi_*F_1^{\bs_1,\bs_1}(-))\label{eq:OS=GraphTQFT2}. \end{equation} \begin{comment} To actually compute the composition of the maps in Equation \eqref{eq:OS=GraphTQFT2}, one must also compute a transition map to move between a diagram which can be used to compute the map $S_{w_2'}^+$ and a diagram which can be used to compute $F_3^{\zeta,\zeta}$. In principle, one must also consider the change of almost complex structure maps used to change between almost complex structures which allow the various 1-handle, 3-handle and free-stabilization maps to be computed. However, we will not explicitly keep track of these, since their specific forms are not important to the argument. \end{comment} We define the following sets of attaching curves on $\Sigma_1\# \Sigma_2$: \begin{align*}\mathcal{L}_{\tau}&:=\as_1\cup\{\tau\}\cup \as_2,& \mathcal{L}_{\zeta}&:= \as_1\cup \{\zeta\}\cup \as_2,\\ \mathcal{M}_{\tau}&:= \bs_1\cup \{\tau\} \cup \as_2,& \mathcal{M}_{\zeta}&:=\bs_1\cup \{\zeta\}\cup \as_2,\\ \mathcal{R}_{\tau}&:=\bs_1\cup \{\tau\} \cup \bs_2,& \mathcal{R}_{\zeta}&:=\bs_1\cup \{\zeta\}\cup \bs_2.
\end{align*} Using the relation between the triangle maps and the generalized 1-handle maps in Proposition~\ref{prop:generalized1-handlesandtriangles} (see Remark~\ref{rem:extrabasepoints}), we can pull the expression $S_{w_2'}^+$ inside the triangle map (for appropriately chosen almost complex structures) to see that Equation \eqref{eq:OS=GraphTQFT2} is chain homotopic to \begin{equation}\phi_*F_3^{\zeta,\zeta} B_{\lambda} F_{\cL_{\tau},\cM_{\tau}, \cR_{\tau}}(S_{w_2'}^+ \psi_* F_1^{\as_2,\as_2}(-), S_{w_2'}^+ \psi_* F_1^{\bs_1,\bs_1}(-)).\label{eq:OS=GraphTQFT3}\end{equation} The expression in Equation \eqref{eq:OS=GraphTQFT3} isn't quite sufficient to compute the composition since it still implicitly involves a change of diagrams map to handleslide the two $\tau$ curves into the position of the two $\zeta$ curves. Hence, we insert the transition map $\Psi_{\cL_\tau\to \cL_\zeta}^{\cR_\tau\to \cR_\zeta}$ immediately to the left of the triangle map in Equation \eqref{eq:OS=GraphTQFT3} to rewrite Equation \eqref{eq:OS=GraphTQFT3} as \[\phi_*F_3^{\zeta,\zeta} B_{\lambda} \Psi_{\cL_\tau\to \cL_\zeta}^{\cR_\tau\to \cR_\zeta} F_{\cL_{\tau},\cM_{\tau}, \cR_{\tau}}(S_{w_2'}^+\psi_* F_1^{\as_2,\as_2}(-), S_{w_2'}^+ \psi_*F_1^{\bs_1,\bs_1}(-)).\] By the construction of the transition maps, we have \[\Psi_{\cL_\tau\to \cL_\zeta}^{\cR_\tau\to \cR_\zeta}=\Psi_{\cL_\tau\to \cL_\zeta}^{\cR_{\zeta}}\circ \Psi_{\cL_\tau}^{\cR_\tau\to \cR_\zeta}.\] Individually, each of $\Psi_{\cL_\tau\to \cL_{\zeta}}^{\cR_{\zeta}}$ and $\Psi^{\cR_\tau\to \cR_\zeta}_{\cL_\tau}$ can be computed by a holomorphic triangle map. For example, the map $\Psi_{\cL_\tau}^{\cR_{\tau}\to \cR_{\zeta}}$ satisfies \[\Psi_{\cL_\tau}^{\cR_{\tau}\to \cR_{\zeta}}(-)\simeq F_{\cL_\tau, \cR_{\tau}, \cR_{\zeta}}(-, \Theta_{\cR_\tau, \cR_{\zeta}}^+),\] where $\Theta_{\cR_\tau, \cR_{\zeta}}^+\in \CF^-(\Sigma_1\# \Sigma_2, \cR_\tau, \cR_{\zeta})$ is a cycle which represents the top degree element of homology. 
The map $\Psi_{\cL_\tau\to \cL_{\zeta}}^{\cR_{\zeta}}$ takes a similar form. A straightforward argument using associativity of the triangle maps (twice) shows that \begin{equation}\begin{split}\,&\phi_*F_3^{\zeta,\zeta} B_{\lambda}\Psi_{\cL_\tau\to \cL_\zeta}^{\cR_{\zeta}} \Psi_{\cL_\tau}^{\cR_\tau\to \cR_\zeta}F_{\cL_{\tau},\cM_{\tau}, \cR_{\tau}}(S_{w_2'}^+ \psi_*F_1^{\as_2,\as_2}(-), S_{w_2'}^+ \psi_* F_1^{\bs_1,\bs_1}(-)) \\ \simeq &\phi_*F_3^{\zeta,\zeta} B_{\lambda}F_{\cL_{\zeta},\cM_{\tau}, \cR_{\zeta}}(\Psi_{\cL_\tau\to \cL_{\zeta}}^{\cM_\tau} S_{w_2'}^+ \psi_* F_1^{\as_2,\as_2}(-),\Psi^{\cR_\tau\to \cR_\zeta}_{\cM_\tau} S_{w_2'}^+\psi_* F_1^{\bs_1,\bs_1}(-)).\end{split}\label{eq:OS=GraphTQFT4}\end{equation} We now wish to change $\cM_\tau$ to $\cM_{\zeta}$ in the above triangle map. Naturality of Heegaard Floer homology implies that \begin{equation}\Psi_{\cL_\zeta}^{\cM_{\zeta}\to \cM_{\tau}}\circ \Psi_{\cL_\zeta}^{\cM_{\tau}\to \cM_{\zeta}}\simeq \id_{\CF^-(\Sigma_1\# \Sigma_2, \cL_{\zeta}, \cM_{\tau})}.\label{eq:changethenchangeback=id}\end{equation} Inserting Equation~\eqref{eq:changethenchangeback=id} into Equation~\eqref{eq:OS=GraphTQFT4}, and using associativity of the triangle maps together with the fact that $\Psi_{\cL_\zeta}^{\cM_{\zeta}\to \cM_{\tau}}$ can be realized as the triangle map $F_{\cL_{\zeta},\cM_{\zeta},\cM_{\tau}}(-, \Theta_{\cM_{\zeta},\cM_\tau}^+)$, we see that \begin{align*}\, &\phi_* F_3^{\zeta,\zeta} B_{\lambda} F_{\cL_{\zeta},\cM_{\tau}, \cR_{\zeta}}(\Psi_{\cL_\tau\to \cL_{\zeta}}^{\cM_\tau} S_{w_2'}^+ \psi_*F_1^{\as_2,\as_2},\Psi^{\cR_\tau\to \cR_\zeta}_{\cM_\tau} S_{w_2'}^+\psi_* F_1^{\bs_1,\bs_1})\\ \simeq &\phi_* F_3^{\zeta,\zeta} B_{\lambda} F_{\cL_{\zeta},\cM_{\tau}, \cR_{\zeta}}(\Psi_{\cL_\zeta}^{\cM_{\zeta}\to \cM_{\tau}}\Psi_{\cL_\zeta}^{\cM_{\tau}\to \cM_{\zeta}}\Psi_{\cL_\tau\to \cL_{\zeta}}^{\cM_\tau} S_{w_2'}^+\psi_* F_1^{\as_2,\as_2},\Psi^{\cR_\tau\to \cR_\zeta}_{\cM_\tau} S_{w_2'}^+
\psi_*F_1^{\bs_1,\bs_1})\\ \simeq &\phi_* F_3^{\zeta,\zeta} B_{\lambda} F_{\cL_{\zeta},\cM_{\zeta}, \cR_{\zeta}}(\Psi_{\cL_\zeta}^{\cM_{\tau}\to \cM_{\zeta}}\Psi_{\cL_\tau\to \cL_{\zeta}}^{\cM_\tau} S_{w_2'}^+\psi_* F_1^{\as_2,\as_2},\Psi^{\cR_{\zeta}}_{\cM_{\tau}\to \cM_{\zeta}}\Psi^{\cR_{\tau}\to \cR_{\zeta}}_{\cM_{\tau}} S_{w_2'}^+\psi_* F_1^{\bs_1,\bs_1}). \end{align*} We condense the above expression slightly by combining some of the transition maps to arrive at the expression \begin{equation}\phi_* F_3^{\zeta,\zeta} B_{\lambda} F_{\cL_{\zeta},\cM_{\zeta}, \cR_{\zeta}}(\Psi_{\cL_\tau\to \cL_{\zeta}}^{\cM_{\tau}\to \cM_{\zeta}} S_{w_2'}^+ \psi_* F_1^{\as_2,\as_2}(-),\Psi_{\cM_{\tau}\to \cM_{\zeta}}^{\cR_{\tau}\to \cR_{\zeta}} S_{w_2'}^+\psi_* F_1^{\bs_1,\bs_1}(-)) .\label{eq:OS=GraphTQFT7} \end{equation} Lemma~\ref{lem:graphactionandtriangles} allows us to bring $B_{\lambda}$ inside the triangle map to see that Equation \eqref{eq:OS=GraphTQFT7} is chain homotopic to \begin{equation}\phi_* F_3^{\zeta,\zeta} F_{\cL_{\zeta},\cM_{\zeta}, \cR_{\zeta}}(\Psi_{\cL_\tau\to \cL_{\zeta}}^{\cM_{\tau}\to \cM_{\zeta}} S_{w_2'}^+ \psi_*F_1^{\as_2,\as_2}(-),B_{\lambda}\Psi_{\cM_{\tau}\to \cM_{\zeta}}^{\cR_{\tau}\to \cR_{\zeta}} S_{w_2'}^+\psi_* F_1^{\bs_1,\bs_1}(-)).\label{eq:OS=GraphTQFT5}\end{equation} We now define a map \begin{equation}\Top^+_{(\Sigma_2,\as_2,\as_2)}:\CF^-(\Sigma_1,\as_1,\bs_1,w_1)\to \CF^-(\Sigma_1, \as_1, \bs_1, w_1)\otimes \CF^-(\Sigma_2,\as_2,\as_2,w_2')\label{eq:OS=GraphTQFT9}\end{equation} by the formula \[\Top^+_{(\Sigma_2,\as_2,\as_2)}(\ve{x})=\ve{x}\otimes \Theta_{\as_2,\as_2}^+,\] where $\Theta^+_{\as_2,\as_2}\in \CF^-(\Sigma_2,\as_2,\as_2,w_2')$ is the top degree generator. 
We claim that \begin{equation}\Psi_{\cL_\tau\to \cL_{\zeta}}^{\cM_{\tau}\to \cM_{\zeta}} S_{w_2'}^+ \psi_* F_1^{\as_2,\as_2}(-)\simeq F_1^{\zeta,\zeta} \psi'_* \Top_{(\Sigma_2,\as_2,\as_2)}^+(-) ,\label{eq:OS=GraphTQFT6}\end{equation} where $\psi'$ is the diffeomorphism of $\Sigma_1$ obtained by pushing $w_1$ to $\psi(w)$, and $F_1^{\zeta,\zeta}$ is the 1-handle map for attaching a 1-handle with feet at $w_1$ and $w_2$, using attaching curves in the 1-handle region equal to Hamiltonian translates of $\zeta$. Equation \eqref{eq:OS=GraphTQFT6} follows from the fact that the generalized 1-handle map is well defined (i.e. the holomorphic triangle counts of Proposition~\ref{prop:generalized1-handlesandtriangles}, as well as the change of almost complex structure computation of Lemma~\ref{lem:generalized1-handlemapiscompof1-handles}). This is demonstrated schematically in Figure~\ref{fig::53}. We remark that technically we are using the well-definedness of a version of the generalized 1-handle map where we allow additional basepoints on our diagram for $(S^1\times S^2)^{\# g(\Sigma_0)}$, though the holomorphic disk and triangle counts from Section~\ref{sec:generalized1--handleand3--handlemaps} can be adapted to this situation with only minor notational changes (see Remark~\ref{rem:extrabasepoints}). \begin{figure}[ht!] 
\centering \input{fig53.pdf_tex} \caption{\textbf{A schematic of the relation $\Psi_{\cL_\tau\to \cL_{\zeta}}^{\cM_{\tau}\to \cM_{\zeta}} S_{w_2'}^+ \psi_* F_1^{\as_2,\as_2}(-)\simeq F_1^{\zeta,\zeta} \psi'_* \Top_{(\Sigma_2,\as_2,\as_2)}^+(-)$.} The relation follows from the well-definedness of the generalized 1-handle map, since both compositions can be interpreted as a version of the generalized 1-handle map, with the connected sum operation being taken near $\psi(w)$.\label{fig::53}} \end{figure} Using Equation~\eqref{eq:OS=GraphTQFT9}, Equation~\eqref{eq:OS=GraphTQFT5} now becomes \begin{equation}\phi_*F_3^{\zeta,\zeta} F_{\cL_{\zeta},\cM_{\zeta}, \cR_{\zeta}}(F_1^{\zeta,\zeta} \psi'_* \Top_{(\Sigma_2,\as_2,\as_2)}^+(-),B_{\lambda}\Psi_{\cM_{\tau}\to \cM_{\zeta}}^{\cR_{\tau}\to \cR_{\zeta}} S_{w_2'}^+ \psi_* F_1^{\bs_1,\bs_1}(-)).\label{eq:OS=graphTQFT20}\end{equation} Using the relation between the triangle maps and the generalized 3-handle maps from Proposition~\ref{prop:generalized1-handlesandtriangles}, we see that, for an appropriate choice of almost complex structure, Equation~\eqref{eq:OS=graphTQFT20} is equal to \begin{equation}\phi_*F_{\as_1\cup\as_2,\bs_1\cup \as_2,\bs_1\cup\bs_2}( \psi'_* \Top_{(\Sigma_2,\as_2,\as_2)}^+(-), F_3^{\zeta,\zeta}B_{\lambda}\Psi_{\cM_{\tau}\to \cM_{\zeta}}^{\cR_{\tau}\to \cR_{\zeta}} S_{w_2'}^+\psi_* F_1^{\bs_1,\bs_1}(-)).\label{eq:OS=graphTQFT11}\end{equation} We remark that the underlying Heegaard surface of the triangle count in Equation \eqref{eq:OS=graphTQFT11} is the disjoint union $\Sigma_1\sqcup \Sigma_2$, since we surger out $\zeta$ when moving the 3-handle map inside the triangle map. We now wish to rearrange the terms appearing in the right component of the triangle map. Recall that $\psi$ is the diffeomorphism of $Y_1\# Y_2$ which moves $w$ into the $Y_1$ side of $Y_1\# Y_2$. 
The diffeomorphism $\psi$ and the curves $\zeta$ and $\tau$ can be chosen so that $\psi$ fixes $w_2',\tau$ and $\zeta$, implying \begin{equation}\Psi_{\cM_{\tau}\to \cM_{\zeta}}^{\cR_{\tau}\to \cR_{\zeta}}S_{w_2'}^+\psi_*=\psi_*\Psi_{\cM_{\tau}\to \cM_{\zeta}}^{\cR_{\tau}\to \cR_{\zeta}}S_{w_2'}^+. \label{eq:OS=graphTQFT19} \end{equation} Define $\lambda'':=\psi^{-1}(\lambda),$ so that tautologically \begin{equation}\psi_* B_{\lambda''}\simeq B_{\lambda}\psi_*.\label{eq:OS=graphTQFT17}\end{equation} Using Equations \eqref{eq:OS=graphTQFT19} and \eqref{eq:OS=graphTQFT17}, we see that \begin{equation}F_3^{\zeta,\zeta}B_{\lambda}\Psi_{\cM_{\tau}\to \cM_{\zeta}}^{\cR_{\tau}\to \cR_{\zeta}} S_{w_2'}^+\psi_* F_1^{\bs_1,\bs_1}(-)\simeq F_3^{\zeta,\zeta} \psi_* B_{\lambda''} \Psi_{\cM_{\tau}\to \cM_{\zeta}}^{\cR_{\tau}\to \cR_{\zeta}} S_{w_2'}^+ F_1^{\bs_1,\bs_1}(-).\label{eq:OS=GraphTQFT8}\end{equation} We note that $F_1^{\bs_1,\bs_1}$ commutes with $S_{w_2'}^+$ since 1-handles and free-stabilization maps commute with each other \cite{ZemGraphTQFT}*{Proposition~6.23} by a change of almost complex structure computation similar to Lemma~\ref{lem:generalized1-handlemapiscompof1-handles}. Furthermore \begin{equation}\Psi_{\cM_{\tau}\to \cM_{\zeta}}^{\cR_{\tau}\to \cR_{\zeta}} F_1^{\bs_1,\bs_1}(-)\simeq F_1^{\bs_1,\bs_1} \Psi_{\as_2\cup \{\tau\}\to \as_2\cup \{\zeta\}}^{\bs_2\cup \{\tau\}\to \bs_2\cup \{\zeta\}}(-)\label{eq:OS=graphTQFT16}\end{equation} by the holomorphic triangle computation of Proposition~\ref{prop:generalized1-handlesandtriangles}. 
Since the relative homology map $B_{\lambda''}$ commutes with 1-handle maps by \cite{ZemGraphTQFT}*{Lemma~8.6} (or by just examining the domains of the holomorphic curves appearing in Proposition~\ref{prop:differentialcomp}), we note that \begin{equation}B_{\lambda''}F_1^{\bs_1,\bs_1}\simeq F_1^{\bs_1,\bs_1} B_{\lambda_0},\label{eq:OS=graphTQFT15} \end{equation} for a path $\lambda_0$ in $Y_2$ from $w_2$ to $w_2'$, contained in a neighborhood of $w_2\in Y_2$. Using Equations \eqref{eq:OS=graphTQFT16} and \eqref{eq:OS=graphTQFT15}, we conclude that Equation \eqref{eq:OS=GraphTQFT8} is chain homotopic to \begin{equation} F_3^{\zeta,\zeta} \psi_*F_1^{\bs_1,\bs_1}B_{\lambda_0} \Psi_{\as_2\cup \{\tau\}\to \as_2\cup \{\zeta\}}^{\bs_2\cup \{\tau\}\to \bs_2\cup \{\zeta\}} S_{w_2'}^+(-).\label{eq:OS=GraphTQFT10}\end{equation} We now claim that \begin{equation}F_3^{\zeta,\zeta}\psi_*F_1^{\bs_1,\bs_1}(-)\simeq \Top_{(\Sigma_1,\bs_1,\bs_1)}^+S^-_{w_2}(-),\label{eq:OS=graphTQFT12}\end{equation} where \[\Top_{(\Sigma_1,\bs_1,\bs_1)}^+:\CF^-(\Sigma_2,\as_2,\bs_2,w_2')\to \CF^-(\Sigma_1,\bs_1,\bs_1,\psi(w_1))\otimes \CF^-(\Sigma_2,\as_2,\bs_2,w_2')\] is defined by the formula $\Top^+_{(\Sigma_1,\bs_1,\bs_1)}(\xs)=\Theta_{\bs_1,\bs_1}^+\otimes \xs$, analogously to the map in Equation \eqref{eq:OS=GraphTQFT9}. Equation \eqref{eq:OS=graphTQFT12} essentially follows from the formulas of the maps involved; however, one should also use an argument similar to Lemma~\ref{lem:generalized1-handlemapiscompof1-handles} to ensure that a single almost complex structure can be chosen which allows both sides of the equivalence to be computed simultaneously. \begin{figure}[ht!]
\centering \input{fig54.pdf_tex} \caption{\textbf{A schematic of the relation $F_3^{\zeta,\zeta}\psi_*F_1^{\bs_1,\bs_1}(-)\simeq \Top_{(\Sigma_1,\bs_1,\bs_1)}^+S^-_{w_2}(-)$.} The relation follows from the formulas for the maps in the composition.\label{fig::54}} \end{figure} Using Equation \eqref{eq:OS=graphTQFT12}, we see that Equation \eqref{eq:OS=GraphTQFT10} is chain homotopic to \begin{equation}\Top_{(\Sigma_1,\bs_1,\bs_1)}^+ S_{w_2}^- B_{\lambda_0} \Psi_{\as_2\cup \{\tau\}\to \as_2\cup \{\zeta\}}^{\bs_2\cup \{\tau\}\to \bs_2\cup \{\zeta\}} S_{w_2'}^+(-).\label{eq:OS=graphTQFT14}\end{equation} Since the maps appearing in Equation \eqref{eq:OS=graphTQFT14} are all natural, we will omit writing the transition map. This reduces Equation \eqref{eq:OS=graphTQFT14} to \begin{equation}\Top_{(\Sigma_1,\bs_1,\bs_1)}^+ S_{w_2}^- B_{\lambda_0} S_{w_2'}^+(-).\label{eq:OS=graphTQFT13}\end{equation} By Relation~\ref{rel:R9} (the basepoint moving relation), the expression in Equation \eqref{eq:OS=graphTQFT13} is chain homotopic to \begin{equation}\Top^+_{(\Sigma_1,\bs_1,\bs_1)}\phi^{\lambda_0}_*(-),\label{eq:OS=graphTQFT18}\end{equation} where $\phi^{\lambda_0}$ is the diffeomorphism of $Y_1\sqcup Y_2$ obtained by moving $w_2$ to $w_2'$ along the path $\lambda_0$.
Inserting Equation \eqref{eq:OS=graphTQFT18} into Equation \eqref{eq:OS=graphTQFT11}, we see that \[(G_1^B\circ \cE_1)(-,-)\simeq \phi_*F_{\as_1\cup\as_2,\bs_1\cup \as_2,\bs_1\cup\bs_2}( \psi'_* \Top_{(\Sigma_2,\as_2,\as_2)}^+(-), \Top^+_{(\Sigma_1,\bs_1,\bs_1)}\phi^{\lambda_0}_*(-)).\] Bringing $\phi_*$ inside the triangle map cancels the diffeomorphism maps already inside the triangle map, and we are simply left with \[(G_1^B\circ \cE_1)(-,-)\simeq F_{\as_1\cup\as_2,\bs_1\cup \as_2,\bs_1\cup\bs_2}( \Top_{(\Sigma_2,\as_2,\as_2)}^+(-), \Top^+_{(\Sigma_1,\bs_1,\bs_1)}(-)).\] The above triangle map counts triangles on $\Sigma_1\sqcup \Sigma_2$, and is in fact just the tensor product of two transition maps, \[\Psi_{\as_1}^{\bs_1\to \bs_1}(-)\otimes \Psi_{\as_2\to \as_2}^{\bs_2}(-),\] completing the proof. \end{proof} \subsection{A concrete example of Proposition~\ref{prop:OSmapsaregraphcobmaps}} In this section, we give a concrete example of two intertwining maps which are related by the vertex breaking relation satisfied by the graph cobordism maps for connected sums. According to Proposition~\ref{prop:OSmapsaregraphcobmaps}, the intertwining maps are equal to the graph cobordism maps for connected sums, so we can view this as a partial illustration of Proposition~\ref{prop:OSmapsaregraphcobmaps}. We will consider intertwining maps \[\cE_1,\cE_2:\CF^-(S^3,w_1',w_1)\otimes_{\bF_2[U]} \CF^-(S^3,w_2,w_2')\to \CF^-(S^3,w_1',w,w_2'),\] corresponding to taking the connected sum of the two copies of $S^3$ at $w_1$ and $w_2$. The two basepoints $w_1$ and $w_2$ are replaced with a single basepoint $w$ in the connected sum region. We consider two doubly pointed diagrams, $(S^2,\alpha_1,\beta_1,w_1',w_1)$ and $(S^2,\alpha_2,\beta_2,w_2,w_2')$, both for $S^3$, which have $|\alpha_i\cap \beta_i|=2$. The two maps $\cE_1$ and $\cE_2$ are defined by adapting the formulas in Equations \eqref{eq:cE_1def} and \eqref{eq:cE_2def}.
The diagrams for the domain and the range of $\cE_1$ and $\cE_2$ are shown in Figure~\ref{fig::47}. \begin{figure}[ht!] \centering \input{fig47.pdf_tex} \caption{\textbf{The Heegaard diagrams in the domain and the range of the intertwining maps $\cE_1$ and $\cE_2$.}\label{fig::47}} \end{figure} The maps $\cE_1$ and $\cE_2$ are a slight variation of the maps featured in Proposition~\ref{prop:OSmapsaregraphcobmaps}, since in our present situation, the basepoints $w_1$ and $w_2$ are merged together, while $w_1'$ and $w_2'$ are not merged with any other basepoints. Nonetheless, we can define graph cobordism maps $E_1^A$ and $E_1^B$ for the graph cobordisms obtained by adding a 1-handle with feet centered at $w_1$ and $w_2$, and adding a graph as in Figure~\ref{fig::38}. The proof of Proposition~\ref{prop:OSmapsaregraphcobmaps} goes through without change to show that $\cE_1\simeq E_1^B$ and $\cE_2\simeq E_2^B$. We note that using the vertex breaking relation from Lemma~\ref{lem:vertexbreakingrelation}, the maps $E_1^B$ and $E_2^B$ satisfy \[E_1^B+E_2^B\simeq U E_i^B(\Phi_{w_1}\otimes \Phi_{w_2}),\] for any $i\in \{1,2\}$. In this section, we will show by explicit computation that the maps $\cE_1$ and $\cE_2$ satisfy the same relation: \begin{prop}\label{prop:E1E2relationexample}Let $(Y_1,\ws_1)=(S^3,\{w_1',w_1\})$ and $(Y_2,\ws_2)=(S^3,\{w_2,w_2'\})$. The intertwining maps $\cE_1$ and $\cE_2$ for taking the connected sum of $(Y_1,\ws_1)$ and $(Y_2,\ws_2)$ at $w_1\in \ws_1$ and $w_2\in \ws_2$ satisfy the relation \[\cE_1+\cE_2\simeq U \cE_i(\Phi_{w_1}\otimes \Phi_{w_2}),\] for either $i\in \{1,2\}$. \end{prop} The proof of the above proposition is carried out below and follows from the computations stated in Lemmas~\ref{lem:examplecomputation1ststep}, \ref{lem:examplecomputation2ndstep} and \ref{lem:computationEiX-Y-}.
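Granting those computations for the moment, the deduction of the proposition is a short check over $\bF_2$, which we record here for the reader's convenience; the display merely combines the statements of the three lemmas and is not an independent argument. On the generator $X^-Y^-$, \[(\cE_1+\cE_2)(X^-,Y^-)=(C_1+C_2)\, U\cdot z_1^+z_2^+=U\cdot z_1^+z_2^+=U\, \cE_i(\Phi_{w_1}\otimes \Phi_{w_2})(X^-,Y^-),\] since the two copies of $z_1^-z_2^-$ cancel modulo 2, while on the remaining generators $X^+Y^+$, $X^+Y^-$ and $X^-Y^+$ both sides vanish, because $\cE_1$ and $\cE_2$ agree on these intersection points and $\Phi_{w_1}\otimes \Phi_{w_2}$ annihilates them.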
\begin{rem}In this example, we will explicitly compute $\cE_1$ and $\cE_2$ by counting holomorphic triangles, to verify Proposition~\ref{prop:E1E2relationexample}. Using an explicit count of holomorphic disks, it is also possible to compute $E_i^A$ and $E_i^B$ and show that $\cE_i=E_i^B$. We will leave this to the motivated reader. \end{rem} \begin{rem}This example may be interesting for another reason. The proof of the connected sum formula \cite{OSProperties}*{Properties~6.1} uses the fact that the triangle maps used to define the intertwining maps $\cE_1$ and $\cE_2$ can be approximated to first order (in terms of an area filtration) by a map which counts small triangles. The map which counts small triangles is obviously an isomorphism on the underlying groups of the chain complexes, however it is not obvious whether there are other (non-small) triangles which are counted by the actual holomorphic triangle maps. Proposition~\ref{prop:OSmapsaregraphcobmaps} indicates that in general there should be non-small holomorphic triangles which are counted, since otherwise we would expect $\cE_1$ and $\cE_2$ to be equal. In this example, we will compute an intertwining map explicitly, and show that there are non-small triangle classes which have holomorphic representatives, for arbitrary almost complex structures. \end{rem} \begin{rem}\label{rem:MorseBott}This example uses ``Morse-Bott'' gluing of holomorphic curves (i.e. gluing holomorphic curves along non-isolated asymptotics), unlike the gluing results used elsewhere in this paper. To make this fully rigorous, one could try to adapt the arguments from \cite{BourgeoisMorseBott} to our setting. Since this is just an illustrative example, we will not worry about providing a reference for such gluing results. \end{rem} We now begin the computation. 
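Before counting any triangles, we record a quick degree check on the claimed relation. We use the convention, appearing below, that the top graded generators lie in grading $0$ and the bottom graded generators in grading $-1$, together with the standard conventions (assumed here, not established in the text above) that each $\Phi_{w_i}$ is a map of degree $+1$, that multiplication by $U$ has degree $-2$, and that $\cE_i$ preserves this grading, as the computation below bears out. Then both sides of the relation are homogeneous of the same degree, \[\deg\big(U\, \cE_i(\Phi_{w_1}\otimes \Phi_{w_2})\big)=(-2)+0+(1+1)=0=\deg(\cE_1+\cE_2),\] so the relation $\cE_1+\cE_2\simeq U \cE_i(\Phi_{w_1}\otimes \Phi_{w_2})$ is at least consistent with the gradings; for instance, $U\cdot z_1^+z_2^+$ and $z_1^-z_2^-$ both lie in grading $-2$.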
Both $\cE_1$ and $\cE_2$ can be computed by counting holomorphic triangles on the triple \[(S^2,\xis,\zetas,\taus,w_1',w,w_2')=(S^2,\{\xi_1,\xi_2\},\{\zeta_1,\zeta_2\},\{\tau_1,\tau_2\},w_1',w,w_2')\] shown in Figure~\ref{fig::21}; however, the two maps count different homology classes of triangles. \begin{figure}[ht!] \centering \input{fig21.pdf_tex} \caption{\textbf{A Heegaard triple $(S^2,\xis, \zetas, \taus,w_1',w,w_2')$ which can be used to compute both $\cE_1$ and $\cE_2$.}\label{fig::21}} \end{figure} The Maslov index of a homology class of triangles $\psi\in \pi_2(x_1\times x_2, y_1\times y_2,z_1\times z_2)$ satisfies \begin{equation}\mu(\psi)=\gr(x_1\times y_1,z_1)+\gr(x_2\times y_2,z_2)+2(n_w+n_{w_1'}+n_{w_2'})(\psi),\label{eq:Maslovindexexample}\end{equation} where $\gr(x_i\times y_i,z_i)$ denotes the drop in Maslov grading from $x_i\times y_i$ to $z_i$. In the above formula we adopt the convention that the top graded intersection point is in grading zero, so $x_i,y_i$ and $z_i$ each take grading in $\{-1,0\}$. Equation \eqref{eq:Maslovindexexample} can be proven by verifying the claim for any single class of triangles (for example a disjoint union of two 3-gons), and then verifying that the stated formula respects splicing in homology classes of disks and triply periodic domains. Let us write $X^{+}$ and $X^-$ for the two intersection points of $\alpha_1\cap \beta_1$, and let us write $Y^+$ and $Y^-$ for the two intersection points of $\alpha_2\cap \beta_2$. The complex $\CF^-(S^2\sqcup S^2, \{\alpha_1,\alpha_2\}, \{\beta_1,\beta_2\})$ is generated over $\bF_2[U]$ by the intersection points $X^+Y^+,$ $X^+Y^-,$ $X^-Y^+$ and $X^-Y^-$. For the map $\cE_1$, we identify $\CF^-(S^2\sqcup S^2,\{\alpha_1,\alpha_2\}, \{\beta_1,\beta_2\})$ with $\CF^-(S^2\sqcup S^2,\{\xi_1,\zeta_2\}, \{\zeta_1,\tau_2\})$, by identifying $\alpha_1,\alpha_2,\beta_1$ and $\beta_2$ with $\xi_1,\zeta_2,\zeta_1,$ and $\tau_2$, respectively.
Note that there is no ambiguity in such an identification, since the change of almost complex structure maps on $(S^2,\alpha_i,\beta_i)$ are always equal to the identity map on the level of intersection points, as the only nonnegative index 0 homology classes of disks are the constant ones. Under this identification, the map $\cE_1$ is defined by the formula \[\cE_1(X,Y)=F_{\xis,\zetas,\taus}(F_1^{\xi_2,\zeta_2}(X)\otimes F_1^{\zeta_1,\tau_1}(Y)),\] for $X\in \{X^+,X^-\}$ and $Y\in \{Y^+,Y^-\}$. Similarly, the map $\cE_2$ is defined by first identifying $\CF^-(S^2\sqcup S^2, \{\alpha_1,\alpha_2\}, \{\beta_1,\beta_2\})$ with $\CF^-(S^2\sqcup S^2, \{\zeta_1,\xi_2\}, \{\tau_1,\zeta_2\})$ by identifying $\alpha_1,\alpha_2,\beta_1$ and $\beta_2$ with $\zeta_1,\xi_2,\tau_1$ and $\zeta_2$, respectively. After making this identification, the map $\cE_2$ is defined by the formula \[\cE_2(X,Y)=F_{\xis,\zetas,\taus}(F_1^{\xi_1,\zeta_1}(Y)\otimes F_1^{\zeta_2,\tau_2}(X)).\] \begin{comment} we first identify $(S^2\sqcup S^2, \{\alpha_1,\alpha_2\}, \{\beta_1,\beta_2\})$ with $(S^2\sqcup S^2, \{\zeta_1,\xi_2\}, \{\tau_1,\zeta_2\})$ by identifying $\alpha_1,\alpha_2,\beta_1$ and $\beta_2$ with $\zeta_1,\xi_2,\tau_1$ and $\xi_2$, respectively. Note that there is no ambiguity in doing so, as change of almost complex structure maps on $(S^2,\alpha_i,\beta_i)$ are trivial, since the only nonnegative index zero homology classes of disks are the constant ones.
After making this identification, the map $\cE_1$ becomes \[\cE_1=F_{\xis,\zetas,\taus}(F_1^{\zeta_2,\tau_2}\otimes F_1^{\xi_1,\zeta_1}).\] For the map $\cE_2$, we identify $(S^2\sqcup S^2,\{\alpha_1,\alpha_2\}, \{\beta_1,\beta_2\})$ with $(S^2\sqcup S^2,\{\xi_1,\zeta_2\}, \{\zeta_1,\tau_2\})$, and the map $\cE_2$ is given by the formula \[\cE_2=F_{\xis,\zetas,\taus}(F_1^{\xi_2,\zeta_2}\otimes F_1^{\zeta_1,\tau_1}).\] \end{comment} As a first step towards Proposition~\ref{prop:E1E2relationexample}, we prove the following: \begin{lem}\label{lem:examplecomputation1ststep}The maps $\cE_1$ and $\cE_2$ satisfy \[\cE_i(X^+,Y^+)=z_1^+z_2^+,\qquad \cE_i(X^+,Y^-)=z_1^+z_2^-\qquad\text{and} \qquad \cE_i(X^-,Y^+)=z_1^-z_2^+\] for both $i=1$ and $i=2$. Moreover, $\Phi_{w_1}\otimes \Phi_{w_2}$ vanishes on the three intersection points $X^+Y^+,$ $X^+Y^-$ and $X^-Y^+$. \end{lem} \begin{proof}We begin with the computation of $\Phi_{w_i}$. Note that on the complexes $\CF^-(S^2,\alpha_1,\beta_1,w_1',w_1)$ and $\CF^-(S^2,\alpha_2,\beta_2,w_2,w_2')$, the maps $\Phi_{w_i}$ count representatives of the single bigon going over $w_i$, and hence it's easy to verify that \begin{equation}\Phi_{w_1}(X^-)=X^+,\qquad \Phi_{w_1}(X^+)=0, \qquad \Phi_{w_2}(Y^-)=Y^+ \qquad \text{and} \qquad \Phi_{w_2}(Y^+)=0.\label{eq:Phiwcomputation}\end{equation} As a consequence, $\Phi_{w_1}\otimes \Phi_{w_2}$ vanishes on $X^+Y^+,$ $X^+Y^-$ and $X^-Y^+$. We now proceed with the holomorphic triangle computation. We use Equation \eqref{eq:Maslovindexexample} for the Maslov index of a homology class of triangles. For triangles counted by $\cE_1$ or $\cE_2$ when applied to any element except $X^-Y^-$, the term $\gr(x_1\times y_1,z_1)+\gr(x_2\times y_2,z_2)$ appearing in the Maslov index formula takes values in $\{-1,0,1,2\}$.
Since the remaining terms in the Maslov index expression are even and nonnegative, this implies that any Maslov index 0 homology class which contributes to $\cE_i(X^+,Y^+)$, $\cE_i(X^+,Y^-)$ or $\cE_i(X^-,Y^+)$ has \[\gr(x_1\times y_1,z_1)+\gr(x_2\times y_2,z_2)=n_w(\psi)=n_{w_1'}(\psi)=n_{w_2'}(\psi)=0.\] From here it is easy to check by considering domains on the diagram that the only holomorphic triangles which can be counted have domain equal to a pair of small triangles, and that \[\cE_i(X^+,Y^+)=z_1^+z_2^+,\qquad \cE_i(X^+,Y^-)=z_1^+z_2^-\qquad\text{and} \qquad \cE_i(X^-,Y^+)=z_1^-z_2^+.\] \end{proof} As a next step, we prove the following: \begin{lem}\label{lem:examplecomputation2ndstep}The maps $\cE_1$ and $\cE_2$ satisfy \[U\cE_i(\Phi_{w_1}\otimes \Phi_{w_2})(X^-,Y^-)=U \cdot z_1^+z_2^+.\] \end{lem} \begin{proof} This follows from Equation \eqref{eq:Phiwcomputation} and Lemma~\ref{lem:examplecomputation1ststep}. \end{proof} Using Lemmas~\ref{lem:examplecomputation1ststep} and \ref{lem:examplecomputation2ndstep}, in order to show Proposition~\ref{prop:E1E2relationexample}, it is sufficient to show that \[(\cE_1+\cE_2)(X^-,Y^-)=U\cdot z_1^+z_2^+.\] To this end, we will perform the following computation: \begin{lem}\label{lem:computationEiX-Y-} The maps $\cE_1$ and $\cE_2$ satisfy \[\cE_1(X^-,Y^-)=z_1^-z_2^-+C_1 U\cdot z_1^+z_2^+ \qquad \text{and} \qquad \cE_2(X^-,Y^-)=z_1^-z_2^-+C_2 U\cdot z_1^+z_2^+\] for constants $C_1,C_2\in \bF_2$ which depend on the choice of almost complex structure. Furthermore, for any generic choice of almost complex structure, the constants $C_1$ and $C_2$ satisfy \[C_1+C_2=1\in \bF_2.\] \end{lem} \begin{proof}We first describe the homology classes of triangles which are counted by $\cE_i(X^-,Y^-)$.
For any triangle $\psi\in \pi_2(x_1\times x_2,y_1\times y_2,z_1\times z_2)$ counted by $\cE_i(X^-,Y^-)$, we have \[\gr(x_1\times y_1)=\gr(x_2\times y_2)=-1,\] using the convention that the top graded intersection point is in grading zero. Hence the expression \[\gr(x_1\times y_1,z_1)+\gr(x_2\times y_2,z_2),\] appearing in Equation \eqref{eq:Maslovindexexample}, takes values in $\{-2,-1,0\}$. Examining Equation \eqref{eq:Maslovindexexample}, for a class $\psi$ which could contribute to $\cE_i(X^-,Y^-)$ there are two possible cases: \begin{enumerate} \item \label{eq:typeoftriangle1}$n_{w}(\psi)=n_{w'_1}(\psi)=n_{w_2'}(\psi)=0$, $z_1=z_1^-$ and $z_2=z_2^-$. \item \label{eq:typeoftriangle2} $n_{w}(\psi)+n_{w'_1}(\psi)+n_{w_2'}(\psi)=1$, $z_1=z_1^+$ and $z_2=z_2^+$. \end{enumerate} Any triangle satisfying Condition~\eqref{eq:typeoftriangle1} consists of a pair of 3-gons, with zero multiplicity on any of the basepoints. Furthermore, such classes always have a unique holomorphic representative, and thus contribute $z_1^-z_2^-$ to both $\cE_1(X^-,Y^-)$ and $\cE_2(X^-,Y^-)$. We now consider classes satisfying Condition~\eqref{eq:typeoftriangle2}. By examining the diagram, one can verify that there are no nonnegative, index 0 classes of triangles with $n_w(\psi)=0$ and $n_{w_1'}(\psi)+n_{w_2'}(\psi)=1$. Hence the only remaining possibility for triangles satisfying Condition~\eqref{eq:typeoftriangle2} is for \[n_w(\psi)=1, \qquad n_{w_1'}(\psi)=n_{w_2'}(\psi)=0\qquad \text{and}\qquad \gr(x_1\times y_1,z_1)=\gr(x_2\times y_2,z_2)=-1.\] Analogously to Equation \eqref{eq:Maslovindexexample}, the Maslov index of a triangle $\psi_1\in \pi_2(x_1,y_1,z_1)$ on $(S^2,\xi_1,\zeta_1,\tau_1)$ satisfies \[\mu(\psi_1)=\gr(x_1\times y_1, z_1)+2(n_{w_1}+n_{w_1'})(\psi_1),\] and similarly for a class $\psi_2$ on $(S^2,\xi_2,\zeta_2,\tau_2)$.
It follows that if $\psi=\psi_1\# \psi_2$ is an index 0 homology class which contributes to either $\cE_1(X^-,Y^-)$ or $\cE_2(X^-,Y^-)$, which also satisfies Condition \eqref{eq:typeoftriangle2}, then $\psi_1$ and $\psi_2$ both have Maslov index 1. There are two index 1 classes, $\theta_1$ and $\theta_1'$, on $(S^2,\zeta_1,\xi_1,\tau_1)$ which can contribute to $\cE_1(X^-,Y^-)$, and similarly there are two index 1 classes, $\theta_2$ and $\theta_2'$, on $(S^2,\zeta_2,\xi_2,\tau_2)$ which can contribute to $\cE_1(X^-,Y^-)$. The map $\cE_1$ counts representatives of the connected sum classes $\theta_1\# \theta_2$, $\theta_1'\# \theta_2, \theta_1\# \theta_2'$ and $\theta_1'\# \theta_2'$. These classes are shown in Figure~\ref{fig::22}. Similarly there are two index 1 classes, $\rho_1$ and $\rho_1'$, on $(S^2,\xi_1,\zeta_1,\tau_1)$ which can contribute to $\cE_2(X^-,Y^-)$, as well as two index 1 classes, $\rho_2$ and $\rho_2'$, on $(S^2,\xi_2,\zeta_2,\tau_2)$, which can contribute to $\cE_2(X^-,Y^-)$. These four classes are shown in Figure~\ref{fig::22}. The expression $\cE_2(X^-,Y^-)$ is computed by counting representatives of $\rho_1\# \rho_2,\rho_1'\# \rho_2,\rho_1\# \rho_2'$ and $\rho_1'\# \rho_2'$. \begin{figure}[ht!] \centering \input{fig50.pdf_tex} \caption{\textbf{Index 1 homology classes $\theta_1,\theta_1',\theta_2,\theta_2',\rho_1,\rho_1',\rho_2,\rho_2'$ which contribute to $\cE_1$ and $\cE_2$.} Representatives of the index 0 classes in $\Theta=\{\theta_1\# \theta_2, \theta_1'\# \theta_2,\theta_1\# \theta_2',\theta_1'\# \theta_2'\}$ are counted by $\cE_1(X^-,Y^-)$. Representatives of the index 0 classes in $P=\{\rho_1\# \rho_2,\rho_1'\# \rho_2,\rho_1\# \rho_2',\rho_1'\# \rho_2'\}$ are counted by $\cE_2(X^-,Y^-)$.
\label{fig::22}} \end{figure} We define the sets of classes \[\Theta:=\{\theta_1\# \theta_2, \theta_1'\# \theta_2,\theta_1\# \theta_2', \theta_1'\# \theta_2'\}\] and similarly \[P:=\{\rho_1\# \rho_2,\rho_1'\# \rho_2,\rho_1\# \rho_2',\rho_1'\# \rho_2'\}.\] If $J$ is an almost complex structure on $\Sigma\times \Delta$, we will write $\cM_J(\Theta)$ and $\cM_J(P)$ for the disjoint unions of the moduli spaces of each of the elements of $\Theta$ and $P$, respectively. Note that the constants $C_1$ and $C_2$ appearing in the lemma statement are given by \[C_1=\# \cM_J(\Theta)\qquad \text{and} \qquad C_2=\# \cM_J(P).\] The lemma statement follows from the fact that \[\#\cM_J(\Theta)+\# \cM_J(P)\equiv 1 \pmod{2},\] for generic $J$, which is proven in Lemma~\ref{lem:countofholomorphiccurves}, below. \end{proof} \begin{lem}\label{lem:countofholomorphiccurves} For a generic choice of almost complex structure $J$ on $\Sigma\times \Delta$, we have \[\# \cM_J(\Theta)+\# \cM_J(P)\equiv 1 \pmod{2}.\] \end{lem} As a first step towards the above lemma, we will show the following: \begin{lem}For a generic choice of almost complex structure $J$ on $\Sigma\times \Delta$, the quantity $\#\cM_J(\Theta)+\# \cM_J(P)$ is independent (modulo 2) of the almost complex structure $J$. \end{lem} \begin{proof}This follows from standard Floer theoretic arguments (compare \cite{LipshitzCylindrical}*{Lemma~2.19}). As a first step, let us consider a path $(J_t)_{t\in [0,1]}$ of almost complex structures on $\Sigma\times \Delta$ which is fixed for all $t$ on the cylindrical ends of $\Sigma\times \Delta$. In this case, we will show that \[\# \cM_{J_0}(\Theta)\equiv \# \cM_{J_1}(\Theta)\pmod{2}\] and \[\# \cM_{J_0}(P)\equiv \# \cM_{J_1}(P)\pmod{2}.\] Consider the 1-dimensional moduli spaces $\bigcup_t \cM_{J_t}(P)$ and $\bigcup_t \cM_{J_t}(\Theta)$. There are ends of $\bigcup_t \cM_{J_t}(\Theta)$ corresponding to $J_0$ and $J_1$ holomorphic triangles.
We claim that the only other ends correspond to an index 1 holomorphic strip breaking off. If $\psi$ is an index $-1$ homology class of triangles, then for a generic path $J_t$, at all but finitely many $t$, the space $\cM_{J_t}(\psi)$ will be empty. However, at finitely many $t\in [0,1]$ the space $\cM_{J_t}(\psi)$ may be nonempty. At such $t$, a sequence of index 0 holomorphic triangles may break into an index 1 disk and an index $-1$ triangle. In principle, another possible degeneration would be for a sequence of holomorphic triangles to split into an index 0 holomorphic triangle, and a non-constant index 0 holomorphic disk on one of the ends. This sort of degeneration is prohibited for our present choice of $J_t$, since we've assumed that the path $J_t$ is constant (in $t$) on the cylindrical ends of $\Sigma\times \Delta$, so index 0 strips are constant, by transversality. Upon direct inspection, the only way for a sequence of index 0 triangles in $\bigcup_{t}\cM_{J_t}(\Theta)$ or $\bigcup_{t}\cM_{J_t}(P)$ to break into an index $-1$ holomorphic triangle and an index 1 holomorphic strip is for the index 1 strip to be a bigon which doesn't go over the connected sum point. By examining the domains of the classes in $\Theta$ and $P$, we see that degenerations come in pairs. For example, if there is an end of $\bigcup_t \cM_{J_t}(\theta_1\# \theta_2)$ corresponding to a sequence of triangles breaking into an index $-1$ triangle and a bigon on $(S^2,\zeta_1,\tau_1)$, then there is a corresponding end of $\bigcup_t \cM_{J_t}(\theta_1'\# \theta_2)$, arising when a sequence of holomorphic triangles breaks into the same index $-1$ triangle, and a (different) bigon on $(S^2,\zeta_1,\tau_1)$. Reasoning in this way, we see that the total number of ends of $\bigcup_t \cM_{J_t}(\Theta)$ is equal (modulo 2) to $\# \cM_{J_0}(\Theta)+\# \cM_{J_1}(\Theta)$, so the latter quantity is 0 (modulo 2). The claim for $\bigcup_t \cM_{J_t}(P)$ follows similarly.
Hence the quantity $\#\cM_J(\Theta)+\# \cM_J(P)$ depends at most on the cylindrical almost complex structures appearing in the ends of $\Sigma\times \Delta$. We can now consider the effect of changing the almost complex structure on each cylindrical end, individually. We first consider the $\xi$-$\tau$ end of $\Sigma\times \Delta$. Suppose $J_{\xi,\tau}$ and $J_{\xi,\tau}'$ are two choices of cylindrical almost complex structures on the $\xi$-$\tau$ end of $\Sigma\times \Delta$. Suppose $J$ is an almost complex structure which agrees with $J_{\xi,\tau}$ on the $\xi$-$\tau$ end of $\Sigma\times \Delta$. Let $\tilde{J}_{\xi,\tau}$ denote a non-cylindrical almost complex structure on $\Sigma\times [0,1]\times \R$, which is equal to $J_{\xi,\tau}$ on $\Sigma\times [0,1]\times (-\infty,-1]$ and $J_{\xi,\tau}'$ on $\Sigma\times [0,1]\times [1,\infty)$. We now construct a 1-parameter family of almost complex structures $(J_t)_{t\in [1,\infty]}$. Viewing the $\xi$-$\tau$ cylindrical end of $\Sigma\times \Delta$ as $\Sigma\times [0,1]\times [1,\infty)$, we construct $J_t$ by splicing $\tilde{J}_{\xi,\tau}$ into $J$, on the $\xi$-$\tau$ end of $\Sigma\times \Delta$. We do this in a way so that $\Sigma\times [0,1]\times [-1,\infty)$ is pasted onto $\Sigma\times [0,1]\times [1+t,\infty)$. Note that $J_1$ agrees with $J_{\xi,\tau}'$ on the $\xi$-$\tau$ end. Also $J_\infty$ (the pointwise limit of $J_t$) is equal to $J$, and hence agrees with $J_{\xi,\tau}$ on the $\xi$-$\tau$ end. We consider the ends of the space $\bigcup_t\cM_{J_t}(\Theta)\sqcup \bigcup_t \cM_{J_t}(P)$. As in the previous case, strip breaking can occur at finite $t\in [1,\infty)$. By the same argument as in the case where $J_t$ was fixed in the ends of $\Sigma\times \Delta$, strip breaking occurs in pairs within each of $\bigcup_t \cM_{J_t}(\Theta)$ and $\bigcup_{t} \cM_{J_t}(P)$, so such ends contribute an even number of points to either $\# \d \bigcup_t \cM_{J_t}(\Theta)$ or $\# \d \bigcup_t \cM_{J_t}(P)$.
There is a final type of end of $\bigcup_t\cM_{J_t}(\Theta)\sqcup \bigcup_t \cM_{J_t}(P)$, corresponding to $t=\infty$. These ends correspond to an index 0 holomorphic strip on $(S^2\# S^2,\{\xi_1,\xi_2\},\{\tau_1,\tau_2\})$ for the (non-cylindrical) almost complex structure $\tilde{J}_{\xi,\tau}$ breaking off, and leaving an index 0 holomorphic triangle for the almost complex structure $J_{\infty}=J$. The homology classes of such curves are easy to describe. The index 0 disk corresponds to the bigon on $(S^2,\xi_1,\tau_1)$ which goes over the connected sum point, glued to the bigon on $(S^2,\xi_2,\tau_2)$ which also goes over the connected sum point. The remaining holomorphic triangle consists of the disjoint union of two 3-gons, which have zero multiplicity in the connected sum region. The union of two 3-gons has a unique holomorphic representative by the Riemann mapping theorem. Such ends do not come in pairs inside of either $\bigcup_{t} \cM_{J_t}(\Theta)$ or $\bigcup_{t}\cM_{J_t}(P)$, individually, though they do come in pairs inside of the union $\bigcup_{t}(\cM_{J_t}(\Theta)\sqcup \cM_{J_t}(P))$. More specifically, these ends correspond to curves in $\bigcup_t\cM_{J_t}(\theta_1'\# \theta_2')$ splitting into an index 0 holomorphic strip $u$ on $(S^2\# S^2,\{\xi_1,\xi_2\},\{\tau_1,\tau_2\})$, as well as a holomorphic triangle consisting of a union of two 3-gons. There is a corresponding end of $\bigcup_t \cM_{J_t}(P)$ which arises when a sequence of holomorphic triangles in $\bigcup_t \cM_{J_t}(\rho_1'\#\rho_2')$ breaks into the same index 0 holomorphic strip $u$, as well as a (different) pair of holomorphic 3-gons. Such ends contribute 1 to both of the counts of $\# \d \bigcup_{t} \cM_{J_t}(\Theta)$ and $\# \d \bigcup_t \cM_{J_t}(P)$. Changes of the almost complex structure on the other two ends of $\Sigma\times \Delta$ are handled similarly.
The situation is somewhat simpler for the $\xi$-$\zeta$ and $\zeta$-$\tau$ ends than for the $\xi$-$\tau$ end: by looking at the homology classes that appear, it is easy to see there are no ends in $\bigcup_t \cM_{J_t}(\Theta)$ or $\bigcup_t \cM_{J_t}(P)$ corresponding to $t=\infty$ when we change the almost complex structure on the $\xi$-$\zeta$ and $\zeta$-$\tau$ ends. \end{proof} We can now prove Lemma~\ref{lem:countofholomorphiccurves}, showing that in fact $\# \cM_J(\Theta)+\# \cM_J(P)\equiv 1 \pmod{2}$ for arbitrary, generic $J$. \begin{proof}[Proof of Lemma~\ref{lem:countofholomorphiccurves}] The proof proceeds by a neck stretching argument. Let $J(T)$ denote an almost complex structure with a neck length of $T$ inserted between the two copies of $S^2$. Let us define the sets of homology classes \[\Theta_1=\{\theta_1,\theta_1'\}, \quad\Theta_2=\{\theta_2,\theta_2'\},\quad P_1=\{\rho_1,\rho_1'\},\quad \text{and} \quad P_2=\{\rho_2,\rho_2'\}.\] We define maps $\rho^{w_i}$ from $\cM(\Theta_i)$ and $\cM(P_i)$ to $\Delta$ by the formula \[\rho^{w_i}(u)=(\pi_\Delta\circ u)\big((\pi_\Sigma \circ u)^{-1}(w_i)\big).\] For sufficiently large $T$, by using Morse-Bott gluing, the moduli space $\cM_{J(T)}(\Theta)$ is homeomorphic to the fibered product \begin{equation}\cM_{J(T)}(\Theta)\iso \cM(\Theta_1)\times_{\rho^{w_i}} \cM(\Theta_2).\label{eq:MTheta=fiberedprod}\end{equation} Similarly, for sufficiently large $T$, the moduli space $\cM_{J(T)}(P)$ is homeomorphic to the fibered product \begin{equation}\cM_{J(T)}(P)\iso\cM(P_1)\times_{\rho^{w_i}} \cM(P_2).\label{eq:MP=fiberedprod}\end{equation} Note that to be absolutely rigorous, one would need Morse-Bott gluing results (see Remark~\ref{rem:MorseBott}).
We define the 1-dimensional, immersed submanifolds of $\Delta$ \[\ve{d}(\theta_i):=\rho^{w_i}(\cM(\theta_i)),\qquad \text{and} \qquad \ve{d}(\rho_i):=\rho^{w_i}(\cM(\rho_i)),\] and define $\ve{d}(\theta_i')$ and $\ve{d}(\rho_i')$ similarly. Note that the space $\cM(\theta_1)$ has two ends, corresponding to the two ways of breaking $\theta_1$ into a bigon and a 3-gon. Similarly $\cM(\theta_1')$ has two ends, corresponding to the two ways of degenerating $\theta_1'$ into a bigon and 3-gon. Note that each of $\ve{d}(\theta_1)$ and $\ve{d}(\theta_1')$ has exactly one boundary point in the interior of $\Delta$, corresponding to a sequence of index 1 triangles degenerating into a holomorphic bigon and 3-gon. Nonetheless the two boundary points of $\ve{d}(\theta_1)$ and $\ve{d}(\theta_1')$ in the interior of $\Delta$ coincide as points of $\Delta$. Let us define $\ve{d}(\Theta_1):=\ve{d}(\theta_1)\cup \ve{d}(\theta_1')$. The set $\ve{d}(\Theta_1)$ is thus an immersed 1-manifold in $\Delta$, with two asymptotic ends in the cylindrical ends of $\Delta$. One asymptotic end is in the $\xi$-$\zeta$ end of $\Delta$, and one asymptotic end of $\ve{d}(\Theta_1)$ is in the $\xi$-$\tau$ end. We define $\ve{d}(\Theta_2),$ $\ve{d}(P_1)$ and $\ve{d}(P_2)$ analogously, which are also immersed 1-manifolds with no ends in the interior of $\Delta$, but each with exactly two asymptotic ends in the cylindrical ends of $\Delta$. A similar analysis shows that $\ve{d}(\Theta_2)$ is an immersed 1-manifold without boundary, and has asymptotic ends in the $\zeta$-$\tau$ and $\xi$-$\tau$ ends of $\Delta$. The immersed submanifold $\ve{d}(P_1)$ has one asymptotic end in each of the $\zeta$-$\tau$ and $\xi$-$\tau$ ends of $\Delta$, while $\ve{d}(P_2)$ has asymptotic ends in the $\xi$-$\zeta$ and $\xi$-$\tau$ ends of $\Delta$. Note that $(S^2,\xi_1,\tau_1)$ and $(S^2,\xi_2,\tau_2)$ each have a single bigon going over the connected sum point, and each of these bigons has a unique representative.
Each of these bigons gives a map $u_i:[0,1]\times \R\to S^2$, such that $\{0\}\times \R$ is mapped to $\tau_i$ and $\{1\}\times \R$ is mapped to $\xi_i$. Define the points $p_1,p_2\in [0,1]$ by \[\{p_1\}:=\pi_{[0,1]}(u_1^{-1}(w_1)), \qquad \text{and} \qquad \{p_2\}:=\pi_{[0,1]}(u_2^{-1}(w_2)).\] For a generic choice of almost complex structure, there are two possibilities: \begin{enumerate}[label=\text{Case} (\arabic*):,ref=\text{Case} (\arabic*)] \item \label{case:1} $p_1<p_2$; \item \label{case:2} $p_2<p_1$. \end{enumerate} By our fibered product description in Equation \eqref{eq:MTheta=fiberedprod}, we have that \[\#\cM(\Theta)=\#( \ve{d}(\Theta_1)\cap \ve{d}(\Theta_2)).\] In both \ref{case:1} and \ref{case:2}, by drawing a picture, we can compute the algebraic intersection number. In \ref{case:1}, the intersection number is 1, and in \ref{case:2}, the intersection number is 0. Similarly, using our fibered product description, we also have that \[\#\cM(P)=\# (\ve{d}(P_1)\cap \ve{d}(P_2)).\] Using the same reasoning as before, we now have that in \ref{case:1}, the intersection number is 0, and in \ref{case:2}, the intersection number is 1. \ref{case:1} is illustrated in Figure \ref{fig::25}. \begin{figure}[ht!] \centering \input{fig51.pdf_tex} \caption{\textbf{The 1-dimensional submanifolds $\ve{d}(\Theta_1),$ $ \ve{d}(\Theta_2),$ $\ve{d}(P_1),$ and $\ve{d}(P_2)$ of $\Delta$, in \ref{case:1}.} The total count $\# \ve{d}(\Theta_1)\cap \ve{d}(\Theta_2)+\#\ve{d}(P_1)\cap \ve{d}(P_2)$ is $1$. Both $\# \ve{d}(\Theta_1)\cap \ve{d}(\Theta_2)$ and $\#\ve{d}(P_1)\cap \ve{d}(P_2)$ change in passing from \ref{case:1} to \ref{case:2}, but their sum remains $1$.\label{fig::25}} \end{figure} In both \ref{case:1} and \ref{case:2}, we have \[\# \cM(\Theta)+\# \cM(P)\equiv 1 \pmod{2},\] completing the proof.
\end{proof} \section{Heegaard triples and graph cobordisms} \label{sec:Heegaardtriplesandgraphcobordisms} Given a Heegaard triple $(\Sigma,\ve{\alpha},\ve{\beta},\ve{\gamma},\ve{w})$, Ozsv\'{a}th and Szab\'{o} construct a smooth 4-manifold $X_{\as,\bs,\gs}$ \cite{OSDisks}*{Section~8}. The manifold $X_{\as,\bs,\gs}$ has boundary \[\d X_{\as,\bs,\gs}=-Y_{\as,\bs}\sqcup -Y_{\bs,\gs}\sqcup Y_{\as,\gs}.\] The construction of $X_{\as,\bs,\gs}$ is described in Equation \eqref{eq:Xabgdef} of this paper. There is a graph $\Gamma_{\as,\bs,\gs}\subset X_{\as,\bs,\gs}$, defined as follows. Let $v_0\in \Delta$ be a chosen center point. A graph $\Gamma_0\subset \Delta$ can be defined by attaching three edges to $v_0$, which extend radially from $v_0$ to the vertices of $\Delta$. The graph $\Gamma_{\as,\bs,\gs}$ is defined as \[\Gamma_{\as,\bs,\gs}:= \ws\times \Gamma_0.\] We give $\Gamma_{\as,\bs,\gs}$ the cyclic order determined by giving the ends of $X_{\as,\bs,\gs}$ the ordering $-Y_{\as,\bs},-Y_{\bs,\gs}, Y_{\as,\gs}$ (read left to right). A picture is shown in Figure~\ref{fig::40}. \begin{figure}[ht!] \centering \input{fig40.pdf_tex} \caption{\textbf{The 4-manifold with embedded graph $(X_{\as,\bs,\gs},\Gamma_{\as,\bs,\gs})$.} \label{fig::40}} \end{figure} A natural question is whether the holomorphic triangle map $F_{\as,\bs,\gs,\frs}$, defined by the formula \begin{equation}F_{\as,\bs,\gs,\frs}(\xs,\ys)=\sum_{\substack{\psi\in \pi_2(\xs,\ys,\zs)\\ \mu(\psi)=0\\ \frs_{\ws}(\psi)=\frs}}\# \cM(\psi) U^{n_{\ws}(\psi)} \cdot \zs,\label{eq:holtrianglemapdef}\end{equation} is chain homotopic to the graph cobordism map for $(X_{\as,\bs,\gs},\Gamma_{\as,\bs,\gs})$.
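Let us spell out the notation in Equation \eqref{eq:holtrianglemapdef} (this is only a restatement, for the reader's convenience, of the conventions already in use above): the sum is understood to run over all intersection points $\zs$ as well as over homology classes of triangles $\psi$, the quantity $\# \cM(\psi)$ denotes the count, modulo 2, of holomorphic representatives of the Maslov index 0 class $\psi$, and the power of $U$ records the total multiplicity of $\psi$ over the basepoints, i.e. \[n_{\ws}(\psi)=\sum_{w\in \ws} n_w(\psi).\]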
We answer this question in the affirmative: \begin{thm}\label{thm:triplesandgraphcobordismmaps}Suppose that $(\Sigma, \ve{\alpha},\ve{\beta},\ve{\gamma},\ws)$ is a multi-pointed Heegaard triple, and let $(X_{\as,\bs,\gs}, \Gamma_{\as,\bs,\gs}):(Y_{\as,\bs}\sqcup Y_{\bs,\gs},\ws\sqcup \ws)\to (Y_{\as,\gs},\ws)$ denote the ribbon graph cobordism described above. If $\frs\in \Spin^c(X_{\as,\bs,\gs})$, the graph cobordism map $F_{X_{\as,\bs,\gs},\Gamma_{\as,\bs,\gs},\frs}^B$ is chain homotopic to the triangle map $F_{\as,\bs,\gs,\frs}$ defined in Equation \eqref{eq:holtrianglemapdef}, as a map \[\CF^-(\Sigma,\ve{\alpha},\ve{\beta},\ws,\frs_{\as,\bs})\otimes_{\bF_2[U]} \CF^-(\Sigma,\ve{\beta},\ve{\gamma},\ws,\frs_{\bs,\gs})\to \CF^-(\Sigma,\ve{\alpha},\ve{\gamma},\ws,\frs_{\as,\gs}),\] where $\frs_{\as,\bs},$ $\frs_{\bs,\gs}$ and $\frs_{\as,\gs}$ denote the restrictions of $\frs$ to the ends of $X_{\as,\bs,\gs}$. \end{thm} \begin{rem} A sketch of a similar result was communicated to the author by Lipshitz, Ozsv\'{a}th and Thurston \cite{LOTPersonalCommunication}. \end{rem} \begin{proof} We can obtain a handle decomposition for the cobordism $X_{\as,\bs,\gs}$ by examining the handle decomposition of the trace cobordism described in Section~\ref{sec:handledecomptracecobordism}. We start with a Morse function $f_{\bs}$ on $U_{\bs}$ which has $|\ws|$ index 0 critical points, as well as $|\bs|$ index 1 critical points whose ascending manifolds intersect $\Sigma$ along the $\bs$ curves, and has $\Sigma$ as its maximal level set. 
By adapting the handle decomposition for the trace cobordism from Section~\ref{sec:handledecomptracecobordism}, we can give $X_{\as,\bs,\gs}$ the following handle decomposition: \begin{itemize} \item a 1-handle attached for each index 0 critical point of $f_{\bs}$, with one foot attached at an index 0 critical point of $f_{\bs}$ in $U_{\bs}\subset Y_{\as,\bs}$, and the other foot attached at the corresponding critical point of $f_{\bs}$ in $-U_{\bs}\subset Y_{\bs,\gs}$; \item a collection of $|\bs|$ 2-handles, whose framed attaching link $\bL$ is formed by taking the descending manifolds of the index 1 critical points of $f_{\bs}$ in $U_{\bs}\subset Y_{\as,\bs}$, and concatenating them across the 1-handles with their mirrors in $-U_{\bs}\subset Y_{\bs,\gs}$. \end{itemize} We can isotope the handles in this decomposition so that each of the 1-handles is attached with one foot at a basepoint $w\in \ws\subset Y_{\as,\bs}$ and the other foot on the corresponding basepoint in $Y_{\bs,\gs}$. Furthermore, we can perform an isotopy of the graph $\Gamma_{\as,\bs,\gs}$, so that the two edges connected to $Y_{\as,\bs}$ and $Y_{\bs,\gs}$, as well as the trivalent vertex, are contained in the interior of the corresponding 1-handle. The cobordism map $F_{X_{\as,\bs,\gs}, \Gamma_{\as,\bs,\gs},\frs}^B$ is thus equal to the connected sum graph cobordism map $E_1^B$ from Section~\ref{sec:connectedsumsandgraphTQFT} (with $Y_{\as,\bs}$ playing the role of $Y_1$, and $Y_{\bs,\gs}$ playing the role of $Y_2$), followed by the 2-handle map for surgery on $\bL$. We will write $Y_{\as,\bs}\, \#_{\ws} Y_{\bs,\gs}$ for the manifold obtained by adding $|\ws|$ connected sum tubes between $Y_{\as,\bs}$ and $Y_{\bs,\gs}$, and we will abuse notation slightly and write $\ws$ for the basepoints in the connected sum regions of $Y_{\as,\bs}\, \#_{\ws} Y_{\bs,\gs}$. The graph $\Gamma_{\as,\bs,\gs}$ intersects $Y_{\as,\bs}\, \#_{\ws} Y_{\bs,\gs}$ at $\ws$.
It is convenient to start our computation of $F_{X_{\as,\bs,\gs},\Gamma_{\as,\bs,\gs},\frs}$ at the diagram $(\Sigma\sqcup \bar{\Sigma}, \as\cup \bar{\gs}, \bs\cup \bar{\bs}, \ws\sqcup \ws)$. Hence we begin by composing with the transition map \[\id_{\CF^-(\Sigma,\as,\bs)}\otimes \Psi_{(\Sigma,\bs,\gs)\to (\bar{\Sigma},\bar{\gs},\bar{\bs})}.\] We will omit writing this transition map for most of the argument, to condense the notation; however, it will reappear at the end. By Proposition~\ref{prop:OSmapsaregraphcobmaps}, we know that the graph cobordism map $E_1^B$ is chain homotopic to the Ozsv\'{a}th-Szab\'{o} intertwining map \[\cE_1:\CF^-(\Sigma,\ve{\alpha},\ve{\beta},\ws,\frs_{\as,\bs})\otimes_{\bF_2[U]} \CF^-(\bar{\Sigma},\bar{\ve{\gamma}},\ve{\bar{\beta}},\ws,\frs_{\bs,\gs})\]\[\to \CF^-(\Sigma\, \#_{\ws} \bar{\Sigma}, \as\cup \bar{\gs}, \bs\cup \bar{\bs}, \ws, \frs_{\as,\bs}\# \frs_{\bs,\gs}),\] defined by the formula \[\cE_1(-,-):=F_{\as\cup \bar{\gs},\bs\cup \bar{\gs} ,\bs\cup \bar{\bs}}(F_1^{\bar{\gs},\bar{\gs}}(-)\otimes F_1^{\bs,\bs}(-)).\] We now pick curves $\Ds$ on $\Sigma\, \#_{\ws} \bar{\Sigma}$ as in Section~\ref{sec:doubleddiagram}, for a doubled diagram. Adapting the proof of Lemma~\ref{lem:randomtrianglemapis2-handlemap}, we see that the triple $(\Sigma\, \#_{\ws} \bar{\Sigma}, \as\cup \bar{\gs}, \bs\cup \bar{\bs}, \Ds,\ws)$ is (after performing a sequence of handleslides and isotopies) subordinate to a bouquet for the framed link $\bL\subset U_{\bs\cup \bar{\bs}}$.
Thus the graph cobordism map $F_{X_{\as,\bs,\gs},\Gamma_{\as,\bs,\gs},\frs}(-,-)$ is chain homotopic to the composition \begin{equation}F_{\as\cup \bar{\gs}, \bs\cup \bar{\bs}, \Ds}(F_{\as\cup \bar{\gs},\bs\cup \bar{\gs} ,\bs\cup \bar{\bs}}(F_1^{\bar{\gs},\bar{\gs}}(-)\otimes F_1^{\bs,\bs}(-))\otimes \Theta_{\bs\cup \bar{\bs}, \Ds}^+).\label{eq:trianglemap=graphTQFT1}\end{equation} The associativity relations for the triangle maps yield that Equation \eqref{eq:trianglemap=graphTQFT1} is chain homotopic to \begin{equation}F_{\as\cup \bar{\gs}, \bs\cup \bar{\gs}, \Ds}(F_1^{\bar{\gs},\bar{\gs}}(-)\otimes F_{\bs\cup \bar{\gs},\bs\cup \bar{\bs},\Ds}( F_1^{\bs,\bs}(-)\otimes \Theta^+_{\bs\cup \bar{\bs},\Ds})).\label{eq:trianglemap=graphTQFT}\end{equation} The final Heegaard diagram in this composition is the double of the diagram $(\Sigma,\as,\gs)$, so we must post-compose with a transition map to undo the doubling operation. Proposition~\ref{prop:changeofdiagramsmapcompundouble} shows that the transition map associated to undoing the doubling operation satisfies \begin{equation}\Psi_{(\Sigma\, \#_{\ws} \bar{\Sigma},\as\cup \bar{\gs},\Ds)\to (\Sigma,\as,\gs)}(-)\simeq F_{3}^{\bar{\gs},\bar{\gs}}F_{\as\cup \bar{\gs}, \Ds, \gs\cup \bar{\gs}}(- \otimes \Theta_{\Ds, \gs\cup\bar{\gs}}^+).\label{eq:trianglemap=graphTQFT3}\end{equation} Composing Equation \eqref{eq:trianglemap=graphTQFT} with Equation \eqref{eq:trianglemap=graphTQFT3}, we see that the graph cobordism map for $(X_{\as,\bs,\gs}, \Gamma_{\as,\bs,\gs})$ is chain homotopic to \begin{equation}F_{3}^{\bar{\gs},\bar{\gs}}F_{\as\cup \bar{\gs}, \Ds, \gs\cup \bar{\gs}}(F_{\as\cup \bar{\gs}, \bs\cup \bar{\gs}, \Ds}(F_1^{\bar{\gs},\bar{\gs}}(-)\otimes F_{\bs\cup \bar{\gs},\bs\cup \bar{\bs},\Ds}( F_1^{\bs,\bs}(-)\otimes \Theta_{\bs\cup \bar{\bs},\Ds}^+))\otimes \Theta_{\Ds, \gs\cup \bar{\gs}}^+ ).\label{eq:trianglemap=graphTQFT4}\end{equation} Applying the associativity relations to the left two triangle maps in Equation 
\eqref{eq:trianglemap=graphTQFT4}, we conclude that Equation \eqref{eq:trianglemap=graphTQFT4} is chain homotopic to \begin{equation}F_3^{\bar{\gs},\bar{\gs}}F_{\as\cup \bar{\gs},\bs\cup \bar{\gs}, \gs\cup \bar{\gs}}(F_1^{\bar{\gs},\bar{\gs}}(-)\otimes F_{\bs\cup \bar{\gs},\Ds, \gs\cup \bar{\gs}}(F_{\bs\cup \bar{\gs},\bs\cup \bar{\bs},\Ds}( F_1^{\bs,\bs}(-)\otimes \Theta_{\bs\cup \bar{\bs},\Ds}^+)\otimes \Theta_{\Ds, \gs\cup \bar{\gs}}^+)).\label{eq:trianglemap=graphTQFT5}\end{equation} Using the holomorphic triangle counts from Proposition~\ref{prop:generalized1-handlesandtriangles}, we conclude that Equation \eqref{eq:trianglemap=graphTQFT5} is equal to \begin{equation}F_{\as,\bs,\gs}(-\otimes F_3^{\bar{\gs},\bar{\gs}}( F_{\bs\cup \bar{\gs},\Ds, \gs\cup \bar{\gs}}(F_{\bs\cup \bar{\gs},\bs\cup \bar{\bs},\Ds}(F_1^{\bs,\bs}(-)\otimes \Theta_{\bs\cup \bar{\bs},\Ds}^+)\otimes \Theta_{\Ds, \gs\cup \bar{\gs}}^+))).\label{eq:trianglemap=graphTQFT6}\end{equation} By Propositions~\ref{prop:changeofdiagramsmapcomp} and \ref{prop:changeofdiagramsmapcompundouble}, the composition \begin{equation}F_3^{\bar{\gs},\bar{\gs}}( F_{\bs\cup \bar{\gs},\Ds, \gs\cup \bar{\gs}}(F_{\bs\cup \bar{\gs},\bs\cup \bar{\bs},\Ds}(F_1^{\bs,\bs}(-)\otimes \Theta_{\bs\cup \bar{\bs},\Ds}^+)\otimes \Theta_{\Ds, \gs\cup \bar{\gs}}^+))\label{eq:trianglemap=graphTQFT7}\end{equation} is chain homotopic to \begin{equation}\Psi_{(\bar{\Sigma},\bar{\gs},\bar{\bs})\to (\Sigma,\bs,\gs)},\label{eq:trianglemap=graphTQFT8}\end{equation} since Equation \eqref{eq:trianglemap=graphTQFT7} represents the composition of a change of diagrams map for doubling, followed by the transition map for undoing the doubling operation.
Using the fact that Equations \eqref{eq:trianglemap=graphTQFT7} and \eqref{eq:trianglemap=graphTQFT8} are chain homotopic, our expression for the graph cobordism map for $(X_{\as,\bs,\gs}, \Gamma_{\as,\bs,\gs})$ from Equation \eqref{eq:trianglemap=graphTQFT6} reduces to \begin{equation}F_{\as,\bs,\gs}(-\otimes \Psi_{(\bar{\Sigma},\bar{\gs},\bar{\bs})\to (\Sigma,\bs,\gs)}(-)).\label{eq:trianglecobordism1}\end{equation} On the other hand, we started the proof by composing with the transition map \[\id_{\CF^-(\Sigma,\as,\bs)}\otimes \Psi_{(\Sigma,\bs,\gs)\to (\bar{\Sigma},\bar{\gs},\bar{\bs})}.\] Composing Equation \eqref{eq:trianglecobordism1} with this transition map, which we have been omitting from the formula until now, leaves just $F_{\as,\bs,\gs}(-,-)$, completing the proof. \end{proof} \section{Duality and the graph TQFT} \label{sec:traceandcotrace} In this section, we prove Theorem~\ref{thm:dualityv1} by computing the maps induced by the trace and cotrace cobordisms. \subsection{Turning around graph cobordisms} If $(W,\Gamma):(Y_1,\ve{w}_1)\to (Y_2,\ve{w}_2)$ is a graph cobordism, then we can turn around $(W,\Gamma)$ to get a graph cobordism \[(W^\vee,\Gamma^\vee):(-Y_2,\ve{w}_2)\to (-Y_1,\ve{w}_1).\] Here we give $\Gamma^\vee$ the same cyclic order as $\Gamma$. In this section, we extend the duality result of \cite{OSTriangles}*{Theorem~3.5} to graph cobordisms, by proving that the graph cobordism map for $(W^\vee,\Gamma^\vee)$ is the dual of the cobordism map for $(W,\Gamma)$. We note that we are using the Turaev interpretation of $\Spin^c$ structures: we view a 4-dimensional $\Spin^c$ structure as a homology class of almost complex structures defined on the complement of a set of points, and a 3-dimensional $\Spin^c$ structure as a homology class of non-vanishing vector fields (see \cite{OSDisks}*{Section~2.6} and \cite{OSTriangles}*{Section~2.2} for more details).
If $\frs\in \Spin^c(W)$, then the restrictions of $\frs$ to the ends $Y_1$ and $Y_2$ are defined by taking the 2-plane field of tangencies to $Y_1$ and $Y_2$, and then taking its orthogonal complement in $Y_1$ and $Y_2$, which yields a non-vanishing vector field on each of $Y_1$ and $Y_2$. However, turning around a cobordism reverses the orientations of the ends with respect to which we compute the orthogonal complement of the 2-plane field, and hence conjugates (i.e. multiplies by $-1$) the restriction of $\frs$ to these 3-manifolds. Using this convention, if $F_{W,\Gamma,\frs}^A$ is a map from $\CF^-(Y_1,\ws_1,\frs_1)$ to $\CF^-(Y_2,\ws_2,\frs_2)$, then the turned around graph cobordism $(W^\vee,\Gamma^\vee)$ induces a map \[F_{W^\vee,\Gamma^\vee,\frs}^A: \CF^-(-Y_2,\ws_2,\bar{\frs}_2)\to \CF^-(-Y_1,\ws_1,\bar{\frs}_1).\] \begin{prop}\label{prop:turningaroundgraphcobordisms}If $(W,\Gamma):(Y_1,\ve{w}_1)\to (Y_2,\ve{w}_2)$ is a ribbon graph cobordism, then \[F_{W^\vee,\Gamma^\vee,\frs}^A\simeq (F_{W,\Gamma,\frs}^A)^{\vee},\] with respect to the natural pairing between $\CF^-(Y,\ws,\frs)$ and $\CF^-(-Y, \ws, \bar{\frs})$. The same holds for the type-$B$ graph cobordism maps. \end{prop} \begin{proof}It is sufficient to show the claim for a cobordism obtained by attaching a single 0-, 1-, 3- or 4-handle, a collection of 2-handles, or a graph cobordism with underlying 4-manifold $Y\times [0,1]$. Note that when we turn around a $k$-handle, we get a $(4-k)$-handle. Duality between the 1- and 3-handle maps follows exactly as in \cite{OSTriangles}*{Section~5} and is immediate from the formulas. The argument for the 2-handle maps is the same as in \cite{OSTriangles}*{Section~5}: if $W$ is a 2-handle cobordism whose map can be computed by counting triangles in the triple $(\Sigma,\as,\bs,\bs')$, then the turned around cobordism $W^\vee$ is a 2-handle cobordism, whose map can be computed by counting triangles in the triple $(\Sigma,\bs,\bs',\as)$.
Since the holomorphic triangles on the two triples are in bijection, it is easy to see that the maps $F_{\as,\bs,\bs',\frs}(-,\Theta_{\bs,\bs'}^+)$ and $F_{\bs,\bs',\as,\frs}(\Theta_{\bs,\bs'}^+,-)$ are dual to each other. We now consider the new maps appearing in the graph TQFT. These were summarized in Section~\ref{sec:outlineofconstruction}. Firstly, it is clear that the 0-handle and 4-handle maps are dual to each other. The remaining maps to consider are the graph action maps (which give the graph cobordism maps for graph cobordisms with underlying 4-manifold $Y\times [0,1]$). If $\Gamma:V_0\to V_1$ is an embedded flow graph in $Y$ between two disjoint collections of vertices $V_0,V_1\subset Y$, then the map $A_{\Gamma}:\CF^-(Y,V_0)\to \CF^-(Y,V_1)$ is defined as a composition of free-stabilization maps $S_{w}^{\pm}$ and relative homology maps $A_{\lambda}$, for various edges $\lambda$ and vertices $w$ of a subdivision of $\Gamma$. The graph action map $B_{\Gamma}$ is defined by replacing each instance of $A_{\lambda}$ in the formula for $A_{\Gamma}$ with $B_{\lambda}$. We claim that \begin{equation}(A_{\Gamma})^\vee\simeq B_{\bar{\Gamma}^\vee}.\label{eq:dualgraphaction}\end{equation} Here $\bar{\Gamma}^\vee:V_1\to V_0$ is the flow graph in $Y$ obtained by turning $\Gamma$ around (i.e. viewing it as a flow graph from $V_1$ to $V_0$) and reversing all of the cyclic orders. To establish Equation~\eqref{eq:dualgraphaction}, observe that from the definition of the graph action map \cite{ZemGraphTQFT}*{Section~7}, the formula for $B_{\bar{\Gamma}^\vee}$ can be obtained by taking the formula for $A_{\Gamma}$, reversing the order of all maps (hence reversing the cyclic orders at each vertex), replacing every free-stabilization $S_w^+$ with the corresponding free de-stabilization $S_w^-$ (and vice versa), and replacing each $A_\lambda$ map with $B_\lambda$.
Just like with the 1-handle and 3-handle maps, it is straightforward to see that the maps $S_w^+$ and $S_w^-$ are dual to each other. Similarly, the two maps \[B_\lambda:\CF^-(\Sigma,\bs,\as)\to \CF^-(\Sigma,\bs,\as)\] and \[A_{\lambda}:\CF^-(\Sigma,\as,\bs)\to \CF^-(\Sigma,\as,\bs)\] are dual to each other, since they count the same holomorphic curves, with the same factor, since the roles of the $\ve{\alpha}$ and $\ve{\beta}$ curves have been changed on the two diagrams. Equation \eqref{eq:dualgraphaction} follows. It follows that \[(F_{W,\Gamma,\frs}^A)^\vee=F_{W^\vee,\bar{\Gamma}^\vee,\frs}^B.\] Applying Lemma~\ref{lem:reversecyclicordering} shows that \[F_{W^\vee,\bar{\Gamma}^\vee,\frs}^B\simeq F_{W^\vee,\Gamma^\vee,\frs}^A,\] completing the proof for the type-$A$ maps. The same argument works for the type-$B$ maps. \end{proof} \subsection{Trace and cotrace cobordism maps} In this section, we prove that the trace and cotrace graph cobordisms induce the trace and cotrace maps. \begin{customthm}{\ref{thm:dualityv1}}If $(Y,\ws)$ is a multi-pointed 3-manifold, the trace graph cobordism $(Y\times [0,1],\ws\times [0,1]): (Y\sqcup -Y,\ws\sqcup \ws)\to \varnothing$ induces the trace map \[\tr:\CF^-(Y,\ws,\frs)\otimes_{\bF_2[U]} \CF^-(-Y,\ws,\bar{\frs})\to \bF_2[U].\] Similarly, the cotrace graph cobordism $(Y\times [0,1],\ws\times [0,1]):\varnothing\to (Y\sqcup -Y,\ws\sqcup \ws)$ induces the cotrace map \[\cotr: \bF_2[U]\to \CF^-(Y,\ws,\frs)\otimes_{\bF_2[U]} \CF^-(-Y,\ws,\bar{\frs}).\] The formulas hold for both the type-$A$ and type-$B$ graph cobordism maps. \end{customthm} \begin{proof}[Proof of Theorem~\ref{thm:dualityv1}] We first consider the trace cobordism. Pick a diagram $(\Sigma,\ve{\alpha},\ve{\beta},\ws)$ for $Y$. We note that according to \cite{OSTriangles}*{Proposition~4.3}, the 4-manifold $X_{\as,\bs,\as}$ is diffeomorphic to $Y\times [0,1]\setminus N(U_{\as}\times \{\tfrac{1}{2}\})$, where $U_{\as}\subset Y$ is the $\as$ handlebody. 
Each $\as$ curve determines a compressing disk in $U_{\as}$. The union of this disk in $U_{\as}$, together with its image in $-U_{\as}$, determines a 2-sphere in $Y_{\as,\as}\subset X_{\as,\bs,\as}$. Let $(W',\Gamma')$ denote the graph cobordism obtained by attaching $|\as|$ 3-handles to $(X_{\as,\bs,\as},\Gamma_{\as,\bs,\as})$, along these 2-spheres in $Y_{\as,\as}$. The cobordism $(W',\Gamma')$ is diffeomorphic to $(Y\times [0,1],\ws\times [0,1])$ with $|\ws|$ 4-balls removed, and graph $\Gamma'$ obtained by adding a strand from each 3-sphere in the boundary of $W'$ to one of the components of $\ws\times [0,1]$. Let $\ve{\alpha}'$ be small Hamiltonian translates of the $\ve{\alpha}$ curves. Note that $X_{\as,\bs,\as}$ is diffeomorphic to $X_{\as,\bs,\as'}$. Using Theorem~\ref{thm:triplesandgraphcobordismmaps}, the cobordism map for $X_{\as,\bs,\as'}$ is chain homotopic to the holomorphic triangle map $F_{\as,\bs,\as'}$. The type-$B$ graph cobordism map for the trace cobordism \[F_{Y\times [0,1], \ws\times [0,1]}^B:\CF^-(\Sigma,\as,\bs)\otimes \CF^-(\Sigma, \bs, \as)\to \bF_2[U]\] can thus be computed as the composition of the change of diagrams map $\id \otimes \Psi_{\bs}^{\as\to \as'}$, followed by the triangle map $F_{\as,\bs,\as'}$, followed by $|\ve{\alpha}|$ 3-handle maps and $|\ws|$ 4-handle maps. If we identify $\CF^-(S^3,w)$ with $\bF_2[U]$ via the 4-handle map, the graph cobordism map for the trace cobordism thus takes the form \begin{align*}&F_{Y\times [0,1],\ws\times [0,1]}^B(\ve{x}\otimes \ve{y})\\=&F_3^{\as,\as'}(F_{\as,\bs,\as'}(\ve{x}\otimes \Psi_{\bs}^{\as\to \as'}(\ve{y})))\\=&\langle F_{\as,\bs,\as'}(\ve{x}\otimes \Psi_{\bs}^{\as\to \as'}(\ve{y})), \Theta^-_{\as,\as'}\rangle\\ =&\tr( F_{\as',\as,\bs}(\Theta^+_{\as',\as}\otimes \ve{x})\otimes\Psi_{\bs}^{\as\to \as'}(\ve{y}))\\ =&\tr( \Psi^{\ve{\beta}}_{\ve{\alpha}\to \ve{\alpha}'}(\ve{x})\otimes \Psi_{\bs}^{\as\to \as'}(\ve{y}) )\\ =&\tr(\xs\otimes\ys).
\end{align*} The first equality follows from the topological reasoning of the previous paragraph. The second equality is the definition of the 3-handle map. The third equality follows from observing that $F_{\as,\bs,\as'}$ and $F_{\as',\as,\bs}$ count the same holomorphic triangles, and also noting that $\Theta_{\as,\as'}^-=\Theta_{\as',\as}^+$. The fourth equality is obtained by observing that the triangle map in the fourth line computes the transition map. The final equality follows by noting that \[\Phi_{\bs}^{\as\to \as'}: \CF^-(\Sigma,\bs,\as)\to \CF^-(\Sigma,\bs,\as'),\] is the dual of \[\Phi_{\as'\to \as}^{\bs}: \CF^-(\Sigma,\as',\bs)\to \CF^-(\Sigma,\as,\bs),\] and that $\Phi_{\as'\to \as}^{\bs}\circ \Phi_{\as\to \as'}^{\bs}\simeq \id_{\CF^-(\Sigma,\as,\bs)}$. As the graph cobordism $(Y\times [0,1],\ws\times [0,1])$ has no vertices of valence 3 or greater, it follows that \[F^A_{Y\times [0,1],\ws\times [0,1]}\simeq F^B_{Y\times [0,1],\ws\times [0,1]}\] by Lemma~\ref{lem:reversecyclicordering}, so the same formula holds for the type-$A$ graph cobordism maps. The statement about the cotrace cobordism follows from the formula for the trace cobordism map, together with Proposition~\ref{prop:turningaroundgraphcobordisms}, since the cotrace cobordism is obtained by turning around the trace cobordism. \end{proof} \section{Mixed invariants of mapping tori} \label{sec:mixedinvariants} In this section, we prove our formula for the mixed invariants of mapping tori (and more generally, for 4-manifolds with a non-separating cut). Suppose that $\phi:Y\to Y$ is a diffeomorphism. We consider the mapping torus of $\phi$, defined as \[X_\phi:=\frac{Y\times [0,1]}{(y,1)\sim (\phi(y),0)}.\] Note that only the based mapping class group $\MCG(Y,w)$ acts on Heegaard Floer homology, while a general diffeomorphism $\phi:Y\to Y$ may not fix the basepoint $w$. Given a $\phi$ which does not fix $w$, we can isotope $\phi$ so that $\phi(w)=w$. 
This doesn't change the diffeomorphism type of $X_\phi$, though \textit{a-priori} the resulting diffeomorphism map $\phi_*$ could depend on a choice of path from $\phi(w)$ to $w$. Fortunately, the map induced on $\HF^+(Y,w,\frs)$ is independent of choice of isotopy, since the $\pi_1(Y,w)$-action vanishes on $\HF^{+}(Y,w,\frs)$ \cite{ZemGraphTQFT}*{Theorem~G}. Thus an arbitrary orientation preserving diffeomorphism $\phi$ induces a well defined map on $\HF^+(Y,w,\frs)$, for which we will write $\phi_*$. Our main theorem concerns the more general situation of a 4-manifold $X^4$ with a closed, oriented, non-separating cut $Y^3$: \begin{customthm}{\ref{thm:mixedinvariantmappingtorus}} Suppose that $X^4$ is a closed, oriented 4-manifold with $b_2^+(X)> 1$ and $Y^3\subset X$ is a closed, oriented, non-separating 3-dimensional submanifold. Write $W$ for the cobordism obtained by cutting $X$ along $Y$. Suppose $\frs\in \Spin^c(W)$ is a $\Spin^c$ structure whose restrictions to both copies of $Y$ in $\d W$ agree and are non-torsion, and $\xi\in \Lambda^*(H_1(W;\Z)/\Tors)\otimes \bF_2[U]$. Then the mixed invariants of $X$ satisfy \[\Lef\big(F^+_{W,\frs}(\xi\otimes -):\HF^+(Y,\frs|_{Y})\to \HF^+(Y,\frs|_{Y})\big)=\sum_{\substack{\frt\in \Spin^c(X)\\ \frt|_W=\frs}}\Phi_{X,\frt}(\xi).\] \end{customthm} By specializing, we obtain the following relation for the mixed invariants of mapping tori: \begin{customcor}{\ref{cor:mixedinvariantofactualmappingtori}}Suppose $Y^3$ is a closed, oriented 3-manifold and $\phi:Y\to Y$ is an orientation preserving diffeomorphism such that the mapping torus $X_\phi$ has $b_2^+(X_\phi)>1$. 
If $\frs\in \Spin^c(Y)$ is non-torsion and $\phi_*(\frs)=\frs$, then the mixed invariants of $X_\phi$ satisfy \[\Lef\big(\phi_*:\HF^+(Y,\frs)\to \HF^+(Y,\frs)\big)=\sum_{\substack{\frt\in \Spin^c(X_\phi)\\ \frt|_Y=\frs}}\Phi_{X_\phi,\frt}(1).\] \end{customcor} \begin{proof}[Proof of Theorem~\ref{thm:mixedinvariantmappingtorus}] Let us first consider the case that $\xi=1\in \bF_2[U]\otimes \Lambda^*(H_1(W;\Z)/\Tors)$. We cut out a regular neighborhood of $Y$, identified with $Y\times [0,1]$. We then cut this regular neighborhood into three copies of $Y\times [0,1]$, and pick an arc $\Gamma\subset X$ with the configuration shown in Figure~\ref{fig::4}. As in Figure~\ref{fig::4}, we cut $X$ into three graph cobordisms. The first, which we denote by $(X_1,\Gamma_1)$, is a cotrace cobordism from $\varnothing $ to $Y\sqcup -Y$. The second, which we denote by $(X_2,\Gamma_2)$, has underlying 4-manifold $X_2=W\sqcup (-Y\times [0,1])$ and graph $\Gamma_2$ consisting of a single path in $W$, and a broken path in $(-Y\times [0,1])$. Finally, we define $(X_3,\Gamma_3)$ to be the bottommost cobordism in Figure~\ref{fig::4}, which is the trace cobordism. \begin{figure}[ht!] \centering \input{fig4.pdf_tex} \caption{\textbf{Decomposing the path cobordism $(X,\Gamma)$ into three graph cobordisms.} We use the cut $N=Y\sqcup -Y$ to compute the mixed invariants for $X$.\label{fig::4}} \end{figure} Define $N:=Y\sqcup -Y$, viewed as the intersection of $X_1$ and $X_2$. Since $\frs|_Y$ is non-torsion, it follows from \cite{OSTrianglesandSymplectic}*{Lemma~2.3} and Lemma~\ref{lem:classificationfgPID}, that $\boldHF^\infty(N,\frs|_N)$ (the completed module over $\bF_2[[U]]$) vanishes, and $\boldHF^-(N,\frs|_N)$ is a finite dimensional vector space over $\bF_2$.
Furthermore \[\boldHF^-(N,\frs|_N)=\HF^-_{\red}(N,\frs|_N)\iso \HF^+_{\red}(N,\frs|_N)=\HF^+(N,\frs|_N)=\boldHF^+(N,\frs|_N).\] By adapting the formula from Equation \eqref{eq:definitionmixedinvariant} slightly, we can define a mixed invariant $\ve{\Phi}_{X,\Gamma,N,\frs|_{X_1},\frs|_{X_2\cup X_3}}$ over $\bF_2[[U]]$, associated to the cut $N$ and the graph $\Gamma\subset X$. Note that we are abusing notation slightly, since $\frs$ is not a $\Spin^c$ structure on $X$, but instead on $W$. However, it is easy to see that $\frs$ uniquely determines $\Spin^c$ structures on $X_1$ and $X_2\cup X_3$. Theorem~\ref{thm:dualityv1} shows that $(X_1,\Gamma_1)$ and $(X_3,\Gamma_3)$ induce the cotrace and trace maps, respectively, and Lemma~\ref{lem:phi=brokenpathcobordism} shows that $(X_2,\Gamma_2)$ induces the map $F_{W,\frs}^-\otimes \Phi_w^{\vee}$. From the definition of $\Delta(\boldCF^-(Y,\frs|_Y),F^-_{W,\frs})$ in Equation \eqref{eq:DeltaCFdefinition}, we have \begin{equation}\ve{\Phi}_{X,N,\Gamma,\frs|_{X_1}, \frs|_{X_2\cup X_3}}=\Delta(\boldCF^-(Y,\frs|_Y), F_{W,\frs}^-).\label{eq:Phi=Delta} \end{equation} Now Proposition~\ref{prop:algebraicmappingtorus} identifies \begin{equation}\Delta(\boldCF^-(Y,\frs|_{Y}), F_{W,\frs}^-)=\Lef\big(F_{W,\frs}^+: \HF^+(Y,\frs|_Y)\to \HF^+(Y,\frs|_Y)\big).\label{eq:Delta=Lefschetz}\end{equation} We now wish to apply Proposition~\ref{prop:mixedinvariantofnontorisioncutcomplaw} to show that $\ve{\Phi}_{X,N,\Gamma,\frs|_{X_1},\frs|_{X_2\cup X_3}}$ is the sum of mixed invariants in the theorem statement. We note that Proposition~\ref{prop:mixedinvariantofnontorisioncutcomplaw} is only stated for connected cuts and graphs which consist of a path, intersecting the cut exactly once, so we must make an additional argument. We define a new decomposition of $(X,\Gamma)$ into three graph cobordisms $X_1',X_2',$ and $X_3'$.
These are shown in Figure~\ref{fig::41}. \begin{figure}[ht!] \centering \input{fig41.pdf_tex} \caption{\textbf{A decomposition of $(X,\Gamma')$ with a connected cut $N'$}. The graph $\Gamma'$ is obtained by adding a trivial strand to $\Gamma$, near the top of the picture. Once we trim off trivial strands, we are left with a decomposition of $X$ into two path cobordisms which meet along a connected cut.\label{fig::41}} \end{figure} The cobordism $X_2'$ is defined by taking a regular neighborhood of $N=Y\sqcup -Y$ inside of $X_1$, and gluing on a neighborhood of the arc $\Gamma_1$ (i.e. adding a 1-handle to a neighborhood of $N$). We define $X_1'$ to be $X_1\setminus X_2'$, and we define $X_3'$ to be $X_2\cup X_3$. We let $\Gamma'$ be the graph obtained by adding a trivial strand to $\Gamma$, which extends radially out of the chosen regular neighborhood of $\Gamma_1$ and into $X_1'$. We let $\Gamma_1',\Gamma_2'$ and $\Gamma_3'$ be the intersections of $X_1',X_2'$ and $X_3'$ with $\Gamma'$, respectively. Finally, we let $N'$ denote the cut $Y\#-Y=\d X_1'$. Note that the cut $N'$ divides $(X,\Gamma')$ into two graph cobordisms, the first of which is a path cobordism, and the second is a path cobordism with a trivial strand added. As trivial strands do not affect the graph cobordism maps (this follows from Relation~\ref{rel:R8}), and the graph cobordism maps agree with the path cobordism maps for paths, we conclude from Proposition~\ref{prop:mixedinvariantofnontorisioncutcomplaw} that \begin{equation}\ve{\Phi}_{X,N',\Gamma',\frs|_{X_1'}, \frs|_{X_2'\cup X_3'}}(1)=\sum_{\substack{\frt\in \Spin^c(X)\\ \frt|_{W}=\frs}}\Phi_{X,\frt}(1).\label{eq:PhiN'=MixedInvariants}\end{equation} Note again that we are abusing notation slightly with $\Spin^c$ structures, as $\frs$ is not defined on $X_2'\cup X_3'$, however $\frs$ uniquely determines a $\Spin^c$ structure on $X_2'\cup X_3'$, since topologically it is obtained from $W$ by attaching a 1-handle.
Since the connecting homomorphism $\delta$ commutes with the graph cobordism maps, the following diagram commutes: \[\begin{tikzcd}\, & \boldHF^-(S^3)\arrow{d}{F^-_{X_1',\Gamma_1',\frs|_{X_1'}}}\\ \HF^+(N', \frs|_{N'})\arrow{r}{\iso}[swap]{\delta}\arrow{d}[swap]{F^+_{X_2', \Gamma_2', \frs|_{X_2'}}}& \boldHF^-(N',\frs|_{N'})\arrow{d}{F^-_{X_2', \Gamma_2', \frs|_{X_2'}}}\\ \HF^+(N,\frs|_N)\arrow{r}{\iso}[swap]{\delta} \arrow{d}[swap]{F^+_{X_3', \Gamma_3', \frs|_{X_3'}}}& \boldHF^-(N,\frs|_N)\\ \HF^+(S^3)& \end{tikzcd}\] Noting $X_1'\cup X_2'=X_1$ and $X_3'=X_2\cup X_3$, the above commutative diagram and the $\Spin^c$ composition law for graph cobordisms imply that \begin{equation}\ve{\Phi}_{X,N,\Gamma,\frs|_{X_1},\frs|_{X_2\cup X_3}}(1)=\ve{\Phi}_{X,N',\Gamma',\frs|_{X_1'},\frs|_{X_2'\cup X_3'}}(1).\label{eq:mixedinvtsdiffcutsagree} \end{equation} Combining Equations \eqref{eq:Phi=Delta}, \eqref{eq:Delta=Lefschetz}, \eqref{eq:PhiN'=MixedInvariants}, and \eqref{eq:mixedinvtsdiffcutsagree}, the theorem statement now follows for $\xi=1\in \bF_2[U]\otimes \Lambda^*(H_1(W;\Z)/\Tors)$. We now consider the theorem statement when $\xi\neq 1$. If $\xi=U^n \cdot \xi_1\wedges \xi_n$, where $\xi_i\in H_1(W;\Z)$, then by Proposition~\ref{prop:spliceinloopsforUandH_1}, the cobordism map $F_{W,\frs}^-(\xi\otimes -)$ is equal to the graph cobordism map $F_{W,\Gamma_\xi,\frs}^-(-)$, for a graph $\Gamma_\xi$ obtained by splicing in a loop for each $\xi_i$, and a pair of null-homologous loops for each power of $U$ (using the cyclic orderings shown in Figure~\ref{fig::26}). Using this fact, the argument follows with only minor notational change from the $\xi=1$ case. \end{proof} \subsection{Mixed invariants of \texorpdfstring{$Y\times S^1$}{Y x S1} for non-torsion \texorpdfstring{$\Spin^c$}{Spinc} structures} \label{sec:YxS1} We now consider the 4-manifold $ Y\times S^1$, which has slightly more structure than an arbitrary mapping torus.
The projection map $\pi:Y\times S^1\to Y$ induces a map \[\pi^*:\Spin^c(Y)\to \Spin^c(Y\times S^1).\] Similarly there is a restriction map \[r:\Spin^c(Y\times S^1)\to \Spin^c(Y),\] obtained by restricting a $\Spin^c$ structure to $Y\times \{pt\}$. To describe the map $\pi^*$, we briefly recall the Turaev interpretation of $\Spin^c$ structures on 3-manifolds, and the analogous interpretation for 4-manifolds, which are used in \cite{OSDisks}. If $Y^3$ is a 3-manifold, the set $\Spin^c(Y)$ can be described as the set of homology classes of non-vanishing vector fields on $Y$, where two non-vanishing vector fields are said to be homologous if they are homotopic on the complement of a finite set of points. Similarly, if $X^4$ is a 4-manifold, the set $\Spin^c(X)$ can be described as homology classes of almost complex structures, defined on the complement of a finite set of points $P\subset X$. Two almost complex structures $J_1$ and $J_2$, defined on the complement of $P_1$ and $P_2$, are said to be homologous if there is a compact 1-manifold $C\subset X$, with boundary containing $P_1$ and $P_2$, such that $J_1$ and $J_2$ are homotopic through almost complex structures on the complement of $C$. If $\frs\in \Spin^c(Y)$ corresponds to the non-vanishing vector field $v$ on $Y$, then the orthogonal complement $v^{\perp}$ is an oriented 2-plane field on $Y$. We pull back this 2-plane field under the map $\pi:Y\times S^1\to Y$ to get an oriented 2-plane field on $Y\times S^1$. This specifies an almost complex structure on $Y\times S^1$, up to homotopy through almost complex structures, and hence a $\Spin^c$ structure on $Y\times S^1$. It is straightforward to see that this correspondence is well defined on the level of homology classes of vector fields and almost complex structures.
Furthermore \[r\circ \pi^*=\id|_{\Spin^c(Y)}.\] \begin{define}We say that a $\Spin^c$ structure on $Y\times S^1$ is $S^1$-invariant if it is in the image of $\pi^*:\Spin^c(Y)\to \Spin^c(Y\times S^1)$. \end{define} We prove the following: \begin{customprop}{\ref{prop:invariantsofYxS1}}If $Y^3$ has $b_1(Y)>1$ and $\frs\in \Spin^c(Y)$ is non-torsion, then \[\Phi_{Y\times S^1,\pi^*(\frs)}(1)=\chi(\HF^+(Y,\frs)).\] Furthermore, if $\frt\in \Spin^c(Y\times S^1)$ is not $S^1$-invariant, then \[\Phi_{Y\times S^1,\frt}(1)=0.\] \end{customprop} \begin{proof}The proof is no different than the proof of the analogous result for the Seiberg-Witten invariant \cite{BaldridgeSWCircleActions}*{Lemma~5}, and follows quickly from the adjunction inequality. Note that the second statement, together with Theorem~\ref{thm:mixedinvariantmappingtorus}, implies the first statement. By considering the mapping cone formula from Proposition~\ref{prop:CWhomologyofmappingtori}, we see that \begin{equation}H_2(Y\times S^1;\Z)\iso H_1(Y;\Z)\oplus H_2(Y;\Z).\label{eq:homologyYxS1}\end{equation} Furthermore, the first summand is generated by tori of the form \[F_\gamma:=\gamma\times S^1,\] where $\gamma\in H_1(Y;\Z)$. The second summand is generated by $H_2(Y;\Z)$ under the inclusion $Y\times \{pt\}\subset Y\times S^1$. From our description of the map $\pi^*$, it is clear that if $[F_\gamma]$ is in the first summand of Equation \eqref{eq:homologyYxS1}, and $[F]$ is in the second summand, then \[\pi^*(\frs+PD[\gamma])=\pi^*(\frs)+PD[F_\gamma] \qquad \text{and} \qquad r(\frs+PD[F])=r(\frs).\] Hence it is sufficient to show that \[\Phi_{Y\times S^1,\pi^*(\frs)+PD[F]}(1)=0\] for any surface $F$ embedded in $Y\times \{pt\}$, which represents a non-zero class in $H_2(Y;\Z)$. If $[F]\neq 0\in H_2(Y;\Z)$, we pick $\gamma$ to be an element of $H_1(Y;\Z)$ such that $\#(F\cap \gamma)>0$ (which can be found since $H_2(Y;\Z)$ is torsion free).
If we can show \begin{equation}|\langle c_1(\pi^*(\frs)+PD[F]), [F_\gamma] \rangle |+[F_\gamma]\cdot [F_\gamma]>0=2g(F_\gamma)-2,\label{eq:adjunctionequation}\end{equation} we will be done, since we can apply the standard adjunction inequality \cite{OSTriangles}*{Theorem~1.5}. This is now an easy computation. Firstly, $[F_\gamma]\cdot [F_\gamma]=0$, and $\langle c_1(\pi^*(\frs)),[F_\gamma]\rangle=0$, as both the 2-plane field defining $\pi^*(\frs)$ and the torus $F_{\gamma}$ are $S^1$-invariant, so the 2-plane field defining $\pi^*(\frs)$ restricts to the trivial 2-plane bundle on $F_{\gamma}$. Hence the left side of Equation \eqref{eq:adjunctionequation} is $2 [F]\cdot [F_\gamma]=2\#(F\cap \gamma)$, which is positive, by construction. \end{proof} \bibliographystyle{custom}
\section{Introduction} Centering Theory is a computational model of discourse interpretation that examines the relationship between attentional state, the form of referring expressions, and the control of inferential processes. These goals have led to its application to the study of unexpressed arguments (henceforth {\it zeros\/}) in topic-oriented languages like Japanese, in which salient entities, recoverable by inference in a given context, are freely omitted. Centering predicts the preferred interpretation of {\it zero\/}s in situations in which the antecedent of a {\it zero\/} was realized as a center in the previous discourse. Previous work argues that both syntactic and discourse factors associated with potential antecedents determine the preferred interpretation of {\it zero\/}s \cite{Kuno73,Kameyama85,WIC90,Iida92,WIC94}. For example, a discourse entity realized as a subject is more likely to serve as the antecedent of a {\it zero\/} than a discourse entity realized as an object. Walker et al. incorporated certain discourse features into centering with their proposed rule of {\sc zero topic assignment} (henceforth {\sc zta}). This proposal was motivated by the observation that a {\it zero\/} that was previously the center of attention (i.e., Cb) is easily understood as the continuing center even if it is expressed in a syntactically less salient argument position.\footnote{The salience of the subject has also been observed in various syntactic phenomena such as extraction and binding. See Walker et al.~(1994) for the discussion of salience among the arguments of the verb in Japanese.} For example, a zero object in a given utterance such as \ex{1}c below is the topic because it was the Cb in the previous utterance. As the topic, the discourse entity realized in object position is ranked higher on the Cf list than the discourse entity realized as the subject.
This explains the preferred interpretation of the subsequent utterance, (\ex{1})d in this case, as will be discussed in more detail below. \eenumsentence{\item[a.] \shortex{10}{Hanako &wa &siken &o &oete,&kyoositu&ni&modorimasita.} {Hanako&{\sc top/subj} &exam &{\sc obj} &finish &classroom &to &returned} {{\it Hanako returned to the classroom, having finished her exam.}} \item[b.] \shortex{6}{0 &hon &o &locker &ni &simaimasita.} {{\sc subj} &book &{\sc obj} &locker &in &took-away} {{\it She put her books in the locker.}} \item[c.] \shortex{10}{Itumo &no &yooni &Mitiko &ga &0&deki&o&tazunemasita.} {always& &like &Mitiko&{\sc subj} &{\sc obj2} &result &{\sc obj} &asked} {{\it Mitiko, as usual, asked (Hanako) how she did.}} \item[d.] \shortex{10} { 0 & 0 &zibun&no&tokenakkatta&mondai&o &misemasita.} {{\sc subj} & {\sc obj2} &self&{\sc gen}&solve-could-not&problem&{\sc obj}&showed} {{\it (Hanako) showed (Mitiko) the problems which she could not solve. }} \label{intro-examp} } In order to further test the feasibility of {\sc zta} and to examine strategies for keeping track of centers, this chapter examines the distribution of {\sc zeros} in naturally occurring Japanese newspaper texts. Two initial hypotheses about the use of {\it zeros\/} are given in \ex{1} and \ex{2}: \eenumsentence{\item[\ ] {\bf Hypothesis-1} \\ {\it Zeros\/} are used to {\sc continue} the center.} \eenumsentence{\item[\ ] {\bf Hypothesis-2} \\ Full {\sc np}s are used to {\sc shift} the center.} These hypotheses are similar to one tested for Italian (Di Eugenio, this volume).\footnote{Di Eugenio's hypothesis says, ``Typically a null subject signals a {\sc continue}, and a strong pronoun a {\sc retain} or a {\sc shift}.''} I report on an empirical study based on 250 utterances from a corpus of 20 Japanese newspaper articles. Each utterance is analyzed in terms of centering transitions and the form in which centers are realized by referring expressions.
I also examine lexical subcategorization information, and tense and aspect in order to test the hypothesis that the speaker expects the hearer to use this information in determining global discourse structure. Figure \ref{trans-state-fig} summarizes the findings on the distribution of centering transitions with respect to form of referring expression used in the utterance. \begin{figure}[htb] \begin{tabular}{|r|c|c|c|c|} \hline & & & & \\ & {\sc continue} & {\sc \ \ \ retain\ \ \ } & {\sc smooth-shift} & {\sc rough-shift} \\ \hline & & & & \\ {\it with\/} {\sc zero\/} & 76 & 3 & 34 & 23 \\ \hline & & & & \\ {\it without\/} {\sc zero\/} & 7 & 39 & 9 & 35 \\ \hline & & & & \\ total & 83 & 42 & 42 & 58 \\ \hline \end{tabular} \caption{Distribution of Centering Transitions and Zeros in Japanese newspaper texts} \label{trans-state-fig} \end{figure} The hypothesis in (\ex{-1}) is confirmed by the distribution of {\sc continue} transitions in Figure \ref{trans-state-fig}, as compared to the other transitions combined ($\chi^2$ = 53.932, {\it p\/}\ $<$\ .001). In {\sc continue} transitions, {\it zeros\/} are strongly preferred to {\sc np}s: among {\sc continue} transitions, 76 cases appear with {\it zero\/} and only 7 cases without {\it zero\/}, while other transitions preferentially realize centers by {\sc np}s: there are 60 cases with {\it zeros\/} and 83 cases without {\it zeros\/}. Note that Hypothesis-1 predicts as a corollary that discourse entities ranked higher in the Cf ranking would tend to be realized by {\it zeros\/}. The preference for {\it zeros\/} in {\sc continue} transitions confirms this tendency and provides additional support for Walker et al.'s rule of {\sc zero topic assignment}. However, the second hypothesis in \ex{0} is disconfirmed: while the frequency of full {\sc np}s is greater (83) than zeros (60), full {\sc np}s are {\bf not} always used to shift the center, and zeros frequently are.
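The reported statistic can be checked directly from the counts in Figure \ref{trans-state-fig}. The following sketch (an illustrative recomputation added here, not part of the original analysis) computes Pearson's chi-square for the 2x2 table that pools {\sc retain}, {\sc smooth-shift}, and {\sc rough-shift} against {\sc continue}:

```python
# Recompute the chi-square statistic: CONTINUE vs. the other three
# transitions pooled, cross-classified by presence of a zero.
# Counts are taken from Figure 1.
observed = [
    [76, 7],   # CONTINUE:  with zero, without zero
    [60, 83],  # RETAIN + SMOOTH-SHIFT + ROUGH-SHIFT pooled
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Pearson's chi-square, no continuity correction.
chi2 = 0.0
for i in range(2):
    for j in range(2):
        expected = row_totals[i] * col_totals[j] / n
        chi2 += (observed[i][j] - expected) ** 2 / expected

print(round(chi2, 3))  # approximately 53.93
```

Without a continuity correction this agrees with the value of 53.932 reported above, up to rounding.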
The distribution of centering transitions in Figure \ref{trans-state-fig} shows that shifts of attentional state are abundant in naturally occurring discourse, as seen by the frequency of {\sc retain} and {\sc rough-shift}, which the centering algorithm prefers the least \cite{BFP87}. In the Japanese data examined here, these transition states are identified when a {\sc zero} cannot take the current center of attention, the Cb, as its antecedent. Thus, what needs to be explained is the occurrence of {\it zeros\/} in these transitions in which the Cb changes, where it may be difficult for the hearer to determine which discourse entity is realized by the {\it zero}. How is discourse coherence preserved when two adjacent utterances are not locally coherent? In the transition state of {\sc retain}, the rule of {\sc zta} makes it possible to avoid shifting the Cb, but in a {\sc rough-shift} transition, there is no link to the prior utterance, and the Cb must shift.\footnote{What I call a {\sc rough shift} in this chapter is elsewhere called a {\sc no cb} transition. That is, there is no Cb as no entity from the Cf(U$_{n-1}$) is realized in the current utterance.} The main focus of this chapter is to study the relation of local and global structure in discourse, by exploring the strategies that a speaker uses to reduce the hearer's inference load and make the flow of discourse coherent when the antecedent of a {\it zero\/} is not realized in the immediately preceding utterance. In section \ref{zta-sec}, I discuss in more detail how centering works in Japanese and the rule of {\sc zero topic assignment}~\cite{WIC94}. Then in section \ref{shift-sec}, I show how cues such as lexical semantics and tense and aspect can be used to interpret zeros in utterances that realize {\sc rough shift} transitions.
On the basis of this analysis, section \ref{integ-sec} sketches an algorithm for integrating centering with global focus, and finally section \ref{discuss-sec} summarizes the contributions of the chapter. \section{Zero Topic Assignment and Disambiguation} \label{zta-sec} In this section, I briefly describe the {\sc zta} rule proposed by Walker et al.~and show that discourse coherence indeed tends to be maintained with the same discourse topic across utterances. In Walker et al., the centering algorithm specifies two structures for centers, namely Cb ({\sc backward-looking center}) and Cf ({\sc forward-looking centers}), and a set of rules and constraints (See Walker, Joshi and Prince, this volume). {\sc forward-looking centers} are a set of semantic discourse entities associated with each utterance. The Cf Ranking for Japanese according to discourse salience is given in (\ex{1}). \eenumsentence{\item[\ ] {\sc topic} $>$ {\sc empathy} $>$ {\sc subject} $>$ {\sc object2} $>$ {\sc object} $>$ {\sc others} } The highest ranked member of the Cf list is called the Cp ({\sc preferred center}). The Cp represents a prediction about the Cb of the following utterance. The {\sc backward-looking center} is the discourse entity that the utterance most centrally concerns. Discourse coherence is computed with this distinction between looking back to the previous discourse with the Cb and projecting preferences for interpretation in subsequent discourse with the Cp. In other words, the combination of the Cb and the Cp reflects the coherence of the discourse. The shift of centers is realized when a new entity is introduced as the Cp. These interactions of the Cb and Cp are stated as a set of constraints and rules (Walker, Joshi and Prince, this volume). What the constraints and rules amount to is the idea that discourse segments that continue centering the same entity are more coherent and easier to process than those that repeatedly shift from one center to another.
The theory measures coherence by the hearer's inference load when interpreting a discourse sequence \cite{GJW86,GJW95}. {\sc zero topic assignment} is a discourse rule which allows a {\it zero\/} to be interpreted as a {\sc zero topic}. {\sc zta} is applied when there is no {\sc continue} transition of the previous center. \eenumsentence{\item[\ ] {\bf Zero Topic Assignment} \\ When a zero in $\rm{U_{i+1}}$ represents an entity that was the $\rm{Cb(U_{i})}$, and when no other {\sc continue} transition is available, that zero may be interpreted as the {\sc zero topic} of $\rm{U_{i+1}}$. } The rule allows a {\it zero\/} that has been the Cb in $\rm{U_{i}}$ to continue as the Cp in $\rm{U_{i+1}}$, even if it appears in a less salient syntactic position. It explains why the discourse entity Hanako, which is realized as the {\sc object2} in (\ex{1})c, is interpreted as the {\sc subject} in (\ex{1})d. Consider again example (1) repeated here as (\ex{1}), with the centering data structures: \eenumsentence{\item[a.] \shortex{10}{Hanako &wa &siken &o &oete,&kyoositu&ni&modorimasita.} {Hanako&{\sc top/subj} &exam &{\sc obj} &finish &classroom &to &returned} {{\it Hanako returned to the classroom, having finished her exam.}} \begin{tabular}{|lll|} \hline {\bf Cb:} & {\sc hanako} & \\ {\bf Cf:} & [{\sc hanako}, {\sc exam}] & \\ \hline \end{tabular} \item[b.] \shortex{6}{0 &hon &o &locker &ni &simaimasita.} {{\sc subj} &book &{\sc obj} &locker &in &took-away} {{\it She put her books in the locker.}} \begin{tabular}{|lll|} \hline {\bf Cb:} & {\sc hanako} & \\ {\bf Cf:} & [{\sc hanako}, {\sc book}, {\sc locker}] & {\sc continue} \\ \hline \end{tabular} \item[c.]
\shortex{10}{Itumo &no &yooni &Mitiko &ga &0&deki&o&tazunemasita.} {always& &like &Mitiko&{\sc subj} &{\sc obj2} &result &{\sc obj} &asked} {{\it Mitiko, as usual, asked (Hanako) how she did.}} \begin{tabular}{|lll|} \hline {\bf Cb:} & {\sc hanako} & \\ {\bf Cf1:} & [{\sc hanako}, {\sc mitiko}, {\sc result}] & {\sc zta continue} \\ & {\sc top}, {\sc subj}, {\sc obj} & \\ \hline {\bf Cf2:} & [{\sc mitiko}, {\sc hanako}, {\sc result}] & {\sc retain} \\ & {\sc subj}, {\sc obj2}, {\sc obj} & \\ \hline \end{tabular} \item[d.] \shortex{10} { 0 & 0 &zibun&no&tokenakkatta&mondai&o &misemasita.} {{\sc subj} & {\sc obj2} &self&{\sc gen}&solve-could-not&problem&{\sc obj}&showed} {{\it (Hanako) showed (Mitiko) the problems which she could not solve.}} \\ {{\it (Mitiko) showed (Hanako) the problems which she could not solve.}} \begin{tabular}{|lll|} \hline {\bf Cb1:} & {\sc hanako} & \\ {\bf Cf1:} & [{\sc hanako}, {\sc mitiko}, {\sc problem}] & {\sc continue} from Cf1(c) \\ & {\sc subj}, {\sc obj2}, {\sc obj} & \\ \hline \hline {\bf Cb2:} & {\sc mitiko} & \\ {\bf Cf2:} & [{\sc mitiko}, {\sc hanako}, {\sc problem}] & {\sc smooth-shift} from Cf2(c) \\ & {\sc subj}, {\sc obj2}, {\sc obj} & \\ \hline \end{tabular} \label{zta-ex-ga} } The discourse situation in (\ex{0}) is a case where the hearer may maintain multiple hypotheses about where the speaker's attention is directed. There are two assumptions available, the assumption that {\sc zta} applies and the {\it zero\/} is interpreted as the topic, versus the assumption that subjects are more highly ranked than objects on the Cf. Cf2 of (\ex{0})c is the only Cf possible without {\sc zta}, and represents a {\sc retain} rather than a {\sc continue}. By the formulation of the {\sc zta} rule above, {\sc zta} is triggered here since no {\sc continue} transition is otherwise available. Cf1 represents a {\sc continue} reading due to the {\sc zta} option; {\sc hanako} can be the Cp even when {\sc mitiko} is realized as the subject. 
This could lead to a potential ambiguity in (\ex{0})d, because it is possible for a hearer to simultaneously entertain both of the Cfs in (\ex{0})c. However, the {\sc continue} interpretation which results from the {\sc zta continue} transition state is strongly preferred. Walker et al.~(1994) reported that 28 out of 34 speakers preferred the {\sc continue} interpretation in (\ex{0})d ($Z = 4.95$, $p < .001$). The less preferred {\sc smooth-shift} interpretation would come from the algorithm's application to Cf2 of (\ex{0})c. Walker et al.~make a distinction between the notions of {\sc grammatical topic} and {\sc zero topic}. The grammatical topic is the {\it wa\/}-marked entity, which is by default predicted to be the most salient entity. The interaction between the grammatical topic and the zero topic is observed in (\ex{1}). Discourse segment (\ex{1}) uses the {\it wa\/}-marked {\sc np} instead of the {\it ga\/}-marked {\sc np} in the {\sc zta} environment of (\ex{1})c. Compare the interpretation of (\ex{1})d with (\ex{0})d. \eenumsentence{ \item[a.] \shortex{8}{Hanako &wa &siken &o &oete,&kyoositu&ni&modorimasita.} {Hanako&{\sc top/subj} &exam &{\sc obj} &finish &classroom &to &returned} {{\it Hanako returned to the classroom, having finished her exam.}} \begin{tabular}{|lll|} \hline {\bf Cb:} & {\sc hanako} & \\ {\bf Cf:} & [{\sc hanako}, {\sc exam}] & \\ \hline \end{tabular} \item[b.] \shortex{6}{0 &hon &o &locker &ni &simaimasita.} {{\sc subj} &book &{\sc obj} &locker &in &took-away} {{\it (Hanako) put (her) books in the locker.}} \begin{tabular}{|lll|} \hline {\bf Cb:} & {\sc hanako} & \\ {\bf Cf:} & [{\sc hanako}, {\sc book}] {\sc continue} & \\ \hline \end{tabular} \item[c.]
\shortex{10}{Itumo &no &yooni &Mitiko &{\bf wa} &0&deki&o&tazunemasita.} {always& &like &Mitiko&{\sc top/subj} &{\sc obj2} &result &{\sc obj} &asked} {{\it Mitiko, as usual, asked (Hanako) how she did.}} \begin{tabular}{|lll|} \hline {\bf Cb:} & {\sc hanako} & \\ {\bf Cf1:} & [{\sc hanako}, {\sc mitiko}, {\sc result}] & {\sc zta continue} \\ & {\sc zero-top}, {\sc top/subj}, {\sc obj} & \\ \hline {\bf Cf2:} & [{\sc mitiko}, {\sc hanako}, {\sc result}] & {\sc retain} \\ & {\sc top/subj}, {\sc obj2}, {\sc obj} & \\ \hline \end{tabular} \item[d.] \shortex{10} { 0 & 0 &zibun&no&tokenakkatta&mondai&o &misemasita.} { {\sc subj} & {\sc obj2} &self&{\sc gen}&solve-could-not&problem&{\sc obj}&showed} {{\it (Hanako) showed (Mitiko) the problems which she could not solve.}} \\ {{\it (Mitiko) showed (Hanako) the problems which she could not solve.}} \begin{tabular}{|lll|} \hline {\bf Cb1:} & {\sc hanako} & \\ {\bf Cf1:} & [{\sc hanako}, {\sc mitiko}, {\sc problem}] {\sc continue} from Cf1(c)& \\ & {\sc subj}, {\sc obj2}, {\sc obj} & \\ \hline \hline {\bf Cb2:} & {\sc mitiko} & \\ {\bf Cf2:} & [{\sc mitiko}, {\sc hanako}, {\sc problem}] {\sc smooth-shift} from Cf2(c)& \\ & {\sc subj}, {\sc obj2}, {\sc obj} & \\ \hline \end{tabular} \label{zta-ex-wa} } The {\it wa} marking has the predicted effect. Using the grammatical topic marker {\it wa} in (\ex{0})c dampens {\sc zta} and thus affects the interpretation of (\ex{0})d, which is now completely ambiguous. The results of experiments reported in \cite{WIC94} show that 10 subjects who prefer an interpretation that depends on {\sc zta} in (\ex{-1}) can no longer get the interpretation in (\ex{0}). In (\ex{0})d, only 18 out of 34 subjects prefer the {\sc zta continue} interpretation. Because the discourse entity realized as the grammatical topic and indicated by the {\it wa}-marked {\sc np} is the Cp by default, it is harder to interpret the {\it zero\/} as the topic. 
The situation can be characterized as a case of competing defaults; some hearers apply the default that the {\it wa}-marked entity is usually the Cp, and others apply the default that {\sc continue} interpretations are preferred and that {\it zeros} realize discourse entities that are ranked highly on the Cf. When an ambiguity arises from the use of the {\it wa\/}-marked {\sc np} in the {\sc zta} environments as illustrated in the above example, it is often resolved with additional information provided in the subsequent discourse. Consider (\ex{1}).\footnote{There is no decisive proposal for how complex sentences should be divided and arranged. In this study, I simply divide a complex sentence into simplex sentences and arrange them in serial order. The complex sentences which appeared in the data consist of coordinations and compounds with temporal adjunct clauses. A temporal subordinate clause is followed by the main clause in Japanese, so simple serial ordering normally preserves their chronological order.} \eenumsentence{\item[a.] \shortex{8}{S International&wa&sirikon-varee&ni&kenkyuusyo&o&kaisetusuru.} {S International&{\sc top/subj} &silicon valley&in&laboratory&{\sc obj}&establish} {{\it (S International) establishes a laboratory in Silicon Valley.}} \item[b.] \shortex{10}{0 &sutaffu&tosite&doobunya&no&keni&hutari&o&sukautosita.} {{\sc subj} &staff&as&this-field&{\sc gen}&authority&2-people&{\sc obj}&recruited} {{\it (S International) has recruited two authorities in the field as staff.}} \begin{tabular}{|lll|} \hline {\bf Cb:} & {\sc s international} & \\ {\bf Cf:} & [{\sc s international}, {\sc two authorities}] & \\ \hline \end{tabular} \item[c.]
\shortex{10}{Kono&kenkyuusyo&wa&0&saniibeeru&ni&kaisetusi,} {this&laboratory&{\sc top/obj} &{\sc subj}&Sunnyvale&in&open} {{\it (S International) will open this laboratory in Sunnyvale,}} \begin{tabular}{|lll|} \hline {\bf Cb:} & {\sc s international} & \\ {\bf Cf1:} & [{\sc s international}, {\sc laboratory}] {\sc zta continue} & \\ & {\sc zero-top}, {\sc top/obj} & \\ \hline {\bf Cf2:} & [{\sc laboratory}, {\sc s international}] {\sc retain} & \\ & {\sc top/obj}, {\sc subj} & \\ \hline \end{tabular} \item[d.] \shortex{10}{0&0&``Oputo-huirumu-kenkyuusyo''&to&nazukeru.} {{\sc subj} &{\sc obj}&Opt-film-laboratory&as&name} {{\it (S International) names (the laboratory) Opt-film Laboratory.}} \begin{tabular}{|lll|} \hline {\bf Cb:} & {\sc s international} & \\ {\bf Cf:} & [{\sc s international}, {\sc laboratory}] & \\ \hline \end{tabular} } Recall that the {\sc zta} effects are dampened when the grammatical topic marker {\it wa\/} is used. The third sentence yields the situation where the zero topic must compete with the grammatical topic, and the preference for one over the other is hard to determine. The ambiguity is resolved after processing the fourth sentence, however, when semantic information about the naming relation is provided. In other words, the inference that a newly created thing is normally given a name, allows the hearer to hypothesize that {\it the laboratory\/} naturally fills the {\it named\/} slot of the {\it naming\/} relation. In sum, these observations support the predictions made by centering that the preferred interpretation of utterances that contain {\it zeros} is one in which discourse coherence is maintained. Furthermore, {\sc zta} allows the hearer to interpret the current utterance as being highly coherent with the previous utterance. I have also suggested that in cases where an ambiguity arises because of the use of {\sc zta}, the speaker will provide additional cues to guide the hearer's interpretive process. 
\section{The Shift of Attentional Focus} \label{shift-sec} Now let us consider the prediction that discourse coherence is maintained even when zeros are used to shift the center. This is the context in which the Cb in utterance $\rm{U_{i}}$ is not realized as the Cp (i.e.~the most salient entity in $\rm{U_{i}}$). A new entity is introduced as the Cp, and the shift of the speaker's attentional focus onto this new entity is indicated. Below, I examine the interpretation of {\it zero\/}s in {\sc retain} (discourses (\ex{1}) and (\ex{2})) and {\sc rough-shift} (discourses (\ex{3}) and (\ex{4})) transitions. After discussing these examples, I propose some hypotheses about how zeros are interpreted in these environments. In (\ex{1}c), a new center, {\it T Co.}, is introduced into the discourse and realized as a topic, while the old center, {\it the student}, is realized as an object. Thus the center realized by {\it the student} is ranked lower on the Cf than the center realized by {\it T Co.}, but {\it the student} is still the Cb, so the centering transition is a {\sc retain}. \eenumsentence{\item[a.] \shortex{8}{Gakusei &wa &hurii-daiaru-kaado&de&G-sya&e&denwasureba,} {students&{\sc top/subj} &free-dial card&with&G. Company&to&phone} {{\it When students call G. Company with the phone card,}} \item[b.] \shortex{6}{0 &syuusyoku-zyoohoo&o&muryoo-de&erareru.} {{\sc subj} &employment-information&{\sc obj}&free&get-can} {{\it (The student) can get employment information free.}} \item[c.] \shortex{10}{T-sya&wa&rezyaa-zyoohoo&o&0&fakusimiri&de&teikyuusiteori,} {T Co.&{\sc top/subj}&leisure info.&{\sc obj}&{\sc obj2}&fax&by&provide} {{\it T Co. provides leisure information (to the student) by fax, }} } In (\ex{1}c), a new center, {\it the price}, is introduced into the discourse and realized as a topic, and the center for {\it the bank} is realized as a subject.
Thus the center for {\it the bank} is ranked lower on the Cf than the center for {\it the price}, but {\it the bank} is still the Cb, so the centering transition is a {\sc retain}. \eenumsentence{\item[a.] \shortex{12}{Saga Ginkoo&wa &gasorin-sutando&de&``banku POS'' saabisu&o&hazimeru.} {Saga Bank&{\sc top/subj} &gas station&at&``Bank POS'' service&{\sc obj}& will start} {{\it Saga Bank will start ``Bank POS'' service at gas stations.}} \item[b.] \shortex{10}{0&kaimono-kyaku&ni&kyassyu-kaado&wo&tukatte-morai,} {{\sc subj} &shoppers&{\sc obj2}&cash card&{\sc obj}&use-ask} {{\it (the bank) asks shoppers to use a cash card,}} \item[c.] \shortex{11}{daikin&wa&0&sokuza-ni&kokyaku&no&kooza&kara&hikiotosu.} {price&{\sc top/obj}&{\sc subj}&immediately&customer&{\sc poss}&account&from&draw} {{\it (the bank) takes the charge immediately from a customer's account.}} } In (\ex{1}c), the only center that provides a link to the prior discourse is the center for {\it the customer}, so that center is the Cb. However, {\it the customer} is ranked lower on the Cf than the center for {\it T. Insurance Co.}, yielding a {\sc rough-shift} centering transition. \eenumsentence{\item[a.] \shortex{8}{S. ginkoo&wa&kinyuu-hosyoo-seido&no&toriatukai&mo&hazimeru.} {S. Bank.&{\sc top/subj} &money-insurance-system&{\sc gen}&handling&{\sc obj}& begin} {{\it S. Bank will start to handle a money insurance system as well.}} \item[b.] \shortex{8}{Kokyaku&ga&ittei&ryookin&o&haraeba,} {customer&{\sc subj} &certain&fee&{\sc obj}&pay} {{\it A customer pays a certain fee,}} \item[c.] \shortex{10}{T. Insurance Co.&ga&sono&kinyuu-torihiki&o&0&hosyoosuru.} {T. Insurance Co.&{\sc subj}&that&money-transaction&{\sc obj}&{\sc obj2}&insure} {{\it T Insurance Co.~insures the money transaction (to the customer).}} } In (\ex{1}a), the phrase {\it T. Electron} introduces a center that is established as the Cb in (\ex{1}b).
Other discourse entities become the Cb in utterances (\ex{1})d to (\ex{1})f, but in (\ex{1})g the center corresponding to {\it T. Electron} is realized by a {\it zero}. None of the centers in (\ex{1})f serve as an antecedent for this zero, so this is a {\sc rough-shift} transition. \eenumsentence{\item[a.] \longex{8}{8}{T. Electron&wa&Yamanasi-ken Nirasaki-si&ni&daikibona} {koozyoo&o&kaisetusuru.} {T. Electron&{\sc top/subj}&Yamanasi, Nirasaki-city&in&big} {factory&{\sc obj}&will build.} {{\it T. Electron will open the big factory in Nirasaki City, Yamanasi.}} \item[b.] (a few sentences about T. Electron) \item[c.] \longex{8}{8}{Sinkoozyoo&de&seisansuru&sooti&wa&{\sc te}5000&o} {seinoo-appu-sita&{\sc rie}-ettingu-sooti.} {new factory&in&produce-is&devices&{\sc top/subj}&{\sc te}5000&{\sc obj}} {power-up-did&{\sc rie}-etching-devices} {{\it The devices that are produced in the new factory are {\sc rie} etching devices,}} \\ {{\it more powerful than {\sc te}5000.}} \item[d.] \shortex{8}{0&16{\sc mdram}&no&seisan&ni&taioodekiru.} {{\sc subj}&16{\sc mdram}&{\sc gen}&production&{\sc obj2}&cope-with} {{\it ({\sc rie} devices) can cope with the production of 16 {\sc mdram}.}} \item[e.] \shortex{8}{{\sc dram}&no&syuusekido&ga&takamaruniture,} {{\sc dram}&{\sc gen}&integrality&{\sc subj} &increase} {{\it As the integrality of {\sc dram} increases,}} \item[f.] \shortex{8}{ettyaa&no&zyuyoo&ga&hueru&tame,} {etching-devices&{\sc gen}&demand&{\sc subj}&increase&since} {{\it The demand of etching devices increases, and hence,}} \item[g.] \shortex{8}{0&sinkoozyoo&no&seisan&ni&humikitta.} {{\sc subj}&new facility&{\sc gen}&production&{\sc obj2}&decided} {{\it (T. Electron) decided to begin the production in the new facility.}} } Note that the interpretation of {\it zeros} is not particularly problematic in the case of {\sc retain}; although the Cb is shifting, the antecedent for the {\it zero} is a center from the previous utterance.
Furthermore, in some cases, the {\sc retain} transitions may have a {\sc zta continue} option. However, in the {\sc rough-shift} transition, no local antecedent of a {\it zero\/} is available and a center shift is forced. In this second case, the {\it zero\/}'s antecedent is not in the immediately preceding utterance, but must be realized in prior utterances. These cases have been called {\sc return pops} or {\sc focus pops} in the literature \cite{Reichman85,PS84,GS86}. See also (Walker, this volume). \begin{figure}[htb] \begin{tabular}{|r|c|c|c|} \hline & {\sc lexical} & {\sc tense \&} & \\ & {\sc semantics} & {\sc aspect} & {\sc agreement}\\ \hline & & & \\ {\sc rough-shift} with {\it zeros} & 20 & 6 & 2 \\ \hline \end{tabular} \caption{{\bf Disambiguation Features for Rough-Shift}} \label{lex-fig} \end{figure} If discourse coherence is to be maintained, it seems clear that there must be other cues that are used to preserve coherence and resolve {\it zero\/}s appropriately. This prediction has turned out to be correct. To test the hypothesis that shifting centers are associated with contextual factors that facilitate transitions, such as lexical semantics, agreement information and tense and aspect, all the rough shifts in the corpus (23 of them) were coded for these features. The results are given in Figure \ref{lex-fig}.\footnote{The totals in the table exceed the 23 occurrences of the {\sc rough-shift} transition with zeros. This is due to the fact that there are some cases where two features (i.e.~lexical semantics and tense) are employed at the same time.} Below I illustrate the role of these factors in interpreting {\it zeros} when the center shifts with representative examples from the corpus. \subsection{Interaction with lexical semantics} Let us take a look at the discourse in (\ex{0}).
The appropriate interpretation of the {\it zero\/} in the last sentence is constrained by the semantic restriction assigned to the arguments of the verb `decide'. No entity in (\ex{0})f can be a potential antecedent, and the {\it zero\/} must be resolved to a discourse entity expressed in the previous utterances of the text. In this case, it goes back to the utterance where {\it T. Electron\/} is available.\footnote{The part indicated by italics is the segment given in (\ex{0}).} \eenumsentence{\item[\ ] {\it {\bf T. Electron} will open the biggest factory in Nirasaki City, Yamanasi.} (T. Electron) will build (the factory) in the company property adjacent to its General Laboratory. (T. Electron) will provide a big-scale clean room, and produce etching devices which can deal with 16M bit dynamic {\sc ram}. The total investment amounts to 5 billion yen and the construction starts this fall. It is expected that (the factory) will start operation a year later. {\it The devices produced in the new factory are {\sc rie} devices, more powerful than {\sc te5000}. ({\sc rie} devices) can cope with the production of 16{\sc mdram}. As the integrality of {\sc dram} increases, the demand of etching devices increases, and hence, (T. Electron) decided to begin the production in the new facility. } } If we assume that the antecedent of a {\it zero\/} is any of the centers introduced in the previous discourse, the interpretation of the last sentence would be ambiguous; there are multiple potential candidates even if lexical information is brought to bear. Note that {\it Nirasaki City\/} and {\it General Laboratory\/} are semantically legitimate antecedents of the missing subject of the {\it deciding\/}-situation described by the last sentence. The uncontroversial interpretation with {\it T. Electron\/} as the antecedent suggests that a discourse entity that has not been previously realized as the Cb {\bf cannot} be interpreted as the cospecifier of a {\it zero\/}.
Discourse coherence can be maintained by an inference process based on lexical semantics, but the preferred interpretation is not always computed by an inference process purely driven by the underlying semantics. Instead, discourse information such as attentional focus and salience provides constraints on the application of information from lexical semantics. This interaction is key for enhancing centering by incorporating disambiguation information from other sources. This claim is further supported by the observation in (\ex{1}). If we assume that the antecedent of a {\it zero\/} can be any of the entities that were previously realized in a discourse, nothing stops the {\it zero\/} in the third utterance from taking {\it doosya\/} (`the company') in the first utterance as its antecedent since this would yield a semantically plausible {\sc rough-shift} interpretation. However, this interpretation is never preferred over the interpretation obtained by a more highly ranked centering transition. That is, no interpretation based on lexical semantics is preferred to an interpretation that is ranked higher in terms of centering transitions. The preferred interpretation according to the centering rules cannot be overridden unless this interpretation is semantically anomalous. \eenumsentence{\item[a.] \shortex{8}{doosya&wa &15-dai &no &hanbai& o& mikondeiru.} {company&{\sc top/subj} &15-piece &{\sc gen}& sales &{\sc obj} &anticipate} {{\it The company anticipates the sales of 15 machines.}} \begin{tabular}{|lll|} \hline {\bf Cb:} & {\sc company} & \\ {\bf Cf:} & [{\sc company}, {\sc sales}] & \\ \hline \end{tabular} \item[b.] \shortex{6}{{\sc cvd}-sooti &wa &{\sc ceraus}.} {{\sc cvd}-device&{\sc top/subj} &{\sc ceraus}} {{\it The {\sc cvd} device is (called) {\sc ceraus}.}} \begin{tabular}{|lll|} \hline {\bf Cb:} & {\sc company} & \\ {\bf Cf:} & [{\sc cvd-device}, {\sc ceraus}] & {\sc retain} \\ \hline \end{tabular} \item[c.]
\shortex{10}{0 &maruti-tyenbaa-hoosiki& o &saiyoo.} {{\sc subj} &multi-chamber system& {\sc obj} & adopt} {{\it (CVD-device) adopts a multi-chamber system.}} \begin{tabular}{|lll|} \hline {\bf Cb1:} & {\sc cvd-device} & \\ {\bf Cf1:} & [{\sc cvd-device}, {\sc system}] {\sc smooth-shift} & \\ & {\sc subj}, {\sc obj} & \\ \hline {\bf Cb2:} & {\sc cvd-device} & \\ {\bf Cf2:} & [{\sc company}, {\sc system}] {\sc rough-shift} & \\ & {\sc subj}, {\sc obj} & \\ \hline \end{tabular} \item[d.] \shortex{5} { 0 & tahaisen-maku &ni &taioo-dekiru.} { {\sc subj} & multi-wired film &{\sc obj2} & deal-can} {{\it (CVD-device) can deal with multi-wired films.}} \begin{tabular}{|lll|} \hline {\bf Cb1:} & {\sc cvd-device} & \\ {\bf Cf1:} & [{\sc cvd-device}, {\sc films}] {\sc continue} & \\ & {\sc subj}, {\sc obj2} & \\ \hline {\bf Cb2:} & {\sc company} & \\ {\bf Cf2:} & [{\sc company}, {\sc films}] {\sc smooth-shift} & \\ & {\sc subj}, {\sc obj2} & \\ \hline \end{tabular} } The lexical semantics of the verb {\it saiyoo\/} (`adopt') in (\ex{0})c would not block {\it the company\/} in (\ex{0})a being realized as its subject. For instance, both `{\it The {\sc cvd}-device adopts a multi-chamber system}' and `{\it the company adopts a multi-chamber system\/}' are reasonable readings of (\ex{0})c. However, the Cf2 reading, which is obtained on the basis of lexical semantics and yields the {\sc rough-shift} transition, is not preferred to the Cf1 {\sc smooth-shift} reading. The preference assigned to (\ex{0})c based on centering transitions is seen in (\ex{0})d. The verb {\it taioo} in (\ex{0})d means `answer' or `response' when a human being or an organization is the subject, and it normally takes an abstract noun such as `demand' or `a political crisis' as its object. The verb can also take a non-agentive entity as its subject, in which case it expresses the subject's applicability to some other object expressed in a non-subject position.
The missing subject of the sentence in (\ex{0})d, which has a concrete object in the object2 position, therefore naturally refers to {\it the {\sc cvd} device\/} rather than {\it the company\/}, meaning that the {\sc cvd} device can be applied to handle multi-wired films. The preferred interpretation of (\ex{0})d thus supports the preference computed in utterance (\ex{0})c based on the centering transitions; the interpretation, which preserves discourse coherence between discourse segments, is the one most preferred. Thus, lexical semantics can be used to resolve the interpretation of {\it zero\/}s, as long as its interaction with discourse information about attentional state is taken into consideration. \subsection{Interaction with tense and aspect} It is not always the case that lexical semantics provides a cue. Observe the following example. \eenumsentence{\item[a.] \shortex{8}{T. Electron&wa &hiitaa-koozyoo&no&kensetu&ni&tyakusyusita.} {T Electron&{\sc top/subj} &heater factory &{\sc gen}& construction &{\sc obj} &began({\sc past})} {{\it T. Electron began the construction of its heater factory.}} \item[b.] \shortex{10}{koremade&0&kyoodaigaisya&kara&kyookyuu&o&uketeita&ga,} {by now&{\sc subj}&brother-company&from&supply&{\sc obj}&received&but} {{\it By now (T. Electron) has been receiving the supply from its brother company,}} \item[c.] \shortex{10}{0 &zisya-seisan&ni&kirikaeteiku.} {{\sc subj} &self-production&{\sc obj2} & introduce} {{\it (T. Electron) will introduce self-production.}} \item[d.] \shortex{10} {Hiitaa-koozyoo&wa&0&itagane-koozyoo&ni&rinsetusite&kensetusuru.} {Heater factory& {\sc top/obj} & {\sc subj}&steel factory&to&adjacent&construct.} {{\it (T. Electron) is constructing the heater factory next to the steel factory.}} \item[e.]
\shortex{10} {0&hiraya-date&de, &yukamenseki&658 heihoo-meetoru.} {{\sc subj}&one-story&is&floor space&658 square meter} {{\it (The heater factory) is a one-story building with a floor space of 658 square meters.}} \item[f.] \shortex{10} {0&{\sc cvd}-sooti-yoo hiitaa&o&seisansuru.} {{\sc subj}&{\sc cvd}-device-for heater&{\sc obj}&produce} {{\it (The heater factory) will produce heaters for {\sc cvd}-devices.}} \item[g.] \shortex{10} {Toosigaku&wa&2-oku 8-sen man yen&da.} {investment-money&{\sc top/subj}&280 million yen&is} {{\it The investment money amounts to 280 million yen.}} \begin{tabular}{|lll|} \hline {\bf Cb:} & {\sc heater factory} & \\ {\bf Cf:} & [{\sc investment}, {\sc 280 million yen}] {\sc retain} & \\ & {\sc subj}, {\sc comp} & \\ \hline \end{tabular} \item[h.] \shortex{12} {0&san'nin&no&gizyutusya&o&Sagami&ni&gizyutusyuutoku&tame&hakensita.} {{\sc subj}&three&{\sc gen}&technician&{\sc obj}&Sagami&to&technical training&for& sent} {{\it (T. Electron) sent three technicians to Sagami for technical training.}} \begin{tabular}{|lll|} \hline {\bf Cb1:} & {\sc investment money} & \\ {\bf Cf1:} & [{\sc heater factory}, {\sc technician}] {\sc rough-shift} & \\ & {\sc subj}, {\sc obj} & \\ \hline {\bf Cb2:} & {\sc investment money} & \\ {\bf Cf2:} & [{\sc T. Electron}, {\sc technician}] {\sc rough-shift} & \\ & {\sc subj}, {\sc obj} & \\ \hline \end{tabular} } No entity in (\ex{0})g is suitable as an antecedent of the {\it zero} in (\ex{0})h -- {\it the investment money\/} is never interpreted as the {\it sender\/} in (\ex{0}).\footnote{Here {\it heater factory\/} and {\it T. Electron\/}, realized in the previous discourse segments, are potential antecedents of the {\it zero\/} because they both meet the constraints on the antecedency of zeros and the semantics of the verb. However, the following alternative analysis would be possible.
The introduction of a new entity, {\it toosigaku\/} (`investment money'), in (\ex{0})g may indicate that this entity is associated with an entity that has been already introduced in the discourse. That is, we can assume that there is a functional dependency relation between {\it heater factory\/} and {\it investment money\/}; {\it investment money\/} is the money for establishing the heater factory. In other words, the heater factory might be implicitly realized in (\ex{0})h though it is not overtly expressed. More research should be done to formalize when such an implicit relation is realized. A statistical measure of cooccurrence of {\sc np}s may be useful to identify potential attributes associated with an entity. For instance, a company may have attributes {\sc name}, {\sc location}, {\sc owned-by}, {\sc product}, {\sc net-worth}, {\sc nationality}, {\sc the number of employees}, and so on.} That is, the {\sc rough-shift} transition is forced to make sense out of (\ex{0})h and the {\it zero\/} looks for its potential antecedent in the previous utterances. There are two entities whose semantics is compatible with what the verb of the sentence requires as its argument. That is, it is both plausible to say that `{\it The heater factory (as an organization, though the construction of its building has not been completed) sent technicians\/}' as well as `{\it T. Electron sent technicians\/}'. However, the second reading is preferred. I assume that the shift is supported by the use of the past tense:\footnote{Tense in Japanese is realized as the morpheme attached to the verb stem. In general, for the [$-$stative] verbs, the simple present (or non-past) tense is marked with {\it -u\/}, while the simple past (or perfect) tense with {\it -ta\/}. The present tense form of [$-$stative] verbs usually refers to future time unless they represent habitual or generic actions, in which case they refer to present time (Kuno 1973).
The past form represents an action that has been completed or executed at reference time.} the attentional focus in (\ex{0})h returns to an event which has been completed at the time of the utterance. Note that {\it T. Electron} has been mentioned as an entity which conducted some past action at the beginning of the text. The example illustrates how inference based on temporal/aspectual information can be used to resolve ambiguity when no local constraints are available. Such cues are used to control the flow of information, indicating the shift of the reference point in describing events. In other words, temporal/aspectual coherence participates in an inference system to maintain non-local coherence, and it provides a cue to identify discourse structure segments and their non-local hierarchical relations in discourse. \subsection{Interaction with agreement} The third strategy to maintain discourse coherence is one that uses different types of agreement information in order to elicit adequate inference and eliminate an undesired potential interpretation. Consider example (\ex{1}). \eenumsentence{\item[a.] \shortex{8}{S. Metal&wa &zisedaigata&ettingu-sooti&o&kaihatu,} {S. Metal&{\sc top/subj} &next-generation-type&etching-device&{\sc obj} &develop} {{\it S. Metal has developed next-generation type etching devices,}} \item[b.] \shortex{10}{0&kotosi&kara&honkakuteki-na&maaketingu&o&hazimeteiru.} {{\sc subj}&this year&from&full-scale&marketing&{\sc obj}&begin} {{\it (S. Metal) has started full-scale marketing this year.}} \item[c.] (a few sentences about {\it the etching device\/}) \item[d.] \shortex{10}{{\sc cvd}-sooti&wa&kore&ni&tuzuku&mono&de,} {{\sc cvd}-device&{\sc top/subj} &this&{\sc obj2}&follow&thing&be} {{\it {\sc cvd} devices are the thing that will follow this (i.e.~etching devices).}} \item[e.] \shortex{10} {habahiroi&zyuyoo&ga&kitaisareteiru.} {wide&demand&{\sc subj} &is-expected} {{\it A wide range of demand is expected.}} \item[f.]
\shortex{10} {0&{\bf tomoni}&{\sc ecr}&o&riyoositeori,} {{\sc subj} &both&{\sc ecr}&{\sc obj}&use} {{\it (CVD devices and etching devices) both use ECR,}} } The adverb {\it tomoni\/} (`both') in (\ex{0})f indicates that the unexpressed subject in the utterance refers to a set of two entities. Considering the previous discourse, we see that the entities which are of the same type and can form a set in this discourse segment are {\it etching devices\/} and {\it CVD devices\/}. Without this quantifier-like adverb, the {\it zero\/} could refer to {\it S. Metal\/}, which is a legitimate antecedent of the {\it zero\/} by itself. Although the language does not mark number distinction (i.e.~singular vs. plural) on nouns, classifiers are used when the number or the quantity does matter; {\it two cups of tea\/}, {\it 3 individuals of professors\/}, {\it 5 things of apples\/} and so on. Expressions which are sensitive to number as in (\ex{0}) can thus be used to make an adequate grouping among the entities in a discourse and prune an undesired interpretation which is otherwise predicted or never eliminated by basic discourse coherence principles. \subsection{Summary} In conclusion, a shift of centers occurs only when such an intended interpretation is well supported by other contextual information, so that the speaker's intention is rarely misinterpreted. If the speaker is concerned that her utterance might be misinterpreted as a consequence of shifting the topic, she always has an alternative to express the intended new topic overtly, as I originally hypothesized in (3) above. However, constraints that arise from lexical semantics and the event structure appear to be readily available cues that the hearer can use to interpret zeros with nonlocal antecedents. In the following section, I will discuss how these observations can be incorporated into centering, and go some way towards integrating centering with a model of global discourse structure (cf.
\cite{Hobbs85a,PS84,Reichman85,GS86}). \section{Integrating Centering and Global Coherence} \label{integ-sec} Although our initial hypothesis was that zeros would not be used to shift centers, we saw above that this often happens in naturally occurring discourse. The relevant numbers are repeated with the definition of the various centering transitions in Figure \ref{trans-state2-fig}.\footnote{Note that the table in Figure \ref{trans-state2-fig} shows the frequency of each transition when an utterance contains at least one {\it zero\/}.} \begin{figure}[htb] \begin{tabular}{|r|c|c|} \hline & &\\ & $\rm{Cb(U_i) = Cb(U_{i-1})}$ & $\rm{Cb(U_i) \neq Cb(U_{i-1})}$ \\ \hline & & \\ $\rm{Cb(U_i) = Cp(U_i)}$ & {\sc continue 76} & {\sc smooth-shift 34} \\ \hline & & \\ $\rm{Cb(U_i) \neq Cp(U_i)}$ & {\sc retain 3} & {\sc rough-shift 23} \\ \hline \end{tabular} \caption{\bf Distribution of Centering Transitions with Zeros} \label{trans-state2-fig} \end{figure} In the current algorithms of Centering Theory \cite{BFP87,WIC94}, interpretations are determined by the Cb and Cf in $\rm{U_{i-1}}$ and $\rm{U_{i}}$ (i.e.~local discourse entities). However, the observations above suggest that the theory must support an algorithm for accessing non-local antecedents when a {\sc rough-shift} transition occurs and a shift to a non-local center is detected (cf. \cite{Sidner83a}). In order to capture global coherence, another center data structure must be added to keep track of the Cbs introduced in the previous utterances. My data shows that {\it zeros\/} in {\sc rough-shifts\/} realize discourse entities that were previously realized as the $\rm{Cb(U_{i-n})}$: there are {\bf no} cases where a {\it zero\/} realizes a discourse entity that was previously a non-Cb. Thus I propose that what is needed is a Cb retrieval mechanism of some type to model the cases where a zero is resolved to a discourse entity that was an earlier center.
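The 2$\times$2 classification in Figure \ref{trans-state2-fig} and the proposed retrieval of an earlier Cb lend themselves to a compact sketch. The code below is illustrative only: the function names and the toy string representation of centers are my assumptions, not part of the theory. Transitions are classified from the two equality tests, and a zero with no local antecedent is resolved by scanning former Cbs from most recent to oldest for the first semantically compatible one.

```python
# Illustrative sketch only; names and the string representation of centers
# are expository assumptions, not part of the theory.

def classify_transition(cb_prev, cb_cur, cp_cur):
    """Classify a transition from the two tests of the 2x2 table:
    Cb(U_i) = Cb(U_{i-1})?  and  Cb(U_i) = Cp(U_i)?"""
    same_cb = cb_cur == cb_prev
    cb_is_cp = cb_cur == cp_cur
    if same_cb:
        return "CONTINUE" if cb_is_cp else "RETAIN"
    return "SMOOTH-SHIFT" if cb_is_cp else "ROUGH-SHIFT"

def retrieve_cb(former_cbs, compatible):
    """Resolve a zero in a ROUGH-SHIFT: scan former Cbs, most recent
    first, and return the first one the inference process accepts
    (e.g. one meeting the verb's selectional restrictions)."""
    for cb in reversed(former_cbs):          # recency order
        if compatible(cb):
            return cb
    return None                              # no former Cb qualifies

print(classify_transition("hanako", "hanako", "hanako"))   # CONTINUE
former = ["t-electron", "rie-devices", "dram"]             # oldest ... newest
print(retrieve_cb(former, lambda cb: cb == "t-electron"))  # t-electron
```

The recency-ordered scan over former Cbs encodes exactly the empirical claim made above: only entities that have been a Cb are candidates, so a discourse entity that was never a Cb can never be returned as the antecedent of the zero.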
This Cb retrieval mechanism could be based on the stack mechanism of \cite{Sidner83a,GS86}, or the cache mechanism proposed in \cite{Walker96b} and discussed in (Walker, this volume). Since I have no evidence that anything more powerful than a {\sc list} is required, the proposed algorithm is to search a linearly ordered {\sc list} of former Cbs, ordered by recency. In all the cases in my data, it is sufficient to search back through a list of former Cbs ordered by recency and choose as the antecedent of the zero the first such Cb that is semantically compatible with the requirements of the zero. This mechanism for computing global coherence must interact with the centering algorithm for local coherence in such a way that the former is activated when the latter fails. The condition may be stated as follows. \eenumsentence{\item[\ ] If $\rm{Cb(U_i)\neq Cp(U_i)}$, then take $\rm{Cb(U_m)}$ which is an element of $\rm{M}$ (i.e.~$\rm{Cb(U_m)\in M}$) where $\rm{M}$ is a list of $\rm{Cb(U_{1\ldots(i-1)})}$ which satisfies the inference process. } When local coherence is not observed and the shift of the center is forced, the list of the Cbs of the previous discourse, $\rm{M}$, is searched, and each proposed Cb is checked against an inference process based on lexical semantics and tense and aspect information, to determine its adequacy. The algorithm for referring to the global discourse may be sketched as in (\ex{1}). \eenumsentence{\item[\ ] When a Cb shift is detected (i.e.~${\rm{Cp(U_i)\neq Cb(U_i)}}$): \begin{enumerate} \item {\bf Local Coherence Check}:\\ \ \ \ \ \ {\it if\/} {\sc retain} and no {\sc zta-continue} is available, go to Global Coherence Check\\ \ \ \ \ \ {\it if\/} {\sc rough-shift}, go to Global Coherence Check \\ \ \ \ \ \ {\it else\/} return to Centering algorithm \\ \item{\bf Global Coherence Check}:\\ \ \ \ \ \ Take a $\rm{Cb(U_m)}$ on the $\rm{Cb}$ list (i.e.~$\rm{Cb(U_m)\in M}$) \\ \ \ \ \ \ Employ inference systems \\ \item[3.]
{\bf Decision: }\\ \ \ \ \ \ {\it if\/} the interpretation $\rm{Cp(U_i)=Cb(U_m)}$ is acceptable, return to Centering algorithm\\ \ \ \ \ \ {\it else\/} return to Global Coherence Check and try the next Cb on the Cb list \end{enumerate} } \section{Discussion} \label{discuss-sec} In this chapter, I discuss issues that centering theory needs to address in order to model discourse coherence in a larger context. I argue that the use of {\it zeros} to realize previous Cbs in {\sc retain} and {\sc rough-shift} centering transition states indicates that coherence information provides constraints on inferential processes. Future work must integrate these observations with other studies on shifting centers. The data examined here show that lexical semantics as well as temporal/aspectual information are used to create links between non-local utterances, and that Centering theory can be extended to compute non-local discourse coherence as long as it incorporates a richer semantic representation of utterances. I propose that the combination of the centering algorithm with a global Cb list captures some aspects of global coherence, without introducing a completely different module. This kind of mechanism suggests that it might be possible to use Centering as a part of an algorithm for inferring discourse structure.
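The Cb retrieval mechanism proposed in Section \ref{integ-sec} is, at its core, a linear search through a recency-ordered list of former Cbs, returning the first one that passes the inference check. The following is a minimal illustrative sketch in Python (not part of the original chapter); the function name and the predicate `compatible` are hypothetical, with the predicate standing in for the inference process based on lexical semantics and tense/aspect information:

```python
def retrieve_global_cb(cb_list, compatible):
    """Return the most recent former Cb that passes the inference check.

    cb_list    -- former Cbs of U_1 .. U_{i-1}, ordered most recent first
    compatible -- predicate modelling the semantic/aspectual inference process
    """
    for cb in cb_list:          # search by recency
        if compatible(cb):      # first semantically compatible former Cb wins
            return cb
    return None                 # no former Cb satisfies the constraints


# Toy usage echoing the discussion above: zeros in rough-shifts only
# realize entities that were earlier Cbs, so only former Cbs are searched.
cbs = ["S. Metal", "etching devices", "CVD devices"]
antecedent = retrieve_global_cb(cbs, lambda e: e.endswith("devices"))
```

This mirrors the claim that nothing more powerful than a {\sc list} is required: the data never needs a stack's push/pop discipline, only recency ordering plus a compatibility filter.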
\section{Introduction} In [18], Sz\'{e}kelyhidi and Tosatti studied the regularity of weak solutions of the equation \begin{align} (\omega+\sqrt{-1}\partial\bar{\partial}\phi)^n=e^{-F(\phi, z)}\omega^n \end{align} on an $n$-dimensional compact K\"{a}hler manifold $(M,\omega)$, where $F: \mathbb{R}\times M\rightarrow \mathbb{R}$ is a smooth function and $\omega+\sqrt{-1}\partial\bar{\partial}\phi\geq 0$ in the sense of currents. According to the local theory of Bedford-Taylor [1], for a locally bounded plurisubharmonic function $u$, the wedge product $(dd^c u)^k,\ 1\leq k\leq n$ is well defined, where $d^c=\frac{\sqrt{-1}}{2}(\bar{\partial}-\partial)$ and $dd^c=\sqrt{-1}\partial\bar{\partial}$. Indeed, for such $u$ and $T$ a positive closed current, the current $uT$ is well defined and $$dd^cu\wedge T:= dd^c(uT)$$ is also a positive closed current. Then the wedge product $(dd^c u)^k,\ 1\leq k \leq n$ can be defined inductively as closed positive currents. Denote by $$PSH(M,\omega)=\{u: M\rightarrow [-\infty, +\infty)| u\ \text{is upper semicontinuous},\ \omega+\sqrt{-1}\partial\bar{\partial}u\geq 0\} $$ the set of $\omega$-plurisubharmonic (short for $\omega$-psh) functions on $M$. If $\phi\in PSH(M, \omega)\cap L^\infty(M)$ solves equation $(1.1)$ in the above sense of Bedford-Taylor, we say $\phi$ is a weak solution of the equation. The H\"{o}lder continuity of weak solutions follows from Ko{\l}odziej [12]. Using a different approach, Sz\'{e}kelyhidi and Tosatti [18] proved that such weak solutions are actually smooth. Particularly, if $M$ is Fano, $\omega\in c_1(M)$ and $F(\phi, z)=\phi-h$, where $h$ satisfies $\sqrt{-1}\partial\bar{\partial}h=\Ric(\omega)-\omega$, their result implies that K\"{a}hler-Einstein currents with bounded potentials are smooth.
\let\thefootnote\relax\footnote{\textbf{Mathematics Subject Classification (2010)} \ 53C55, 35J96} In the proof of [18], the authors use the smoothing property of the corresponding parabolic flow \begin{align} \frac{\partial \varphi}{\partial t}=\log\frac{(\omega + \sqrt{-1}\partial \bar{\partial}\varphi)^n}{\omega^n}+F(\varphi, z). \end{align} They construct a function $\varphi\in C^0([0,T]\times M)\cap C^{\infty}((0,T]\times M)$ with $\varphi(0)= \phi$ which solves equation (1.2) on $(0,T]$, where $T$ depends only on $\sup|\phi|$, $F$ and $\omega$. Then they show that $\dot{\varphi}(t)=0$ for $0<t\leq T$ since the initial $\phi$ is a solution of (1.1). Therefore $\phi=\varphi(0)=\varphi(t)$ is smooth. A similar construction was previously used in Song-Tian [17] for the K\"{a}hler-Ricci flow.\\ As equation (1.1) also makes sense on Hermitian manifolds, it is natural to consider the regularity of weak solutions of equation (1.1) in a more general setting. On a Hermitian manifold, there are no local potentials for $\omega$. However, $\omega$-psh functions are locally the sum of plurisubharmonic functions and smooth functions. Using this property and the fact that the wedge product of a smooth positive $(1, 1)$-form and a positive current is again positive, the current $(\omega+\sqrt{-1}\partial\bar{\partial}\phi)^n$ is still well defined and positive for bounded $\omega$-psh functions. For more details on pluripotential theory we refer to [1, 3, 8, 9].\\ In this note, we show that the higher order estimates in [18] can be obtained on compact Hermitian manifolds. Particularly, the flow $(1.2)$ with smooth initial data $\varphi_0$ has a smooth solution for a time $T$ which depends only on $\sup|\varphi_0|, \sup|\dot{\varphi}_0|$. Then we obtain the following theorem.
\begin{thm} Let $(M,\hat{g})$ be an $n$-dimensional compact Hermitian manifold with the fundamental 2-form $\hat{\omega}$ satisfying \begin{align} \forall \ u\in PSH(M,\hat{\omega})\cap L^\infty(M),\ \ \int_M(\hat{\omega}+\sqrt{-1}\partial\bar{\partial}u)^n=\int_M\hat{\omega}^n \end{align} and $F:\mathbb{R}\times M\rightarrow \mathbb{R}$ be a smooth function. Suppose that $\phi\in PSH(M,\hat{\omega})\cap L^\infty(M)$ solves \begin{align} (\hat{\omega}+\sqrt{-1}\partial\bar{\partial}\phi)^n=e^{-F(\phi, z)}\hat{\omega}^n \end{align} in the sense of currents. Then $\phi$ is smooth. \end{thm} Here the assumption $(1.3)$ is automatically true on K\"{a}hler manifolds. In [18], the proof of the above theorem on K\"{a}hler manifolds needs Ko{\l}odziej's stability result [11]. We use the assumption $(1.3)$ from [3], under which the usual comparison principle is true, to make sure the stability result holds on such Hermitian manifolds [13]. Particularly, if $\hat{\omega}$ satisfies Guan-Li's [6] condition $\partial\bar{\partial} \hat{\omega}^k=0$, $k=1,2$, the assumption is satisfied. When $M$ is a complex surface, such metrics always exist due to a result of Gauduchon [4]. In the proof of our theorem, the main difference between the Hermitian case and the K\"{a}hler case lies in the $C^2, C^3$ estimates and bound for $|\Ric|$. The computation on Hermitian manifolds is more complicated due to the existence of torsion terms. The proof of the second order estimate follows closely the argument of Gill [5] and Tosatti-Weinkove [20]. For the third order estimate we make use of the arguments in Phong-\v{S}e\v{s}um-Sturm [14] and Sherman-Weinkove [19]. Such an estimate for the first derivative of the evolving Hermitian metrics was also established in [24], where the authors took a local reference K\"{a}hler metric to obtain a good bound. To bound $|\Ric|$, we need to deal with the new terms involving $|\nabla\Ric|$ very carefully.
The techniques used in this paper can be applied to construct a weak solution of the Chern-Ricci flow [22, 23, 24] with singular initial Gauduchon metric on complex surfaces. In [22], Tosatti and Weinkove conjectured that if the Chern-Ricci flow starting from a Gauduchon metric is non-collapsing in finite time, then it blows down finitely many exceptional curves and continues in a unique way on a new complex surface. They proved in [23] the smooth convergence of the metrics away from the exceptional curves and the global Gromov-Hausdorff convergence (under a suitable condition) as $t$ approaches the singular time. It is expected that the flow can continue on the new surface from the push-down of the limiting current. We will investigate this in further work. The paper is organized as follows. In section 2, we give some background material on Hermitian manifolds. In section 3, we use the maximum principle to obtain the estimates for existence of the parabolic flow for a short time depending only on $\sup |\varphi_0|$, $\sup |\dot{\varphi}_0|$. In section 4, we use the smoothing property of the parabolic flow to prove Theorem 1.1.\\ \noindent\textbf{Acknowledgements.} The author would like to thank her advisor Jiaping Wang for constant support, encouragement and many helpful discussions. The author also thanks Valentino Tosatti and Ben Weinkove for suggesting the problem to her and for helpful comments on the first version of this paper. In addition, the author is grateful to S{\l}awomir Ko{\l}odziej for the clarification on the stability result and to Haojie Chen for many useful conversations. \section{Preliminaries} For the reader's convenience, in this section we introduce some basic material on Hermitian manifolds. The formulas given here can be found in [19]. Let $(M,g)$ be an $n$-dimensional compact Hermitian manifold with the fundamental 2-form $\omega=\sqrt{-1}g_{i\bar{j}}dz^i\wedge dz^{\bar{j}}$ in local coordinates.
Denote by $\nabla$ the Chern connection of $g$ with Christoffel symbols $\Gamma_{ij}^k$ and torsion $T$ given by: $$\Gamma_{ij}^k=g^{k\bar{l}}\partial_i g_{j\bar{l}}, \ \ \ \ T_{ij}^k=\Gamma_{ij}^k-\Gamma_{ji}^k.$$ The covariant derivatives of $X=X^j\frac{\partial}{\partial z^j}$ and $a=a_jdz^j$ are defined in components as $$\nabla_i X^j=\partial_i X^j+\Gamma^j_{ik}X^k, \ \ \nabla_i a_j=\partial_ia_j-\Gamma^k_{ij}a_k.$$ Then $\nabla$ can be extended naturally to arbitrary tensors. Define the Chern curvature tensor of $g$ in components to be $$R_{i\bar{j}k}^{\ \ \ l}=-\partial_{\bar{j}}\Gamma^l_{ik}.$$ We lower and raise indices using the metric $g$. Then $$R_{i\bar{j}k\bar{l}}=-\partial_i\partial_{\bar{j}}g_{k\bar{l}}+g^{p\bar{q}}\partial_ig_{k\bar{q}}\partial_{\bar{j}}g_{p\bar{l}}$$ and the Chern-Ricci tensor is given by $$ R_{i\bar{j}}=g^{k\bar{l}}R_{i\bar{j}k\bar{l}}=-\partial_i\partial_{\bar{j}}\log \det g.$$ We have the following commutation formulas: \begin{align}\begin{split} [\nabla_i, \nabla_{\bar{j}}]X^l=R_{i\bar{j}k}^{\ \ \ l}X^k, \ \ \ [\nabla_i, \nabla_{\bar{j}}]a_k=-R_{i\bar{j}k}^{\ \ \ l}a_l\\ [\nabla_i, \nabla_{\bar{j}}]\overline{X^l}=-R_{i\bar{j}\ \bar{k}}^{\ \ \bar{l}}\overline{X^k}, \ \ \ [\nabla_i, \nabla_{\bar{j}}]\overline{a_k}=R_{i\bar{j}\ \bar{k}}^{\ \ \bar{l}}\overline{a_l}\end{split} \end{align} The Bianchi identities do not necessarily hold on general Hermitian manifolds. There are extra torsion terms in the following identities.
\begingroup \addtolength{\jot}{.2em} \begin{align} \begin{split} R_{i\bar{j}k\bar{l}}-R_{k\bar{j}i\bar{l}}&=-\nabla_{\bar{j}}T_{ik\bar{l}}\\ R_{i\bar{j}k\bar{l}}-R_{i\bar{l}k\bar{j}}&=-\nabla_iT_{\bar{j}\bar{l}k}\\ R_{i\bar{j}k\bar{l}}-R_{k\bar{l}i\bar{j}}&=-\nabla_{\bar{j}}T_{ik\bar{l}}-\nabla_kT_{\bar{j}\bar{l}i}\\ \nabla_p R_{i\bar{j}k\bar{l}}-\nabla_i R_{p\bar{j}k\bar{l}}&=-T^{\ \ r}_{pi}R_{r\bar{j}k\bar{l}}\\ \nabla_{\bar{q}} R_{i\bar{j}k\bar{l}}-\nabla_{\bar{j}} R_{i\bar{q}k\bar{l}}&=-T^{\ \ \bar{s}}_{\bar{q}\bar{j}}R_{i\bar{s}k\bar{l}}. \end{split} \end{align} \endgroup \section{Estimates for the parabolic flow} Consider the following parabolic equation on a compact Hermitian manifold $(M,\hat{\omega})$, \begin{align} \frac{\partial \varphi}{\partial t}=\log\frac{(\hat{\omega} + \sqrt{-1}\partial \bar{\partial}\varphi)^n}{\hat{\omega}^n}+F(\varphi, z) \end{align} where $F:\mathbb{R}\times M \rightarrow \mathbb{R}$ is a smooth function and $\varphi|_{t=0}=\varphi_0$ is smooth. By the theory of parabolic equations, there exists a unique smooth solution $\varphi(t)$ with $\hat{\omega} + \sqrt{-1}\partial \bar{\partial}\varphi>0$ for a short time. Write $\dot{\varphi}$ for $\frac{\partial \varphi}{\partial t}$. We have the following proposition which generalizes the estimates in [18] to compact Hermitian manifolds. \begin{prop} Given a compact Hermitian manifold $(M,\hat{\omega})$, there exists $T>0$ depending only on $\sup|\varphi_0|$ and $F$ such that the above equation has a smooth solution $\varphi(t,z)$ on $[0,T]$. Moreover, there exist smooth functions $C_k(t)$ on $(0,T]$ depending only on $\sup|\varphi_0|,\ \sup|\dot{\varphi}_0|,\ \hat{\omega}$ and $F$ which blow up as $t\rightarrow 0$ such that $$\| \varphi(t)\|_{C^k(M)} < C_k(t)$$ for $t\leq T$. \end{prop} We write $g$ for the metric associated to $\omega=\hat{\omega} + \sqrt{-1}\partial \bar{\partial}\varphi$, where $\hat{\omega}$ is the background Hermitian metric on a compact complex manifold $M$.
Denote by $|\cdot|$ the norm of tensors with respect to $g$, by $\nabla$ the Chern connection of $g$ and by $\Delta=g^{p\bar{q}}\nabla_p\nabla_{\bar{q}}$ the Laplacian of $\nabla$. We use $\hat{\nabla},\ \hat{R}_{i\bar{j}},\ |\cdot|_{\hat{g}},\ \Delta_{\hat{g}}$, etc. to denote the quantities associated to $\hat{\omega}$. Throughout the section, $C, C',c,c_i,...$ will be some constants which depend only on $\sup|\varphi_0|$, $\sup|\dot{\varphi}_0|$ (and $\hat{\omega}, F$), and may vary from line to line. Also, $H$ may denote different quantities in different proofs.\\ First we have the following lemma from [18]. \begin{lem} There exist $T, C>0$ depending on $\sup|\varphi_0|$ such that \vspace{1.6mm} \begin{align} |\varphi(t)|<C, \ \ \ |\dot{\varphi}(t)|\leq \sup|\dot{\varphi}(0)|e^{Ct}, \end{align} when the solution exists and $t\leq T$. In particular, \begin{align} |\log\frac{(\hat{\omega} + \sqrt{-1}\partial \bar{\partial}\varphi)^n}{\hat{\omega}^n}|<C' \end{align} for some $C'$ depending on $\sup|\varphi_0|$ and $\sup|\dot{\varphi}_0|$. \end{lem} The proof follows from [18, Lemma 2.1] as it does not need the K\"{a}hler condition. Now we can fix a $T'\leq T$ such that there exists a smooth\vspace{1.2mm} solution to $(3.1)$ on $[0,T']$. The $C^1$ estimate in [18] was obtained by modifying B{\l}ocki's estimate [2] (see also [7], [15]). In the Hermitian case, we need the following special local coordinate system from Guan-Li [6], which is also crucial for our second order estimate. \begin{lem} Around a point $p\in M$, there exist local coordinates such that at $p$, \begin{align} \hat{g}_{i\bar{j}}=\delta_{ij}, \ \ \ \ \frac{\partial \hat{g}_{i\bar{i}}}{\partial z_j}=0. \end{align} \end{lem} With the above lemma, we have the following gradient estimate. \begin{lem} There exists $\alpha>0$ depending on $\sup|\varphi_0|$ and $\sup|\dot{\varphi}_0|$ such that \begin{align} |\nabla\varphi(t)|^2_{\hat{g}}<e^{\alpha/t}, \end{align} for $t\leq T'$.
\end{lem} \begin{proof} Define $$H=t\log|\nabla\varphi(t)|^2_{\hat{g}}-\gamma(\varphi),$$ where $\gamma$ is a smooth function which will be determined later. If $H$ achieves its maximum on $[0,T']\times M$ at $t=0$, then $H$ is bounded by a constant depending on $F$ and $\sup |\varphi_0|$ by Lemma 3.1. Now assume $H$ achieves its maximum at a point $(t_0, z_0)$, $t_0>0$. Choose a coordinate system around $z_0$ as in Lemma 3.2 such that $\varphi_{i\bar{j}}$ is diagonal at $z_0$. Write $\rho=|\nabla\varphi(t)|^2_{\hat{g}}=\hat{g}^{i\bar{j}}\varphi_i\varphi_{\bar{j}}$ and $\dot{\rho}=\frac{\partial\rho}{\partial t}$. As $(\frac{\partial}{\partial t}-\Delta )H = \frac{\partial}{\partial t}H-t\frac{\Delta \rho}{\rho}+t\frac{|\nabla \rho|^2}{\rho^2}+\Delta \gamma$, we do\vspace{.6mm} the following calculations at $z_0$. First we have \begin{align*} \frac{\partial }{\partial t} H &=\log \rho+\frac{t \dot{\rho}}{\rho}-\gamma ' \dot{\varphi}\\ &=\log \rho-\gamma ' \dot{\varphi}+ 2\frac{t}{\rho} (\sum_{i,k}\re(\frac {\varphi_{ki\bar{i}}\varphi_{\bar{k}}} {1+\varphi_{i\bar{i}}})+2F' \sum_i|\varphi_i|^2+2\sum_i\re(F_i \varphi_i)), \end{align*} where the second equality follows from \begingroup \addtolength{\jot}{.3em} \begin{align*} \dot{\rho}&=\sum_i\dot{\varphi}_i\varphi_{\bar{i}}+\varphi_i\dot{\varphi}_{\bar{i}}\\ \dot{\varphi}_i&=g^{k\bar{l}}(\partial_i \hat{g}_{k\bar{l}}+\varphi_{i\bar{l}k})-\hat{g}^{k\bar{l}}\partial_i \hat{g}_{k\bar{l}}+F'\varphi_i+F_i\\ &=\sum_k\frac{\varphi_{ik\bar{k}}}{1+\varphi_{k\bar{k}}}+F'\varphi_i+F_i.\end{align*}\endgroup Here $F'$ is the derivative in the $\varphi$ direction.
Also, using $\partial_i\partial_{\bar{i}}\hat{g}^{k\bar{l}}=-\partial_i\partial_{\bar{i}} \hat{g}_{l\bar{k}}+\sum_q \partial_i \hat{g}_{q\bar{k}}\partial_{\bar{i}} \hat{g}_{l\bar{q}}+\sum_p \partial_i \hat{g}_{l\bar{p}}\partial_{\bar{i}} \hat{g}_{p\bar{k}}$, \vspace{-1.5mm} we get \begin{align*} \Delta \rho &=g^{i\bar{i}}\partial_i \partial_{\bar{i}}(\hat{g}^{k\bar{l}}\varphi_k\varphi_{\bar{l}})\\ &=\sum_{i,k}\frac{1}{1+\varphi_{i\bar{i}}}(-\sum_l\partial_i \partial_{\bar{i}} \hat{g}_{l\bar{k}}\varphi_k\varphi_{\bar{l}}+2\re(\varphi_{k\bar{i}i}\varphi_{\bar{k}})\\ &\ \ \ +|\varphi_{k\bar{i}}-\sum_l\partial_{\bar{i}}\hat{g}_{\bar{l}k}\varphi_l|^2 +|\varphi_{ki}-\sum_l\partial_{i}\hat{g}_{k\bar{l}}\varphi_l|^2). \end{align*} At $(t_0,z_0)$, $\nabla H=0$\vspace{-2mm} gives \begin{align}H_i=\frac{t}{\rho}\rho_i-\gamma'\varphi_i=0.\end{align} Then $$\frac{|\nabla \rho|^2}{\rho^2}=\sum_i \frac{1}{1+\varphi_{i\bar{i}}}(\frac{\gamma'}{t})^2|\varphi_i|^2.$$ Also $\Delta \gamma (\varphi)= \underset{i}{\sum}\frac{1}{1+\varphi_{i\bar{i}}}(\gamma''|\varphi_i|^2+\gamma'\varphi_{\bar{i}i}).$ Therefore\vspace{-1mm} we get \begin{align*} 0 \leq\ &(\frac{\partial}{\partial t}-\Delta )H\\ =\ &\frac{\partial}{\partial t}H-t\frac{\Delta \rho}{\rho}+t\frac{|\nabla \rho|^2_{g}}{\rho^2}+\Delta \gamma\\ \leq\ & \log \rho- \gamma' \dot{\varphi} +ct-\sum_{i,k}\frac{t}{\rho}\frac{1}{1+\varphi_{i\bar{i}}}(|\varphi_{k\bar{i}}-\sum_l\partial_{\bar{i}}\hat{g}_{\bar{l}k}\varphi_l|^2 +|\varphi_{ki}-\sum_l\partial_{i}\hat{g}_{k\bar{l}}\varphi_l|^2)\\ &+\sum_i\frac{|\varphi_i|^2}{1+\varphi_{i\bar{i}}}(\frac{(\gamma ')^2}{t}+\gamma'' )+ \sum_i\frac{c_1 t-\gamma'}{1+\varphi_{i\bar{i}}} +n\gamma'+\frac{c_2t}{\rho}. \end{align*} Now we use the same trick as in [2, 18] to control the term containing $\gamma'^2$.
From (3.6) we get \begin{align*} \gamma'\rho\varphi_i=t\rho_i=t(\varphi_i\varphi_{i\bar{i}}+\sum_k\varphi_{ki}\varphi_{\bar{k}}-\sum_{k,l}\partial_i \hat{g}_{l\bar{k}}\varphi_k\varphi_{\bar{l}}) \end{align*} which gives $$\sum_k(\varphi_{ki}-\sum_l\partial_{i}\hat{g}_{k\bar{l}}\varphi_l)\varphi_{\bar{k}}=t^{-1}\gamma' \rho \varphi_i-\varphi_i\varphi_{i\bar{i}}.$$ So \begin{align*} \frac{t}{\rho}\sum_{i,k}\frac{|\varphi_{ki}-\sum_l\partial_{i}\hat{g}_{k\bar{l}}\varphi_l|^2}{1+\varphi_{i\bar{i}}}&\geq \frac{t}{\rho^2}\sum_i\frac{|\sum_k(\varphi_{ki}-\sum_l\partial_{i}\hat{g}_{k\bar{l}}\varphi_l)\varphi_{\bar{k}}|^2}{1+\varphi_{i\bar{i}}}\\ &=\frac{t}{\rho^2}\sum_i\frac{ |t^{-1}\gamma' \rho \varphi_i-\varphi_i\varphi_{i\bar{i}}|^2}{1+\varphi_{i\bar{i}}}\\ &\geq \frac{(\gamma')^2}{t}\sum_i\frac{|\varphi_i|^2}{1+\varphi_{i\bar{i}}}-2\gamma' \end{align*} where we assume $\gamma'>0$. As $\dot{\varphi}$ is bounded from Lemma 3.1 for $t\leq T'$, the above estimates give $$0 \leq \log \rho+ct+\sum_i\frac{\gamma'' |\varphi_i|^2}{1+\varphi_{i\bar{i}}}+ \sum_i\frac{c_1 t-\gamma'}{1+\varphi_{i\bar{i}}} +(n+2+c)\gamma'+\frac{c_2t}{\rho}.$$ Take $\gamma(x)=Ax-\frac{1}{A}x^2$. Assume that $\log \rho\geq 1$ at $(t_0, z_0)$ and choose $A$ to be sufficiently large, then we get $$\sum_i\frac{|\varphi_i|^2}{1+\varphi_{i\bar{i}}}+ \sum_i\frac{1}{1+\varphi_{i\bar{i}}}\leq c' \log \rho$$ for some constant $c'$. The above inequality together with $(3.3)$ implies that $$ 1+\varphi_{i\bar{i}}\leq c(c'\log\rho)^{n-1}.$$ Then we have $$\rho=\sum_i|\varphi_i|^2\leq nc(c'\log\rho)^n,$$ which shows that $\rho$ is bounded at $(t_0, z_0)$. Therefore $H$ has a bound depending only on $\sup|\varphi_0|$, $\sup|\dot{\varphi}_0|$ and the estimate $(3.5)$ follows. \end{proof} Now we will give the second order estimate. We use the idea of [6, 20] and follow the argument in [5] closely.
For local computations in the proof of the following proposition, we always use a coordinate system as in Lemma 3.2 at a point $p$, such that $\hat{g}_{i\bar{j}}=\delta_{ij}, \ \frac{\partial \hat{g}_{i\bar{i}}}{\partial z_j}=0$ and $\varphi_{i\bar{j}}$ is diagonal. \begin{prop} There exists $C>0$ depending on $\sup|\varphi_0|$ and $\sup|\dot{\varphi}_0|$ such that \begin{align} \tr _{\hat{g}}g=n+\Delta \varphi(t)<e^{Ce^{\alpha/t}} \end{align} for \vspace{-1mm} $t\leq T'$, where $\alpha$ is the same as in Lemma $3.3$. \end{prop} \begin{proof} Let $$ H=e^{-\frac{\alpha}{t}}\log \tr _{\hat{g}}g+e^\Psi,$$ where $\Psi=A(\underset{[0,T']\times M}{\sup \varphi}-\varphi)$ and $A$ is a constant to be chosen later. First we have \begingroup \addtolength{\jot}{.1em} \begin{align} (\frac{\partial}{\partial t}-\Delta )H&= \frac{\alpha}{t^2}e^{-\frac{\alpha}{t}}\log\tr_{\hat{g}}g+\frac{e^{-\frac{\alpha}{t}}}{{\tr_{\hat{g}}g}}\Delta_{\hat{g}} \dot{\varphi}-Ae^{\Psi}\dot{\varphi}\nonumber \\ &-e^{-\frac{\alpha}{t}}\Delta \log \tr_{\hat{g}}g-A^2|\nabla{\varphi}|^2e^{\Psi}-A(\tr_g{\hat{g}}- n)e^{\Psi}. \end{align}\endgroup It follows from $(3.1)$ that \begin{align}\Delta_{\hat{g}} \dot{\varphi}=-\tr_{\hat{g}}\Ric(g)+\tr_{\hat{g}}\Ric(\hat{g})+\Delta_{\hat{g}} F(\varphi,z)\end{align} where $$\Delta_{\hat{g}} F(\varphi,z)=F''|\nabla \varphi|^2_{\hat{g}}+F'\Delta_{\hat{g}} \varphi+2\re(g^{i\bar{j}}F'_i \varphi_{\bar{j}})+\Delta_{\hat{g}} F.$$ Here $F'$ is the derivative in the $\varphi$ direction, and the last term $\Delta_{\hat{g}} F$ is the complex Laplacian of $F$ in the $z$ variable.
Use that \begin{align*}\tr_{\hat{g}}\Ric(g)&=\sum_{i,k} g^{i\bar{i}}(-\partial_k \partial_{\bar{k}} g_{i\bar{i}}+g^{j\bar{j}}\partial_kg_{i\bar{j}}\partial_{\bar{k}} g_{j\bar{i}})\\ &=\sum_{i,k} g^{i\bar{i}}(-\varphi_{i\bar{i}k\bar{k}}-\partial_k \partial_{\bar{k}} \hat{g}_{i\bar{i}}+g^{j\bar{j}}\partial_kg_{i\bar{j}}\partial_{\bar{k}} g_{j\bar{i}}) \end{align*} to rewrite $(3.9)$ as \begin{align} \sum_{i,k} g^{i\bar{i}}\varphi_{i\bar{i}k\bar{k}}&=-\sum_{i,k}g^{i\bar{i}}\partial_k \partial_{\bar{k}} {\hat{g}}_{i\bar{i}}+\sum_{i,j,k}g^{i\bar{i}}g^{j\bar{j}}\partial_kg_{i\bar{j}}\partial_{\bar{k}} g_{j\bar{i}} +\Delta_{\hat{g}} \dot{\varphi}-\tr_{\hat{g}}\Ric(\hat{g})-\Delta_{\hat{g}} F(\varphi,z)\nonumber\\ &\geq \sum_{i,j,k}g^{i\bar{i}}g^{j\bar{j}}\partial_kg_{i\bar{j}}\partial_{\bar{k}} g_{j\bar{i}} +\Delta_{\hat{g}} \dot{\varphi}-C_1 |\nabla\varphi|^2_{\hat{g}}-C_2\tr_{\hat{g}}g\tr_{g}\hat{g}. \end{align} From the bound in $(3.3)$, we have $\tr_{\hat{g}}g,\ \tr_g\hat{g}\geq C^{-1}$ for some constant $C$ and then $\tr_{\hat{g}}g,\ \tr_g\hat{g}\leq C\tr_g\hat{g}\tr_{\hat{g}}g$. We use these in the above inequality. Also we will use them frequently in the following. As in the estimates in [20, (2.6)], we have \begin{align}\Delta \tr_{\hat{g}}g\geq \sum_{i,k} g^{i\bar{i}}\varphi_{i\bar{i}k\bar{k}}-2\re(\sum_{i,j,k}g^{i\bar{i}}\partial_{\bar{i}} \hat{g}_{j\bar{k}}\varphi_{k\bar{j}i})-C\tr_{\hat{g}}g\tr_{g}\hat{g}. \end{align} To control $\sum_{i,j,k}g^{i\bar{i}}\partial_{\bar{i}} \hat{g}_{j\bar{k}}\varphi_{k\bar{j}i}$, we use a trick from [6].
$$\sum_{i,j,k}g^{i\bar{i}}\partial_{\bar{i}} \hat{g}_{j\bar{k}}\varphi_{k\bar{j}i}=\sum_i\sum_{j\neq k}(g^{i\bar{i}}\partial_{\bar{i}} \hat{g}_{j\bar{k}}\partial_k g_{i\bar{j}}-g^{i\bar{i}}\partial_{\bar{i}} \hat{g}_{j\bar{k}}\partial_{k} \hat{g}_{i\bar{j}}).$$ So \begin{align} |2\re (\sum_{i,j,k} g^{i\bar{i}}\partial_i \hat{g}_{j\bar{k}}\varphi_{k\bar{j}i})|&\leq \sum_i\sum_{j\neq k} (g^{i\bar{i}}g^{j\bar{j}}\partial_kg_{i\bar{j}}\partial_{\bar{k}} g_{j\bar{i}}+g^{i\bar{i}}g_{j\bar{j}}\partial_{\bar{i}}\hat{g}_{j\bar{k}}\partial_i \hat{g}_{k\bar{j}})+C\tr_{g}\hat{g}\nonumber \\ &\leq \sum_i\sum_{j\neq k} g^{i\bar{i}}g^{j\bar{j}}\partial_kg_{i\bar{j}}\partial_{\bar{k}} g_{j\bar{i}}+C\tr_{g}\hat{g} \tr_{\hat{g}}g. \end{align} Combining $(3.10),(3.11),(3.12)$, we get $$\Delta \tr_{\hat{g}}g\geq \sum_{i,j}g^{i\bar{i}}g^{j\bar{j}}\partial_jg_{i\bar{j}}\partial_{\bar{j}} g_{j\bar{i}}+\Delta_{\hat{g}}\dot{\varphi}-C_1 |\nabla\varphi|^2_{\hat{g}}-C\tr_{g}\hat{g}\tr_{\hat{g}}g.$$ Now we will control $\frac{|\partial \tr_{\hat{g}}g|^2}{(\tr_{\hat{g}}g)^2}$.
As $$\partial_i \tr_{\hat{g}}g=\partial_i \sum_j \varphi_{j\bar{j}}=\sum_j \partial_j \varphi_{i\bar{j}}=\sum_j(\partial_j g_{i\bar{j}}-\partial_j\hat{g}_{i\bar{j}}),$$ we have $$\frac{|\partial \tr_{\hat{g}}g|^2}{(\tr_{\hat{g}}g)^2}\leq \frac{1}{(\tr_{\hat{g}}g)^2}\sum_{i,j,k} g^{i\bar{i}}\partial_jg_{i\bar{j}}\partial_{\bar{k}} g_{k\bar{i}}-\frac{2}{(\tr_{\hat{g}}g)^2}\re(\sum_{i,j,k} g^{i\bar{i}}\partial_j\hat{g}_{i\bar{j}}\partial_{\bar{k}} g_{k\bar{i}})+C\tr_{g}\hat{g}.$$ Assume that $H$ achieves its maximum at $(t_0, z_0),\ t_0>0$, then $\nabla H(t_0, z_0)=0$ gives $$\frac{e^{-\frac{\alpha}{t}}\partial_{\bar{i}} \tr_{\hat{g}}g}{\tr_{\hat{g}}g}-Ae^{\Psi}\varphi_{\bar{i}}=0.$$ That is, $$\sum_k\partial_{\bar{i}}g_{k\bar{k}}=Ae^{\frac{\alpha}{t}}\tr_{\hat{g}}g\varphi_{\bar{i}}e^{\Psi}.$$ Together with $\partial_{\bar{k}} g_{k\bar{i}}=\partial_{\bar{k}} \hat{g}_{k\bar{i}}+\partial_{\bar{i}} g_{k\bar{k}}$, we get \begingroup \addtolength{\jot}{.4em} \begin{align} |\frac{2}{(\tr_{\hat{g}}g)^2}\re(\sum_{i,j,k} g^{i\bar{i}}\partial_j{\hat{g}}_{i\bar{j}}\partial_{\bar{k}} g_{k\bar{i}})| &\leq |\frac{2Ae^{\frac{\alpha}{t}}e^{\Psi}}{\tr_{\hat{g}}g}\re\sum_{i,j} g^{i\bar{i}}\partial_j{\hat{g}}_{i\bar{j}}\varphi_{\bar{i}}|+C\tr_{g}\hat{g}\nonumber\\ &\leq e^{\frac{\alpha}{t}}e^{\Psi}(A^2|\nabla \varphi|^2+\frac{C\tr_g\hat{g}}{(\tr_{\hat{g}}g)^2})+C\tr_g\hat{g} \nonumber\\ &\leq e^{\frac{\alpha}{t}}e^{\Psi}(A^2|\nabla \varphi|^2+C'\tr_{g}\hat{g})+C\tr_{g}\hat{g} \end{align}\endgroup Using the Cauchy-Schwarz inequality as in Yau's second order estimate [25] (see equation (2.21) in [20]), we have \begin{align} \frac{1}{\tr_{\hat{g}}g}\sum_{i,j,k}g^{i\bar{i}}\partial_jg_{i\bar{j}}\partial_{\bar{k}} g_{k\bar{i}}\leq \sum_{i,j} g^{i\bar{i}}g^{j\bar{j}}\partial_jg_{i\bar{j}}\partial_{\bar{j}} g_{j\bar{i}} \hspace{4mm} \end{align} Combining $(3.13)$ and $(3.14)$, we \vspace{-.5mm} get \begingroup \addtolength{\jot}{.3em} \begin{align*} \frac{|\partial
\tr_{\hat{g}}g|^2}{(\tr_{\hat{g}}g)^2}&\ \leq\frac{1}{\tr_{\hat{g}}g}\sum_{i,j} g^{i\bar{i}}g^{j\bar{j}}\partial_jg_{i\bar{j}}\partial_{\bar{j}} g_{j\bar{i}}+ e^{\frac{\alpha}{t}}e^{\Psi}(A^2|\nabla \varphi|^2+C'\tr_{g}\hat{g})+C\tr_{g}\hat{g}. \end{align*}\endgroup \begingroup \addtolength{\jot}{.4em} So \begin{align} \begin{split} e^{-\frac{\alpha}{t}}\Delta \log\tr_{\hat{g}}g=&e^{-\frac{\alpha}{t}}(\frac{\Delta \tr_{\hat{g}}g}{\tr_{\hat{g}}g}-\frac{|\partial \tr_{\hat{g}}g|^2}{(\tr_{\hat{g}}g)^2} )\\ \geq\ & \frac{e^{-\frac{\alpha}{t}}}{\tr_{\hat{g}}g}\Delta_{\hat{g}} \dot{\varphi}-C_1\frac{e^{-\frac{\alpha}{t}}}{\tr_{\hat{g}}g}|\nabla\varphi|^2_{\hat{g}}-Ce^{-\frac{\alpha}{t}}\tr_{g}\hat{g}\\ &-A^2|\nabla \varphi|^2e^{\Psi}-C'\tr_{g}\hat{g}e^{\Psi} \end{split} \end{align} \endgroup From $(3.3)$ we have $\tr_{\hat{g}}g\leq C(\tr_{g}\hat{g})^{n-1}$ for some constant $C$. Now, putting $(3.15)$ into $(3.8)$ and using $(3.5)$ and that $\varphi, \ \dot{\varphi}$ are bounded, we have \begingroup \addtolength{\jot}{.5em} \begin{align*} (\frac{\partial}{\partial t}-\Delta )H\leq\ &C\log\tr_{\hat{g}}g-A\dot{\varphi}e^{\Psi}+C_1+Ce^{-\frac{\alpha}{t}}\tr_{g}\hat{g}\\ &+C\tr_{g}\hat{g}e^{\Psi}+Ane^{\Psi}-A\tr_{g}\hat{g}e^{\Psi}\\ \leq\ &C'\log\tr_{\hat{g}}g+AC'e^{\Psi}-(A-C)e^{\Psi}\tr_{g}\hat{g}+Ce^{-\frac{\alpha}{t}}\tr_{g}\hat{g}\\ \leq\ &-(A-C-C_1)e^{\Psi}\tr_{g}\hat{g}+ AC'e^{\Psi} \end{align*} \endgroup Choosing $A$ large enough such that $A-C-C_1\geq 1$, we get at $(t_0, z_0)$, $$0\leq -(A-C-C_1)\tr_{g}\hat{g}+AC',$$ which for $t\leq T'$ gives $\tr_{g}\hat{g}\leq AC'$ at $(t_0, z_0)$. This implies that $H\leq C$ for some constant $C$ depending on $\sup|\varphi_0|$ and $\sup|\dot{\varphi}_0|$. Then we obtain the desired estimate $(3.7)$. \end{proof} Now we give the third order estimate. Our proof is based on the arguments in [14, 19] (see also [24]).
As in [25], consider $S=g^{i\bar{p}}g^{q\bar{j}}g^{k\bar{r}}\varphi_{i\bar{j}k}\varphi_{\bar{p}q\bar{r}}$ where $\varphi_{i\bar{j}k}=\hat{\nabla}_k\varphi_{i\bar{j}}$. We introduce the tensor $\Phi_{ij}^{\ \ k}=\Gamma_{ij}^k-\hat{\Gamma}_{ij}^k$ and then $$S=|\Phi|^2=g^{i\bar{p}}g^{j\bar{q}}g_{k\bar{r}}\Phi_{ij}^{\ \ k}\Phi_{\bar{p}\bar{q}}^{\ \ \bar{r}}.$$ From now on, we will write $k(t), k_1(t), k_2(t)$, ... for a function of the form $Ke^{\lambda Ce^{\alpha/t}}$ where $e^{Ce^{\alpha/t}}$ is the bound in Proposition $3.2$, and $K, \lambda$ are constants depending only on $\hat{\omega}, F$. In the proof of the following proposition, we will use the estimates $|\nabla\varphi(t)|^2_{\hat{g}}\leq k(t),\ \tr _{\hat{g}}g\leq k(t)$ repeatedly. \begin{prop} There exists a smooth function $C(t)>0$ on $(0,T']$ depending only on $\sup |\varphi_0|,\ \sup |\dot{\varphi}_0|$ and blowing up as $t\rightarrow 0$ such that $S<C(t)$ for $t\leq T'$. \end{prop} \begin{proof} As in the calculations in [14, 19], first we have \begin{align*} \Delta S=&|\overline{\nabla}\Phi|^2+|\nabla\Phi|^2-\Phi_{ij}^{\ \ k}\left(R^{\ p\ q}_{p\ k}\Phi^{ij}_{\ \ q}-R_{p\ q}^{\ p\ i}\Phi_{\ \ k}^{qj}-R_{p\ q}^{\ p\ j}\Phi^{iq}_{\ \ k}\right)\\ &+2\re \left(\Delta \Phi_{ij}^{\ \ k}\Phi_{\ \ k}^{ij}\right),\\ \frac{\partial}{\partial t}S=&\Phi_{ij}^{\ \ k}\left(\frac{\partial}{\partial t} g^{i\bar{q}}\Phi_{\bar{q}\ k}^{\ j}+\frac{\partial}{\partial t} g_{k\bar{q}}\Phi^{ij\bar{q}}+\frac{\partial}{\partial t} g^{j\bar{q}}\Phi^i_{\ \bar{q}k}\right)+2\re \left(\frac{\partial}{\partial t}\Phi_{ij}^{\ \ k} \Phi^{ij}_{\ k}\right)\\ =&\Phi_{ij}^{\ \ k}\left(g^{q\bar{r}}\frac{\partial}{\partial t}g_{k\bar{r}}\Phi^{ij}_{\ \ q}-g^{i\bar{r}}\frac{\partial}{\partial t}g_{q\bar{r}}\Phi^{qj}_{\ \ k}-g^{j\bar{r}}\frac{\partial}{\partial t}g_{q\bar{r}}\Phi^{iq}_{\ \ k}\right )\\ &+2\re \left(\frac{\partial}{\partial t}\Phi_{ij}^{\ \ k} \Phi^{ij}_{\ k}\right).
\end{align*} Thus \begin{align} (\frac{\partial}{\partial t}-\Delta)S=&-|\overline{\nabla}\Phi|^2-|\nabla\Phi|^2+\Phi_{ij}^{\ \ k}\left(B_k^{\ q}\Phi^{ij}_{\ \ q}-B^{\ i}_q\Phi^{qj}_{\ \ k}-B_q^{\ j}\Phi^{iq}_{\ \ k}\right)\nonumber\\ &+2\re \left((\frac{\partial}{\partial t}-\Delta)\Phi_{ij}^{\ \ k}\Phi_{\ \ k}^{ij}\right). \end{align} where $B_i^j=g^{j\bar{r}}\frac{\partial}{\partial t}g_{i\bar{r}}+R^{\ p\ j}_{p\ i}$. From equation (3.1) and formula (2.2), we get \begin{align*} \frac{\partial}{\partial t} g_{i\bar{j}}&=-R_{i\bar{j}}+\hat{R}_{i\bar{j}}+F_{i\bar{j}}(\varphi,z),\nonumber \\ R^{\ p\ q}_{p\ k}&=R_k^{\ q}-\nabla^p T_{pk}^{\ \ q}-\nabla_k T^{pq}_{\ \ p} \end{align*} Hence \begin{align} B_i^{\ j}=g^{j\bar{r}}(\hat{R}_{i\bar{r}}+F_{i\bar{r}}(\varphi,z))-\nabla^p T_{pi}^{\ \ j}-\nabla_i T^{pj}_{\ \ p}. \end{align} Here\vspace{.5mm} $ F_{i\bar{r}}(\varphi,z)=F_{i\bar{r}}+F''\varphi_i\varphi_{\bar{r}}+F'\varphi_{i\bar{r}}+F'_i \varphi_{\bar{r}}+F'_{\bar{r}} \varphi_i.$\\ Now we compute the evolution of $\Phi_{ij}^{\ \ k}$. First \begin{align*}\frac{\partial}{\partial t}\Phi_{ij}^{\ \ k} &=g^{k\bar{l}}\nabla_i\frac{\partial}{\partial t}g_{j\bar{l}}\\ &=-\nabla_iR_j^{\ k}+g^{k\bar{l}}(\nabla_i\hat{R}_{j\bar{l}}+\nabla_iF_{j\bar{l}}(\varphi, z)). \end{align*} Note that \begin{align}\nabla_{\bar{q}} \Phi_{ij}^{\ \ k}=-R_{i\bar{q}j}^{\ \ \ k}+\hat{R}_{i\bar{q}j}^{\ \ \ k}.
\end{align} Then \begin{align*} \Delta \Phi_{ij}^{\ \ k}&=-\nabla^{\bar{p}}R^{\ \ \ k}_{i\bar{p}j}+\nabla^{\bar{p}}\hat{R}^{\ \ \ k}_{i\bar{p}j}\nonumber\\ &=\nabla_i (-R^{\ k}_j+\nabla^qT_{qj}^{\ \ k}+\nabla_jT_{\ \ p}^{pk})-T_{iq}^{\ r}R^{\ q\ k}_{r\ j}+\nabla^{\bar{p}}\hat{R}_{i\bar{p}j}^{\ \ \ k} .\end{align*} So we have \begin{align*}(\frac{\partial}{\partial t}-\Delta)\Phi_{ij}^{\ \ k}&=\nabla_i(g^{k\bar{l}}(\hat{R}_{j\bar{l}}+F_{j\bar{l}}(\varphi, z))-\nabla^qT_{qj}^{\ \ k}-\nabla_jT_{\ \ p}^{pk})+T_{iq}^{\ r}R^{\ q\ k}_{r\ j}-\nabla^{\bar{p}}\hat{R}_{i\bar{p}j}^{\ \ \ k}\\ &=\nabla_iB_j^{\ k}+T_{iq}^{\ r}R^{\ q\ k}_{r\ j}-\nabla^{\bar{p}}\hat{R}_{i\bar{p}j}^{\ \ \ k}. \end{align*} Combining with $(3.16)$, we get \begin{align*} (\frac{\partial}{\partial t}-\Delta )S=&-|\overline{\nabla}\Phi|^2-|\nabla\Phi|^2+ \Phi_{ij}^{\ \ k}\left(B_k^{\ q}\Phi^{ij}_{\ \ q}-B^{\ i}_q\Phi^{qj}_{\ \ k}-B_q^{\ j}\Phi^{iq}_{\ \ k}\right)\\ &+2\re \left(\nabla_iB_j^{\ k}+T_{iq}^{\ r}R^{\ q\ k}_{r\ j}-\nabla^{\bar{p}}\hat{R}_{i\bar{p}j}^{\ \ \ k}\right)\Phi^{ij}_{\ \ k}. \end{align*} As $T_{ij\bar{k}}=\hat{T}_{ij\bar{k}},$ \begin{align} \nabla^pT_{pk}^{\ \ q}=g^{p\bar{l}}g^{q\bar{r}}(\hat{\nabla}_{\bar{l}}\hat{T}_{pk\bar{r}}-\Phi_{\bar{l}\bar{r}}^{\ \ \bar{s}}\hat{T}_{pk\bar{s}}). \end{align} By (3.17), $$|B_{i\bar{j}}|\leq k(t)(S^{1/2}+1+|\nabla \varphi|^2_g+|\varphi_{i\bar{j}}|_g^2)\leq k(t)(S^{1/2}+1).$$ Now we want to control $\nabla_i B_j^{\ k}$. From $(3.17)$ we need the following estimates from [19], obtained by similar calculations as in (3.19): $$|\nabla_i\nabla^qT_{qj}^{\ k}|\leq k(t)(S+|\overline{\nabla}\Phi|+1), $$ $$|\nabla_i\nabla_jT_{\bar{p}}^{\ k\bar{p}}|\leq k(t)(S+|\nabla\Phi|+1).$$ Also $$|T_{iq}^{\ r}R^{q\ k}_{r\ j}|\leq k(t)(|\overline{\nabla}\Phi|+1),$$ $$ |\nabla^{\bar{p}}R_{\bar{p}ij}^{\ \ \ k}|\leq k(t)(S^{1/2}+1).$$ We bound the terms with $\varphi_{ij}$ and $\Phi^{ij}_{\ \ k}$ in $\re (\nabla_iB_j^{\ k}\Phi^{ij}_{\ \ k})$ by $|\varphi_{ij}|^2 + k(t)S$.
Together with the above estimates we get \begingroup \addtolength{\jot}{.4em} \begin{align} (\frac{\partial}{\partial t}-\Delta )S&\leq k(t)(S^{3/2}+S+1)+\sum_{i,j}|\varphi_{ij}|^2-\frac{1}{2}(|\nabla \Phi|^2+|\overline{\nabla} \Phi|^2). \end{align} \endgroup We will argue in a similar way as in [16, 19] to control the term $S^{3/2}$. The evolution equations below can be obtained by following the computation in [18, 19]. \begin{align}\begin{split}(\frac{\partial}{\partial t}-\Delta )\tr_{\hat{g}}g&\leq -\frac{S}{k_2(t)}+k_2(t),\\ (\frac{\partial}{\partial t}-\Delta )|\nabla \varphi|^2_{\hat{g}}&\leq -\sum_{i,j}\frac{|\varphi_{ij}|^2}{k_3(t)}+k_3(t). \end{split} \end{align} Now we will apply a maximum principle argument to the quantity \vspace{-1mm}$$H=\frac{S}{(C_1(t)-\tr_{\hat{g}}g)^2}+\frac{\tr_{\hat{g}}g}{C_2(t)}+\frac{|\nabla \varphi|_{\hat{g}}^2}{C_3(t)}.$$ Here we can take $C_i(t)$ to be of the form $Le^{\lambda Ce^{\alpha/t}}$, where $C,\ \alpha$ are the same as in (3.7) and $L, \lambda$ will be determined later. Let $L,\ \lambda>2$ be such that \begin{align}\frac{C_1(t)}{2}\leq C_1(t)-\tr_{\hat{g}}g\leq C_1(t),\ \ \ 0<-\frac{C'_i(t)}{C^2_i(t)}\leq \frac{1}{\sqrt{C_i(t)}},\ \ i=1,2,3. \end{align} We calculate the evolution of $H$. \begingroup \addtolength{\jot}{.4em} \begin{align*} (\frac{\partial}{\partial t}-\Delta ) H=&\frac{1}{(C_1(t)-\tr_{\hat{g}}g)^2}(\frac{\partial}{\partial t}-\Delta )S+\frac{2S}{(C_1(t)-\tr_{\hat{g}}g)^3}(\frac{\partial}{\partial t}-\Delta )\tr_{\hat{g}}g\\ &-\frac{4\re\nabla\tr_{\hat{g}}g \cdot \overline{\nabla}S}{(C_1(t)-\tr_{\hat{g}}g)^3}-\frac{6S|\nabla \tr_{\hat{g}}g|^2}{(C_1(t)-\tr_{\hat{g}}g)^4}-\frac{2C'_1(t)S}{(C_1(t)-\tr_{\hat{g}}g)^3}\\ &+\frac{1}{C_2(t)}(\frac{\partial}{\partial t}-\Delta )\tr_{\hat{g}}g+\frac{1}{C_3(t)}(\frac{\partial}{\partial t}-\Delta )|\nabla \varphi|^2_{\hat{g}}-\frac{C_2'(t)\tr_{\hat{g}}g}{C_2(t)^2}-\frac{C_3'(t)|\nabla \varphi|^2_{\hat{g}}}{C_3(t)^2}.
\end{align*} \endgroup Taking $C_2(t), C_3(t)$ large enough and using (3.5), (3.7) and (3.22), the last two terms can be bounded by a constant $C$. Assuming $S> 1$ at the maximum point of $H$, from (3.20) we have \begin{align*} (\frac{\partial}{\partial t}-\Delta )S&\leq k_1(t)(S^{3/2}+1)+\sum_{i,j}|\varphi_{ij}|^2-\frac{1}{2}|\overline{\nabla} \Phi|^2. \end{align*} Together with (3.21), (3.22), we get \begingroup \addtolength{\jot}{.5em} \begin{align*} 0\leq &\ (\frac{\partial}{\partial t}-\Delta ) H\\ \leq& \left(\frac{4k_1(t)}{C^2_1(t)}S^{3/2}+ \frac{4k_1(t)}{C^2_1(t)} +\frac{4}{C^2_1(t)}\sum_{i,j}|\varphi_{ij}|^2-\frac{|\overline{\nabla} \Phi|^2}{2C_1^2(t)}\right) +\left(-\frac{2S^2}{k_2(t)C_1^3(t)}+\frac{16k_2(t)S}{C_1^3(t)}\right)\\ &+\frac{4|\re\nabla\tr_{\hat{g}}g \cdot \overline{\nabla}S|}{(C_1(t)-\tr_{\hat{g}}g)^3}+\frac{2S}{\sqrt{C_1^3(t)}} +\left(-\frac{1}{k_2(t)C_2(t)}S+\frac{k_2(t)}{C_2(t)}\right)\\ &+\left(-\frac{1}{k_3(t)C_3(t)}\sum_{i,j}|\varphi_{ij}|^2+\frac{k_3(t)}{C_3(t)}\right)+C. \end{align*} \endgroup As $|\nabla \tr_{\hat{g}}g|\leq \frac{1}{64}k_5(t)S^{1/2}$ and $|\overline{\nabla} S|\leq 2S^{1/2}|\overline{\nabla}\Phi|,$ \begin{align*}\frac{4|\re\nabla \tr_{\hat{g}}g \cdot \overline{\nabla} S|}{(C_1(t)-\tr_{\hat{g}}g)^3}&\leq \frac{k_5(t)S|\overline{\nabla}\Phi|}{C^3_1(t)} \leq\frac{|\overline{\nabla} \Phi|^2}{2C_1^2(t)}+ \frac{k^2_5(t)S^2}{2C_1^4(t)}.\end{align*} We will also use \begingroup \addtolength{\jot}{.6em} \begin{align*} \frac{4k_1(t)S^{3/2}}{C^2_1(t)}&\leq \frac{S^2}{k_2(t)C_1^3(t)}+\frac{4k_1^2(t)k_2(t)S}{C_1(t)}.\end{align*} \endgroup Recall that all $k_i(t), C_i(t)$ are functions of the form $Le^{\lambda Ce^{\alpha/t}}$. First choose $C_i(t)>k_i(t)$, then fix $C_2(t), C_3(t)$.
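For the reader's convenience, the last displayed inequality is just an instance of the arithmetic--geometric mean inequality $a+b\geq 2\sqrt{ab}$, applied with $a=\frac{S^2}{k_2(t)C_1^3(t)}$ and $b=\frac{4k_1^2(t)k_2(t)S}{C_1(t)}$:
\begin{align*}
\frac{S^2}{k_2(t)C_1^3(t)}+\frac{4k_1^2(t)k_2(t)S}{C_1(t)}\geq 2\sqrt{\frac{4k_1^2(t)S^3}{C_1^4(t)}}=\frac{4k_1(t)S^{3/2}}{C^2_1(t)}.
\end{align*}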
Now take the constants $L, \lambda$ in $C_1(t)$ to be large enough such that $\frac{k^2_5(t)}{2C_1^4(t)}\leq \frac{1}{k_2(t)C_1^3(t)},\ \frac{4}{C^2_1(t)}\leq \frac{1}{k_3(t)C_3(t)}$ and $\frac{16k_2(t)}{C_1^3(t)}+\frac{2}{\sqrt{C_1^3(t)}}+\frac{4k_1^2(t)k_2(t)}{C_1(t)}\leq \frac{1}{2k_2(t)C_2(t)}$. The above estimates then give that at $(t_0, z_0)$, $$0\leq \frac{-1}{2k_2(t)C_2(t)}S+C',$$ for some constant $C'$. Therefore $S\leq 4C'k_2(t)C_2(t)\leq C'C_1(t)$ at $(t_0, z_0)$. It follows that $H$ is bounded by some constant $C$ depending only on $\sup|\varphi_0|$ and $\sup|\dot{\varphi}_0|$, which gives the desired estimate of $S$. \end{proof} Using $(3.7)$, the above estimate $S\leq C(t)$ implies that $\|\varphi(t)\|_{C^{2+\alpha}(M,g)}$ can be bounded by a smooth function $C(t)$ on $(0, T']$, which depends only on $\sup|\varphi_0|$ and $\sup|\dot{\varphi}_0|$. Differentiating the equation $(3.1)$ in $t$, we get \begin{align} \frac{\partial \dot{\varphi}}{\partial t}=\Delta \dot{\varphi}+ F'(\varphi,z)\dot{\varphi}. \end{align} To apply parabolic Schauder estimates to obtain higher order estimates, we still need to bound the derivatives of $g_{i\bar{j}}$ in the $t$-direction. For this it is sufficient to bound $|\Ric(g)|$. \begin{lem} There exists a smooth function $C(t)>0$ on $(0,T']$ depending only on $\sup |\varphi_0|$ and $\sup |\dot{\varphi}_0|$ and blowing up as $t\rightarrow 0$ such that $|\Ric|<C(t)$ for $t\leq T'$.
\end{lem} \begin{proof} To compute the evolution of $|\Ric|$, first $$\frac{\partial}{\partial t}R_{j\bar{k}}=-g^{l\bar{q}}\nabla_{\bar{k}}\nabla_j\frac{\partial}{\partial t}g_{l\bar{q}}\\ =-g^{l\bar{q}}\nabla_{\bar{k}}\nabla_j(-R_{l\bar{q}}+\hat{R}_{l\bar{q}}+F_{l\bar{q}}(\varphi,z)).$$ Using $(2.1),(2.2)$, we have \begingroup \addtolength{\jot}{.4em} \begin{align*} \nabla_{\bar{k}}\nabla_jR_{l\bar{q}}&=\nabla_l\nabla_{\bar{q}} R_{j\bar{k}}-\nabla_l T_{\bar{k}\bar{q}}^{\ \ \bar{s}}R_{j\bar{s}}+T_{\bar{k}\bar{q}}^{\ \ \bar{s}}\nabla_lR_{j\bar{s}}+R^{\ \ \ r}_{l\bar{k}j}R_{r\bar{q}}\\ &\ \ \ -R^{\ \ \bar{s}}_{l\bar{k}\ \bar{q}}R_{j\bar{s}}+\nabla_{\bar{k}}T^{\ r}_{lj}R_{r\bar{q}}+T^{\ r}_{lj}\nabla_{\bar{k}}R_{r\bar{q}}. \end{align*} \endgroup So \begingroup \addtolength{\jot}{.4em} \begin{align*} (\frac{\partial}{\partial t}-\Delta ) R_{j\bar{k}}=&\nabla_{\bar{k}}T^{\ r}_{lj}R^{\ l}_r+T^{\ r}_{lj}\nabla_{\bar{k}}R^{\ l}_r+R^{\ \ \ r}_{l\bar{k}j}R^{\ l}_r\\ &-R^{\ \ \bar{s}l}_{l\bar{k}}R_{j\bar{s}}+\nabla^{\bar{q}}T_{\bar{k}\bar{q}}^{\ \ \bar{s}}R_{j\bar{s}}+T_{\bar{k}\bar{q}}^{\ \ \bar{s}}\nabla^{\bar{q}}R_{j\bar{s}}\\ &-g^{l\bar{q}}\nabla_{\bar{k}}\nabla_j(\hat{R}_{l\bar{q}}+F_{l\bar{q}}(\varphi,z)). \end{align*} \endgroup From (3.19) we get $$|\nabla T|\leq k(t)(1+S^{1/2}), \ \ |\overline{\nabla} T|\leq k(t)(1+S^{1/2}),$$ where $S$ is bounded by some $k(t)$ by Proposition 3.3. Note that $(3.18)$ gives \begin{align} R_{i\bar{j}}=\hat{R}_{i\bar{j}}+\nabla_{\bar{j}}\Phi_{ik}^{\ \ k},\ \ \ |\overline{\nabla}\Phi|\leq |\Rm|+ k(t). \end{align} Using this and a similar calculation as in (3.19), we get $$|g^{l\bar{q}}\nabla_{\bar{k}}\nabla_j\hat{R}_{l\bar{q}}|\leq k(t)(|\Rm|+1).$$ Also we have $$|g^{l\bar{q}}\nabla_{\bar{k}}\nabla_jF_{l\bar{q}}(\varphi,z)|\leq k(t)(|\Rm|+1).$$ Therefore \begingroup \addtolength{\jot}{.4em} \begin{align*} |(\frac{\partial}{\partial t}-\Delta ) R_{j\bar{k}}|&\leq k(t)(|\nabla \Ric|+|\Rm|^2+|\Rm|+1)\\ &\leq k(t)(|\nabla \Ric|+|\Rm|^2+1).
\end{align*} \endgroup As $|\frac{\partial}{\partial t}g_{i\bar{j}}|=|-R_{i\bar{j}}+\hat{R}_{i\bar{j}}+F_{i\bar{j}}(\varphi,z)|\leq |\Ric|+ k(t)$, direct computation gives \begin{align*} (\frac{\partial}{\partial t}-\Delta ) |\Ric|^2&\leq k(t)(|\Ric|^3 +|\Ric|^2)+ 2|(\frac{\partial}{\partial t}-\Delta )\Ric||\Ric|-2|\nabla \Ric|^2. \end{align*} We then obtain the following \begin{align*} (\frac{\partial}{\partial t}-\Delta ) |\Ric|&=\frac{1}{2|\Ric|}\left((\frac{\partial}{\partial t}-\Delta ) |\Ric|^2+2|\nabla |\Ric||^2\right)\\ &\leq k_1(t)(|\nabla \Ric|+|\Rm|^2+1)-\frac{|\nabla \Ric|^2}{|\Ric|}+\frac{|\nabla |\Ric||^2}{|\Ric|}. \end{align*} Let us consider $$H=\frac{|\Ric|}{C_1(t)}+\frac{S}{C_2(t)},$$ as in [18], where $C_1(t), C_2(t)$ are functions of the form $Le^{\lambda Ce^{\alpha/t}}$ as in the proof of Proposition 3.3 such that $-\frac{C'_i(t)}{C^2_i(t)}\leq \frac{1}{\sqrt{C_i(t)}}, i=1,2 $. Assume $H$ achieves its maximum at a point $(t_0, z_0)$, $t_0>0$, and assume $|\Ric|\geq 1$ at $(t_0, z_0)$. From (3.20) and Propositions 3.2, 3.3 we have $$(\frac{\partial}{\partial t}-\Delta )S\leq -\frac{1}{2}Q+k_2(t),$$ where $Q=|\nabla \Phi|^2+|\overline{\nabla} \Phi|^2$. Take $C_1(t)>k_1(t), C_2(t)\geq \max\{S, S^2, k_2(t)\}$.
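The passage from $|\Ric|^2$ to $|\Ric|$ above rests on the elementary identity, valid wherever $|\Ric|\neq 0$: writing $f=|\Ric|$ and applying $(\frac{\partial}{\partial t}-\Delta)$ to $f^2$ gives
\begin{align*}
(\frac{\partial}{\partial t}-\Delta)f^2=2f(\frac{\partial}{\partial t}-\Delta)f-2|\nabla f|^2,
\ \ \text{so that}\ \
(\frac{\partial}{\partial t}-\Delta)f=\frac{1}{2f}\left((\frac{\partial}{\partial t}-\Delta)f^2+2|\nabla f|^2\right).
\end{align*}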
Direct computation gives \begingroup \addtolength{\jot}{.4em} \begin{align} (\frac{\partial}{\partial t}-\Delta )H\leq\ &\frac{k_1(t)(|\nabla \Ric|+|\Rm|^2)}{C_1(t)}-\frac{|\nabla \Ric|^2}{C_1(t)|\Ric|}+\frac{|\nabla |\Ric||^2}{C_1(t)|\Ric|}\nonumber\\ &+\frac{|\Ric|}{\sqrt{C_1(t)}}+\left(-\frac{Q}{2C_2(t)}+\frac{k_2(t)}{C_2(t)}\right)+\frac{S}{\sqrt{C_2(t)}}\nonumber\\ \leq\ &\frac{k_3(t)|\Rm|^2}{C_1(t)}-\frac{|\nabla \Ric|^2}{2C_1(t)|\Ric|}+\frac{|\nabla |\Ric||^2}{C_1(t)|\Ric|}-\frac{Q}{2C_2(t)}+C, \end{align} \endgroup where in the last inequality we use \begingroup \addtolength{\jot}{.4em} \begin{align*} \frac{k_1(t)|\nabla \Ric|}{C_1(t)}\leq \frac{|\nabla \Ric|^2}{2C_1(t)|\Ric|}+\frac{k_1^2(t)|\Ric|}{2C_1(t)}. \end{align*} \endgroup Using $\nabla H=0$ at $(t_0, z_0)$ and $|\nabla |\Ric||\leq |\nabla\Ric|$, we get \begingroup \addtolength{\jot}{.5em} \begin{align*} \frac{|\nabla |\Ric||^2}{C_1(t)|\Ric|}&=\frac{|\nabla S\cdot \overline{\nabla} |\Ric||}{C_2(t)|\Ric|}\\ &\leq \frac{|\nabla \Ric|^2}{2C_1(t)|\Ric|}+\frac{C_1(t)|\nabla S|^2}{2C_2^2(t)|\Ric|}\\ &\leq \frac{|\nabla \Ric|^2}{2C_1(t)|\Ric|}+\frac{C_1(t)k_4(t)Q}{C_2^2(t)|\Ric|}, \end{align*} \endgroup where in the last inequality we use $ |\nabla S|^2\leq 2S(|\overline{\nabla}\Phi|^2+|\nabla\Phi|^2).$ From (3.23), $$|\Rm|^2\leq \frac{3}{2}Q+k_5(t),\ \ \ |\Ric|\leq \sqrt{Q}+k_6(t).$$ Choose $C_2(t)\geq 8k_4(t)$. Fix $C_2(t)$ and choose $C_1(t)\geq \max\{k_3(t)k_5(t), k_6(t)\}$ large enough such that $\frac{3k_3(t)}{2C_1(t)}\leq \frac{1}{4C_2(t)}$ and then fix $C_1(t)$. Combining the above estimates, we obtain that at $(t_0, z_0)$, $$ 0\leq (\frac{\partial}{\partial t}-\Delta )H\leq -\frac{Q}{4C_2(t)}+\frac{C_1(t)Q}{8C_2(t)|\Ric|}+C'$$ for some constant $C'$. If $\frac{|\Ric|}{C_1(t)}\leq 1$, then $H\leq 2$ at $(t_0, z_0)$ and we obtain the estimate for $|\Ric|$. Otherwise, at $(t_0, z_0)$ $$0\leq -\frac{Q}{8C_2(t)}+C'.$$ Therefore $Q\leq 8C'C_2(t)\leq 8C'C_1(t)$ at $(t_0, z_0)$.
By our choice of $C_1(t), C_2(t)$, $H$ is bounded by some constant $C$ depending only on $\sup|\varphi_0|$ and $\sup|\dot{\varphi}_0|$, which gives the bound for $|\Ric|$. \end{proof} The estimates we have obtained imply that the parabolic $C^{\alpha, \alpha/2}$ norm of the coefficients in equation (3.23) can be bounded. The parabolic Schauder estimates then give a $C^{2+\alpha, 1+\alpha/2}$ bound for $\dot{\varphi}$ in $[\epsilon, T']\times M$ for any $\epsilon>0$, with the bounds only depending on $\epsilon,\ \sup|\varphi_0|$ and $\sup|\dot{\varphi}_0|$. Similarly we can obtain a $C^{2+\alpha, 1+\alpha/2}$ bound for $\varphi_k, \varphi_{\bar{k}}$ in $[\epsilon, T']\times M$. Differentiating the flow again and repeatedly using Schauder estimates, we obtain all higher order estimates for $\varphi$. Letting $\epsilon\rightarrow 0$, we obtain the bounds in Proposition 3.1, which blow up as $t\rightarrow 0$. In particular, there exists a smooth solution on $[0,T]$, where $T$ is the same as in Lemma 3.1 and depends only on $\sup|\varphi_0|$. \section{Proof of Theorem 1.1} Assume that $\hat{\omega}$ satisfies the condition (1.3); then it follows from [13] that Ko{\l}odziej's stability result (Corollary 4.4 in [11]) is also true. In particular, if \vspace{2mm} \begin{align*} (\hat{\omega}+\sqrt{-1}\partial\bar{\partial}\phi_1)^n=(\hat{\omega}+\sqrt{-1}\partial\bar{\partial}\phi_2)^n=f\hat{\omega}^n, \end{align*} with $f\geq 0 \in L^p(M,\hat{\omega}),\ p>1$ and $\int_Mf\hat{\omega}^n=\int_M\hat{\omega}^n$,\vspace{1.3mm} then $\phi_1-\phi_2=const$. Now suppose that $\phi\in PSH(M,\hat{\omega})\cap L^\infty(M)$ is a weak solution of the equation \vspace{1.3mm} \begin{align} (\hat{\omega}+\sqrt{-1}\partial\bar{\partial}\phi)^n=e^{-F(\phi, z)}\hat{\omega}^n. \end{align} Then $f(z)=e^{-F(\phi(z), z)}\in L^p(M,\hat{\omega}),\ p>1$, as $\phi$ is bounded for $t\leq T$. Also the condition (1.3) gives that $\int_Mf\hat{\omega}^n=\int_M\hat{\omega}^n$.
Therefore Theorem 5.2 in [3] indicates that $\phi$ is continuous. Approximating $\phi$ with a sequence of smooth functions $\phi_j$ such that \begin{align} \sup_M|\phi_j-\phi|\rightarrow 0\ \ \text{as}\ \ j \rightarrow \infty, \end{align} it follows from [21] that there exist smooth functions $\psi_j$ such that \begin{align} (\hat{\omega}+\sqrt{-1}\partial\bar{\partial}\psi_j)^n=c_je^{-F(\phi_j,z)}\hat{\omega}^n, \end{align} where $c_j>0$ are constants chosen to satisfy the integration equality of the above equation. We normalize $\psi_j$ as in [11] by \begin{align} \sup (\psi_j-\phi)=\sup(\phi-\psi_j); \end{align} the stability result from [13] then gives \begin{align} \lim_{j\rightarrow \infty} \|\psi_j-\phi\|_{L^{\infty}}=0. \end{align}\\ Consider the equations \begin{align}\frac{\partial \varphi_j}{\partial t}=\log\frac{(\hat{\omega} + \sqrt{-1}\partial \bar{\partial}\varphi_j)^n}{\hat{\omega}^n}+F(\varphi_j, z)-\log c_j.\end{align} Applying Proposition 3.1, there exists a sequence of smooth functions $\varphi_j$ with $\varphi_j(0)=\psi_j$ such that $\varphi_j$ solves the equations on $[0,T_j]$, where $T_j$ only depends on $\sup |\psi_j|$ and $\sup |\dot{\varphi}_j(0)|$. Using (4.3) and (4.6), \begin{align} \dot{\varphi}_j(0)=F(\psi_j,z)-F(\phi_j,z). \end{align} It follows from (4.2) and (4.5) that $\sup |\psi_j|$ and $\sup |\dot{\varphi}_j(0)|$ can be bounded by a constant only depending on $\sup|\phi|$. Therefore there exists a $T>0$ independent of $j$ such that the $\varphi_j$ solve the equation (4.6) on $[0,T]$. By Lemma 3.1 in [18], $\{\varphi_j\}$ is a Cauchy sequence in $C^0([0,T]\times M)$. Let $$\beta(t,z)=\lim_{j\rightarrow \infty} \varphi_j,$$ which is continuous on $[0,T]\times M$. For any $\epsilon >0$, from the proof of Proposition 3.1, we have bounds on all derivatives of $\varphi_j$ for $t\in [\epsilon, T]$.
Then $\beta\in C^{\infty}([\epsilon, T]\times M)$ and $$\lim_{j\rightarrow \infty} \|\beta-\varphi_j\|_{C^k([\epsilon, T]\times M)}=0.$$ Lemma 3.1 gives that $|\dot{\varphi}_j(t)|\leq \sup|\dot{\varphi}_j(0)|e^{Ct}$ for $t\in [0,T]$. From (4.7) we get $$\dot{\varphi}_j(0)\rightarrow 0\ \ \text{as} \ \ j\rightarrow \infty.$$ Therefore for any $t>0$, $$\dot{\beta}(t)=\lim_{j\rightarrow \infty} \dot{\varphi}_j(t)=0.$$ Since $\beta$ is continuous on $[0,T]$, it follows that $\beta(0)=\beta(t)$ for all $t\in (0,T]$; as $\beta(t)$ is smooth for $t>0$, so is $\beta(0)$. But $\beta(0)=\lim_{j\rightarrow \infty} \varphi_j(0)=\lim_{j\rightarrow \infty} \psi_j=\phi$, thus we get the smoothness of $\phi$. \begin{remk} Note that Proposition 3.1 holds on any compact Hermitian manifold. If Ko{\l}odziej's stability result can be extended to general Hermitian manifolds, then we can remove the assumption (1.3) in Theorem 1.1. \end{remk}
\section{Introduction} \label{sect.intro} In this paper we shall describe the asymptotic behaviour (near the extinction time) of a class of solutions $u(t,x)\ge 0$ of the fast diffusion equation (FDE) \begin{equation}\label{1.1} \partial_t u=\Delta\left(u^{m}/m\right)=\nabla \cdot(u^{m-1}\nabla u), \qquad m<1, \end{equation} posed\footnote{There is no restriction $m>0$. The last expression represents a parabolic equation whenever $u>0$ even if $m\le 0$. For $m=0$ the first expression must be replaced by $\Delta\log(u)$.} for $t>0$ in the whole space, $x\in \mathbb{R}^d$, in dimensions $d\ge 3$, and taking initial data \begin{equation} u(0,x)=u_0(x)>0, \end{equation} where $u_0$ belongs to a class to be made precise below; in particular $u_0$ is bounded and decays at infinity like $c\,|x|^{-2/(1-m)}$ with lower order terms. Actually, since $m<(d-2)/d$, it is well known that for initial data of the above form the weak solution exists and is unique for small times, and then extinguishes completely after a finite time $T=T(m,d,u_0)$, \cite{VazSmooth}. We are interested in the behaviour of such solutions near extinction, as $t\nearrow T$. A detailed analysis of this question has been performed in a recent paper \cite{BBDGV} for general $m<1$ (even when $m\le 0$), but rates of convergence could not be obtained for a special value of the diffusion exponent $m$, precisely\footnote{In space dimension $d=4$ we have $m_*=0$, logarithmic diffusion. For $d=3$ we deal with $m_*<0$, a very singular case that was only briefly exposed in \cite{BBDGV}.} for $m_*=(d-4)/(d-2)$. We refer to that paper for further references to the abundant literature on the topics of entropy methods, rescaling and rates of convergence for this type of nonlinear diffusion equations, cf. also \cite{MR1853037}, \cite{MR1986060}, \cite{Daskalopoulos-Sesum2006}, \cite{MR1940370}, \cite{DenzMcCann}, \cite{HP}, \cite{MR1974458}, \cite{Otto}, \cite{VazAs}.
The present paper is devoted to settling the asymptotic behaviour in the special case $m=m_*$. We shall see that it falls outside the scope of the asymptotic theory developed in the paper \cite{BBDGV} for the rest of the values $m<1$, both in the type of techniques and in the type of results. The clue to finding the stabilization rates of the rescaled orbits towards their equilibrium states in this special case relies on (i) realizing that a suitable linearization of the rescaled flow can be viewed as a plain heat flow on a suitable Riemannian manifold. This allows us to use the very detailed theory that has been developed for studying (the long-time behaviour of) such flows, see \cite{LY, D}; (ii) performing a study of nonlinear stability based on an interesting modification of the entropy methods of \cite{BBDGV}. \noindent The paper gives precise statements and proofs of these assertions. It is organized as follows: in the next section we shall review the needed facts about the asymptotic behaviour of our problem in the more general setting of variable $m\in \mathbb{R}$. We also introduce the family of entropies that allows us to prove the plain stabilization result of \cite{BBDGV}, as well as the linearization method that allows us to obtain rates of convergence when $m\ne m_*$, when used in combination with the limit of the previous entropies. The failure of this approach in the special case $m_*$ is identified in \cite{BBDGV} as the lack of a suitable {\sl spectral gap} \ in the operator analysis of the linearized problem. We then focus on $m=m_*$ and address such an essential difficulty. The convergence results are carefully stated in Section \ref{ssec.statement}. We start the new work in Section \ref{sect-lin} with a detailed analysis of the linearized equation, identified as a heat flow on a cigar-like Riemannian manifold. This is followed by the results on linearized stability.
Section \ref{sect.nlem} gathers all the results needed in the comparison of linear and nonlinear entropies. The proof of nonlinear stability is given in Section \ref{sect.nlem2}. In Section \ref{m.neq.mstar} we revise the convergence for the case $m\neq m_*$ and show that our method provides a shorter proof and also a small improvement with respect to \cite{BBDGV}. The main difference in the asymptotic results is that convergence to a selfsimilar profile takes place with a rate of approach that differs in a marked way from the power rate of all the cases $m\ne m_*$. The convergence is most clearly visualized below in the rescaled representation, a nonlinear Fokker--Planck equation, where it takes the form of stabilization towards equilibrium with a polynomial rate of approach in terms of the new time variable $s$. Specifically, the study is made in terms of the rescaled variable \begin{equation} v(s ,y)=(T-t)^{-d\beta}u(t,x), \quad y=ax (T-t)^{\beta}, \quad s = \gamma\log(T/(T-t)), \end{equation} where $T>0$ is the extinction time and the constants $\beta,\gamma$ and $a$ are precisely defined in Section \ref{sect.prel}\footnote{The exponent $\beta$ is essential, whereas the values of $a, \gamma>0$ are just convenient.}. This rescaled variable satisfies the nonlinear Fokker--Planck equation, see \eqref{eq.v1} or \eqref{eq.v}, which is better suited for the asymptotic analysis. The stationary profile for the latter version of the equation is given by the simple expression \begin{equation} V_D(y)=1/(D+|y|^2)^{(d-2)/2}, \qquad D>0, \end{equation} for a suitable constant $D$ determined by the initial data. This simple expression is handy since $V_D$ and powers of it will appear as weights in some functional inequalities that are essential in our study.
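Let us record the elementary computation behind the exponent in this profile at the critical value $m_*=(d-4)/(d-2)$:
\begin{equation*}
1-m_*=1-\frac{d-4}{d-2}=\frac{2}{d-2}, \qquad \frac{1}{1-m_*}=\frac{d-2}{2},
\end{equation*}
so that $V_D(y)\sim |y|^{-(d-2)}$ as $|y|\to\infty$, i.e., the profile decays exactly like the Newtonian potential.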
In terms of the new logarithmic time $s$ (that goes to infinity as $t\to T$), the long time behaviour of the rescaled flow takes the form of stabilization towards the profile $V_D$ with a power rate of convergence: \begin{equation} \|v(s,y)-V_D(y)\|_{L^\infty(\mathbb{R}^d)}= O(s^{-1/4}) \quad \mbox{as \ } s\to +\infty. \end{equation} This rate replaces the exponential decay formulas with respect to $s$ of the cases $m\ne m_*$, which have been obtained in \cite{BBDGV}. This polynomial rate in $s$ is slower than the exponential rate in $s$ that obtains in all other cases $m<1$, $m\ne m_*$. Summing up, we are in a case of what is called {\sl slow asymptotics}, or {\it critical slowing down}, in mechanical systems and statistical mechanics, and such cases as a rule need special analytical methods. The needed assumption on the initial data is that $v(0,y)$ be a small perturbation of $V_D(y)$ in a sense made precise by assumptions (H1') and (H2') below. Let us stress that some kind of similar assumption on the data is needed to obtain the asymptotic result. Actually, for data that decay at infinity with a slower rate than $O(|y|^{-2/(1-m)})$ (i.e., with a smaller power) solutions do not even extinguish in finite time. On the other hand, for data that decrease with a larger power, the behaviour near the extinction time follows a completely different pattern that is described in the monograph \cite{VazSmooth}. We complete this introduction with some comments on related topics. Let us first recall that there are two other known instances of interpretation of fast diffusions as geometrical flows. The first case is the evolution Yamabe flow, i.\,e., the fast diffusion with $m=(d-2)/(d+2)$, $d\ge 3$. It describes how a conformal Riemannian metric evolves by scalar curvature; in that case $u$ is interpreted as the conformal factor of the metric raised to the power $(d+2)/4$.
An asymptotic study of this problem is made by Del Pino and S\'aez in \cite{DPS} with exponential convergence to a separate variable solution, and the results are extended in \cite{VazSmooth}. In the second case we deal with Ricci flow in dimension $d=2$, as proposed by Hamilton \cite{Ham}, and then $m=0$ (logarithmic diffusion). The asymptotic behaviour in that case is rather complex, cf. \cite{DH} or the monograph \cite{VazSmooth}. Both models happen in a different context, since they consist of interpreting the variable $u$ in the FDE as the evolving conformal factor of a conformal representation, while here we consider a heat flow on a fixed manifold as the linearization limit of a nonlinear fast diffusion flow. They have in common the property of extinction in finite time. Finally, we mention that a number of formulas and ideas used in the theory of Ricci flows bear a close similarity with developments in linear and nonlinear diffusion theory. Thus, the use of entropies is prominent in Perelman's study of the Ricci flow, \cite{Perel}, where he introduces his functionals $\cal F$ and $\cal W$ which are extensions of the Einstein-Hilbert functional. He then writes the gradient flow for the functionals as a system of equations for the evolving metric $g_{ij}$ and a scalar function $f$, which satisfies a backward heat equation. Strong connections exist with studies of entropies for heat equations on a static manifold, see for example Ni \cite{Ni1} and also the general references \cite{ChowK, ChowLN, Muller}. In a recent paper \cite{LNVV} Lu, Ni, Villani and one of the authors investigate Harnack inequalities and entropies for porous medium and fast diffusion equations on static manifolds that are closely related to Yau, Hamilton and Perelman's work, and on the other hand are close to the subject of this paper. The whole topic calls for further understanding. 
\subsection*{List of notations} \par\noindent $D_0,D_1,D_*$: the constants involved in Assumptions (H1), (H1$^\prime$), (H2),(H2$^\prime$). See Section \ref{s2.2}. \par\noindent $f$, $\tilde f$: the functions involved in Assumptions (H2), (H2$^\prime$). See Section \ref{s2.2}. \par\noindent ${\mathcal F}(w)$: the relative entropy. See Formula \eqref{Entropy.Quotients}. \par\noindent$F$: the linearized relative entropy. See Proposition \ref{4.13} and Lemma \ref{Lem.Bounds.RE}. The argument of $F$ can be both $g$ and $w$ (see below for the meaning of the latter quantities). \par\noindent$g$: the (weighted) linearization of $w-1$. See Formulas \eqref{2.14} and \eqref{lin.g}. \par\noindent ${\bf g}_\alpha$: the metric describing the geometric interpretation of the linearized operator. See Formula \eqref{metric}. \par\noindent ${\mathcal I}(w)$: the relative Fisher information. See Formula \eqref{Fisher.Quotients}. \par\noindent $I_m$: the linearized Fisher information, see \eqref{form}. The index $m$ is dropped in Section 5 for brevity. \par\noindent$K(t,x,y)$: the heat kernel of the Laplace--Beltrami operator associated to ${\bf g}_\alpha$. See Section \ref{linear}. \par\noindent$L_m$: the linearized generator. See Formula \eqref{op}. \par\noindent$\mu_*$: the weighted measure $\,{\rm d}\mu_*=V_D^{2-m_*}\,{\rm d} x$. See just before Section \ref{funct}. The L$^p$ norms in Section 4 are taken w.r.t. $\mu_*$. \par\noindent $T$: the extinction time of the Barenblatt solutions and of the solutions considered. See Formula \eqref{2.3} and Assumption (H1). \par\noindent$u(x,t)$: the solution to the fast diffusion equation. See Formula \eqref{1.1}. \par\noindent$U_D(t,x)$: the Barenblatt solutions for $m>m_c$. See Formulas \eqref{2.1} and \eqref{fc}. \par\noindent$U_{D,T}(t,x)$: the pseudo--Barenblatt solutions for $m<m_c$. See Formula \eqref{2.3}. \par\noindent$v(y,s)$: the rescaled solution of the nonlinear Fokker--Planck equation. See Formula \eqref{eq.v}.
\par\noindent$V_D(y)$: the Barenblatt profiles in rescaled variables. See Formula \eqref{2.7}. $V_*(y):=V_{D_*}(y)$ is defined in Section \ref{entropies}. \par\noindent$w$: the ratio $v/V_{D_*}$. See Formula \eqref{eq.w} for the equation satisfied by $w$. \par\noindent$W_0,W_1$: lower and upper bounds for $w$. See Section \ref{entropies}. \medskip \noindent The notation $\|\cdot\|_p$ denotes in principle the standard norm in ${\rm L}^p({\mathbb R}^d)$, but starting at the end of Subsection \ref{sect-lin}.1 we will use weighted spaces and it will indicate ${\rm L}^p({\mathbb R}^d,\,{\rm d} \mu)$ with a weight $\mu$ related to the Barenblatt solutions. The context will always make it clear. \section{Preliminaries: rescaling, stabilization and entropy} \label{sect.prel} The fast diffusion equation with $0<m<1$ has attracted the attention of researchers in recent times, once the theory of the corresponding slow diffusion case $m>1$ came to be well known. In the latter case the long-time behaviour of all solutions with nonnegative and $L^1$ data $u_0$ is given by a one-parameter family of explicit self-similar solutions of the form \begin{equation}\label{2.1} U_D(t,x)=t^{-\alpha}B_D(xt^{-\beta}), \end{equation} with $\beta=1/(2+d(m-1))$, $\alpha=d\beta$ and profile $B_D=(D-k|\xi|^2)_+^{1/(m-1)}$ with a free constant $D>0$, a fixed constant $k=\beta(m-1)/2$, and putting $\xi=xt^{-\beta}$. These solutions, usually called Barenblatt solutions, replace the Gaussian profiles found in the long time behaviour of the classical heat equation, which is the case $m=1$. See the precise asymptotic result in \cite[Chapter 18]{BookPME}. 
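A quick way to recover the relation $\alpha=d\beta$ is conservation of mass: under the self-similar scaling,
\begin{equation*}
\int_{\mathbb{R}^d}U_D(t,x)\,{\rm d}x=t^{-\alpha}\int_{\mathbb{R}^d}B_D(xt^{-\beta})\,{\rm d}x=t^{-\alpha+d\beta}\int_{\mathbb{R}^d}B_D(\xi)\,{\rm d}\xi,
\end{equation*}
which is independent of $t$ precisely when $\alpha=d\beta$; the value of $\beta$ then follows by inserting the self-similar ansatz into the equation.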
When going over to the fast diffusion equation, the situation has been well understood in a first range of exponents $1>m>m_c=(d-2)/d$ (the `good' fast diffusion range); indeed, solutions of the above initial value problem exist and are unique, they are positive and smooth for every choice of the initial data in $\LL^1_{\rm loc}(\mathbb{R}^d)$, and even in more general cases, cf. \cite{ChassVaz}. In particular, Barenblatt solutions still exist; they have the same selfsimilar form though the profile looks a bit different, \begin{equation}\label{fc} B_D(\xi)=(D+k|\xi|^2)^{-1/(1-m)}, \end{equation} now with $k=\beta(1-m)/2$. This is a positive function everywhere in $\mathbb{R}^ d$ and decays at infinity like $O(|\xi|^{-2/(1-m)})$, so that $B_D\in L^1(\mathbb{R}^d)$ if $m>m_c$. The Barenblatt solutions still represent the asymptotic behaviour of all solutions with nonnegative and $L^1$ data $u_0$, with an even better convergence result in relative error, cf. \cite{BV, VazAs}. Factors like $B_D$ will appear in the sequel as weights in functional inequalities and measure spaces. We shall use below a proper scaling to get rid of the inessential constant $k$. However, such a simple theory breaks down for $m<m_c$, even if $m>0$ (which is possible if $d\ge 3$), due in particular to the phenomenon of extinction in finite time, cf. \cite{VazSmooth}. In particular, our model solutions cannot be continued in the same form because the similarity exponents $\alpha$ and $\beta$ go to infinity as $m$ goes down to $m_c$. But for $m<m_c$ a related family of extinction solutions is found of the backward self-similar form \begin{equation}\label{2.3} U_{D,T}(t,x)=(T-t)^{\alpha}B_D(x(T-t)^{\beta}), \end{equation} with $\beta=1/(d(1-m)-2)>0$ and $\alpha=d\beta>0$ (just the formulas used before with opposite sign). Here, $T$ and $D$ are arbitrary positive constants and $B_D$ is given just as in the case $m_c<m<1$.
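The integrability threshold $m_c$ can be checked directly from the decay of the profile: since $B_D(\xi)\sim (k|\xi|^2)^{-1/(1-m)}$ as $|\xi|\to\infty$,
\begin{equation*}
\int_{|\xi|\geq 1}|\xi|^{-2/(1-m)}\,{\rm d}\xi=c_d\int_1^{\infty}r^{\,d-1-\frac{2}{1-m}}\,{\rm d}r<\infty
\quad\Longleftrightarrow\quad \frac{2}{1-m}>d
\quad\Longleftrightarrow\quad m>\frac{d-2}{d}=m_c.
\end{equation*}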
It is to be noted that $B_D$ is no longer an integrable function in $\mathbb{R}^d$, so we are completely away from the functional setting we started from. These new solutions are sometimes called pseudo-Barenblatt solutions to distinguish them from the original Barenblatt family. \subsection{Rescaled flow equation} Actually, these solutions do not possess the strong attractivity properties of their relatives for $m>m_c$. In order to investigate their partial attractivity (more precisely, their rescaled stability), we have studied in the paper \cite{BBDGV} the extinction behaviour of solutions with initial data close to a pseudo-Barenblatt solution. In short, the situation is as follows: we can show that after a rescaling step we obtain the nonlinear Fokker--Planck equation \begin{equation}\label{eq.v1} \partial_s v= \frac{a^2}{\gamma}\nabla_y\cdot (v^{m-1}\nabla_y v)+ \frac{\beta}{\gamma}\nabla_y\cdot (y v) \end{equation} in terms of the rescaled variable $v(s ,y)$ defined as \begin{equation} v(s ,y)=(T-t)^{-d\beta}u(t,x), \quad y=ax (T-t)^{\beta}, \quad s = \gamma\log(T/(T-t)). \end{equation} Here $T=T(u_0)$ is the extinction time of the solution, $\beta=(d(1-m)-2)^{-1}$, and we will choose the free constants $a,\gamma>0$ to be $a^2=\gamma=(1-m)\beta/2$. Note that $s (0)=0$ and $s (t)\to\infty$ as $t\to T$. This means that whenever we use as $T$ the actual extinction time of the solution $u$, then $v$ is globally defined, for $y\in \mathbb{R}^d$ and $0\le s <\infty$. With such choices equation \eqref{eq.v1} takes the convenient form \begin{equation}\label{eq.v} \partial_s v =\nabla_y\cdot (v^{m-1}\nabla_y v)+ \frac{2}{1-m}\nabla_y\cdot (y v) =\nabla_y\cdot\left[v\nabla_y\left(\frac{v^{m-1}-V_D^{m-1}}{m-1}\right)\right]. \end{equation} This is a convenient choice since the stationary states are now given by \begin{equation}\label{2.7} V_D(y)= (D+|y|^2)^{-1/(1-m)}, \quad D>0, \end{equation} which is just the profile $B_D$ of \eqref{fc} without the undesired constant $k$.
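As a quick consistency check, one may verify directly that $V_D$ is stationary for \eqref{eq.v}: since $V_D^{m-1}=D+|y|^2$,
\begin{equation*}
V_D^{m-1}\nabla_y V_D=(D+|y|^2)\,\nabla_y (D+|y|^2)^{-\frac{1}{1-m}}=-\frac{2}{1-m}\,y\,V_D,
\end{equation*}
so that the two divergence terms in \eqref{eq.v} cancel exactly when $v=V_D$.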
We end this paragraph by noting that for $m=m_*$ the exponent in the above stationary profile is $-1/(1-m)= -(d-2)/2$, so that $V_D(y)$ decays at infinity like $O(|y|^{-(d-2)})$, i.e., like the stationary fundamental solution or harmonic potential. This is one of the reasons that makes $m_*$ special. \subsection{Stabilization Result}\label{s2.2} In the paper \cite{BBDGV} we have shown stabilization of solutions of equation \eqref{eq.v} towards one of the stationary profiles $V_D$ for initial data that are not very far from $V_D$ to start with. We can write the assumptions on the {\sl initial conditions\/} in terms of either $u_0$ or $v_0$. The assumptions on $u_0$ are \noindent (H1) $u_0$ is a non-negative function in $\LL^1_{\rm loc}(\mathbb{R}^d)$ and there exist positive constants $T$ and $D_0>D_1$ such that \begin{equation*} U_{D_0,T}(0,x)\le u_0(x)\le U_{D_1,T}(0,x)\quad\forall\;x \in\mathbb{R}^d. \end{equation*} \noindent (H2) There exist $D_*\in [D_1,D_0]$ and $f(|\cdot|)\in\LL^1(\mathbb{R}^d)$ such that \begin{equation*} \big|u_0(x)-U_{D_*,T}(0,x)\big|\le f(|x|)\quad\forall\;x \in\mathbb{R}^d. \end{equation*} In the case $m<m_c$ under consideration here, (H1) implies in particular that extinction occurs at time $T$. Moreover, when $m>m_*$, (H2) follows from (H1) since the difference of two pseudo-Barenblatt solutions is always integrable. For $m\leq m_*$ this is no longer true, and (H2) is an additional restriction. In terms of $v_0$, conditions (H1) and (H2) can be rewritten as follows. \noindent (H1') $v_0$ is a non-negative function in $\LL^1_{\rm loc}(\mathbb{R}^d)$ and there exist positive constants $D_0> D_1$ such that \begin{equation*} \label{eq:assumptionv} V_{D_0}(y)\le v_0(y)\le V_{D_1}(y)\quad\forall\;y \in\mathbb{R}^d.
\end{equation*} (H2') There exist $D_*\in [D_1,D_0]$ and $\tilde f(|\cdot|)\in\LL^1(\mathbb{R}^d)$ such that \begin{equation*} \label{eq:assumptionmsmallermstarv} \big|v_0(y)-V_{D_*}(y)\big|\le \tilde f(|y|)\quad\forall\;y \in\mathbb{R}^d. \end{equation*} We point out that condition (H1') means a decay for large $|y|$ of the form $$v_0(y)=|y|^{-2/(1-m)}(1-c(y)|y|^{-2}) $$ with $c(y)$ bounded above and below away from zero. Moreover, (H2') imposes a stronger decay condition for $m\le m_*$. Notice that we can take $\tilde f(|y|)=T^{-d\beta}f(|y|/aT^\beta)$, so that $f$ and $\tilde f$ can be identified up to an elementary scaling. As a starting point for our asymptotic study, we state the result of \cite{BBDGV} about the convergence of $v(s)$ towards a unique Barenblatt profile. \begin{thm}[Convergence to the asymptotic profile]\label{Thm:A1} Let $d\ge3$, $m<1$. Consider the solution $v$ of~\eqref{eq.v} with initial data satisfying {\rm (H1')-(H2')}. \begin{enumerate} \item[{\rm (i)}] For any $m>m_*$, there exists a unique $D_*\in[D_1,D_0]$ such that $\int_{\mathbb{R}^d}(v(s)-V_{D_*})\,{\rm d}y=0$ for any $s>0$. Moreover, for any $p\in (q(m),\infty]$, $\lim_{s\to\infty}\int_{\mathbb{R}^d}|v(s)-V_{D_*}|^p\,{\rm d}y=0$. \item[{\rm (ii)}] For $m\le m_*$, $v(s)-V_{D_*}$ is integrable, $\int_{\mathbb{R}^d}(v(s)-V_{D_*})\,{\rm d}y=\int_{\mathbb{R}^d}(v(0)-V_{D_*})\,{\rm d}y$ and $v(s)$ converges to $V_{D_*}$ in $\LL^p(\mathbb{R}^d)$ as $s\to\infty$, for any $p\in (1,\infty]$. \item[{\rm (iii)}] {\rm (Convergence in Relative Error)} For any $p\in (d/2,\infty]$, \begin{equation}\label{CRE} \lim_{s\to\infty}\left\|{v(s)}/{V_{D_*}}-1\right\|_{p}=\;0\;. \end{equation} \end{enumerate} \end{thm} For simplicity, we write $v(s)$ instead of $y \mapsto v(s,y)$ whenever we want to emphasize the dependence on the time $s$.
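Here $q(m)=d(1-m)/(2(2-m))$; a quick numeric check (an illustrative sketch, not part of the proofs) confirms that $q(m_*)=1$ and that $m_*$ is exactly the threshold where $q$ crosses the value $1$:

```python
# Illustrative check: with q(m) = d(1-m)/(2(2-m)) one has q(m_*) = 1,
# q(m) > 1 for 0 < m < m_* and q(m) < 1 for m_* < m < 1.
def q(m, d):
    return d * (1 - m) / (2 * (2 - m))

for d in (5, 7, 10):                       # d >= 5 so that m_* = (d-4)/(d-2) > 0
    m_star = (d - 4) / (d - 2)
    assert abs(q(m_star, d) - 1) < 1e-12
    assert q(0.5 * m_star, d) > 1          # below m_*
    assert q(0.5 * (m_star + 1), d) < 1    # between m_* and 1
```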
The exponent $q(m)$ is defined as the infimum of all positive real numbers $p$ for which two Barenblatt profiles $V_{D_1}$ and $V_{D_2}$ are such that $|V_{D_1}-V_{D_2}|$ belongs~to~$\LL^p(\mathbb{R}^d)$: \begin{equation*}\label{q_0} q(m):=\dfrac{d(1-m)}{2(2-m)}\;. \end{equation*} We see that $q(m)>1$ if $m\in(0,m_*)$, $q(m_*)=1$, and $q(m)<1$ if $m>m_*$. In the case $m>m_*$, the value of $D_*$ can be computed at $s=0$ as a consequence of the mass balance law $\int_{\mathbb{R}^d}(v_0-V_{D_*})\,{\rm d}y=0$, and then the conservation result holds for all $s>0$, as is proved in \cite{BBDGV}. On the other hand, in the case $m\le m_*$ the mass balance does not make sense, but $D_*$ is determined by Assumption (H2'). In this case, the presence of a perturbation of $V_{D_*}$ with nonzero mass does not affect the asymptotic behavior of the solution to first order. \subsection{Relative error, entropies, and linearization} \label{entropies} The deeper stabilization analysis of equation \eqref{eq.v} leads to an interesting connection with a family of Poincar{\'e}-Hardy functional inequalities. In this way, we obtain stabilization rates that are exponential in the new time $s$, which means that they are power-like in the original time variable $T-t$. The exponent $m_*$ appears precisely as the only exponent for which the linearized analysis based on Poincar{\'e}-Hardy inequalities fails and the corresponding rates are not obtained by that method. We shall prove below that when $m$ takes the special value $m_*$ the linearized analysis leads to a different functional framework and that the actual rates are different, and in fact slower. In any case, the approach and the use of entropies starts in the same way. Let $v$ be a solution to the rescaled Fokker-Planck equation \eqref{eq.v}, and let $V_*=V_{D_*}$ be the Barenblatt solution mentioned in Theorem \ref{Thm:A1}. We pass to the quotient $w(s,y)=v(s,y)/V_*(y)$.
Notice that $w-1=(v-V_*)/V_*$ is the relative error of $v$ with respect to $V_*$. Notice also that, by straightforward calculations, our running assumptions imply that $W_0\le w\le W_1$, where $W_0=\left(D_*/D_0\right)^{1/(1-m)}<1$ and $W_1=\left(D_*/D_1\right)^{1/(1-m)}>1$. The equation for $w$ reads \begin{equation}\label{eq.w} \partial_s w=\frac{1}{V_*}\nabla\cdot\left[wV_*\nabla\left(\frac{w^{m-1}-1}{m-1}V_*^{m-1}\right)\right]. \end{equation} In terms of $w$, we define the {\sl relative entropy\/} \begin{equation}\label{Entropy.Quotients} {\mathcal F}[w]:= \frac1{1-m}\int_{\mathbb{R}^d}\left[(w-1)-\frac 1m(w^m-1)\right]V_{*}^m\,{\rm d}y\,. \end{equation} Strictly speaking, we are assuming that a time $s\ge 0$ is given and then we get ${\mathcal F}[w(s)]$. In terms of $v$, when $m$ is sufficiently close to one, it can be derived as $E[v]-E[V_*]$ where \begin{equation}E[v]:=\frac1{1-m}\int_{\mathbb{R}^d}\left[ vV_*^{m-1}-\frac1m v^m\right]\,{\rm d}y \end{equation} (for $m$ farther away from 1 both $E[v]$ and $E[V_*]$ become infinite and only the expression for the difference, the relative entropy, makes sense). We also introduce the {\sl relative Fisher information\/} \begin{equation}\label{Fisher.Quotients} \mathcal{I}[w] =\int_{\mathbb{R}^d}\left|\nabla\left(\frac{w^{m-1}-1}{m-1}V_*^{m-1}\right)\right|^2 V_* w \,{\rm d}y =\int_{\mathbb{R}^d}\left|\nabla\left(\frac{v^{m-1}-V_*^{m-1}}{m-1}\right)\right|^2 v \,{\rm d}y \end{equation} (again, we should have written $\mathcal{I}[w(s)]$). By differentiation in time and using the equation, we get \begin{equation}\label{entropydiff} \frac{{\rm d}{\mathcal F}[w(s)]}{{\rm d}s}=-{\cal I}[w(s)]\,, \qquad\forall s>0\,. \end{equation} For a detailed proof of this time differentiation, we refer to Proposition 2.6 of \cite{BBDGV}. \medskip We now introduce the linearization idea of \cite{BBDGV} that allows us to treat the long-time behaviour of $w$.
It consists in writing the relative error in the form \begin{equation}\label{2.14} w(s,y)-1=\varepsilon g(s,y)V_*^{1-m}(y) \end{equation} where the choice of weight $V_*^{1-m}$ is crucial. After a brief formal computation we obtain the differential equation for $g$ that is implied by \eqref{eq.w} in the limit $\varepsilon\to 0$: \begin{equation} \label{lin.g} \partial_s g =V_*^{m-2}\nabla\cdot\left(V_*\nabla g\right). \end{equation} Actually, since formula \eqref{CRE} of Theorem \ref{Thm:A1} implies that $w\to 1$ as $s\to\infty$, the factor $\varepsilon$ will not be needed in the actual linearization step. Our next task is to study this linear flow; then, we shall have to relate the actual nonlinear flow to its linearized approximation. Let us point out that we will not need to prove the convergence of solutions of the original problem to solutions of the linear problem; the analysis is rather based on the relationship between the two linear quantities, entropy and Fisher information, associated to equation \eqref{lin.g}, and on the close similarity of these linear quantities to the previously defined nonlinear ones. These facts plus \eqref{entropydiff} produce the desired convergence result. In the cases $m<1$, $m\ne m_*$, a suitable functional setting was found in which the functional inequalities of Hardy-Poincar\'e type corresponding to the linear flow implied the existence of a spectral gap. According to more or less standard theory, the existence of such a gap implies exponential decay rates (in $s$) of the norms and entropy of the solutions of the linear flow. A delicate comparison of entropy and Fisher information between the linear and nonlinear flows finally allowed us to transfer the decay rate to the original nonlinear flow. See full details in \cite{BBDGV}. The problem arising when $m=m_*$ is the absence of a spectral gap. We shall prove below that this is essential: in fact, the actual rates are not exponential but power-like in $s$.
This is related to the heat kernel behaviour of the operator appearing in \eqref{lin.g}, $V_*^{m-2}\nabla\cdot\left(V_*\nabla g\right)$, acting in a suitable weighted Hilbert space. Details will be given in Section \ref{sect-lin}, where a sharp power-like decay for the heat kernel is proved using a most fortunate coincidence, i.\,e., the representation of the linear semigroup as the heat flow on a conformally flat Riemannian manifold. It will also be proved that no Hardy--type inequality can hold for the quadratic form associated to the generator, so that it is hopeless to use the same line of reasoning as in \cite{BBDGV}. \section{Statement of the main results for $m=m_*$} \label{ssec.statement} We are now ready to state our main results. We use the notations $v(s), w(s)$ instead of $v(s,y), w(s,y)$ and $u(t)$ instead of $u(t,x)$ when the dependence on time is stressed. We prove convergence of $v(s)$ to the appropriate Barenblatt profile in several senses. More precisely, we prove quantitative bounds on the convergence in suitable L$^p$ norms, on the convergence of moments, and on the uniform convergence of all derivatives. Convergence takes place {\it with the same rate as in the linearized case}. \begin{thm}[Convergence with rate to the asymptotic profile] \label{thm.conv.entropy} Consider a solution $v$ of the equation \eqref{eq.v} such that $v_0$ satisfies {\rm (H1')-(H2')} and fix some $s_0>0$. Then, the entropy of the quotient variable satisfies \begin{equation} \mathcal{F}[w(s)]\leq K s^{-1/2}\quad\forall\;s\geq s_0 \end{equation} for some $K=K(v_0,s_0)$. As a consequence, for any $\vartheta\in\left[0,\frac d2\right]$, there exists a positive constant $K_\vartheta$ such that \begin{equation} \left\||y|^\vartheta(v(s)-V_{D_*})\right\|_{2}\leq K_\vartheta s^{-1/4}\quad\forall\;s\geq s_0\;. \end{equation} \end{thm} The analysis of the linearized equation indicates that this rate should be optimal.
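The passage from the entropy bound to the weighted $\LL^2$ estimate can be understood heuristically (this is only a formal sketch; the rigorous proof relies on the uniform bounds $W_0\le w\le W_1$ of Section \ref{entropies}): a second-order Taylor expansion of the integrand of \eqref{Entropy.Quotients} around $w=1$ gives
\[
(w-1)-\tfrac 1m\,(w^m-1)= \tfrac{1-m}2\,(w-1)^2+o\big((w-1)^2\big) \quad\mbox{as }w\to1,
\]
so that
\[
\mathcal F[w(s)]\simeq \tfrac12\int_{\mathbb{R}^d}(w(s)-1)^2\,V_{D_*}^{m}\,{\rm d}y=\tfrac12\int_{\mathbb{R}^d}|v(s)-V_{D_*}|^2\,V_{D_*}^{m-2}\,{\rm d}y\,.
\]
For $m=m_*$ one has $V_{D_*}^{m_*-2}=(D_*+|y|^2)^{d/2}\ge c\,|y|^{2\vartheta}$ for every $\vartheta\in[0,d/2]$, which explains why the entropy bound $\mathcal F[w(s)]\le Ks^{-1/2}$ controls $\||y|^\vartheta(v(s)-V_{D_*})\|_2^2$ and yields the stated $s^{-1/4}$ rate.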
We also have convergence without weights in suitable $\LL^p$ and $C^j$ spaces with the same rates, where we use interior regularity theory for parabolic equations: \begin{cor} \label{Conv.Weight}\ {\rm (i)} For any $q\in(1,\infty]$, there exists a positive constant $K(q)$ such that \begin{equation} \|v(s)-V_{D_*}\|_q\le K(q)s^{-1/4}\quad\forall\;s\geq s_0\;. \end{equation} \noindent {\rm (ii)} For any $j\in\mathbb{N}$ there exists a positive constant $H_j$ such that \begin{equation} \|v(s)-V_{D_*}\|_{C^j(\mathbb{R}^d)}\le H_js^{-1/4}\quad\forall\;s\geq s_0\;. \end{equation} \end{cor} These power-decay results are in contrast with the exponential rates obtained in \cite{BBDGV} for $-\infty < m <1$ and $m \ne m_*$. Rescaling back to the original space--time variables, one gets the following result, which can be called {\sl intermediate asymptotics}. \begin{cor}\label{Cor:A2} Consider a solution $u$ of~\eqref{1.1} with $m=m_*$, with initial data satisfying {\rm (H1)-(H2)}, and extinction time $T$. For $t$ sufficiently close to $T$ and for any $q \in (1,\infty]$, there exists a positive constant $C$ such that \begin{equation*} \|u(t)-U_{D_*}(t)\|_q\le C\,(T-t)^{\sigma(q)}\,\left[\log\left(T/(T-t)\right)\right]^{-1/4}, \end{equation*} with $\sigma(\infty)=d(d-2)/4$ and $\sigma(q)=\sigma(\infty)(q-1)/q$ for $q<\infty$. \end{cor} \noindent We also obtain a quantitative bound on the decay of the {\it relative error of $v(s)$ with respect to $V_{D_*}$}. \begin{cor}[Decay of Relative Error]\label{thm:CRE-exp} Consider a solution $v$ of \eqref{eq.v} such that $v_0$ satisfies {\rm (H1')-(H2')} and fix some $s_0>0$. Then for any $q\in(d/2,\infty]$ and all $\varepsilon>0$ there exists a positive constant $\mathcal{C}_q$ such that \begin{equation} \big\|{v(s)}/{V_{D_*}}-1\big\|_q\le \mathcal{C}_qs^{-\frac {1-\varepsilon}{d}}\quad\forall\;s\geq s_0\;.
\end{equation} If $q=d/2$ there is a positive constant $\mathcal{C}$ such that \begin{equation} \big\|{v(s)}/{V_{D_*}}-1\big\|_{d/2}\le \mathcal{C}s^{-\frac 1{d}}\quad\forall\;s\geq s_0\;. \end{equation} Finally, for all $j\in{\mathbb N}$ and all $\varepsilon>0$ there exists a positive constant $\mathcal{C}_j$ such that \begin{equation} \big\|{v(s)}/{V_{D_*}}-1\big\|_{C^j(\mathbb{R}^d)}\le \mathcal{C}_js^{-\frac {1-\varepsilon}{d}}\quad\forall\;s\geq s_0\;. \end{equation} \end{cor} Notice that, besides providing quantitative bounds, these results improve on Theorem \ref{Thm:A1} in two respects: first, the value $q=d/2$ is now allowed; second, convergence of $C^j$ norms is also dealt with. The constants involved depend not only on $m,d,D_0, D_*, D_1$, but also on the solution at time $s_0$ through the relative mass (conserved along the evolution) and through the uniform bound $c_0$ on the ratio $\int_{{\mathbb R}^d} |\nabla v(y)|^2V_D(y)\,{\rm d}y/\|v\|_1^2\le c_0$\,. \section{Analysis of the linear case} \label{sect-lin} We now address a central topic of the paper, i.e., establishing the long-time behaviour of the linearized flow in the still open case with exponent $m_*=(d-4)/(d-2)$. The key to our study of the linearized flow in this case is to interpret it as the heat flow of the Laplace-Beltrami operator of a suitable Riemannian manifold $(M,{\bf g})$, with a metric ${\bf g}$ which is conformal to the standard $\mathbb{R}^d$ metric. Studying the pointwise heat kernel behaviour allows us to prove Nash and log-Sobolev inequalities associated to the generator. Such inequalities will later on allow us to study the nonlinear evolution as well, and to determine its asymptotics, which will be shown to proceed with the same rate of convergence as the linearized one. Since the study can have independent interest, we replace $g$ by $v$, $y$ by $x$, and $s$ by $t$ throughout the section to conform to more standard notations.
\subsection{Linear equation and geometry}\label{linear} Given $m<1$ and $D>0$, we consider the operator given on $C_c^\infty({\mathbb R}^d)$ ($d\ge3$) by \begin{equation}\label{op} L_mv= (D+|x|^2)^{(2-m)/(1-m)}\nabla\cdot\left(\frac{\nabla v}{(D+|x|^2)^{1/(1-m)}}\right)= V_D^{m-2}\nabla\cdot\left(V_D\nabla v\right). \end{equation} We recall that for $m=m_*$ the following holds: $1/(1-m)=(d-2)/2$, and $V_D^{m-2}(x)=(D+|x|^2)^{d/2}$. We have dropped the index $*$ from $D_*$ to simplify the notation, since the particular value of $D$ has no role here. We shall think of this operator as acting on the Hilbert space $H_m={\rm L}^2({\mathbb R}^d, \,{\rm d}\mu)$ with $d\mu=V_D^{2-m}\,{\rm d} x$. To define it more precisely we construct the quadratic form \begin{equation}\label{form} I_m[v]=\int_{{\mathbb R}^d} \frac{|\nabla v(x)|^2}{(D+|x|^2)^{1/(1-m)}}{\rm d}x=\int_{{\mathbb R}^d} |\nabla v(x)|^2V_D(x){\rm d}x,\ \ \ v\in C^\infty_c({\mathbb R}^d). \end{equation} Then, ${I}_m$ is closable in $H_m$ (for a quite general result implying the validity of the above assertion see e.g. \cite{D}, Section~4.7). We denote again by $-L_m$ the unique nonnegative self--adjoint operator in $H_m$ associated with its closure. In fact $L_m$ has the above explicit expression \eqref{op} on smooth compactly supported functions. There is a particular value of $m$ for which the above operator can be seen as the Laplace-Beltrami operator of a certain Riemannian manifold $(M,{\bf g})$, as we shall show. This in particular will imply (since $M$ turns out to be complete) that $L_m$ is essentially self--adjoint on $C_c^{\infty}(M)$ by a result of Calabi (see e.g. \cite{D}, Theorem 5.2.3).
Consider indeed the following manifold, denoted by $M$, given by ${\mathbb R}^d$ endowed with the Riemannian, conformally flat metric defined, in Euclidean (global) coordinates, by \begin{equation}\label{metric} {\bf g}_\alpha(x)=(D+|x|^2)^{-\alpha}{\bf I}, \end{equation} where ${\bf I}$ is the Euclidean metric and $|\cdot|$ is the Euclidean norm. We denote by $\mu_{{\bf g}_\alpha}$ the Riemannian measure, by $|{\bf g}_\alpha|=\mbox{\rm det}({\bf g}_\alpha)$ the determinant of the metric tensor, by $\nabla_\alpha$ the Riemannian gradient and by $\Delta_\alpha$ the Laplace-Beltrami operator, defined on L$^2(\mu_{{\bf g}_\alpha})$, associated to the given metric. \begin{lem} The Laplace-Beltrami operator $\Delta_{\alpha}$ coincides with $L_m$ precisely when $\alpha=1$ and $m=m_*:=(d-4)/(d-2)$, both as concerns its explicit expression (in Euclidean coordinates) and as concerns the Hilbert space it acts on. \end{lem} \noindent {\sl Proof.~} We notice that for the above choice of metric we have \[ \sqrt{|{\bf g}_\alpha|(x)}=(D+|x|^2)^{-\alpha d/2},\ \ \ g_\alpha^{ij}(x)=(D+|x|^2)^{\alpha}\delta^{ij}. \] Then we have that the Dirichlet form associated to $\Delta_{\alpha}$ is given, on test functions, by \begin{equation} \begin{aligned}J_\alpha(v)&:=\int_M {\bf g}_\alpha(\nabla_\alpha v,\nabla_\alpha v)\,{\rm d}\mu_{{\bf g}_\alpha}=\int_{{\mathbb R}^d} \sqrt{|{\bf g}_\alpha|(x)}g^{ij}_\alpha(x)\frac{\partial v}{\partial x^i}\frac{\partial v}{\partial x^j}\,{\rm d}x\\ &=\int_{{\mathbb R}^d}(D+|x|^2)^{(-d\alpha/2)+\alpha}|\nabla_e v(x)|^2\,{\rm d}x, \end{aligned} \end{equation} where $\nabla_e$ is the Euclidean gradient and the summation convention is used.
Then we notice that the conditions that identify $\Delta_\alpha$ with $L_m$ are: \[\begin{aligned} &\sqrt{|{\bf g}_\alpha|(x)}=(D+|x|^2)^{-(2-m)/(1-m)},\\ &\sqrt{|{\bf g}_\alpha|(x)}g^{ij}_\alpha(x)=(D+|x|^2)^{-1/(1-m)}\delta^{ij}. \end{aligned} \] These force $\alpha, m$ to be related by $(d\alpha/2)-\alpha=1/(1-m)$ and $d\alpha/2=(2-m)/(1-m)$. This is equivalent to $\alpha=1$, $m=(d-4)/(d-2)=m_*$ as claimed.\qed \medskip We shall now compute, in the case discussed in the above Lemma, the Ricci curvature of $(M,{\bf g}_\alpha)$. Hereafter we shall drop the index $\alpha$, since we always choose $\alpha=1$. We put $D=1$ for simplicity, without loss of generality. \begin{lem}\label{lemma.Rij} The Ricci curvature of $(M,{\bf g}_{\alpha=1})$ is given, in Euclidean coordinates, by \begin{equation} {R}_{ij}=-\frac{(d-2)x_ix_j}{(1+|x|^2)^{2}}+ \left[\frac{(d-2)|x|^2+ 2(d-1) }{(1+|x|^2)^{2}}\right]\delta_{ij}, \end{equation} where we write {\rm Ric}\,$=(R_{ij})$. In particular {\rm Ric}\,$>0$ on $M$, this lower bound cannot be improved, and {\rm Ric} is bounded on $M$. Actually, $R_{ij}(x)=O(|x|^{-2})$ as $|x|\to\infty$ in the transversal directions and it behaves as $O(|x|^{-4})$ in the radial directions. Finally, the scalar curvature is given by \begin{equation} R=(d-1)\frac{2d+ (d-2)|x|^2}{1+|x|^2}. \end{equation} \end{lem} We postpone the proof of these formulas to Appendix A1 so as not to break the flow of the exposition. It immediately follows that the symmetric tensor Ric is positive; indeed, given $\xi\in {\mathbb R}^d$, we have \[ R_{ij}(x)\xi_i\xi_j \ge\frac{2(d-1)}{(1+|x|^2)^2}|\xi|^2>0. \] The boundedness of Ric is clear from its explicit expression. Note that for $d=2$ we are dealing with an Einstein metric, $\textrm{Ric}=k\, {\bf g}$ (actually, it is Hamilton's cigar soliton of the Ricci flow, \cite{Ham, ChowK}), but for $d\ge 3$ it is not. \medskip Let us continue with the asymptotic analysis of the flow.
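Before doing so, we note that the algebra above is easy to double-check numerically (an illustrative sketch with $D=1$; the sample dimensions and points are arbitrary): the identification system indeed forces $\alpha=1$, $m=m_*$, and the closed form of the scalar curvature must agree with the ${\bf g}$-trace $g^{ij}R_{ij}=(1+|x|^2)\sum_i R_{ii}$ of the Ricci tensor of Lemma \ref{lemma.Rij}:

```python
# Illustrative numeric check (D = 1): (i) alpha = 1, m = m_* solve
#   d*alpha/2 - alpha = 1/(1-m)  and  d*alpha/2 = (2-m)/(1-m);
# (ii) the closed form of the scalar curvature R agrees with the trace
#   g^{ij} R_{ij} = (1+|x|^2) * sum_i R_ii  of the Ricci tensor.
from math import isclose

for d in (3, 5, 8):
    m = (d - 4) / (d - 2)                      # m_*
    alpha = 1.0
    assert isclose(d * alpha / 2 - alpha, 1 / (1 - m))
    assert isclose(d * alpha / 2, (2 - m) / (1 - m))

    for r2 in (0.0, 1.0, 7.3):                 # sample values of |x|^2
        # sum_i R_ii = [-(d-2)|x|^2 + d((d-2)|x|^2 + 2(d-1))] / (1+|x|^2)^2
        trace = (-(d - 2) * r2 + d * ((d - 2) * r2 + 2 * (d - 1))) / (1 + r2) ** 2
        R_from_trace = (1 + r2) * trace        # raise indices with g^{ij} = (1+|x|^2) I
        R_closed = (d - 1) * (2 * d + (d - 2) * r2) / (1 + r2)
        assert isclose(R_from_trace, R_closed, rel_tol=1e-12)
```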
By a celebrated result of Li and Yau \cite{LY}, the heat kernel $K(t,x,y)$ of the Laplace--Beltrami operator of a complete Riemannian manifold $(M,{\bf g})$ with nonnegative Ricci curvature is pointwise comparable with the quantity \[ \frac1{{\rm Vol}[B(x,\sqrt t)]}e^{-c\frac{d^2(x,y)}{t}} \] where $d(\cdot\,,\cdot)$ is the Riemannian distance in $(M,{\bf g})$, $B(x,r)$ is the Riemannian ball centered at $x$ and of radius $r$ and Vol is the Riemannian volume. More precisely, \begin{cor} For all small positive $\varepsilon$ there exist positive constants $c_1,c_2$ such that \[ \frac{c_1(\varepsilon)}{{\rm Vol}[B(x,\sqrt t)]}e^{-\frac{d^2(x,y)}{(4-\varepsilon)t}}\le K(t,x,y)\le \frac{c_2(\varepsilon)}{{\rm Vol}[B(x,\sqrt t)]}e^{-\frac{d^2(x,y)}{(4+\varepsilon)t}} \] for all $x,y\in M$, $t>0$. \end{cor} We recall that the Li--Yau bounds require completeness, a property which clearly holds for the manifold we are considering. We use the notation $a\wedge b=\min\{a,b\}$. \begin{cor} The heat kernel satisfies the following properties: \begin{equation} \label{on-diag}\begin{split} &K(t,x,x)\appros_{t\to0}\left(1\wedge\frac1{|x|}\right)\frac1{t^{\frac d2}},\\ &K(t,x,x)\le \frac C{t^{\frac 12}}\ \ \forall t\ge1, \forall x\in{\mathbb R}^d, \end{split} \end{equation} where $f_1\appros\limits_{t\to t_0} f_2$ means that there exist two constants $c_1,c_2>0$ such that $c_1f_2\le f_1\le c_2f_2$ near $t_0$. \end{cor} \noindent {\sl Proof.~} First notice that \[d(0,x)=\int_0^{|x|}\frac1{\sqrt{1+t^2}}\,{\rm d} t\] where $|x|$ is the Euclidean length, so that $d(0,x)\sim\log |x|$ for large $|x|$. Hence, \[\begin{aligned} {\rm Vol}(B(0,R))&=\int_{B(0,R)}\sqrt{|{\bf g}|}\,{\rm d} x=\int_{d(0,x)<R}\frac1{(D+|x|^2)^{d/2}}\,{\rm d} x\\ &\asint_{R\to+\infty}c\int_{r<e^R}\frac{r^{d-1}}{(D+r^2)^{d/2}}\,{\rm d} r\asint_{R\to+\infty}cR.
\end{aligned} \] Proceeding similarly, one shows that $d(x_0,x)\appros \log\frac{|x|}{|x_0|}$ for large $|x|$ and hence that ${\rm Vol}(B(x_0,R))\sim c(R+\log|x_0|)$ for large $R$ and, say, $|x_0|\ge2$. The short time behaviour is clearly locally Euclidean, with a weight depending on $x$, given by $1/\sqrt{D+|x|^2}$. \qed \noindent{\bf Remark}. The above corollary extends, for the present choice of the parameter $m$, the result of \cite{D}, Th. 4.7.5, in several respects. In fact, in the quoted Theorem the bounds on the heat kernel are from above and for short time only. Notice that the short time bound obtained above matches the one of \cite{D}. One may notice that, in fact, we have proved the bound $$K(t,x,x)\appros \frac 1{t^{\frac 12}+\log(1+|x|)}\ \ \ \forall t\ge1, \forall x\in{\mathbb R}^d,$$ although we shall make no further use of it. \begin{cor} Each solution to the linear evolution equation $\partial_t v=L_{m_*}v$ corresponding to an initial datum in ${\rm L}^1({\mathbb R}^d,(D+|x|^2)^{(m-2)/(1-m)}\,{\rm d}x)$ satisfies the bound \begin{equation}\label{Ultra.1} \|v(t)\|_\infty\le H(t)\|v_0\|_1= \left\{\begin{array}{lll} c_1\dfrac{\|v_0\|_1}{t^{d/2}} &\mbox{for any }0<t\le 1\\[3mm] c_2\dfrac{\|v_0\|_1}{t^{1/2}} &\mbox{for any }t>1 \end{array}\right. \end{equation} where the $c_i$ are positive constants. The power of $t$ cannot be improved for such general initial data, as can be seen by considering the time evolution of a Dirac delta. \end{cor} \noindent {\bf Warning:} Here, the symbol $\|\cdot\|_p$ denotes the norm in ${\rm L}^p({\mathbb R}^d,\,{\rm d} \mu_*)$, where $\,{\rm d}\mu_*=V_{D}^{2-m_*}(x)\,{\rm d} x$, and we know that $V_{D}^{2-m_*}(x)=(D+|x|^2)^{-d/2}$. This notation will be kept in the next three sections. \subsection{Functional Inequalities}\label{funct} We recall that $I_{m_*}[v]=\int_{{\mathbb R}^d} |\nabla v(x)|^2 V_{D}\,{\rm d}x$ on smooth compactly supported functions.
The domain of its closure will be indicated by ${\rm Dom}\,(I_{m_*})$. \begin{cor} There is a family of logarithmic Sobolev inequalities \begin{equation} \int_{{\mathbb R}^d} v^2\log\left(\frac{v}{\|v\|_2}\right)\,{\rm d} \mu_* \le \varepsilon I_{m_*}[v]+\beta(\varepsilon)\|v\|_2^2 \end{equation} valid for all $v\in {\rm Dom}(I_{m_*})\cap {\rm L}^1({\mathbb R}^d,\,{\rm d} \mu_*)\cap {\rm L}^\infty({\mathbb R}^d,\,{\rm d} \mu_*)$ and all positive $\varepsilon$, where $\beta(\varepsilon)=c-\frac d4\log\varepsilon$ for $\varepsilon<1$, $\beta(\varepsilon)=c-\frac 14\log\varepsilon$ for $\varepsilon\ge1$, and $c$ is a suitable positive constant. \end{cor} \noindent {\sl Proof.~} We have $\|v(s)\|_\infty\le Cs^{-1/2}\|v_0\|_1$ for large $s$. Interpolating between this bound and the L$^\infty$ contractivity property (valid since $I_{m_*}$ is a Dirichlet form) shows that $\|v(s)\|_\infty\le Cs^{-1/4}\|v_0\|_2$ for large $s$. Similarly, $\|v(s)\|_\infty\le Cs^{-d/4}\|v_0\|_2$ for small $s$. The validity of such ultra-contractive bounds for the solution of the linear evolution considered is known to be equivalent, by \cite{D}, Example 2.3.2, to the stated logarithmic Sobolev inequalities for the initial datum $u_0$ if it belongs to ${\rm Dom}(I_{m_*})\cap {\rm L}^1({\mathbb R}^d,\,{\rm d} \mu_*)\cap {\rm L}^\infty({\mathbb R}^d,\,{\rm d} \mu_*)$. At this point the evolution has no role anymore and, to avoid confusion, we choose to write $v$ instead of $u_0$ in the statement. \qed The next consequences we draw involve the recurrence of the semigroup considered. \begin{cor} The semigroup $\{T_s\}_{s\ge0}$ associated to $L_{m_*}$ is recurrent. In particular, $L_{m_*}$ does not admit a (minimal) positive Green function and the manifold $({\mathbb R}^d,{\bf g}_{\alpha=1})$ is parabolic. \end{cor} \noindent {\sl Proof.~} It suffices to note that a semigroup $\{T_s\}_{s\ge0}$ is, by definition, transient iff $\int_0^\infty T_sv\,{\rm d} s$ is a.e.
finite for all $v\in\LL^2({\mathbb R}^d,\,{\rm d} \mu_*)$. This of course does not hold in the present case because of the $s^{-1/2}$ long-time behaviour of the heat kernel. \qed \begin{cor} There is no bounded, strictly positive, $\mu_*$--integrable function $h$ such that \[ \int_{{\mathbb R}^d}|v|h\,{\rm d} \mu_* \le I_{m_*}[v]^{1/2} \] for all $v\in {\rm Dom}\,(I_{m_*})$. \end{cor} \noindent {\sl Proof.~} The existence of a function $h$ with the stated properties is equivalent to the transience of the semigroup at hand, by \cite{F}, Th. 1.5.1. \qed \begin{cor}\label{No.Hardy.No.Party} There is no bounded, strictly positive, $\mu_*$-integrable function $h$ such that for all $v\in {\rm Dom}\,(I_{m_*})$ \[ \int_{{\mathbb R}^d}v^2h\,{\rm d} \mu_* \le I_{m_*}[v]. \] \end{cor} \noindent {\sl Proof.~} Since $h$ is assumed to be integrable, $h\,{\rm d} \mu_*$ is a finite measure, which we can normalize to 1; one would then have, by the H\"older inequality, that \[ \int_{{\mathbb R}^d}|v|h\,{\rm d} \mu_* \le \left(\int_{{\mathbb R}^d}v^2h\,{\rm d} \mu_*\right)^{1/2}\le (I_{m_*}[v])^{1/2} \] for all $v\in {\rm Dom}(I_{m_*})$, contradicting the above result. \qed \noindent{\bf Remark}. The above results prove that Hardy--type inequalities relative to the Dirichlet form $I_{m_*}$ and to a strictly positive integrable weight $h$ cannot hold, even if $h$ is required to be bounded. This shows that the strategy of \cite{BBDGV}, which relied heavily on the validity of Hardy--type inequalities and allowed us to deal with the case $m\not=m_*$, cannot be adapted to the present situation. The ultra-contractive bounds discussed above can also be related to the validity of Nash inequalities for $I_{m_*}$. In fact, we now prove some inequalities of that type in weighted Sobolev spaces which will be very important when dealing with the nonlinear evolution. Such inequalities play here the role that Hardy--type inequalities played in the case $m\neq m_*$ studied in \cite{BBDGV}, cf.
also Section \ref{m.neq.mstar}. The following crucial result is a purely functional inequality which is proved using the linear evolution only, but will turn out to be the key point for the study of the nonlinear evolution as well. \begin{prop}\label{GNprop} For all $v$ such that $I_{m_*}[v]/\|v\|_1^2\le c_0$ for some $c_0>0$, the following Gagliardo--Nirenberg inequality holds true: \begin{equation}\label{GN} \|v\|_2^2\le KI_{m_*}[v]^{1/3}\|v\|_1^{4/3}, \end{equation} for all $v\in \LL^2({\mathbb{R}}^d,\,{\rm d} \mu_*)\cap{\rm Dom}(I_{m_*})$, where the positive constant $K$ depends on $c_0$, and diverges as $c_0\to+\infty$. \end{prop} \noindent {\sl Proof.~}\; To get the claim, first interpolate between the bound $\|v(s)\|_\infty\le H(s)\|v_0\|_1$ and the L$^1$ contraction property to get $\|v(s)\|_2\le H(s)^{1/2}\|v_0\|_1$. From this starting point we can use a known argument, cf. \cite{D}, and we briefly recall it for the sake of completeness. In fact, use the semigroup property and the fact that $I_{m_*}[v(s)]$ is nonincreasing as a function of $s$ to write \[ \begin{aligned} H(s)\|v_0\|_1^2&\ge (v(s), v(s))=(v(2s),v_0)\\ &=(v_0,v_0)-\int_0^{2s}I_{m_*}\left[v\left(\frac \lambda2\right)\right]\,{\rm d} \lambda\\ &\ge (v_0,v_0)-2sI_{m_*}[v_0]. \end{aligned} \] Therefore, \begin{equation}\label{optimize} \|v_0\|_2^2\le 2sI_{m_*}[v_0]+H(s)\|v_0\|_1^2. \end{equation} It would then be easy to minimize the r.h.s. of the latter formula should one have $H(s)=cs^{-\alpha}$ for all $s>0$. The fact that $H(s)$ has such a form with different powers of time when $s$ is small and when $s$ is large forces us to proceed as follows. Assuming that $I_{m_*}[v_0]$ and $\|v_0\|_1$ are not zero, the right hand side tends to infinity both as $s\to 0$ and as $s\to\infty$, hence there is a minimum for one or several intermediate values of $s$.
We want to take a particular value of $s$ that almost minimizes the above formula, and we want that value to correspond to the range of large $s$, where $H(s)=c_2s^{-1/2}$. Since we assumed that $I_{m_*}[v_0]/\|v_0\|_1^2\le c_0$ for some $c_0>0$, we consider the 1-parameter quantity \[ s_\alpha=\alpha\left[\frac{\|v_0\|_1^2}{I_{m_*}[v_0]}\right]^{2/3} \] and observe that \[ s_\alpha> 1\qquad\mbox{whenever}\qquad \alpha >c_0^{2/3}. \] We choose $\alpha$ accordingly (so that it is bounded away from zero) and plug the corresponding $s_\alpha$ into \eqref{optimize}, noticing that for $s>1$ we have $H(s)=c_2s^{-1/2}$; with these choices \eqref{optimize} becomes \begin{equation} \|v_0\|_2^2\le K\|v_0\|_1^{4/3}I_{m_*}[v_0]^{1/3} \end{equation} with $K=K(\alpha)=2\alpha+c_2\alpha^{-1/2}$. This concludes the proof.\qed \subsection{Mass conservation for the linear flow} We present here the calculation of ``conservation of mass'' for the linear semigroup. As usual we put $V^{2-m_*}_{D}\,{\rm d} x=\,{\rm d} \mu_*$. \begin{lem} The following property of mass conservation holds true for every {\it nonnegative} $v\in \LL^1(\,{\rm d} \mu_*)$: \begin{equation} \frac{\,{\rm d}}{\,{\rm d}t }\int v \,{\rm d} \mu_*=0. \end{equation} \end{lem} \noindent We give two proofs of the result, first a quite direct one and then a proof relying on the special geometric nature of the linear flow. \noindent{\bf First proof}. We use the specific form of the weights involved for a direct calculation, first for an initial datum belonging to ${\rm L}^1({\rm d}\mu_*)\cap {\rm L}^2({\rm d}\mu_*)$. Choose a test function $\varphi_R$ supported in the Euclidean ball $B_{2R}$ with $\varphi_R=1$ on $B_R(0)$.
Let $t\ge t_0>0$ and compute, for any such $t$: \begin{equation}\begin{split} &\left|\frac{\,{\rm d}}{\,{\rm d}t }\int_{\mathbb{R}^d} v\varphi_R\,{\rm d} \mu_* \right| =\left|-\int_{R\le |x|\le 2R} \nabla v\cdot \nabla\varphi_R V_{D}\,{\rm d} x\right|\\ &\le \int_{R\le |x|\le 2R} |\nabla V_D|\,|\nabla\varphi_R| v \,{\rm d} x + \int_{R\le |x|\le 2R} |\Delta \varphi_R|\,v\,V_D\,{\rm d} x \le \frac{c(m,d)}{R^2}\int_{R\le |x|\le 2R} v\,V_D\,{\rm d} x \end{split} \end{equation} because $|\nabla V_D|\le c_0(m,d) V_D/R$ on this annulus, and it is not restrictive to assume that $|\nabla \varphi_R|\le c_1/R$ and $|\Delta \varphi_R|\le c_2 /R^2$ whenever $R\le |x|\le 2R$. By the H\"older inequality we obtain that \[ \int_{R\le |x|\le 2R} v\,V_D\,{\rm d} x \le \left(\int_{R\le |x|\le 2R} v^2\,V_D^{2-m_*}\,{\rm d} x\right)^{1/2} \left(\int_{R\le |x|\le 2R} V_D^{m_*}\,{\rm d} x\right)^{1/2} \le \varepsilon_R\, R^2 \] where we let $\varepsilon_R:=\left(\int_{R\le |x|\le 2R} v^2\,V_D^{2-m_*}\,{\rm d} x\right)^{1/2}$ and it is easy to check that $\int_{R\le |x|\le 2R} V_D^{m_*}\,{\rm d} x\le c_1 R^4$. We obtain that \[ \left|\frac{\,{\rm d}}{\,{\rm d}t }\int_{\mathbb{R}^d} v\varphi_R\,{\rm d} \mu_* \right|\le c_1\varepsilon_R \] and we notice that $\varepsilon_R\to 0$ as $R\to \infty$, a fact which holds because $v\in\LL^2(\,{\rm d}\mu_*)$. This proves that \begin{equation} \lim_{R\to\infty}\left|\int_{\mathbb{R}^d} v(t_1)\varphi_R\,{\rm d} \mu_*-\int_{\mathbb{R}^d} v(t_0)\varphi_R\,{\rm d} \mu_*\right|\le c_1\lim_{R\to\infty}\varepsilon_R (t_1-t_0)=0 \end{equation} for any $0< t_0\le t_1$. We can use dominated convergence on the left-hand side, since the Markov property implies $v(t)\in\LL^1(\,{\rm d}\mu_*)$ for all $t\ge 0$. This yields the claim for strictly positive times and for initial data belonging to ${\rm L}^1({\rm d}\mu_*)\cap {\rm L}^2({\rm d}\mu_*)$.
We can then reach $t=0$ using the strong continuity in $\LL^1(\,{\rm d}\mu_*)$ of the evolution, and consider general data in $\LL^1(\,{\rm d}\mu_*)$ by approximation. \noindent{\bf Second proof}. We can also use a general argument involving conservation of probability on manifolds with curvature bounded below. Let $\{T_t\}_{t\ge0}$ be the semigroup associated to the Laplace--Beltrami operator of the manifold considered. Then $\{T_t\}_{t\ge0}$ is a Markov semigroup and in particular it acts on all L$^p$ spaces ($p\in[1,+\infty]$), it is contractive on any such space and it preserves positivity. We have shown that the Ricci curvature of $M$ is bounded. An application of \cite{D}, Theorem 5.2.6 then shows that $\{T_t\}_{t\ge0}$ preserves the identity: $T_t1=1$. From this, conservation of the L$^1$ norm for data $v\ge0$ follows. In fact, with the notation $v(t)=T_tv$ and using the fact that the adjoint of $T_t$ when seen as acting on L$^1$ is $T_t$ itself but seen as acting on L$^\infty$, we have: \[\begin{aligned}\displaystyle \|v(t)\|_1&=\sup_{h\in{\rm L}^\infty\hskip-3pt,|h|\le1}\left|\int_{{\mathbb R}^d} (T_t v)h\,{\rm d} \mu_* \right|=\sup_{h\in{\rm L}^\infty\hskip-3pt,h\in[0,1]}\int_{{\mathbb R}^d}(T_t v)h \,{\rm d} \mu_* \\ &=\sup_{h\in{\rm L}^\infty\hskip-3pt,h\in[0,1]}\int_{{\mathbb R}^d}(T_t h)v \,{\rm d} \mu_* =\int_{{\mathbb R}^d}(T_t 1)v \,{\rm d} \mu_* =\int_{{\mathbb R}^d}v \,{\rm d} \mu_* =\|v\|_1\mbox{.\qed} \end{aligned} \] \subsection{Linear case. Entropy Method} \label{ssect.lcem} The behaviour of the heat kernel of the linear evolution considered and the L$^1$ contraction property allow us to conclude that for all $t\ge t_0$ \[ \|u(t)\|_2^2\le \|u(t)\|_1\|u(t)\|_\infty\le C\frac{\|u_0\|_1^2}{t^{1/2}}. \] Notice that the above bound is sharp. In fact, consider the solution corresponding to the Dirac delta at $x_0$, namely $v(t,x)=K(t,x,x_0)$.
Its L$^2$ norm then satisfies, using the symmetry of the heat kernel and the semigroup property: \[ \begin{split} \Vert v(t)\Vert^2_2&=\int_{{\mathbb R}^d}K(t,x,x_0)^2\,{\rm d} \mu_* =\int_{{\mathbb R}^d}K(t,x,x_0)K(t,x_0,x)\,{\rm d} \mu_*\\ &=K(2t,x_0,x_0)\sim c t^{-1/2}\ \ \ {\rm for\ large\ }t. \end{split} \] It is easy to get the same result by {\it entropy methods}. Although this is not necessary in the present case due to the previous calculations, this will serve as a model for the strategy of proof used in the nonlinear setting, and will already make apparent the role of the Nash inequalities proved before. \begin{prop}\label{4.13} Let $F(t)=\|v(t)\|_2^2$. Then $F(t)\le c t^{-1/2}$ for all $t>t_0$. \end{prop} \noindent {\sl Proof.~} First consider nonnegative data. Having shown that the L$^1$ norm of such solutions is conserved and, moreover, using the fact that $I_{m_*}[v(t)]$ is decreasing as a function of $t$, we get that $I_{m_*}[v(t)]/\|v(t)\|_1^2\le c_0$ for all positive $t$ and for some $c_0>0$. We are then allowed to use \eqref{GN} with $r=2, s=1$, so that we have \[ \frac{\,{\rm d} F(t)}{\,{\rm d}t}=-I_{m_*}[v(t)] \le - c \frac{F^3}{\|v(t)\|^4_1}= - c \frac{F^3}{\|v_0\|^4_1}. \] Thus we get, integrating the above differential inequality: \[ F(t)\le \tilde c\frac{\|v_{0}\|^2_1}{t^{1/2}}. \] The same decay holds true for all L$^1$ data, since we may write $-(v_0)_-\le v_0\le (v_0)_+$ and use the order preserving property of the evolution and the decay bound already proved for nonnegative (or nonpositive) solutions. In fact, denoting by $v_{\pm}(s)$ the evolutions of $(v_0)_\pm$, we have first that, by comparison, $-v_-(s)\le v(s)\le v_+(s)$ and, since $v_\pm(s)\ge0$, $v^2(s)\le v^2_+(s)+v^2_-(s)$.
This, together with the above decay property for nonnegative solutions $\|v_\pm(s)\|^2_2\le \tilde{c}\|(v_{0})_\pm\|_1^2 s^{-1/2}$, implies that \[\begin{split} F(t)=\|v(t)\|^2_2 &\le \|v_-(t)\|^2_2+\|v_+(t)\|^2_2 \le \frac{\tilde{c}\left(\|(v_{0})_-\|^2_1+\|(v_{0})_+\|^2_1\right)}{t^{1/2}} \le \tilde{c}\|v_0\|^2_1 t^{-1/2} \mbox{.\qed} \end{split} \] \section{Nonlinear Entropy Method}\label{sect.nlem} Once the linear flow has been examined and its behaviour described, we prepare the way for the proof of convergence with a rate for the nonlinear flow via a new version of the entropy-entropy dissipation method. We shall use the entropy and Fisher information introduced at the end of Section \ref{sect.prel}. The results of this section hold for any $m<1$, but the main interest is in employing them for the case $m=m_*$ as is done in the subsequent section. From now on we revert to the notations for space, time and flow variables introduced in Sections \ref{sect.intro} and \ref{sect.prel}. Thus, $w=w(s,y)$. \subsection{Comparing linear and nonlinear quantities. The Fisher information} We have to prove the basic inequalities that relate the linear and the nonlinear quantities of the entropy method. We start the analysis with a new inequality between the linear and nonlinear Fisher information, then we recall a Lemma of \cite{BBDGV} which compares the linear and nonlinear entropy. We shall write $V_*$ instead of $V_{D_*}$. We put \begin{equation}\label{Fisher} \mathcal{I}_m[w]= \int_{\mathbb{R}^d}\left|\nabla\left(\frac{w^{m-1}-1}{m-1}V_*^{m-1}\right)\right|^2 V_* w \,{\rm d}y, \end{equation} which is the (nonlinear) Fisher information. It can be linearized, as done in \cite{BBDGV}, by letting $w=1+\varepsilon g V_*^{1-m}$ and taking the limit as $\varepsilon\to 0$.
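The linearization just mentioned is, formally, a one-step computation: writing $w=1+\varepsilon g V_*^{1-m}$ and using $\frac{(1+x)^{m-1}-1}{m-1}=x+O(x^2)$ as $x\to 0$, we get \[ \frac{w^{m-1}-1}{m-1}\,V_*^{m-1}=\varepsilon g+O(\varepsilon^2), \qquad V_*\, w=V_*+O(\varepsilon), \] so that $\mathcal{I}_m[w]=\varepsilon^2\int_{\mathbb{R}^d}\left|\nabla g\right|^2 V_*\,{\rm d}y+o(\varepsilon^2)$.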
We obtain the linearized form of the Fisher information, which takes the expression of the Dirichlet form typical of the linearized equation \begin{equation} I_{m}[w] = \int_{\mathbb{R}^d}\left|\nabla \left((w-1)V_*^{m-1}\right)\right|^2 V_*\,{\rm d}y = \int_{\mathbb{R}^d}\left|\nabla g\right|^2 V_* \,{\rm d}y, \end{equation} where $g=(w-1)V_*^{m-1}$; it is not restrictive to let $\varepsilon=1$ in the sequel. \noindent The next Lemma compares in a quantitative way $I_{m}$ and $\mathcal{I}_m$. This is a first attempt that will be improved subsequently for $m=m_*$ and $m\ne m_*$. We drop the subindex $m$ from both quantities for brevity. \begin{lem}\label{Fisher-lin-nonlin} Let $w$ be a measurable function on $\mathbb{R}^d$ with $0<W_0\le w\le W_1<+\infty$, where $W_0<1$ and $W_1>1$, and assume that $\mathcal{I}[w]<+\infty$. Then the following inequality holds true \begin{equation} I [w] \le k_1\mathcal{I}[w] + k_2\int_{\mathbb{R}^d}g^4V_*^{4-3m}\,{\rm d}y \end{equation} for any $m<1$, where $g=(w-1)V_*^{m-1}$, $k_1= 2W_1^{3-2m}$ and $k_2$ depends only on $W_1, W_0$, $m$ and $d$. \end{lem} \noindent {\sl Proof.~} We have $w-1=g V_*^{1-m}$. We first re-write the Fisher information \eqref{Fisher} in the following way: \begin{equation}\begin{split} \mathcal{I}[w] &= \int_{\mathbb{R}^d}\left|\nabla\left(\frac{w^{m-1}-1}{m-1}V_*^{m-1}\right)\right|^2 V_* w \,{\rm d}y\\ &:= \int_{\mathbb{R}^d}\left|\nabla\left(A(w)(w-1)V_*^{m-1}\right)\right|^2 V_* w \,{\rm d}y, \end{split} \end{equation} where we have defined \[ A(w):=\frac{w^{m-1}-1}{(m-1)(w-1)}=\frac{a(w)}{w-1}. \] It is easy to check that $A(1)=1$, $A(w)>0$, and that $A(w)\to 0$ as $w\to\infty$. Moreover, \begin{equation}\label{A'} A'(w)=\frac{w^{m-2}-A(w)}{w-1}\le 0 \end{equation} since the function $a(w)=(w^{m-1}-1)/(m-1)$ is concave in $w$, so that its incremental quotient $A(w)$ (taken at $w=1$) is a non-increasing function of $w$.
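For later use we record the values of $A$ at $w=1$, which follow from the Taylor expansion of $a(w)=(w^{m-1}-1)/(m-1)$ around $w=1$: since $a(1)=0$, $a'(1)=1$ and $a''(1)=m-2$, we have \[ a(1+h)=h+\frac{m-2}{2}\,h^2+O(h^3), \qquad A(1+h)=\frac{a(1+h)}{h}=1+\frac{m-2}{2}\,h+O(h^2), \] so that $A(1)=1$ and $A'(1)=(m-2)/2$.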
If we let $W_0\le w\le W_1$, with $0<W_0\le 1$ and $1\le W_1< +\infty$, we then have the bounds \begin{equation}\label{bounds.A} W_1^{m-2}=a'(W_1)\le |A(w)| \le a'(W_0)=W_0^{m-2}. \end{equation} We shall also need estimates for $|A'(w)|$ for $W_0\le w\le W_1$ as above, and in fact it is easy to check that $A'(1)=(m-2)/2$ and that $A^\prime$ is bounded on $[W_0,W_1]$. Letting now $w$ be a function, noticing that $w-1=g V_*^{1-m}$ and that \eqref{A'} can be rewritten as $(w-1)A'(w)+A(w)=w^{m-2}$, we get \begin{equation}\begin{split} \nabla\left[A(w)(w-1)V_*^{m-1}\right]&=\nabla \big[A(w)g\big] = A(w)\nabla g + A'(w)\big[\nabla w \big] g \\ &= A(w)\nabla g + A'(w)\big[\nabla (1+g V_*^{1-m}) \big] g\\ &= A(w)\nabla g + A'(w)gV_*^{1-m}\big[\nabla g\big] + A'(w)g^2 \big[\nabla V_*^{1-m} \big]\\ &= \big[A(w)+ A'(w)(w-1)\big]\nabla g+ A'(w)g^2 \big[\nabla V_*^{1-m} \big]\\ & = w^{m-2}\nabla g+ A'(w) \big[\nabla V_*^{1-m} \big]g^2. \end{split} \end{equation} Now we can use this equality in \eqref{Fisher} to get: \[ \begin{split} \mathcal{I}[w] &= \int_{\mathbb{R}^d}\left|A(w)\nabla g+\big[\nabla A(w)\big]g\right|^2 V_* w \,{\rm d}y\\ &= \int_{\mathbb{R}^d}\left|w^{m-2}\nabla g+ A'(w) \big[\nabla V_*^{1-m} \big]g^2 \right|^2 V_* w \,{\rm d}y\\ &\ge \frac12 \int_{\mathbb{R}^d}\left|\nabla g\right|^2 V_* w^{2(m-2)+1} \,{\rm d}y - \int_{\mathbb{R}^d}g^4\big|A'(w)\big|^2 \left|\nabla V_*^{1-m}\right|^2 V_* w \,{\rm d}y\\ &\ge \frac12W_1^{2m-3}\int_{\mathbb{R}^d}\left|\nabla g\right|^2 V_* \,{\rm d}y - \int_{\mathbb{R}^d}g^4\big|A'(w)\big|^2 \left|\nabla V_*^{1-m}\right|^2 V_* w \,{\rm d}y, \end{split} \] where we have used the inequality $|a+b|^2 + |b|^2\ge (1/2)|a|^2 $, valid for arbitrary vectors $a,b$, and the bounds $W_0\le w\le W_1$.
Thus, we have \[ I [g]=\int_{\mathbb{R}^d}\left|\nabla g\right|^2 V_* \,{\rm d}y \le \frac{2}{W_1^{2m-3}}\mathcal{I}[w] + \frac{2}{W_1^{2m-3}} \int_{\mathbb{R}^d}g^4\big|A'(w)\big|^2 \left|\nabla V_*^{1-m}\right|^2 V_* w \,{\rm d}y. \] We next remark that the weight \[\begin{split} \left|\nabla V_*^{1-m}(y)\right|^2 V_*(y) &= \frac{4 |y|^2}{\big(D+|y|^2\big)^4} \frac{1}{\big(D+|y|^2\big)^{\frac{1}{1-m}}}\\ &\le \frac{4}{\big(D+|y|^2\big)^3} \frac{1}{\big(D+|y|^2\big)^{\frac{1}{1-m}}} = \frac{4}{\big(D+|y|^2\big)^{3+\frac{1}{1-m}}}=4 V_*^{4-3m} \end{split} \] is integrable whenever $(d-6)m>(d-8)$. \textit{Notice that when $m=m_*=(d-4)/(d-2)$, the weight is integrable.} We conclude by estimating $|A'|\le k_0$, so that \[ I [g]=\int_{\mathbb{R}^d}\left|\nabla g\right|^2 V_* \,{\rm d}y \le 2W_1^{3-2m}\mathcal{I}[w] + 8 k_0^2 W_1^{4-2m} \int_{\mathbb{R}^d}g^4V_*^{4-3m}\,{\rm d}y. \] This concludes the proof.\qed \subsection{Evolution properties of the Fisher information} We now describe some further properties of the Fisher information $\mathcal{I}[w(s)]$ as a function of time, such as the fact that it is uniformly bounded for large $s$ and that it goes to zero as $s\to+\infty$. We prove a new differential inequality for the Fisher information. Indeed, by Proposition 2.6 of \cite{BBDGV}, it is easy to see that the Fisher information is finite for almost every $s$ and is the time derivative of the entropy for almost every $s$, so that: \begin{equation} \mathcal{F}[w(s_0)]-\mathcal{F}[w(s)]=\int_{s_0}^s\mathcal{I}[w(\xi)]\,{\rm d}\xi. \end{equation} Taking the limits $s\to\infty$ and $s_0\to 0$, and recalling that $0\le \mathcal{F}[w(0)]<+\infty$ and $0\le \mathcal{F}[w(s)]\to 0$ as $s\to+\infty$, we conclude that $\mathcal{I}[w(s)]$ is integrable (and nonnegative) on $(0,+\infty)$. \begin{prop}\label{prop.diff.ineq.Fisher} In addition to the running assumptions, suppose that $v(0)-V_D\ge0$.
Then, the following differential inequality for the Fisher information holds true \begin{equation}\label{diff.ineq.Fisher} \frac{\,{\rm d} \mathcal{I}[w(s)]}{\,{\rm d}s}\;\le\; \kappa_1 \mathcal{I}[w(s)]\;-\;\kappa_2 \mathcal{I}^2[w(s)], \end{equation} where the constant $\kappa_1$ depends on $m,d,s_0$, and the constant $\kappa_2$ depends on $m,d$, the relative mass $\int_{\mathbb{R}^d}(v_0-V_D)\,{\rm d}x$, $W_0$ and $W_1$. Moreover, $\mathcal{I}[w(s)]$ goes to zero as $s\to\infty$. \end{prop} \noindent {\bf Remark.} The time derivative of the Fisher information is usually controlled by means of the Bakry-Emery method (cf. \cite{MR889476}), which allows one to obtain spectral gap estimates. Such an estimate cannot hold when $m=m_*$ since there is no spectral gap. The above proposition can be viewed as a substitute for the Bakry-Emery method and provides asymptotic estimates in applications where no spectral gap is available. \noindent {\sl Proof.~}The proof is divided in several steps. We use the notation $\Omega=\left(\frac{v^{m-1}-V_D^{m-1}}{m-1}\right)$ in this section for brevity. Note that for large $s$, $\Omega $ is uniformly bounded and \ $|\Omega|\le V_{D_0}^{m-2}|v-V_D|$. \medskip \noindent$\bullet~$\textsc{Expression of the derivative.} We first perform a formal time-derivative of $\mathcal{I}$, but in this case it is convenient to write it in terms of $v$ and $V_D$ instead of $w$, where we recall that $w=v/V_D$: \begin{equation} \begin{split} \frac{\,{\rm d} \mathcal{I}}{\,{\rm d} s} &=\frac{\,{\rm d}}{\,{\rm d} s}\int_{\mathbb{R}^d}\left|\nabla\Omega\right|^2 v\,{\rm d}y\\ &= 2\int_{\mathbb{R}^d}\nabla\Omega \cdot\nabla\left(v^{m-2}\frac{\,{\rm d} v}{\,{\rm d}s}\right)v\,{\rm d}y +\int_{\mathbb{R}^d}\left|\nabla\Omega\right|^2 \frac{\,{\rm d} v}{\,{\rm d}s}\,{\rm d}y = (A)+(B). \end{split} \end{equation} Now we treat the two terms separately.
\noindent$\bullet~$\textsc{Estimating the term} (A): We have \begin{equation} (A) =2\int_{\mathbb{R}^d} v\nabla\Omega \cdot\nabla\left(v^{m-2}\frac{\,{\rm d} v}{\,{\rm d}s}\right)\,{\rm d}y =-2\int_{\mathbb{R}^d} \nabla\cdot \left[v\nabla\Omega\right] v^{m-2}\frac{\,{\rm d} v}{\,{\rm d}s}\,{\rm d}y. \end{equation} Using the equation, $v_s=\nabla\cdot \left(v\nabla\Omega\right)$, we get \begin{equation} (A) =-2\int_{\mathbb{R}^d} \left[\nabla\cdot \left(v\nabla\Omega\right)\right]^2 v^{m-2}\,{\rm d}y =-2\int_{\mathbb{R}^d} \left[\nabla\cdot \left(v\nabla\Omega\right)\right]^2 \frac{\Omega^2} {\Omega^2v^{2-m}}\,{\rm d}y. \end{equation} Then we have \begin{equation} \begin{split} (A) & \le_{(i)} -2\frac {\left[\int_{\mathbb{R}^d} |\nabla\cdot \big( v\nabla\Omega\big)| |\Omega|\,{\rm d}y \right]^2} {\int_{\mathbb{R}^d}\Omega^2v^{2-m}\,{\rm d}y} \le_{(ii)}-2\frac{\left[-\int_{\mathbb{R}^d} v|\nabla\Omega|^2 \,{\rm d}y \right]^2} {\int_{\mathbb{R}^d}\Omega^2v^{2-m}\,{\rm d}y}\\ &\le_{(iii)}-2\frac{\mathcal{I}^2}{c_2\int_{\mathbb{R}^d}(v(0)-V_D)\,{\rm d}y} :=-\kappa_2\mathcal{I}^2, \end{split} \end{equation} where in (i) we have used the H\"older inequality \[ \int \frac{h_1^2}{h_2}\,{\rm d}\mu\ge\frac{\left[\int h_1\,{\rm d}\mu\right]^2}{\int h_2\,{\rm d}\mu}, \] while in (ii) we use integration by parts, after noticing that $|a||b|\ge ab$.
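The integral inequality used in (i) is just the Cauchy--Schwarz inequality in disguise: for nonnegative $h_1, h_2$, \[ \int h_1\,{\rm d}\mu=\int\frac{h_1}{\sqrt{h_2}}\,\sqrt{h_2}\,{\rm d}\mu \le\left(\int\frac{h_1^2}{h_2}\,{\rm d}\mu\right)^{1/2}\left(\int h_2\,{\rm d}\mu\right)^{1/2}, \] and squaring gives the stated bound.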
The point (iii) relies on the fact that the difference between two Barenblatt solutions behaves like $V_D^{2-m}$, and on the fact that $V_{D_0}\le v(t)\le V_{D_1}$, so that \[ \begin{split} \int_{\mathbb{R}^d}\Omega^2v^{2-m}\,{\rm d}y &=\int_{\mathbb{R}^d}\left(\dfrac{w^{m-1}-1}{m-1}\right)^2V_D^{2(m-1)}v^{2-m}\,{\rm d}y\\ &\le c\int_{\mathbb{R}^d}\left(\dfrac{w^{m-1}-1}{m-1}\right)^2V_{D_1}^{m}\,{\rm d}y\\ (a)&\le c\max\{W_0^{2(m-2)},W_1^{2(m-2)}\}\int_{\mathbb{R}^d}|w-1|^2V_{D_1}^{m}\,{\rm d}y\\ &=c\max\{W_0^{2(m-2)},W_1^{2(m-2)}\}\int_{\mathbb{R}^d}|v-V_D|^2\frac{V_{D_1}^{m}}{V_D^{2}}\,{\rm d}y\\ (b)&\le c\,c_0\max\{W_0^{2(m-2)},W_1^{2(m-2)}\}\int_{\mathbb{R}^d}|v-V_D|^2V_{D_1}^{m-2}\,{\rm d}y\\ (c)&\le c\,c_0c_1\max\{W_0^{2(m-2)},W_1^{2(m-2)}\}\int_{\mathbb{R}^d}|v-V_D|V_{D}^{2-m}V_{D}^{m-2}\,{\rm d}y\\ (d)& = c_2 \int_{\mathbb{R}^d}|v-V_D|\,{\rm d}y = c_2 \int_{\mathbb{R}^d}(v-V_D)\,{\rm d}y = c_2\int_{\mathbb{R}^d}(v(0)-V_D)\,{\rm d}y, \end{split} \] where in the second line we have used the comparability of $V_D$ and $V_{D_1}$, in (a) we have used \eqref{bounds.A} squared, namely \[ W_1^{m-2}|w-1|\le \left|\frac{(w^{m-1}-1)}{(m-1)}\right| \le W_0^{m-2}|w-1|, \] while in (b) we have used $V_D\ge c_0{V_{D_1}}$ and in (c) we have used $|v-V_D|\le c_1 V_D^{2-m}$. In the last step (d) we have used hypothesis (H2') together with the fact that $v(0)-V_D\ge 0$ and conservation of relative mass, proved in Proposition 2.3 of \cite{BBDGV}. \noindent$\bullet~$\textsc{Estimating the term} (B). We shall use the celebrated B\'enilan-Crandall estimates \cite{Benilan-Crandall}, which for solutions to the un-rescaled FDE $\partial_t u =\Delta u^m/m$ read \[ \partial_t u(t,x) \le \frac{u(t,x)}{(1-m)t}\qquad\mbox{for any $t>0$} \] if $m<1$, even for $m\le 0$.
We perform the scaling to the Fokker--Planck equation, like in section \ref{sect.prel}, so that the B\'enilan--Crandall estimates read \begin{equation}\label{BC.est.FP}\begin{split} \partial_s v(s,y) &\le\frac{2}{[d(1-m)-2](1-m)}\left[\frac{d}{d(1-m)-2} +\frac{1}{(1-m)\left(\ee^{s(1-m)[d(1-m)-2]/2}-1\right)}\right]v(s,y)\\ &\le\frac{2}{[d(1-m)-2](1-m)}\left[\frac{d}{d(1-m)-2} +\frac{1}{(1-m)\left(\ee^{s_0(1-m)[d(1-m)-2]/2}-1\right)}\right]v(s,y)\\ &=\kappa_1(m,d,s_0)v(s,y), \end{split} \end{equation} if $s\ge s_0>0$. We remark that $\kappa_1\to +\infty$ when $s_0\to 0$ but this will not be a problem. We finally estimate (B) \begin{equation} \begin{split} (B) &=\int_{\mathbb{R}^d}\left|\nabla\Omega\right|^2 \partial_s v\,{\rm d}y \le \kappa_1(m,d,s_0) \int_{\mathbb{R}^d}\left|\nabla\Omega\right|^2 v\,{\rm d}y\\ &=\;\kappa_1(m,d,s_0)\;\mathcal{I}[w(s)]. \end{split} \end{equation} This calculation is formal and has to be justified, but before we do that let us draw a first consequence. \medskip \noindent$\bullet~$\textsc{Integrating the Differential Inequality.} We obtained a closed differential inequality for the Fisher information $\mathcal{I}[w(s)]=\mathcal{I}(s)$ \[ \frac{\,{\rm d}\mathcal{I}(s)}{\,{\rm d}s}-\kappa_1\mathcal{I}(s)+\kappa_2\mathcal{I}^2(s)\le 0 \] which is of Bernoulli type and can be estimated explicitly. Indeed, the exact solution on $(s_1,s)\subseteq [0,+\infty)$ of the Bernoulli ordinary differential equation $Z^\prime(s)-\kappa_1Z(s)+\kappa_2Z^2(s)=0$ is given by \begin{equation} Z(s)=\frac{\ee^{\kappa_1(s-s_1)}}{\left[Z_0^{-1} +\int_{s_1}^s\ee^{\kappa_1(\xi-s_1)}\kappa_2\,{\rm d}\xi\right]} \le\frac{\ee^{\kappa_1(s-s_1)}} {\kappa_2\int_{s_1}^s\ee^{\kappa_1(\xi-s_1)}\,{\rm d}\xi}\le c\frac{\kappa_1}{\kappa_2} \end{equation} if $s\ge s_1+1:=s_0$, for a suitable $c>1$ which can be taken to be arbitrarily close to one by choosing $s_1$ large enough.
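For completeness, the displayed expression for $Z$ follows from the standard substitution $Y=1/Z$, which turns the Bernoulli equation into a linear one: \[ Y'=-\frac{Z'}{Z^2}=-\kappa_1 Y+\kappa_2, \qquad Y(s)=\ee^{-\kappa_1(s-s_1)}Z_0^{-1}+\kappa_2\int_{s_1}^{s}\ee^{-\kappa_1(s-\xi)}\,{\rm d}\xi, \] and inverting $Z=1/Y$ gives the formula above.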
By comparison it is clear that $\mathcal{I}(s)\le Z(s)\le c\kappa_1/\kappa_2$, provided $\mathcal{I}(s_1)\le Z(s_1)=Z_0$. Therefore, for all $c>1$ and $0<s_0\le s$, \begin{equation} \mathcal{I}[w(s)]\le\frac{c\kappa_1}{\kappa_2}. \end{equation} The constant $\kappa_2$ depends on $m,d$, the relative mass $\int_{\mathbb{R}^d}(v_0-V_D)\,{\rm d}x$, $W_0$ and $W_1$; the constant $\kappa_1$ depends on $m,d,s_0$ and $\kappa_1\to+\infty$ when $s_0\to 0$. \medskip \noindent$\bullet~$\textsc{Justification of the calculation.} The differentiation of $\mathcal{I}$ performed above contains calculations that are not justified a priori, since they involve differentiations and integrations by parts in integrals over the whole space. Therefore, we introduce a cutoff function $\zeta_n(y)$ for the integrand of $\mathcal{I}$ and define $$ \mathcal{I}_n=\int_{\mathbb{R}^d}\left|\nabla\Omega\right|^2 v \zeta^2_n\,{\rm d}y. $$ We assume that $\zeta_n$ has value $1$ if $|y|\le n$, value $0$ if $|y|\ge 2n$, and $|\nabla \zeta_n|\le 1/n$, $ |\Delta\zeta_n|\le 1/n^2$. Then we have \begin{equation} \frac{\,{\rm d} \mathcal{I}_n}{\,{\rm d} s}= (A_n)+(B_n) \end{equation} and the two terms are as before but for the cutoff factor. There is no problem with $(B_n)$. But $(A_n)$ produces extra terms that we must control. Indeed, \begin{equation*} \begin{split} (A_n) &=2\int_{\mathbb{R}^d} v\nabla\Omega \cdot\nabla\left(v^{m-2}\frac{\,{\rm d} v}{\,{\rm d}s}\right)\zeta_n^2\,{\rm d}y\\ &=-2\int_{\mathbb{R}^d} \nabla\cdot \left(v\nabla\Omega\right) v^{m-2}\frac{\,{\rm d} v}{\,{\rm d}s}\zeta_n^2\,{\rm d}y -2\int_{\mathbb{R}^d} v^{m-1}\frac{\,{\rm d} v}{\,{\rm d}s}\,\left(\nabla\Omega\cdot \nabla \zeta_n^2\right)\,{\rm d}y.
\end{split} \end{equation*} When we replace $dv/ds$ by its value according to the equation, the first of the two terms of the last expression becomes \begin{equation}\label{form.An1} (A_{n1}) :=-2\int_{\mathbb{R}^d} \left|\nabla\cdot \left(v\nabla\Omega\right)\right|^2 v^{m-2}\zeta_n^2\,{\rm d}y \left(=-2\int_{\mathbb{R}^d}|v_s|^2 v^{m-2}\zeta_n^2\,{\rm d}y\right), \end{equation} which has a convenient negative sign. We now perform integration by parts on this term, an operation that is now perfectly justified, and we get much as before: \begin{equation*} \begin{split} (A_{n1}) &=-2\int_{\mathbb{R}^d} \left[\nabla\cdot \left(v\nabla\Omega\right)\right]^2 \frac{\Omega^2} {\Omega^2v^{2-m}} \zeta_n^2\,{\rm d}y \le -2\frac{\left[\int_{\mathbb{R}^d} \left|\nabla\cdot \left(v\nabla\Omega\right)\right| |\Omega|\zeta_n^2\,{\rm d}y \right]^2} {\int_{\mathbb{R}^d}\Omega^2v^{2-m} \zeta_n^2\,{\rm d}y}. \end{split} \end{equation*} The numerator of the last term is larger than $|\int_{\mathbb{R}^d} \nabla\cdot \left(v\nabla\Omega\right) \Omega\zeta_n^2\,{\rm d}y|$, hence \begin{equation*} (A_{n1})\le -\kappa_2 \left| \int_{\mathbb{R}^d} \left(\nabla\cdot \left(v\nabla\Omega\right)\right) \Omega\zeta_n^2\,{\rm d}y\right|^2, \end{equation*} with the notation that we have used above. Let us calculate the integral: after integrating by parts, it gives a term as before plus a term where $\zeta_n^2$ is differentiated, as follows: \begin{equation*} \int_{\mathbb{R}^d} v\left|\nabla\Omega\right|^2\zeta_n^2 \,{\rm d}y +2 \int_{\mathbb{R}^d} v\Omega\nabla\Omega \cdot\zeta_n\nabla\zeta_n\,{\rm d}y=(X_n')+(X_n''). \end{equation*} The first term is $(X_n')=\mathcal{I}_n$, as before, while the new term, $(X_n'')$, can be tackled as follows.
We separate by H\"older a factor like $\mathcal{I}_n^{1/2}$ (but we only need to integrate over the annulus $R_n=\{n\le |y|\le 2n\}$, so this factor goes to zero as $n\to\infty$) and we still have another factor: \begin{equation*} \int_{\mathbb{R}^d} v\left|\Omega\right|^2 |\nabla\zeta_n|^2\,{\rm d}y\le C \int_{R_n}v |V_{D_0}^{m-1}-V_{D_1}^{m-1}|^2 |\nabla\zeta_n|^2\,{\rm d}y\le C\int_{R_n} \frac{v}{n^2}dy \end{equation*} and this tends to zero as $n\to\infty$ for $m>m_*$. For $m\le m_*$ we calculate differently, \begin{equation*} \int_{R_n}v |V_{D_0}^{m-1}-V_{D_1}^{m-1}|^2 |\nabla\zeta_n|^2\,{\rm d}y\le C\int_{R_n} \frac{V_D^{m-1}}{n^2}|v-V_D|dy\le C \int_{R_n} |v-V_D|dy, \end{equation*} which goes to zero as $n\to\infty$ for each fixed time, while uniformly in time we only know that it is bounded a priori. In any case, raising to the square we get an estimate of the form \begin{equation} (A_{n1})\le -\kappa'_2 \mathcal{I}_n^2+ \kappa_2''\mathcal{I}, \end{equation} with constants uniform in $n$. \medskip \noindent $\bullet$ We now consider the other new term \begin{equation*} (A_{n2}) =-4\int_{\mathbb{R}^d} v^{m-1}\zeta_n \frac{\,{\rm d} v}{\,{\rm d}s} \left( \nabla\Omega \cdot \nabla \zeta_n \right)\, \,{\rm d}y. \end{equation*} Use H\"older and the numerical inequality $2ab\le \varepsilon a^2 + b^2/\varepsilon$ to separate a term like $(A_{n1})$ in formula \eqref{form.An1}, with a factor $\varepsilon$ chosen small, so that this term can conveniently be absorbed by the negative term already present, as we have remarked. We are then left with another factor of the form \begin{equation*} (A_{n22}) \le C\int_{\mathbb{R}^d} v^{m}\left|\nabla\Omega\right|^2 | \nabla \zeta_n|^2\,{\rm d}y\sim C \int_{n\le |y|\le 2n}\frac{v^m}{n^2} |\nabla (v^{m-1}-V^{m-1})|^2\,{\rm d}y. \end{equation*} Since $v^{m-1}(s,y)\sim |y|^2$ as $|y|\to\infty$, we get the equivalent expression \begin{equation*}\label{formi2n} (A_{n22}) \le C\int_{n\le |y|\le 2n} \left|\nabla\Omega\right|^2 v\,\,{\rm d}y\le C \mathcal{I}_{2n}.
\end{equation*} If we had $C \mathcal{I}_{n}$ instead of $C \mathcal{I}_{2n}$ we would be done. The estimate for $(B_n)$ poses no problems. \medskip \noindent $\bullet$ To solve the difficulty we take a little detour. We integrate the inequality obtained so far for $\,{\rm d}\mathcal{I}_{n}/\,{\rm d}s$ to get the integrated inequality: \begin{equation} \mathcal{I}_{n}(s_2)- \mathcal{I}_{n}(s_1)\le k \int_{s_1}^{s_2} \mathcal{I}_{n}\,{\rm d}s + C\int_{s_1}^{s_2} \mathcal{I}_{2n}\,{\rm d}s. \end{equation} But the right-hand side is bounded above by the integral \begin{equation} (C+k)\int_{s_1}^{s_2} \mathcal{I}\,\,{\rm d}s \end{equation} and this is known to be bounded by the relative entropy. Moreover, since the integral $\int_{s_1}^{\infty} \mathcal{I}\,\,{\rm d}s$ is finite, for every $\varepsilon>0$ there exists $s_\varepsilon$ such that \begin{equation}\label{integralofI} \int_{s_\varepsilon}^{\infty} \mathcal{I}\,\,{\rm d}s\le \varepsilon. \end{equation} We conclude that $\mathcal{I}_{n}(s_2)- \mathcal{I}_{n}(s_1)\le (C+k)\varepsilon$ when $s_\varepsilon\le s_1\le s_2$. Combining this one-sided continuity with the integrability of $\mathcal{I}_n(s)\le \mathcal{I}(s)$ given by \eqref{integralofI}, we obtain by an easy calculus lemma that \begin{equation} \mathcal{I}_n(s)\le C_1\varepsilon \end{equation} for all $s\ge 2 s_\varepsilon$, with $C_1$ uniform in $n$. We conclude that $\mathcal{I}(s)=\lim_n \mathcal{I}_n(s)$ is bounded for all large times and goes to zero as $s\to\infty$. \noindent $\bullet$ Coming back to the differential inequality satisfied by ${\mathcal I}_n$ we have proved that ${\mathcal I}_n^\prime\le c_1{\mathcal I}-c_2 {\mathcal I}_n^2$.
Integrating this differential inequality in time between $s_1$ and $s_2$ with $s_1<s_2$ sufficiently large we get $\mathcal{I}_n(s_2)-\mathcal{I}_n(s_1)\le c_1\int_{s_1}^{s_2}\mathcal{I}(s)\,{\rm d}s-c_2\int_{s_1}^{s_2}\mathcal{I}_n(s)^2\,{\rm d}s$ so that, passing to the limit as $n\to+\infty$ and using both monotone convergence and the boundedness of $\mathcal{I}$ as a function of time, we get $\mathcal{I}(s_2)-\mathcal{I}(s_1)\le c_1\int_{s_1}^{s_2}\mathcal{I}(s)\,{\rm d}s-c_2\int_{s_1}^{s_2}\mathcal{I}(s)^2\,{\rm d}s$, which is an equivalent form of our statement.\qed \subsection{Comparing linear and nonlinear entropies} The quantitative comparison of linear and nonlinear entropies concludes the preliminary results needed for the nonlinear entropy method. Under Assumptions (H1'')-(H2''), the relative entropy is well defined. \begin{lem}[An equivalence result]\label{Lem.Bounds.RE} Let $m<1$. If $w$ satisfies {\rm (H1'')-(H2'')}, then \begin{equation}\label{Disug.Entr.Lin-Nolin} \frac{F[w]}{2 W_1^{2-m}} \le \mathcal{F}[w]\le \frac{F[w]}{2 W_0^{2-m}}. \end{equation} We recall that $F[w]= \int_{\mathbb{R}^d}|w-1|^2V_{D_*}^{m}\,{\rm d}x $. \end{lem} The short proof of this result was first given in \cite{BBDGV} but we repeat it here for the reader's convenience. \noindent {\sl Proof.~} For $a>0$, let $\phi_{a}(w):=\frac{1}{1-m}\left[(w-1)-(w^m-1)/m\right] -a\left(w-1\right)^2$. We compute $\phi_{a}'(w)=\frac{1}{1-m}\left[1-w^{m-1}\right]-2a\left(w-1\right)$ and $\phi_{a}''(w)=w^{m-2}-2a$, and note that $\phi_{a}(1)=\phi_{a}'(1)=0$. With $a=W_1^{m-2}/2$, $\phi_{a}''$ is positive on $(W_0,W_1)$, which proves the lower bound after multiplying by $V_{D_*}^m$ and integrating over $\mathbb{R}^d$. With $a=W_0^{m-2}/2$, $\phi_{a}''$ is negative on $(W_0,W_1)$, which proves the upper bound. \qed Equivalently, we may write \begin{equation}\label{Disug.Entr.Lin-Nolin.time} \frac{F[w]}{2 W_1^{2-m}}\le \frac{F[w]}{2\sup\limits_{\mathbb{R}^d}|w|^{2-m}} \le \mathcal{F}[w]\le \frac{F[w]}{2 \inf\limits_{\mathbb{R}^d}|w|^{2-m}}\le \frac{F[w]}{2 W_0^{2-m}}.
\end{equation} \subsection{The entropy bounds a suitable $\LL^p$-norm} \begin{lem}\label{Lem.Entr.Lp} Let $m<1$. If $w$ satisfies {\rm (H1'')-(H2'')}, then \begin{equation}\label{Disug.Entr.Lp} \|w-1\|^{2+\frac{m}{1-m}}_{\LL^{2+\frac{m}{1-m}}(\mathbb{R}^d)} \le \overline{D}_m F[w], \end{equation} where $\overline{D}_m$ is given at the end of the proof. \end{lem} \noindent {\sl Proof.~} We first state some inequalities between Barenblatt solutions with different constants. Consider \[ \frac{\partial V_D}{\partial D} =-\frac 1{1-m}\left[D+|y|^2 \right]^{-\frac{2-m}{1-m}} =-\frac 1{1-m}V_D^{2-m}\le 0\;. \] Hence, for any $0<D_1<D_0$ \begin{equation*}\label{est.diff.baren} \frac{D_0-D_1}{1-m}V_{D_0}^{2-m}\le\left|V_{D_1}-V_{D_0}\right| \le \frac{D_0-D_1}{1-m}V_{D_1}^{2-m}. \end{equation*} Moreover, it is easy to see that if $0<D_1\le D_0$ \[ V_{D_0}^{1-m}(y)=\frac{1}{D_0+|y|^2} \le \frac{1}{D_1+|y|^2}=V_{D_1}^{1-m}(y) \le \left(1+\frac{D_0}{D_1}\right)\frac{1}{D_0+|y|^2} =\left(1+\frac{D_0}{D_1}\right)V_{D_0}^{1-m}(y). \] The above inequalities prove that $|w-1|^{m/(1-m)}$ is bounded by a multiple of $V_*^m$. Indeed, by hypothesis (H1') we have that $0<D_1<D_*<D_0$, so that $V_{D_0}-V_{*}\le v(s)-V_{*}\le V_{D_1}-V_{*}$, and since $w=v/V_{*}$ \[\begin{split} |w-1| &=\left|\frac{v(s)-V_{*}}{V_{*}}\right| \le\frac{|V_{D_1}-V_{D_0}|}{V_{*}} \le \frac{D_0-D_1}{1-m} \frac{V_{D_1}^{2-m}}{V_{*}} \end{split} \] Thus, \[\begin{split} \|w-1\|^{2+\frac{m}{1-m}}_{\LL^{2+\frac{m}{1-m}}(\mathbb{R}^d)} &=\int_{\mathbb{R}^d}|w-1|^2|w-1|^{\frac{m}{1-m}}\,{\rm d}y\\ &\le\left(\frac{D_0-D_1}{1-m}\right)^{\frac{m}{1-m}} \left(1+\frac{D_*}{D_1}\right)^{\frac{m(2-m)}{(1-m)^2}} \int_{\mathbb{R}^d}|w-1|^2V_{*}^{m}\,{\rm d}y:= \overline{D}_m F[w] \mbox{.\qed} \end{split} \] \noindent\textbf{Remarks.
} (i) The estimate proves that $w-1\in\LL^{2+\frac{m}{1-m}}(\mathbb{R}^d)$ whenever the initial entropy is finite, since, joining inequalities \eqref{Disug.Entr.Lin-Nolin} and \eqref{Disug.Entr.Lp}, we know: \begin{equation}\label{Lp-entr} \|w(s)-1\|^{2+\frac{m}{1-m}}_{\LL^{2+\frac{m}{1-m}}(\mathbb{R}^d)} \le \overline{D}_m F[w(s)] \le 2\overline{D}_mW_1^{2-m}\mathcal{F}[w(s)] \le 2\overline{D}_mW_1^{2-m}\mathcal{F}[w_0], \end{equation} since the nonlinear entropy is decreasing in time. Moreover, we have also proved that \begin{equation} \|w(s)-1\|^{2+\frac{m}{1-m}}_{\LL^{2+\frac{m}{1-m}}(\mathbb{R}^d)} \le 2\overline{D}_mW_1^{2-m}\mathcal{F}[w(s)] \end{equation} and we shall show below that the entropy goes to zero as $s\to+\infty$\,. \noindent (ii) As an easy consequence, letting $w-1=(v-V_*)/V_*$ and using the fact that $V_*\le C$, we obtain \begin{equation}\label{Disug.Entr.Lq} \|v-V_*\|^{2+\frac{m}{1-m}}_{\LL^{2+\frac{m}{1-m}}(\mathbb{R}^d)} \le C^{2+\frac{m}{1-m}}\,\overline{D}_m F[w]\,. \end{equation} (iii) For $m=m_*$ we have $2+\frac{m}{1-m}= d/2$. \subsection{Comparing linear and nonlinear Fisher information} With the above remarks we can improve on Lemma \ref{Fisher-lin-nonlin}, which compares the linear and nonlinear Fisher information: \begin{prop}\label{Fisher-lin-nonlin.2} Under the same assumptions as in Lemma {\rm \ref{Fisher-lin-nonlin},} we have \begin{equation}\label{diseg.Fisher-lin-nonlin.2} I[g]=\int_{\mathbb{R}^d}\left|\nabla g\right|^2 V_* \,{\rm d}y \le k_1\mathcal{I}[w] + k_3\mathcal{F}^{1+\sigma}[w] \end{equation} for any $m<1$, where $g=(w-1)V_*^{m-1}$, $\sigma=2/[d+2+m/(1-m)]>0$, $k_1= 2W_1^{3-2m}$, and $k_3>0$ is given at the end of the proof.
\end{prop} \noindent {\sl Proof.~} We estimate the second term of the inequality of Lemma \ref{Fisher-lin-nonlin} in the following way \[\begin{split} \int_{\mathbb{R}^d}g^4V_*^{4-3m}\,{\rm d}y &= \int_{\mathbb{R}^d}\big(|w-1|V_*^{m-1}\big)^4V_*^{4-3m}\,{\rm d}y = \int_{\mathbb{R}^d}\big(|w-1|\big)^4V_*^{m}\,{\rm d}y\\ &\le \left\|w-1\right\|_{\infty}^2\int_{\mathbb{R}^d}\big(|w-1|\big)^2V_*^{m}\,{\rm d}y = \left\|w-1\right\|_{\infty}^2F[w].\\ \end{split} \] Now we recall the interpolation inequality \eqref{interp.Cj.Lp} with $j=0$ \begin{equation} \|f\|_{\LL^\infty(\mathbb{R}^d)}\; \le\;\mathcal{C}_{p,d}\;\|f\|_{C^{1}(\mathbb{R}^d)}^{\frac{d}{d+p}}\;\|f\|_p^{\frac{p}{d+p}} \end{equation} then we apply it to $f=w-1$ and we let $p=2+m/(1-m)$. We get: \begin{equation}\label{infinity-interp} \begin{split} \|w-1\|_{\LL^\infty(\mathbb{R}^d)}\; &\le\mathcal{C}_{p,d}\;\|w-1\|_{C^{1}(\mathbb{R}^d)}^{\frac{d}{d+p}} \;\left(\|w(s)-1\|^{2+\frac{m}{1-m}}_{\LL^{2+\frac{m}{1-m}}(\mathbb{R}^d)} \right)^{\frac{1}{d+p}}\\ &\le\mathcal{C}_{p,d}M_1\left(2\overline{D}_mW_1^{2-m} \mathcal{F}[w]\right)^\frac{1}{d+2+\frac{m}{1-m}} =\mathcal{C}_{p,d}M_1\left(2\overline{D}_mW_1^{2-m}\right)^{\sigma/2}\mathcal{F}[w]^{\sigma/2},\\ \end{split} \end{equation} where $\sigma=2/[d+2+m/(1-m)]>0$ for any $m<1$ and we used inequality \eqref{Lp-entr} and the fact that $\|w-1\|_{C^{1}(\mathbb{R}^d)}^{\frac{d}{d+p}}\le M_1$ by Theorem \ref{lem:holdereg}. Thus, using also \eqref{Disug.Entr.Lin-Nolin}, we have proved that \[\begin{split} k_2\int_{\mathbb{R}^d}g^4V_*^{4-3m}\,{\rm d}y &\le k_2\left\|w-1\right\|_{\infty}^2F[w] \le k_3\mathcal{F}[w]^{1+\sigma}. \end{split} \] The expression of $k_3$ is then \[ k_3=2k_2W_1^{2-m}\left(\mathcal{C}_{p,d}M_1\right)^2\left(2\overline{D}_mW_1^{2-m}\right)^{\sigma} \] where $\|w-1\|_{C^{1}(\mathbb{R}^d)}^{\frac{d}{d+p}}\le M_1$, $\overline{D}_m = \left(\frac{D_0-D_1}{1-m}\right)^{\frac{m}{1-m}} \left(1+\frac{D_*}{D_1}\right)^{\frac{m(2-m)}{(1-m)^2}}$, and $\mathcal{C}_{p,d}$ is the constant of the interpolation inequality \eqref{interp.Cj.Lp} with $j=0$ and $p=2+m/(1-m)$. \qed \noindent\textbf{Remarks.
} (i) The above proposition holds for any $m<1$ and allows us to conclude that $I(s)\to 0$ as $s\to +\infty$, since we already know that both $\mathcal{I}(s)$ (cf. Proposition \ref{prop.diff.ineq.Fisher}) and $\mathcal{F}(s)$ tend to zero as $s\to +\infty$. (ii) When $m=m_*$, we obtain that $\sigma=4/(3d)$. But in this critical case we shall need another, finer comparison between the linear and the nonlinear Fisher information that holds only when $m=m_*$, namely we would like to know that there exist $s_0>0$ and a constant $k_4>0$ such that \begin{equation}\label{I.3} I[g(s)]\le k_4\mathcal{I}[w(s)] \end{equation} for any $s\ge s_0$, where $g=(w-1)V_*^{m_*-1}$. Unfortunately the above inequality is not guaranteed for all times $s\ge s_0$. In the next section we will prove a weaker version of this statement, sufficient for our purposes, namely we will show that the above estimate \eqref{I.3} holds on a family of intervals $[s_{1,k}, s_{2,k}]$ that is sufficiently dense as $s\to\infty$. The technical details will be postponed to Appendix A4. \section{Proofs of the main results in the critical case} \label{sect.nlem2} In this section we shall always take $m=m_*$, and we shall show that the nonlinear flow converges with the same rate as the linear one, cf. Section \ref{ssect.lcem}. We shall use the relationship between the entropy functional $\mathcal{F}$ and the Fisher information $\mathcal{I}$, namely $\,{\rm d}\mathcal{F}/{\,{\rm d}s}=-\mathcal{I}$. In view of the absence of any spectral gap (or Hardy-Poincar\'e) inequality, valid instead in the case $m\neq m_*$, we have to proceed differently. The Gagliardo-Nirenberg inequality, which in the linear case gives the correct decay of the linearized entropy in the $\LL^2(V_*^{2-m}\,{\rm d}x)$-norm, turns out to work as well in the nonlinear case, as the previous proposition started to show.
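\medskip \noindent\textbf{Remark.} For later reference we record the elementary computation behind the two critical exponents just mentioned; recall that the critical value is $m_*=(d-4)/(d-2)$, so that $1-m_*=2/(d-2)$ and $m_*/(1-m_*)=(d-4)/2$. Hence
\[
2+\frac{m_*}{1-m_*}=2+\frac{d-4}{2}=\frac{d}{2}\,, \qquad
\sigma=\frac{2}{d+2+\frac{m_*}{1-m_*}}=\frac{2}{d+2+\frac{d-4}{2}}=\frac{4}{3d}\,.
\]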
\medskip \noindent{\bf Proof of Theorem \ref{thm.conv.entropy}.} Notice first that $\|g\|_\infty$ is finite and bounded as a function of time. In fact, by hypothesis (H1') we know that \[\begin{split} |g(s,y)|&=|w(s,y)-1|V_{D_*}^{m-1}=\left|\frac{v-V_{D_*}}{V_{D_*}}\right|V_{D_*}^{m-1} \le c_0 |V_{D_1}(y)-V_{D_0}(y)|V_{D_*}^{m-2}\\ &\le c_1 V_{D_*}^{2-m}V_{D_*}^{m-2} = c_1 \end{split}\] for all $y\in\mathbb{R}^d$ and all $s>0$, where the $c_i$ are positive constants depending only on $m, D_0, D_1, D_{*}$. The inequality $|V_{D_1}-V_{D_0}|\le c V_{D_*}^{2-m}$ can be proved easily using the explicit expression of the pseudo--Barenblatt solutions (see the proof of Lemma \ref{Lem.Entr.Lp}). \medskip Next we prove that $I$ is bounded as a function of time. Indeed, by Lemma \ref{Fisher-lin-nonlin} we observe that \begin{equation}\label{I.1} I[g]\le k_1\mathcal{I}[w] + k_2\int_{\mathbb{R}^d}g^4V_{D_*}^{4-3m}\,{\rm d}y \le k_1\mathcal{I}[w] + k_2k_3\|g\|_\infty^4 \end{equation} where we have noticed that, for $m=m_*$, $V_{D_*}^{4-3m}=\big(1+|x|^2\big)^{-(d+4)/2}$ is integrable. It has been proved in Proposition \ref{prop.diff.ineq.Fisher} that $\mathcal{I}$ is also bounded. By conservation of relative mass, cf. Proposition 2.3 of \cite{BBDGV}, we know that $\|g(s)\|_1=\|g(0)\|_1$, where we have used the fact that $v_0-V_{D_*}$ is here also taken to be nonnegative, with $\int (v_0-V_{D_*})\,\,{\rm d}y=M>0$, in this part of the proof. This implies that the ratio $I/M^2=I_{m_*}[g(s)]/\|g(s)\|_1^2$ is bounded as a function of time. We shall now use the Gagliardo-Nirenberg inequalities of Proposition \ref{GNprop}, taking $v=g(s)$ and putting $F=\|g\|^2_{L^2(V^{2-m}\,{\rm d}x)}$, $I$ the linear Dirichlet form, and $M=\|g\|_{L^1(V^{2-m}\,{\rm d}x)}$: \begin{equation}\label{GNI.3} F^3\le K_1 IM^4. \end{equation} The validity of such inequalities depends on the boundedness of the ratio $I_{m_*}[g(s)]/\|g(s)\|_{L^1(V^{2-m}\,{\rm d}x)}^2$, which is ensured along the evolution, as mentioned above.
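\medskip \noindent The exponent computation behind the integrability of $V_{D_*}^{4-3m}$ used above is elementary: recalling the explicit form of the pseudo--Barenblatt profile, $V_{D_*}(x)\simeq (1+|x|^2)^{-1/(1-m_*)}=(1+|x|^2)^{-(d-2)/2}$ up to multiplicative constants, and that
\[
4-3m_*=\frac{4(d-2)-3(d-4)}{d-2}=\frac{d+4}{d-2}\,,
\]
we get $V_{D_*}^{4-3m_*}\simeq(1+|x|^2)^{-\frac{d-2}{2}\cdot\frac{d+4}{d-2}}=(1+|x|^2)^{-\frac{d+4}{2}}$, which is integrable on $\mathbb{R}^d$ because $d+4>d$.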
We now prove an entropy--entropy production inequality. We obtain a differential inequality for the entropy $\mathcal{F}$ by comparing it with the Fisher information $\mathcal{I}$ via Gagliardo-Nirenberg inequalities: \begin{equation}\label{entr.prod} \begin{split} \mathcal{F}^3[w] &\le^{(a)} \left[\frac 12W_0^{m-2}\int_{\mathbb{R}^d}|w-1|^2V_{D_*}^{m}\,{\rm d}y\right]^3 = \left[\frac 12W_0^{m-2}\right]^3F^3\\ &\le^{(b)} \left[\frac 12W_0^{m-2}\right]^3K_1 IM^4=K_2IM^4\\ \end{split} \end{equation} where (a) follows from \eqref{Disug.Entr.Lin-Nolin} of Lemma \ref{Lem.Bounds.RE}, while in (b) we used the Gagliardo-Nirenberg inequality \eqref{GNI.3} above. \medskip \noindent (i) In order to continue the argument, we assume for the moment that the initial datum satisfies $v_0\ge V_{D_*}$ and is radially symmetric, so that $g_0=(w_0-1)V_{D_*}^{m-1}$ is {\it nonnegative}. This extra assumption will be removed afterwards. Under it we will prove in Appendix A4 that there is an infinite sequence of intervals of times $[s_{1,k}\,, s_{2,k}]\subset[2k,2k+2]$ (hence $s_{2,k}\le s_{1,k+1}$) such that \begin{equation} I[g(s)]\le k_4\,\mathcal{I}[w(s)]\qquad\mbox{\rm for all} \ s\in \bigcup_{k\in\mathbb{N}}[s_{1,k}\,, s_{2,k}] \end{equation} for a constant $k_4$ that does not change along the evolution. We shall prove moreover that the length of each of such intervals is at least $1/2$ for all $k\ge k_0$, which in particular implies that \begin{equation} \sum_{k=k_0}^n (s_{2,k}- s_{1,k})\ge \sum_{k=k_0}^n \frac{1}{2}=\frac{n-k_0+1}{2}\ge c\, s_{2,n} \end{equation} whenever $n\ge n_0$ is large, for a suitable $c>0$.
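\medskip \noindent The existence of a suitable $c>0$ in the last display is elementary: since $[s_{1,k}\,, s_{2,k}]\subset[2k,2k+2]$ we have $s_{2,n}\le 2n+2$, so that, for instance with $c=1/8$,
\[
\frac{n-k_0}{2}\ \ge\ \frac{2n+2}{8}\ \ge\ \frac{s_{2,n}}{8} \qquad\mbox{for all } n\ge 2k_0+1\,.
\]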
Then, recalling that $\mathcal{I}=-\,{\rm d}\mathcal{F}/\,{\rm d}s$, and using \eqref{entr.prod}, we conclude that on the good intervals \[ \mathcal{F}^3\le K_2\,M^4\, I \le K_2\,k_4\,M^4\,\mathcal{I}= k_5\,\mathcal{I}= -k_5\frac{\,{\rm d}\mathcal{F}}{\,{\rm d}s}\,, \] and an integration over each interval $[s_{1,k}, s_{2,k}]$, summed over $k$, gives \[ \sum_{k=1}^{n} \left(\frac{1}{{\cal F}(s_{2,k})^2}-\frac{1}{{\cal F}(s_{1,k})^2}\right) \ge \frac{1}{k_5}\sum_{k=1}^n (s_{2,k}- s_{1,k}) \ge \frac{c}{k_5}s_{2,n}\,. \] This implies \[ \frac{1}{{\cal F}(s_{2,n})^2}-\frac{1}{{\cal F}(s_{1,1})^2}\ge\frac{c}{k_5}s_{2,n}, \] since the intermediate terms are such that \[ -\frac{1}{{\cal F}(s_{1,k})^2}+\frac{1}{{\cal F}(s_{2,k-1})^2}\le 0 \] because $\mathcal{F}(s)$ is non-increasing and $s_{2,k-1}\le s_{1,k}$. The monotonicity of the function ${\cal F}(s)$ then allows us to conclude that for all $s\in [s_{2,k}, s_{2,k+1}]$ \begin{equation} \frac{1}{{\cal F}(s)^2}\ge \frac{1}{{\cal F}(s_{2,k})^2}\ge \frac{1}{{\cal F}(s_{1,1})^2}+ \frac{c}{k_5}\,s_{2,k}\,. \end{equation} Using the fact that $s\le s_{2,k+1}\le s_{2,k}+4$ we get \begin{equation} {\cal F}(s)\le \frac{1}{\left[{\cal F}(s_{1,1})^{-2}+ c\,k_5^{-1}\,s\right]^{\frac{1}{2}}}\le \frac{1}{(c_0+c_1s)^{\frac{1}{2}}} \end{equation} for large times $s$ and some positive constants $c_0, c_1$. We have thus proved that the nonlinear entropy decays with the same rate as the linear one, when the initial relative mass is nonzero. \medskip \noindent (ii) {\sc Proof without extra restrictions.} The arguments used above and in Appendix A4 are valid changing $h$ into $-h$ and $g$ into $-g$, under the same a priori bounds. Hence, the conclusion is valid for negative and radial initial difference $v_0-V_{D_*}\le 0$. To deal with the general case, where $v_0-V_{D_*}$ is not radial or does not have a sign, we use the maximum principle, after writing $|v_0(x)-V_{D_*}(x)|\le f(|x|)$.
By comparison we have $v_1\le v\le v_2$, where $v_1$ and $v_2$ are the solutions corresponding to the initial data $V_{D_*}- f$ and $V_{D_*}+ f$ respectively. For the corresponding $w=v/V_{D_*}$, $h=w-1$ and $g=h\,(D_*+|y|^2)$ a similar comparison holds. Thus, $w_1\le w\le w_2$, where $w_1\le 1$ and $w_2\ge 1$ are the solutions with radial initial data $1 \pm (f/V_{D_*})$, hence functions of $r=|y|$ and $s$. The same idea applies to $g$. Now take into account the form of the entropy \begin{equation}\label{Entropy.2} {\mathcal F}[w]:= \frac1{1-m}\int_{\mathbb{R}^d} \Psi(w)V_{{D_*}}^m\,{\rm d}y, \qquad \mbox{with} \quad \Psi (w)=(w-1)-\frac 1m(w^m-1). \end{equation} We note that $\Psi(w)$ is convex and attains its minimum value zero at $w=1$; hence $0\le\Psi(w)\le \Psi(w_1)+\Psi(w_2)$ whenever $w_1\le w\le w_2$, so that ${\mathcal F}[w]\le{\mathcal F}[w_1]+{\mathcal F}[w_2]$. Since we have just proved that the decay result holds for both $g_1$ and $g_2$, the statement also holds for $g$, even if we do not assume that $v_0-V_{D_*}$ is nonnegative or radial.\qed \medskip \noindent {\bf Proof of Corollary \ref{Conv.Weight}}. We recall the following facts, proved in Lemma 6.2 of \cite{BBDGV} under the running assumptions (H1) and (H2). First, for any $\vartheta\in[0,\frac{2-m}{1-m}]$ there exist positive constants $K_\vartheta, K_2$ such that \begin{equation*} \left\||x|^\vartheta(v-V_{D_*})\right\|_{2}\leq K_\vartheta\left( \mathcal{F}[w]\right)^{1/2}\;. \end{equation*} Moreover \begin{equation*} \left\|v-V_{D_*}\right\|_{2}\leq K_2\left( \mathcal{F}[w]\right)^{1/2}\;. \end{equation*} We now recall the result of Lemma 3.6 of \cite{BBDGV}: \begin{equation}\label{Holder.Cont.Est} \|v(s)-V_{D_*}\|_{C^\alpha(\mathbb{R}^d)}\le\,\mathcal{H}\,\|v(s)-V_{D_*}\|_\infty\quad\forall\; s\geq s_0\;.
\end{equation} for a suitable $\alpha\in(0,1)$, and we combine it with the interpolation inequality~\eqref{eq:interpolation}, with $\lambda=-\alpha/d< 0=\mu<1/2=\nu$ and $C=\mathcal C_{-\alpha/d,\,0,\,1/2}$: \begin{equation*} \|v(s)-V_{D_*}\|_\infty \le\,C\,\|v(s)-V_{D_*}\|_{C^\alpha}^\vartheta\; \|v(s)-V_{D_*}\|_2^{1-\vartheta} \le\,C\,\mathcal{H}^\vartheta\,\|v(s)-V_{D_*}\|_\infty^\vartheta\;\|v(s)-V_{D_*}\|_2^{1-\vartheta} \end{equation*} where $\vartheta=d/(d+2\alpha)$. This implies \begin{equation*} \|v(s)-V_{D_*}\|_\infty \le C^{1/(1-\vartheta)}\,\mathcal{H}^{\vartheta/(1-\vartheta)}\,\|v(s)-V_{D_*}\|_2 \le \mathcal{K}_\vartheta \left( \mathcal{F}[w]\right)^{1/2} \quad\forall\; s\geq s_0\;. \end{equation*} From H\"older's inequality, \[ \|v(s)-V_{D_*}\|_q\le \|v(s)-V_{D_*}\|_\infty^{(q-2)/q}\, \|v(s)-V_{D_*}\|_2^{2/q} \le\mathcal{K}_q\left( \mathcal{F}[w]\right)^{1/2} \] for all $q\in[2,\infty]$, we deduce that $\|v(s)-V_{D_*}\|_q$ decays with the same rate as $\left( \mathcal{F}[w]\right)^{1/2}$. If $q\in(1,2)$, we know from Lemma 6.2 of \cite{BBDGV} that there exists a positive constant $K(q)$ such that \begin{equation*} \|v-V_{D_*}\|_q\le K(q)\left( \mathcal{F}[w]\right)^{1/2}\;. \end{equation*} This and the known decay of ${\mathcal F}$ prove (ii). To prove (iii), use first \eqref{interp.Cj.Lp} with the choice $p=\infty$, i.e. \begin{equation}\label{interp.Cj.Lp.infty} \|f\|_{C^{j}(\mathbb{R}^d)}\;\le\;\mathcal C_{j,d}\;\|f\|_{C^{j+1}(\mathbb{R}^d)}^{\frac{j}{j+1}}\;\|f\|_\infty^{\frac{1}{j+1}} \end{equation} for any $j\in\mathbb{N}$, and the decay of the L$^\infty$ norm, namely $\|v-V_{D_*}\|_\infty\le Ks^{-1/4}$, to get \begin{equation*} \|v(s)-V_{D_*}\|_{C^j(\mathbb{R}^d)}\le H_js^{-\frac{1}{4(j+1)}}\quad\forall\;s\geq s_0\;, \end{equation*} where in fact $H_j$ depends on $s$ itself and tends to zero as $s\to+\infty$, so that the bound can be improved.
Indeed, we iterate the procedure, inserting such a bound for the $C^{j+1}$ norm into \eqref{interp.Cj.Lp.infty} to get a new bound for the $C^{j}$ norm. What we get after $h$ steps is, for any fixed $s\ge s_0$ and $j\in{\mathbb N}$: \[ \|v(s)-V_{D_*}\|_{C^j(\mathbb{R}^d)}\le \frac{{\mathcal C}_{j,d}^{\sum_{i=0}^{h-1}\left(\frac j{j+1}\right)^i}K^{\frac 1j \sum_{i=1}^{h+1}\left(\frac j{j+1}\right)^i}H_j^{\left( \frac j{j+1}\right)^h}}{s^{k_h}} \] where the value of $k_h$ will be determined below. Notice first that the numerator of the above expression remains finite as $h\to\infty$, for any fixed $s\ge s_0$ and $j\in{\mathbb N}$. As for $k_h$, by construction it satisfies the recursion relation \[ k_0=\frac 1{4(j+1)},\ \ \ k_{h+1}=\frac j{j+1}k_h+\frac 1{4(j+1)}. \] Subtracting $1/4$ from both sides of the latter equation gives \[ k_{h+1}-\frac 14=\frac j{j+1}\left(k_h-\frac 14\right), \] which immediately gives $k_h=\frac 14-\left(\frac j{j+1}\right)^h\frac j{4(j+1)}$, thus proving that $k_h\to 1/4$ as $h\to+\infty$.\qed \noindent{\bf Proof of Corollary \ref{thm:CRE-exp}.} We have proved that $\mathcal{F}[w(s)]\le c_0s^{-1/2}$, and by Lemma \ref{Lem.Bounds.RE} we also know that $F[w]\le c_1 \mathcal{F}[w]$. By Lemma \ref{Lem.Entr.Lp} with $m=m_*$, we have \begin{equation} \|w(s)-1\|_{d/2}^{d/2}=\|w(s)-1\|^{2+\frac{m_*}{1-m_*}}_{\LL^{2+\frac{m_*}{1-m_*}}(\mathbb{R}^d)} \le \overline{D}_{m_*} F[w(s)] \le c_2 \mathcal{F}[w(s)] \le c_3s^{-1/2}\,. \end{equation} Moreover, \eqref{infinity-interp} yields \begin{equation} \begin{split} \|w(s)-1\|_{\LL^\infty(\mathbb{R}^d)}\; &\le c_4\mathcal{F}[w(s)]^\frac{1}{d+2+\frac{m_*}{1-m_*}}=c_4\mathcal{F}[w(s)]^{2/(3d)} \le c_5 s^{-1/(3d)}\,.\\ \end{split} \end{equation} Interpolating between these bounds shows that, for $q\in[d/2,+\infty]$: \[ \|w(s)-1\|_q\le\|w(s)-1\|_\infty^{[q-(d/2)]/q}\|w(s)-1\|_{d/2}^{d/(2q)} \le c_6s^{-\frac 13\left(\frac{1}{d}+\frac1{q}\right)}.
\] To improve this bound we insert it into the interpolation inequality \eqref{interp.Cj.Lp}, using Theorem \ref{lem:holdereg} as well, to get \[ \|w(s)-1\|_{C^j(\mathbb{R}^d)}\le \frac C{s^{\frac q3\left(\frac 1d+\frac 1q\right)\frac{k-j}{d+qk}}} \] for any $q\in[d/2,+\infty]$ and $k>j\in{\mathbb N}$. As a function of $q$ the exponent of $s$ is nonincreasing, so we choose $q=d/2$ to get \[ \|w(s)-1\|_{C^j(\mathbb{R}^d)}\le \frac C{s^{\frac{k-j}{d(2+k)}}}. \] To optimize in $k$ we should take $k=\infty$, which is not allowed, so for any fixed $\varepsilon>0$ we take $k$ large enough so that \[ \|w(s)-1\|_{C^j(\mathbb{R}^d)}\le \frac C{s^{\frac{1-\varepsilon}{d}}} \] as claimed. Putting this bound back into \eqref{interp.Cj.Lp} with $j=0$, and using what is known so far for the decay of the L$^p$ norm, we get a decay of the form $\|w-1\|_\infty\le Cs^{-\alpha}$, $\alpha$ being given (with an inessential renaming of the free parameter $\varepsilon$) by \[ \alpha(p,k)=\left(\frac{1-\varepsilon}{d}\right)\frac d{d+pk}+\frac13\left(\frac1d+\frac1p\right)\frac{pk}{d+pk}. \] Maximizing $\alpha$ w.r.t. $p$ when $\varepsilon$ is small enough yields again $p=d/2$, so that after some calculation we get the exponent $\alpha(d/2,k)=\frac1d-\frac{2\varepsilon}{d(2+k)}.$ This proves the claim for the L$^\infty$ norm, and hence also for all L$^q$ norms with $q\in(d/2,+\infty)$ by interpolation.\qed \section{Proofs for fast diffusion with $m\neq m_*$ revisited}\label{m.neq.mstar} The previous method allows for shorter proofs of the convergence when $m\ne m_*$, and at the same time provides some minor improvements of the paper \cite{BBDGV}. We recall that, in the case $m\not=m_*$, the spectral gap inequality \begin{equation}\label{HP.ineq} F[g(s)]\le \lambda_{m,d}^{-1}\,I [g(s)] \end{equation} holds true, and the best constant has been known for $m<m_*$ since \cite{BBDGV-CRAS, BBDGV}.
We also recall the result of Proposition \ref{Fisher-lin-nonlin.2}, \[ I [g]=\int_{\mathbb{R}^d}\left|\nabla g\right|^2 V_{D_*} \,{\rm d}y \le k_1\mathcal{I}[w] + k_3\mathcal{F}^{1+\sigma}[w] \] where, in particular, $k_1= 2W_1^{3-2m}$. From these bounds we get \begin{equation}\label{E-Epr.m} \begin{split} \mathcal{F}[w] &\le^{(a)} \frac 12W_0^{m-2}\int_{\mathbb{R}^d}|w-1|^2V_{D_*}^{m}\,{\rm d}y = \frac 12W_0^{m-2}F\\ &\le^{(b)} \frac 12W_0^{m-2}\lambda_{m,d}^{-1}I [g] \\ &\le^{(c)} \frac 12W_0^{m-2}\lambda_{m,d}^{-1} \left[k_1\mathcal{I}[w] + k_3\mathcal{F}^{1+\sigma}[w]\right]\\ \end{split} \end{equation} where in $(a)$ we compared the linear and nonlinear entropies via inequality \eqref{Disug.Entr.Lin-Nolin}, in $(b)$ we used the above spectral gap inequality, and in $(c)$ the above-mentioned Proposition \ref{Fisher-lin-nonlin.2}. We may rewrite the latter formula as a differential inequality: \[ \mathcal{F}' + W_0^{2-m}W_1^{2m-3}\lambda_{m,d}\mathcal{F}-\frac{W_1^{2m-3}}{2}k_3\mathcal{F}^{1+\sigma}\le 0 \] so that, by comparison with its explicit solution, we get \[ \mathcal{F}[w(s)] \le \frac{\ee^{-k_4(s-s_0)}} {\left[\mathcal{F}[w(s_0)]^{-\sigma} + \frac{k_3 W_1^{2m-3}}{2k_4}\left(\ee^{-\sigma k_4 s} -\ee^{-\sigma k_4 s_0} \right) \right]^{\frac{1}{\sigma}}} \le k_5 \ee^{-k_4s} \] provided $\mathcal{F}[w(s_0)]$ is small enough, a property which holds for $s_0$ large enough. This gives an exponential decay of the entropy, with rate $k_4=W_0^{2-m}W_1^{2m-3}\lambda_{m,d}$. Note that $\lambda_{m,d}$ is the optimal constant in the Hardy-Poincar\'e inequality \eqref{HP.ineq}, known for $m<m_*$ since \cite{BBDGV}. Then we can proceed as in \cite{BBDGV} to show that the optimal rate is given by $\lambda_{m,d}$. Indeed, one can substitute $W_0$ and $W_1$ with $\inf_{\mathbb{R}^d}|w|$ and $\sup_{\mathbb{R}^d}|w|$ respectively, allowing them to depend on time.
One then proves that they both tend to $1$ as $s\to +\infty$, so that $k_4\to\lambda_{m,d}$; this can be done in view of the uniform convergence of the relative error, $\|w(s)-1\|_{\LL^\infty(\mathbb{R}^d)}\to 0$ as $s\to +\infty$, together with a Gronwall-type argument. For more details we refer to Section 6.3 of \cite{BBDGV}. \medskip \noindent{\bf Remarks} \noindent (i) When $m=m_*$ the above steps do not hold, since we do not have a spectral gap for the linearized generator. This is one of the reasons which forced us to use Gagliardo-Nirenberg inequalities which, instead, compare the Fisher information with a power of the entropy. \noindent (ii) This method simplifies and complements some proofs of \cite{BBDGV} when $m\neq m_*$, but also gives a more detailed proof in the case $m\le 0$, which was only briefly treated in \cite{BBDGV}\,. We finally emphasize that the analysis of the present paper covers the case $m=0$, that is, logarithmic diffusion, even in dimension $d=4$, since in that case $m_*=0$, so that no spectral gap holds. \noindent (iii) The interpolations made in the proof of Corollary \ref{Conv.Weight} are valid also in the case $m\neq m_*$ and allow us to improve the convergence rate of the derivatives, proving that the rate is always given by $\lambda_{m,d}$\,. We state here this improved version of the main asymptotic Theorem of \cite{BBDGV}. \begin{thm}[Convergence with rate, $m\neq m_*$] Under the assumptions of Theorem~{\rm~\ref{Thm:A1}}, if $m\ne m_*$, there exists $s_0\geq 0$ such that the following properties hold: \begin{enumerate} \item[{\rm (i)}] For any $q\in[q_*,\infty]$, there exists a positive constant $C_q$ such that \begin{equation*} \|v(s)-V_{D_*}\|_q\le C_q\;\ee^{-\lambda_{m,d} \,s}\quad\forall\;s\geq s_0\;.
\end{equation*} \item[{\rm (ii)}] For any $ \vartheta\in[0,(2-m)/(1-m)]$, there exists a positive constant $K_\vartheta$ such that \begin{equation*} \big\|\,|x|^\vartheta(v(s)-V_{D_*})\big\|_{2}\le K_\vartheta\;\ee^{-\lambda_{m,d}\,s}\quad\forall\;s\geq s_0\;. \end{equation*} \item[{\rm (iii)}] For any $j\in\mathbb{N}$, there exists a positive constant $H_j$ such that \begin{equation*} \|v(s)-V_{D_*}\|_{C^j(\mathbb{R}^d)}\le H_j\,\ee^{-\lambda_{m,d}\,s}\quad\forall\;s\geq s_0\;. \end{equation*} \end{enumerate} \end{thm} The constants $C_q$, $K_\vartheta$ and $H_j$ depend on $s_0$, $m$, $d$, $v_0$, $D_0$, $D_1$, and on $q$, $\vartheta$ and $j$; $s_0$ also depends on $D_0$ and $D_1$. It is remarkable that the decay rate of the nonlinear problem is given exactly by $\lambda_{m,d}$. Rescaling back to the original equation, we obtain results in terms of intermediate asymptotics, cf. Corollary \ref{Cor:A2} or Corollary 1.3 of \cite{BBDGV}\,. \section*{Appendices} \subsection*{\bf A1. Calculation of curvatures. Proof of Lemma \ref{lemma.Rij}} We can use the well-known formulas for the Ricci tensor as a function of the metric data: $$ R_{ij}=g^{km}R_{ikjm}, \quad R_{ikjm}=\frac12\left(\partial^2_{kj}g_{im}+ \partial^2_{im}g_{kj}- \partial^2_{km}g_{ij}- \partial^2_{ij}g_{km} \right)+ g_{np}(\Gamma^n_{kj}\Gamma^p_{im}+\Gamma^n_{km}\Gamma^p_{ij}), $$ but in the case of a conformal transformation there is a worked-out relation between the Ricci tensors of two metrics ${\bf g}$ and $\widetilde{{\bf g}}$ in terms of the conformal factor relating them. Precisely, if $\widetilde{{\bf g}}= (1/\varphi^2){\bf g}$, where $\varphi$ is a scalar, the formula reads as follows \cite{Besse}: $$ {\widetilde R}-R=\frac1{\varphi^2}\left[(d-2)\,\varphi\nabla^2\varphi+ \left(\varphi\Delta_{\bf g}\varphi - (d-1)\,{\bf g}(\nabla \varphi, \nabla \varphi)\right) {\bf g}\right].
$$ where $R=(R_{ij})$ is the Ricci tensor of ${\bf g}$, ${\widetilde R}=({\widetilde R}_{ij})$ is the Ricci tensor of ${\widetilde {\bf g}}$, $\nabla$ denotes the gradient, $\nabla^2$ the Hessian and $\Delta_{\bf g}$ the Laplace-Beltrami operator with respect to ${\bf g}$. Specializing the formula to the case ${\bf g}=\delta_{ij}$, so that $R_{ij}=0$, we get in coordinates $$ {\widetilde R}_{ij}= \frac1{\varphi^2}\left[(d-2)\varphi\partial^2_{ij}\varphi+ (\varphi\Delta\varphi - (d-1)|\nabla \varphi |^2) \delta_{ij}\right]. $$ Put now ${\widetilde g}_{ij}=(1+|x|^2)^{-1}\delta_{ij}$, so that $\varphi=(1+|x|^2)^{1/2}$. Then $\partial_i\varphi=x_i(1+|x|^2)^{-1/2},$ $$\partial^2_{ij}\varphi=-x_ix_j(1+|x|^2)^{-3/2}+ (1+|x|^2)^{-1/2}\delta_{ij}, \quad \Delta\varphi=-|x|^2(1+|x|^2)^{-3/2}+ d(1+|x|^2)^{-1/2}. $$ Applying the last formula we get, after some calculations, $$ {\widetilde R}_{ij}=-\frac{(d-2)x_ix_j}{(1+|x|^2)^{2}}+ \left[\frac{(d-2)|x|^2+ 2(d-1) }{(1+|x|^2)^{2}}\right]\delta_{ij}. $$ These expressions take a clear form at the particular point $\widehat{x}=(X,0,\dots,0)$, which entails no loss of geometric generality, since the metric is conformal and radial, hence invariant under rotations. We get \begin{equation} \widetilde R_{11}(\widehat{x})=\frac{2(d-1)}{(1+X^2)^2}; \qquad \widetilde R_{ii}(\widehat{x})=\frac{(d-2)X^2+2(d-1)}{(1+X^2)^2} \quad \forall i=2,\cdots, d, \end{equation} and $\widetilde R_{ij}(\widehat{x})=0$ for all $i\ne j$. Both eigenvalues tend to zero as $|x|\to+\infty$, with different rates. It immediately follows that the symmetric tensor Ric is positive; indeed, given $\xi\in {\mathbb R}^d$, $\xi\ne 0$, we have \[ \widetilde R_{ij}(\widehat{x})\xi_i\xi_j \ge\frac{2(d-1)}{(1+X^2)^2}|\xi|^2>0, \] and the same is true for all $x\in \mathbb{R}^d$ by invariance under rotations.
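\medskip \noindent For the reader's convenience, the intermediate step in the calculations above is the simplification of the trace part: with $\varphi=(1+|x|^2)^{1/2}$,
\[
\varphi\Delta\varphi-(d-1)|\nabla\varphi|^2
= d-\frac{|x|^2}{1+|x|^2}-(d-1)\frac{|x|^2}{1+|x|^2}
=\frac{d}{1+|x|^2}\,,
\]
while $(d-2)\,\varphi\,\partial^2_{ij}\varphi=(d-2)\big(\delta_{ij}-\frac{x_ix_j}{1+|x|^2}\big)$. Dividing by $\varphi^2=1+|x|^2$ and collecting the coefficients of $\delta_{ij}$, namely $(d-2)+\frac{d}{1+|x|^2}=\frac{(d-2)|x|^2+2(d-1)}{1+|x|^2}$, one obtains the stated expression for $\widetilde R_{ij}$.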
If one wants to visualize the behaviour of this manifold, it is convenient to look at the Ricci curvatures given by \begin{equation} \widetilde r_1=\frac{\widetilde R(e_1,e_1)}{\widetilde g(e_1,e_1)}=\frac{2(d-1)}{(1+X^2)}, \qquad \widetilde r_i=\frac{\widetilde R(e_i,e_i)}{\widetilde g(e_i,e_i)}=\frac{2(d-1)+(d-2)X^2}{(1+X^2)}\,. \end{equation} Note that the transversal curvatures tend to $(d-2)$ as $|x|\to \infty$, while the curvature in the radial direction behaves like $O(|x|^{-2})$. This clearly shows the difference in the behaviour of the curvatures in the radial and transversal directions, which is typical of a cigar manifold. Finally, the value of the scalar curvature follows from the formula $R=g^{ij}R_{ij}$. Since we are in a conformal situation, it can be deduced in a direct way from the Yamabe formula \cite{yamabe60} $$ {\widetilde R}=-\frac{4(d-1)}{d-2}\frac{\Delta w}{w^{(d+2)/(d-2)}}, \quad \mbox{with} \ w=g^{(d-2)/4}, $$ where $g$ is the conformal factor, here $(1+|x|^2)^{-1}$, cf. the formulas e.\,g. in \cite{VazSmooth}, pages 211-212. In order to obtain the results stated in Lemma \ref{lemma.Rij} we only need to eliminate the tildes from $\widetilde R_{ij}$ and $\widetilde R$. \qed \medskip \subsection*{\bf A2. Explicit representation of the cigar} We give here a simple parametric representation of the cigar-like manifold $(M,{\bf g})$. We want to represent it as a hypersurface in $\mathbb{R}^{d+1}$. The radial symmetry of the metric suggests to represent such an embedded manifold as $z=f(|y|)$, with variables $(y,z)\in \mathbb{R}^{d+1}$, where $y\in\mathbb{R}^d$ and $z\in\mathbb{R}$, having a unique chart $x\in \mathbb{R}^d\mapsto (y,z)$ given by the formulas \[ r=|y|=\Phi(\varrho)\qquad\mbox{and}\qquad z=\Psi(\varrho), \] where $\varrho=|x|\ge 0$. We fix $\Phi(0)=\Psi(0)=0$. We want the Euclidean metric in $\mathbb{R}^{d+1}$ to induce the metric on the hypersurface.
We know that the infinitesimal length element in the radial direction satisfies \[ \,{\rm d}s^2=\,{\rm d} r^2 +\,{\rm d} z^2=\left[\Psi'^2(\varrho)+\Phi'^2(\varrho)\right]\,{\rm d}\varrho^2 =\frac{\,{\rm d}\varrho^2}{1+\varrho^2} \] which implies the relation $ \Psi'^2(\varrho)+\Phi'^2(\varrho)=1/(1+\varrho^2) $. On the other hand, the length calculation for the transversal part gives \[ \left[\frac{\Phi(\varrho)}{\varrho}\right]^2=\frac{1}{1+\varrho^2}. \] Solving the above two equations gives \[ \Phi(\varrho)=\frac{\varrho}{(1+\varrho^2)^{1/2}}, \qquad \Psi'(\varrho)=\frac{\varrho(2+\varrho^2)^{1/2}}{(1+\varrho^2)^{3/2}}; \] indeed, $\Phi'(\varrho)=(1+\varrho^2)^{-3/2}$, so that $\Psi'^2(\varrho)=\frac{1}{1+\varrho^2}-\frac{1}{(1+\varrho^2)^3}=\frac{\varrho^2(2+\varrho^2)}{(1+\varrho^2)^3}$. We see that $r=\Phi(\varrho)$ goes from 0 to 1 as $0<\varrho<\infty$. Analyzing the behaviour of $\Psi(\varrho)$, one concludes that \[ \Psi(\varrho)\approx \varrho^2 \quad\mbox{when }\varrho\approx 0, \qquad \Psi(\varrho)\approx \log\varrho \quad\mbox{when }\varrho\gg 1. \] This is the representation of a cigar. We point out that the transversal radius at infinity is constant; actually, $\Phi(\varrho)\to 1$ as $\varrho\to +\infty$. \medskip \subsection*{\bf A3. Some Technicalities} We recall here some technical facts that we used in the proofs. First we recall Theorem 2.4 of \cite{BBDGV}. \begin{thm}[Uniform $C^k$ regularity]\label{lem:holdereg} Let $m <1$ and $w \in \LL^{\infty}_{{\rm loc}}((0,T) \times \mathbb{R}^d)$ be a solution of~\eqref{eq.w}. Then for any $k\in\mathbb{N}$ and any $s_0\in (0,T)$, \begin{equation} \sup_{s\geq s_0}\|w(s)\|_{C^k(\mathbb{R}^d)}<+\infty\;. \end{equation} \end{thm} We also needed an interpolation lemma due to Gagliardo \cite{Ga}, cf. also Nirenberg, \cite[p. 126]{MR0109940}. \begin{lem} Let $\lambda$, $\mu$ and $\nu$ be such that $-\infty<\lambda \le \mu \le \nu <\infty$.
Then there exists a positive constant $\mathcal C_{\lambda,\mu,\nu}$, independent of $f$, such that \begin{equation}\label{eq:interpolation} \|f\|_{1/\mu}^{\nu-\lambda} \le \mathcal C_{\lambda,\mu,\nu}\|f\|_{1/\lambda}^{\nu-\mu} \; \|f\|_{1/\nu}^{\mu-\lambda}\quad\forall\;f\in\mathcal C(\mathbb{R}^d)\;, \end{equation} where $\|\cdot\|_{1/\sigma}$ stands for the following quantities: \noindent(i) If $\sigma>0$, then $\|f\|_{1/\sigma}=\left(\int_{\mathbb{R}^d}|f|^{1/\sigma}\,{\rm d}x\right)^\sigma$. \noindent(ii) If $\sigma<0$, let $k$ be the integer part of $(-\sigma d)$ and $\alpha=|\sigma|d-k$ be the (nonnegative) fractional part of $-\sigma d$. Using the standard multi-index notation, where $|\eta|=\eta_1+\ldots+\eta_d$ is the length of the multi-index $\eta=(\eta_1,\ldots,\eta_d)\in\mathbb{Z}^d$, we define \begin{equation*}\label{def.C^k} \|f\|_{1/\sigma}=\left\{\begin{array}{lll} \displaystyle\max_{|\eta|=k}\; \big|\partial^\eta f\big|_\alpha=\displaystyle\max_{|\eta|=k}\; \sup_{x,y\in\mathbb{R}^d}\;\dfrac{\big|\partial^\eta f(x)-\partial^\eta f(y)\big|}{|x-y|^\alpha}=\|f\|_{C^\alpha(\mathbb{R}^d)}& \mbox{if~}\alpha>0\;,\\[5mm] \displaystyle\max_{|\eta|=k}\;\displaystyle\sup_{z\in\mathbb{R}^d}\big|\partial^\eta f(z)\big|:=\|f\|_{C^k(\mathbb{R}^d)}& \mbox{if~}\alpha=0\;. \end{array}\right. \end{equation*} As a special case, we observe that $\|f\|_{-d/j}=\|f\|_{C^{j}(\mathbb{R}^d)}$. \noindent(iii) By convention, we denote $\|f\|_{1/0}=\sup_{z\in\mathbb{R}^d}|f(z)|=\|f\|_{C^0(\mathbb{R}^d)}=\|f\|_{\infty}$. \end{lem} \medskip \noindent\textbf{Remark.} The following special case of the above interpolation inequality \eqref{eq:interpolation} has been used in the paper: let $k>j\in\mathbb{N}$ and $\lambda=-k/d\le\mu=-j/d\le\nu=1/p$.
Inequality~\eqref{eq:interpolation} becomes \begin{equation}\label{interp.Cj.Lp} \|f\|_{C^{j}(\mathbb{R}^d)}\;\le\;\mathcal C_{j,k,p}\;\|f\|_{C^{k}(\mathbb{R}^d)}^{\frac{d+jp}{d+kp}}\;\|f\|_p^{\frac{p(k-j)}{d+kp}} \end{equation} for any $k>j\in\mathbb{N}$ and $p>0$. \medskip \subsection*{\bf A4. Complete proof of the estimates of Section \ref{sect.nlem2}} In the proof of Theorem \ref{thm.conv.entropy} in Section \ref{sect.nlem2} we have assumed that for every solution under the stated conditions there is an infinite sequence of intervals of {\sl good times} $[s_{1,k}\,, s_{2,k}]\subset[2k,2k+2]$, with $s_{2,k}<s_{1,k+1}$ for all $k$, such that \begin{equation}\label{times.sk} I[g(s)]\le k_4\,\mathcal{I}[w(s)]\qquad\mbox{\it for all } \ s\in \bigcup_{k\in\mathbb{N}}[s_{1,k}\,, s_{2,k}]. \end{equation} Recall that, in view of hypothesis (H2) and the discussion made in Section \ref{sect.nlem2}, we may assume that $g$ is radially symmetric and positive. We will also prove that the length $l_k= s_{2,k}- s_{1,k}$ of the intervals in our construction is at least $1/2$ for all $k\ge k_0$. The proof of these facts is long and would have broken the flow of the proof of Theorem \ref{thm.conv.entropy}: this is the reason why we put it here. The main point in getting \eqref{times.sk} consists in obtaining stronger estimates of the remainder term in the inequality of Lemma \ref{Fisher-lin-nonlin} than the ones obtained in Proposition \ref{Fisher-lin-nonlin.2}. We restate that lemma here for the convenience of the reader: \textit{Let $0<W_0\le w\le W_1<+\infty$ be a measurable function on $\mathbb{R}^d$, with $W_0<1$ and $W_1>1$, and assume that $\mathcal{I}[w]<+\infty$.
Then for any $m<1$ the following inequality holds true: \begin{equation} I [w(s)] \le k_1\mathcal{I}[w(s)]+ R[w(s)], \qquad \mbox{with } \ R[w(s)]= k_2\int_{\mathbb{R}^d}g(s,y)^4V_*(y)^{4-3m}\,{\rm d}y\,, \end{equation} where $g=(w-1)V_*^{m-1}$; $k_1$ and $k_2$ are positive constants.} We need to control the remainder term $R[w(s)]$ to proceed with the asymptotic estimate. Note that $V_*^{4-3m}$ is integrable for $m=m_*$: in fact, for such a value of $m$ we have \begin{equation}\label{Fisher.Reminder} R[w(s)]= k_2\int_{\mathbb{R}^d}\frac{g(s,y)^4}{(1+|y|^2)^{(d+4)/2}}\,{\rm d}y\,. \end{equation} Put now $N(s)=N[g(s)]=\|g(s,\cdot)\|_\infty$, the supremum of $g$ at fixed time $s>0$. We know that $N(s)$ is uniformly bounded in time. Then we have \begin{equation}\label{better} R[w(s)]\le k_3 \,[N(s)]^4\,. \end{equation} We want to estimate the decay of $N(s)$ in time in terms of the linearized Fisher information $I [w]$. Suppose for a moment that we can prove that the remainder term is small relative to $I [w]$, more precisely that $R[w(s)]\le \frac12I [w(s)]$ for all large $s$. In that case we conclude that $I [w(s)] \le 2 k_1\mathcal{I}[w(s)],$ and the desired estimate \eqref{times.sk} easily follows. Hence, we need to prove that \begin{equation}\label{conditionK} N(s)^4 \le \frac{I[g(s)]}{K} \end{equation} with $K>k_3$, say $K=2k_3$. This is a most convenient estimate on the values of $g$. Unfortunately, even under such an assumption it is not clear that the last inequality holds at all times, or even at all large times. Therefore, we shall be cautious and call {\it good times} those times at which \eqref{conditionK} holds with $K\ge 2k_3$. The frequency and density of the intervals of such times are important, as the end of the proof of Theorem \ref{thm.conv.entropy} shows. We will now proceed with the proof of the existence of the time intervals stated at the beginning of this section. They will consist only of so-called good times.
The proof is split into two parts, namely \noindent(i) \textit{Controlling the remainder term away from the origin.} This is the part where we use the fact that $|v_0-V_*|\le \tilde f$ for a {\it radially symmetric} $\tilde f$; \noindent(ii) \textit{Transforming the outer control into a control on a small ball,} namely we will control $\sup_{r\ge R} g(r,s)$ for small $R>0$. Due to the peculiarities of the parabolic Harnack inequality, we shall prove that such a control only takes place for a large set of so-called good times, which suffices for our goals. \noindent \textbf{Part (i). The control of a radial $g$ far away from the origin}\\ In the calculation that follows we drop the $s$-dependence for convenience, since time does not enter into the argument. Let $g(r)$ be a \textit{nonnegative} continuous function such that $g(r)\to 0$ as $r\to\infty$, and let \[ {\cal M}_R=\int_R^{\infty}\frac{g(r)}{(1+r^2)^{d/2}}r^{d-1}\,{\rm d} r<\infty\,,\qquad\mbox{and}\qquad I_R=\int_R^{\infty}\frac{|g'(r)|^2}{(1+r^2)^{(d-2)/2}}r^{d-1}\,{\rm d} r<\infty\,. \] The powers are arranged so that it is clear that, for $r>1$, we are dealing with the radial case, and that we are merely asking the mass and the linearized Fisher information to be finite.
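\medskip \noindent More explicitly, for $m=m_*$ these weights arise from $V_*^{2-m_*}\simeq(1+r^2)^{-d/2}$ and $V_*\simeq(1+r^2)^{-(d-2)/2}$ (up to multiplicative constants), and for $r\ge 1$ they are comparable to the one-dimensional weights $1/r$ and $r$:
\[
\frac{1}{2^{d/2}}\cdot\frac{1}{r}\ \le\ \frac{r^{d-1}}{(1+r^2)^{d/2}}\ \le\ \frac{1}{r}\,,
\qquad
\frac{r}{2^{(d-2)/2}}\ \le\ \frac{r^{d-1}}{(1+r^2)^{(d-2)/2}}\ \le\ r\,,
\]
since $r^2\le 1+r^2\le 2r^2$ there. This is why the integrals $\int_1^\infty |g|\,r^{-1}\,{\rm d}r$ and $\int_1^\infty |g'|^2\,r\,{\rm d}r$ appearing below are controlled by ${\cal M}_1$ and $I_1$ up to dimensional constants.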
Now pick $\alpha>0$, $R_1>R>1$ and calculate \[\begin{split} g^{1+\alpha}(R)-g^{1+\alpha}(R_1) &=-\alpha\int_R^{R_1}g'(r)g^\alpha(r)\,{\rm d} r \le \alpha\int_R^{R_1}|g'(r)|g(r)^\alpha\,{\rm d} r \le \alpha\int_R^{R_1}|g'(r)|r^{\frac{1}{2}}\frac{g(r)^\alpha}{r^{\frac{1}{2}}}\,{\rm d} r\\ &\le \alpha\left[\int_R^{R_1}|g'(r)|^2 r\,{\rm d} r\right]^{\frac{1}{2}} \left[\int_R^{R_1}\frac{g(r)^{2\alpha}}{r}\,{\rm d} r\right]^{\frac{1}{2}} \end{split} \] Now, if we assume $\alpha\ge 1/2$, the last integral can be bounded as follows: \[ \int_R^{R_1}\frac{|g(r)|^{2\alpha}}{r}\,{\rm d} r \le \sup_{r\ge R}|g(r)|^{2\alpha-1}\, \int_R^{\infty}\frac{|g(r)|}{r}\,{\rm d} r, \] so that for $\alpha\ge 1/2$ we have obtained \[\begin{split} g^{1+\alpha}(R)-g^{1+\alpha}(R_1)\le \alpha \left[\sup_{r\ge 1}|g(r)|^{2\alpha-1}\, \int_1^{\infty}\frac{|g(r)|}{r}\,{\rm d} r \right]^{\frac{1}{2}} \left[\int_1^{\infty}|g'(r)|^2 r\,{\rm d} r\right]^{\frac{1}{2}}\,. \end{split} \] Letting $R_1\to\infty$, and using that $g(R_1)\to 0$ as $R_1\to \infty$, we get \[\begin{split} g^{1+\alpha}(R)\le \alpha \left[\sup_{r\ge 1}|g(r)|^{2\alpha-1}\, \int_1^{\infty}\frac{|g(r)|}{r}\,{\rm d} r \right]^{\frac{1}{2}} \left[\int_1^{\infty}|g'(r)|^2 r\,{\rm d} r\right]^{\frac{1}{2}}\,. \end{split} \] Taking the supremum over $R\ge1$ on the l.h.s. and simplifying we get: \[\begin{split} \left[\sup_{R\ge 1}g(R)\right]^{4}\le \alpha^{\frac{8}{3}} \left[\int_1^{\infty}\frac{|g(r)|}{r}\,{\rm d} r \right]^{\frac{4}{3}} \left[\int_1^{\infty}|g'(r)|^2 r\,{\rm d} r\right]^{\frac{4}{3}}\le c\,{\cal M}_1^{\frac{4}{3}}\,I_1^{\frac{4}{3}}\le\tilde c I^{\frac43}. \end{split} \] This is a very good estimate because it says that the supremum of $g^4$ outside the unit ball is not only proportional to $I$, as expected at so-called good times, but even more: it is proportional to a higher power of $I$. Now, recall that $I[w(s)]\to 0$ as $s\to\infty$.
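For the reader's convenience, the exponent bookkeeping in the last simplification can be made explicit. Writing $N_1=\sup_{R\ge 1}g(R)$, the previous display gives
\[
N_1^{1+\alpha}\le \alpha\, N_1^{\frac{2\alpha-1}{2}}\,{\cal M}_1^{\frac12}\,I_1^{\frac12},
\qquad\mbox{hence}\qquad
N_1^{\,1+\alpha-\frac{2\alpha-1}{2}}=N_1^{3/2}\le \alpha\,{\cal M}_1^{\frac12}\,I_1^{\frac12},
\]
and raising to the power $8/3$,
\[
N_1^{4}\le \alpha^{8/3}\,{\cal M}_1^{4/3}\,I_1^{4/3}.
\]
Note that the exponent $1+\alpha-\frac{2\alpha-1}{2}=\frac32$ is the same for every $\alpha\ge 1/2$, so the choice of $\alpha$ only affects the constant.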
If the same could be done near $r=0$, the proof that every large $s$ is a good time would be complete. The previous calculation can be done in the complement of a ball of radius $R$ as small as we like, but then $g(R)$ will also depend on an inverse power of $R$, because of the presence of the factors $1+r^2$ in the denominators of the last quantities. For $0<R<1$ the last line becomes \[\begin{split} \left[\sup_{r\ge R}|g(r)|\right]^{4}\le \alpha^{\frac{8}{3}} \left[\int_R^{\infty}\frac{|g(r)|}{r}\,{\rm d} r \right]^{\frac{4}{3}} \left[\int_R^{\infty}|g'(r)|^2 r\,{\rm d} r\right]^{\frac{4}{3}}\le \frac{C}{R^{8(d-1)/3}} {\cal M}_R^{\frac{4}{3}}\,I_R^{\frac{4}{3}}. \end{split} \] The estimate blows up at $R=0$. Therefore, we cannot let $R\to 0$ to obtain an estimate for $N(s)$ over all of $\mathbb{R}^d$. \noindent\textit{Justifying that $g$ goes to zero at infinity.} To conclude part (i) of the proof, it remains to prove that $g(R)\to 0$ as $R\to \infty$. Choose $R_n$ such that \[ \int_{R_n}^{\infty}|g'|^2 r\,\,{\rm d} r<\frac{1}{4n^2}\qquad\mbox{and}\qquad \int_{R_n}^{\infty}\frac{|g|^2}{r} \,\,{\rm d} r<\frac{1}{4n^2} \] and define $\widetilde R_n=\min\left\{r\ge R_n\;; g^2(r)\le \frac{1}{2n^2}\right\}$. Indeed the set on the r.h.s. is not empty since $g^2/r$ is integrable at infinity: this is not compatible with $g^2$ staying above $1/(2n^2)$ for all $r\ge R_n$. Notice that $\widetilde R_n\ge R_n$ and $g^2(\widetilde R_n)\le1/(2n^2)$. Hence, for all $R\ge \widetilde R_n$, the Cauchy-Schwarz inequality with weights $r$ and $1/r$ gives \[ g^2(R)=g^2(\widetilde R_n)+2\int_{\widetilde R_n}^{R}g\,g'\,{\rm d} r \le \frac{1}{2n^2} +2\int_{\widetilde R_n}^{R}g\,|g'|\,{\rm d} r \le \frac{1}{2n^2} + 2\left[\int_{R_n}^{\infty}|g'|^2\,r\,{\rm d} r\right]^{\frac{1}{2}}\left[\int_{R_n}^{\infty}\frac{g^2}{r}\,{\rm d} r\right]^{\frac{1}{2}}\le \frac1{n^2} \] Therefore, $0\le g(R)\le 1/n$ for all $R\ge \widetilde R_n$; since $n$ is arbitrary, $g(R)\to 0$ as $R\to\infty$. The proof of part (i) is now complete. \medskip \noindent\textbf{Part (ii).
Transforming the outer control into a control on a small ball.} \noindent In part (i) we have estimated the supremum of a radial $g^4$ outside a ball of radius $R>0$ in terms of $I[g(s)]^{4/3}$, so that the problem is to estimate the supremum inside a ball as well, hopefully in terms of $I[g(s)]^{1+\alpha}$, or at least in the form $\varepsilon I[g(s)]$. We are unable to prove that for all (sufficiently large) times. To circumvent this difficulty we have to make use of a rather involved argument that takes into account the possibility that such an estimate fails because of bad behaviour of $g$ at points near the origin. We begin by carefully labeling the times. We say that a time $s\in[0,\infty)$ belongs to the class of {\sl good times} $\mathcal{G}_K$ if \[ N(s)^4=\sup_{y}(g(s,y))^4 < \frac{I[g(s)]}{K} \] We are not claiming that some half-line satisfies $[T,\infty)\subset {\cal G}_{2k_3}$, which would finish the proof in a simple way. Finally, we say that a time is {\sl very good}, $s\in {\cal V}_C$, if \begin{equation} N(s)\le C \, I[g(s)]^{4/3} \end{equation} for some $C>0$, in the spirit of the radial estimate away from the origin. Note that since $I(s)\to 0$, the very good times with constant $C$ are included in the good times with any constant $K>0$ provided $s$ is large enough. \medskip \noindent\textsc{Harnack inequality.} The study of points near the space origin is based on classical regularity theory for linear or quasilinear parabolic equations in divergence form. We are going to use the version in the celebrated paper by Aronson-Serrin \cite{AS}. We consider the equation satisfied by the error function \begin{equation} h(s,y)=w(s,y)-1=\frac{v(s,y)}{V_{D_*}(y)}-1. \end{equation} It can be written in the standard form \begin{equation} \partial_sh=\nabla\cdot {\bf A}(y,h,\nabla h) + B(y,h,\nabla h).
\end{equation} In fact, starting with the equation satisfied by $w$, we have \begin{equation}\label{eq.w-1} \partial_s h = \partial_s w =\frac{1}{V_*}\nabla\cdot\left[w V_*\nabla \left(\frac{w ^{m-1}-1}{m-1}V_*^{m-1}\right)\right] =\frac{1}{V_*}\nabla\cdot\left[(h +1) V_*\nabla \left(\frac{(h +1)^{m-1}-1}{m-1}V_*^{m-1}\right)\right] \end{equation} so that we can identify \[ {\bf A}=(h+1)\nabla\left(\frac{(h+1)^{m-1}-1}{m-1}V_*^{m-1}\right),\ \ \ B=(h+1)\frac{\nabla V_*}{V_*} \cdot\nabla\left[\frac{(h+1)^{m-1}-1}{m-1}V_*^{m-1}\right] \] and we have to check that the structure conditions are satisfied by ${\bf A}$ and $B$ in a compact ball with constants that do not depend on $s$. In fact, the structure conditions are satisfied in the homogeneous form of \cite{AS}, which means that, in the notation of that paper, the terms $f,g,h$ in the structure conditions for ${\bf A}$ and $B$ vanish. Checking this is a straightforward calculation involving also the known bounds on $w$, namely $0<W_0\le w=h+1\le W_1$. We note in passing that, since we already know that $w\to 1$ uniformly in $\mathbb{R}^d$ as $s\to\infty$, the lower and upper bounds $W_0,W_1$ can be taken closer and closer to 1 if we restrict the time to $s\ge s_0$ with $s_0$ large enough. In any case, we conclude that the Harnack inequality holds in the standard form, as stated below. This also implies a similar Harnack inequality for $g$ if we work on a bounded space domain, say, in $B_1(0)$. We state it next. Take $T>0$ large and consider the parabolic cylinders \[ Q=[T-2,T]\times B_1(0)\,,\qquad Q_{1/2}=[T-1/2, T]\times B_{1}(0)\,,\qquad \widetilde{Q}=[T-2, T-1]\times B_1(0)\,. \] The parabolic Harnack inequality on the disjoint cylinders $Q_{1/2}$ and $\widetilde{Q}$ is then \begin{equation} \inf_{Q_{1/2}}g(s,y)\ge c\; \sup_{\widetilde{Q}}g(s,y)=c\; \widetilde{N} \end{equation} for some positive constant $c<1$ depending only on structural constants.
Note that since $g$ is continuous on $Q$, all the above suprema are attained at some points. \medskip \noindent\textsc{Evolution of the maximum of $h$.} The equation for $h$ can be written as \[\begin{aligned} \partial_sh&=\nabla\cdot\left[(h+1)\nabla\left(\frac{(h+1)^{m-1}-1}{m-1}V_*^{m-1}\right)\right]+V_*^{m-2}(h+1)^{m-1}\nabla V_*\cdot\nabla h\\ &+V_*^{m-3}|\nabla V_*|^2(h+1)[(h+1)^{m-1}-1]=\\ &=\nabla h\cdot\nabla\left(\frac{(h+1)^{m-1}-1}{m-1}V_*^{m-1}\right)+(h+1)\nabla\cdot[(h+1)^{m-2}V_*^{m-1}\nabla h]\\ &+mV_*^{m-2}(h+1)^{m-1}\nabla V_*\cdot\nabla h\\ &+(h+1){[(h+1)^{m-1}-1]}{\nabla\cdot(V_*^{m-2}\nabla V_*)} +{V_*^{m-3}|\nabla V_*|^2}(h+1)[(h+1)^{m-1}-1] \end{aligned} \] In particular, using the fact that $h$ is small, we get \begin{equation} \partial_sh= \mbox{second and first order terms in $h$}+ C(r,h)\,h \end{equation} where $C(r,h)\le \overline{k}$ for a suitable $\overline{k}$ independent of $r$ and depending only on the known a priori bounds for $h$. Then, as a consequence of the Maximum Principle, cf. e.g. \cite{AS}, the maximum $N(s)$ obeys the growth rate \begin{equation} N'(s)\le \overline{k} N(s). \end{equation} In fact, the function $H(s,y)=N(s_0)e^{\overline{k}(s-s_0)}$ is an explicit supersolution in the whole space for $s\ge s_0$. \medskip \noindent\textsc{The structure of good times. Alternative.} We are now going to prove the following. \begin{lem} For every time interval $(T-2,T)$ with $T$ large enough there is at least one subinterval of length $1/2$ consisting of very good times.\end{lem} \noindent {\sl Proof.~} We use the cylinders $Q_{1/2}$ and $\widetilde{Q}$ as in the previous paragraph on the Harnack inequality. The idea is to consider separately the two possibilities: (a) either the maximum in $y$ of $h$ at every time of the lower cylinder is attained outside the ball $B_1(0)$, or (b) the maximum at one of such times, say $s_0$, is attained inside. In case \textsl{(b)}, $T$ must be a good time if it is large enough, as we show next.
We take $s_0\in (T-2,T-1)$ as above and let $N_1$ be the corresponding maximum in the ball. Of course $N_1\le \widetilde N$. We now apply the growth rate of the previous paragraph to obtain \begin{equation} N(s_2)\le C\,N_1\le C\widetilde{N}, \qquad C=e^{2\overline{k}}. \end{equation} for every $s_2$ in the upper cylinder: $T-(1/2)\le s_2\le T$. On the other hand, for every $y\in B_1(0)$ we have for such $s_2$ the lower estimate $h(s_2,y)\ge c\widetilde{N}$. We conclude that the maximum and the minimum at all those times are related by a constant. This is also true for the function $g$ up to a small change in the constant, hence \begin{equation} g(s_2,y'_M)\le C_1 g(s_2,y) \end{equation} where now $y_M'$ is the point of maximum of $g$ in the ball $B_1(0)$. Now, we know that on the boundary $|y|=1$ there is a good estimate for $g$; more precisely, for such $y$ of unit norm $g(s_2,y)$ satisfies the estimate that defines the {\sl very good times}, and it does so with a fixed constant $C$. We conclude that $g(s_2,y)$, $|y|\le 1$, also satisfies such an estimate with a possibly worse constant $C'=C_1C$. Since the estimate was already true for $|y|\ge 1$, we are done. Therefore, whenever $T$ is not a good time in ${\cal G}_K$ with $K=2k_3$, the first part of the alternative, \textsl{(a)}, must be true. But in that case for every time in the lower cylinder we know that the maximum of $h$ is attained in the exterior of the ball $B_1$. Looking at the expression $g=h\,(1+|y|^2)$, this also means that the maximum of $g(s,\cdot)$ at times in $(T-2,T-1)$ is attained at an exterior point (maybe a different one). So all these times are very good. Recalling what was said before, they are good times in ${\cal G}_K$ if $T$ is large enough.\qed \noindent\textsc{Choice of intervals of good times.} We can apply the previous results letting $T=2k+2$, for $k\ge k_0$ and $k_0$ sufficiently large.
The above lemma implies that there exists a subinterval $[s_{1,k}\,, s_{2,k}]\subset[2k,2k+2]$ of length at least $1/2$ made of times in ${\cal G}_K$ with $K=2k_3$. \section{Concluding remarks and open problems} \label{sec.cr} {\bf\ref{sec.cr}.1.} We have studied the special situation of the critical exponent $m_*=(d-4)/(d-2)$ in dimensions $d\ge 3$, where our considerations make sense. Algebraic extensions have been shown to be fruitful or intriguing in some dynamical studies. Here, for $d=2$ we formally have $m_*= \pm\infty$, which is an extreme situation for the porous medium equation that has appeared in the literature (for instance, in connection with the mesa problem), \cite{mesa, CF, VazSmooth}, while for $d=1$ we formally get $m_*=3$, a value inside the porous medium range where nothing special has been shown to happen. \medskip \noindent {\bf \ref{sec.cr}.2.} We pose the following questions: $\bullet$ Are the rates obtained in this paper optimal for a certain class of data, as the linearized analysis suggests? $\bullet$ Can we prove convergence, maybe with worse rates or without rates, for more general initial data? We recall that for $m>m_c$ all nonnegative initial data in $L^1(\mathbb{R}^d)$ are attracted towards a Barenblatt solution, with no rate in that generality. $\bullet$ Find an explicit optimal dependence of the constant in the asymptotic formula on the data. $\bullet$ Assuming that we get an optimal rate of convergence, can we find a profile for the next level of approximation? \noindent {\bf\ref{sec.cr}.3.} One may wonder if the techniques used in \cite{BBDGV-CRAS,BBDGV} for the case $m\neq m_*$, which use Hardy-Poincar\'e inequalities, work also in the case $m_*$. We have partially given a negative answer to this question in Corollary \ref{No.Hardy.No.Party}, in which we have shown that no inequality of Hardy type can hold for the linearized Fisher information $I[w]$.
However, one may wonder if modified versions of the Hardy-Poincar\'e inequalities, with logarithmic terms added in the spirit of the classical Hardy inequality in $\mathbb{R}^2$, allow one to solve the problem. There is indeed a family of valid Hardy inequalities (see below) in which the Dirichlet form involved has a logarithmic correction, but we are not able to prove the asymptotic results by means of such inequalities. It is then a further open problem to see whether this path may lead to the goal or not. \begin{prop}\label{HP.log} Let $d\ge 3$. We have \begin{equation}\label{HP.log.ineq} \int_{\mathbb{R}^d}g^2\,\,{\rm d}\mu_\alpha\,\le \mathcal{H}_{\alpha\,,d}\,\int_{\mathbb{R}^d}\left|\nabla g\right|^2\,\,{\rm d}\nu_\alpha \end{equation} for any $g\in\mathcal D(\mathbb{R}^d)$ and for any $0<\alpha\le\frac{d}{2}-1$, where \begin{equation}\begin{split} \,{\rm d}\mu_\alpha(y)&=\left(1+|y|^2\right)^{-\frac{d}{2}}\,\left[1+\log(1+|y|^2)\right]^{\alpha-1}\,\,{\rm d} y,\\ \,{\rm d}\nu_\alpha(y)&=\left(1+|y|^2\right)^{1-\frac{d}{2}}\,\left[1+\log(1+|y|^2)\right]^{\alpha+1}\,{\rm d} y\\ \end{split} \end{equation} and \begin{equation}\label{Const.Gap.Lem} \mathcal{H}_{\alpha\,,d}=\frac{2(d-2)}{\alpha\,(d-2-2\alpha)\,\min \Big\{2\alpha\,,(d-2-2\alpha)\Big\}} \end{equation} \end{prop} \noindent{\bf Acknowledgements.} G.G. acknowledges an ESF grant of the research group GLOBAL (Global and Geometric Aspects of Partial Differential Equations), which allowed him to visit M.B. and J.L.V. at the Universidad Aut{\'o}noma de Madrid, and the Departamento de Matem\'{a}ticas of that University, where part of this work has been done. M.B. and J.L.V. were partially supported by Spanish Project MTM2005-08760-C02-01. M.B. and J.L.V. wish to thank the Dipartimento di Matematica of the Politecnico di Torino, where part of the work has been done. We thank J. Dolbeault for a careful reading of this article, which resulted in a serious improvement.
\section{Introduction} It is difficult for humans to reach the stars with current rocket technology. The possible energy sources range from chemical fuel and nuclear power to, conceptually, anti-matter. The major problem with these systems is that propulsion requires large amounts of time and fuel\cite{Lemos}. We try to solve this problem by starting from the requirement of a large amount of fuel. Current rockets in operation are chemical rockets which carry oxidant and fuel at the same time. Interestingly, airplanes with a similar propulsion system carry only fuel, without oxidant, because the atmosphere contains enough oxygen, which is absorbed by the airplane engines during flight. Similarly, if there were enough fuel in the universe, a spaceship could absorb it during its flight, just as airplanes absorb oxygen. Fortunately, dark matter is widely spread throughout the universe, and its mass density is about five times the baryonic matter density\cite{Komatsu:2008hk}, which makes it a possible new energy source for interstellar flight. Thus the fuel requirement may be met in a self-sufficient way, with dark matter as the energy source. \section{DM engine and acceleration in the saturation density} We give an example of a DM engine which uses DM annihilation remnants as propulsion. Fig.\ref{cartoon} is a sketch of the DM engine for this kind of new spaceship. The DM engine is the box in the picture. Here we assume that the DM particles and the annihilation products cannot pass through the wall of the box. In picture A, the spaceship moves very fast from right to left. The DM particles, which are assumed to be static, go into the box and are absorbed in picture B. In picture C, we compress the box and raise the number density of the DM for annihilation, where we assume the annihilation process happens immediately. In picture D, only the wall on the right side is open.
The annihilation products, for example Standard Model (SM) particles, all go in the right direction. The processes from A to D form the working cycle of the engine. Thus, the spaceship is boosted by the recoil of these SM particles. Note that the spaceship can decelerate with the same system when it reaches its destination, by opening the left wall in picture D. \begin{figure}[h] \vspace*{-.03in} \centering \includegraphics[width=3.0in,angle=0]{picA.eps}% \includegraphics[width=3.0in,angle=0]{picB.eps} \\ \includegraphics[width=3.0in,angle=0]{picC.eps}% \includegraphics[width=3.0in,angle=0]{picD.eps} \vspace*{-.03in} \caption{The illustration of the work cycle of the DM engine. \vspace*{-.1in}} \label{cartoon} \end{figure} This kind of new spaceship has a very interesting property: the faster it moves, the faster it accelerates. In picture A, we assume the rest mass of the spaceship is $ M$ and its velocity is $\beta$ in units of the speed of light. The time for one cycle of the engine is $ dt $ and the area of the engine is $ S$. During one work cycle, the number of DM particles collected by the engine is $ N = \beta dt \cdot S \cdot \frac{{\rho _D }}{{m_D }}$, where $ {\rho _D }$ and $ {m_D }$ are the density and the mass of the DM, respectively. In picture D, we assume for simplicity that there is only one kind of particle X among the annihilation products. The annihilation process is $ DD \to X\bar X$, with mass $ m_X < m_D$. For the DM particles, we assume a DM mass $ m_D \sim O(100GeV)$ and annihilation products that are mainly SM fermions, which is quite natural in SuperSymmetry and Extra Dimension models. Thus, the mass $m_X$ of the annihilation products is quite small compared with the mass $m_D$ of the dark matter. So it is reasonable to use the approximation $ m_X = 0$, and the products are treated as massless photons in the following calculation.
Using the conservation of energy and momentum, we get \begin{equation} \left\{ {\begin{array}{*{20}c} {\beta dt \cdot S \cdot \rho _D + p^0 = p^0 + dp^0 + \varepsilon } \\ {0 + p^1 = p^1 + dp^1 - \varepsilon \cdot \theta } \\ \end{array}} \right. \label{conservation} \end{equation} where $p^0 = M\gamma $ and $ p^1 = M\gamma \beta $ are the energy and the momentum of the spaceship, $ \gamma \equiv (1 - \beta ^2 )^{ - 1/2}$, and $ \varepsilon$ is the energy of the massless photons. $ \theta \in [0,1]$ is defined as the propulsion efficiency. For example, if the annihilation particles all go in the right direction, then $ \theta = 1$; if they go with equal probability in any direction of the right hemisphere, then $ \theta = 1/2$. Moreover, $\theta$ can also be used to account for other inefficiencies of the engine. From Eqn.\ref{conservation}, one gets the differential equation for the velocity, \begin{equation} \frac{{k\beta }}{{\theta ^{ - 1} + \beta }} = \gamma ^3 \frac{{d\beta }}{{dt}} \label{differential} \end{equation} where $ k \equiv S\rho _D /M$. In the non-relativistic region, the above equation takes the simple form \begin{equation} \theta k \cdot \beta = \frac{{d\beta }}{{dt}}. \end{equation} To carry out the numerical calculations, we first choose some reasonable parameters. We assume the mass of the spaceship is $M = 100ton$ and the area is $S = 100m^2$, in line with current rockets and space shuttles. For the DM density, we use the saturation density $ \rho _{sat}$ in the center of cusped halos. The saturation density in the halo is due to the balance between the annihilation time scale of the DM, $ [\left\langle {\sigma v} \right\rangle \rho _{sat} /m_D ]^{ - 1}$, and the gravitational infall time scale of the DM, $ (G\bar \rho )^{ - 1/2}$, where $ {\bar \rho }$ is taken to be $200$ times the critical density.
Thus the saturation density is $ \rho _{sat} \sim 10^{19} M_ \odot \cdot kpc^{ - 3} $\cite{Berezinsky:1992mx, Lavalle:1900wn}. The propulsion efficiency is taken to be $ \theta = 0.5$, since in picture D we assume the annihilation particles go with equal probability in any direction of the right hemisphere. We can also rewrite the parameter $k$ as follows, \begin{equation} k = 2 \times 10^{ - 4} s^{ - 1} \cdot \left( {\frac{\rho }{{10^{19} M_ \odot \cdot kpc^{ - 3} }}\frac{S}{{100m^2 }}\frac{{100ton}}{M}} \right) \end{equation} One can solve Eqn.\ref{differential} and obtain the time and length needed for acceleration as functions of the velocity, \begin{equation} t = (\theta k)^{ - 1} \cdot \left. {[\frac{{1 + \theta \beta }}{{\sqrt {1 - \beta ^2 } }} + \ln (\frac{\beta }{{1 + \sqrt {1 - \beta ^2 } }})]} \right|_{\beta _0 }^\beta \label{contime}, \end{equation} \begin{equation} L = (\theta k)^{ - 1} \cdot \left. {\frac{{\beta + \theta }}{{\sqrt {1 - \beta ^2 } }}} \right|_{\beta _0 }^\beta. \label{conlength} \end{equation} where $ {\beta _0 }$ is the initial velocity at $ t = 0$. We plot the above equations in Fig.\ref{constdensity}. We can see that the velocity increases exponentially with time, since the acceleration is proportional to the velocity. In the non-relativistic region, where $ \beta \ll 1$, Eqn.\ref{contime} and Eqn.\ref{conlength} simplify to \begin{equation} \beta = \beta _0 e^{\theta kt} , \end{equation} \begin{equation} L = (\theta k)^{ - 1} (\beta - \beta _0 ) . \label{conlength1} \end{equation} The initial velocity $ {\beta _0 }$ is taken to be $ 10^{ - 6} c$, which is much smaller than the first cosmic velocity (the orbital velocity at the Earth's surface). However, the result is not sensitive to the initial velocity, because of the exponential increase. In Fig.\ref{constdensity}, we see that the spaceship can reach relativistic speed in about $2$ days, and the length needed for acceleration is about $ 10^{ - 4} pc$.
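The quoted numbers can be cross-checked with a short script (our own sketch; the unit conversions, and the factor of $c$ that restores SI units to $k=S\rho_D/M$ since $\beta$ is measured in units of $c$, are our bookkeeping, not part of the original text):

```python
import math

# Physical constants (SI)
C = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg
KPC = 3.086e19         # kiloparsec, m
PC = 3.086e16          # parsec, m

# Saturation density ~ 1e19 M_sun / kpc^3, converted to kg/m^3
rho_sat = 1e19 * M_SUN / KPC**3

# Spaceship parameters from the text
S = 100.0              # engine area, m^2
M = 100e3              # spaceship mass, kg (100 ton)
theta = 0.5            # propulsion efficiency

# k = S * rho * c / M  (the factor of c restores SI units)
k = S * rho_sat * C / M
print(f"k = {k:.2e} s^-1")                       # ~ 2e-4 s^-1, as quoted

# Non-relativistic exponential growth: beta = beta0 * exp(theta*k*t)
beta0, beta = 1e-6, 0.5
t = math.log(beta / beta0) / (theta * k)
print(f"time to beta = 0.5: {t/86400:.1f} days")  # ~ 1.5 days, order of the quoted 2 days

# Acceleration length, non-relativistic form of Eqn. (conlength1), times c for metres
L = C * (beta - beta0) / (theta * k)
print(f"length: {L/PC:.1e} pc")                  # a few 1e-5 pc, i.e. order 1e-4 pc
```

The relativistic formula of Eqn.\ref{contime} gives a somewhat longer time near $\beta\to 1$, which is why the text quotes about two days.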
From the above equations, if the DM density $ \rho$ or the area $S$ of the spaceship is larger, the time $t$ and length $L$ needed for acceleration go down; if the mass $M$ of the spaceship is larger, they increase. However, the mass of the DM particle has little influence on the results, while the DM density matters greatly. \begin{figure}[h] \vspace*{-.03in} \centering \includegraphics[width=3.4in,angle=0]{contime.eps}% \includegraphics[width=3.4in,angle=0]{conlength.eps} \vspace*{-.03in} \caption{The velocity as a function of the time and length needed for the acceleration at the saturation density. \vspace*{-.1in}} \label{constdensity} \end{figure} \section{Acceleration in the halo or subhalo} Before celebrating the attainment of relativistic speed, we should check whether the saturation region in the halo or subhalo is large enough for the above calculation. The DM profile can be parameterized as $\rho(r)=\frac{\rho_s}{(r/r_s)^{\gamma}[1+(r/r_s)^{\alpha}]^{(\beta-\gamma)/\alpha}}$, where $\rho_s$ and $r_s$ are the scale density and scale radius parameters, respectively. The parameters $(\alpha,\beta,\gamma)$ are $(1,3,1)$ for the NFW profile\cite{Navarro:1996gj}. Since we are interested in the central region of the halo, where $ r \ll r_s$, the profile can be simplified as \begin{equation} \rho = \frac{{\rho _s r_s }}{r}. \label{NFW} \end{equation} This profile is singular at the center of the halo. It is natural to have a cut-off of this singularity due to the balance between the annihilation rate of the DM and the gravitational infalling rate of the DM. The saturation DM density is taken to be $ \rho _{sat} \sim 10^{19} M_ \odot \cdot kpc^{ - 3} $, and thus the saturation radius is $ r_{sat} = \rho _s r_s /\rho _{sat}$.
Once we know the scale density $\rho_s$ and the scale radius $r_s$, we can calculate the saturation radius $ r_{sat}$. The parameters $\rho_s$ and $r_s$ are fully determined by the concentration model and the DM halo mass, as calculated in the appendix. Here we show the saturation radius $ r_{sat}$ in Fig.\ref{rsat}. We can see that the saturation radius of a halo or subhalo is much smaller than the length required for acceleration to relativistic speed. \begin{figure}[h] \vspace*{-.03in} \centering \includegraphics[width=3.4in,angle=0]{rsat.eps} \vspace*{-.03in} \caption{The saturation radius $ r_{sat}$ for different (sub)halo masses and concentration models. B01 and ENS01 stand for different concentration models, described in the appendix. \vspace*{-.1in}} \label{rsat} \end{figure} In Fig.\ref{real}, we show the details of the acceleration in a subhalo. The subhalo with mass $ 10^6 M_ \odot$ in the B01 model is taken as an example; it has a saturation radius of about $ 10^{ - 9} pc$. Starting from the center of the subhalo with initial velocity $ \beta _0 = 10^{ - 6} c$, the spaceship reaches a velocity of about $ 10^{ - 5} c$ when it leaves the saturation region, as can be read off from Fig.\ref{constdensity}. However, the rest of the subhalo is not sufficient to accelerate the spaceship to relativistic speed, since the density begins to decrease as $ r^{ - 1}$. By solving the differential equation numerically, we obtain the relations among velocity, time and distance shown in Fig.\ref{real}. We can see that the spaceship reaches the velocity $ 10^{ - 4} c$ in about ten days, but its velocity hardly increases after that, since the DM density drops quickly. The acceleration is fastest in the saturation region of the halo, but the rest of the subhalo can still accelerate the spaceship from the velocity $ 10^{ - 5} c$ to the velocity $ 10^{ - 4} c$.
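The slow growth outside the saturation region can be made explicit with a small sketch. Since $d\beta/dt=\theta k(r)\beta$ and $dr/dt=\beta c$, the factors of $\beta$ cancel and $d\beta/dr=\theta S\rho(r)/M$, so with $\rho=\rho_s r_s/r$ the velocity grows only logarithmically in $r$. Here the normalisation $\rho_s r_s$ is a hypothetical value reverse-engineered from the quoted saturation radius ($r_{sat}\approx 10^{-9}\,pc$ for the $10^6 M_\odot$ subhalo in the B01 model), so the numbers are only indicative:

```python
import math

C = 2.998e8                  # speed of light, m/s
M_SUN = 1.989e30             # solar mass, kg
PC = 3.086e16                # parsec, m
KPC = 3.086e19               # kiloparsec, m

S, M, theta = 100.0, 100e3, 0.5      # spaceship parameters from the text

rho_sat = 1e19 * M_SUN / KPC**3      # saturation density, kg/m^3
r_sat = 1e-9 * PC                    # assumed saturation radius (B01, 1e6 M_sun subhalo)
rho_s_r_s = rho_sat * r_sat          # hypothetical NFW normalisation rho_s * r_s, kg/m^2

def beta_at(r, beta_sat=1e-5):
    """Velocity after coasting out to radius r in the rho ~ 1/r region.

    Integrating dbeta/dr = theta*S*rho(r)/M with rho = rho_s*r_s/r gives a
    logarithmic gain on top of the velocity at the edge of the saturation region.
    """
    return beta_sat + theta * S * rho_s_r_s / M * math.log(r / r_sat)

for r_pc in (1e-8, 1e-7, 1e-6, 1e-5):
    print(f"r = {r_pc:.0e} pc -> beta = {beta_at(r_pc * PC):.2e}")
```

With these assumptions the velocity climbs from $10^{-5}c$ to roughly $10^{-4}c$ over the first $\sim 10^{-5}\,pc$, consistent in order of magnitude with Fig.\ref{real}.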
\begin{figure}[h] \vspace*{-.03in} \centering \includegraphics[width=3.4in,angle=0]{realtime.eps}% \includegraphics[width=3.4in,angle=0]{reallength.eps} \vspace*{-.03in} \caption{The velocity as a function of the time and length needed for the acceleration in the DM (sub)halo. The spaceship starts from the (sub)halo center with initial velocity $ \beta _0 = 10^{ - 6}c$. \vspace*{-.1in}} \label{real} \end{figure} To better understand the acceleration power of the (sub)halos, we give the velocity at different times for different (sub)halo masses and spaceship parameters in Fig.\ref{subhalov}. From the picture on the left, we find that the subhalos can accelerate the spaceship to velocities $ 10^{ - 5} c \sim 10^{ - 3} c$ with the reasonable parameters $ S/M = 100m^2 /100ton$. In the first few hours, the spaceship flies in the saturation region, where it is accelerated to a velocity of $ 10^{ - 6} c \sim 10^{ - 4} c$, which can be understood with the help of Fig.\ref{constdensity} and Fig.\ref{rsat}. Out of the saturation region, the velocity of the spaceship gets boosted further by about one order of magnitude in the $ r^{ - 1}$ density region. Note that the above acceleration takes place at the very center of the halo, at radii far smaller than the scale radius $ r_s$. The above results rely on the parameters of the spaceship, e.g. the ratio $ S/M$. If we lower the mass of the spaceship and increase the area of the engine, the achievable velocity increases significantly. We give the plot on the right specifically for an $ S/M$ that is ten times larger, although such parameters may be unrealistic in practice. It shows that the corresponding velocity increases by about a factor of ten. The main reason is that the velocity at $ r = r_{sat}$ increases by a factor of ten, which can be understood from Eqn.\ref{conlength1}.
\begin{figure}[h] \vspace*{-.03in} \centering \includegraphics[width=3.4in,angle=0]{subhalov1.eps}% \includegraphics[width=3.4in,angle=0]{subhalov2.eps} \vspace*{-.03in} \caption{The velocity at time $t=1 day$ and $t=1 month$ for different (sub)halo masses and spaceship parameters. The concentration model is taken to be B01. The spaceship is still assumed to start at the center of the (sub)halo with initial velocity $ \beta _0 = 10^{ - 6} c$. \vspace*{-.1in}} \label{subhalov} \end{figure} In any case, it seems difficult for the (sub)halos to boost the spaceship to relativistic velocity, because their saturation radius is small compared with the required acceleration length of $ 10^{ - 4} pc$ shown in Fig.\ref{constdensity}. In the above calculation, we assume there is no baryonic matter in the halo. The gravity of the DM halo has a negligible effect on the spaceship, even in the saturation region; note that the saturation density $ \rho _{sat} \sim 10^{19} M_ \odot \cdot kpc^{ - 3}$ is much smaller than the density of water, $ 1g/cm^3 \sim 10^{31} M_ \odot \cdot kpc^{ - 3}$. However, any baryonic matter present in the halo may modify the DM profile. The adiabatic contraction due to dissipating baryons can steepen the DM profile\cite{Diemand:2009bm}. A more interesting case is a central black hole in the DM halo, for example at the galactic center. The DM density can become a dense spike due to accretion by the black hole, assuming adiabatic growth of the black hole\cite{Gondolo:1999ef}. The annihilations in the inner regions of the spike set a maximal dark matter density $ \rho _{core} = \frac{{m_D }}{{\left\langle {\sigma v} \right\rangle t_{bh} }} \sim 10^{17} M_ \odot \cdot kpc^{ - 3}$, where $ {m_D }$ is the mass of the DM particle and $ {t_{bh} }$ is the age of the black hole, conservatively $ 10^{10} yr$. More importantly, the radius of the core can be as large as $ O(10^{ - 2} pc)$ for inner-cusped models like the NFW profile.
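With this core density, the acceleration length of Eqn.\ref{conlength} can be checked directly (a sketch; the unit conversions and the factor of $c$ are our own bookkeeping, and $\beta_0=10^{-6}$ is taken from the text):

```python
import math

C, M_SUN, KPC, PC = 2.998e8, 1.989e30, 3.086e19, 3.086e16
S, M, theta = 100.0, 100e3, 0.5          # spaceship parameters from the text

rho_core = 1e17 * M_SUN / KPC**3         # black-hole spike core density, kg/m^3
k = S * rho_core * C / M                 # s^-1, the factor of c restores SI units

def accel_length(beta, beta0=1e-6):
    """Relativistic acceleration length from Eqn. (conlength), in metres."""
    f = lambda b: (b + theta) / math.sqrt(1.0 - b * b)
    return C * (f(beta) - f(beta0)) / (theta * k)

print(f"L(0.9c) = {accel_length(0.9) / PC:.1e} pc")   # order 1e-2 pc
```

The result is of order $10^{-2}\,pc$, comparable to the quoted core radius, which is what makes the black-hole spike scenario interesting.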
Recalling Eqn.\ref{conlength}, the required acceleration length for the velocity $0.9c$ is about $ O(10^{ - 2} pc)$ in this case, which means the spaceship can achieve a velocity close to the speed of light. \section{Conclusion and discussions} In this work, we give an example of a DM engine using DM annihilation products as propulsion. The acceleration is proportional to the velocity, which makes the velocity increase exponentially with time in the non-relativistic region. The key factors for the acceleration are how high the DM density is and how large the saturation region is. The parameters of the spaceship also have a great influence on the results; for example, the velocity increases if $ S/M$ increases. We show that the (sub)halos can accelerate the spaceship to velocities $ 10^{ - 5} c \sim 10^{ - 3} c$ for reasonable spaceship parameters. Moreover, if there is a central black hole in the halo, as at the galactic center, the core radius of the DM can be large enough to accelerate the spaceship close to the speed of light. We have used three assumptions in this work. First, we have assumed static DM for simplicity. But the DM particles may have velocities as large as $ O(10^{ - 3} c) $. Once we know the velocity distribution of the DM, this can be handled by steering the spaceship appropriately while its speed is low. An analogue in daily life is that airplanes work well in both headwinds and tailwinds. Second, we have assumed that the DM particles and the annihilation products cannot pass through the wall of the engine. The annihilation products may be SM fermions, which have electric charges; thus we can direct them with electromagnetic forces. The most serious problem comes from the DM itself, which interacts only weakly with matter. Current direct searches for DM have placed stringent bounds on the cross-section between DM and matter. It may be difficult to build containers for the DM out of ordinary matter, because this cross-section is very small.
However, the dark sector may be as complex as our baryonic world, for example the mirror world. Thus material from the dark sector might serve to build the container, since the interactions between particles within the dark sector can be large. Third, the annihilation process is assumed to happen immediately in picture C. This is the second serious problem we should pay attention to. The annihilation rate takes the form $ A = \left\langle {\sigma v} \right\rangle \frac{{\rho _{sat}^2 }}{{2m_D^2 }} $. The $ \left\langle {\sigma v} \right\rangle$ is taken to be the natural scale for the correct thermal relic abundance, which is $3 \times 10^{ - 26} cm^3 s^{ - 1}$. One can show that $ A = 2.2 \times 10^{ - 7} cm^{-3 } s^{ - 1}$. However, the number density of the dark matter is $ n_D = \frac{{\rho _{sat} }}{{m_D }} = 4 \times 10^9 cm^{ - 3} $. Thus, to make the annihilation process efficient, we have to compress the volume of the engine to raise the annihilation rate. Whether this can be achieved in the future is not clear. Nevertheless, the engine works in the vacuum, where baryonic matter is dilute, which means we do not need to worry about the pressure from the baryonic matter. Sometimes, when looking at the N-body simulation pictures of DM, I think they may describe future human transportation in some sense. In these pictures, there are bright big points which stand for large dense halos, and dim small points for small sparse halos. Interestingly, these halos share some features with the cities on the Earth. The dense halos can accelerate the spaceship to higher speed, which makes them important nodes for transportation. The sparse halos, however, cannot accelerate the spaceship to very high speed, so a spaceship there had better go to a nearby dense halo to gain speed if its destination is far from the sparse halos. Similarly, if we want to take an international flight, we go to a nearby big city.
The small cities usually only have flights to the nearby big cities, but no international flights. Thus we can understand that the dense halos may be very important nodes in future transportation, like the big cities on the Earth.
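As a closing numerical check, the order-of-magnitude estimates quoted in the discussion above ($\rho_{core} \sim 10^{17} M_\odot \cdot kpc^{-3}$, $n_D = 4 \times 10^9 cm^{-3}$, and $A = 2.2 \times 10^{-7} cm^{-3} s^{-1}$) can be reproduced with a few lines of arithmetic. The sketch below assumes a DM particle mass $m_D = 100$ GeV, a conventional WIMP benchmark that is not stated explicitly in the text but is the value that reproduces the quoted numbers:

```python
# Numerical check of the density and annihilation-rate estimates quoted above.
# The DM particle mass m_D is an ASSUMPTION (100 GeV WIMP benchmark).

GEV_PER_MSUN = 1.116e57          # solar rest-mass energy in GeV
CM3_PER_KPC3 = (3.086e21) ** 3   # kpc^3 expressed in cm^3
SEC_PER_YR = 3.156e7

m_D = 100.0                      # GeV, assumed WIMP mass
sigma_v = 3e-26                  # cm^3/s, thermal-relic annihilation cross section
t_bh = 1e10 * SEC_PER_YR         # age of the black hole, in seconds

# Spike core density rho_core = m_D / (<sigma v> * t_bh), converted from
# GeV/cm^3 to M_sun/kpc^3 for comparison with the text.
rho_core_gev = m_D / (sigma_v * t_bh)
rho_core_msun_kpc3 = rho_core_gev / GEV_PER_MSUN * CM3_PER_KPC3
print(f"rho_core ~ {rho_core_msun_kpc3:.1e} M_sun/kpc^3")    # ~ 10^17

# Number density and annihilation rate at rho_sat ~ 10^19 M_sun/kpc^3.
rho_sat_gev = 1e19 * GEV_PER_MSUN / CM3_PER_KPC3             # GeV/cm^3
n_D = rho_sat_gev / m_D                                      # cm^-3
A = sigma_v * n_D**2 / 2.0                                   # cm^-3 s^-1
print(f"n_D ~ {n_D:.1e} cm^-3,  A ~ {A:.1e} cm^-3 s^-1")
```

All three outputs land within a factor of a few of the quoted values, confirming the internal consistency of the estimates.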
\section{Introduction} Lepton flavour violation (LFV) is an important signature of physics beyond the standard model (SM). In the SM with massive neutrinos, the neutrinos have a Yukawa interaction, and lepton flavour is violated just as quark flavour is. However, LFV is strongly suppressed by the smallness of the neutrino masses. Supersymmetry (SUSY) with massive neutrinos changes the situation drastically. It can predict sizable LFV effects, because an alternative source of LFV is generated in the slepton mixing through the renormalization group effect of the right-handed neutrino Yukawa interaction. The LFV processes $l_i \to l_j \gamma (i \ne j)$ of charged leptons are being measured. The MEG experiment \cite{MEG} gives the upper bound on the $\mu \to e \gamma$ process as \begin{align} {\rm Br}(\mu \to e \gamma) \le 1.2 \times 10^{-11}. \end{align} The forthcoming experiment will reach ${\cal O} (10^{-14})$. The $\tau$ decay processes are also measured at B-factories \cite{Abe:2003sx}. On the other hand, it has already been discovered that lepton flavour is violated in the neutrino sector. The oscillation parameters will be measured in detail. In this letter, we study the phases of the neutrino Yukawa matrix and investigate their effect on the magnitude of the LFV processes. \section{The neutrino Yukawa matrix} In the framework based on the see-saw mechanism, we can parameterize the neutrino Yukawa matrix $Y_\nu$ in terms of physical quantities as \cite{Pascoli:2003rq} \begin{align} \frac{v_u}{\sqrt{2}} Y_\nu = \sqrt{M_R^{diag}} {\cal R} \sqrt{m_\nu^{diag}} U_{MNS}^\dag , \end{align} where $v_u$ is the vacuum expectation value of the Higgs boson, $U_{MNS}$ is the observed Maki-Nakagawa-Sakata (MNS) matrix including two Majorana phases $\xi_{1,2}$; i.e., \begin{align} \hspace{-9mm} U_{MNS} \sim \begin{pmatrix} 0.85 & -0.53 & 0 \\ 0.37 & 0.60 & -0.71 \\ 0.37 & 0.60 & 0.71 \end{pmatrix} \begin{pmatrix} 1 && \\ &e^{i \xi_1}& \\ &&e^{i \xi_2} \end{pmatrix}.
\end{align} Here we neglect the $(1-3)$ element of $U_{MNS}$. $m_\nu^{diag}$ is the neutrino mass matrix and $M_R^{diag}$ is the right-handed Majorana neutrino mass matrix, each in its diagonal basis. An arbitrary complex orthogonal matrix ${\cal R}$ is expressed as \begin{align} {\cal R} \equiv O_{12} O_{23} O_{31} Q_{12} Q_{23} Q_{31}, \end{align} where \begin{align} \hspace{-8mm} O_{12}(\theta_{12})\equiv \left( \begin{array}{ccc} \cos \theta_{12} & - \sin \theta_{12} & \\ \sin \theta_{12} & \cos \theta_{12} & \\ & & 1 \end{array} \right), \rm{etc.}, \end{align} and \begin{align} Q_{ij} \equiv O_{ij}(i \eta_{ij}). \end{align} The factors $Q_{ij}$, being hyperbolic functions, have a drastic effect on the structure of the Yukawa matrix. We call $i \eta_{ij}$ the {\it R-phase} to distinguish it from the Majorana phases. In the following, we assume that the neutrino masses are {\it approximately} degenerate and the right-handed neutrino masses are degenerate, \begin{align} &\hspace{-5mm} m_\nu^{diag} \sim m \left( \begin{array}{ccc} 1 & & \\ & 1 + \frac{\Delta m^2_\odot}{2 m^2} &\\ & & 1 + \frac{\Delta m^2_@}{2 m^2} \end{array} \right) ,\\ &\hspace{0mm} M_R^{diag} \sim M_R \left( \begin{array}{ccc} 1 & & \\ & 1 &\\ & & 1 \end{array} \right), \end{align} where $\Delta m^2_{\odot}$ and $\Delta m^2_@$ are the solar and atmospheric neutrino mass squared differences, $M_R$ is the common mass scale of the right-handed neutrinos, and $m$ is an undetermined light neutrino mass parameter. We then obtain the neutrino Yukawa matrix $Y_\nu$ in its diagonal basis as \begin{align} Y_\nu^{diag} \sim \frac{\sqrt{2}}{v_u} \sqrt{M_R m} \left( \begin{array}{ccc} 1/r & & \\ & 1 & \\ & & r \end{array} \right), \end{align} where \begin{align} &r^2 \equiv 2x^2-1+2x \sqrt{x^2-1}, \label{R1} \\ &x \equiv \cosh \eta_{12} \cosh \eta_{23} \cosh \eta_{31}.
\label{R2} \end{align} This expression indicates the characteristic relation $(y_1/y_2 = y_2/y_3)$ among the neutrino Yukawa couplings, which is similar to those of the quarks and the charged leptons $(m_u/m_c \sim m_c/m_t, {\rm etc.})$. For $m \sim 0.1 \rm{eV}$ and $M_R \sim 10^9 \rm{GeV}$, $r$ is close to ${\cal O}(10^2)$, so that the neutrino Yukawa couplings become hierarchical. The parameter $r$ is written in terms of the combination of {\it R-phase} (see Eqs.(\ref{R1}) and (\ref{R2}) ), and it determines the size of the neutrino Yukawa hierarchy. \section{Lepton Flavour Violation} We consider the SUSY models, especially the minimal supersymmetric standard model (MSSM) with right-handed neutrinos. In the slepton sector, the MSSM Lagrangian has an alternative source of LFV through the following soft SUSY breaking terms, \begin{align} &\hspace{-8mm}- {\cal L}_{soft} = \left( A^e_{ij} H_d \tilde{e}_{Ri}^* \tilde{L}_j+ A^\nu_{ij} H_u \tilde{\nu}_{Ri}^* \tilde{L}_j + {\rm h.c.} \right) \nonumber \\ &\hspace{-4mm} + (m_{\tilde{L}}^2)_{ij} \tilde{L}^\dag_i \tilde{L}_j + (m_{\tilde{e}}^2)_{ij} \tilde{e}_i^* \tilde{e}_j + (m_{\tilde{\nu}}^2)_{ij} \tilde{\nu_R}_i^* \tilde{\nu_R}_j, \end{align} where $A^{e,\nu}$ are the slepton tri-linear couplings and $m_{\tilde{L},\tilde{e},\tilde{\nu}}$ are the soft mass parameters for the sleptons. We assume that these couplings are universal at the Grand Unified Theory (GUT) scale ($M_{GUT}$), i.e., \begin{align} (m_{\tilde{L}}^2)_{ij} &= (m_{\tilde{e}}^2)_{ij} = (m_{\tilde{\nu}}^2)_{ij} = \delta_{ij} m_0^2 \nonumber \\ A^\nu &= Y_\nu a_0 m_0, \, A^e = Y_e a_0 m_0. 
\end{align} In this framework, the LFV processes $l_i \to l_j \gamma (i \ne j)$ can appear due to the slepton mixing \cite{Hisano:1995cp}, which comes from the renormalization group effect on $Y_\nu$ between the scales $M_{GUT}$ and $M_R$, \begin{align} \hspace{-8mm} (\Delta m^2_{\tilde{L}})_{ij} \sim - \frac{1}{16 \pi^2}(6+2a_0^2)m_0^2 (Y_\nu^\dag Y_\nu)_{ij} \log \frac{M_{GUT}}{M_R}. \end{align} The branching ratios for these processes are expressed by \begin{align} {\rm Br}(l_i \to l_j \gamma) \sim \frac{\alpha^3}{G_F^2} \frac{|(\Delta m^2_{\tilde{L}})_{ij}|^2}{m_S^8} \tan^2 \beta, \end{align} where $m_S$ is the typical SUSY scale, $\tan \beta$ is the ratio of the vacuum expectation values of the Higgs bosons, $\alpha$ is the fine structure constant, and $G_F$ is the Fermi constant. Let us consider the $r$ dependence of LFV with $\eta_{23,31}=0$; in this case the {\it R-phase} contributes only to the $\mu \to e \gamma$ process, so a comparison between the magnitudes of the different LFV processes is not meaningful. We plot the branching ratio of the $\mu \to e \gamma $ process (see Figure \ref{BR-r}), \begin{figure} \includegraphics[width=7.5cm]{fig1.eps} \vspace{-1cm} \caption{The branching ratio of the $\mu \to e \gamma$ event for $\alpha=\pi$ (upper solid line) and $\alpha=\pi/2$ (lower solid line). The experimental upper limit from MEG is also shown (dashed line).} \label{BR-r} \end{figure} where we take $m_S = m_0 = 1 {\rm TeV}, a_0=1 , \tan \beta =10 $, and $M_{GUT} = 2 \times 10^{16} {\rm GeV}$. The LFV branching ratio is approximately proportional to $r^4$, so that a large value of $r$ can change the branching ratio by several orders of magnitude. On the other hand, we show the dependence on the Majorana phase $\xi_1$ (see Figure \ref{BR-a}). When $\eta_{23,31}=0$, the branching ratio does not depend on $\xi_2$.
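To make the $r^4$ scaling concrete, the short sketch below evaluates the hierarchy parameter $r$ from Eqs.~(\ref{R1}) and (\ref{R2}) for a few sample values of $\eta_{12}$ (with $\eta_{23}=\eta_{31}=0$). The sample $\eta_{12}$ values are illustrative choices, not taken from the text:

```python
import numpy as np

# Hierarchy parameter r from the R-phases (Eqs. (R1)-(R2)):
#   x   = cosh(eta12) * cosh(eta23) * cosh(eta31),
#   r^2 = 2 x^2 - 1 + 2 x sqrt(x^2 - 1).
def hierarchy_r(eta12, eta23=0.0, eta31=0.0):
    x = np.cosh(eta12) * np.cosh(eta23) * np.cosh(eta31)
    return np.sqrt(2 * x**2 - 1 + 2 * x * np.sqrt(x**2 - 1))

# For large eta12, x ~ e^{eta12}/2 and r ~ 2x ~ e^{eta12}, so r ~ O(10^2)
# already requires eta12 ~ 5.  Br(mu -> e gamma) then scales as r^4.
for eta in (1.0, 3.0, 5.0):
    r = hierarchy_r(eta)
    print(f"eta12 = {eta}:  r = {r:8.2f},  Br enhancement ~ r^4 = {r**4:.1e}")
```

At $\eta_{12} = 0$ the formula collapses to $r = 1$ (no hierarchy), while $\eta_{12} \approx 5$ already gives $r \approx 150$ and hence an enhancement of the branching ratio by roughly eight orders of magnitude, illustrating why the {\it R-phases} dominate over the Majorana phases.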
\begin{figure} \includegraphics[width=7.5cm]{fig2.eps} \vspace{-1cm} \caption{The branching ratio of the $\mu \to e \gamma$ event for $r=200$ (upper solid line) and $r=50$ (lower solid line). The upper limit is also shown (dashed line).} \label{BR-a} \end{figure} Figure \ref{BR-a} shows that the branching ratio is a periodic function of $\xi_1$, and that the effect of the Majorana phase is smaller than that of $r$. \section{Summary} We have analyzed the structure of the neutrino Yukawa matrix and have discussed the magnitude of the LFV processes arising from the phase effects in the neutrino Yukawa matrix in the MSSM with right-handed neutrinos. The neutrino Yukawa matrix has two types of phases, Majorana phases and {\it R-phases}. In the case that the neutrino masses are approximately degenerate and the right-handed neutrino masses are degenerate, the eigenvalues of the neutrino Yukawa matrix develop a hierarchical spectrum and the {\it R-phases} determine the size of the Yukawa hierarchy. In the SUSY models, sizable LFV can arise due to the slepton mixing induced by the renormalization group effect on the neutrino Yukawa matrix between $M_{GUT}$ and $M_R$. The Majorana phases can change the LFV branching ratios by an ${\cal O}(1)$ factor, and these ratios are periodic functions of the Majorana phases. On the other hand, the {\it R-phases} can enhance the LFV branching ratios by several orders of magnitude.
\section{Introduction} \label{sec:introduction} The prototypical parabolic differential equation is the heat equation. It forms a cornerstone of mathematics and physics, and its understanding is essential for defining more complicated mathematical models. Fourier introduced this equation as a means to describe transient heat flow. Fick quickly recognized its importance to particle and chemical concentrations. As a result, parabolic equations are now ubiquitous in describing diffusion processes, which are found in a vast array of physical problems, among which are reaction-diffusion models of chemical kinetics \cite{Fisher1999,Fitzhugh1961,Nagumo1962}, phase field models describing morphology and pattern formation in multiphase fluids and solids \cite{Cahn1961,Cahn1971,Dai2013}, and even the volatility of stocks and bonds in mathematical finance \cite{Shreve2004}. Numerical solutions of (linear and nonlinear) diffusion equations have been the subject of active research for many decades. As early as the 1950s and 60s, it was recognized that due to the parabolic scaling, method of lines discretizations of the heat equation lead to numerically stiff systems of equations, especially for explicit time stepping. Larger time steps (on the order of the mesh spacing) can be taken with fully implicit solvers, but in practice, full matrix inversions may become difficult and costly, especially when memory is extremely limited, as was the case for early computers. Thus, alternating direction implicit (ADI) splitting methods \cite{Douglas1955,Douglas1956,Peaceman1955,Douglas1962,Fairweather1967,Crank1996}, which make use of dimensional splitting and tridiagonal solvers, quickly gained popularity as part of an effort to reduce the amount of memory required to invert these systems.
Later on, memory constraints no longer defined the bottleneck for computing, and attention shifted toward methods that focused on reducing floating point operations (FLOPs), albeit at the cost of additional memory. Most notable among these are Krylov methods \cite{Lambers2008,Jia2008}, boundary integral methods \cite{Greengard1991, Kropinski2011}, and quadrature methods \cite{Lubich1992,Jiang2013,Kassam2005,Tausch2007,Li2009}. However, with the advent of GPU processors, it appears that we are yet again seeing a paradigm shift towards methods that emphasize small memory footprints, even at the expense of incurring a higher operation count. Thus, ADI-like methods, which can efficiently decompose larger problems and limit communication overhead, warrant further investigation, and these features are the motivating factor for this work. In this paper we propose a novel numerical method for obtaining solutions to the linear heat equation, and to nonlinear reaction-diffusion type equations. As an alternative to classical MOL formulations, we use the method of lines transpose (MOL$^T$), which is sometimes referred to as Rothe's method \cite{Rothe32}, or the transverse method of lines \cite{Salazar2000}. In this case, the PDE is first discretized in time, and the resulting semi-discrete (modified Helmholtz) problem can be solved using a variety of methods. From potential theory \cite{Jia2008, Kropinski2011}, the solution can be constructed by discretizing boundary integrals. However, with dimensional splitting (which is related to the original ADI formulations), the MOL$^T$ can be used to analytically solve simpler, one-dimensional boundary value problems, and the subsequent solution can be constructed through dimensional sweeps, resulting in an $\mathcal O(N\log N)$ \cite{Lyon2010,Bruno2010} or $\mathcal O(N)$ \cite{Causley_2013,Causley_2013b,Causley_2013c} solver.
Furthermore, we extend the method to higher orders of accuracy by using a novel idea referred to as successive convolution. This strategy has recently been developed in \cite{Causley_2013} for the wave equation by the present authors. In the present work, we not only extend the method of lines transpose to parabolic problems, but we recognize the resulting expansion as a so-called resolvent expansion \cite{abadias2013c_0,grimm2010approximation}, which we leverage to prove stability and convergence of the successive convolution series. In addition, we incorporate nonlinear terms with an integrating factor method that relies on high order Hermite-Birkhoff interpolants as well as the (linear) resolvent expansions developed in this paper. The rest of this paper is organized as follows. In Section \ref{sec:op_calc} we derive the basic scheme for the one-dimensional heat equation, which is L-stable and can achieve high orders of accuracy in space and time. In Section \ref{sec:high-order-resolvent} we describe how to obtain an arbitrary order discretization in a single dimension with resolvent expansions. In Section \ref{sec:multiple}, we describe how this can be extended to multiple dimensions, and in Section \ref{sec:Linear} we present results for the linear heat equation in one and two dimensions. In Section \ref{sec:Source}, we describe how our approach can handle nonlinear source terms, and in Section \ref{sec:Numerical} we present numerous numerical results including the Allen-Cahn equation and the Fitzhugh-Nagumo system of equations. Finally, we draw conclusions and point to future work in Section \ref{sec:conclusions}. \section{First order scheme in one spatial dimension} \label{sec:op_calc} We begin by forming a semi-discrete solution to the 1D heat equation using the method of lines transpose (MOL$^T$).
Let $u = u(x,t)$ satisfy \begin{align} \label{eqn:heat} u_t &= \gamma u_{xx}, \quad (x,t) \in (a,b) \times [0,T], \end{align} with constant diffusion coefficient $\gamma$, and appropriate initial and boundary conditions. The MOL$^T$ amounts to employing a finite difference scheme for the time derivative, and collocating the Laplacian term at time levels $t = t^{n}$ and $t=t^{n+1}$. Following a similar approach from \cite{Causley_2013}, we introduce a free parameter $\beta>0$, so that the collocation has the form\footnote{In \cite{Causley_2013}, there are a total of two time derivatives (and two space), so the right hand side depends on $u^{n+1}, u^n$, and $u^{n-1}$.} \[ \frac{u^{n+1}-u^n}{\Delta t} = \gamma\partial_{xx}\left(u^n+\frac{u^{n+1}-u^n}{\beta^2}\right), \qquad \beta>0. \] Next, we introduce the differential operator corresponding to the modified Helmholtz equation, defined by \begin{equation} \label{eqn:MHL} \mathcal{L} = I- \frac{\partial_{xx}}{\alpha^2}, \quad \alpha = \frac{\beta}{\sqrt{\gamma\Delta t}}. \end{equation} After some algebra, we find that the scheme can be written as \begin{equation} \label{eqn:First} \mathcal{L}[u^{n+1}-(1-\beta^2)u^n] = \beta^2 u^n. \end{equation} We note that there are at least two reasonable strategies for choosing $\beta$: \begin{enumerate} \item {\bf Maximize the order of accuracy.} For example, if we choose $\beta^2=2$, then the discretization is the trapezoidal rule, which is second order accurate and A-stable. \item {\bf Enforce stiff decay.} For example, if we choose $\beta^2=1$, then the discretization is the backward Euler scheme, which is first order accurate, L-stable, yet does not maximize the order of accuracy. \end{enumerate} Here and below, we opt for the second strategy, as the stiff decay of numerical solutions of the heat equation is of paramount importance. 
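The two choices of $\beta$ above can be checked directly in Fourier space: for a single mode $e^{ikx}$ (so that $\partial_{xx} \to -k^2$), the collocation rearranges to $u^{n+1} = \phi(z) u^n$ with $\phi(z) = (1 - z + z/\beta^2)/(1 + z/\beta^2)$ and $z = \gamma \Delta t k^2$. The sketch below is an illustrative numerical check, not part of the paper's algorithm:

```python
import numpy as np

# Amplification factor of the collocation scheme for a Fourier mode e^{ikx}:
#   (u^{n+1} - u^n)/dt = -gamma k^2 (u^n + (u^{n+1} - u^n)/beta^2)
# rearranges, with z = gamma*dt*k^2, to u^{n+1} = phi(z) u^n where
def phi(z, beta2):
    return (1.0 - z + z / beta2) / (1.0 + z / beta2)

z = np.linspace(0.0, 50.0, 501)   # z = gamma*dt*k^2 >= 0 for the heat equation

# beta^2 = 1 reproduces backward Euler: phi = 1/(1+z) (first order, L-stable).
assert np.allclose(phi(z, 1.0), 1.0 / (1.0 + z))

# beta^2 = 2 reproduces the trapezoidal rule: phi = (1-z/2)/(1+z/2)
# (second order, A-stable but without stiff decay).
assert np.allclose(phi(z, 2.0), (1.0 - z / 2) / (1.0 + z / 2))

# Stiffest modes: backward Euler damps them, the trapezoidal rule does not.
print(phi(1e12, 1.0), phi(1e12, 2.0))
```

In particular, $\phi \to 0$ as $z \to \infty$ for $\beta^2 = 1$, but $\phi \to -1$ for $\beta^2 = 2$; this is exactly the stiff-decay distinction between the two strategies.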
In Section \ref{subsec:Stability}, we develop this discussion in the context of higher order schemes that relies on a careful selection of $\beta$ as well as repeated applications of a single inverse operator. Upon solving equation \eqref{eqn:First} for $u^{n+1}$, we find that the equation for the update is \begin{equation} \label{eqn:First_update} u^{n+1} = (1-\beta^2)u^n + \beta^2\mathcal{L}^{-1}[u^n], \end{equation} that requires inverting a modified Helmholtz operator. We accomplish this \textit{analytically} by using Green's functions, from which \begin{equation} \label{eqn:L_inverse} \mathcal{L}^{-1}[u] = \left(I- \frac{\partial_{xx}}{\alpha^2}\right)^{-1}[u] := \frac{\alpha}{2}\int_a^b e^{-\alpha|x-y|}u(y)dy + B_a e^{-\alpha(x-a)} + B_b e^{-\alpha(b-x)}. \end{equation} The coefficients $B_a$ and $B_b$ are determined by applying prescribed boundary conditions at $x = a, b$ which we describe in Section \ref{subsec:homogenous-solution}. \begin{remark} Alternatively, had we followed the method of lines (MOL) and first discretized \eqref{eqn:heat} in space, then the differential operator $\mathcal{L}$ would be replaced by an algebraic operator $L$, and would be inverted \textit{numerically}. \end{remark} \begin{remark} Although the update \eqref{eqn:First_update} (with $\beta \neq 1$) is only first order accurate, we describe in Section \ref{sec:high-order-resolvent} how to extend our procedure to arbitrary order in time. \end{remark} This MOL$^T$ approach has several advantages. First, the solution is now explicit, but remains unconditionally stable. Secondly, in recent work \cite{Causley_2013,Causley_2013b,Causley_2013c} we show that the convolution integral in Eqn. \eqref{eqn:L_inverse} can be discretized using a fast $\mathcal O(N)$ algorithm, where $N$ is the number of spatial points. 
We introduce more details in Section \ref{subsec:High_Space}, wherein we update the current algorithm to achieve a user-defined accuracy of $\mathcal O(\Delta x^{M})$ with mesh spacing $\Delta x$. Finally, since the solution is still continuous in space, we can decouple the spatial and temporal errors, and by combining resolvent expansions with dimensional splitting, we extend the method to multiple dimensions without recoupling the errors. \begin{remark} Since dimensional splitting is used, all spatial quantities are computed according to a one-dimensional convolution integral of the form \eqref{eqn:L_inverse}, which is performed on a line-by-line basis, following so-called "dimensional sweeps". Since the discrete convolution is computed in $\mathcal O(N)$ complexity, the full solver scales linearly in the number of spatial points (assuming each sweep is performed in parallel). \end{remark} A fully discrete scheme is obtained after a spatial discretization of \eqref{eqn:L_inverse}. The domain $[a,b]$ is partitioned into $N$ subdomains $[x_{j-1},x_j]$, with $a = x_0<x_1< \ldots x_N = b$. The convolution operator is comprised of a particular solution, which is defined by the convolution integral \begin{equation} \label{eqn:I_def} I[u](x): = \frac{\alpha}{2} \int_a^b e^{-\alpha|x-y|}u(y) dy \end{equation} and a homogeneous solution \begin{equation} \label{eqn:homogeneous_def} B_a e^{-\alpha(x-a)} + B_b e^{-\alpha(b-x)}, \end{equation} both of which can be constructed in $\mathcal O(N)$ operations using fast convolution. We now describe each of these in turn, starting with the first. \subsection{Spatial discretization of the particular solution} \label{subsec:High_Space} The particular solution is first split into $I[u](x) = I^L(x) + I^R(x)$, where \[ I^L(x) = \frac{\alpha}{2} \int_a^x e^{-\alpha(x-y)} u(y) dy, \quad I^R(x) = \frac{\alpha}{2} \int_x^b e^{-\alpha(y-x)} u(y) dy. 
\] Each of these parts independently satisfies a first order "initial value problem" \begin{align} \notag (I^L)'(y) + \alpha I^L(y) = \frac{\alpha}{2} u(y)&, \quad a < y < x, \quad I^L(a) = 0, \\ \notag (I^R)'(y) - \alpha I^R(y) = -\frac{\alpha}{2} u(y)&, \quad x < y < b, \quad I^R(b) = 0, \end{align} where the prime denotes spatial differentiation. By symmetry, the scheme for $I^R$ follows from that of $I^L$, which we now describe. From the integrating factor method, the integral satisfies the following identity, known as exponential recursion \[ I^L(x_j) = e^{-\nu_j}I^L(x_{j-1}) + J^L(x_j), \quad \text{where} \quad J^L(x_j) = \frac{\alpha}{2} \int_{x_{j-1}}^{x_j} e^{-\alpha(x_j-y)} u(y) dy, \] and \[ \nu_j = \alpha h_j, \qquad h_j = x_j-x_{j-1}. \] This expression is still exact, and indicates that only the "local" integral $J^L$ needs to be approximated. We therefore consider a projection of $u(y)$ onto $P_M$, the space of polynomials of degree $M$. A local approximation \[ u(x_j-z h_j) \approx p_j(z), \quad z \in [0,1], \] is accurate to $\mathcal O(h_j^{M})$, and defines a quadrature of the form \begin{equation} \label{eqn:JL} J^L(x_j) = \frac{\nu_j}{2} \int_{0}^{1} e^{-\nu_j z} u(x_j - h_jz) dz\approx \frac{\nu_j}{2} \int_0^1 e^{-\nu_j z} p_j(z) dz. \end{equation} If standard Lagrange interpolation is used, then the polynomials can be factorized as \begin{equation} \label{eqn:pj} p_j(z) = \sum_{k=-\ell}^r p_{jk}(z) u_{j+k} = z^T A_j^{-1} u^M_j, \end{equation} where $z = [1,z,\ldots, z^M]^T$, and $u^M_j = [u_{j-\ell},\ldots, u_j, \ldots, u_{j+r}]^T$, and $A_j$ is the Vandermonde matrix corresponding to the points $x_{j+k}$, for $k=-\ell \ldots r$. The values of $\ell$ and $r$ are such that $\ell+r=M+1$, and are centered about $j$ except near the boundaries, where a one-sided stencil is required.
Substituting this factorization into \eqref{eqn:JL} and integrating against an exponential, we find that \[ J^L(x_j) \approx J^L_j := \sum_{k=-\ell}^r w_{jk} u_{j+k}, \] where the weights $w_j = [w_{j,-\ell},\ldots w_{j,r} ]$ satisfy \begin{equation} \label{eqn:weights} w_{j}^T = \phi_j^T A_j^{-1} \end{equation} and where \[ \phi_{jk} = \frac{\nu_j}{2}\int_0^1 e^{-\nu_j z} z^k dz = \frac{k!e^{-\nu_j}}{2\nu_j^{k}}\left(e^{\nu_j}-\sum_{p=0}^k \frac{(\nu_j)^p}{p!}\right). \] If the weights are pre-computed, then the fast convolution algorithm scales as $\mathcal O(MN)$ per time step, and achieves a user-defined $\mathcal O(\Delta x^{M})$ accuracy in space. In every example shown in this work, we choose $M=2$ or $M=4$. \subsection{Homogeneous solution} \label{subsec:homogenous-solution} The homogeneous solution in \eqref{eqn:homogeneous_def} is used to enforce boundary conditions. We first observe that all dependence on $x$ in the convolution integral, $I(x) := I[u^n](x)$, in \eqref{eqn:I_def} is through the Green's function, which is a simple exponential function. Through direct differentiation, we obtain \begin{equation} \label{eqn:derivative_integral} I_x(a) = \alpha I(a), \quad I_x(b) = -\alpha I(b). \end{equation} Various boundary conditions at $x=a$ and $x=b$ can be enforced by solving a simple $2\times2$ system for the unknowns $B_a$ and $B_b$. For example, for periodic boundary conditions we assume that (at each discrete time step, $t=t^n$) \begin{equation} \label{eqn:periodic_assumption} u^n (a) = u^n (b), \quad u_x^n(a) = u_x^n(b), \qquad \forall n \in \mathbb{N}.
\end{equation} We next enforce this assumption to hold on the scheme \eqref{eqn:L_inverse}, \begin{align} \notag \mathcal{L}^{-1}[u^n](a) = \mathcal{L}^{-1}[u^n](b) &\quad \Longleftrightarrow \quad I(a) + B_a + B_b\mu = I(b) + B_a\mu + B_b, \\ \notag \mathcal{L}^{-1}_x[u^n](a) = \mathcal{L}^{-1}_x[u^n](b) &\quad \Longleftrightarrow \quad \alpha \left(I(a) - B_a + B_b\mu \right)= \alpha \left( -I(b) - B_a\mu + B_b \right), \end{align} where $\mu = e^{-\alpha(b-a)}$ and the identities \eqref{eqn:derivative_integral} are used to find $\mathcal{L}_x^{-1}$. Solving this linear system yields \begin{equation} \label{eqn:periodic_coefficient} B_a = \dfrac{I(b)}{1- \mu}, \quad B_b = \dfrac{I(a)}{1- \mu}. \end{equation} Different boundary conditions (e.g. Neumann) follow an analogous procedure that requires solving a simple $2\times2$ linear system for $B_a$ and $B_b$. \section{Higher order schemes from resolvent expansions} \label{sec:high-order-resolvent} In our recent work \cite{Causley_2013}, we apply a successive convolution approach to derive high order A-stable solvers for the wave equation. The key idea is to recognize the fact that, in view of the modified Helmholtz operator \eqref{eqn:MHL}, the second derivative can be factored as \begin{equation} \label{eqn:Lap_sc} \left(-\frac{\partial_{xx}}{\alpha^2}\right) = \mathcal{L}-I = \mathcal{L}\left(I-\mathcal{L}^{-1}\right) :=\mathcal{L}\mathcal{D}, \end{equation} where \begin{equation} \label{eqn:D_def} \mathcal{D} = I-\mathcal{L}^{-1}, \qquad \mathcal{L} = \left(I-\mathcal{D}\right)^{-1}. \end{equation} Substitution of the second expression into \eqref{eqn:Lap_sc} determines the second derivative completely in terms of this new operator \begin{equation} \label{eqn:Lap_D} \left(-\frac{\partial_{xx}}{\alpha^2}\right) = \left(I-\mathcal{D}\right)^{-1}\mathcal{D} = \sum_{p=0}^\infty \mathcal{D}^p. 
\end{equation} This shows that second order partial derivatives of a sufficiently smooth function $u(x)$ can be approximated by truncating a resolvent expansion based on successively applying $\mathcal{D}$ to $u(x)$, which is a linear combination of successive convolutions \eqref{eqn:L_inverse}. Now, we consider a solution $u(x,t)$ to the heat equation \eqref{eqn:heat}, that for simplicity we take to be infinitely smooth. We perform a Taylor expansion on $u(x,t+\Delta t)$, and then use the Cauchy-Kovalevskaya procedure \cite{seal2014picard,seal2014high} to exchange temporal and spatial derivatives to yield \begin{equation} \label{eqn:Taylor_t} u(x,t+\Delta t) = \sum_{p=0}^\infty \frac{(\Delta t \partial_t)^p}{p!}u(x,t) = \sum_{p=0}^\infty \frac{(\gamma \Delta t \partial_{xx})^p}{p!}u(x,t) =: e^{\gamma\Delta t \partial_{xx}} u(x,t). \end{equation} The term $e^{\gamma\Delta t \partial_{xx}}$ is a spatial pseudo-differential operator, and it compactly expresses the full Taylor series. Our goal is to make use of the formula \eqref{eqn:Lap_D} to convert the Taylor series into a resolvent expansion. This can be performed term-by-term, and requires rearranging a doubly infinite sum. However, if we instead work directly with the operator defining the Taylor series, then \[ e^{\gamma \Delta t \partial_{xx} } = e^{-\beta^2\left(-\frac{\partial_{xx}}{\alpha^2}\right)} =e^{-\beta^2\left(I-\mathcal{D}\right)^{-1}\mathcal{D}}. \] At first glance this expression looks quite unwieldy. However, fortune is on our side, since the generating function of the \textit{generalized Laguerre} polynomials $L^{(\lambda)}_p(z)$ is \begin{equation} \label{eqn:Laguerre0} \sum_{p=0}^\infty L^{(\lambda)}_p(z) t^p = \frac{1}{(1-t)^{\lambda+1}} e^{- \frac{t z}{1-t} }, \end{equation} which bears a striking resemblance to our expansion. 
Indeed, if we take $\lambda=-1$, substitute $z=\beta^2$ and $t = \mathcal{D}$, then \begin{equation} \label{eqn:Laguerre} e^{-\beta^2\left(I-\mathcal{D}\right)^{-1}\mathcal{D}} = \sum_{p=0}^\infty L^{(-1)}_p(\beta^2) D^p = I+\sum_{p=1}^\infty L^{(-1)}_p(\beta^2) D^p. \end{equation} \subsection{Convergence} This expansion has been considered in the context of $C_0-$ semigroups \cite{abadias2013c_0,grimm2010approximation}, where $\left(-\frac{\partial_{xx}}{\alpha^2}\right)$ is replaced with a general closed operator $A$ on a Hilbert space $X$. In our notation, we restate part $(ii)$ of Theorem 4.3 in \cite{abadias2013c_0}, which is proven therein. \begin{theorem} Let the $C_0-$ semigroup \begin{align*} T(\beta^2)&=e^{-\beta^2\left(-\frac{\partial_{xx}}{\alpha^2}\right)} = \sum_{p=0}^\infty L^{(-1)}_p(\beta^2) \left(I-\frac{\partial_{xx}}{\alpha^2}\right)^{-p} \left(-\frac{\partial_{xx}}{\alpha^2}\right)^p \\ &= \sum_{p=0}^\infty L^{(-1)}_p(\beta^2) \mathcal{L}^{-p} \left(\mathcal{L}-I\right)^p =\sum_{p=0}^\infty L^{(-1)}_p(\beta^2) D^p \end{align*} be approximated by \[ T_P(\beta^2)= \sum_{p=0}^P L^{(-1)}_p(\beta^2) D^p. \] Then, for $u(x) \in C^{2P+2}$, there exists for each $\beta^2>0$ an integer $m_0$ such that for all integers $2\leq k \leq P+1$, with $P \geq m_0$, \[ \left\|T(\beta^2)u - T_P(\beta^2) u\right\|\leq \frac{C(\beta^2,k)}{P^{k/2-1}}\left\|\left(-\frac{\partial_{xx}}{\alpha^2}\right)^k u\right\|, \] where $C(\beta^2,k)$ is a constant that depends only on $\beta^2$ and $k$. \end{theorem} \begin{remark} The salient point of the theorem is that, in consideration of $\alpha$ (c.f. Eqn. \eqref{eqn:MHL}), the approximation error is of the form $C \Delta t^{P+1} \left\|u^{(2P+2)}(x)\right\|$, which matches the form given by a typical Taylor method. \end{remark} Finally, we truncate the resolvent expansion \eqref{eqn:Laguerre} at order $p=P$. 
For the heat equation, this defines the numerical method as \begin{equation} \label{eqn:Scheme_D} u(x,t+\Delta t) = u(x,t)+\sum_{p=1}^P L^{(-1)}_p(\beta^2) \mathcal{D}^p[u](x,t), \end{equation} which has a truncation error of the form \begin{equation} \label{eqn:Trunc} \tau:= L^{(-1)}_{P+1}(\beta^2) \mathcal{D}^{P+1}[u](x,t) + \mathcal O(\Delta t^{P+2}). \end{equation} For $P=1,2,3$, these schemes (evaluated at $t = t^n$) are \begin{align} u^{n+1} &= u^n - \beta^2\mathcal{D}[u^n], \\ u^{n+1} &= u^n - \beta^2\mathcal{D}[u^n]-\left(\beta^2-\frac{\beta^4}{2}\right)\mathcal{D}^2[u^n], \\ u^{n+1} &= u^n - \beta^2\mathcal{D}[u^n]-\left(\beta^2-\frac{\beta^4}{2}\right)\mathcal{D}^2[u^n]-\left(\beta^2-\beta^4+\frac{\beta^6}{6}\right)\mathcal{D}^3[u^n]. \end{align} We note that for implementation, each operator is applied successively, and is defined by \[ \mathcal{D}^{(p+1)}[u] := \mathcal{D}[\mathcal{D}^{p}[u]], \quad \mathcal{D}^{0}[u] := u. \] \subsection{Stability} \label{subsec:Stability} There remains one critical issue: the choice of the free parameter $\beta$. In 1974, N{\o}rsett studied a similar single-step multiderivative method for the heat equation \cite{Norsett1974} and he too, had a free parameter in his solver. We follow his lead on the Von-Neumann analysis based on his MOL discretization, but in this work we optimize $\beta$ to obtain stiff decay, whereas N{\o}rsett chose $\beta$ to maximize the order of accuracy of the solver. Consider the linear test problem \[ \frac{dy}{dt} = \lambda y, \quad \quad y(0) = 1, \quad \lambda \in \mathbb{C}, \] whose exact solution $y(t)$ satisfies \[ y(t+h) = e^z y(t), \quad z = h\lambda \in \mathbb{C}. \] Application of \eqref{eqn:Scheme_D} to this test problem results in \[ y(t+h) = \sum_{p=0}^P L^{(-1)}_p(\beta^2) \hat{D}^p y(t) = \phi(z)y(t) \] where \[ \hat{D} = \frac{-(z/\beta^2)}{1-(z/\beta^2)} = 1-\left(1-(z/\beta^2)\right)^{-1}. 
\] The generalized Laguerre polynomials satisfy many identities, the following of which is the most pertinent: \begin{equation} \label{eqn:Laguerre_Identity} L^{(0)}_{p+1}(x)-L^{(0)}_{p}(x) = L^{(-1)}_{p+1}(x) = \left(\frac{x}{p+1}\right)\frac{d}{dx}L^{(0)}_{p+1}(x). \end{equation} Here, $L^{(0)}_p(x)$ is the standard Laguerre polynomial $L_p(x)$. Following standard definitions, we say that a numerical scheme is \textit{$A$-stable}, provided $|\phi|\leq 1$ in the left-half of the complex plane $z\in \mathbb{C}^-$. Likewise, a scheme exhibits \textit{stiff decay} if $\phi(z) \to 0$ as $Re(z) \to -\infty$. If an $A$-stable method also exhibits stiff decay, it is $L$-stable. Now, observing that $\hat{D} \to 1$ as $Re(z) \to -\infty$, we find that \begin{equation} \label{eqn:limit} \lim_{z\to -\infty} \phi(z) = \sum_{p=0}^P L^{(-1)}_p(\beta^2) = L^{(0)}_0(\beta^2)+\sum_{p=1}^P \left(L^{(0)}_p(\beta^2)-L^{(0)}_{p-1}(\beta^2)\right) =L^{(0)}_P(\beta^2), \end{equation} where we have used the first two expressions in \eqref{eqn:Laguerre_Identity} to introduce a telescoping sum. We are now prepared to prove the following. \begin{theorem} Let $u(x,t)$ be an approximate solution to the heat equation \eqref{eqn:heat}, given by the successive convolution scheme \eqref{eqn:Scheme_D}. Then, \begin{enumerate} \item If $\beta^2=x_1^{(P)}$ is chosen as the smallest root of $L'_{P+1}(x) = (L^{(0)}_{P+1}(x))'$, then the scheme achieves order $P+1$, but does not exhibit stiff decay. \item If $\beta^2 = x_1^{(P)}$ is chosen as the smallest root of $L_{P}(x) = L^{(0)}_{P}(x)$, then the scheme achieves order $P$, and exhibits stiff decay. \item Following the first strategy, the schemes are $A$-stable for $P=1,2,3$, whereas the second strategy ensures $L$-stability. For both strategies, $A(\alpha)$-stability is achieved for $P>3$, with large values of $\alpha \approx \pi/2$.
\end{enumerate} \end{theorem} \begin{proof} The proof follows by applying the maximum modulus principle coupled with \eqref{eqn:limit}. For part 1, upon examining the truncation error \eqref{eqn:Trunc}, we see that an additional order of accuracy can be gained if we choose \[ L^{(-1)}_{P+1}(\beta^2) = \left(\frac{\beta^2}{P+1}\right)L'_{P+1}(\beta^2) = 0. \] However, $L_{P}(\beta^2)\neq 0$ for this choice, and so stiff decay does not hold. For part 2, we instead enforce stiff decay, but then the truncation error is of order $P$. Finally, part 3 is demonstrated by the maximum amplification factors $\phi$ along the imaginary axis, as shown for both strategies in Figure \ref{fig:Stab}. In particular, we observe that $|\phi(iy)|\leq 1$ for $P=1,2,3$. \end{proof} \begin{figure}[h!] \subfloat[Maximal Order]{\includegraphics[width=0.48\linewidth]{Stability_Plot_Maximal_Order}} \hspace{.02\linewidth} \subfloat[Stiff Decay]{\includegraphics[width=0.48\linewidth]{Stability_Plot_Stiff_Decay}} \caption{Maximum amplification factors $|\phi(iy)|$ for the first few orders $P$, with (a) maximal order, or (b) stiff decay. When maximizing order, the first 3 schemes exhibit $A$-stability, whereas ensuring stiff decay leads to $L$-stable schemes. For $P>3$, both schemes become $A(\alpha)$-stable. } \label{fig:Stab} \end{figure} \begin{remark} In \cite{Norsett1974}, the scheme was chosen to maximize the order of accuracy, implicitly leading to eliminating the first term in the truncation error \eqref{eqn:Trunc}, which is equivalent to the first strategy. However, in this work we follow the second strategy, and choose $\beta^2$ as the smallest root of $L_P(x)$ to ensure stiff decay. \end{remark} For comparison, in Table \ref{tab:LagZeros} we record the values of $\beta^2$ chosen for each order $1\leq P \leq 6$, alongside those of N{\o}rsett. For all of our solvers, we choose $\beta$ to be the largest possible value that still yields provable stiff decay.
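The stiff-decay values of $\beta^2$ in Table \ref{tab:LagZeros} are simply the smallest positive roots of $L_P$, and they can be reproduced independently. The scan-and-bisect sketch below (our own check, standard library only) recovers the tabulated four-digit values, e.g. $2-\sqrt{2}\approx 0.5858$ for $P=2$.

```python
import math

def laguerre(n, x):
    # L_n^{(0)}(x) via the three-term recurrence
    if n == 0:
        return 1.0
    prev, cur = 1.0, 1.0 - x
    for k in range(1, n):
        prev, cur = cur, ((2 * k + 1 - x) * cur - k * prev) / (k + 1)
    return cur

def smallest_root(f, step=1e-3, tol=1e-12):
    # scan right from 0 to the first sign change, then bisect
    a = 0.0
    while f(a) * f(a + step) > 0:
        a += step
    b = a + step
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

for P in range(1, 7):
    beta2 = smallest_root(lambda x: laguerre(P, x))
    print(P, f"{beta2:.4f}")   # 1.0000, 0.5858, 0.4158, 0.3225, 0.2636, 0.2228
```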
\begin{table}[htbp] \begin{center} \caption{Values of $\beta^2$ chosen for orders $P=1,2,\ldots 6$. The first column are those used in our schemes, and uniquely guarantee stiff decay and $A(0)$-stability. For comparison, we also display the values in N{\o}rsett \cite{Norsett1974} which give optimal order $P+1$, at the expense of stiff decay. \label{tab:LagZeros} } \begin{tabular}{|c||c|c||c|c|} \hline & \multicolumn{2}{c||}{Stiff Decay}& \multicolumn{2}{c|}{Maximal Order} \\ \hline $P$ & $\beta^2$ & $L_P(\beta^2)$ & $\beta^2$ & $L_P(\beta^2)$ \\ \hline $1$ & 1.0000 & 0 & 2.0000 & -1.0000 \\ \hline $2$ & 0.5858 & 0 & 1.2679 & -0.7320 \\ \hline $3$ & 0.4158 & 0 & 0.9358 & -0.6304 \\ \hline $4$ & 0.3225 & 0 & 0.7433 & -0.5768 \\ \hline $5$ & 0.2636 & 0 & 0.6170 & -0.5436 \\ \hline $6$ & 0.2228 & 0 & 0.5277 & -0.5211 \\ \hline \end{tabular} \end{center} \end{table} \section{Resolvent expansions for multiple spatial dimensions} \label{sec:multiple} We extend the 1D solver to multiple spatial dimensions through the use of dimensional splitting. Our key observation is that we can use the factorization property of the exponential to perform the series expansion. For instance, in three dimensions, we have \begin{equation} e^{\gamma\Delta t \nabla^2} = e^{-\beta^2\left(-\frac{\partial_{xx}}{\alpha^2}\right)}e^{-\beta^2\left(-\frac{\partial_{yy}}{\alpha^2}\right)}e^{-\beta^2\left(-\frac{\partial_{zz}}{\alpha^2}\right)}. \end{equation} Now, we first replace each term with the identity \eqref{eqn:Laguerre} dimension by dimension, and then truncate the expansions which will be in terms of the univariate operators $\mathcal{L}_\gamma^{-1}$ and $\mathcal{D}_\gamma$ for $\gamma = \{x,y,z\}$ as defined by \eqref{eqn:MHL}, and \eqref{eqn:Scheme_D} acting on a function $u^n(x,y,z)$. 
This infinite sum with three indices must then be truncated to order $P$, and after a change of indices we find \begin{equation} \label{eqn:EPL_3} E_P = \sum_{p,q,r} \binom{P-1}{p,q,r} L^{(-1)}_p(\beta^2)L^{(-1)}_q(\beta^2)L^{(-1)}_r(\beta^2)\mathcal{D}_x^p \mathcal{D}_y^q\mathcal{D}_z^r, \end{equation} in 3D, with the corresponding 2D operator given by \begin{equation} \label{eqn:EPL_2} E_P = \sum_{p,q} \binom{P-1}{p,q} L^{(-1)}_p(\beta^2)L^{(-1)}_q(\beta^2)\mathcal{D}_x^p \mathcal{D}_y^q. \end{equation} Here we adopt the notation that sums are taken over all non-negative indices that sum to $P$, and the multinomial coefficients are defined such that $\binom{n}{p,q,r} = \frac{n!}{p!q!r!}$ and $\binom{n}{p,q} = \frac{n!}{p!q!}$. \begin{remark} The proof of stability for the multi-dimensional algorithm follows directly from that of the one-dimensional case, with the same approach applied to each spatial dimension (i.e. $\phi(z) = \phi_x(z)\phi_y(z)$ for the 2D case, and similarly for the 3D case). \end{remark} \section{The heat equation} \label{sec:Linear} \subsection{Heat equation in 1D} \label{subsec:1dheat} We first illustrate the accuracy of our method for the 1D heat equation defined in \eqref{eqn:heat}. We consider initial conditions $u(x,0) = \sin(x)$, for $x \in [0,2\pi]$ with periodic boundary conditions. We integrate up to a final time of $T=4$, and set $\gamma = 0.18^2$. We use the fast convolution algorithm that is fourth order accurate in space ($M=4$), and set the spatial grid size to be $\Delta x = \frac{2\pi}{1024} \approx 0.0061$. This ensures that the dominant error in the solution is temporal. We compute errors by the $L^{\infty}$-norm, and compare against the exact solution $u(x,T) = e^{-\gamma T}u_0(x)$ at $T=4$. The result of a temporal refinement study for $P=1,2$ and $3$ is presented in Table \ref{tab:refinement_1D_1}. 
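Because the initial data consists of a single Fourier mode, the temporal error of the scheme can be predicted by a scalar computation: on the $k=1$ mode the operator $\mathcal{D}$ acts as multiplication by $d = \gamma\Delta t/(\beta^2+\gamma\Delta t)$, using $\alpha^2 = \beta^2/(\gamma\Delta t)$. The sketch below is our own cross-check (with the stiff-decay values of $\beta^2$); it reproduces the error magnitudes and observed orders of Table \ref{tab:refinement_1D_1}.

```python
import math

def coeffs(P, b2):
    # L_p^{(-1)}(beta^2) for p = 0..P, as written out for the P = 1, 2, 3 schemes
    c = [1.0, -b2, -(b2 - b2**2 / 2), -(b2 - b2**2 + b2**3 / 6)]
    return c[:P + 1]

def mode_error(P, b2, gamma, T, dt):
    # amplification of the k = 1 mode of u0 = sin(x), versus the exact factor e^{-gamma T}
    d = (gamma * dt) / (b2 + gamma * dt)
    phi = sum(c * d**p for p, c in enumerate(coeffs(P, b2)))
    return abs(phi**round(T / dt) - math.exp(-gamma * T))

gamma, T = 0.18**2, 4.0
for P, b2 in [(1, 1.0), (2, 0.5858), (3, 0.4158)]:
    errs = [mode_error(P, b2, gamma, T, 0.1 / 2**j) for j in range(4)]
    orders = [math.log2(errs[j] / errs[j + 1]) for j in range(3)]
    print(P, f"{errs[0]:.4e}", [round(o, 2) for o in orders])
```

For $P=1$ the scheme reduces to backward Euler on each mode, and the $\Delta t = 0.1$ error lands on the tabulated $1.84\times 10^{-4}$.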
\begin{table}[htbp] \begin{center} \caption{Refinement study for a 1D Heat equation defined in \ref{subsec:1dheat}.} \label{tab:refinement_1D_1} \begin{centering} \begin{tabular}{|c||c|c||c|c||c|c|} \hline & \multicolumn{2}{c||}{$P=1$} & \multicolumn{2}{c||}{$P=2$ } & \multicolumn{2}{c|}{$P=3$} \\ \hline $\Delta t$ & $L^{\infty}$ error & order & $L^{\infty}$ error & order & $L^{\infty}$ error & order \\ \hline $0.1$ & $\num{0.00018405}$ & $-$ & $\num{0.0000016255}$ & $-$ & $\num{0.000000024225}$ & $-$ \\ \hline $0.05$ & $\num{0.000092121}$ & $0.9985$ & $\num{0.00000040841}$ & $1.9928$ & $\num{0.0000000030620}$ & $2.9839$ \\ \hline $0.025$ & $\num{0.000046084}$ & $0.9993$ & $\num{0.00000010236}$ & $1.9964$ & $\num{0.00000000038501}$ & $2.9915$ \\ \hline $0.0125$ & $\num{0.000023048}$ & $0.9996$ & $\num{0.000000025622}$ & $1.9982$ &$\num{0.000000000048402}$ & $2.9918$ \\ \hline $0.00625$ & $\num{0.000011525}$ & $0.9998$ & $\num{0.0000000064097}$ & $1.9990$ & $\num{0.0000000000062021}$ & $2.9642$ \\ \hline \end{tabular} \end{centering} \end{center} \end{table} \subsection{Heat equation in 2D} \label{subsec:2dheat} As a second example, we present results for the 2D heat equation. We consider initial conditions $u(x,y,0)=\sin(x)\sin(y)$, for $(x,y) \in [0,2\pi] \times [0,2\pi]$ with periodic boundary conditions. We use a uniform mesh of size $\Delta x=\Delta y= 2\pi/512 \approx 0.0123$. Likewise, the $L^{\infty}$-error is computed by comparing against the exact solution $u(x,y,T) = e^{-2\gamma T} u_0(x,y)$ at the final time $T=1$. In Table \ref{tab:refinement_2D_1}, we present results for a temporal refinement study for orders $P=1,2,$ and $3$. 
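Consistent with the splitting of Section \ref{sec:multiple}, the amplification of the mode $\sin(x)\sin(y)$ is the square of the one-dimensional factor ($\phi = \phi_x\phi_y$), so the table entries can again be predicted by a scalar computation. The sketch below (our check) recovers the $P=1$, $\Delta t = 0.1$ entry.

```python
import math

gamma, T, dt = 0.18**2, 1.0, 0.1
b2 = 1.0                                   # beta^2 for P = 1 (stiff decay)
d = (gamma * dt) / (b2 + gamma * dt)       # action of D on the k = 1 mode
phi = 1.0 - b2 * d                         # 1D amplification factor, P = 1
err = abs((phi * phi)**round(T / dt) - math.exp(-2 * gamma * T))
print(f"{err:.2e}")   # ~9.82e-05, cf. the P = 1, dt = 0.1 entry of Table 3
```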
\begin{table}[htbp] \begin{center} \caption{Refinement study for a 2D Heat equation defined in \ref{subsec:2dheat}.} \label{tab:refinement_2D_1} \begin{centering} \begin{tabular}{|c||c|c||c|c||c|c|} \hline & \multicolumn{2}{c||}{$P=1$} & \multicolumn{2}{c||}{$P=2$ } & \multicolumn{2}{c|}{$P=3$} \\ \hline $\Delta t$ & $L^{\infty}$-error & order& $L^{\infty}$-error & order & $L^{\infty}$-error & order \\ \hline $0.1$ & $\num{0.000098182}$ & $-$ & $\num{0.00000086717}$ & $-$ & $\num{0.000000012925}$ & $-$ \\ \hline $0.05$ & $\num{0.000049143}$ & $0.9985$ & $\num{0.00000021788}$ & $1.9928$ & $\num{0.0000000016354}$ & $2.9825$ \\ \hline $0.025$ & $\num{0.000024584}$ & $0.9992$ & $\num{0.000000054608}$ & $1.9963$ & $\num{0.00000000020791}$ & $2.9756$ \\ \hline $0.0125$ & $\num{0.000012295}$ & $0.9996$ & $\num{0.000000013672}$ & $1.9979$ &$\num{0.000000000029204}$ & $2.8317$ \\ \hline \end{tabular} \end{centering} \end{center} \end{table} \section{Reaction-diffusion systems} \label{sec:Source} We next extend our method to nonlinear reaction-diffusion systems of the form \begin{align} \label{eqn:Source} {\bf u}_t = {\bf D} \nabla^2 {\bf u} + {\bf{F}}({\bf {u}}), \quad ({\bf x},t) \in \Omega \times (0,T], \end{align} where ${\bf u} = (u_1, u_2, \cdots, u_N)$, with $u_i = u_i({\bf x},t)$, $\bf D$ is a diffusion coefficient matrix, and the reaction term ${\bf F} := (f_1, f_2, \cdots, f_N)$ is a function of $u_i$, $(i=1,2,\cdots,N)$. In the above, $\Omega \subset {\mathbb R}^N$ is a bounded domain, and we assume appropriate initial values and boundary conditions. We shall view the diffusion as being the linear part of the differential operator, and invert this linear part analytically, using successive convolution. 
To derive the scheme, we use operator calculus to first write \begin{align}\label{eqn:operator_calculus} \left(\partial_t - {\bf D} \nabla^2\right){\bf u} = {\bf{F}} \quad \implies \quad \left(e^{-{\bf D} t \nabla^2} {\bf u}\right)_t = e^{-{\bf D} t \nabla^2}{\bf{F}}, \end{align} where $e^{-{\bf D} t \nabla^2}$ is a pseudo-differential operator. Upon integrating \eqref{eqn:operator_calculus} over the interval $[t, t+\Delta t]$, we arrive at the update equation \begin{align} {\bf u} (t+\Delta t) - e^{ {\bf D} \Delta t \nabla^2}{\bf u} (t) =& \int_t^{t+\Delta t} e^{ {\bf D} (t+\Delta t-\tau) \nabla^2 }{\bf F} (\tau) d\tau \nonumber \\ \label{eqn:first} =& \int_0^{\Delta t} e^{{\bf D}(\Delta t- \tau) \nabla^2}{\bf F} (t+ \tau)d\tau, \end{align} where we have made use of the abbreviated notation, ${\bf F}(t) := {\bf F}({\bf u}({\bf x},t))$. On the left hand side, the diffusion terms have been collected by this pseudo-differential operator, and will be approximated using the successive convolution techniques developed above. The reaction terms on the right hand side \eqref{eqn:first} are fully nonlinear, and we must consider nonlinear stability when choosing a method of discretization. We first consider approximating the integral on the right hand side \eqref{eqn:first} with the trapezoidal rule. This defines a single-step update equation, which will be second order accurate \begin{align} \label{eqn:2nd_scheme} {\bf u}(t+\Delta t) - e^{{\bf D} \Delta t \nabla^2}{\bf u}(t) = \frac{\Delta t}{2} \left[ e^{{\bf D} \Delta t \nabla^2} {\bf F}(t) +{\bf F}(t+\Delta t) \right]. \end{align} We may also obtain a single-step third order scheme, using multiderivative integration \cite{seal2014high}. 
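The structure of \eqref{eqn:2nd_scheme} can be exercised on a scalar analogue, replacing $e^{{\bf D}\Delta t\nabla^2}$ with $e^{a\Delta t}$. The model problem below is our own choice: $u' = au + F(u)$ with $a = -1$ and $F(u) = u^2$, which has the exact solution $u(t) = 1/(1+e^t)$ for $u(0) = 1/2$; lagging the implicit reaction term in a fixed-point loop yields clean second-order convergence.

```python
import math

a = -1.0
F = lambda u: u * u                      # assumed model reaction term
exact = lambda t: 1.0 / (1.0 + math.exp(t))

def solve(dt, T):
    u, E = 0.5, math.exp(a * dt)
    for _ in range(round(T / dt)):
        rhs = E * (u + 0.5 * dt * F(u))  # explicit part of the trapezoidal update
        v = u
        for _ in range(50):              # fixed-point iteration on the implicit term
            v = rhs + 0.5 * dt * F(v)
        u = v
    return u

errs = [abs(solve(0.1 / 2**j, 1.0) - exact(1.0)) for j in range(3)]
orders = [math.log2(errs[j] / errs[j + 1]) for j in range(2)]
print([round(o, 2) for o in orders])     # close to 2
```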
By replacing the integrand \eqref{eqn:first} with a third order Hermite-Birkhoff interpolant and performing exact integration of the resulting function, we arrive at \begin{align} \label{eqn:3rd_scheme} {\bf u}(t+\Delta t) - e^{{\bf D} \Delta t \nabla^2}{\bf u}(t) = e^{{\bf D} \Delta t \nabla^2} \left[ \dfrac{2 \Delta t}{3} {\bf F}(t) + \dfrac{\Delta t}{6} \left( -{\bf D}\Delta t \nabla^2 {\bf F}(t) + \Delta t \dfrac{d{\bf F}}{dt}(t) \right)\right] +\dfrac{\Delta t}{3}{\bf F}(t+\Delta t), \end{align} where $\frac{d{\bf F}}{dt}(t) = \frac{d{\bf F}}{d{\bf u}}(t) \cdot ({\bf D}\nabla^2 {\bf u}(t)+{\bf F}(t))$. The Hermite-Birkhoff interpolant that matches the integrand in \eqref{eqn:first} at times $\tau = 0$ and $\tau=\Delta t$, as well as its derivative at time $\tau = 0$, produces the quadrature rule in \eqref{eqn:3rd_scheme}. \begin{remark} The proposed schemes in \eqref{eqn:2nd_scheme} and \eqref{eqn:3rd_scheme} produce nonlinear equations for ${\bf u}(x,t+\Delta t)$ that need to be solved at each time step. Therefore, any implicit solver will necessarily be problem dependent. \end{remark} For the problems examined in this work, we make use of simple fixed-point iterative schemes. We stabilize our iterative solvers by linearizing ${\bf F} ({\bf u})$ about a background state ${\bf F}_{\bf u}({\bf u}^{\ast})$, which depends on the problem under consideration. \subsection{A discretization of the Laplacian operator} Upon perusing the third order update equation \eqref{eqn:3rd_scheme}, we will need to use successive convolution to replace the pseudo-differential operator $\exp\left({\bf D} \Delta t \nabla^2\right)$, as well as the Laplacian operator $\nabla^2$. This latter point has been detailed in \cite{Causley_2013}, and so we comment briefly on it here.
Using the one-dimensional expansion \eqref{eqn:Lap_D}, we observe that the two-dimensional Laplacian is similarly given by \[ -\frac{\nabla^2}{\alpha^2} = -\frac{\partial_{xx}}{\alpha^2}-\frac{\partial_{yy}}{\alpha^2} = \sum_{p=1}^\infty \left(\mathcal{D}_x^p+\mathcal{D}_y^p\right), \] and can be truncated at the appropriate accuracy $p=P$. Here, the subscripts indicate that the convolution is only in one spatial direction, and the other variable is held fixed. Thus, $\mathcal{D}_x$ is applied along horizontal lines for fixed $y$-values, and likewise for $\mathcal{D}_y$. \section{Nonlinear numerical results} \label{sec:Numerical} \subsection{Allen-Cahn} \label{subsec:allencahn} We examine in greater detail the application of our scheme to the Allen-Cahn (AC) equation \cite{Allen1979}, \begin{align} \label{eqn:AC} u_t = \epsilon^2 \nabla^2 u + f(u), \qquad (x,t) \in \Omega \times (0,T], \end{align} where the reaction term is $f(u) = u - u^3$, $\Omega \subset {\mathbb R}^d$ is a bounded domain, and $u$ satisfies homogeneous Neumann boundary conditions. For our fixed point iteration, we linearize $f$ about the stable fixed points $u^{\ast} = \pm 1$, which satisfy $f(u^{\ast}) = 0$ with $f'(u^{\ast}) = -2$. For example, the second order scheme from \eqref{eqn:2nd_scheme} becomes \begin{equation} \label{eqn:AC_2ndorder_iteration} \left(1 +\Delta t \right)u^{n+1,k+1} = e^{\epsilon^2 \Delta t \nabla^2} \left( u^n + \frac{\Delta t}{2} f^n \right)+ \frac{\Delta t}{2} \left(f^{n+1,k} +2u^{n+1,k} \right), \end{equation} where $n$ indicates the time step as before, and now $k$ is the iteration index. By lagging the nonlinear term $f^{n+1,k}$, the fixed point update is made explicit.
Likewise, the third order scheme from \eqref{eqn:3rd_scheme} becomes \begin{align} \label{eqn:AC_3rdorder_iteration} \left( 1+\frac{2\Delta t}{3} \right)u^{n+1,k+1} = e^{\epsilon^2 \Delta t \nabla^2} \left[u^n + \frac{2\Delta t}{3} f^n + \frac{\Delta t}{6} \left( -\epsilon^2 \Delta t \nabla^2 f^n +\Delta t f_t^n \right) \right] +\frac{\Delta t}{3} \left(f^{n+1,k} +2u^{n+1,k}\right). \end{align} Here, $e^{\epsilon^2 \Delta t \nabla^2}$ is again understood by replacing it with a resolvent expansion, which is a truncated series of successive convolution operators. \subsection{Allen-Cahn: One-dimensional test} \label{subsec:1dallen} We demonstrate the accuracy of our proposed schemes by simulating a well-known traveling wave solution \cite{chen1998applications, lee2014semi}, \begin{equation} \label{AC_initial} u_{AC}(x,t) = \frac{1}{2} \left( 1 - \tanh\left(\frac{x-T_s-st}{2\sqrt{2} \epsilon} \right) \right), \qquad x \in \Omega = [0,4], \quad 0 \leq t \leq T. \end{equation} Here, $s = \frac{3\epsilon}{\sqrt{2}} = 0.09 $ is the speed of the traveling wave, and we choose $\epsilon = 0.03\sqrt{2}$. Additionally, we choose the delay time $T_s := 1.5 - sT$, so that the exact solution satisfies $u_{AC}(1.5,T) = 0.5$. \begin{figure}[h] \centering \includegraphics[width = 0.3\textwidth]{AC1D_profile3} \caption{Traveling wave solutions $u(x,T)$ at $T=8$ using \eqref{eqn:AC_2ndorder_iteration} with two different time step sizes, compared with the exact profile in \eqref{AC_initial}.} \label{fig:AC_1d} \end{figure} Results for a final time of $T=8$ are shown in Figure \ref{fig:AC_1d}, with two different time steps. The solutions agree well with the exact solution. 
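It is straightforward to confirm that \eqref{AC_initial} solves \eqref{eqn:AC} exactly: substituting the profile produces a residual proportional to $\mathrm{sech}^2$ that cancels precisely when $s = 3\epsilon/\sqrt{2}$. A finite-difference spot check (our own; $T_s = 0$ here, since the shift merely translates the profile):

```python
import math

eps = 0.03 * math.sqrt(2)
s = 3 * eps / math.sqrt(2)              # wave speed, = 0.09
u = lambda x, t: 0.5 * (1.0 - math.tanh((x - s * t) / (2 * math.sqrt(2) * eps)))

def residual(x, t, h=1e-4):
    # u_t - eps^2 u_xx - (u - u^3), central differences
    ut = (u(x, t + h) - u(x, t - h)) / (2 * h)
    uxx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
    v = u(x, t)
    return ut - eps**2 * uxx - (v - v**3)

print(max(abs(residual(x, 0.3)) for x in [0.0, 0.03, 0.1, 0.3]))  # finite-difference error only
```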
\begin{table}[htbp] \begin{center} \caption{Refinement study for the 1D Allen-Cahn (AC) equation with an exact traveling wave solution (Section \ref{subsec:1dallen}).} \label{tab:refinement_AC} \begin{centering} \begin{tabular}{|c||c|c||c|c||c|c|} \hline & \multicolumn{2}{c||}{$P=1$} & \multicolumn{2}{c||}{$P=2$ } & \multicolumn{2}{c|}{$P=3$}\\ \hline $\Delta t$ & $L^{\infty}$ error & order & $L^{\infty}$ error & order & $L^{\infty}$ error & order \\ \hline $0.025$ & $\num{0.00028216}$ & $-$ & $\num{0.000013895}$ & $-$ & $\num{0.0000026060}$ & $-$ \\ \hline $0.0125$ & $\num{0.00014419}$ & $0.9686$ & $\num{0.0000036115}$ & $1.9439$ & $\num{0.00000039417}$ & $2.7249$ \\ \hline $0.0063$ & $\num{0.000072874}$ & $0.9845$ & $\num{0.00000092164}$ & $1.9703$ & $\num{0.000000055010}$ & $2.8411$ \\ \hline $0.0031$ & $\num{0.000036632}$ & $0.9923$ & $\num{0.00000023294}$ & $1.9842$ & $\num{0.0000000073122}$ & $2.9113$ \\ \hline $0.0016$ & $\num{0.000018365}$ & $0.9961$ & $\num{0.000000058695}$ & $1.9886$ & $\num{0.00000000095714}$ & $2.9335$ \\ \hline \end{tabular} \end{centering} \end{center} \end{table} In Table \ref{tab:refinement_AC}, we present the $L^{\infty}$-error in the numerical solution at a final time $T=1$, using the exact solution $u_{AC}(x,T)$ \eqref{AC_initial}. We observe first order accuracy from the Backward Euler method, and the expected orders of accuracy from the second \eqref{eqn:AC_2ndorder_iteration} and third \eqref{eqn:AC_3rdorder_iteration} order schemes. To ensure that the temporal error is dominant, we have used the fourth order accurate scheme (eq. \eqref{eqn:JL} with $M=4$), with $\Delta x = 2^{-9}$ to perform spatial integration in the successive convolutions. In principle, we can achieve higher order accuracy in space and time. The latter would require using higher order Hermite-Birkhoff interpolation to discretize the reaction term in \eqref{eqn:first}.
\subsection{Allen-Cahn: Two-dimensional test} \label{subsec:2dallen} We next solve the Allen-Cahn equation in two spatial dimensions. A standard benchmark problem involves the motion of a circular interface \cite{chen1998applications, lee2014semi, shen2010numerical}, for which an exact solution is known in the limiting case $\epsilon \to 0$. The radially symmetric initial conditions are defined by \begin{equation} \label{AC2d_initial} u(x,y,0) = \tanh \left( \frac{0.25 - \sqrt{(x-0.5)^2+(y-0.5)^2}}{\sqrt{2} \epsilon} \right), \end{equation} which has an interfacial circle ($u(x,y,0)=0$) centered at $(0.5,0.5)$, with a radius of $R_0 = 0.25$. This interfacial circle is unstable, and will shrink over time, driven by the mean curvature \cite{Allen1979}; in the scaling of \eqref{eqn:AC}, this reads \begin{equation}\label{eqn:meancurvature} V = \frac{dR}{dt} = -\frac{\epsilon^2}{R}. \end{equation} Here $V$ is the velocity of the moving interface, and $R$ is the radius of the interfacial circle at time $t$ (i.e., it is the radius of the curve defined by $u(x,y,t)=0$). By integrating \eqref{eqn:meancurvature} with respect to time, we solve for the radius as a function of time \begin{equation}\label{eqn:velocity} R(t) = \sqrt{R_0^2 - 2\epsilon^2 t}. \end{equation} The location where $\epsilon$ is placed in equation \eqref{eqn:AC} differs from other references \cite{chen1998applications, lee2014semi, shen2010numerical}. Therefore, we point out that our time scales have been appropriately rescaled for comparison. The moving interface problem was simulated using $\epsilon = 0.05$, $\Delta t = \frac{6.4 \times 10^{-4}}{\epsilon^2} = 0.256$, and $\Delta x = \Delta y = 2^{-8} \approx 0.0039$, which are based on the parameters used in \cite{lee2014semi}. The numerical solution is displayed in Figure \ref{fig:AC_2d}, and we observe that the interfacial circle shrinks, as is expected.
\begin{figure}[h] \centering \subfloat[$u(x,y,0)$]{\includegraphics[width = 0.30\textwidth]{AC2D_u0}} \subfloat[$u(x,y,\frac{T}{2})$]{\includegraphics[width = 0.30\textwidth]{AC2D_u1}} \subfloat[$u(x,y,T)$]{\includegraphics[width = 0.30\textwidth]{AC2D_u2}} \caption{Time evolution of the initial condition \eqref{AC2d_initial} of the Allen-Cahn equation, up to time $T=\frac{0.0256}{\epsilon^2} = 10.24$.} \label{fig:AC_2d} \end{figure} In Figure \ref{fig:AC_2d2}, we compare the evolution of the radii obtained by our second order scheme with the exact radius \eqref{eqn:velocity}, for two different values of the diffusion parameter $\epsilon$. The radius is measured by taking a slice of the solution along $y=0$, and then solving for the spatial point where $u=0$ using linear interpolation between the two closest points that satisfy $u(x,0,t) < 0$ and $u(x,0,t) > 0$. Refinement is performed with a fixed spatial mesh $\Delta x = \Delta y = 2^{-8}$, and time steps of $\Delta t = 0.2560, 0.1280$, and $0.0640$. Because the radius is derived as an exact solution in the limit (i.e., $\epsilon \to 0$) \cite{Allen1979}, we observe that the smaller value of $\epsilon$ is indeed more accurate. \begin{figure}[h] \centering \subfloat[$\epsilon=0.05$]{\includegraphics[width = 0.46\textwidth]{AC2D_Eps005_radius}} \subfloat[$\epsilon=0.01$]{\includegraphics[width = 0.46\textwidth]{AC2D_Eps001_radius}} \caption{Radii of the interfacial circle as a function of time ($ 0 \leq t \leq \frac{0.0256}{\epsilon^2}$) compared with the reference radius $R$ (red line) in \eqref{eqn:velocity}.} \label{fig:AC_2d2} \end{figure} We next perform a refinement study for the Allen-Cahn equations, but this time in two spatial dimensions. To do so, we must incorporate the multivariate successive convolution algorithms in \eqref{eqn:EPL_2} and \eqref{eqn:Lap_D} into the second \eqref{eqn:AC_2ndorder_iteration} and third \eqref{eqn:AC_3rdorder_iteration} order schemes.
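The radius-measurement step described above (zero crossing of a slice through the circle's center, located by linear interpolation) can be sketched in a few lines. The tanh profile, grid, and radius below are illustrative stand-ins, not the paper's data.

```python
import math

eps, R = 0.05, 0.2
xs = [i / 256 for i in range(257)]       # uniform slice through the circle's center
us = [math.tanh((R - abs(x - 0.5)) / (math.sqrt(2) * eps)) for x in xs]

# first sign change to the right of center, then interpolate linearly for u = 0
i = next(j for j in range(128, 256) if us[j] > 0 >= us[j + 1])
x0, x1, u0, u1 = xs[i], xs[i + 1], us[i], us[i + 1]
radius = x0 + (x1 - x0) * u0 / (u0 - u1) - 0.5
print(abs(radius - R) < 1e-4)            # True: recovers the radius to sub-grid accuracy
```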
Given that we do not have an exact solution, we compute successive errors in an $L^{\infty}$-norm. That is, we compute $||u_{\Delta t} - u_{\frac{\Delta t}{2}} ||_{\infty}$ for each time step $\Delta t$. Results are as expected, and are presented in Table \ref{tab:refinement_AC2D}. The parameters used are $\epsilon = 0.05$, $\Delta x = \Delta y =2^{-9}$, and the final computation time is $T=0.5$. Again, the quadrature method is fourth order accurate in space, so that the dominant source of error is temporal. \begin{table}[htbp] \begin{center} \caption{Refinement study for the Allen-Cahn equation in 2D, with homogeneous Neumann boundary conditions.} \label{tab:refinement_AC2D} \begin{centering} \begin{tabular}{|c||c|c||c|c||c|c|} \hline & \multicolumn{2}{c||}{$P=1$} & \multicolumn{2}{c||}{$P=2$ } & \multicolumn{2}{c|}{$P=3$}\\ \hline $\Delta t$ & $L^{\infty}$ error & order & $L^{\infty}$ error & order & $L^{\infty}$ error & order \\ \hline $0.0063$ & $\num{0.00065941}$ & $-$ & $\num{0.00011740}$ & $-$ & $\num{0.0000051744}$ & $-$ \\ \hline $0.0031$ & $\num{0.00033065}$ & $0.9959$ & $\num{0.000032637}$ & $1.8468$ & $\num{0.00000078351}$ & $2.7234$ \\ \hline $0.0016$ & $\num{0.00016563}$ & $0.9973$ & $\num{0.0000086726}$ & $1.9120$ & $\num{0.00000010811}$ & $2.8574$ \\ \hline $0.0008$ & $\num{0.000082894}$ & $0.9987$ & $\num{0.0000022389}$ & $1.9537$ & $\num{0.000000012961}$ & $3.0602$ \\ \hline \end{tabular} \end{centering} \end{center} \end{table} \subsection{Two-dimensional test: The Fitzhugh-Nagumo system} \label{subsec:fitzhugh-nagumo} Finally, we solve a well-known reaction diffusion system that arises in the modeling of neurons, the Fitzhugh-Nagumo (FHN) model \cite{fife1979mathematical,keener1998mathematical}.
The FHN system consists of an activator $u$ and an inhibitor $v$, which are coupled via nonlinear reaction diffusion equations \begin{equation} \label{eqn:FHN} \begin{aligned} u_t &= D_{u} \nabla^2 u + \frac{1}{\delta} h(u,v), \\ v_t &= D_{v} \nabla^2 v + g(u,v), \end{aligned} \end{equation} where $D_u$, $D_v$ are the diffusion coefficients for $u$ and $v$, respectively, and $0 < \delta \ll 1$ is a real parameter. We use the classical cubic FHN local dynamics \cite{keener1998mathematical}, that are defined as \begin{equation} \begin{aligned} \label{eqn:dynamics} h(u,v) &= Cu(1-u)(u-a)-v, \\ g(u,v) &= u - dv, \end{aligned} \end{equation} where $C, a$ and $d$ are dimensionless parameters. The parameters we use are the same as in \cite{christlieb2015high,olmos2009pseudospectral}: $D_u = 1$, $D_v =0$, $a=0.1$, $C=1$, $d=0.5$, and $\delta = 0.005$. The diffusion coefficient for the inhibitor is $D_v=0$, identical to the work found in \cite{olmos2009pseudospectral,krinsky1998models, starobin1997common, xie2001coexistence}. The second order scheme from \eqref{eqn:AC_2ndorder_iteration} is applied to each variable $u$ and $v$ separately. This defines the numerical scheme as \begin{equation} \begin{aligned} u^{n+1} &= e^{\Delta t \nabla^2} \left( u^n + \dfrac{\Delta t}{2\delta} h^n \right) + \dfrac{\Delta t}{2\delta} h^{n+1}, \\ v^{n+1} &= \left( v^n + \frac{\Delta t}{2} g^n \right) + \dfrac{\Delta t}{2} g^{n+1}, \label{eqn:FHN_2ndorder} \end{aligned} \end{equation} where $h^n = h(u^n,v^n)$ and $g^n = g(u^n,v^n)$. We again use a stabilized fixed point iteration to address the nonlinear reaction terms. 
Because $(u^{\ast},v^{\ast}) = (0,0)$ is the only stable excitable fixed point of equation \eqref{eqn:FHN}, we simply construct the Jacobian of ${\bf{F}} = (h,g)$ of \eqref{eqn:dynamics} about this point: \begin{equation} \label{eqn:Jacobian} \mathcal{J}_{\bf{F}} (u^{\ast},v^{\ast}) \cdot \begin{bmatrix} u -u^{\ast} \\[0.3em] v -v^{\ast} \end{bmatrix} \equiv \dfrac{\partial (h,g)}{\partial (u,v)}|_{(0,0)} \cdot \begin{bmatrix} u \\[0.3em] v \end{bmatrix} = \begin{bmatrix} -C a& -1 \\[0.3em] 1 & -d \end{bmatrix} \begin{bmatrix} u \\[0.3em] v \end{bmatrix} = \begin{bmatrix} -Cau -v \\[0.3em] u - dv \end{bmatrix}. \end{equation} The resulting second order scheme is \begin{equation} \label{eqn:FHN_2ndorder_iteration} \begin{bmatrix} 1+\dfrac{Ca \Delta t}{2 \delta}& \dfrac{\Delta t}{2 \delta} \\[0.5em] -\dfrac{\Delta t}{2} & 1+\dfrac{d \Delta t}{2} \end{bmatrix} \begin{bmatrix} u^{n+1,k+1} \\[0.3em] v^{n+1,k+1} \end{bmatrix} = \begin{bmatrix} e^{\Delta t \nabla^2} \left( u^n + \dfrac{\Delta t}{2\delta} h^n \right) \\[0.5em] v^n + \dfrac{\Delta t}{2} g^n \end{bmatrix} + \dfrac{\Delta t}{2} \begin{bmatrix} \frac{1}{\delta} \left( h^{n+1,k} +Ca u^{n+1,k} + v^{n+1,k} \right) \\[0.5em] g^{n+1,k} - u^{n+1,k} + d v^{n+1,k} \end{bmatrix}, \end{equation} where $k$ is the iteration number, and $n$ is the time level. In Figure \ref{fig:FHN_2d}, we present the numerical evolution of the activator $u$ over the domain $\Omega=[-20,20] \times [-20,20]$, with periodic boundary conditions. We observe spiral waves similar to those that form in other recent work from the literature \cite{christlieb2015high}.
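At a single spatial point (diffusion dropped, i.e. $e^{\Delta t\nabla^2}$ replaced by the identity for this sketch), the iteration \eqref{eqn:FHN_2ndorder_iteration} amounts to one fixed $2\times 2$ solve repeated until the lagged terms settle; the starting state below is an arbitrary illustrative choice.

```python
C, a, d, delta, dt = 1.0, 0.1, 0.5, 0.005, 1e-4

h = lambda u, v: C * u * (1 - u) * (u - a) - v
g = lambda u, v: u - d * v

def step(u, v, iters=50):
    rhs_u = u + dt / (2 * delta) * h(u, v)           # explicit parts of the update
    rhs_v = v + dt / 2 * g(u, v)
    # the fixed 2x2 matrix on the left-hand side
    a11, a12 = 1 + C * a * dt / (2 * delta), dt / (2 * delta)
    a21, a22 = -dt / 2, 1 + d * dt / 2
    det = a11 * a22 - a12 * a21
    uk, vk = u, v
    for _ in range(iters):                           # lagged (stabilized) fixed point
        b1 = rhs_u + dt / (2 * delta) * (h(uk, vk) + C * a * uk + vk)
        b2 = rhs_v + dt / 2 * (g(uk, vk) - uk + d * vk)
        uk, vk = (a22 * b1 - a12 * b2) / det, (a11 * b2 - a21 * b1) / det
    return uk, vk

u1, v1 = step(0.2, 0.0)   # a super-threshold excitation (u > a) begins to grow
print(u1, v1)
```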
\begin{figure}[h] \begin{center} \centering \subfloat[$T=1$]{\includegraphics[width = 0.30\textwidth]{FHN_periodic_T1}} \subfloat[$T=2$]{\includegraphics[width = 0.30\textwidth]{FHN_periodic_T2}} \subfloat[$T=4$]{\includegraphics[width = 0.30\textwidth]{FHN_periodic_T4}} \caption{Temporal evolution of the concentration of activator $u$ using \eqref{eqn:FHN_2ndorder_iteration}.} \label{fig:FHN_2d} \end{center} \end{figure} \section{Conclusions} \label{sec:conclusions} In this work, we have introduced a numerical scheme for parabolic problems, which achieves high order in space and time for the linear heat equation and some sample nonlinear reaction diffusion equations. The scheme is $\mathcal O(N)$ for $N$ spatial points, and exhibits stiff decay for any order of accuracy. In the future, we intend to explore parallelization of the algorithm using domain decomposition, from which we expect strong performance due to the decoupled spatial factorization that we employ. {\bf Acknowledgements. } We would like to thank the anonymous reviewer and editor for the encouragement to improve the manuscript. This work was supported in part by AFOSR grants FA9550-12-1-0343, FA9550-12-1-0455, FA9550-15-1-0282, NSF grant DMS-1418804, New Mexico Consortium grant NMC0155-01, and NASA grant NMX15AP39G. \bibliographystyle{abbrv}
\section{Introduction} Astrophysicists have had a long-standing interest in the physics of elementary processes in super-strong magnetic fields, with field strengths $B\gtrsim 10^{12}$~G. The cyclotron lines observed in the spectra of Her X-1 \cite{tr78} and of 4U 0115+63 \cite{gr80} as well as in many other X-ray pulsars have energy centers which correspond to field intensities in this range. There is also evidence for such field strengths in the spin-down rates of radio pulsars. If the spin-down is attributed to energy loss to electromagnetic radiation from a spinning magnetic dipole, many observations are consistent with field strengths of the order of $10^{12}$-$10^{13}$~G, with some pulsar field strengths well in excess of even $10^{13}$~G \cite{ha91,ts86}. In addition, there is tantalizing evidence for cyclotron lines in the spectra of gamma-ray bursts seen by the gamma-ray burst detector aboard the GINGA satellite \cite{mu88}, with line center energies consistent with field intensities of order $10^{12}$~G. The association of Soft Gamma Repeaters with supernova remnants provides indirect evidence for even stronger fields. If the 8~s periodicity of the March 5 1979 event is identified with the rotation period of the neutron star, the known age of the N49 remnant may be used to estimate that the field strength is approximately $6\times 10^{14}$~G \cite{dt92}. Moreover, such a strong field could help resolve the puzzle of how the March 5 1979 event could ostensibly have been so extravagantly in excess of the Eddington limit ($L\approx 10^4L_{\text{Edd}}$), by suppressing the Thomson cross-section for photons propagating nearly parallel to the field lines \cite{pa92}. In fields such as these, comparable in strength to the critical field strength $B_c\equiv m^2c^3/e\hbar = 4.414\times 10^{13}$~G, all calculations of elementary processes must be carried out using Quantum Electrodynamics. 
There have been many such calculations over the past two decades, covering topics such as cyclotron absorption \cite{dv78}, cyclotron decay \cite{mz81,hrw82}, single photon pair production \cite{k54,dh83}, pair annihilation to a single photon \cite{w79,db80,h86,wphr86}, Compton scattering \cite{h79,dh86,bam86,hd91}, two photon pair production \cite{km86}, two photon pair annihilation \cite{w79,db80}, $\text{e}^-\text{e}^-$ scattering \cite{l81}, and several more. All these processes have very different behavior from their $B=0$ counterparts (if those counterparts are even possible), on account of the peculiar kinematics, as well as the discrete electronic states (Landau levels) associated with a uniform external magnetic field. These calculations have always been carried out in the Furry picture, in close analogy to the $B=0$ Feynman rules. The free space electron propagator is replaced by a propagator which is a Green's function for the Dirac equation in the external field, and the external fermion lines are represented by solutions of that equation. The results have often been interesting and useful, but they have not been uniformly satisfactory. The leading order calculation of resonant Compton scattering yields results which are divergent at the cyclotron resonances \cite{h79,dh86}, evidently because to this order the theory makes no provision for natural line width. The line width may be included ``by hand'' in the results, making the cyclotron resonances finite \cite{bam86,hd91}. However, there are other such ``resonant divergences'' in Compton scattering, which have nothing to do with the cyclotron resonances and which are not nearly as tractable \cite{h79,dh86}. In fact, one such resonance, which occurs exactly at the threshold where the initial photon may pair-produce, is responsible for making the total Compton cross-section divergent {\em everywhere} above this threshold.
In fact, the theory of elementary processes in external magnetic fields is plagued with such divergences. With a little practice, it is not hard to discover divergences resembling these {\em in every single process of second or higher order}. Clearly, this is a troublesome development that casts a shadow upon the entire undertaking. These resonant divergences occur because the kinematics of these processes allow intermediate ``virtual states'' to be real --- that is, on-shell. Often, the on-shell intermediate state is an excited state, such as an electron in an excited Landau level, or a photon above the one photon pair-production threshold. In this case there is an associated decay width that may be pressed into service to control the divergence. The Weisskopf-Wigner broadening prescription, \begin{equation} E\rightarrow E-\frac{1}{2}i\Gamma,\label{ecmplx} \end{equation} when applied to the energy denominators in the propagator, pushes the poles in the propagator off the real axis, so that while the intermediate states may still be on-shell the propagator no longer diverges there. This is in fact the approach that has been adopted for Compton scattering \cite{bam86,hd91,g93}, and which ascribes the natural line width to the cyclotron resonances. However, there are circumstances in which a {\em stable} on-shell intermediate state may be produced. Such states are not attended by a decay width, so the associated divergence may not be reined in as before. It should be pointed out that these divergences are entirely unrelated to the notorious ultraviolet divergences of QED. They are not the consequence of improper manipulation of field-theoretic distributions; rather, they occur whenever the circumstances of the elementary process permit a kinematically accessible on-shell intermediate state (KAOSIS). It is easy to recognize when a KAOSIS is permitted. 
For example, a second-order process will allow one if it may be viewed as a succession of two real first-order processes. Thus the KAOSIS corresponding to cyclotron resonance occurs because the process may be viewed as a real cyclotron absorption followed by a real cyclotron emission. Similarly, the second ``disastrous'' resonance in Compton scattering is due to a KAOSIS that corresponds to the initial photon undergoing a real decay to a pair, followed by the resulting positron annihilating with the initial electron to produce the final photon. The reason this KAOSIS is catastrophic is that the intermediate positron may be in the Landau ground state, so that no decay width is available to restrain the divergence. At the same time, there is a second defect of the theory which so far has not received recognition as a problem. The calculation of S-matrix elements as outlined above always results in a $\delta$-function which enforces strict energy conservation between initial and final states. This remains true even if some of these scattering states are unstable. But this is not physically sensible; the energy of an unstable state is only known to within its decay width, so that it is ludicrous to demand strict energy conservation for such transitions. Nevertheless, the current theory does so irrespective of whether or not the states are stable. As an example, the calculation of the decay of an electron in an excited Landau state produces the result that the emitted cyclotron photon is monochromatic, rather than having the Lorentzian line shape characteristic of resonant decay \cite{mz81,hrw82}. The difficulties of resonant divergence and of spurious energy conservation are related. Briefly, a stable KAOSIS may only occur if some of the particles in the initial and final states are themselves unstable, and it is their decay widths that restrain the divergence. 
However, the introduction of these decay widths smears out their energy, so that energy conservation (which was a consequence of their assumed eternal duration) no longer obtains. Thus it appears that we must modify the current theory somehow if we are to circumvent these unphysical features. That modification is the purpose of this work. We demonstrate below that radiative corrections modify the propagators and the scattering states by introducing their respective decay widths into the S-matrix elements. In this sense, we are extending the results of Graziani \cite{g93}, who carried out this program for the electron propagator only. The corrected ``dressed'' states and propagators result in scattering cross-sections that are always finite. In Sec.\ \ref{sec_rdp} we discuss the role of the bare propagators in producing resonant divergences. In Sec.\ \ref{sec_dep} we review the work of reference \cite{g93} on the electron propagator, which we extend to the photon propagator in Sec.\ \ref{sec_dpp}. We exhibit the corrections to the scattering states in Sec.\ \ref{sec_css}. In Sec.\ \ref{sec_sf} we derive the modification of the S-matrix elements, and exhibit two useful limits: the absorption limit and the emission limit, wherein the initial and final states are stable, respectively. Finally, in Sec.\ \ref{sec_disc} we discuss the applicability of these results and the extent of the modification from the ``standard'' theory. \section{Resonant divergences and propagators}\label{sec_rdp} We begin with a discussion of the role that the bare electron and photon propagators have in producing the resonant divergences. In this section and throughout the rest of this work we use the metric signature $[+1,-1,-1,-1]$. The bare electron propagator is simply expressed in terms of the fermion one-particle states. 
We work in the Furry picture, so these states are represented by solutions $\Psi^{(\eta)}_A$ of the Landau-Dirac (L-D) equation, that is, the Dirac equation in the presence of a classical, uniform, external magnetic field: \begin{equation} [\gamma^\mu(i\partial_\mu-eA_\mu)-m]\Psi^{(\eta)}_A=0.\label{ldeqn} \end{equation} Here, $\eta=\pm 1$ refers to whether the solution has positive or negative energy, while $A$ is the set of quantum numbers that specify the state --- Landau level number, $x^3$-momentum, spin, and an orbit center coordinate. We choose the gauge $A^0=0$, ${\bf A}=Bx^1{\bf e}_2$ for the external magnetic field, which is directed in the ${\bf e}_3$ direction. We also choose the orbit center coordinate $a$ to be the $x^1$ Cartesian coordinate of the orbit center. With these choices, the functional dependence of the solutions of Eq.\ (\ref{ldeqn}) is that of a plane wave in the $x^2$ and $x^3$ coordinates, while the $x^1$ dependence is that of a one-dimensional harmonic oscillator eigenfunction centered about $x^1=a$ \cite{jl49,mp83}. It is convenient to separate out the time dependence of $\Psi^{(\eta)}_A$: \begin{equation} \Psi^{(\eta)}_A(x)=e^{-i\eta E_Ax^0}\phi^{(\eta)}_A({\bf x}),\label{esep} \end{equation} where $\phi^{(\eta)}_A({\bf x})$ is an eigenstate of the Landau-Dirac Hamiltonian with quantum numbers $A$ and eigenvalue $E_A$. $E_A$ is given by $E_A=[m^2+(p^3)^2+2mN\omega_B]^{1/2}$, where $N$ is the Landau level number, $p^3$ is the momentum in the $x^3$ direction, and $\omega_B$ is the Larmor frequency. 
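As a numerical illustration of the spectrum $E_A=[m^2+(p^3)^2+2mN\omega_B]^{1/2}$ (a sketch of our own; the unit choices and function names below are not from the text), one may work in keV with $\hbar=c=1$ and use $\omega_B=m\,B/B_c$:

```python
import math

M_E_KEV = 511.0      # electron rest energy m (keV)
B_CRIT_G = 4.414e13  # critical field B_c = m^2 c^3 / (e hbar) in gauss

def landau_energy(n, p3_kev, b_gauss):
    """E_A = sqrt(m^2 + (p^3)^2 + 2 m N omega_B) for Landau level N = n."""
    omega_b = M_E_KEV * b_gauss / B_CRIT_G   # Larmor frequency eB/m = m B/B_c
    return math.sqrt(M_E_KEV**2 + p3_kev**2 + 2.0 * M_E_KEV * n * omega_b)

def cyclotron_line_kev(b_gauss):
    """Fundamental line energy E_1 - E_0 at rest (p^3 = 0)."""
    return landau_energy(1, 0.0, b_gauss) - landau_energy(0, 0.0, b_gauss)
```

For $B=10^{12}$~G this gives a fundamental line just below the nonrelativistic value $\hbar\omega_B\simeq 11.6$~keV, consistent with the field estimates quoted in the introduction.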
The bare electron propagator $G_B(x,y)$ may be represented in terms of the $\phi^{(\eta)}_A$: \begin{equation} G_B(x,y)={\displaystyle\sum_{A,\eta}}\int\frac{dp^0}{2\pi}\,e^{-ip^0(x^0-y^0)} \frac{\phi^{(\eta)}_A({\bf x})\overline{\phi^{(\eta)}_A}({\bf y})} {p^0-\eta(E_A-i0)}.\label{beprop} \end{equation} It is easy to verify that the expression in Eq.\ (\ref{beprop}) is a Green's function for Eq.\ (\ref{ldeqn}) with the required boundary conditions \cite{bd64}. We write $G_B(x,y)$ rather than $G_B(x-y)$ because neither Eq.\ (\ref{ldeqn}) nor its Green's functions are translationally invariant --- they are invariant under combinations of translations and gauge transformations. The bare electron propagator enters the leading order calculation of such processes as Compton scattering, pair annihilation to two photons, and two photon pair production. The S-matrix elements for these processes are obtained by sandwiching $G_B(x,y)$ between appropriate fermion and photon states and integrating over $x$ and $y$. These processes all exhibit resonant divergences. The proximate cause of those divergences is evidently the energy denominator in Eq.\ (\ref{beprop}). The convolution of the propagator with the initial and final states fixes the values of $p^0$ and $p^3$ at the total energy and $x^3$-momentum of the states, respectively. Thus, the residual energy-related degree of freedom in the summation over intermediate states is the Landau level number $N$, a non-negative integer. It follows that the energy denominator in Eq.\ (\ref{beprop}) may be zero if the energy and $x^3$-momentum of the scattering states are suitably tuned to one of the Landau levels. If this happens a resonant divergence occurs.
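To see how such tuning can occur, consider for example Compton scattering off a ground-state electron at rest: then $p^0=m+\omega$ and $p^3=\omega\cos\theta$, where $\omega$ is the initial photon energy and $\theta$ its angle with respect to the field, and the denominator corresponding to the intermediate Landau level $N$ vanishes when
\[
(m+\omega)^2=m^2+\omega^2\cos^2\theta+2mN\omega_B,
\qquad\text{i.e.}\qquad
\omega=\frac{m}{\sin^2\theta}\left[\sqrt{1+\frac{2N\omega_B}{m}\sin^2\theta}\,-1\right],
\]
which reduces to the cyclotron harmonics $\omega=N\omega_B$ for propagation along the field.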
The bare photon propagator has the usual ``transverse'' expression \begin{equation} D_B(x-y)^{\mu\nu}=\int\frac{d^4k}{(2\pi)^4}\,(-g^{\mu\nu}+k^\mu k^\nu/k^2) \frac{e^{-ik(x-y)}}{k^2+i0}.\label{bpprop} \end{equation} It enters the calculation of S-matrix elements for processes such as electron-electron scattering \cite{l81} and electron-positron scattering. These processes have total cross-sections which exhibit resonant divergences. This may seem surprising at first, since the S-matrix elements for these processes are entirely finite, even when a KAOSIS is present. There is a key difference between the photon and electron propagators: In the electron case the on-shell energy contains a dependence on the Landau level quantum number, a discrete degree of freedom. Thus the resonances are ``spaced-out'' and well-separated. On the other hand, the photon on-shell energy is entirely dependent on the purely continuous degrees of freedom ${\bf k}$, and the ``sum over intermediate states'' is really an integral. While this integral will undoubtedly encounter any KAOSIS singularity, the small imaginary part in the denominator of the propagator provides a perfectly straightforward prescription for circumventing the pole. Thus the S-matrix elements are finite for these processes. Nevertheless, whenever an on-shell intermediate state is kinematically accessible, the total cross-sections diverge. What is going on? The divergence is evident in the expressions in Appendix C of Langer \cite{l81} for the e-e scattering cross-section. Langer performed the integrals over the intermediate states simultaneously with those over final states and the average over initial states, in order to take advantage of several simplifications that ensue. Unfortunately, these manipulations obscure the source of the divergences. 
In order to shed some light on the situation, we have calculated cross-sections for e-e and e$^+$-e$^-$ scattering by computing the S-matrix elements before summing over final and averaging over initial states. What we have found can be best illustrated by considering the specific example of e$^+$-e$^-$ scattering. As depicted in Fig.\ \ref{e+e-}, there are two relevant Feynman diagrams. We have confirmed that in each case the S-matrix elements are finite even when there is a KAOSIS. When a KAOSIS exists, however, there is a divergence resulting from the sum over {\em final} states. In the diagram of Fig.\ \ref{e+e-}(a), the divergence arises from the integral over the variable $s=a_f^+-a_f^-$, where $a_f^+$ and $a_f^-$ are the orbit center $x^1$-coordinates of the final positron and electron, respectively. In the diagram of Fig.\ \ref{e+e-}(b), the divergence is due to the integral over the variable $t=a_f^--a_i^-$, where $a_i^-$ is the orbit center $x^1$-coordinate of the initial electron. We see, then, that the divergence manifests itself as an infinite range of interaction. When a KAOSIS is present, the intermediate photon may travel an arbitrarily large distance between its emission and its absorption without a reduction in amplitude. Consequently there is no spatial cutoff in the interaction, and the resulting integral over orbit center separation diverges. The situation may be further clarified by explicitly computing the interaction range in the $x^1$ direction. When we calculate an S-matrix element for these processes, we integrate the bare photon propagator $D_B(x-y)^{\mu\nu}$ multiplied by two fermion currents, one for each vertex. Since our states are chosen to be plane waves in the $x^2$-$x^3$ direction, the currents are also plane waves in $x^2$ and $x^3$, and the integrals over $x^0$, $x^2$, and $x^3$ are tantamount to Fourier transforms of the photon propagator in those coordinates. 
The remaining integral over $x^1$ folds together two (in general well-separated) current functions $j_x(x^1)^\mu$, $j_y(y^1)^\nu$ with the transformed propagator $f_B(x^1-y^1;k^0,k^2,k^3)g_{\mu\nu}$. We may easily compute $f_B(x^1;k^0,k^2,k^3)$. If we define $\rho\equiv (k^0)^2-(k^2)^2-(k^3)^2=\pm q^2$, with $q\ge 0$, we find \begin{equation} f_B(x^1;k^0,k^2,k^3)=\left\{\begin{array}{ll} -\frac{i}{2q}e^{iq|x^1|} & \mbox{if $\rho\ge 0$,}\\ -\frac{1}{2q}e^{-q|x^1|} & \mbox{otherwise.} \end{array}\right.\label{effrange1} \end{equation} Thus, if $\rho\ge 0$ (the condition for the existence of a KAOSIS), the effective interaction range is infinite, whereas it is of finite range $\sim 1/q$ otherwise. The situation is reminiscent of Coulomb scattering. If a wave packet scatters with large impact parameter from the center of the Coulomb potential, the momentum transfer is very low, and thus the exchanged photon is very close to its mass-shell (that is, its $k$ vector is very near the apex of the light cone). This is the reason that the total Coulomb cross-section is divergent --- the extreme infrared photons, which are nearly on-shell, give the interaction an infinite range. There is one noteworthy difference here, however: for the processes we consider, the virtual photons do not need to be infinitely soft to be on-shell, so that these resonant divergences are in no sense infra-red divergences. As a consequence of this analysis, the cure for photon-propagator-related resonant divergences will necessarily have a slightly different mathematical character from that for the divergences that arise in connection with the electron propagator. While in the latter case we need to show how the previously diverging amplitudes may be made to converge, in the former case we must show how the previously long-range ``effective interaction'' may have its range curtailed. 
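The form of Eq.\ (\ref{effrange1}) follows from a one-dimensional contour integral: since $k^2=\rho-(k^1)^2$,
\[
f_B(x^1;k^0,k^2,k^3)=\int_{-\infty}^{\infty}\frac{dk^1}{2\pi}\,
\frac{e^{ik^1x^1}}{\rho-(k^1)^2+i0}.
\]
For $\rho=q^2\ge 0$ the $+i0$ prescription displaces the poles to $k^1=\pm(q+i0)$; closing the contour in the upper half-plane for $x^1>0$ (lower for $x^1<0$) picks up the pole residue and gives $-(i/2q)e^{iq|x^1|}$, an undamped outgoing wave. For $\rho=-q^2<0$ the enclosed pole sits at $k^1=iq$, and the same contour yields the exponentially damped form.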
\section{The dressed electron propagator}\label{sec_dep} The necessity of complexifying the energy denominator of the electron propagator so as to obtain the Weisskopf-Wigner line shape has been recognized for some time \cite{bam86,hd91}, although these authors lacked a formal justification for their {\em ad hoc} complexification of the energy, given in Eq.\ (\ref{ecmplx}). Graziani \cite{g93} provided this justification by considering radiative corrections to the electron propagator. As it turns out, the same procedure may be applied to the photon propagator as well. Quite generally, one finds that the operator which represents the self-interaction (the mass operator $\Sigma$ in the case of the electron, the polarization operator $\Pi$ in the case of the photon) singles out those solutions of the wave equation in which it is diagonal. The dressed propagator may then be expressed as a sum over all such states, with the energy in the denominator acquiring an imaginary part given by Eq.\ (\ref{ecmplx}). We review the discussion of the electron propagator from Ref.\ \cite{g93}, and extend it to the photon propagator in the next section. The dressed propagator $G(x,y)$ is related to the bare propagator and the self-energy operator $\Sigma(x,y)$ by the Dyson equation $G=G_B+G_B\cdot\Sigma\cdot G$, where the dot denotes four-dimensional convolution as well as spinorial matrix multiplication. It follows that $G(x,y)$ is a Green's function for the dressed L-D operator \begin{equation} [\gamma^\mu(i\partial_\mu-eA_\mu)-m-\Sigma]\cdot\Psi^{(\eta)}_A=0. \label{dldeqn} \end{equation} Now, let $\Theta^{(\eta)}_{p^0A}(x)=e^{-ip^0x^0}\phi^{(\eta)}_A({\bf x})$. It may be shown \cite{mp83,p87,g93} that $\Theta$ diagonalizes $\Sigma$ if the $\phi^{(\eta)}_A$ are simultaneous eigenstates of the Landau-Dirac Hamiltonian and of the $x^3$-component of the magnetic moment operator of Sokolov and Ternov \cite{st68}.
This condition defines what is meant by ``spin up'' and ``spin down'', in the absence of the tools provided by Poincar\'e group representation theory (the physical system no longer has full Poincar\'e invariance). If this condition is satisfied, {\em and only in this case}, $G$ may be represented in a form analogous to Eq.\ (\ref{beprop}): \begin{equation} G(x,y)={\displaystyle\sum_{A,\eta}}\int\frac{dp^0}{2\pi}\,e^{-ip^0(x^0-y^0)} \frac{\phi^{(\eta)}_A({\bf x})\overline{\phi^{(\eta)}_A}({\bf y})} {p^0-\eta(E_A-i0)-\Sigma(p^0,A,\eta)},\label{dp1} \end{equation} where $\Sigma(p^0,A,\eta)$ is the diagonal matrix element of $\Sigma$ in the state $\Theta^{(\eta)}_{p^0A}$. It is in general a complex number, in contradistinction to the case where the external field strength is zero, where the diagonal matrix elements of the self-energy operator are real. The real part of $\Sigma(p^0,A,\eta)$ merely yields a small shift in the energy of the state, while the imaginary part provides a line-width to the otherwise divergent resonant energy denominator. This justifies neglecting the real shift while preserving the imaginary width. It was shown explicitly in Ref.\ \cite{g93} that in this approximation, the pole in the propagator corresponding to state $A$, $\eta$ is located at $p^0=\eta(E_A-\frac{i}{2}\Gamma_A)$, where $\Gamma_A$ is just the Weisskopf-Wigner decay rate of the state $A$, computed to leading order: \begin{equation} \Gamma_A=e^2\int\frac{d^3{\bf k}}{2\omega_k(2\pi)^3}\, {\displaystyle\sum_{\epsilon}\sum_B} |T_{AB}({\bf k},\mbox{\boldmath$\epsilon$})|^2\,(2\pi)\, \delta(E_A-E_B-\omega_k).\label{gammadef} \end{equation} Here, $T_{AB}({\bf k},\mbox{\boldmath$\epsilon$})$ is the interaction matrix element for the transition from the state $A$ to the state $B$ with the emission of a photon with wave vector {\bf k} and polarization \mbox{\boldmath$\epsilon$}. 
It follows that the dressed propagator may be expressed as \begin{eqnarray} G(x,y)&=&{\displaystyle\sum_{A,\eta}}\int\frac{dp^0}{2\pi}\,e^{-ip^0(x^0-y^0)} \frac{\phi^{(\eta)}_A({\bf x})\overline{\phi^{(\eta)}_A}({\bf y})} {p^0-\eta(E_A-i\Gamma_A/2)}\label{depropa}\\ &=&-i{\displaystyle\sum_{A,\eta}}\eta\,\theta\left[\eta\,(x^0-y^0)\right] e^{-i\eta (E_A-i\Gamma_A/2)(x^0-y^0)} \phi^{(\eta)}_A({\bf x})\overline{\phi^{(\eta)}_A}({\bf y})\label{depropb} \end{eqnarray} to first order in $e^2$. It appears from these equations that the prescription of Eq.\ (\ref{ecmplx}) is in fact rigorously justifiable. We wish to emphasize, however, that Eqs.\ (\ref{depropa})-(\ref{depropb}) are only correct when the states $\phi^{(\eta)}_A$ are chosen so that the $\Theta^{(\eta)}_{p^0A}$ diagonalize the self-energy operator, or equivalently, so that the $\phi^{(\eta)}_A$ diagonalize the $x^3$-component of the Sokolov-Ternov magnetic moment operator. The states of Sokolov and Ternov \cite{st68}, and those of Herold, Ruder, and Wunner \cite{hrw82} satisfy these conditions, and their use leads to correct expressions for the scattering cross-sections. The states of Johnson and Lippmann \cite{jl49}, which have gained some currency in the literature, do not satisfy the required conditions \cite{mp83}. In particular, as discussed in \cite{g93}, the use of Johnson-Lippmann states in the computation of cyclotron scattering cross-sections can lead to relative errors of order 45\% at the first cyclotron harmonic, depending on the field strength. \section{The dressed photon propagator}\label{sec_dpp} The procedure for the photon propagator is entirely analogous to the one followed for the electron propagator. The self-interaction of the Maxwell field is represented by the polarization operator $\Pi(x-y)^{\mu\nu}$, which satisfies the condition of gauge invariance, $\frac{\partial}{\partial x^\mu}\Pi(x-y)^{\mu\nu}=0$. 
The dressed photon propagator $D(x-y)^{\mu\nu}$ is obtained from the bare propagator $D_B(x-y)^{\mu\nu}$ and $\Pi(x-y)^{\mu\nu}$ by solving the Dyson equation $D=D_B+D_B\cdot\Pi\cdot D$. In order to accomplish this, it is necessary to find the polarization states which diagonalize $\Pi^{\mu\nu}$, which are analogous to the spinor states that diagonalize $\Sigma$. These polarization states were found by Batalin and Shabad \cite{bs71}, and written explicitly for the case of a uniform magnetic field by Shabad \cite{s75}. For a photon with 4-momentum $k$ propagating in a uniform magnetic field ${\bf B}=B{\bf e}_3$ we have the following three (unnormalized) polarization vectors: \begin{mathletters} \label{poldef} \begin{equation} b_\perp^\mu=k^2(e_1)^\mu-k^1(e_2)^\mu,\label{polperp} \end{equation} \begin{equation} b_\|^\mu=k^3(e_0)^\mu+k^0(e_3)^\mu,\label{polpar} \end{equation} \begin{equation} b_{\text L}^\mu=(k^\nu k_\nu){k_\perp}^\mu-({k_\perp}^\nu{k_\perp}_\nu)k^\mu, \label{pollong} \end{equation} \end{mathletters} where ${k_\perp}^\mu=k^1(e_1)^\mu+k^2(e_2)^\mu$, and $(e_\rho)^\mu$ is a unit vector in the $x^\rho$ direction. These modes diagonalize the Fourier-transformed polarization operator $\Pi(k)^{\mu\nu}=\int d^4x\, \Pi(x)^{\mu\nu}e^{ikx}$. There are three of them, because by gauge invariance $\Pi$ satisfies $k_\mu\Pi(k)^{\mu\nu}=0$, so that the fourth mode is just $k$, and the eigenvalue of $\Pi$ that corresponds to it is zero. The mode $b_\|$ is so labeled because on shell it differs from the usual ``parallel'' polarization 3-vector by an inessential multiple of $k$, while $b_\perp$ is just the usual ``perpendicular'' mode. The vector $b_{\text L}$ represents a longitudinal mode, which on shell is proportional to $k$. The $b$ in Eqs.\ (\ref{poldef}) are orthogonal to each other, and to $k$. 
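A quick numerical spot-check (ours, not part of the text) confirms the stated orthogonality of these modes under the metric $[+1,-1,-1,-1]$, for a generic off-shell 4-vector $k$:

```python
def mdot(a, b):
    """Minkowski product a.b with signature [+1,-1,-1,-1]."""
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

def shabad_modes(k):
    """(Unnormalized) polarization vectors b_perp, b_par, b_long of Eqs. (poldef)."""
    k0, k1, k2, k3 = k
    b_perp = (0.0, k2, -k1, 0.0)            # k^2 e_1 - k^1 e_2
    b_par = (k3, 0.0, 0.0, k0)              # k^3 e_0 + k^0 e_3
    k_perp = (0.0, k1, k2, 0.0)             # transverse part of k
    kk, kpkp = mdot(k, k), mdot(k_perp, k_perp)
    b_long = tuple(kk * k_perp[i] - kpkp * k[i] for i in range(4))
    return b_perp, b_par, b_long
```

On shell ($k\cdot k=0$) the longitudinal mode collapses to $-({k_\perp}\!\cdot\!{k_\perp})\,k$, a multiple of $k$, as claimed in the text.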
Using these modes, the transverse photon propagator may be expressed as follows: \begin{equation} D(x-y)^{\mu\nu}=-\int\frac{d^4k}{(2\pi)^4}\,e^{-ik(x-y)} \displaystyle{\sum_{j}}\frac{b_j^\mu b_j^\nu}{b_j\cdot b_j}\, \frac{1}{(k^0)^2-{\omega_k}^2+\Pi(k^0,{\bf k},j)},\label{dp2} \end{equation} where \begin{equation} \Pi(k^0,{\bf k},j)\equiv \Pi(k)_{\mu\nu}{b_j}^\mu{b_j}^\nu /(b_j\cdot b_j). \label{pimel} \end{equation} To first order in $e^2$, the pole in Eq.\ (\ref{dp2}) is located at $(k^0)^2={\omega_k}^2-\Pi(\omega_k,{\bf k},j)$, so that to this order Eq.\ (\ref{dp2}) may be written \begin{equation} D(x-y)^{\mu\nu}=-\int\frac{d^4k}{(2\pi)^4}\,e^{-ik(x-y)} \displaystyle{\sum_{j}}\frac{b_j^\mu b_j^\nu}{b_j\cdot b_j}\, \frac{1}{(k^0)^2-{\omega_k}^2+\Pi(\omega_k,{\bf k},j)}.\label{dp3} \end{equation} Note that when the light-cone condition $k^0=\omega_k$ is satisfied, both $b_\perp$ and $b_\|$ are space-like ($b\cdot b<0$) while the longitudinal mode $b_L$ is light-like. It follows that $\Pi(\omega_k,{\bf k},{\text L})=0$, so we only need compute $\Pi(\omega_k,{\bf k},j)$ for $j=\perp,\|$. The (unrenormalized) leading-order expression for $\Pi(x-y)^{\mu\nu}$ is \begin{equation} \Pi(x-y)^{\mu\nu}=-ie^2\text{Tr}(G_B(y,x)\gamma^\mu G_B(x,y)\gamma^\nu). \label{pidef} \end{equation} Following the analogy to the case of the electron propagator, we calculate the imaginary part of the diagonal matrix elements of $\Pi^{\mu\nu}$, while neglecting the real part. For this purpose, Eq.\ (\ref{pidef}) is entirely adequate, even though it is not renormalized. The renormalization counter-terms which are to be subtracted from the diagonal matrix elements of $\Pi^{\mu\nu}$ are purely real, so that the imaginary parts are unaffected by renormalization. This expression for $\Pi^{\mu\nu}$ is in fact translationally invariant, even though $G_B$ is not. 
The translational invariance may be established using the ``translation+gauge transformation'' invariance of $G_B$ alluded to in Sec.\ \ref{sec_rdp} \cite{ms76}. Substituting Eq.\ (\ref{beprop}) into Eq.\ (\ref{pidef}), after some manipulation we obtain \begin{equation} \Pi(x-y)^{\mu\nu}=-e^2{\displaystyle\sum_{A,A^\prime,\eta}} \int\frac{dp^0}{2\pi}\,e^{-ip^0(x^0-y^0)}\, \frac{[\overline{\phi^{(\eta)}_A}({\bf x})\gamma^\mu \phi^{(-\eta)}_{A^\prime}({\bf x})] [\overline{\phi^{(-\eta)}_{A^\prime}}({\bf y})\gamma^\nu \phi^{(\eta)}_A({\bf y})]} {\eta p^0+E_A+E_{A^\prime}-i0}. \end{equation} We now Fourier transform this equation. Taking the implicit translation invariance into account, we obtain \begin{eqnarray} \Pi(k)^{\mu\nu}\epsilon_\mu \epsilon_\nu&=&L^{-3}T^{-1}\int d^4x\, d^4y\, e^{ik(x-y)}\Pi(x-y)^{\mu\nu}\epsilon_\mu \epsilon_\nu \nonumber\\ &=&-2\omega_k{\displaystyle\sum_{A,A^\prime,\eta}} \frac{|J_{AA^{\prime}}^{(\eta)}({\bf k})^\mu \epsilon_\mu|^2} {\eta k^0+E_A+E_{A^\prime}-i0},\label{pimel1} \end{eqnarray} where \begin{equation} J_{AA^{\prime}}^{(\eta)}({\bf k})^\mu\equiv e^2L^{-3/2}(2\omega_k)^{-1/2} \int d^3{\bf x}\, \overline{\phi^{(\eta)}_A}({\bf x}) \gamma^\mu\phi^{(-\eta)}_{A^\prime}({\bf x})\, e^{i{\bf k}\cdot{\bf x}}.\label{current} \end{equation} When $\epsilon^\mu$ is a normalized polarization vector, $J_{A^{\prime}A}^{(+)}({\bf k})^\mu\epsilon_\mu$ is just the interaction matrix element for a transition from a photon with wave vector ${\bf k}$ and polarization {\boldmath$\epsilon$} to a pair with quantum numbers $A^{\prime}A$.
The imaginary part of Eq.\ (\ref{pimel1}) is \begin{equation} \text{Im}[\Pi(k)^{\mu\nu}\epsilon_\mu \epsilon_\nu]= -2\omega_k{\displaystyle\sum_{A,A^\prime}} \left|J_{AA^{\prime}}^{(+)}({\bf k})^\mu \epsilon_\mu\right|^2\, \pi\,\delta\left(E_A+E_{A^{\prime}}-|k^0|\right),\label{pimel2} \end{equation} where we have used the identity $\sum_{AA^\prime}\left|J_{AA^{\prime}}^{(+)}({\bf k})^\mu \epsilon_\mu\right|^2= \sum_{AA^\prime}\left|J_{AA^{\prime}}^{(-)}({\bf k})^\mu \epsilon_\mu\right|^2$, a consequence of the parity invariance of Eq.\ (\ref{ldeqn}). Comparing with Eq.\ (\ref{pimel}), we see that to obtain the imaginary part of $\Pi(\omega_k,{\bf k},j)$, we may impose the light-cone condition $k^0=\omega_k$ and let $\epsilon_\mu=(\epsilon_j)_\mu=(b_j)_\mu/|b_j\cdot b_j|^{1/2}\Big|_{k^0=\omega_k}$ in Eq.\ (\ref{pimel2}), keeping in mind that $b_j\cdot b_j<0$. The result is \begin{eqnarray} \text{Im}[\Pi(\omega_k,{\bf k},j)]&=& 2\omega_k{\displaystyle\sum_{A,A^\prime}} \left|J_{AA^{\prime}}^{(+)}({\bf k})^\mu (\epsilon_j)_\mu\right|^2\, \pi\,\delta\left(E_A+E_{A^{\prime}}-\omega_k\right)\nonumber\\ &=&2\omega_k\times\Gamma({\bf k},j)/2.\label{gammaphot} \end{eqnarray} Clearly, $\Gamma({\bf k},j)$ is just the Weisskopf-Wigner decay rate of the photon state $({\bf k},j)$. The energy denominator in Eq.\ (\ref{dp3}) is thus $(k^0)^2-{\omega_k}^2+2i\omega_k\Gamma({\bf k},j)/2\approx (k^0)^2-(\omega_k-i\Gamma({\bf k},j)/2)^2$, to first order in $e^2$. 
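The last step is simply the observation that the width may be absorbed into a complex frequency at the cost of a higher-order term:
\[
\left(\omega_k-\tfrac{i}{2}\Gamma({\bf k},j)\right)^2
=\omega_k^2-i\omega_k\Gamma({\bf k},j)-\tfrac{1}{4}\Gamma({\bf k},j)^2,
\]
and the discarded piece $\Gamma({\bf k},j)^2/4$ is of order $e^4$.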
Consequently, the dressed photon propagator is \begin{eqnarray} D(x-y)^{\mu\nu}&=&-\int\frac{d^4k}{(2\pi)^4}\,e^{-ik(x-y)} \displaystyle{\sum_{j}}\frac{b_j^\mu b_j^\nu}{b_j\cdot b_j}\, \frac{1}{(k^0)^2-(\omega_k-i\Gamma({\bf k},j)/2)^2}\label{dppropa}\\ &=&i\int\frac{d^3{\bf k}}{(2\pi)^32\omega_k}\nonumber\\ &&\quad\times\displaystyle{\sum_{j}\sum_{\eta=\pm 1}}\, \frac{b_j^\mu b_j^\nu}{b_j\cdot b_j}\, \theta\left[\eta\,(x^0-y^0)\right]\, e^{-i\eta(\omega_k-i\Gamma({\bf k},j)/2)(x^0-y^0)}\, e^{i\eta{\bf k}\cdot ({\bf x}-{\bf y})}.\label{dppropb} \end{eqnarray} We see that the prescription of Eq.\ (\ref{ecmplx}) continues to hold in the case of the photon field. It should be emphasized, however, that it is essential that the polarization modes given in Eqs.\ (\ref{poldef}) be used in the expression for the propagator in order for that expression to be correct. Recalling the discussion at the end of Sec.\ \ref{sec_rdp}, we investigate the range of the ``dressed'' interaction by computing the partial Fourier transform of the propagator in Eq.\ (\ref{dppropa}) with respect to $x^0$, $x^2$, and $x^3$. Assuming the presence of a KAOSIS (so that $q^2=(k^0)^2-(k^2)^2-(k^3)^2\ge 0$), we find for the transformed propagator $f(x^1;k^0,k^2,k^3)$, \begin{eqnarray} f(x^1;k^0,k^2,k^3)&=&-\frac{i}{2Z(q)}e^{iZ(q)|x^1|},\nonumber\\ Z(q)&\equiv&\frac{1}{\sqrt{2}}\left(\sqrt{u^2+q^2}+i\sqrt{u^2-q^2}\right), \nonumber\\ u^2&\equiv&\sqrt{q^4+(k^0)^2{\Gamma({\bf k},j)}^2},\label{effrange2} \end{eqnarray} where $\Gamma({\bf k},j)$ is evaluated at the point $k^1=q$. Since the imaginary part of $Z(q)$ is positive, we find that the presence of a non-zero decay rate $\Gamma({\bf k},j)$ has cut off the interaction range. Note that $Z(q)\rightarrow q$ as $\Gamma\rightarrow 0$, so that if the KAOSIS is below the one photon pair-production threshold the interaction range is not cut off (in fact we then have $f=f_B$, as expected).
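Squaring the definition of $Z(q)$ shows that it is precisely the upper-half-plane root of the dressed pole condition: since $u^4-q^4=(k^0)^2\Gamma({\bf k},j)^2$,
\[
Z(q)^2=\tfrac{1}{2}\left[(u^2+q^2)-(u^2-q^2)\right]+i\sqrt{u^4-q^4}
=q^2+i\,|k^0|\,\Gamma({\bf k},j),
\]
so that the dressed denominator $q^2-(k^1)^2+i|k^0|\Gamma({\bf k},j)$ vanishes at $k^1=\pm Z(q)$.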
In the next section, we will show how the interaction range is cut off when the KAOSIS is below threshold. In the meantime, though, we have already disposed of the resonant divergence associated with the process of Fig.\ \ref{e+e-}(b), which always has a KAOSIS above the one photon pair-production threshold. Indeed, we may now give a useful interpretation of that divergence: it arose because the intermediate photon (which at the KAOSIS may be viewed as being due to a pair annihilation) was not instructed to decay to a pair in a finite time, so that it produced finite amplitude for the final pair at all values of $x^1$. The resulting total cross-section was infinite. Now that the dressed propagator is used to calculate the S-matrix element, the photon is aware of its decay obligations, and the cross-section due to this process is finite. \section{Corrections to Scattering States}\label{sec_css} The propagator corrections described in the previous section are sufficient to control resonant divergences in many processes and regimes. Nevertheless, there are still cases where a scattering process may lead to a divergent resonance. Specifically, any process exhibiting a KAOSIS with vanishing decay rate will exhibit divergent scattering cross-sections even if calculated using the dressed propagators. Such processes are not at all rare. Consider the case of an electron scattering with a photon which is above the one photon pair-production threshold. The following two first-order processes correspond to an on-shell second-order process: first the photon pair-produces, then the initial electron annihilates with the newly produced positron to produce the final photon [see Fig.\ \ref{nowidth}(a)]. If the intermediate on-shell positron is in the Landau ground state, its decay rate is zero, so that there is no line width supplied by the propagator to control the resulting divergence.
This divergence represents something of a calamity, since its effect is to make the total cross-section for Compton scattering divergent everywhere above the one photon pair-production threshold \cite{h79,dh86}. A second example is provided by electron-electron scattering, in which at least one of the initial electrons is in an excited Landau state [Fig.\ \ref{nowidth}(b)]. Once again, there is a second-order on-shell process that is analogous to a succession of two first order processes, in which the excited electron emits a cyclotron photon which is in turn absorbed by the other electron. If the on-shell photon is below the one photon pair-production threshold, its decay rate is zero, and there is a resonant divergence --- see the expressions in Appendix C of Ref.\ \cite{l81}. It turns out that in every second-order process with a stable KAOSIS there are non-zero decay rates associated with the initial and final scattering states. Thus, in the first of the two examples above, the initial and final photons are capable of decay, while in the second example, there are excited Landau levels in both the initial and final states. One might hope, then, that the decay rates of the scattering states might be pressed into service to control the resonant divergences when the decay rate of the intermediate state is zero. As we now demonstrate, this is not only possible, it is a necessary feature of the same program of radiative corrections that brought the decay rates into the propagators. That program demands that we should apply radiative corrections to the external lines, in addition to the propagators. 
It might be objected at this point that loop corrections to external lines can have no bearing on the problem, since the arbitrary constants which arise during renormalization are in part fixed by the requirement that the external lines should suffer no corrections, so that the scattering states should continue to be represented by solutions of the ``free'' wave equation with the physical mass \cite{bs59}. The answer to this objection is that in the present case, the limited number of arbitrary constants is not sufficient to satisfy the physical requirement above for all scattering states. For example, in the case of the electron field, we may {\em only} demand that the Landau ground state $A_G$ propagate according to Eq.\ (\ref{ldeqn}). Once we have used up the relevant renormalization constants to ensure this condition, the remaining excited Landau states must propagate according to Eq.\ (\ref{dldeqn}). In other words, in Eq.\ (\ref{dldeqn}) we may set $\Sigma(p^0=\eta E_G,A_G,\eta)=0$ (where $E_G$ is the energy of the ground state), but then the remaining $A\neq A_G$ will yield non-zero on-shell, diagonal matrix elements of $\Sigma$. Similarly for the Maxwell field, we may only demand that $\Pi(\omega_k,{\bf k},j)=0$ in the limit ${\bf k}\rightarrow 0$, so that only infinitely soft photons see no refringence in the magnetized vacuum. The origin of this ``feature'' of magnetic QED is the fact that excited scattering states are technically not scattering states at all, insofar as they do not correspond to stable asymptotic one-particle states of the quantum field. In principle, we should only use stable states as initial and final scattering states --- electrons and positrons in the Landau ground state and photons below the one photon pair-production threshold.
The consequence of this restriction on the theory would be that multiple scattering events and scatterings followed by multiple emissions could only be treated by computing scattering amplitudes corresponding to very high-order Feynman diagrams, a notoriously burdensome task. Thus, some of the most interesting astrophysical applications of the theory would become virtually inaccessible. As an alternative, we may represent such high-order multiple events as a succession of lower-order transitions between states that are not necessarily stable, treating those states as if they were genuine scattering states. For example, a process in which an electron in the ground state and a photon of energy above the third cyclotron harmonic make a resonant transition to a state with an electron in the ground state and four photons may be approximated by a Compton scattering event in which the final electron is left at the third harmonic, followed by three resonant decays. This approximation of scattering states by excited states is a common one in the literature \cite{dh86,bam86,hd91,km86}, but to date there has been no investigation of its validity and limitations, or of what modifications the usual Feynman perturbation theory must suffer in order to accommodate them. That investigation is the central concern of this work. We now discuss the specific modifications to the scattering states due to radiative corrections. Consider a bare external electron line in a Feynman diagram which is represented in the scattering amplitude by the spinor $\Psi^{(\eta)}_A(x)$, a solution of Eq.\ (\ref{ldeqn}).
After subjecting the line to corrections associated with the self-energy operator $\Sigma$, the result is a dressed line represented by the spinor $\Theta^{(\eta)}_A(x)$, where \begin{eqnarray} \Theta^{(\eta)}_A&=&\Psi^{(\eta)}_A+G_B\cdot\Sigma\cdot\Psi^{(\eta)}_A +G_B\cdot\Sigma\cdot G_B\cdot\Sigma\cdot\Psi^{(\eta)}_A+\ldots\nonumber\\ &=&\Psi^{(\eta)}_A+G_B\cdot\Sigma\cdot\Theta^{(\eta)}_A.\label{dste1} \end{eqnarray} If we view the L-D operator [the wave operator in Eq.\ (\ref{ldeqn})] as a ``free Hamiltonian'' and $\Sigma$ as a perturbation, Eq.\ (\ref{dste1}) may be cast as a four-dimensional Lippmann-Schwinger equation connecting an eigenstate $\Psi^{(\eta)}_A$ of the L-D operator with eigenvalue zero, to an eigenstate $\Theta^{(\eta)}_A$ of the dressed L-D operator [the wave operator in Eq.\ (\ref{dldeqn})], also with eigenvalue zero. In other words, $\Theta^{(\eta)}_A$ satisfies \begin{equation} [\gamma^\mu(i\partial_\mu-eA_\mu)-m-\Sigma]\cdot\Theta^{(\eta)}_A=0. \label{dldeqn2} \end{equation} It is a simple matter to find solutions of this equation, since we already know of states that simultaneously diagonalize the ``free'' operator and its perturbation. Substituting $\Theta^{(\eta)}_{A}(x)=e^{-ip^0x^0}\phi^{(\eta)}_A({\bf x})$ in Eq.\ (\ref{dldeqn2}), we find \begin{equation} p^0=\eta(E_A-i\Gamma_A/2),\label{ecmpest} \end{equation} so that the dressed scattering state is \begin{equation} \Theta^{(\eta)}_{A}(x)=e^{-i\eta(E_A-i\Gamma_A/2)x^0}\phi^{(\eta)}_A({\bf x}). \label{dest1} \end{equation} We may repeat the above argument for a bare external line which is represented by the Dirac conjugate spinor $\overline{\Psi^{(\eta)}_A}(x)$. 
The result is that the dressed state is $\overline{\Lambda^{(\eta)}_A}(x)$, where \begin{equation} \overline{\Lambda^{(\eta)}_A}(x)=e^{+i\eta(E_A-i\Gamma_A/2)x^0} \overline{\phi^{(\eta)}_A}({\bf x}).\label{dest2} \end{equation} Note that $\overline{\Lambda^{(\eta)}_A}(x)$ is {\em not} the Dirac conjugate spinor of $\Theta^{(\eta)}_{A}(x)$, since the real part of the exponent changes sign. The procedure is analogous for the external photon lines. The dressed states $a(x)^\nu$ are solutions of \begin{equation} (g_{\mu\nu}\square-\Pi_{\mu\nu})a^\nu=0.\label{dmeqn} \end{equation} Using the polarization states of the previous section (for $j=\perp,\|$) to write \begin{equation} {a_{{\bf k}j}}(x)^\nu=(2\omega_kL^3)^{-1/2}(\epsilon_j)^\nu e^{-ik^0x^0+i{\bf k}\cdot{\bf x}}, \end{equation} and substituting in Eq.\ (\ref{dmeqn}), we obtain \begin{equation} k^0=\pm[\omega_k-i\Gamma({\bf k},j)/2],\label{ecmppst} \end{equation} so that the dressed scattering state is \begin{equation} {a_{{\bf k}j}}(x)^\nu=(2\omega_kL^3)^{-1/2}(\epsilon_j)^\nu e^{\pm i[\omega_k-i\Gamma({\bf k},j)/2]x^0-i{\bf k}\cdot{\bf x}} .\label{dpst} \end{equation} {}From Eqs.\ (\ref{dest1}), (\ref{dest2}), and (\ref{dpst}) it is apparent that the prescription of Eq. (\ref{ecmplx}) applies to scattering states just as it does to propagators. It is a general feature of this prescription that the resulting positive and negative energy states are not conjugate to each other, for the same reason. Consequently, positive energy solutions decay as time increases, while negative energy solutions grow. This is in keeping with the interpretation of the negative energy solutions as particles which move backwards in time. In the application of these formulae to the calculation of S-matrix elements, the $e^{-i(E-i\Gamma/2)x^0}$ dependence is ascribed to initial states, while the $e^{+i(E-i\Gamma/2)x^0}$ dependence is ascribed to final states.
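Explicitly, taking the squared modulus of the time-development factor in Eq.\ (\ref{dest1}) makes this behavior manifest:
\begin{displaymath}
\Theta^{(\eta)\dagger}_{A}(x)\,\Theta^{(\eta)}_{A}(x)
=e^{-\eta\Gamma_A x^0}\,
\phi^{(\eta)\dagger}_A({\bf x})\,\phi^{(\eta)}_A({\bf x}),
\end{displaymath}
which decreases with $x^0$ for $\eta=+1$ and increases for $\eta=-1$.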
There is a second way of understanding the introduction of decay rates in the time-development exponentials of scattering states. We may take the view, discussed above, that the metastable scattering states are really approximations standing in for internal lines of much larger Feynman diagrams. That being the case, their functional form may be read directly from the components of their respective propagators, written in the forms of Eqs. (\ref{depropb}) and (\ref{dppropb}). The appearance of the decay rates in the time-development exponentials of the states will ultimately lead to their appearance in energy denominators of scattering amplitudes, after integration over time variables. Here, however, there appears a major difference from the usual practice of obtaining amplitudes. The real parts of the exponentials will lead in general to divergent expressions if the time integration limits are allowed to go to $\pm\infty$ as usual. It is not difficult to see physically why we should expect trouble in this limit. Letting the upper time limit go to infinity in the S-matrix element is tantamount to asking the question, ``what is the probability of observing this decaying state in the infinitely distant future?'' Clearly, no calculation is required to see that the answer must be ``zero.'' Similarly, letting the lower time limit go to minus infinity amounts to inquiring about an interaction at a finite time of a decaying state which was prepared in the infinitely distant past --- a process which also has vanishing probability of occurrence. It is therefore necessary that the scattering theory be formulated for states prepared at finite times $T_i$ and measured at finite times $T_f$. Only when the initial state is stable may the limit $T_i\rightarrow -\infty$ be taken, and only when the final, measured state is stable may we set $T_f\rightarrow\infty$. These limits are termed the ``absorption'' and ``emission'' limits, for reasons that will shortly be made clear.
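As a one-line illustration (with $\Delta$ a generic energy mismatch and $\Gamma_i>0$ the total decay rate of the initial state), the time integral that would otherwise produce an energy $\delta$-function now reads
\begin{displaymath}
\int_{T_i}^{T_f}dx^0\,e^{-i(\Delta-i\Gamma_i/2)x^0}
=\frac{e^{-i(\Delta-i\Gamma_i/2)T_i}-e^{-i(\Delta-i\Gamma_i/2)T_f}}
{i(\Delta-i\Gamma_i/2)},
\end{displaymath}
and since $|e^{-i(\Delta-i\Gamma_i/2)T_i}|=e^{-\Gamma_iT_i/2}$, the result diverges if the limit $T_i\rightarrow-\infty$ is taken while $\Gamma_i\neq 0$.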
Note that due to the real parts in the time-development exponentials, the scattering states are not generally normalized to unit probability. In fact, they may only be so normalized at a given, fixed time. Physically, it is necessary to ensure that the initial states be normalized at the preparation time $T_i$, and that the final states be normalized at the measurement time $T_f$. This could be accomplished by setting $x^0\rightarrow x^0-T_i$ or $x^0\rightarrow x^0-T_f$, as appropriate, in the expressions of Eqs.\ (\ref{dest1}), (\ref{dest2}), and (\ref{dpst}). It is often simpler to calculate the amplitudes with the states as written above and multiply the result by the factor $\exp[\frac{1}{2}(\Gamma_iT_i-\Gamma_fT_f)]$, where $\Gamma_{i(f)}$ is the sum of the decay rates of the particles in the initial (final) state. There is an extremely important consequence of this finite-time formulation of the theory: {\em strict energy conservation no longer holds}. The energy-conserving $\delta$-functions which appeared in the old amplitudes were a consequence of the integration of time exponentials over all time. By the time-energy uncertainty relation, we may not determine the energy to infinite precision over a finite time interval. This is not an alarming consequence of the theory, but rather a desirable one. When we discuss the excitation or decay of metastable states, we cannot expect to determine the energy of those states to better than the natural line width, so energy-conserving $\delta$-functions would actually violate our physical intuition for these processes. In fact, we will see that strict energy conservation is recovered {\em only} when $\Gamma_i=\Gamma_f=0$. Thus, for example, in the expressions of Herold \cite{h79} for Compton scattering from ground state to ground state, the energy-conserving $\delta$-functions are appropriate. We now discuss an example which provides a simple application of these ideas: cyclotron decay.
Consider an electron which is prepared in an excited state $A$ at a time $T_i=0$. We calculate, to first order, the probability that the system should make a transition to the ground state $A_G$ with the emission of a photon in the state (${\bf k},j$), which is below the one photon pair-production threshold. Since $\Gamma_f=0$, we may take the limit $T_f\rightarrow\infty$. The S-matrix element is given by \begin{eqnarray} S_{fi}&=&T_{A_GA}({\bf k})_\mu(\epsilon_j)^\mu {\displaystyle\int_0^{\infty}}dx^0\, e^{-i[(E_A-i\Gamma_A/2)-E_G-\omega_k]x^0}\nonumber\\ &=&\frac{-iT_{A_GA}({\bf k})_\mu(\epsilon_j)^\mu} {E_A-E_G-\omega_k-i\Gamma_A/2},\label{em1} \end{eqnarray} where $T_{A_GA}({\bf k})_\mu$ is the interaction matrix element for the transition: \begin{equation} T_{A_GA}({\bf k})^\mu=e^2L^{-3/2}(2\omega_k)^{-1/2} \int d^3{\bf x}\,e^{-i{\bf k}\cdot{\bf x}}\, \overline{\phi^{(+)}_{A_G}}({\bf x}) \gamma^\mu\phi^{(+)}_{A}({\bf x}).\label{emmel} \end{equation} Note that in Eq. (\ref{em1}), the energy-conserving $\delta$-function has been replaced by the Wigner-Weisskopf line shape function. Thus, the new formalism has reproduced the well-known result from non-relativistic quantum mechanics, which the old formalism could not (compare Refs.\ \cite{mz81,hrw82,k54,dh83,t52}). We close this section with a discussion of the effect of using the dressed electron scattering states of Eqs.\ (\ref{dest1}) and (\ref{dest2}) on the range of the interaction in the presence of a KAOSIS of the photon propagator below the one photon pair-production threshold. We may attempt to repeat the procedure that led to Eqs.\ (\ref{effrange1}) and (\ref{effrange2}). However, this time the computation of the S-matrix element corresponding to Fig.\ \ref{e+e-}(a) is no longer equivalent to taking the Fourier transform of the photon propagator with respect to $x^0$, since the dependence of the scattering states on $x^0$ is no longer purely oscillatory.
Rather, the effective one-dimensional interaction $g(x^1;k^2,k^3)$ that results is given by \begin{equation} g(x^1;k^2,k^3)=\int\frac{dk^0}{2\pi}\,f_B(x^1;k^0,k^2,k^3) W(k^0-E),\label{effrange3} \end{equation} where $W(z)$ is a function which is only appreciable in a range $\pm\Delta$ about $z=0$. The restricted domain of $W$ is a reflection of the restricted domain in time of the scattering states --- in fact, we have either $\Delta\sim\Gamma$ or $\Delta\sim (T_f-T_i)^{-1}$, whichever is largest. By using stationary phase arguments, it is easy to see that $g(x^1;k^2,k^3)$ can only be appreciable for a limited range of $|x^1|$: \begin{equation} |x^1|\lesssim\frac{2\pi}{\left[(E+\Delta)^2-(k^2)^2-(k^3)^2\right]^{1/2} -\left[E^2-(k^2)^2-(k^3)^2\right]^{1/2}}.\label{effrange4} \end{equation} Thus the range of the interaction is curtailed when the scattering states may decay. We see that the spurious infinite interaction range that entered calculations which used bare excited scattering states was a consequence of their assumed infinite duration. Once their limited duration is incorporated into the formalism, their interaction becomes short-ranged. Note that a KAOSIS of the photon propagator may exist only if either some of the scattering states are excited or the KAOSIS itself is above the one photon pair-production threshold. Therefore, the scattering cross-sections for processes with virtual photons {\em are now always finite} if the dressed states and propagators are used to calculate them. We will show in the next section that resonant divergences are now also under control in processes with virtual fermions. \section{Scattering formulae}\label{sec_sf} We now compute generic second-order formulae for two-particle to two-particle scattering. The Feynman diagram in Fig.\ \ref{genscat} depicts such a process irrespective of whether the various lines represent fermions or photons. Our notation, which is illustrated in Fig.\ \ref{genscat}, is as follows. 
Let the energies of the external lines be $E_\rho$, and let their decay rates be $\Gamma_\rho$ ($\rho=a,b,c,d$). Define $\mu_\rho=\pm 1$, with $\mu_\rho=+1$ if the line represents an incoming state and $\mu_\rho=-1$ if it represents an outgoing state. Let the lines with $\rho=a,b$ join at the vertex with coordinates $x$, and those with $\rho=c,d$ join at the vertex with coordinates $y$ (see Fig.\ \ref{genscat}). Define $E_x\equiv\mu_aE_a+\mu_bE_b$, $\Gamma_x\equiv\mu_a\Gamma_a+\mu_b\Gamma_b$, and similarly for $E_y$ and $\Gamma_y$. Also define the complex energies ${\cal E}_x\equiv E_x-\frac{i}{2}\Gamma_x$, ${\cal E}_y\equiv E_y-\frac{i}{2}\Gamma_y$. Let the energies of the two initial particles be $e_{i1}$, $e_{i2}$, and let their decay rates be $\gamma_{i1}$, $\gamma_{i2}$. Also let the energies of the two final particles be $e_{f1}$, $e_{f2}$, and let their decay rates be $\gamma_{f1}$, $\gamma_{f2}$. The $e$ and $\gamma$ are set equal to the $E$ and $\Gamma$ as appropriate to the process under consideration, an identification illustrated by the passage from Fig.\ \ref{genscat} to Fig.\ \ref{gendc}. We denote the quantum state of the intermediate particle by $l$: $l=A$ if the particle is a fermion, $l=({\bf k},j)$ if it is a photon. The energy and decay rate of the intermediate state are $E_l$ and $\Gamma_l$, respectively, and we define ${\cal E}_l\equiv E_l-\frac{i}{2}\Gamma_l$. We also introduce the index $\eta=\pm 1$, where $\eta=+1$ if the intermediate state has positive energy and $\eta=-1$ otherwise. The propagators are given by Eqs.\ (\ref{depropb}) and (\ref{dppropb}). The S-matrix element for the process is obtained by sandwiching the appropriate propagator between the scattering states in the usual way \cite{bs59} and integrating over the space-time coordinates $x$ and $y$.
The general result is \begin{equation} S_{fi}=i {\displaystyle\sum_{l\eta}}c_\eta\, M_{fi}(l,\eta)\, \xi^{(l\eta)}_{T_iT_f}({\cal E}_x,{\cal E}_y).\label{amp} \end{equation} Here, $M_{fi}(l,\eta)$ is the product of two interaction matrix elements appropriate to the process, $c_\eta$ is 1 if the intermediate state is a photon and $-\eta$ if it is a fermion, and \begin{eqnarray} \xi^{(l\eta)}_{T_iT_f}({\cal E}_x,{\cal E}_y)&\equiv& e^{+(\gamma_{i1}+\gamma_{i2})T_i/2-(\gamma_{f1}+\gamma_{f2})T_f/2} e^{+i(e_{i1}+e_{i2})T_i-i(e_{f1}+e_{f2})T_f}\nonumber\\ &&\times{\displaystyle\int_{T_i}^{T_f}dx^0\,\int_{T_i}^{T_f}dy^0}\, \theta\left[\eta\,(x^0-y^0)\right] \exp\left[-i\eta{\cal E}_l(x^0-y^0)-i{\cal E}_xx^0-i{\cal E}_yy^0\right]. \label{etfac1} \end{eqnarray} The factor $e^{+(\gamma_{i1}+\gamma_{i2})T_i/2-(\gamma_{f1}+\gamma_{f2})T_f/2}$ in Eq.\ (\ref{etfac1}) adjusts the normalization of the initial and final states to be 1 at $T_i$ and $T_f$, respectively, while the factor $e^{+i(e_{i1}+e_{i2})T_i-i(e_{f1}+e_{f2})T_f}$ allows a convenient choice of phase. The quantity $\xi^{(l\eta)}_{T_iT_f}({\cal E}_x,{\cal E}_y)$ is the new object which incorporates the resonant energy denominators and in general replaces the energy-conserving $\delta$-function. It may be calculated by the substitution $u=x^0-y^0$, $v=x^0+y^0$. 
The result is \begin{eqnarray} \xi^{(l\eta)}_{T_iT_f}({\cal E}_x,{\cal E}_y)&=&\quad \frac{e^{+(\gamma_{i1}+\gamma_{i2})T_i/2-(\gamma_{f1}+\gamma_{f2})T_f/2} e^{+i(e_{i1}+e_{i2})T_i-i(e_{f1}+e_{f2})T_f}} {{\cal E}_x+{\cal E}_y}\nonumber\\ &&\times\left\{\quad \frac{ e^{-i({\cal E}_x+{\cal E}_y)T_f} \left[e^{(i/2)[(1-\eta){\cal E}_x+(1+\eta){\cal E}_y-2{\cal E}_l](T_f-T_i)}-1 \right]} {(1/2)[(1-\eta){\cal E}_x+(1+\eta){\cal E}_y-2{\cal E}_l]}\right.\nonumber\\ &&\left.\quad-\quad\frac{ e^{-i({\cal E}_x+{\cal E}_y)T_i} \left[e^{(i/2)[(-1-\eta){\cal E}_x+(-1+\eta){\cal E}_y-2{\cal E}_l](T_f-T_i)}-1 \right]} {(1/2)[(-1-\eta){\cal E}_x+(-1+\eta){\cal E}_y-2{\cal E}_l]}\right\}. \label{etfacgen} \end{eqnarray} Eq.\ (\ref{etfacgen}) provides an expression which may be adapted to any second-order process of interest by adjusting the $\mu$ and the assignments of the $e$ and $\gamma$ to the $E$ and $\Gamma$. In the particular case of two-particle to two-particle scattering we need consider four cases: the ``direct'' and ``cross'' diagrams (Fig.\ \ref{gendc}), each for $\eta=\pm1$. The expressions are simplified by the notation $\Delta e\equiv e_{i1}+e_{i2}-e_{f1}-e_{f2}$, $\Delta\gamma\equiv\gamma_{i1}+\gamma_{i2}-\gamma_{f1}-\gamma_{f2}$, and $\tau\equiv T_f-T_i$. \paragraph{Direct diagram, $\eta=+1$}\label{dplhd} We obtain this case from Eq.\ (\ref{etfacgen}) by setting ${\cal E}_x=-(e_{f1}+e_{f2})+i(\gamma_{f1}+\gamma_{f2})/2$ and ${\cal E}_y=(e_{i1}+e_{i2})-i(\gamma_{i1}+\gamma_{i2})/2$. We obtain \begin{eqnarray} \xi^{(l,+1)}_{T_iT_f}({\cal E}_x,{\cal E}_y)&=& \frac{1}{\Delta e-i\Delta\gamma/2} \times\left\{ \frac{e^{-i(E_l-i\Gamma_l/2)\tau}- e^{-i[(e_{i1}+e_{i2})-i(\gamma_{i1}+\gamma_{i2})/2]\tau}} {(e_{i1}+e_{i2}-E_l)-i(\gamma_{i1}+\gamma_{i2}-\Gamma_l)/2}\right. \nonumber\\ &&\left.-\frac{e^{-i(E_l-i\Gamma_l/2)\tau}- e^{-i[(e_{f1}+e_{f2})-i(\gamma_{f1}+\gamma_{f2})/2]\tau}} {(e_{f1}+e_{f2}-E_l)-i(\gamma_{f1}+\gamma_{f2}-\Gamma_l)/2}\right\}. 
\label{dpl} \end{eqnarray} \paragraph{Direct diagram, $\eta=-1$}\label{dmnhd} For the same assignments as case (\ref{dplhd}), but with $\eta=-1$, we find \begin{eqnarray} \xi^{(l,-1)}_{T_iT_f}&&({\cal E}_x,{\cal E}_y)= \frac{1}{\Delta e-i\Delta\gamma/2}\nonumber\\ &&\times\left\{ \frac{e^{-i[(e_{i1}+e_{i2}+e_{f1}+e_{f2}+E_l) -i(\gamma_{i1}+\gamma_{i2}+\gamma_{f1}+\gamma_{f2}+\Gamma_l)/2]\tau} -e^{-i[(e_{i1}+e_{i2})-i(\gamma_{i1}+\gamma_{i2})/2]\tau}} {(-e_{f1}-e_{f2}-E_l)-i(-\gamma_{f1}-\gamma_{f2}-\Gamma_l)/2}\right. \nonumber\\ &&\left.- \frac{e^{-i[(e_{i1}+e_{i2}+e_{f1}+e_{f2}+E_l) -i(\gamma_{i1}+\gamma_{i2}+\gamma_{f1}+\gamma_{f2}+\Gamma_l)/2]\tau} -e^{-i[(e_{f1}+e_{f2})-i(\gamma_{f1}+\gamma_{f2})/2]\tau}} {(-e_{i1}-e_{i2}-E_l)-i(-\gamma_{i1}-\gamma_{i2}-\Gamma_l)/2}\right\}. \label{dmn} \end{eqnarray} \paragraph{Cross diagram, $\eta=+1$}\label{cplhd} Here we set ${\cal E}_x=(e_{i2}-e_{f2})-i(\gamma_{i2}-\gamma_{f2})/2$ and ${\cal E}_y=(e_{i1}-e_{f1})-i(\gamma_{i1}-\gamma_{f1})/2$: \begin{eqnarray} \xi^{(l,+1)}_{T_iT_f}({\cal E}_x,{\cal E}_y)&=& \frac{1}{\Delta e-i\Delta\gamma/2}\times\left\{ \frac{e^{-i[(e_{f1}+e_{i2}+E_l)-i(\gamma_{f1}+\gamma_{i2}+\Gamma_l)/2]\tau}- e^{-i[(e_{i1}+e_{i2})-i(\gamma_{i1}+\gamma_{i2})/2]\tau}} {(e_{i1}-e_{f1}-E_l)-i(\gamma_{i1}-\gamma_{f1}-\Gamma_l)/2}\right.\nonumber\\ &&\left.-\frac{ e^{-i[(e_{f1}+e_{i2}+E_l)-i(\gamma_{f1}+\gamma_{i2}+\Gamma_l)/2]\tau}- e^{-i[(e_{f1}+e_{f2})-i(\gamma_{f1}+\gamma_{f2})/2]\tau}} {(e_{f2}-e_{i2}-E_l)-i(\gamma_{f2}-\gamma_{i2}-\Gamma_l)/2} \right\}.
\label{cpl} \end{eqnarray} \paragraph{Cross diagram, $\eta=-1$}\label{cmnhd} The assignments here are as in case (\ref{cplhd}), and $\eta=-1$: \begin{eqnarray} \xi^{(l,-1)}_{T_iT_f}({\cal E}_x,{\cal E}_y)&=& \frac{1}{\Delta e-i\Delta\gamma/2}\times\left\{ \frac{e^{-i[(e_{f2}+e_{i1}+E_l)-i(\gamma_{f2}+\gamma_{i1}+\Gamma_l)/2]\tau}- e^{-i[(e_{i1}+e_{i2})-i(\gamma_{i1}+\gamma_{i2})/2]\tau}} {(e_{i2}-e_{f2}-E_l)-i(\gamma_{i2}-\gamma_{f2}-\Gamma_l)/2} \right.\nonumber\\ &&\left.-\frac{ e^{-i[(e_{f2}+e_{i1}+E_l)-i(\gamma_{f2}+\gamma_{i1}+\Gamma_l)/2]\tau}- e^{-i[(e_{f1}+e_{f2})-i(\gamma_{f1}+\gamma_{f2})/2]\tau}} {(e_{f1}-e_{i1}-E_l)-i(\gamma_{f1}-\gamma_{i1}-\Gamma_l)/2}\right\}. \label{cmn} \end{eqnarray} The expressions in Eqs.\ (\ref{dpl}), (\ref{dmn}), (\ref{cpl}), and (\ref{cmn}) all contain products of two complex energy denominators, one ``energy-conserving'' and the other resonant. All these energy denominators may in principle go to zero. When this happens, however, there is always a cancellation of the exponentials in the numerators, so that the result is always finite. This is a consequence of the fact that these expressions were derived starting from Eq.\ (\ref{etfac1}), an integral over a finite range of a finite integrand, which consequently may never diverge. Therefore, scattering processes containing virtual fermions now have finite S-matrix elements. These were the last remaining resonant divergences in the theory, {\em which is now entirely finite}. We have thus eliminated all resonant divergences from our scattering cross-sections. The price we have paid is the dependence of the S-matrix elements on the time lapse $\tau$ between the preparation of the initial state and the measurement of the final state, and the attendant loss of strict energy conservation. Note that in general, the nature of the dependence on $\tau$ is for the S-matrix elements to decay away as $\tau\rightarrow\infty$.
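Given the intricacy of Eq.\ (\ref{etfacgen}), a numerical cross-check is worthwhile. The sketch below (illustrative only, and not part of the original derivation; the complex energies are arbitrary test values, and the normalization and phase prefactor of Eq.\ (\ref{etfac1}) is omitted from both sides) compares a brute-force evaluation of the ordered double time integral with the closed form, for both signs of $\eta$:

```python
# Brute-force check of the closed form in Eq. (etfacgen): the ordered
# double time integral of Eq. (etfac1), with the normalization and
# phase prefactor omitted, is evaluated numerically and compared with
# the analytic expression for both signs of eta.  The complex energies
# below are arbitrary test values, not physical ones.
import cmath

TI, TF = 0.0, 4.0
EX = 1.30 - 0.05j   # {\cal E}_x = E_x - i Gamma_x / 2  (test value)
EY = -0.70 - 0.02j  # {\cal E}_y = E_y - i Gamma_y / 2  (test value)
EL = 0.90 - 0.10j   # {\cal E}_l = E_l - i Gamma_l / 2  (test value)

def ordered_integral(a, b, n=20001):
    """Trapezoid evaluation of
    int_{TI}^{TF} dv e^{-i b v} int_{TI}^{v} du e^{-i a u}."""
    h = (TF - TI) / (n - 1)
    total = 0j
    cum = 0j                               # running inner integral
    prev = cmath.exp(-1j * a * TI)
    for k in range(n):
        t = TI + k * h
        cur = cmath.exp(-1j * a * t)
        if k > 0:
            cum += 0.5 * (prev + cur) * h  # cumulative trapezoid
        prev = cur
        w = 0.5 if k in (0, n - 1) else 1.0
        total += w * cmath.exp(-1j * b * t) * cum * h
    return total

def closed_form(eta):
    """Bracketed expression of Eq. (etfacgen), divided by (EX + EY)."""
    tau = TF - TI
    d1 = 0.5 * ((1 - eta) * EX + (1 + eta) * EY - 2 * EL)
    d2 = 0.5 * ((-1 - eta) * EX + (-1 + eta) * EY - 2 * EL)
    t1 = cmath.exp(-1j * (EX + EY) * TF) * (cmath.exp(1j * d1 * tau) - 1) / d1
    t2 = cmath.exp(-1j * (EX + EY) * TI) * (cmath.exp(1j * d2 * tau) - 1) / d2
    return (t1 - t2) / (EX + EY)

# eta = +1 orders x^0 > y^0 (inner variable y^0); eta = -1 the reverse.
for eta, a, b in [(+1, EY - EL, EX + EL), (-1, EX - EL, EY + EL)]:
    num = ordered_integral(a, b)
    ana = closed_form(eta)
    assert abs(num - ana) <= 1e-5 + 1e-4 * abs(ana), (eta, num, ana)
```

Both time orderings agree with the closed form to the stated tolerance, and the check never encounters a divergence, in accord with the finiteness argument above.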
As discussed in the previous section, this is the behavior expected for scattering from excited states to excited states. Note also that these expressions lack crossing symmetry. The reason is the introduction of the exponential factor outside the integral in Eq.\ (\ref{etfacgen}), which is not symmetric under the replacement $e_{i1}\leftrightarrow -e_{f2}$, $\gamma_{i1}\leftrightarrow -\gamma_{f2}$. If we divide the expressions of Eqs.\ (\ref{dpl}), (\ref{dmn}), (\ref{cpl}), and (\ref{cmn}) by the exponential factor we find that the resulting expressions are, in fact, crossing symmetric. In order to parlay the above expressions into cross-sections, we must substitute them into Eq.\ (\ref{amp}) to obtain a $\tau$-dependent expression for the S-matrix element. The reaction rate $R$ is then given by $R=d|S_{fi}|^2/d\tau=2\text{Re}\{{S_{fi}}^*dS_{fi}/d\tau\}$. The cross-section may be obtained from $R$ by the usual kinematic manipulation: $d\sigma/d\Omega_f=L^3|{\bf v}|^{-1}R$, where $d\Omega_f$ is a volume element in the space of final states and ${\bf v}$ is the relative velocity of the initial particles. There are two special cases where it is possible to eliminate the dependence on $\tau$ from these expressions: when the initial state is stable ($\gamma_{i1}=\gamma_{i2}=0$), and when the final state is stable ($\gamma_{f1}=\gamma_{f2}=0$). These cases are called ``absorption'' and ``emission'' scattering, respectively. In the absorption scattering case, we may set $T_i\rightarrow-\infty$, while in the emission scattering case we may set $T_f\rightarrow\infty$. In either case, $\tau\rightarrow\infty$, and the above expressions for the $\xi$ are greatly simplified. \subsection{Absorption scattering} The special case of a stable initial state corresponds to a situation analogous to absorption, in which we prepare beams of stable particles and observe the excited products before they have the opportunity to decay. 
If we set $\gamma_{i1}=\gamma_{i2}=0$ in Eqs.\ (\ref{dpl}), (\ref{dmn}), (\ref{cpl}), and (\ref{cmn}), and take the limit $\tau\rightarrow\infty$, we find the following results: For the direct diagram, \begin{mathletters} \label{absetfd} \begin{equation} \xi^{(l,+1)}_{-\infty,T_f}({\cal E}_x,{\cal E}_y)= \frac{1}{(\Delta e-i\Delta\gamma/2) [(-e_{i1}-e_{i2}+E_l)-i\Gamma_l/2]},\label{absetfdpl} \end{equation} \begin{equation} \xi^{(l,-1)}_{-\infty,T_f}({\cal E}_x,{\cal E}_y)= \frac{1}{(\Delta e-i\Delta\gamma/2) [(e_{f1}+e_{f2}+E_l)-i(\gamma_{f1}+\gamma_{f2}+\Gamma_l)/2]},\label{absetfdmn} \end{equation} \end{mathletters} and for the cross diagram, \begin{mathletters} \label{absetfc} \begin{equation} \xi^{(l,+1)}_{-\infty,T_f}({\cal E}_x,{\cal E}_y)= \frac{1}{(\Delta e-i\Delta\gamma/2) [(e_{f1}-e_{i1}+E_l)-i(\gamma_{f1}+\Gamma_l)/2]},\label{absetfcpl} \end{equation} \begin{equation} \xi^{(l,-1)}_{-\infty,T_f}({\cal E}_x,{\cal E}_y)= \frac{1}{(\Delta e-i\Delta\gamma/2) [(e_{f2}-e_{i2}+E_l)-i(\gamma_{f2}+\Gamma_l)/2]}.\label{absetfcmn} \end{equation} \end{mathletters} We have eliminated an inessential phase factor. Note that $\Delta\gamma=-(\gamma_{f1}+\gamma_{f2})$. Substituting these expressions in Eq.\ (\ref{amp}), we obtain a time-independent S-matrix element. In order to get the result into the form of a cross-section, note that the final state is decaying at a rate $\gamma_{f1}+\gamma_{f2}$. In order that the probability of that final state be time-independent and equal to $|S_{fi}|^2$, the reaction rate must exactly balance the decay rate, so that we must have $R=(\gamma_{f1}+\gamma_{f2})|S_{fi}|^2=|\Delta\gamma||S_{fi}|^2$. Again, the cross-section $d\sigma/d\Omega_f$ may be obtained trivially from $R$.
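The passage from Eq.\ (\ref{dpl}) to Eq.\ (\ref{absetfdpl}) can also be checked numerically. The sketch below is illustrative only: the energies and rates are arbitrary test values, and moduli are compared so that the eliminated phase factor drops out.

```python
# Check that Eq. (dpl) with stable initial states (gamma_i = 0) tends,
# in modulus, to the absorption-limit expression Eq. (absetfdpl) as
# tau -> infinity.  Moduli are compared because an inessential phase
# factor has been eliminated from the limiting form.  Test values only.
import cmath

EI, EF = 2.0, 1.7   # total initial / final energies (test values)
GI, GF = 0.0, 0.2   # total initial / final decay rates
EL, GL = 1.6, 0.3   # intermediate-state energy and decay rate
DE, DG = EI - EF, GI - GF   # Delta e and Delta gamma

def xi_dpl(tau):
    """Finite-tau direct-diagram expression of Eq. (dpl), eta = +1."""
    pref = 1.0 / (DE - 0.5j * DG)
    el = cmath.exp(-1j * (EL - 0.5j * GL) * tau)
    t1 = (el - cmath.exp(-1j * (EI - 0.5j * GI) * tau)) \
        / ((EI - EL) - 0.5j * (GI - GL))
    t2 = (el - cmath.exp(-1j * (EF - 0.5j * GF) * tau)) \
        / ((EF - EL) - 0.5j * (GF - GL))
    return pref * (t1 - t2)

# Absorption limit, Eq. (absetfdpl), with the phase factor dropped:
xi_abs = 1.0 / ((DE - 0.5j * DG) * ((EL - EI) - 0.5j * GL))

# The finite-tau modulus settles onto the limiting one, and the
# amplitude vanishes at tau = 0, as it must for coincident times.
assert abs(abs(xi_dpl(400.0)) - abs(xi_abs)) < 1e-10
assert abs(xi_dpl(0.0)) < 1e-12
```

The residual difference decays like $e^{-(\gamma_{f1}+\gamma_{f2})\tau/2}$, so the agreement improves rapidly once $\tau$ exceeds the final-state lifetime.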
Note that $d\sigma/d\Omega_f$ contains the following functional dependence on $\Delta e$: \begin{equation} \frac{d\sigma}{d\Omega_f}\propto\frac{\left|\Delta\gamma\right|} {\Delta e^2+(\Delta\gamma/2)^2} \stackrel{\Delta\gamma\rightarrow 0}{\longrightarrow}2\pi\, \delta(e_{i1}+e_{i2}-e_{f1}-e_{f2}).\label{absecon} \end{equation} In other words, the energy conservation in the cross-section is Lorentzian, and in the limit of stable final states we recover the energy-conserving $\delta$-function with the correct coefficient of $2\pi$. As expected, the strict energy conservation in the old expressions for the S-matrix elements is correct for scattering from stable states to stable states. \subsection{Emission scattering} The special case of a stable final state corresponds to a situation analogous to emission, in which we prepare a beam of excited particles and observe them after they have scattered into the stable final state. We set $\gamma_{f1}=\gamma_{f2}=0$ in Eqs.\ (\ref{dpl}), (\ref{dmn}), (\ref{cpl}), and (\ref{cmn}), and take the limit $\tau\rightarrow\infty$, to obtain: For the direct diagram, \begin{mathletters} \label{emetfd} \begin{equation} \xi^{(l,+1)}_{T_i,\infty}({\cal E}_x,{\cal E}_y)= \frac{1}{(\Delta e-i\Delta\gamma/2)[(-e_{f1}-e_{f2}+E_l)-i\Gamma_l/2]}, \label{emetfdpl} \end{equation} \begin{equation} \xi^{(l,-1)}_{T_i,\infty}({\cal E}_x,{\cal E}_y)= \frac{1}{(\Delta e-i\Delta\gamma/2) [(e_{i1}+e_{i2}+E_l)-i(\gamma_{i1}+\gamma_{i2}+\Gamma_l)/2]}, \label{emetfdmn} \end{equation} \end{mathletters} and for the cross diagram, \begin{mathletters} \label{emetfc} \begin{equation} \xi^{(l,+1)}_{T_i,\infty}({\cal E}_x,{\cal E}_y)= \frac{1}{(\Delta e-i\Delta\gamma/2) [(e_{i2}-e_{f2}+E_l)-i(\gamma_{i2}+\Gamma_l)/2]}, \label{emetfcpl} \end{equation} \begin{equation} \xi^{(l,-1)}_{T_i,\infty}({\cal E}_x,{\cal E}_y)= \frac{1}{(\Delta e-i\Delta\gamma/2) [(e_{i1}-e_{f1}+E_l)-i(\gamma_{i1}+\Gamma_l)/2]}.
\label{emetfcmn} \end{equation} \end{mathletters} Once again, we have eliminated an inessential phase from the amplitudes. Now we have $\Delta\gamma=\gamma_{i1}+\gamma_{i2}$. Substituting these expressions in Eq.\ (\ref{amp}), we again obtain a time-independent S-matrix element. We obtain a cross-section by the argument illustrated in Fig.\ \ref{tube}, which depicts a semi-infinite tube of cross-sectional area $d\sigma/d\Omega_f$, terminating at the position of the target particle and extending in the direction of $-{\bf v}$. The total probability (per unit final phase-space volume) of an interaction leading to a final state in $d\Omega_f$ is equal to the integral over the interior of the tube of the probability that each infinitesimal slice should contain the projectile particle {\em and} that it should actually reach the target particle in spite of the fact that the two-particle state is decaying away: \begin{equation} |S_{fi}|^2=L^{-3}{\displaystyle\int_0^\infty}|{\bf v}|dt\, \frac{d\sigma}{d\Omega_f}e^{-(\gamma_{i1}+\gamma_{i2})t} =\frac{|{\bf v}|}{\Delta\gamma\,L^3}\frac{d\sigma}{d\Omega_f}, \end{equation} so that \begin{equation} \frac{d\sigma}{d\Omega_f}=L^3|{\bf v}|^{-1}\Delta\gamma\,|S_{fi}|^2. \end{equation} We again see the Lorentzian energy conservation of Eq.\ (\ref{absecon}), so that in this case we also recover strict energy conservation for stable state to stable state scattering. \section{Discussion}\label{sec_disc} The regime of validity of the ``emission'' and ``absorption'' scattering limits is obvious from their context. On the other hand, the general formula in Eq.\ (\ref{etfacgen}) requires some discussion. As discussed previously, the formula never ``misbehaves'', in the sense that it never yields a divergent result. In fact, in general that result tends to zero as $\tau\rightarrow\infty$. While this is physically sensible, it obviously makes the large time-lapse limit a less than useful one. 
It is clear that what fails in this limit is the validity of the perturbation-theoretic order of the calculation. Since the scattering states are themselves decaying to other states, those other states should be included in the calculation, leading to higher-order processes. The second-order calculations outlined in the previous section are only useful for values of $\tau$ such that the scattering states have little chance to decay, that is for $\Gamma\tau\ll 1$, where $\Gamma$ is the largest of the decay rates in the process. For example, we might choose $\tau$ to be on the order of a collision time, if we are studying a gas with density and temperature such that the collision rates far exceed the decay rates. The general case above obviously represents a fairly radical departure from the usual scattering formulae. The ``emission'' and ``absorption'' scattering limits amount to somewhat less radical modifications. One obvious such modification is the replacement of strict energy conservation with ``Lorentzian'' energy conservation. Another is that the ``non-resonant'' energy denominators [Eqs.\ (\ref{absetfdmn}), (\ref{absetfcpl}), (\ref{absetfcmn}), (\ref{emetfdmn}), (\ref{emetfcpl}), (\ref{emetfcmn})] now contain the decay rates of the scattering states as well as those of the intermediate states. The importance of these changes depends upon whether the decay widths entering the energy denominators are electron decay widths or photon decay widths. If the decay widths are purely fermionic, they are gently varying functions of energy, and their magnitudes are smaller by $e^2$ than their own characteristic scale of variation, the scale of variation of the interaction matrix elements, and the characteristic separation of the resonances. 
Consequently, in this case the relative change that results from introducing Lorentzian, rather than exact, energy conservation, and from introducing the decay widths of the scattering states into the energy denominators, is of order $e^2$. On the other hand, if some of the decay widths correspond to photon lines, they can vary rather rapidly as a function of energy \cite{k54,dh83}. Thus, the behavior of the energy denominators is not really ``Lorentzian'', despite notational appearances to the contrary. The departure from the cross-sections computed assuming strict energy conservation and not including external line decay widths might turn out to be appreciable in this case, although its precise magnitude remains to be assessed. In the limit of stable scattering states the usual results are completely reproduced, since we recover strict energy conservation and there are no scattering state decay widths to include in resonant energy denominators. In connection with the dressed photon propagator of Eq.\ (\ref{dppropa}), we wish to comment on a point which is a potential source of confusion. The decay width $\Gamma({\bf k},j)$ is to be evaluated on the light cone, as implied by the first line of Eq.\ (\ref{gammaphot}). Now, when the photon propagator is used in an S-matrix element, the values of $k^0$, $k^2$, and $k^3$ are fixed by the $x^0$, $x^2$, and $x^3$ momenta of the scattering states. Thus, the sum over intermediate photon wave states involves an integral over the component $k^1$ of the wave vector ${\bf k}$. The value of $\Gamma({\bf k},j)$ must be evaluated for {\em each} value of $k^1$ in the integral. This is analogous to the case of the electron propagator, Eq.\ (\ref{depropa}), in which the intermediate electron decay width is evaluated, on the energy shell, for each Landau level in the sum over intermediate states. 
The only difference between the two cases is that the relevant degrees of freedom are discrete for the electron propagator, while they are continuous for the photon propagator. The radiatively corrected photon propagator permits for the first time the evaluation of processes such as $\text{e}^+\text{e}^-\rightarrow\text{e}^+\text{e}^-$, $\text{e}^-\rightarrow\text{e}^-\text{e}^+\text{e}^-$, and $\gamma\text{e}^-\rightarrow\text{e}^-\text{e}^+\text{e}^-$, all of which are important for neutron star emission. Of equal astrophysical importance is the evaluation above the one photon pair-production threshold of Compton scattering, two photon pair annihilation, and two photon pair production, which is now possible by virtue of the radiatively corrected scattering states. Finally, we now have access to the processes $\text{e}^-\text{e}^-\rightarrow\text{e}^-\text{e}^-$ and $\text{e}^+\text{e}^-\rightarrow\text{e}^+\text{e}^-$ even when the initial and final states are excited. \section{Acknowledgments} While this work was performed, Carlo Graziani held a National Research Council-NASA Goddard Space Flight Center Research Associateship.
\section{Introduction} \label{sec:intro} Deep learning-based systems have progressed by leaps and bounds over the past few years, enabling their deployment in critical applications such as self-driving cars, surveillance systems and biomedical applications. Furthermore, organizations often provide pretrained machine learning models as a service (MLaaS) where the end user is allowed to query the model and get access to its predictions via APIs for use in various applications. \begin{figure} \centering \includegraphics[width=\linewidth]{latex/figures/Motivation_Diagram.pdf} \caption{\textbf{Model Stealing Attack and its vulnerabilities}: An adversary queries the victim model $\mathcal{V}$ with proxy data to obtain its labels. The labelled training data is used to train a clone model $\mathcal{C}$, which can be further used by the adversary to stage membership inference~\cite{shokri2017membership}, model inversion~\cite{fredrikson2015model} or adversarial attacks~\cite{zhou2018transferable}.} \vspace{-1.0em} \label{fig:motivation} \end{figure} However, exposing the predictions of the models through queries makes the model susceptible to model stealing attacks, which attempt to clone the victim model even in a black-box setting that restricts access to its gradients. Protecting the privacy of an ML model is of paramount importance as organizations invest significant resources on cutting-edge research and also on gathering and labelling large amounts of training data~\cite{halevy2009unreasonable} for achieving competent performance on various tasks. In addition, recent works~\cite{papernot2017practical, tramer2017space, zhou2020dast, wang2021delving} have shown that an adversary could train a substitute model via model stealing and use it for crafting adversarial examples~\cite{goodfellow2014explaining} in a black-box setting, which poses a serious threat when the model is deployed in security-critical applications. 
A stolen model could also compromise the privacy of users by leaking confidential data through a membership inference attack~\cite{shokri2017membership} or model inversion~\cite{zhang2020secret, zhao2021exploiting}. Fig.~\ref{fig:motivation} showcases some of the possible malicious outcomes of Model Stealing. In order to prevent model stealing attacks, some defenses attempt to perturb the softmax predictions of the model, while preserving its top-1 prediction~\cite{lee2018defending}. In this work, we consider the problem of model stealing in a more practical and challenging hard-label setting, where only the top-1 prediction of the model is accessible; our approach is thus effective even in the presence of such defenses. In a model stealing attack, an adversary first queries a black-box victim model $\mathcal{V}$ with input data and obtains a prediction for it as shown in Fig.~\ref{fig:motivation}. This data, along with the victim model's predictions, is used to train a clone model $\mathcal{C}$. In a practical scenario, the attacker would not have access to the training data, and hence we consider the problem of Data-Free Model Stealing (DFMS) in this work. In such a data-free scenario, the attacker could use publicly available related datasets \cite{papernot2017practical,orekondy2019knockoff}, or synthetically generated samples \cite{truong2021data} to query the model. While the use of publicly available datasets assumes access to related data, the data-free generative approach could suffer from a large query budget, as the synthetic data can be far from the true training data distribution. In this work, we overcome both challenges by utilizing the available data that may be unrelated to the original training dataset, as a weak image prior. This enables the generation of representative samples under a low query budget, which is a crucial requirement in model stealing attacks, since MLaaS APIs work on a pay-per-query basis. 
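To make the hard-label threat model concrete, the query-then-train loop can be reduced to a toy sketch: a black-box victim that reveals only its top-1 label, and a clone fit purely on those labels. The victim function, the perceptron clone, and the query grid below are all illustrative assumptions for exposition, not the attack developed in this paper:

```python
import itertools

def victim(x):
    """Black-box victim: only the hard (top-1) label is exposed."""
    return 1 if 2.0 * x[0] - x[1] + 0.3 > 0 else 0

# Query set: the attacker owns no training data, only points it chooses to query.
queries = [(a / 2.0, b / 2.0) for a, b in itertools.product(range(-2, 3), repeat=2)]
labels = [victim(x) for x in queries]  # one victim query per point

# Clone: a perceptron trained only on the victim's hard labels.
w, b = [0.0, 0.0], 0.0
for _ in range(500):  # enough passes for the perceptron to converge here
    for x, y in zip(queries, labels):
        pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        if pred != y:  # perceptron update on disagreement with the victim
            sign = 1 if y == 1 else -1
            w[0] += sign * x[0]
            w[1] += sign * x[1]
            b += sign

agreement = sum(
    (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == y
    for x, y in zip(queries, labels)
) / len(queries)
print(f"clone/victim agreement on queried points: {agreement:.2f}")
```

Even this toy clone reaches full agreement with the victim on the queried points; the hard part addressed by DFMS is choosing informative queries for deep models without any real data.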
While existing algorithms for Data-Free Knowledge Distillation~\cite{addepalli2020degan, nayak2019zero, lopes2017data, yin2020dreaming, fang2019data} and Model Extraction~\cite{kariyappa2021maze,truong2021data} achieve near perfect clone-model accuracy, there are additional challenges in a Model Stealing framework due to the lack of access to gradients and a hard-label setting. Therefore, we consider a practical setup of data-free hard-label model stealing and overcome the challenges by utilizing the clone model’s gradients as a proxy to the gradients of the victim model. As the clone model starts training, it acts as a useful proxy for the victim model, and helps the generator learn to generate rich informative samples, which boosts the clone accuracy further. We explicitly enforce the generation of a class-balanced dataset from the generator that is also more aligned with the distribution of the training dataset. Additionally, we also utilize an adversarial loss in a GAN framework \cite{goodfellow2014generative}, by using publicly available potentially unrelated data, which we refer to as proxy data \cite{addepalli2020degan}. While this could be completely unrelated to the original training dataset, it still helps in enforcing a weak image prior in the generated data. This in turn reduces the number of victim model queries needed to perform Model Stealing. In fact, we show that it is possible to even use synthetic samples, such as multiple overlapping shapes with a planar background, to steal a model in a completely data-free setting. Our method achieves a significant improvement over ZSDB3KD \cite{wang2021zero}, a zero-shot data-free method in a similar hard label setting using only synthetic samples. In the upcoming sections, we describe our approach in detail and show results on various datasets. 
\noindent Our \textbf{key contributions} are listed below: \begin{itemize} \itemsep0em \item We propose DFMS-HL, a Data-Free Model Stealing (DFMS) attack in a Hard-Label (HL) setting, to train a clone model with the help of unrelated proxy data or manually crafted synthetic data. We show that DFMS-HL outperforms the existing baseline ZSDB3KD \cite{wang2021zero} and results in a significant reduction of around $500\times$ in the number of queries to the victim model. \item We demonstrate state-of-the-art results on the CIFAR-10 dataset using unrelated proxy samples, such as a given subset (containing 40 or 10 non-overlapping classes) from CIFAR-100, or synthetic data. \item We are the first to show noteworthy results of data-free model stealing on a dataset with a larger number of classes such as CIFAR-100. This demonstrates that our approach is both effective and scalable. \item We compare our method with the state-of-the-art model stealing attacks MAZE~\cite{kariyappa2021maze} and DFME~\cite{truong2021data}, which additionally utilize softmax predictions of the victim model. Although we consider a more restrictive setting, we achieve a comparable accuracy using the DFMS-HL approach, and a significant boost of around 3\% using a Soft-Label (SL) variant of the proposed method (DFMS-SL). \end{itemize} \section{Related Work} In this section, we discuss existing Knowledge Distillation and Model Stealing works with varied levels of access to the victim model as shown in Table~\ref{table:taxonomy}. \subsection{Knowledge distillation} Knowledge distillation~\cite{hinton2015distilling} aims to transfer the knowledge of a large pretrained teacher model to a smaller student model without a significant impact on accuracy. This is primarily used to compress models for deployment, in order to reduce the memory requirements and inference time~\cite{gou2021knowledge, adriana2015fitnets, yang2020model, aguinaldo2019compressing}. 
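The temperature-scaled distillation objective of~\cite{hinton2015distilling} can be sketched in a few lines of plain Python. This is a generic illustration of soft-label KD with assumed toy logits, not a loss available to DFMS-HL, which sees only hard labels:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(teacher_logits, student_logits, T=4.0):
    """KL(teacher_T || student_T) * T^2, the usual distillation term."""
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)  # student predictions
    return (T ** 2) * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Matching logits give zero loss; diverging logits give a positive loss.
print(kd_loss([3.0, 1.0, 0.2], [3.0, 1.0, 0.2]))  # 0.0
print(kd_loss([3.0, 1.0, 0.2], [0.2, 1.0, 3.0]))  # > 0
```

The soft targets $p$ are what hard-label settings such as ours forgo: the victim returns only $\mathrm{argmax}$, so this KL term cannot be computed.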
In practical scenarios, training data is kept confidential due to privacy concerns. Hence, there has been a lot of focus on developing data-free approaches for knowledge-distillation. ZSKD~\cite{nayak2019zero}, DAFL~\cite{chen2019data}, DFKD~\cite{lopes2017data} are popular knowledge distillation methods in a data-free setting. A data-free KD method DeGAN~\cite{addepalli2020degan} demonstrated that it is possible to use publicly available unrelated data (proxy dataset) to distill the knowledge of a teacher model to a smaller student model. However, all these methods require access to the teacher model’s gradients. Following this, Black-Box Ripper~\cite{barbalau2020black} was proposed to implement model stealing by querying a black-box teacher model with unrelated proxy data. A recent work ZSDB3KD~\cite{wang2021zero} proposed knowledge distillation for a black box model with only hard-label outputs. However, this approach is highly computationally intensive due to the requirement of a very large number of queries (4000 million) to the teacher model. Our work considers the same setup of having access to only the top-1 labels, with a significantly lower query budget of 8 million. \input{latex/tables/taxonomy} \subsection{Model Stealing} Tramer \etal \cite{tramer2016stealing} demonstrated that an attacker could use queries to steal a machine learning model with near perfect fidelity. Following this, model stealing has been implemented in various domains ~\cite{krishna2019thieves, jagielski2020high, pal2019framework, correia2018copycat, milli2019model}. A partial data approach JBDA~\cite{papernot2017practical} assumed access to a small set of samples from the data distribution. On the other hand, surrogate data approaches such as KnockOffNets ~\cite{orekondy2019knockoff} and Black-Box dissector\cite{wang2021black} consider that attackers could use images from a different data source to steal a model. 
These methods fail to perform well without a suitable surrogate dataset. This motivated the development of data-free approaches which work well without using surrogate data or seed samples from the training data. Recent data-free approaches such as MAZE~\cite{kariyappa2021maze} and DFME~\cite{truong2021data} attempt to extract models using GAN-generated synthetic data. In these approaches, the generator is trained to produce images that maximize the dissimilarity score between the clone and victim models. The victim model's gradients are required to measure this dissimilarity score, and are estimated using zeroth-order gradient approximation. These approaches are computationally expensive as they require a large number of queries ($\sim$20 million) to the victim model for synthesizing data samples in a black-box setting. Moreover, these methods assume that the softmax vector from the victim model is accessible. Contrary to this, we consider a practical setting that allows access to only hard labels from the victim model. \subsection{Defenses against model stealing} Lee \etal \cite{lee2018defending} propose to defend against model stealing attacks by perturbing the model predictions while preserving its top-1 label, to maintain similar classification accuracy. Along similar lines, Prediction Poisoning~\cite{orekondy2019prediction} perturbs model predictions by poisoning the output distribution at the cost of model accuracy. However, such defenses fail in a scenario where an attacker has access to only hard labels from the model. A more sophisticated approach EDM~\cite{kariyappa2020protecting} introduces randomness into the predictions by using an ensemble of diverse models to produce dissimilar outputs for Out-of-Distribution (OOD) samples, which are likely to be used for querying the victim model in a model stealing attack. Similarly, Adaptive Misinformation~\cite{kariyappa2020defending} perturbs the predictions for OOD inputs only. 
However, these approaches have been shown to cause utility degradation \cite{orekondy2019prediction}, or can be made ineffective using an adaptive query synthesis strategy \cite{chandrasekaran2020exploring}. Further, Chandrasekaran \etal~\cite{chandrasekaran2020exploring, chandrasekaran2021sok} provide theoretical insights to demonstrate that ``model extraction is inevitable'', even in a realistic setting with only hard labels, and even when models use randomised defenses. Hence, a model with a reasonably good accuracy would always leak information that could lead to model extraction. In this work, we demonstrate that it is indeed possible to perform model stealing in a severely restricted setting as well, and further achieve competent clone accuracy. This paves the way for the development of better defenses for preserving model privacy in the future. \section{Proposed Approach} For model stealing, the goal of an adversary is to learn the parameters of the clone model $\mathcal{C}$ so as to match the predictions of the victim model $\mathcal{V}$. Towards this end, we propose a data-free model stealing approach \textbf{DFMS-HL} that requires only hard-label access. In the following sections, we describe the proposed model stealing attack algorithm. \begin{figure*}[ht] \centering \includegraphics[width=0.80\linewidth]{latex/figures/Approach_Diagram.pdf} \caption{\textbf{Architecture of DFMS-HL}: Generator $\mathcal{G}$ generates data $x$ with a proxy image prior. The clone model $\mathcal{C}$ is trained using the predictions from the victim model $\mathcal{V}$ with a cross-entropy loss objective $\displaystyle \mathcal{L}_{CE}$. The discriminator $\mathcal{D}$ learns to discriminate between proxy data and the samples generated from $\mathcal{G}$. The generator $\mathcal{G}$ is trained using the adversarial loss $\mathcal{L}_{adv}$ along with the class-diversity loss $\mathcal{L}_{class\_div}$. 
The generator and clone model are trained alternately in every iteration of the algorithm.} \label{fig:architecture} \end{figure*} \subsection{Overview} We use a GAN-based architecture to train the clone model. We first train a DCGAN~\cite{radford2015unsupervised} by imposing an image prior using synthetic data or unrelated proxy data, and use this as an initialization for the generator $\mathcal{G}$. Further, the clone model and generator are trained alternately. The data flow of the proposed model stealing attack is shown in Fig.~\ref{fig:architecture}, wherein the generator $\mathcal{G}$ generates data $x=\mathcal{G}(z)$ from a random vector $z$. The victim model takes input $x$ and provides the label $\hat{y}(x)$, yielding input-label pairs $(x,\hat{y}(x))$. Since the victim model is black-box, we do not backpropagate the gradients through it. We use the input-label pairs to train the clone model. Further, the generated data $x$ is used to train the generator using the adversarial loss \cite{goodfellow2014generative} and a diversity loss \cite{addepalli2020degan}. The discriminator learns to differentiate between fake and real proxy data using the adversarial loss. In the subsequent sections, we describe the loss functions for training the generator and clone model in further detail. \subsection{Clone Model Training} The clone model $\mathcal{C}$ is trained using the data samples generated from the generator $\mathcal{G}$. In every iteration, we sample an $m$-dimensional random vector $z$, whose elements are sampled from $m$ \textit{i.i.d.} standard normal distributions. This vector is forward propagated through $\mathcal{G}$ to generate images $x$. These images are then passed to the victim model to obtain its hard labels. 
The clone model is trained with the cross-entropy loss objective using the victim predictions as ground truth, as shown below: \vspace{-0.5em} \begin{equation} \mathcal{L}_{C} = \underset{z \sim \mathcal{N}(0,I)}{\mathbb{E}} \left[ \mathcal{L}_{CE}(\mathcal{C}(x),\hat{y}(x)) \right], \ x=\mathcal{G}(z) \end{equation} where $\displaystyle \ \hat{y}(x) = \underset{i}{\mathrm{argmax}}\ \mathcal{V}_i(x) $ is the class label corresponding to the maximum probability class, $I$ is an $m$ dimensional identity matrix, and $\mathcal{C}(x)$ is the pre-softmax output from the clone model. \begin{algorithm}[t] \caption{DFMS-HL : Algorithm for Model Stealing}\label{alg:MoSAlgo} \begin{algorithmic} \Require $N_Q, \mathcal{G}, \mathcal{D}, n_G, n_C $ \State // Initialize a Generator $\mathcal{G}$ with DCGAN parameters \State // Train the clone model $\mathcal{C}$ with DCGAN and proxy images using $n_C$ queries for initialization. \While{$n_G \neq 0$} \State $x = \mathcal{G}(z), z\sim \mathcal{N}(0,I)$ \State $\mathcal{L}_G \gets \mathcal{L}_{adv, fake} + \lambda_{div} \mathcal{L}_{class\_div}$ \State $\mathcal{L}_D \gets \mathcal{L}_{adv,real} + \mathcal{L}_{adv,fake}$ \State $\mathcal{\theta_G} \gets \mathcal{\theta_G} - \mathcal{\epsilon_G} \nabla_{\theta_G}\mathcal{L}_G$ \State $\mathcal{\theta_D} \gets \mathcal{\theta_D} - \mathcal{\epsilon_D} \nabla_{\theta_D}\mathcal{L}_D$ \State $n_G \gets n_G-1$ \EndWhile \State // Train clone model $\mathcal{C}$ \While{$n_C \neq 0$} \State $x = \mathcal{G}(z), z\sim \mathcal{N}(0,I)$ \State $\mathcal{L}_{C} \gets \mathcal{L}_{CE} (\mathcal{C}(x), \hat{y}(x)) $ \State $\mathcal{\theta_C} \gets \mathcal{\theta_C} - \mathcal{\epsilon_C} \nabla_{\theta_C}\mathcal{L}_C$ \State $n_C \gets n_C-1$ \EndWhile \State // Start alternate training between $\mathcal{G}$ and $\mathcal{C}$ \While{$N_Q \neq 0$} // Train $\mathcal{G}$ and $\mathcal{D}$ with $\mathcal{C}$ as fixed \State $x = \mathcal{G}(z), z\sim \mathcal{N}(0,I)$ \State $\mathcal{L}_G 
\gets \mathcal{L}_{adv, fake} + \lambda_{div} \mathcal{L}_{class\_div}$ \State $\mathcal{L}_D \gets \mathcal{L}_{adv,real} + \mathcal{L}_{adv,fake}$ \State $\mathcal{\theta_G} \gets \mathcal{\theta_G} - \mathcal{\epsilon_G} \nabla_{\theta_G}\mathcal{L}_G$ \State $\mathcal{\theta_D} \gets \mathcal{\theta_D} - \mathcal{\epsilon_D} \nabla_{\theta_D}\mathcal{L}_D$ // Train $\mathcal{C}$ with $\mathcal{G}$ and $\mathcal{D}$ as fixed \State $x = \mathcal{G}(z), z\sim \mathcal{N}(0,I)$ \State $\mathcal{L}_{C} \gets \mathcal{L}_{CE} (\mathcal{C}(x), \hat{y}(x)) $ \State $\mathcal{\theta_C} \gets \mathcal{\theta_C} - \mathcal{\epsilon_C} \nabla_{\theta_C}\mathcal{L}_C$ \State $N_Q \gets N_Q-1$ \EndWhile \end{algorithmic} \end{algorithm} \subsection{Generator Training} For imposing an image prior, we initially train a DCGAN generator using proxy data or synthetic images. However, we find that this is not sufficient as the generator could potentially suffer from mode collapse and lack of diversity. Moreover, lack of class diversity can severely impact the learning of tail classes in a hard-label setting. Hence, it is crucial for the generator to generate a class-balanced set of images for learning the information across all classes. Therefore, we use a class-diversity loss formulation \cite{addepalli2020degan} to generate diverse samples from the generator $\mathcal{G}$ while remaining close to the manifold of the proxy/synthetic images. The generator loss has two components. The first component is the adversarial loss \cite{goodfellow2014generative} which causes the generator to generate data close to the proxy data distribution. The second component is a class balancing loss \cite{addepalli2020degan}, to enforce a diversity constraint. The two loss formulations for the generator are described in more detail below. \textbf{Adversarial Loss}~\cite{goodfellow2014generative}: The adversarial loss ensures that the distribution of images is close to the images in the proxy or synthetic dataset. 
\vspace{-1.0em} \begin{equation} \displaystyle \mathcal{L}_{adv,real} = \underset{x\sim p_{data}(x)}{\mathbb{E}} \left[ \log \mathcal{D}(x) \right] \end{equation} \begin{equation} \displaystyle \mathcal{L}_{adv,fake} = \underset{z\sim \mathcal{N}(0,I)}{\mathbb{E}} \left[ \log (1 - \mathcal{D}(\mathcal{G}(z))) \right] \end{equation} The discriminator $\mathcal{D}$ and generator $\mathcal{G}$ play a min-max game~\cite{goodfellow2014generative} as follows: \begin{equation} \displaystyle \underset{\mathcal{G}}{\min}\ \underset{\mathcal{D}}{\max} \ \mathcal{L}_{adv,real} + \mathcal{L}_{adv,fake} \end{equation} \textbf{Class Diversity Loss \cite{addepalli2020degan}}: The class diversity loss encourages the generation of diverse images across all classes. In a batch of $N$ samples, we consider the expected confidence value over the batch as $\alpha_j$ for every class $j$, and obtain the entropy over all $K$ classes. The negative entropy, denoted as $\mathcal{L}_{class\_div}$, is computed as shown below: \vspace{-1.0em} \begin{equation} \displaystyle \mathcal{L}_{class\_div} = \sum_{j=1}^{K} \alpha_j \log \alpha_j \end{equation} \vspace{-0.5em} \begin{equation} \displaystyle \alpha_j = \frac{1}{N} \sum_{i=1}^{N} \SoftMax(\mathcal{C}(x_i))_j \end{equation} \textbf{Using Clone Model as a Proxy for Victim:} Since the victim model is black-box, backpropagation through $\mathcal{V}$ is not permitted. Hence, for imposing diversity, we use the clone model parameters to compute the loss. Over the training process, the clone learns to imitate the gradients of the victim, making it a suitable proxy for enforcing diversity in the generated images. \iffalse \textbf{Cross-Entropy Loss}: We add a cross entropy loss term between the clone model's outputs and the victim's labels to the generator loss. While training, this enforces the generator to generate samples for which the clone model predicted correctly with a higher confident value. 
This speeds up the training of the clone model within a smaller query budget. Since, the manifold of synthetic images is different from the training data, the generator images gradually drift apart from the synthetic data distribution, increasing the confidence of the clone model over the generated samples. In order to compute the cross-entropy loss, the images generated from the generator are forward propagated through the clone model and cross-entropy loss with respect to victim's labels are minimised. \begin{equation} \displaystyle L_{CE}^G = \frac{1}{|G(z)|} \sum_{x \in G(z)} CE(\hat{V}(x), C(x)) \end{equation} where $\displaystyle \ \hat{V}(x) = \underset{i}{\mathrm{argmax}}\ V_i(x) $is the class label for the maximum probability class and $C(x)$ is the pre-softmax output from the clone model. \fi The equations given below describe the overall generator and discriminator losses. \begin{equation} \mathcal{L}_G = \mathcal{L}_{adv, fake} + \lambda_{div} \mathcal{L}_{class\_div} \end{equation} \begin{equation} \mathcal{L}_D = \mathcal{L}_{adv,real} + \mathcal{L}_{adv,fake} \end{equation} \subsection{Algorithm} The overall training algorithm is outlined in Algorithm~\ref{alg:MoSAlgo}. We first train a DCGAN to initialize the generator model with an image prior. Following this, we train the clone model using a mix of images from the DCGAN and the proxy dataset to obtain a good initialization for the clone model. Using this clone model, we further fine-tune the generator for $n_G$ epochs using the two proposed losses: the adversarial loss and the class-diversity loss. We then train a clone model from scratch for $n_C$ epochs using the images from the diverse generator $\mathcal{G}$. Following this, we start the alternate training process for the generator and clone model. We train the generator for one iteration by freezing the weights of the clone model and subsequently train the clone model for one iteration using labels from the victim model. 
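The class-diversity term above is simply the negative entropy of the batch-averaged softmax of the clone's outputs. A minimal pure-Python sketch (with assumed toy logits and $K=4$ classes) shows how it rewards class-balanced batches and penalizes mode collapse:

```python
import math

def softmax(logits):
    exps = [math.exp(z) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def class_diversity_loss(batch_logits):
    """L = sum_j alpha_j * log(alpha_j), where alpha_j is the mean
    softmax confidence of class j over the batch (negative entropy)."""
    probs = [softmax(z) for z in batch_logits]
    K = len(batch_logits[0])
    alpha = [sum(p[j] for p in probs) / len(probs) for j in range(K)]
    return sum(a * math.log(a) for a in alpha)

# A batch covering all 4 classes evenly reaches the minimum, -log K ...
balanced = [[10.0 if j == i else 0.0 for j in range(4)] for i in range(4)]
# ... while a collapsed batch (every sample predicts class 0) is penalized.
collapsed = [[10.0, 0.0, 0.0, 0.0] for _ in range(4)]
print(class_diversity_loss(balanced))   # ~ -log 4 = -1.386
print(class_diversity_loss(collapsed))  # ~ 0 (much larger)
```

Minimizing $\mathcal{L}_{class\_div}$ therefore pushes the batch-averaged confidences $\alpha_j$ toward the uniform distribution, which is exactly the class balance the generator needs.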
This procedure is repeated until the query budget $N_Q$ is exhausted. \subsection{Computing the Query Cost} In this section, we compute the total number of queries to the victim model. The number of samples in the proxy data is denoted as $N_P$. Initially, we require $n_C$ queries to obtain a clone model used to initialize the generator, and an additional $n_C$ queries to initialize the clone model $\mathcal{C}$. For our experiments, we set $n_C$ as 50,000. The alternate training of the clone and generator continues for $E$ epochs, and in each epoch, the victim model is queried $N_P$ times. The total query cost is thus computed as follows: \vspace{-0.3cm} \begin{equation} N_Q = E \cdot N_P \end{equation} \vspace{-0.4cm} \begin{equation} \displaystyle \mathrm{Total\ Queries} = 2 \cdot n_C + N_Q \end{equation} We set the query limit $N_Q$ to 8 million for our proxy and synthetic data experiments on CIFAR-10. \subsection{Insights on Query Budget} Chandrasekaran \etal~\cite{chandrasekaran2020exploring} formulated the model extraction task as a query synthesis active learning problem where an adversary learns a hypothesis function with a query complexity $q_A(\epsilon, \delta)$. The authors observe that it is possible for an adversary to implement an $\epsilon$-extraction attack with query complexity $q_A(\epsilon, \delta)$ and confidence $1-\delta$ (described in Section 2 of the Supplementary). The authors~\cite{chandrasekaran2020exploring} further prove that model stealing is inevitable and there exists a query bound within which a model could be stolen. We empirically find the query budget needed for the proposed approach in the Query ablation (Section \ref{sec:abla_study}). \input{latex/tables/cifar-10-main} \input{latex/tables/resnet_main} \section{Experiments} We perform experiments to evaluate the effectiveness of the proposed algorithm DFMS-HL in a hard-label data-free setting. 
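As a numeric sanity check, the two query-cost formulas ($N_Q = E \cdot N_P$ and $\mathrm{Total\ Queries} = 2 \cdot n_C + N_Q$) can be instantiated with the CIFAR-10 settings above; $N_P = 50{,}000$ is an assumption matching the 50k-sample synthetic proxy set:

```python
n_C = 50_000     # queries for each of the two initialization phases
N_P = 50_000     # proxy-dataset size (queries per alternate-training epoch)
N_Q = 8_000_000  # query budget for the alternate training phase

E = N_Q // N_P                 # number of alternate-training epochs
total_queries = 2 * n_C + N_Q  # Total Queries = 2 * n_C + N_Q

print(E)              # 160 epochs of alternate training
print(total_queries)  # 8,100,000 queries in all
```

The initialization overhead ($2 n_C = 100$k queries) is thus only about 1.2\% of the overall budget.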
We primarily compare our approach to the existing method ZSDB3KD~\cite{wang2021zero}, which is a zero-shot hard-label knowledge distillation method. We present evaluations of DFMS-HL by using various proxy datasets as well as with synthetically crafted data. Our attack not only outperforms ZSDB3KD by a large margin, but also achieves clone-model accuracy comparable to the soft-label methods by using only hard labels from the victim model. Additionally, we perform ablations to highlight the number of queries required to successfully steal a model, and also to understand the impact of the class-diversity loss. Our analysis reveals that the proposed attack is computationally more efficient when compared to existing approaches since it requires significantly fewer queries. \subsection{Experimental Setup} We evaluate DFMS-HL on two datasets, CIFAR-10 and CIFAR-100. For evaluation, we first train a victim model with the same teacher accuracy as ZSDB3KD~\cite{wang2021zero} for a fair comparison. The victim models are trained until the accuracy reaches the expected value. We evaluate our approach using the following two (Victim, Clone) pairs: (ResNet-18, ResNet-18) and (AlexNet, AlexNet-half). For the clone model training, we use an SGD optimizer with momentum of 0.9, maximum learning rate of 0.1, and a weight decay of $5\times10^{-4}$. We use a cosine-annealed scheduler to decay the learning rate across epochs. For initialization, the clone model is trained for 200 epochs. For the main approach, the clone model is further trained with the images generated from the generator within the query budget or until the accuracy saturates. For the generator, we use a DCGAN \cite{radford2015unsupervised} with up to five transpose convolution layers followed by batch-normalization and ReLU units. We use Tanh activation units after the last convolution layer to generate images in the normalised range of $[-1,1]$. 
The discriminator contains a stack of five convolution layers followed by batch normalization and Leaky ReLU units. The last layer of the discriminator uses Sigmoid activation. The GAN is trained with an Adam optimizer~\cite{kingma2014adam} and a learning rate of $2\times10^{-4}$ with $(\beta_1, \beta_2)$ set to $(0.5, 0.999)$. \subsection{Results} \textbf{Comparison with Knowledge distillation (KD) methods:} We compare the proposed approach with existing KD methods on CIFAR-10 in Table \ref{table:cifar10_main}. DeGAN~\cite{addepalli2020degan} and ZSKD~\cite{nayak2019zero} are data-free KD methods with white-box teacher access, while KnockoffNets~\cite{orekondy2019knockoff} and Black-Box Ripper~\cite{barbalau2020black} are data-free KD methods in a black-box setting. Similar to the experimental setting of prior works \cite{addepalli2020degan,barbalau2020black}, we use images from 40 unrelated classes of CIFAR-100 as the proxy dataset for CIFAR-10 model stealing. We also show results using images from 10 classes randomly sampled from these 40 unrelated classes. We achieve results comparable to the data-free KD methods despite having more restrictions on access to the victim model. We also show results by using synthetically crafted data for imposing image priors using the discriminator. For this, we generate a synthetic dataset of 50k samples by including random shapes (triangles, rectangles, ellipses, or circles) of randomly sampled sizes at random locations on a plain background of random color (details in Section 1.1 of the Supplementary). We also generate textured images by increasing the maximum number of shapes to 100 and reducing the maximum region occupied by the shapes in the image. These images are converted to grey-scale as shown in Fig.~\ref{fig:gan_imgs}, and further used as proxy data to train the generator. For comparing our results with ZSDB3KD, we train a victim model with a comparable accuracy of 80.18\%. 
From Table \ref{table:cifar10_main}, it can be observed that our approach not only outperforms ZSDB3KD by a large margin, but also achieves a comparable accuracy with respect to DeGAN and Black-Box Ripper using 40 unrelated classes from CIFAR-100 as the proxy data. We use a significantly lower query budget of 8M when compared to ZSDB3KD, which requires 4000M queries. We report the clone model accuracy with other proxy datasets in Table \ref{table:other_proxy_data}. When synthetic data is used, we report our numbers under the ``Data-Free'' column across all tables since we do not use any additional data in this case. We obtain significant gains when compared to ZSDB3KD across different proxy datasets. \input{latex/tables/cifar100_main} \input{latex/tables/other_datasets} \textbf{Comparison with Model Stealing methods.} We compare our approach with the state-of-the-art data-free Model Stealing approaches~\cite{kariyappa2021maze, truong2021data} on CIFAR-10 in Table \ref{table:resnet_main}. We obtain an accuracy of 84.51\% by merely using synthetic samples in a completely data-free hard-label setting. We use a lower query budget of 8M, as compared to that of DFME and MAZE that require 20M queries for CIFAR-10. We further extend our attack to the soft-label black-box scenario (denoted as DFMS-SL in Table \ref{table:resnet_main}) where the softmax predictions of the victim model are available. In order to utilize the soft labels in a KD setting, we use the L1 loss formulation of DFME~\cite{truong2021data}, which computes the L1 distance between the victim and clone model's logits. Victim model logits are estimated by first taking the log of the softmax output, followed by a mean correction. We use the same query budget of 20M and get a boost of almost 3\% using synthetic data and proxy data of 10 CIFAR-100 classes. 
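The logit-recovery step used for DFMS-SL (log of the softmax followed by a mean correction) can be checked directly: since $\log \mathrm{softmax}(z)_i = z_i - \log \sum_j e^{z_j}$, subtracting the per-sample mean of the log-probabilities recovers the true logits up to their own mean. A small sketch with assumed toy logits:

```python
import math

def softmax(logits):
    exps = [math.exp(z) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def recover_logits(probs):
    """Log of the softmax output, followed by a mean correction."""
    logs = [math.log(p) for p in probs]
    mu = sum(logs) / len(logs)
    return [l - mu for l in logs]

true_logits = [2.0, -1.0, 0.5, 0.0]
est = recover_logits(softmax(true_logits))

# The estimate matches the true logits shifted to zero mean.
mean = sum(true_logits) / len(true_logits)
for e, z in zip(est, true_logits):
    print(round(abs(e - (z - mean)), 10))  # 0.0 for every class
```

The per-sample additive constant is unrecoverable in principle, but it is irrelevant to an L1 loss on mean-corrected logits, which is why this estimate suffices for the DFME-style objective.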
\textbf{Results on CIFAR-100.} We perform experiments on CIFAR-100 (Table \ref{table:cifar100_main}) with CIFAR-10 \cite{addepalli2020degan,barbalau2020black} and synthetic data as proxy datasets using a ResNet-18 victim model of accuracy 78.52\%. DeGAN attains an accuracy of 75.62\% in a soft-label setting with access to the teacher gradients. DFMS-HL reaches a comparable accuracy of 72.83\% using CIFAR-10 as the proxy and 43.56\% using synthetic data, without any access to the victim model's gradients and only using hard labels. This shows that gradients from the clone model effectively act as a proxy for the victim model's gradients, for training the generator to generate diverse samples across all classes. \begin{figure} \centering \includegraphics[width=0.78\linewidth]{latex/figures/gan_imgs.pdf} \caption{Samples of grey-scale synthetic images shown on the left, along with images generated from the DFMS-HL generator shown on the right.} \vspace{-1em} \label{fig:gan_imgs} \end{figure} \section{Ablation Study} \label{sec:abla_study} \textbf{Query Budget:} We analyze the impact of the query budget on the accuracy of the clone model. Our approach achieves a good accuracy within a query budget of 7.6 million using synthetic data as proxy, with AlexNet as the victim model and AlexNet-half as the clone model on CIFAR-10. From Fig. \ref{fig:plot_queries} we observe that even with a small query budget of 1.26M, our method performs well, and the accuracy saturates within 8M. We report the saturating accuracies in Tables \ref{table:cifar10_main} and \ref{table:resnet_main}. We use a query budget of 10M for the CIFAR-100 experiments (Table \ref{table:cifar100_main}) and 8M for the CIFAR-10 experiments (Tables \ref{table:cifar10_main} and \ref{table:resnet_main}). The class-diversity loss has a huge impact on the clone model accuracy, as we observe a significant boost of 6\% when it is used. 
\begin{figure} \centering \includegraphics[width=0.75\linewidth]{latex/figures/plot-pd.pdf} \caption{\textbf{Query Ablation on CIFAR-10 using synthetic images as proxy data:} Plot of clone model accuracy (\%) w.r.t. the number of queries. We achieve a significant boost of 6\% by using the class-diversity loss.} \label{fig:plot_queries} \vspace{-1em} \end{figure} \textbf{Class Diversity Loss:} We perform an ablation study by varying the coefficient of the diversity loss from 0 to 1000 in Fig. \ref{fig:div_loss_syn}. We use synthetic data as proxy with CIFAR-10 as the original training dataset of the Victim model. We run the ablations for 150 epochs of training, which limits the queries to 7.6M. We find that the clone model accuracy is stable across a wide range of loss coefficients. We set $\lambda_{div}$ to 500 for CIFAR-10 and 100 for CIFAR-100. \begin{figure} \centering \includegraphics[width=0.75\linewidth]{latex/figures/plot_div_loss_syn-pd.pdf} \caption{\textbf{Sensitivity Plot for Class-diversity Loss:} Clone model accuracy is stable across a wide range of loss coefficients $\lambda_{div}$.} \label{fig:div_loss_syn} \vspace{-0.3cm} \end{figure} \textbf{Alternate training of Clone model and generator:} The generator and clone model are trained once in every iteration. We check the impact of training each model after every $t$ iterations in Fig. \ref{fig:iter_gap}. We use synthetic dataset as proxy data and CIFAR-10 as the Victim training dataset, with 85 epochs of training for this ablation. We vary the iteration gap of training each model from 0 to 4. A gap of 0 indicates that the respective model is trained every iteration. The results show that increasing the iteration gap impacts the clone accuracy. We obtain a marginally better accuracy when the generator is trained in alternate iterations. We report our final results with iteration gap set to $0$ for both clone model and generator. 
\begin{figure} \centering \includegraphics[width=0.72\linewidth]{latex/figures/plt_iter_gap-pd.pdf} \caption{\textbf{Iteration Gap ablation:} Clone model accuracy plotted against the iteration gap for training the clone and generator.} \label{fig:iter_gap} \vspace{-0.5cm} \end{figure} \begin{figure} \centering \includegraphics[width=0.75\linewidth]{latex/figures/plot_class_balance.pdf} \caption{\textbf{Distribution of images across classes:} The images generated by DFMS-HL are distributed evenly across all classes. } \label{fig:class_dist} \vspace{-0.5cm} \end{figure} \textbf{Generation of Diverse Images:} The DFMS-HL generator is initialized with a DCGAN generator at the start of the training process. As the training progresses, the generator learns to generate images distributed evenly across the different classes of the victim model, as shown in Fig. \ref{fig:class_dist}. We use synthetic images as the proxy data and CIFAR-10 as the victim model training dataset, with AlexNet / AlexNet-half as the victim / clone model architectures. The initial distribution of images generated using DCGAN is skewed, with very few samples in classes 1, 4 and 6. The distribution of images without the diversity loss is also skewed. Based on the plots, we note that the class-diversity loss has a huge impact on making the class distribution uniform. \section{Conclusions} In this paper, we propose an effective model stealing attack in the practical setting of having access to only the hard labels of a black-box victim model. Extensive experiments show that our method DFMS-HL performs better than the state-of-the-art model stealing method ZSDB3KD at a $500\times$ lower query budget. We further show that our attack is effective in a completely data-free setting as well, which uses synthetically generated images to impose an image prior. We demonstrate the scalability of the proposed model stealing attack to CIFAR-100 within a low query budget, which has not been attempted in prior works. 
Our ablations reveal that the class-diversity loss plays a major role in achieving diversity in the generated images, boosting the clone model accuracy evenly across all classes. Although our work describes methods of attacking the privacy of models through model stealing, the goal is indeed to create better awareness and understanding of the vulnerabilities of Machine Learning models. This would in turn promote research towards the development of novel defenses against such attacks, leading to a more robust ecosystem with increased security and privacy. \section{Acknowledgements} This work was supported by a project grant from MeitY (No.4(16) /2019-ITEA), Govt. of India and a grant from Uchhatar Avishkar Yojana (UAY, IISC\_010), MHRD, Govt. of India. Sunandini Sanyal is supported by Prime Minister’s Research Fellowship, and Sravanti Addepalli is supported by Google PhD Fellowship. We are thankful for the support. {\small \bibliographystyle{ieee_fullname}
\section{Introduction} In the resonance model \citep{ak01,ka01,kato03}, there is a natural and attractive possibility of explaining the observed rational ratios of high-frequency QPOs as a consequence of non-linear coupling between different modes of accretion disk oscillations. The idea has been pursued in several papers \citep[recently, e.g.][]{akklr03, r04}. Specific models invoke particular physical mechanisms. Some models can be almost immediately comprehended as distinct realizations of the general approach discussed here -- for example, various formulations of the orbiting spot model \citep{sb04} or the models where QPOs are produced by the magnetically driven resonance in a diamagnetic accretion disk \citep{lai99} -- while others seem to be more distant from the view presented herein -- e.g.\ the transition layer model \citep{tit02}, an interesting idea of p-mode oscillations of a small accretion torus \citep{rezzola03} or the model of blobs in an accretion disc \citep[see e.g.][and references cited therein]{ka99,li04}. Also in this context, \citet{kato04} discussed the resonant interaction between waves propagating in a warped disk, including their rigorous mathematical description. Instead of pursuing a specific model, here we keep the discussion as general as possible, aiming to implement the formalism of multiple scales. Indeed, we show that there is unquestionable appeal in this approach, which offers some additional insight into generic properties of resonant oscillations. Some properties of accretion disk oscillations can be discussed within the epicyclic approximation of a test particle on a circular orbit near the equatorial plane. Suppose that the angular momentum of the particle is fixed to a value $ \ell $. The effective potential $ U_\ell(r, \theta) $ has a minimum at radius $ r_0 $, corresponding to the location of the stable circular orbit. 
An observer moving along this orbit measures radial, vertical and azimuthal epicyclic oscillations of a particle nearby. Since the angular momentum of the particle is conserved, only two of them -- radial and vertical -- are independent. The epicyclic frequencies can be derived from the geodesic equations expanded to the linear order in the deviations $ \delta r = r - r_0 $ and $ \delta \theta = \theta - \pi/2 $ from the circular orbit. We get two independent second-order differential equations describing two uncoupled oscillators with frequencies $ \omega_r $ and $ \omega_\theta $, which are given by the second derivatives of the effective potential $ U_\ell(r, \theta) $. In Newtonian theory, $ \omega_r $ and $ \omega_\theta $ are equal to the Keplerian orbital frequency $ \Omega_K $. This is in tune with the fact that orbits of particles are planar and closed curves. The degeneracy between the two epicyclic frequencies can be seen as a result of the scale-freedom of the Newtonian gravitational potential \citep{ak03}. In Schwarzschild geometry this freedom is broken by introducing the gravitational radius $ r_g = 2GM/c^2 $. The degeneracy between the vertical epicyclic and the orbital frequencies is related to the spherical symmetry of the gravitational potential, which assures the existence of planar trajectories of particles. All three frequencies are different in the vicinity of a rotating Kerr black hole. In addition, when nonlinear terms of the geodesic equations are included, the two oscillations in the $r$ and $\theta$ directions become coupled and a variety of new phenomena connected to the nonlinear nature of the equations appears. This rich phenomenology includes the frequency shift of the observed frequencies with respect to the eigenfrequencies, the presence of higher harmonics and subharmonics, drifts and parametric resonance. The first three are connected to nonlinear oscillations of each mode and the last one comes from the coupling between the two modes. 
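For concreteness, the Schwarzschild forms of these frequencies (standard textbook expressions, quoted here only to illustrate the broken degeneracies; the discussion above is for a general $U_\ell$) can be evaluated numerically:

```python
import numpy as np

def epicyclic_frequencies(r, M=1.0):
    """Orbital and epicyclic frequencies of circular geodesics in the
    Schwarzschild metric, in geometric units G = c = 1.
    Returns (Omega_K, omega_theta, omega_r)."""
    Omega_K = np.sqrt(M / r**3)                       # Keplerian frequency
    omega_theta = Omega_K                             # spherical symmetry
    omega_r = Omega_K * np.sqrt(1.0 - 6.0 * M / r)    # broken by r_g
    return Omega_K, omega_theta, omega_r

# the radial epicyclic frequency vanishes at the marginally stable orbit r = 6M
print(epicyclic_frequencies(6.0)[2])   # 0.0

# far from the black hole all three frequencies degenerate (Newtonian limit)
Ok, ot, orr = epicyclic_frequencies(1e6)
print(np.isclose(orr / ot, 1.0, atol=1e-5))   # True
```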
\section{Expansion via multiple scales} \label{sec:res} We study nonlinear oscillations of a system having two degrees of freedom, i.e., the coordinate perturbations $\delta r$ and $\delta \theta$. The oscillations are described by two coupled differential equations of the very general form \begin{eqnarray} \label{eq:res_gov_r} \ddot{\delta r} + \omega_r^2\; \delta r &=& \omega_r^2\; f_r(\delta r, \delta\theta, \dot{\delta r},\dot{\delta \theta}), \\ \label{eq:res_gov_theta} \ddot{\delta \theta} + \omega_\theta^2\; \delta \theta &=& \omega_\theta^2 \;f_\theta(\delta r, \delta\theta, \dot{\delta r}, \dot{\delta \theta}). \end{eqnarray} Suppose that the functions $ f_r $ and $ f_\theta $ are nonlinear, i.e., their Taylor expansions start in the second order. Another assumption is that these functions are invariant under reflection of time (i.e., the Taylor expansion does not contain odd powers of the time derivatives of $ \delta r $ and $ \delta \theta $). As we shall see later, this is related to the conservation of the total energy in the system. Many authors have studied such systems with particular forms of the functions $ f_r $ and $ f_\theta $ \citep{nm79}; in this paper, however, we keep the discussion fully general. We seek the solutions of the governing equations in the form of the multiple-scales expansions \citep{nm79} \begin{equation} \delta r(t, \epsilon) = \sum_{n=1}^4 \epsilon^n r_n(T_\mu), \quad \delta \theta(t, \epsilon) = \sum_{n=1}^4 \epsilon^n \theta_n(T_\mu), \label{eq:res_exp} \end{equation} where several time scales $T_\mu$ are introduced instead of the physical time $t$, \begin{eqnarray} T_\mu \equiv \epsilon^\mu t, & \mu = 0,1,2,3. \label{eq:ms_scales} \end{eqnarray} The time scales are treated as independent. 
It follows that instead of the single time derivative we have an expansion of partial derivatives with respect to the $ T_\mu $ \begin{eqnarray} \frac{d}{dt} &=& \D{0} + \epsilon \D{1} + \epsilon^2 \D{2} + \epsilon^3 \D{3} + {\cal O}(\epsilon^4), \label{eq:ms_der1} \\ \frac{d^2}{dt^2} &=& \D{0}^2 + 2 \epsilon \D{0} \D{1} + \epsilon^2 (\D{1}^2 + 2 \D{0}\D{2}) + \ \nonumber \\ &\phantom{=}& 2\epsilon^3 (\D{0}\D{3} + \D{1}\D{2}) + {\cal O}(\epsilon^4), \label{eq:ms_der2} \end{eqnarray} where $ \D{\mu} = \partial / \partial T_\mu $. We expand the nonlinear functions $f_r$ and $f_\theta$ into Taylor series and then substitute the expansions (\ref{eq:res_exp}), (\ref{eq:ms_der1}) and (\ref{eq:ms_der2}). Finally, we compare the coefficients of the same powers of $\epsilon$ on both sides of the resulting pair of equations. This way we get a set of \imp{linear} second-order differential equations that can be solved successively -- the lower-order terms of the expansion (\ref{eq:res_exp}) appear as forcing terms on the right-hand sides of the equations for the higher-order approximations. In the first order we obtain equations corresponding to the linear approximation \begin{equation} \label{eq:res_1} (\D{0}^2 + \omega_r^2) r_1 = 0, \quad (\D{0}^2 + \omega_\theta^2) \theta_1 = 0, \end{equation} with the solutions \begin{eqnarray} \label{eq:res_1solr} r_1 = A_r(T_1,T_2,T_3) e^{i \omega_r T_0} + \mathrm{cc}, \\ \theta_1 = A_\theta(T_1,T_2,T_3) e^{i \omega_\theta T_0} + \mathrm{cc}. \label{eq:res_1solth} \end{eqnarray} The complex amplitudes $A_r$ and $A_\theta$ generally depend on the higher time-scales. The solutions (\ref{eq:res_1solr}) and (\ref{eq:res_1solth}) substituted into the quadratic terms on the right-hand sides of the second-order differential equations produce terms that oscillate with frequencies $2\omega_r$, $2\omega_\theta$ and $\omega_\theta\pm\omega_r$. 
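The appearance of such harmonics can be illustrated numerically on a toy problem. The single oscillator below, $\ddot{x} + x = \epsilon x^2$, is a deliberately simplified stand-in for the coupled system above; its quadratic nonlinearity generates a second harmonic at $2\omega$ that is clearly visible in the power spectrum:

```python
import numpy as np

# toy weakly nonlinear oscillator  x'' + x = eps * x^2  (illustrative only)
eps, dt, n = 0.3, 0.01, 2**16

def rhs(state):
    x, v = state
    return np.array([v, -x + eps * x**2])

state = np.array([0.5, 0.0])
xs = np.empty(n)
for i in range(n):                       # classical fixed-step RK4
    xs[i] = state[0]
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    state = state + dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0

freqs = np.fft.rfftfreq(n, dt) * 2 * np.pi        # angular frequencies
power = np.abs(np.fft.rfft(xs - xs.mean()))**2
main = freqs[np.argmax(power)]                    # the fundamental, ~omega
power[np.abs(freqs - main) < 0.5] = 0.0           # mask out the fundamental
harmonic = freqs[np.argmax(power)]                # strongest remaining line
print(round(harmonic / main, 1))   # 2.0
```

The same experiment with a coupled pair of oscillators would additionally show the combination frequencies $\omega_\theta\pm\omega_r$.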
When the frequency ratio $\omega_r/\omega_\theta$ is far from 1:2 and 2:1, the particular solutions $r_2$ and $\theta_2$ describe higher harmonics of the linear-order oscillations $r_1$ and $\theta_1$. Hence, the presence of higher harmonics in the power spectra is a signature of nonlinear oscillations. Their frequencies and relative strengths with respect to the main oscillations provide useful information about the nonlinearities in the system. In addition, the right-hand sides of the second-order equations contain terms proportional to $e^{i\omega_r T_0}$ and $e^{i\omega_\theta T_0}$ that oscillate with the same frequencies as the eigenfrequencies of the oscillators. These terms produce a secular growth of the amplitudes of the second-order approximations $r_2$ and $\theta_2$ and make the expansions (\ref{eq:res_exp}) nonuniform. Eliminating them, we get the \imp{solvability conditions} for the complex amplitudes $A_r(T_1,T_2,T_3)$ and $A_\theta(T_1,T_2,T_3)$ that give us the evolution of the system on longer time-scales \citep{nm79}. When the eigenfrequencies are in a 1:2 or 2:1 ratio, we observe qualitatively different behavior related to the \imp{autoparametric resonance}. In that case the right-hand sides contain additional secular terms and the solvability conditions take a different form. Different resonances occur in different orders of approximation. The possible resonances are 1:3, 1:1 and 3:1 in the third order, and 1:4, 3:2, 2:3 and 4:1 in the fourth order.\footnote{The ratio $n:m$ refers to the eigenfrequency ratio $\omega_\theta:\omega_r$.} However, if the governing equations remain unchanged under the transformation $\delta\theta\rightarrow-\delta\theta$ (i.e., the system is reflection symmetric), the only autoparametric resonances that exist in the system are 1:2, 1:1, 1:4 and 3:2 \citep{r04}. \section{The 3:2 autoparametric resonance} \label{sec:32} Let us consider oscillations of a conservative system whose eigenfrequencies are close to the 3:2 ratio. 
The time behavior of the observed frequencies $\omega^\star_r$ and $\omega^\star_\theta$ and of the amplitudes $a_r$ and $a_\theta$ of the oscillations can be found from the solvability conditions imposed on the complex amplitudes $A_r(T_1,T_2,T_3)$ and $A_\theta(T_1,T_2,T_3)$ \citep{hor05} \begin{eqnarray} \D{1}A_r &=& \D{1}A_\theta = 0, \label{eq:sol1} \\ \D{2} A_r &=& - \frac{i \omega_r}{2} \left[ \kappa_r |A_r|^2 + \kappa_\theta |A_\theta|^2 \right] A_r, \\ \D{2} A_\theta &=& - \frac{i \omega_\theta}{2} \left[ \lambda_r |A_r|^2 + \lambda_\theta |A_\theta|^2 \right] A_\theta, \\ \D{3} A_r &=& -\frac{i}{2}\omega_r \alpha (A_r^2)^\ast A_\theta^2 e^{-i (\sigma_2 T_2 + \sigma_3 T_3)}, \\ \D{3} A_\theta &=&-\frac{i}{2}\omega_\theta \beta A_r^3 A_\theta^\ast e^{i(\sigma_2 T_2 + \sigma_3 T_3)}. \label{eq:sol5} \end{eqnarray} In the fourth order we also eliminate the terms which become secular when $3\omega_r\approx2\omega_\theta$. We describe the vicinity of the resonance by the detuning parameters $\sigma_2$ and $\sigma_3$ introduced according to \begin{equation} 3 \omega_r = 2 \omega_\theta + \epsilon^2 \sigma_2 + \epsilon^3 \sigma_3. \end{equation} The term $ \epsilon \sigma_1 $ is missing because the complex amplitudes depend only on the second and the third time-scales. The solvability conditions describe the evolution of the system in the most general way: the real parameters $\alpha$, $\beta$, $\kappa_r$, $\kappa_\theta$, $\lambda_r$ and $\lambda_\theta$ are given by the coefficients of the Taylor-expanded nonlinear functions $f_r$ and $f_\theta$. Since $A_r$ and $A_\theta$ are complex, the conditions (\ref{eq:sol1})--(\ref{eq:sol5}) represent 12 real equations; however, a few of them are trivial. 
By substituting the polar forms $\epsilon A_r=\frac{1}{2}a_r e^{i \phi_r}$ and $\epsilon A_\theta=\frac{1}{2}a_\theta e^{i \phi_\theta}$, separating the real and imaginary parts and introducing the unique time $t$, the number of equations can be reduced to four, \begin{eqnarray} \dot{a}_r &=& \frac{\alpha\omega_r}{16}\, a_r^2\,a_\theta^2\,\sin \gamma, \label{eq:qpo-ar} \\ \dot{a}_\theta &=& -\frac{\beta \omega_\theta}{16}\,a_r^3\,a_\theta\,\sin \gamma, \label{eq:qpo-atheta} \\ \dot{\phi}_r &=& -\frac{\omega_r}{2}\, \left[\kappa_r\,a_r^2 + \kappa_\theta\,a_\theta^2 \right] - \frac{\alpha \omega_r}{16}\, a_r\,a_\theta^2\,\cos \gamma, \label{eq:qpo-phir} \\ \dot{\phi}_\theta &=& -\frac{\omega_\theta}{2}\, \left[\lambda_r\,a_r^2 + \lambda_\theta\, a_\theta^2 \right] - \frac{\beta \omega_\theta}{16} a_r^3\,\cos \gamma, \label{eq:qpo-phitheta} \end{eqnarray} where we introduced the phase function $\gamma(t)=2\phi_\theta(t)-3\phi_r(t)-\sigma t$ and the unique detuning parameter $\sigma=\epsilon^2\sigma_2+\epsilon^3\sigma_3$. Equations (\ref{eq:qpo-ar}) and (\ref{eq:qpo-atheta}) describe the slow evolution of the amplitudes of the oscillations, and the additional long-term behavior of the oscillation phases is given by equations (\ref{eq:qpo-phir}) and (\ref{eq:qpo-phitheta}). These equations give the frequency shift of the observed frequencies $\omega^\star_r$ and $\omega^\star_\theta$ with respect to the eigenfrequencies $\omega_r$ and $\omega_\theta$, respectively, \begin{eqnarray} \omega^\ast_r = \omega_r + \dot{\phi}_r, \quad \omega^\ast_\theta = \omega_\theta + \dot{\phi}_\theta. 
\label{eq:qpos-corrections} \end{eqnarray} The two equations (\ref{eq:qpo-phir}) and (\ref{eq:qpo-phitheta}) can be replaced by a single differential equation for the phase function, \begin{equation} \dot{\gamma}=-\sigma+\frac{\omega_\theta}{4}\left[\mu_r a_r^2 + \mu_\theta a_\theta^2 + \frac{a_r}{2} \left( \alpha a_\theta^2 - \beta a_r^2 \right) \cos \gamma \right], \label{eq:qpo-gamma} \end{equation} where we used the fact that near the resonance $ \omega_r \approx (2/3) \omega_\theta $ and we defined $ \mu_r = \kappa_r - \lambda_r $ and $ \mu_\theta = \kappa_\theta - \lambda_\theta $. \subsection{Steady-state solutions} Steady-state solutions are characterized by constant amplitudes and frequencies of oscillations. Such solutions represent singular points of the system governed by equations (\ref{eq:qpo-ar}), (\ref{eq:qpo-atheta}) and (\ref{eq:qpo-gamma}). It is obvious from equations (\ref{eq:qpo-ar}) and (\ref{eq:qpo-atheta}) that the condition $\dot{a}_r= \dot{a}_\theta=0 $ can be satisfied (with nonzero amplitudes) only if $\sin \gamma = 0$ (identically at all times), and thus also $\dot{\gamma}=0 $. In that case equation (\ref{eq:qpo-gamma}) reduces to the algebraic equation \begin{equation} \frac{\sigma}{\omega_\theta} = \frac{1}{4} \left[\mu_r a_r^2 + \mu_\theta a_\theta^2 \pm \frac{a_r}{2} \left( \alpha a_\theta^2 - \beta a_r^2 \right)\right]. \end{equation} The left-hand side can be expressed using the eigenfrequency ratio $R=\omega_\theta/\omega_r$ as \begin{equation} \frac{\sigma}{\omega_\theta} = -\frac{2}{R} \left( R - \frac{3}{2} \right). \end{equation} Then we get \begin{equation} R = \frac{3}{2} - \frac{3}{16} \left( \mu_r a_r^2 + \mu_\theta a_\theta^2 \right) \pm \frac{3}{32} a_r \left( \alpha a_\theta^2 - \beta a_r^2 \right), \end{equation} where we neglected terms of the fourth order. 
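The steady-state condition can be checked directly: picking amplitudes, setting $\sin\gamma=0$, and choosing the detuning from the algebraic condition above yields an exact fixed point of the slow-flow equations (\ref{eq:qpo-ar}), (\ref{eq:qpo-atheta}) and (\ref{eq:qpo-gamma}). The parameter values in this sketch are arbitrary illustrative choices, since the paper leaves $\alpha$, $\beta$, $\mu_r$ and $\mu_\theta$ free:

```python
import numpy as np

# illustrative parameters; not values taken from the paper
alpha = beta = 1.0
om_r, om_th = 1.0, 1.5
mu_r, mu_th = -1.0, 1.0
a_r0, a_th0, g0 = 0.5, 0.4, 0.0        # sin(gamma) = 0, cos(gamma) = +1

# detuning chosen from the steady-state (algebraic) condition
sigma = om_th / 4 * (mu_r*a_r0**2 + mu_th*a_th0**2
                     + 0.5*a_r0*(alpha*a_th0**2 - beta*a_r0**2))

def rhs(y):
    a_r, a_th, g = y
    return np.array([
        (alpha*om_r/16) * a_r**2 * a_th**2 * np.sin(g),
        -(beta*om_th/16) * a_r**3 * a_th * np.sin(g),
        -sigma + (om_th/4) * (mu_r*a_r**2 + mu_th*a_th**2
            + 0.5*a_r*(alpha*a_th**2 - beta*a_r**2)*np.cos(g)),
    ])

y0 = np.array([a_r0, a_th0, g0])
y, dt = y0.copy(), 0.01
for _ in range(10000):                 # RK4; an exact equilibrium stays put
    k1 = rhs(y); k2 = rhs(y + dt/2*k1)
    k3 = rhs(y + dt/2*k2); k4 = rhs(y + dt*k3)
    y = y + dt*(k1 + 2*k2 + 2*k3 + k4)/6
print(np.abs(y - y0).max() < 1e-12)    # True: amplitudes and phase constant
```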
Note that the lowest correction to the eigenfrequencies is of the second order -- for given amplitudes $a_r$, $a_\theta$ the steady-state oscillations occur when the eigenfrequency ratio departs from $3/2$ by a deviation of order $a^2$. The relation between the observed frequencies of oscillations $\omega^\star_r$, $\omega^\star_\theta$ and the eigenfrequencies $\omega_r$, $\omega_\theta$ is given by the time derivatives of the phases $\phi_r$ and $\phi_\theta$. We can find a simple relation between the observed frequencies and the phase function \begin{eqnarray} 3\omega^\star_r - 2\omega^\star_\theta &=& 3\omega_r - 2\omega_\theta + (3\dot{\phi}_r - 2\dot{\phi}_\theta) \nonumber\\ &=& \sigma + (3\dot{\phi}_r - 2\dot{\phi}_\theta) = -\dot{\gamma}. \label{eq:relation} \end{eqnarray} For the steady-state solutions $\dot{\gamma}=0$, and thus the observed frequencies are adjusted to the \imp{exact} 3:2 ratio even if the eigenfrequencies depart from it. \subsection{Integrals of motion} \begin{figure} \includegraphics[width=0.5\textwidth]{enellipse.eps} \caption{ Comparison between the analytical constraint (\ref{eq:32_E}) and the corresponding numerical solution of the system of \citet{akklr03}. Each point corresponds to the amplitudes of the oscillations at a particular time. On the other hand, from the discussion of equation (\ref{eq:32_E}) we know that these points must lie on an ellipse, whose shape is determined by the multiple-scales method.} \label{fig:32_ellipse} \end{figure} The method of investigation of the time-dependent behavior of the system is analogous to the case of the 1:2 resonance as examined by \citet{nm79}. The oscillations are described by three variables $ a_r(t) $, $a_\theta(t)$ and $\gamma(t)$ and three first-order differential equations (\ref{eq:qpo-ar}), (\ref{eq:qpo-atheta}) and (\ref{eq:qpo-gamma}). However, the number of differential equations can be reduced to one because it is possible to find two integrals of motion. 
Consider equations (\ref{eq:qpo-ar}) and (\ref{eq:qpo-atheta}). Eliminating $ \sin \gamma $ from both equations, we find \begin{equation} \frac{d}{dt}(a_r^2 + \nu a_\theta^2) = 0 \end{equation} and thus \begin{equation} \label{eq:32_E} a_r^2 + \nu a_\theta^2 = \mathrm{const} \equiv E, \end{equation} where we defined \begin{equation} \label{eq:32_nu} \nu = \frac{\alpha \omega_r}{\beta \omega_\theta} \approx \frac{2 \alpha}{3 \beta}. \end{equation} When $ \nu > 0 $, both amplitudes of oscillations are bounded. The curve $ [a_r(t), a_\theta(t)] $ is a segment of an ellipse. The constant $ E $ is proportional to the energy of the system. On the other hand, when $ \nu < 0 $, one amplitude of oscillations can grow without bound while the second amplitude vanishes. This case corresponds to the presence of a regenerative element in the system \citep{nm79}. The corresponding curve in the $(a_r,a_\theta)$ plane is a hyperbola. In the further discussion we assume that $ \nu > 0 $. In order to verify that the energy of the system is conserved, we numerically integrated the governing equations (\ref{eq:res_gov_r}) and (\ref{eq:res_gov_theta}) for one particular system discussed by \citet{akklr03}. The comparison is shown in Figure \ref{fig:32_ellipse}. The numerical and analytical results are in very good agreement. The second integral of motion is found in the following way. Let us multiply equation (\ref{eq:qpo-gamma}) by $a_\theta$. Then we obtain \begin{eqnarray} a_\theta \dot{\gamma} &=& -\sigma a_\theta + \frac{\omega_\theta}{4} \mu_r a_r^2 a_\theta + \frac{\omega_\theta}{4} \mu_\theta a_\theta^3 + \frac{\omega_\theta}{8} \alpha a_r a_\theta^3 \cos \gamma - \nonumber\\ &\phantom{=}&\frac{\omega_\theta}{8} \beta a_r^3 a_\theta \cos \gamma. 
\end{eqnarray} Changing the independent variable from $t$ to $a_\theta$ and multiplying the whole equation by $d a_\theta$, we find \begin{eqnarray} \label{eq:32_derL1} a_r^3 a_\theta^2 d(\cos \gamma) + \frac{8\sigma}{\beta \omega_\theta} d(a_\theta^2) \!\!&-&\!\! \frac{4 \mu_r}{\beta} a_r^2 a_\theta d(a_\theta^2) - \frac{\mu_\theta}{\beta} d(a_\theta^4) - \nonumber \\ -\frac{2\alpha}{\beta} a_r a_\theta^3 \cos \gamma d a_\theta \!\!&+&\!\! 2 a_r^3 a_\theta \cos \gamma d a_\theta = 0. \end{eqnarray} Equation (\ref{eq:32_E}) implies \begin{equation} \label{eq:32_dE} a_\theta d a_\theta = -\frac{a_r d a_r}{\nu}. \end{equation} With the aid of this relation, equation (\ref{eq:32_derL1}) takes the form \begin{eqnarray} 3 a_r^2 a_\theta^2 \cos \gamma d a_r + 2 a_r^3 a_\theta \cos \gamma d a_\theta + a_r^3 a_\theta^2 d(\cos\gamma) + \nonumber \\ + \frac{8\sigma}{\beta \omega_\theta} d(a_\theta^2) + \frac{\mu_r}{\beta \nu} d (a_r^4) - \frac{\mu_\theta}{\beta} d(a_\theta^4) = 0. \end{eqnarray} The first three terms express the total differential of the function $a_r^3 a_\theta^2 \cos \gamma $. Hence, the above equation can be arranged into the form \begin{equation} d \left( a_r^3 a_\theta^2 \cos \gamma + \frac{8\sigma}{\beta \omega_\theta} a_\theta^2 + \frac{\mu_r}{\beta \nu} a_r^4 - \frac{\mu_\theta}{\beta} a_\theta^4 \right) = 0. \end{equation} In other words, \begin{equation} \label{eq:32_L} a_r^3 a_\theta^2 \cos \gamma + \frac{8\sigma}{\beta \omega_\theta} a_\theta^2 + \frac{\mu_r}{\beta \nu} a_r^4 - \frac{\mu_\theta}{\beta} a_\theta^4 = \mathrm{const} \equiv L \end{equation} is another integral of equations (\ref{eq:qpo-ar}), (\ref{eq:qpo-atheta}) and (\ref{eq:qpo-gamma}). \subsection{Analytical results} Knowing the two integrals of motion, we are able to find a single differential equation which governs the time evolution of the system. First, the amplitudes $ a_r $ and $ a_\theta $ are not independent, because they are related by equation (\ref{eq:32_E}). 
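Both integrals of motion can be verified numerically along the slow flow. The following sketch integrates equations (\ref{eq:qpo-ar}), (\ref{eq:qpo-atheta}) and (\ref{eq:qpo-gamma}) with arbitrary illustrative parameter values (and $3\omega_r = 2\omega_\theta$ exactly, since (\ref{eq:qpo-gamma}) was derived near the resonance) and checks that $E$ and $L$ stay constant:

```python
import numpy as np

# illustrative parameters; 3*om_r = 2*om_th exactly
alpha = beta = 1.0
om_r, om_th = 1.0, 1.5
mu_r, mu_th, sigma = -1.0, 1.0, 0.01
nu = alpha * om_r / (beta * om_th)

def rhs(y):
    a_r, a_th, g = y
    return np.array([
        (alpha*om_r/16) * a_r**2 * a_th**2 * np.sin(g),
        -(beta*om_th/16) * a_r**3 * a_th * np.sin(g),
        -sigma + (om_th/4) * (mu_r*a_r**2 + mu_th*a_th**2
            + 0.5*a_r*(alpha*a_th**2 - beta*a_r**2)*np.cos(g)),
    ])

def invariants(y):
    a_r, a_th, g = y
    E = a_r**2 + nu * a_th**2
    L = (a_r**3 * a_th**2 * np.cos(g) + 8*sigma/(beta*om_th) * a_th**2
         + mu_r/(beta*nu) * a_r**4 - mu_th/beta * a_th**4)
    return np.array([E, L])

y, dt = np.array([0.5, 0.4, 0.0]), 0.01
c0 = invariants(y)
for _ in range(20000):                 # fixed-step RK4
    k1 = rhs(y); k2 = rhs(y + dt/2*k1)
    k3 = rhs(y + dt/2*k2); k4 = rhs(y + dt*k3)
    y = y + dt*(k1 + 2*k2 + 2*k3 + k4)/6
drift = np.abs(invariants(y) - c0).max()
print(drift < 1e-6)   # True: E and L are conserved to integrator accuracy
```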
To satisfy this relation, let us define a new variable $\xi(t)$ by \begin{equation} \label{eq:32_xi} a_r^2 = \xi E, \quad a_\theta^2 = (1-\xi)\frac{E}{\nu}. \end{equation} For the moment, we ignore the time dependence by considering projections of the solutions onto the $(\gamma,\xi)$-plane. For a fixed energy $E$ of oscillations, the system follows curves of constant $L$. Hence, the trajectories in the $(\gamma,\xi)$-plane are given by the equation \begin{equation} L(\gamma,\xi)=\mathrm{const}. \end{equation} An example of the phase-plane is given in Figure~\ref{fig:phpl}. There are two types of trajectories $[\xi(t),\gamma(t)]$: the \imp{circulating} trajectories take the full range $0\leq\gamma(t)\leq2\pi$, while the \imp{librating} trajectories are confined to the smaller range $\gamma_1\leq\gamma(t)\leq\gamma_2$. The turning points on the librating trajectories correspond to $\gamma=\gamma_1$ and $\gamma=\gamma_2$. This division has interesting consequences with respect to the observed frequencies of the resonant oscillations. According to the relation (\ref{eq:relation}), the observed frequencies are in the exact 3:2 ratio when the system passes through these points. On the other hand, the circulating trajectories do not contain any turning points and the ratio of the observed frequencies is always above or below 3:2. \begin{figure} \includegraphics[width=0.48\textwidth]{phasepl_32.eps} \caption{ Example of the $(\gamma,\xi)$-plane for the system close to the 3:2 resonance. The oscillations are coupled by the nonlinear functions $f_r$ and $f_\theta$ [see equations (\ref{eq:res_gov_r}) and (\ref{eq:res_gov_theta})]. These functions give us the values of the constants $\alpha$, $\beta$, $\kappa_r$, $\kappa_\theta$, $\lambda_r$ and $\lambda_\theta$. The thick solid line is the separatrix dividing the librating and circulating trajectories. The blue dotted line connects points where $\dot{\gamma}=0$. 
The example is plotted for the values $\alpha=\beta=\kappa_r=\lambda_r = 1$, $\kappa_\theta=\lambda_\theta=2$, $\mathcal{E}=0.1$ and $\sigma = -0.165$.} \label{fig:phpl} \end{figure} The equation describing the evolution of $\xi(t)$ can be derived in the following way. Let us multiply equation (\ref{eq:qpo-ar}) by $ 2 a_r $. We obtain \begin{equation} \frac{d (a_r^2)}{dt} = \frac{\alpha}{8}\omega_r a_r^3 a_\theta^2 \sin \gamma. \end{equation} Then we express $ a_r^2 $ using $ \xi $, and square it. We find \begin{equation} \label{eq:32_derxi} \left(\frac{8 E}{\alpha \omega_r} \right)^2 \dot{\xi}^2 = \left( a_r^3 a_\theta^2 \sin \gamma \right)^2. \end{equation} The right-hand side of this equation can be expressed using equation (\ref{eq:32_L}) as \begin{eqnarray} \left( a_r^3 a_\theta^2 \sin \gamma \right)^2 &=& \left( a_r^3 a_\theta^2 \right)^2- \nonumber\\ &\phantom{=}&\left( L - \frac{8\sigma}{\beta \omega_\theta} a_\theta^2 - \frac{\mu_r}{\beta \nu} a_r^4 + \frac{\mu_\theta}{\beta} a_\theta^4 \right)^2. \end{eqnarray} \begin{figure} \includegraphics[width=0.48\textwidth]{fg.eps} \caption{ The functions $ \pm F(\xi) = \pm (1-\xi)\xi^{3/2} $ and the quadratic function $ G(\xi) $ whose second powers are the first and the second terms on the right-hand side of equation (\ref{eq:32_EOM}). The behavior of the system corresponds to $ \xi $ in the interval $ [\xi_1, \xi_2] $ (denoted by the two dotted vertical lines) where the condition $ |F(\xi)| \geq |G(\xi)| $ is satisfied. } \label{fig:32_FG} \end{figure} After the substitution into equation (\ref{eq:32_derxi}) and using the relations (\ref{eq:32_xi}), we get \begin{eqnarray} \frac{1}{E^3} \left( \frac{8}{\beta\omega_\theta} \right)^2 \dot{\xi}^2 &=&(1-\xi)^2\xi^3 - \nonumber \\ &\phantom{=}& \frac{\nu^2}{E^5} \big[ L - \frac{8 \sigma E}{\beta \nu \omega_\theta} (1-\xi) \nonumber \\ &-& \frac{\mu_r E^2}{\beta \nu} \xi^2 + \frac{\mu_\theta E^2}{\beta \nu^2} (1-\xi)^2 \big]^2. 
\label{eq:32_EOM} \end{eqnarray} The equation of motion has the form \begin{equation} \label{eq:32_EOM_form} \mathcal{K}^2 \dot{\xi}^2 = F^2(\xi) - G^2(\xi), \end{equation} where $\mathcal{K}^2$ is a positive constant, $F(\xi)=(1-\xi)\xi^{3/2}$ and $G(\xi)$ is a quadratic function whose coefficients depend on the initial conditions through $E$ and $L$. The motion occurs only for $\xi$ satisfying $|F(\xi)| \geq |G(\xi)|$. The turning points, where $ \dot{\xi} $ changes its sign, are determined by the condition \begin{equation} \label{eq:32_turn} |F(\xi)| = |G(\xi)|. \end{equation} The functions $\pm F(\xi)$ and $G(\xi)$ are plotted in Figure \ref{fig:32_FG}. Generally, the function $G$ intersects the functions $\pm F$ in two points, which corresponds to $\xi(t)$ oscillating between the two bounds $\xi_1$ and $\xi_2$ given by the condition (\ref{eq:32_turn}). The radial and vertical modes of oscillations periodically exchange energy. The amount of exchanged energy is given by $ \Delta E/E = \xi_2 - \xi_1 $. For some particular values of $L$ and $E$ only one intersection of $\pm F$ and $G$ exists (the function $G(\xi)$ touches one of the functions $\pm F(\xi)$) -- the oscillations of the system then correspond to the steady-state solutions discussed above. \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{corr.eps} \end{center} \caption{ Time evolution of the amplitudes (top panel) and the observed frequencies, $\nu_\theta^\star = \omega^\star_\theta/(2\pi)$ (middle panel) and $\nu_r^\star = \omega^\star_r/(2\pi)$ (bottom panel). All quantities are rescaled with respect to the higher eigenfrequency $ \nu_\theta $. The amplitudes of the oscillations are anticorrelated because the energy is conserved. 
Observed frequencies are correlated because the system is in parametric resonance.} \label{fig:32_sol} \end{figure} The period of the energy exchange can be found by integrating equation (\ref{eq:32_EOM}) \begin{equation} T = \frac{16}{\beta \omega_\theta} E^{-3/2} \int_{\xi_1}^{\xi_2} \frac{d\xi}{\sqrt{F^2(\xi) - G^2(\xi)}}. \end{equation} This integral can be roughly approximated as \begin{equation} \label{eq:res32_T} T \sim \frac{16 \pi}{\beta \omega_\theta} E^{-3/2}. \end{equation} However, near the steady state the period becomes much longer. The observed frequencies $\omega^\star_r$ and $\omega^\star_\theta$ are given by relations (\ref{eq:relation}). They depend on the squared amplitudes $a_r^2$ and $a_\theta^2$. Since both $a_r^2$ and $a_\theta^2$ depend linearly on $\xi(t)$, the observed frequencies are also linear functions of $\xi$ and are therefore linearly correlated. The slope of this correlation $\omega^\star_\theta = K \omega^\star_r + Q$ is independent of the energy of oscillations and is given only by parameters of the system, \begin{equation} K = \frac{\omega_\theta}{\omega_r} \frac{\lambda_r \nu - \lambda_\theta}{\kappa_r \nu - \kappa_\theta}. \end{equation} The slope of the correlation differs from 3:2; however, the ratio of the observed frequencies remains close to it. \subsection{Numerical results} The equations (\ref{eq:qpo-ar}), (\ref{eq:qpo-atheta}) and (\ref{eq:qpo-gamma}) were solved numerically using the fifth-order Runge-Kutta method with an adaptive step size. One of the solutions is shown in Figure \ref{fig:32_sol}. The top panel of the figure shows the time behavior of the amplitudes of the resonant oscillations. Since the energy of the system is constant, the amplitudes are anticorrelated and the two modes continuously exchange energy with each other. The middle and the bottom panels show the two observed frequencies that are mutually correlated. They are also correlated to one of the amplitudes. 
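The turning-point condition and the period integral above lend themselves to a quick numerical illustration. The following Python sketch is not part of the paper's computation: it uses the exact $F(\xi)=(1-\xi)\xi^{3/2}$ but an ad hoc quadratic $G(\xi)$ (the true $G$ depends on $E$, $L$ and the system parameters, which are not fixed here). It brackets the two turning points $\xi_1,\xi_2$ by bisection on $F^2-G^2$ and evaluates the dimensionless integral $\int_{\xi_1}^{\xi_2} d\xi/\sqrt{F^2-G^2}$, regularizing the inverse-square-root endpoint singularities with the substitution $\xi=\xi_1+(\xi_2-\xi_1)\sin^2\theta$.

```python
import math

def F(xi):
    return (1.0 - xi) * xi ** 1.5

def G(xi):
    # hypothetical quadratic, chosen only so that G crosses +F twice
    return 0.05 + 0.1 * xi - 0.05 * xi ** 2

def H(xi):
    # motion is allowed where H(xi) = F(xi)**2 - G(xi)**2 >= 0
    return F(xi) ** 2 - G(xi) ** 2

def bisect(lo, hi, n=200):
    # standard bisection, assuming H changes sign on [lo, hi]
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if H(lo) * H(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# scan (0, 1) for the two sign changes of H that bracket the turning points
xs = [i / 1000.0 for i in range(1, 1000)]
brackets = [(a, b) for a, b in zip(xs, xs[1:]) if H(a) * H(b) < 0]
xi1, xi2 = bisect(*brackets[0]), bisect(*brackets[1])

# period integral with xi = xi1 + (xi2 - xi1) * sin(th)^2, which makes the
# integrand regular at both turning points
n = 20000
I = 0.0
for k in range(n):
    th = (k + 0.5) * (math.pi / 2) / n
    xi = xi1 + (xi2 - xi1) * math.sin(th) ** 2
    dxi_dth = (xi2 - xi1) * 2.0 * math.sin(th) * math.cos(th)
    I += dxi_dth / math.sqrt(max(H(xi), 1e-300)) * (math.pi / 2) / n
```

Multiplying the integral by $16/(\beta\omega_\theta E^{3/2})$ gives the energy-exchange period $T$, and the amplitude of the exchange is $\Delta E/E=\xi_2-\xi_1$.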
The frequency ratio varies with time and differs from the exact 3:2 value; however, it always remains very close to it. Our numerical solution is in agreement with the general results obtained analytically in the previous section. \section{Conclusions} Although this paper was originally motivated by observations and models connected to high-frequency QPOs, our results are very general and can be applied to any system with governing equations of the form (\ref{eq:res_gov_r}) and (\ref{eq:res_gov_theta}). The main result of the calculations is our prediction of low-frequency modulation of the amplitudes and frequencies of oscillations. The characteristic timescale of the modulation is approximately given by equation (\ref{eq:res32_T}). Because of the generality of our approach, this fact has interesting consequences in the context of QPOs, whose nature is unknown. Our result can be summarized in the following way: if the two quasiperiodic oscillations observed close to the 3:2 ratio are produced by the autoparametric resonance, then the frequencies and amplitudes of the oscillations should be periodically modulated. This modulation appears as a separate peak at the modulation frequency and as side-bands to the main (linear) oscillation. In a separate paper by \citet{h04} we pointed to a possible connection of this modulation with the \quot{normal branch oscillations} (NBOs) that are often present together with QPOs. Specifically, we suggest that the correlation between the higher frequency and the lower amplitude, evident in Figure \ref{fig:32_sol}, is the same as was recently reported in Sco X-1 by \citet{ykj01}. We note that similar behavior was recently observed also in the galactic black-hole candidate XTE~J1550-564 \citep{ykf02}. \bigskip \acknowledgements It is a pleasure to thank Vladim{\'\i}r Karas, Marek Abramowicz, Wlodek Klu\'zniak, Paola Rebusco, Michal Bursa and Michal Dov\v{c}iak for helpful discussions. 
This work was supported by the GACR grant 205/03/0902 and GAUK grant 299/2004.
\section{Introduction}\label{introduction} Steinberg algebras were independently introduced in \cite{clark2014} and \cite{steinberg2010} and they are closely related to the constructions in \cite{exel2008}. Steinberg algebras are algebrai\-sations of Renault's $C^*$-algebras of groupoids and can be viewed as a model for discrete inverse semigroup algebras. Lately, Steinberg algebras have attracted a lot of attention due to the fact that they include many well-known constructions, such as the Kumjian-Pask algebras of \cite{arandopino2013} and \cite{clarkflynn2014} which, in turn, include the Leavitt path algebras of \cite{tomforde2011}. Many different properties of Steinberg algebras have been studied, such as when they are primitive, semiprimitive, artinian or noetherian (see e.g. \cite{steinberg2016} and \cite{steinberg2018}). In this article, we focus on the property of simplicity of Steinberg algebras. Necessary and sufficient conditions for this were first obtained for complex algebras in \cite{brown2014}. This result was generalized to algebras over general fields in \cite{steinberg2016} and in \cite{clark2015} to algebras over commutative unital rings: \begin{thm}[Clark and Edie-Michel \cite{clark2015}]\label{theoremsteinberg} If $G$ is a Hausdorff and ample groupoid, and $K$ is a commutative unital ring, then the Steinberg algebra $A_K(G)$ is simple if and only if $G$ is effective and minimal, and $K$ is a field. \end{thm} In \cite{clark2018} Clark, Exel, Pardo, Sims and Starling have found necessary and sufficient criteria for simplicity of the Steinberg algebra $A_K(G)$ when $K$ is a field and $G$ is a second countable, not necessarily Hausdorff, groupoid. In this article, we will, however, only consider Hausdorff groupoids. The class of partial skew group rings was introduced by Dokuchaev and Exel in \cite{dokuchaev2005} as a generalization of classical skew group rings and as an algebraic analogue of partial crossed product C*-algebras. 
This class of rings has been studied extensively, mainly because many other constructions, such as the Leavitt path algebras \cite{goncalves2014} and ultragraph Leavitt path algebras \cite{goncalves2017}, can be realized as partial skew group rings. The class of partial skew group rings has been generalized even further with the definition of skew inverse semigroup rings by Exel and Vieira in \cite{exel2010}. From a result of Beuter and Goncalves in \cite{beuter2018} it follows that every Steinberg algebra can be seen as a skew inverse semigroup ring. This means that results concerning skew inverse semigroup rings can be translated to results concerning Steinberg algebras. Such a translation was recently made by Beuter, Goncalves, \"{O}inert and Royer in \cite{beuter2017} where they deduce Theorem \ref{theoremsteinberg} from the following result. \begin{thm}[Beuter, Goncalves, \"{O}inert and Royer \cite{beuter2017}]\label{thmbeuter} If $\pi$ is a locally unital partial action of an inverse semigroup $S$ on an associative, commutative and locally unital ring $A$, then the skew inverse semigroup ring $A \rtimes_{\pi} S$ is simple if and only if $A$ is $S$-simple and $A$ is a maximal commutative subring of $A \rtimes_{\pi} S$. \end{thm} The motivation for the present article is derived from the following remark made by Beuter and Goncalves in \cite[Definition 2.9]{beuter2018}. If $R = A \rtimes_{\pi} S$ is a partial skew inverse semigroup ring and we, for all $s \in S$, put $R_s = \overline{D_s \delta_s}$, then $R$ is a {\it system} in the sense that $R = \sum_{s \in S} R_s$ and for all $s,t \in S$ the inclusion $R_s R_t \subseteq R_{st}$ holds, and $R$ is {\it coherent} in the sense that for all $s,t \in S$ with $s \leq t$ the inclusion $R_s \subseteq R_t$ holds. This observation prompts the author of the present article to ask the following: \begin{que} Is there a generalization of Theorem \ref{thmbeuter}, valid for coherent systems? 
\end{que} In the main result of this article (see Theorem \ref{maintheorem} below), we partially answer this question. Namely, we provide sufficient conditions for simplicity of coherent systems. Before we state this result, let us briefly describe the objects involved in this context. Suppose that $A$ is a subring of $R$. The {\it centralizer} of $A$ in $R$, denoted by $C_R(A)$, is the set of elements in $R$ that commute with every element of $A$. If $C_R(A) = A$, then $A$ is said to be a {\it maximal commutative} subring of $R$. The set $C_A(A)$ is called the {\it center} of $A$ and is denoted by $Z(A)$. Let $S$ be an inverse semigroup and let $R$ be a system. Let $E(S)$ denote the set of all idempotents of $S$ and put $R_0 = \sum_{e \in E(S)} R_e$. We say that an ideal $I$ of $R$ is a {\it system ideal} if $I = \sum_{s \in S} I \cap R_s$ and we say that $R$ is {\it system simple} if $R$ and $\{ 0 \}$ are the only system ideals of $R$. We say that $R$ is a left (right) {\it s-unital epsilon-strong system} if for all $s \in S$ the left $R_s R_{s^*}$-module (right $R_{s^*} R_s$-module) $R_s$ is s-unital. Note that if $S$ is a group and $R$ is $S$-graded, then $R$ is an epsilon-strong system precisely when it is epsilon-strongly graded in the sense of Nystedt, \"{O}inert and Pinedo \cite{NOP2016}. \begin{thm}\label{maintheorem} If $S$ is an inverse semigroup and $R$ is a system simple cohe\-rent left (or right) $s$-unital epsilon-strong system and $C_R( Z(R_0) ) \subseteq R_0$, then $R$ is simple. \end{thm} We use this result to obtain non-commutative and non-unital generali\-zations of Theorem \ref{theoremsteinberg} and Theorem \ref{thmbeuter} (see Theorem \ref{secondmaintheorem} and Theorem \ref{thirdmaintheorem} below). Here is a detailed outline of the article. In Section \ref{modules}, we recall some definitions and results concerning unital, locally unital and s-unital modules that we need in the sequel. 
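The notions of centralizer and maximal commutative subring introduced above recur throughout the article. As a concrete warm-up, the following Python sketch (a toy example of our own choosing, not taken from any of the cited works) verifies by brute force that the diagonal subring $A$ of $R = M_2(\mathbb{Z}/3\mathbb{Z})$ satisfies $C_R(A) = A$, i.e. that $A$ is a maximal commutative subring of $R$.

```python
# Brute-force check (hypothetical toy example): in R = M_2(Z/3), the
# diagonal subring A equals its own centralizer, so A is maximal commutative.
from itertools import product

P = 3

def mmul(x, y):
    # 2x2 matrix multiplication over Z/P, matrices as nested tuples
    return tuple(tuple(sum(x[i][k] * y[k][j] for k in range(2)) % P
                       for j in range(2)) for i in range(2))

R = [((a, b), (c, d)) for a, b, c, d in product(range(P), repeat=4)]
A = [m for m in R if m[0][1] == 0 and m[1][0] == 0]  # diagonal matrices

# C_R(A): all matrices commuting with every element of A
centralizer = [r for r in R if all(mmul(r, a) == mmul(a, r) for a in A)]
```

Commuting with $\mathrm{diag}(1,0) \in A$ already forces the off-diagonal entries to vanish, which is why the centralizer collapses to $A$.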
In Section \ref{systems}, we recall the relevant background on systems and graded rings. We obtain sufficient conditions for simplicity for systems (see Theorem \ref{ZR0MAX}). In Section \ref{epsilonstrongsystems}, we define left and right unital (s-unital) epsilon-strong systems (see Definition \ref{defunitalepsilon}) as well as giving some equivalent conditions characterizing them (see Proposition \ref{epsilonequivalent} and Corollary \ref{epsiloncor}). At the end of this section, we prove Theorem \ref{maintheorem}. In Section \ref{partialskewsection}, we recall the definition of skew inverse semigroup rings and we determine when such rings are left (right) epsilon-strong systems (see Proposition \ref{associative}). At the end of this section, using Theorem \ref{maintheorem}, we prove the following s-unital generalization of Theorem \ref{thmbeuter}. \begin{thm}\label{secondmaintheorem} Suppose that $\pi$ is an s-unital partial action of an inverse semigroup $S$ on an associative (but not necessarily commutative) $s$-unital ring $A$. If $A$ is $S$-simple and $C_{A \rtimes_{\pi} S}( Z(A) ) \subseteq A$, then $A \rtimes_{\pi} S$ is simple. If $A$ is commutative, then $A \rtimes_{\pi} S$ is simple if and only if $A$ is $S$-simple and $A$ is a maximal commutative subring of $A \rtimes_{\pi} S$. \end{thm} In Section \ref{sectionsteinbergalgebras}, we recall from \cite{beuter2017} the description of Steinberg algebras as partial skew inverse semigroup rings. We use this description and Theorem \ref{secondmaintheorem} to prove the following non-commutative and s-unital generalisation of Theorem \ref{theoremsteinberg}. \begin{thm}\label{thirdmaintheorem} Suppose that $K$ is a non-zero and associative (but not necessarily commu\-tative or unital) ring with the property that $Z(K)$ contains a set of s-units for $K$. If $G$ is a Hausdorff and ample groupoid, then the Steinberg algebra $A_K(G)$ is simple if and only if $G$ is effective and minimal, and $K$ is simple. 
\end{thm} In Section \ref{gradedrings}, we specialize the above results to groupoid (and group) graded rings. In particular, we obtain necessary and sufficient conditions for partial skew groupoid rings, over commutative rings, to be simple (see Theorem \ref{oinertgen}). \section{Unital, locally unital and s-unital modules}\label{modules} In this section, we recall some definitions and results concerning unital, locally unital and s-unital modules that we need in the sequel. Throughout this article all rings are supposed to be associative, but, unless otherwise stated, not necessarily commutative or unital. For the remainder of this section, let $A$ and $B$ be rings and let $M$ be an $A$-$B$-bimodule. If $X \subseteq A$ ($Y \subseteq B$), then we let $XM$ ($MY$) denote the set of all finite sums of elements of the form $xm$ ($my$), for $x \in X$ ($y \in Y$) and $m \in M$. We let $\mathbb{N}$ denote the set of positive integers and we let $\mathbb{Z}_{\geq 0}$ denote the set of non-negative integers. \begin{defi} Recall that $M$ is said to be left (right) {\it unital} if there exists $a \in A$ ($b \in B$) such that for all $m \in M$ the relation $a m = m$ ($mb=m$) holds. In that case, $a$ ($b$) is said to be a left (right) {\it identity} for $M$. $M$ is said to be unital as an $A$-$B$-bimodule, if it is unital both as a left $A$-module and a right $B$-module. The ring $A$ is said to be left (right) unital if it is left (right) unital as a left (right) module over itself. The ring $A$ is called unital if it is unital as an $A$-bimodule over itself. \end{defi} \begin{rem}\label{twosided} Note that if $A=B$ so that $M$ is a unital $A$-bimodule, then there is $c \in A$ which is simultaneously a left identity and a right identity for $M$. In fact, if $a \in A$ is a left identity for $M$ and $b \in A$ is a right identity for $M$, then $c = a + b - ba \in A$ is a two-sided identity for $M$. 
\end{rem} \begin{defi} If $A=B$, then the $A$-bimodule $M$ is called {\it locally unital} if for all finite subsets $X$ of $M$ there exists an idempotent $e$ in $A$ such that $X \subseteq e X e$. The ring $A$ is called locally unital if it is locally unital as a bimodule over itself. \end{defi} \begin{exa} There are lots of examples of locally unital rings. For instance, if $\{ A_i \}_{i \in I}$ is a family of non-zero unital rings and we let $A$ be the direct sum of the rings $A_i$, then $A$ is locally unital. However, $A$ is unital if and only if $I$ is finite. \end{exa} \begin{defi} Following H. Tominaga \cite{tominaga1976} we say that $M$ is left (right) $s$-{\it unital} if for all $m \in M$ there exists $a \in A$ ($b \in B$) such that $am = m$ ($mb = m$). The $A$-$B$-bimodule $M$ is said to be s-unital if it is s-unital both as a left $A$-module and a right $B$-module. The ring $A$ is said to be left (right) $s$-unital if it is left (right) $s$-unital as a left (right) module over itself. The ring $A$ is said to be $s$-unital if it is s-unital as a bimodule over itself. \end{defi} \begin{exa}\label{notlocallyunital} If we let $X$ be a locally compact Hausdorff topological space, then the ring $C_0(X)$ of compactly supported continuous functions $X \to \mathbb{R}$ is s-unital. However, $C_0(X)$ is locally unital if and only if $X$ is compact (in which case $C_0(X)$ is unital). \end{exa} \begin{exa} The following example (inspired by \cite[Exercise 1.10]{lam2003}) shows that there are lots of examples of rings which are left s-unital but not right s-unital. Let $A$ be a unital ring with a non-zero multiplicative identity 1. Let $B$ denote the set $A \times A$ equipped with componentwise addition, and multiplication defined by the relations $(a,b) (c,d) = (ac,ad)$, for $a,b,c,d \in A$. It is easy to check that $B$ is associative. It is clear that any element of the form $(1,a)$, for $a \in A$, is a left identity for $B$. However, $B$ is not right unital. 
Indeed, since $(0,1) \notin \{ (0,0) \} = (0,1) B$ it follows that $B$ is not even right s-unital. For each $n \in \mathbb{N}$ let $C_n$ denote a copy of $B$ and put $C = \oplus_{n \in \mathbb{N}} C_n$. Then $C$ is left s-unital but not left unital. Since none of the $C_n$ are right s-unital it follows that $C$ is not right s-unital. \end{exa} \begin{prop}\label{tominaga} Let $M$ be an $A$-$B$-bimodule. \begin{itemize} \item[(a)] $M$ is left (right) $s$-unital if and only if for all $n \in \mathbb{N}$ and all $m_1,\ldots,m_n$ in $M$ there is $a \in A$ ($b \in B$) such that for all $i \in \{ 1,\ldots,n \}$ the relation $a m_i = m_i$ ($m_i b = m_i$) holds. \item[(b)] If $A=B$, then the $A$-bimodule $M$ is s-unital if and only if for all $n \in \mathbb{N}$ and all $m_1,\ldots,m_n \in M$ there is $c \in A$ such that for all $i \in \{ 1,\ldots,n \}$ the relations $c m_i = m_i c = m_i$ hold. \item[(c)] The ring $A$ is s-unital if and only if for all $n \in \mathbb{N}$ and all $a_1,\ldots,a_n$ in $A$ there is $c \in A$ such that for all $i \in \{ 1,\ldots,n \}$ the relations $c a_i = a_i c = a_i$ hold. \end{itemize} \end{prop} \begin{proof} (a) This is \cite[Theorem 1]{tominaga1976}. (b) Follows if we use the same argument as in the unital case in Remark \ref{twosided}. (c) Follows from (b). \end{proof} \begin{defi}\label{unitary} Following \'{A}nh and M\'{a}rki \cite{anh1987} we say that $M$ is left (right) {\it unitary} if $AM = M$ ($MB = M$). \end{defi} \section{Systems}\label{systems} In this section, we recall the relevant background on systems and graded rings and we obtain sufficient conditions for simplicity for systems (see Theorem \ref{ZR0MAX}). Throughout this section, $S$ denotes a {\it semigroup}, that is a non-empty set equipped with an associative binary operation $S \times S \ni (s,t) \mapsto st \in S$, and $R$ denotes a {\it system}. 
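The asymmetry in the example of the ring $B = A \times A$ above can be confirmed by brute force. The sketch below takes $A = \mathbb{Z}/5\mathbb{Z}$ (an arbitrary choice for illustration) and checks that the multiplication is associative, that the left identities of $B$ are exactly the elements $(1,a)$, and that $(0,1)B = \{(0,0)\}$, so $B$ is not right s-unital.

```python
# Exhaustive check of the example B = A x A with (a,b)(c,d) = (ac, ad),
# here over the (arbitrarily chosen) coefficient ring A = Z/5Z.
P = 5

def mul(x, y):
    (a, b), (c, d) = x, y
    return ((a * c) % P, (a * d) % P)

elems = [(a, b) for a in range(P) for b in range(P)]

# the multiplication is associative
assoc = all(mul(mul(x, y), z) == mul(x, mul(y, z))
            for x in elems for y in elems for z in elems)

# the left identities are exactly the elements (1, a)
left_ids = [e for e in elems if all(mul(e, x) == x for x in elems)]

# (0, 1) annihilates everything on the right: (0,1)(c,d) = (0,0)
killed = all(mul((0, 1), x) == (0, 0) for x in elems)
```

In particular there is no $y \in B$ with $(0,1)y = (0,1)$, which is exactly the failure of right s-unitality used in the example.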
Recall, from the introduction, that the latter means that for every $s \in S$ there is an additive subgroup $R_s$ of $R$ such that $R = \sum_{s \in S} R_s$ and for all $s,t \in S$ the inclusion $R_s R_t \subseteq R_{st}$ holds. \begin{defi} The ring $R$ is called a {\it strong system} if for all $s,t \in S$ the equality $R_s R_t = R_{st}$ holds. The ring $R$ is called {\it graded} if $R = \oplus_{s \in S} R_s$. If $R$ is graded, then $R$ is called {\it strongly graded} if it is also a strong system. Let $E(S)$ denote the set of idempotents of $S$ and put $R_0 = \sum_{e \in E(S)} R_e$. We say that $R$ is {\it idempotent coherent} if for all $s \in S$ the inclusions $R_0 R_s \subseteq R_s$ and $R_s R_0 \subseteq R_s$ hold. In that case, clearly, $R_0$ is a subring of $R$. \end{defi} Now we extend a definition from the group (or groupoid) graded case \cite{cohen1983} (or \cite{lundstrom2012}) to the semigroup system situation. \begin{defi} We say that $R$ is left (right) {\it non-degenerate} if for all $s \in E(S)$ and all non-zero $r \in R_s$, there is $t \in S$ such that $t s \in E(S)$ ($s t \in E(S)$) and $R_t r$ is non-zero ($r R_t$ is non-zero). \end{defi} \begin{defi}\label{degreemap} In the sequel we will use the function $d : R \rightarrow \mathbb{Z}_{\geq 0}$ defined in the following way. If $r = 0$, then put $d(r)=0$. Now suppose that $r \neq 0$. Then there is $n \in \mathbb{N}$, $s_1,\ldots,s_n \in S$ and non-zero $r_i \in R_{s_i}$, for $i = 1,\ldots,n$, such that $r = \sum_{i=1}^n r_i$. Amongst all such representations of $r$, choose one with $n$ minimal. Put $d(r)=n$. If $I$ is an ideal and $r \in I$, then we say that $r$ is $I$-{\it minimal} if $d(r) = {\rm min} \{ d(r') \mid r' \in I \setminus \{ 0 \} \}$. \end{defi} \begin{defi} Suppose that $A/B$ is a {\it ring extension}, i.e. that $A$ and $B$ are rings with $A \supseteq B$. 
Recall that the {\it centralizer} of $B$ in $A$, denoted by $C_A(B)$, is the set of elements in $A$ that commute with every element of $B$. If $C_A(B) = B$, then $B$ is said to be a {\it maximal commutative subring} of $A$. The set $C_A(A)$ is called the {\it center} of $A$ and is denoted by $Z(A)$. The ring extension $A/B$ is said to have the {\it ideal intersection property} if every non-zero ideal of $A$ has non-zero intersection with $B$. \end{defi} \begin{prop}\label{propintersection} If $R$ is idempotent coherent and left (right) non-degenerate, then $R / C_R ( Z(R_0) )$ has the ideal intersection property. \end{prop} \begin{proof} Take a non-zero ideal $I$ of $R$ and an $I$-minimal element $r$. Take $n \in \mathbb{N}$ such that $d(r)= n$. Choose $s_1,\ldots,s_n \in S$ and non-zero $r_i \in R_{s_i}$, for $i = 1,\ldots,n$, such that $r = \sum_{i=1}^n r_i$. Case 1: $R$ is left non-degenerate. Choose $t \in S$, $i \in \{ 1 , \ldots , n \}$ and $x \in R_t$ such that $t s_i \in E(S)$ and $x r \neq 0$. Put $r' = x r$. Then $d(r') \leq d(r)$ and thus $r'$ is $I$-minimal. Take $w \in Z(R_0)$. Since $r \in I$, we get that $wr' - r'w \in I$. However $wr' - r'w = \sum_{j=1}^n ( w x r_j - x r_j w )= \sum_{j=1, \ j \neq i}^n ( w x r_j - x r_j w)$ since $x r_i \in R_t R_{s_i} \subseteq R_{t s_i} \subseteq R_0$. For each $j \in \{ 1,\ldots,n \}$ with $j \neq i$ it holds that $w x r_j - x r_j w \in R_0 R_t R_{s_j} + R_t R_{s_j} R_0 \subseteq R_{t s_j}$ since $R$ is idempotent coherent. Thus $d( w r' - r' w ) < n = d(r')$. From $I$-minimality of $r'$ we conclude that $wr' = r'w$. Hence $r' \in I \cap C_R ( Z(R_0) )$. Case 2: $R$ is right non-degenerate. Similar to the proof of Case 1 and is therefore left to the reader. \end{proof} \begin{defi} Let $I$ be an ideal of $R$. We say that $I$ is a system ideal if $I = \sum_{s \in S} I \cap R_s$. We say that $R$ is system simple if $R$ and $\{ 0 \}$ are the only system ideals of $R$. 
\end{defi} \begin{rem}\label{simpleremark} If $R$ is simple, then, clearly, $R$ is system simple. \end{rem} \begin{thm}\label{ZR0MAX} If $R$ is idempotent coherent, system simple, left (or right) non\-degene\-rate and $C_R( Z(R_0) ) \subseteq R_0$, then $R$ is simple. \end{thm} \begin{proof} Let $I$ be a non-zero ideal of $R$. From Proposition \ref{propintersection} it follows that the additive group $J = I \cap C_R( Z(R_0) )$ is non-zero. From the assumption $C_R( Z(R_0) ) \subseteq R_0$ it follows that $J \subseteq R_0$. Thus $K = R J R + J$ is a non-zero system ideal of $R$. From system simplicity of $R$ it follows that $K = R$. Thus $R = K = RJR + J \subseteq RIR + I = I$ and hence $R = I$. \end{proof} \begin{cor}\label{corcomm1} If $R$ is idempotent coherent, left (or right) non-degenerate and $R_0$ is maximally commutative in $R$, then $R$ is simple if and only if $R$ is system simple. \end{cor} \begin{proof} This follows from Remark \ref{simpleremark} and Theorem \ref{ZR0MAX}. \end{proof} \section{Epsilon-strong systems}\label{epsilonstrongsystems} At the end of this section, we prove Theorem \ref{maintheorem} (see Theorem \ref{newmaintheorem}). We introduce left and right unital (s-unital) epsilon-strong systems (see Definition \ref{defunitalepsilon}) and obtain some characterizations of these objects (see Proposition \ref{epsilonequivalent} and Corollary \ref{epsiloncor}). Throughout this section, $S$ denotes an {\it inverse semigroup}. Recall that this means that for all $s \in S$ there exists a unique $t \in S$ such that $sts = s$ and $tst = t$. We will use the standard notation and put $s^* = t$. There is a partial order $\leq$ on $S$ defined by saying that if $s,t \in S$, then $s \leq t$ if $s = t s^* s$. For the rest of the section, $R$ denotes a system. It is easy to see that for all $s \in S$, $R_s R_{s^*}$ is a ring and $R_s$ is an $R_s R_{s^*}$-$R_{s^*} R_s$-bimodule. 
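These axioms can be tested on the smallest interesting inverse semigroup, the symmetric inverse monoid on $\{0,1\}$ (a standard example; the sketch below is our own illustration and is not used elsewhere in the paper). Its elements are the partial bijections of $\{0,1\}$, $s^*$ is the inverse partial map, the idempotents are the identity maps on subsets, and the partial order $s \leq t$ holds exactly when $s$ is a restriction of $t$.

```python
# Partial bijections of {0, 1} as dicts; composition (st)(x) = s(t(x)).
from itertools import permutations

X = (0, 1)

def partial_bijections(X):
    xs = list(X)
    out = []
    for mask in range(2 ** len(xs)):
        dom = [x for i, x in enumerate(xs) if mask >> i & 1]
        for img in permutations(xs, len(dom)):
            out.append(dict(zip(dom, img)))
    return out

def mul(s, t):
    # composition of partial maps, defined wherever t(x) lies in dom(s)
    return {x: s[t[x]] for x in t if t[x] in s}

def star(s):
    # the inverse partial bijection
    return {v: k for k, v in s.items()}

S = partial_bijections(X)

# s s* s = s and s* s s* = s* for every element
regular = all(mul(mul(s, star(s)), s) == s and
              mul(mul(star(s), s), star(s)) == star(s) for s in S)

# idempotents are exactly the identity maps on subsets of X
idems = [s for s in S if mul(s, s) == s]
idems_ok = all(all(s[x] == x for x in s) for s in idems)

# the partial order s <= t iff s = t s* s agrees with "s restricts t"
def leq(s, t):
    return mul(mul(t, star(s)), s) == s

def restricts(s, t):
    return all(x in t and t[x] == s[x] for x in s)

order_ok = all(leq(s, t) == restricts(s, t) for s in S for t in S)
```

The monoid has seven elements (the empty map, four partial maps with singleton domain, and two bijections), four of which are idempotent.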
\begin{defi}\label{defunitalepsilon} Let $\mathcal{P}$ denote either ``unital'' or ``s-unital'' or ``unitary''. We say that $R$ is {\it left (right)} $\mathcal{P}$ {\it epsilon-strong} if for all $s \in S$ the left $R_s R_{s^*}$-module (right $R_{s^*} R_s$-module) $R_s$ is $\mathcal{P}$. If $R$ is both left and right $\mathcal{P}$ epsilon-strong, then we say that $R$ is $\mathcal{P}$ epsilon-strong. \end{defi} \begin{rem} Note that $R$ is left (or right) unitary epsilon-strong if and only if $R$ is symmetric in the sense of \cite[Definition 4.5]{CEP2016}, that is if for all $s \in S$, the equality $R_s R_{s^*} R_s = R_s$ holds. \end{rem} \begin{prop}\label{epsilonequivalent} If $\mathcal{P}$ denotes either ``unital'' or ``s-unital'', then (i), (ii) and (iii) below are equivalent. \begin{itemize} \item[(i)] $R$ is left (right) $\mathcal{P}$ epsilon-strong. \item[(ii)] $R$ is symmetric and for all $s \in S$, the ring $R_s R_{s^*}$ is left (right) $\mathcal{P}$. \item[(iii)] \begin{itemize} \item[$\bullet$] $\mathcal{P}$ = unital: for all $s \in S$, there exists $\epsilon_s \in R_s R_{s^*}$ ($\epsilon_s' \in R_{s^*} R_s$) such that for all $r \in R_s$, $\epsilon_s r = r$ ($r \epsilon_s' = r$). \item[$\bullet$] $\mathcal{P}$ = s-unital: for all $s \in S$ and all $r \in R_s$, there exists $\epsilon_s \in R_s R_{s^*}$ ($\epsilon_s' \in R_{s^*} R_s$) with $\epsilon_s r = r$ ($r \epsilon_s' = r$). \end{itemize} \end{itemize} \end{prop} \begin{proof} We only show the ``left'' parts of the proof. The ``right'' parts are shown in an analogous way and are therefore left to the reader. $\bullet$ $\mathcal{P}$ = unital: (i)$\Rightarrow$(ii): Take $s \in S$. Since $R_s$ is a unital left $R_s R_{s^*}$-module it follows immediately that $(R_s R_{s^*})R_s = R_s$. Thus $R$ is symmetric. Also since $R_s$ is a unital left $R_s R_{s^*}$-module it follows that the ring $R_s R_{s^*}$ is left unital. 
(ii)$\Rightarrow$(iii): Take $s \in S$ and let $\epsilon_s$ denote a left unit for the ring $R_s R_{s^*}$. Take $r \in R_s$. Since $R$ is symmetric there exists $n \in \mathbb{N}$ and $a_i,c_i \in R_s$ and $b_i \in R_{s^*}$, for $i = 1 , \ldots , n$, such that $r = \sum_{i=1}^n a_i b_i c_i$. Since $a_i b_i \in R_s R_{s^*}$ it follows that $\epsilon_s a_i b_i = a_i b_i$ and thus $\epsilon_s r = \sum_{i=1}^n \epsilon_s a_i b_i c_i = \sum_{i=1}^n a_i b_i c_i = r$. (iii)$\Rightarrow$(i): Immediate. $\bullet$ $\mathcal{P}$ = s-unital: (i)$\Rightarrow$(ii): Take $s \in S$. The symmetric part follows as in the unital case. To deduce that the ring $R_s R_{s^*}$ is left s-unital we use Proposition \ref{tominaga}. (ii)$\Rightarrow$(iii): Take $s \in S$ and $r \in R_s$. Since $R$ is symmetric there exists $n \in \mathbb{N}$ and $a_i,c_i \in R_s$ and $b_i \in R_{s^*}$, for $i = 1 , \ldots , n$, such that $r = \sum_{i=1}^n a_i b_i c_i$. Since $a_i b_i \in R_s R_{s^*}$ it follows, from Proposition \ref{tominaga}, that there is $\epsilon_s \in R_s R_{s^*}$ such that $\epsilon_s a_i b_i = a_i b_i$, for $i=1,\ldots,n$. Thus $\epsilon_s r = \sum_{i=1}^n \epsilon_s a_i b_i c_i = \sum_{i=1}^n a_i b_i c_i = r$. (iii)$\Rightarrow$(i): Immediate. \end{proof} \begin{cor}\label{epsiloncor} If $\mathcal{P}$ denotes either ''unital'' or ''s-unital'', then (i), (ii) and (iii) below are equivalent. \begin{itemize} \item[(i)] $R$ is $\mathcal{P}$ epsilon-strong. \item[(ii)] $R$ is symmetric and for all $s \in S$, the ring $R_s R_{s^*}$ is $\mathcal{P}$. \item[(iii)] \begin{itemize} \item[$\bullet$] $\mathcal{P}$ = unital: for all $s \in S$, there exists $\epsilon_s \in R_s R_{s^*}$ such that for all $r \in R_s$, $\epsilon_s r = r \epsilon_{s^*} = r$. \item[$\bullet$] $\mathcal{P}$ = s-unital: for all $s \in S$ and all $r \in R_s$, there exists $\epsilon_s \in R_s R_{s^*}$ and $\epsilon_s' \in R_{s^*} R_s$ with $\epsilon_s r = r \epsilon_s' = r$. 
\end{itemize} \end{itemize} \end{cor} \begin{proof} The case $\mathcal{P}$ = unital follows from Proposition \ref{epsilonequivalent} if we note that the ring $R_s R_{s^*}$ is unital and hence has a unique multiplicative identity $\epsilon_s$. Then the unique multiplicative identity of $R_{s^*} R_s$ is $\epsilon_{s^*}$. The case $\mathcal{P}$ = s-unital follows immediately from Proposition \ref{epsilonequivalent}. \end{proof} \begin{defi} We say that $R$ is {\it coherent} if for all $s,t \in S$ with $s \leq t$, the inclusion $R_s \subseteq R_t$ holds. In that case $R$ is idempotent coherent, since for all $e \in E(S)$ and all $s \in S$ we have that $es \leq s$ and $se \leq s$ (see \cite[Section 2]{beuter2017}), and thus $R_e R_s \subseteq R_{es} \subseteq R_s$ and $R_s R_e \subseteq R_{se} \subseteq R_s$. \end{defi} \begin{prop}\label{minimalprop} If $R$ is coherent and left (right) $s$-unital epsilon-strong, then $R$ is left (right) non-degenerate and $R / C_R( Z(R_0) )$ has the ideal intersection property. \end{prop} \begin{proof} We only show the ``left'' part of the proof. The ``right'' part can be shown in a similar way and is therefore left to the reader. Let $I$ be a non-zero ideal of $R$. Take an $I$-minimal element $r$ and put $d(r)=n$. Take $s_1,\ldots,s_n \in S$ and non-zero $r_i \in R_{s_i}$, for $i = 1 , \ldots , n$, with $r = \sum_{i=1}^n r_i$. From Proposition \ref{epsilonequivalent} it follows that there exists $\epsilon_{s_1} \in R_{s_1} R_{s_1^*}$ such that $\epsilon_{s_1} r_1 = r_1$. Then $I \ni r - \epsilon_{s_1} r = \sum_{i=2}^n (r_i - \epsilon_{s_1} r_i)$. Since $R$ is idempotent coherent and $s_1 s_1^* \in E(S)$ it follows that $r_i - \epsilon_{s_1} r_i \in R_{s_i}$, for $i = 1,\ldots,n$. Thus $d( r - \epsilon_{s_1} r ) < n$. From $I$-minimality it follows that $r = \epsilon_{s_1} r$. Since $\epsilon_{s_1} \in R_{s_1} R_{s_1^*}$ it follows in particular that there is $z \in R_{s_1^*}$ with $z r$ non-zero. 
From Proposition \ref{propintersection} it follows that $R / C_R( Z(R_0) )$ has the ideal intersection property. \end{proof} Now we can state and prove the main result of this section (which in Section \ref{introduction} was named Theorem \ref{maintheorem}). \begin{thm}\label{newmaintheorem} If $R$ is a system simple cohe\-rent left (or right) s-unital epsilon-strong system and $C_R( Z(R_0) ) \subseteq R_0$, then $R$ is simple. \end{thm} \begin{proof} This follows from Theorem \ref{ZR0MAX} and Proposition \ref{minimalprop}. \end{proof} \begin{cor}\label{corrcomm2} If $R$ is a coherent s-unital epsilon-strong system and $R_0$ is maximally commutative in $R$, then $R$ is simple if and only if $R$ is system simple. \end{cor} \begin{proof} This follows from Remark \ref{simpleremark} and Theorem \ref{newmaintheorem}. \end{proof} \section{Skew inverse semigroup rings}\label{partialskewsection} In this section, we recall the definition of skew inverse semigroup rings and we state some well known facts concerning them that we need in the sequel. We determine when such rings are left (or right) epsilon-strong systems (see Proposition \ref{associative}). At the end of this section, we prove Theorem \ref{secondmaintheorem} (see Theorem \ref{newsecondmaintheorem}). Throughout this section $A$ denotes an associative, but not necessarily unital, ring, $S$ is an inverse semigroup and $\pi$ is a {\it partial action} of $S$ on $A$. Recall that the latter means that there is a set $\{ D_s \}_{s \in S}$ of ideals of $A$ and a set $\{ \pi_s : D_{s^*} \rightarrow D_s \}_{s \in S}$ of ring isomorphisms satisfying the following assertions for all $s,t \in S$: \begin{itemize} \item[(i)] $A = \sum_{s \in S} D_s$; \item[(ii)] $\pi_s( D_{s^*} \cap D_t ) = D_s \cap D_{st}$; \item[(iii)] for all $x \in D_{t^*} \cap D_{t^* s^*}$ the equality $\pi_s ( \pi_t (x) ) = \pi_{st}(x)$ holds. 
\end{itemize} We say that $\pi$ is {\it unital (locally unital, left s-unital, right s-unital)} if for every $s \in S$ the ring $D_s$ is unital (locally unital, left s-unital, right s-unital). Recall that an ideal $J$ of $A$ is called {\it $S$-invariant} if for all $s \in S$ the inclusion $\pi_s ( J \cap D_{s^*} ) \subseteq J$ holds. The ring $A$ is called {\it $S$-simple} if $\{ 0 \}$ and $A$ are the only $S$-invariant ideals of $A$. Note that if $\pi$ is a left (right) s-unital partial action of $S$ on $A$, then for all $s \in S$ and all ideals $J$ of $A$ the equality $J \cap D_s = D_s J$ ($J \cap D_s = J D_s$) holds. Now we will recall the definition of the skew inverse semigroup ring $A \rtimes_{\pi} S$ defined by $\pi$. Let $L_{\pi}$ be the set of formal finite sums of elements of the form $a_s \delta_s$, for $s \in S$ and $a_s \in D_s$. We equip $L_{\pi}$ with component-wise addition and with a multiplication defined by the additive extension of the relations $( a_s \delta_s ) ( b_t \delta_t ) = \pi_s ( \pi_{s^*} (a_s) b_t ) \delta_{st},$ for $s,t \in S$, $a_s \in D_s$ and $b_t \in D_t$. Let $I$ be the ideal of $L_{\pi}$ generated by all elements of the form $a \delta_r - a \delta_s$, for $r,s \in S$ with $r \leq s$ and $a \in D_r$. The {\it skew inverse semigroup ring} $A \rtimes_{\pi} S$ is defined to be the quotient $L_{\pi} / I$. It is clear that $L_{\pi}$ is a graded ring and that $A \rtimes_{\pi} S$ is a system. Note that the product on $L_{\pi}$ is not in general associative. However, as we shall soon see, it is associative in the s-unital case. \begin{prop} The ring $L_{\pi}$, and hence also the ring $A \rtimes_{\pi} S$, is a coherent system. \end{prop} \begin{proof} Take $s,t \in S$ with $s \leq t$. From \cite[Proposition 2.2]{beuter2017} it follows that $D_s \subseteq D_t$. Thus $D_s \delta_s \subseteq D_t \delta_t$ and hence $R_s = \overline{D_s \delta_s} \subseteq \overline{D_t \delta_t} = R_t$. 
\end{proof} \begin{prop}\label{DsDs} Put $R = L_{\pi}$ and take $s \in S$. The equality $R_s R_{s^*} = D_s D_s \delta_{s s^*}$ holds. In particular, the ring $R_s R_{s^*}$ is left (right) s-unital if and only if the ring $D_s D_s$ is left (right) s-unital. \end{prop} \begin{proof} We have that $R_s R_{s^*} = D_s \delta_s D_{s^*} \delta_{s^*} = \pi_s( \pi_{s^*}(D_s) D_{s^*} ) \delta_{s s^*} = \pi_s( D_{s^*} D_{s^*} ) \delta_{s s^*}$ $=D_s D_s \delta_{s s^*}.$ From \cite[Proposition 2.2]{beuter2017} it follows that $D_s D_s \subseteq D_s \subseteq D_{s s^*}$ and $\pi_{s s^*} = {\rm id}_{D_{s s^*}}$, thus the last claim follows. \end{proof} \begin{prop}\label{DsDsDs} Put $R = L_{\pi}$. For all $s \in S$ the equalities $( R_s R_{s^*} ) R_s = R_s ( R_{s^*} R_s )$ $= ( D_s D_s D_s ) \delta_s$ hold. In particular, $R$ is symmetric if and only if for all $s \in S$ the ring $D_s$ is idempotent. \end{prop} \begin{proof} Take $s \in S$. Then $$( R_s R_{s^*} ) R_s = ( D_s \delta_s D_{s^*} \delta_{s^*} ) D_s \delta_s = ( \pi_s( \pi_{s^*}(D_s) D_{s^*} ) ) \delta_{ss^*} D_s \delta_s =$$ $$ ( \pi_s ( D_{s^*} D_{s^*} ) ) \delta_{ss^*} D_s \delta_s = D_s D_s \delta_{ss^*} D_s \delta_s = \pi_{ss^*}( \pi_{ss^*}( D_s D_s ) D_s ) \delta_s =$$ $$\pi_{ss^*} ( D_s D_s D_s ) \delta_s = ( D_s D_s D_s ) \delta_s.$$ Note that $\pi_{ss^*} = {\rm id}_{D_{ss^*}}$. And $$R _s (R_{s^*} R_s) = D_s \delta_s ( D_{s^*} \delta_{s^*} D_s \delta_s ) = D_s \delta_s ( \pi_{s^*}( \pi_s(D_{s^*}) D_s ) \delta_{s^* s} ) =$$ $$D_s \delta_s \pi_{s^*} ( D_s D_s ) \delta_{s^* s} = D_s \delta_s D_{s^*} D_{s^*} \delta_{s^* s} = \pi_s ( \pi_{s^*}(D_s) D_{s^*} D_{s^*} ) \delta_s =$$ $$ \pi_s ( D_{s^*} D_{s^*} D_{s^*} ) \delta_s = D_s D_s D_s \delta_s.$$ Now we show the last part. Suppose first that $R$ is symmetric. Then, from the above, it follows that $D_s = D_s D_s D_s \subseteq D_s D_s \subseteq D_s$. Hence $D_s$ is idempotent. Now suppose that $D_s$ is idempotent. Then $D_s D_s D_s = D_s D_s = D_s$. 
Thus, from the above, it follows that $R$ is symmetric. \end{proof} \begin{prop}\label{associative} The ring $L_{\pi}$ is a left (right) s-unital epsilon-strong system if and only if $\pi$ is left (right) s-unital. In that case, $L_{\pi}$, and hence also $A \rtimes_{\pi} S$, is associative. \end{prop} \begin{proof} The ``if'' statement follows from Proposition \ref{epsilonequivalent}, Proposition \ref{DsDs} and Proposition \ref{DsDsDs}. Now we show the ``only if'' statement. Take $s \in S$. From Proposition \ref{DsDsDs} it follows that $D_s D_s = D_s$. From Proposition \ref{DsDs} we get that the ring $D_s D_s$ is left (right) s-unital. This, in combination with the equality $D_s D_s = D_s$, implies that $D_s$ is left (right) s-unital as a $D_s D_s$-module. Therefore, in particular, $D_s$ is left (right) s-unital as a ring. For the last statement, suppose that $D_s$ is left (right) s-unital. We wish to show that $R$ is associative. From \cite[Theorem 3.4]{exel2010} this follows if we can show the equality $a \pi_s ( \pi_{s^*} (b) c ) = \pi_s ( \pi_{s^*}(ab) c )$ for all $r,s,t \in S$, all $a \in D_{r^*}$, all $b \in D_s$ and all $c \in D_t$. First we show the ``left'' part. Since $D_s$ is left s-unital, there exists $d \in D_s$ such that $d \pi_s ( \pi_{s^*} (b) c ) = \pi_s ( \pi_{s^*} (b) c ) $ and $d b = b$. Then $$a \pi_s ( \pi_{s^*} (b) c ) = a d \pi_s ( \pi_{s^*} (b) c ) = \pi_s ( \pi_{s^*} (ad) ) \pi_s ( \pi_{s^*} (b) c ) =$$ $$\pi_s ( \pi_{s^*} (ad) \pi_{s^*} (b) c ) = \pi_s ( \pi_{s^*} (adb) c ) = \pi_s ( \pi_{s^*}(ab) c ).$$ Now we show the ``right'' part. Since $D_{s^*}$ is right s-unital, there exists $e \in D_{s^*}$ such that $\pi_{s^*}(b)e = \pi_{s^*}(b)$ and $\pi_{s^*}(ab)e = \pi_{s^*}(ab)$.
Then $a \pi_s ( \pi_{s^*} (b) c ) = a \pi_s ( \pi_{s^*} (b) e c ) = a \pi_{ss^*}(b) \pi_s( ec ) = ab \pi_s (ec) = \pi_{s s^*} (ab) \pi_s(ec) = \pi_s( \pi_{s^*}(ab) ec ) = \pi_s( \pi_{s^*}(ab) c ).$ \end{proof} \begin{rem}\label{subring} In \cite[Proposition 3.1]{beuter2018} it is shown that if $\pi$ and $A$ are locally unital, then the map $i : A \rightarrow (A \rtimes_{\pi} S )_0$ defined by sending $a = \sum_{i=1}^n a_{e_i}$, where $a_{e_i} \in D_{e_i}$, to $\sum_{i=1}^n \overline{a_{e_i} \delta_{e_i}}$, is a well-defined isomorphism of rings with inverse given by the restriction to $(A \rtimes_{\pi} S )_0$ of the map $t : A \rtimes_{\pi} S \rightarrow A$ defined by $t( \sum_{i=1}^n \overline{a_i \delta_{s_i} } ) = \sum_{i=1}^n a_i$, for $s_i \in S$ and $a_i \in D_{s_i}$. It is clear from the proof given in loc. cit. that the same conclusions hold when $\pi$ and $A$ are s-unital. \end{rem} \begin{prop}\label{propsystemsimple} If $\pi$ and $A$ are s-unital, then $A \rtimes_{\pi} S$ is system simple if and only if $A$ is $S$-simple. \end{prop} \begin{proof} Put $R = A \rtimes_{\pi} S$. First we show the ``only if'' statement. Suppose that $A \rtimes_{\pi} S$ is system simple. Let $J$ be a non-zero $S$-invariant ideal of $A$. For all $s \in S$ put $I_s = \overline{(J \cap D_s) \delta_s}$ and let $I = \sum_{s \in S} I_s$. Take $s \in S$. Since $I_s \subseteq R_s$ it follows that $I_s \subseteq I \cap R_s$. Thus $I \subseteq \sum_{s \in S} I \cap R_s$. The inclusion $I \supseteq \sum_{s \in S} I \cap R_s$ is trivial. Thus $I = \sum_{s \in S} I \cap R_s$. Now we show that $I$ is an ideal of $R$. To this end, take $s,t \in S$.
Then $$\overline{ D_t \delta_t } \cdot \overline{ (J \cap D_s) \delta_s } = \overline{ \pi_t( \pi_{t^*}(D_t) (J \cap D_s) ) \delta_{ts} } = \overline{ \pi_t( D_{t^*} (J \cap D_s) ) \delta_{ts} }.$$ Since $J$ is $S$-invariant, we get that $\pi_t( D_{t^*} (J \cap D_s) ) \subseteq \pi_t( D_{t^*} \cap J ) \subseteq J$ and from the definition of partial action, we get that $\pi_t( D_{t^*} (J \cap D_s) ) \subseteq \pi_t( D_{t^*} \cap D_s ) = D_t \cap D_{ts} \subseteq D_{ts}.$ Thus $I$ is a left ideal of $R$. Also $$\overline{ (J \cap D_s) \delta_s } \cdot \overline{ D_t \delta_t } = \overline{ \pi_s( \pi_{s^*}(J \cap D_s) D_t ) \delta_{st} } = \overline{ \pi_s( J \cap D_{s^*} \cap D_t ) \delta_{st} }.$$ Since $J$ is $S$-invariant, we get that $\pi_s( J \cap D_{s^*} \cap D_t ) \subseteq \pi_s( J \cap D_{s^*} ) \subseteq J$ and from the definition of partial action, we get that $\pi_s( J \cap D_{s^*} \cap D_t ) \subseteq \pi_s( D_{s^*} \cap D_t ) = D_s \cap D_{st} \subseteq D_{st}.$ Thus $I$ is a right ideal of $R$. Since $R$ is system simple, this implies that $I = R$. Then $t(I) = t(R) = \sum_{s \in S} D_s = A$. On the other hand, from the construction of $I$, it follows that $t(I) = \sum_{s \in S} (J \cap D_s) \subseteq J$. Thus $J = A$. Now we show the ``if'' statement. Suppose that $A$ is $S$-simple. Let $I$ be a non-zero system ideal of $R$. We wish to show that $I = R$. Then $t(I)$ is a non-zero additive subgroup of $A$. First we show that $t(I)$ is an ideal of $A$. Take $s,t \in S$, $a \in D_s$ and $b \in D_t$ such that $\overline{ b \delta_t } \in I$. Since $D_t$ is s-unital, there is $c \in D_t$ such that $cb = bc = b$.
Then $ab = t( \overline{ ab \delta_t } ) = t( \overline{ acb \delta_t } ) = t( \overline{ \pi_{tt^*}(\pi_{tt^*}(ac)b) \delta_{tt^*t} } ) = t( \overline{ac \delta_{tt^*}} \cdot \overline{b \delta_t}) \in t(I)$ and $ba = t( \overline{ bca \delta_t} ) = t( \overline{ \pi_t( \pi_{t^*}(b) \pi_{t^*}(ca) ) \delta_t } ) = t( \overline{b \delta_t} \cdot \overline{ \pi_{t^*}(ca) \delta_{t^*t} } ) \in t(I).$ Now we show that $t(I)$ is $S$-invariant. Take $s \in S$ and $a \in t(I) \cap D_{s^*}$. There exists $n \in \mathbb{N}$, $t_1,\ldots,t_n \in S$, $b_i \in D_{t_i}$, for $i = 1,\ldots,n$, such that $\sum_{i=1}^n \overline{ b_i \delta_{t_i} } \in I$ and $\sum_{i=1}^n b_i = t( \overline{ \sum_{i=1}^n b_i \delta_{t_i} } ) = a$. Since $D_{s^*}$ is s-unital there is $d \in D_s$ such that $\pi_{s^*}(d) a = a$. Then $\pi_s ( a ) = \pi_s( \pi_{s^*}(d) a ) = \sum_{i=1}^n \pi_s ( \pi_{s^*}(d) b_i ) = \sum_{i=1}^n t( \overline{ \pi_s ( \pi_{s^*}(d)b_i ) \delta_{s t_i} } ) = t( \overline{d \delta_s} \cdot \sum_{i=1}^n \overline{ b_i \delta_{t_i} } ) \in t(I).$ Thus $t(I)$ is a non-zero $S$-invariant ideal of $A$. From $S$-simplicity of $A$, we hence get that $t(I) = A$. Take $s \in S$ and $a \in D_s$. We wish to show that $\overline{a \delta_s} \in I$. Since $D_s$ is s-unital there is $b \in D_s$ such that $ba = a$. Since $t(I) = A$, there is $n \in \mathbb{N}$, $s_1,\ldots,s_n \in S$, $b_i \in D_{s_i}$, for $i = 1,\ldots,n$, such that $\sum_{i=1}^n \overline{ b_i \delta_{s_i} } \in I$ and $\sum_{i=1}^n b_i = t( \sum_{i=1}^n \overline{ b_i \delta_{s_i} } ) = b$. Since $I$ is a system ideal we may assume that $\overline{ b_i \delta_{s_i} } \in I$ for $i = 1, \ldots , n$. Take $i \in \{ 1, \ldots , n\}$. Since $D_{s_i^*}$ is s-unital there is $c \in D_{s_i^*}$ such that $\pi_{s_i^*}(b_i)c = \pi_{s_i^*}(b_i)$.
Then $I \ni \overline{b_i \delta_{s_i} } \cdot \overline{c \delta_{s_i^*}} = \overline{ \pi_{s_i} ( \pi_{s_i^*}(b_i) c ) \delta_{s_i s_i^*} } = \overline{ \pi_{s_i} ( \pi_{s_i^*}(b_i) ) \delta_{s_i s_i^*} } = \overline{ b_i \delta_{s_i s_i^*} }.$ Therefore, we may assume that all $s_i$ are idempotent. From \cite[Proposition 2.2]{beuter2018} it follows that $\pi_{s_i} = {\rm id}_{D_{s_i}}$. Thus, from the definition of $\pi$ we get that $D_{s_i} D_s \subseteq D_{s_i} \cap D_s = \pi_{s_i} ( D_{s_i^*} \cap D_s ) = D_{s_i} \cap D_{s_i s} \subseteq D_{s_i s}$. Hence, finally, we get that $\overline{ a \delta_s } = \overline{ba \delta_s} = \overline{ \sum_{i=1}^n b_i a \delta_s } = [s_i s \leq s] = \overline{ \sum_{i=1}^n b_i a \delta_{s_i s}} = \overline{ ( \sum_{i=1}^n b_i \delta_{s_i} ) a \delta_s } = \overline{ \sum_{i=1}^n b_i \delta_{s_i} } \cdot \overline{ a \delta_s } \in I$. \end{proof} \begin{prop}\label{propequivalence} If $\pi$ is s-unital, and $A$ is commutative and s-unital, then $A$ is a maximal commutative subring of $A \rtimes_{\pi} S$ if and only if $( A \rtimes_{\pi} S ) / A$ has the ideal intersection property. \end{prop} \begin{proof} The ``only if'' statement follows from Proposition \ref{minimalprop} and Proposition \ref{associative}. The ``if'' statement follows from the first part of the proof of \cite[Theorem 3.4]{beuter2017} which holds in the s-unital case also. \end{proof} We are now ready to state and prove the main result of this section (which in Section \ref{introduction} was named Theorem \ref{secondmaintheorem}). \begin{thm}\label{newsecondmaintheorem} Suppose that $\pi$ is an s-unital partial action of an inverse semigroup $S$ on an associative (but not necessarily commutative) s-unital ring $A$. If $A$ is $S$-simple and $C_{A \rtimes_{\pi} S}( Z(A) ) \subseteq A$, then $A \rtimes_{\pi} S$ is simple.
If $A$ is commutative, then $A \rtimes_{\pi} S$ is simple if and only if $A$ is $S$-simple and $A$ is a maximal commutative subring of $A \rtimes_{\pi} S$. \end{thm} \begin{proof} The first part follows from Theorem \ref{newmaintheorem} and Proposition \ref{propsystemsimple}. The second part follows from the first part, Proposition \ref{propsystemsimple} and Proposition \ref{propequivalence}. \end{proof} \begin{rem} Since the class of s-unital rings properly contains the class of locally unital rings (even in the commutative case, see Example \ref{notlocallyunital}) it follows that Theorem \ref{newsecondmaintheorem} is a proper generalization of Theorem \ref{thmbeuter}. \end{rem} For use in subsequent sections, we now introduce a generalization of the concept of a faithful group action (see e.g. \cite[Chapter 1.4]{karpilovsky1987}). \begin{defi} We say that $\pi$ is {\it faithful} if for all $s \in S \setminus E(S)$, $\pi_s \neq {\rm id}_{D_{s^*}}$ holds. \end{defi} \begin{prop}\label{faithfulprop} Suppose that $\pi$ and $A$ are s-unital and that for every $s \in S \setminus E(S)$ the ring $D_s$ is non-zero. If $C_{A \rtimes_{\pi} S}( Z(A) ) \subseteq A$, then $\pi$ is faithful. \end{prop} \begin{proof} Suppose that $\pi$ is not faithful. Take $s \in S \setminus E(S)$ such that $\pi_s = {\rm id}_{D_{s^*}}$. Take a non-zero $a \in D_s$ and put $x = \overline{ a \delta_s } - \overline{ a \delta_{ss^*} } \in ( A \rtimes_{\pi} S ) \setminus A$. We wish to show that $x \in C_{A \rtimes_{\pi} S}( Z(A) )$. To this end, take $b = \sum_{i=1}^n b_i \in Z(A)$, for some $b_i \in D_{e_i}$, $e_i \in E(S)$, for $i=1,\ldots,n$. From Remark \ref{subring} we know that $\sum_{i=1}^n \overline{ b_i \delta_{e_i} }$ commutes with $\overline{ a \delta_{ss^*} }$. Therefore, we only need to show that $\sum_{i=1}^n \overline{ b_i \delta_{e_i} }$ commutes with $\overline{ a \delta_s }$. 
Now $$\sum_{i=1}^n \overline{ b_i \delta_{e_i} } \cdot \overline{ a \delta_s } = \sum_{i=1}^n \overline{ b_i a \delta_{e_i s} } = [e_i s \leq s] = \sum_{i=1}^n \overline{ b_i a \delta_s } = \overline{b a \delta_s} = [b \in Z(A)] = \overline{ab \delta_s}$$ and $$ \sum_{i=1}^n \overline{ a \delta_s } \cdot \overline{ b_i \delta_{e_i} } = \sum_{i=1}^n \overline{ \pi_s( \pi_{s^*}(a) b_i ) \delta_{s e_i} } = [s e_i \leq s, \ \pi_s = {\rm id}_{D_{s^*}} ] = \sum_{i=1}^n \overline{ a b_i \delta_s} = \overline{ab \delta_s}.$$ \end{proof} \begin{prop}\label{faithfulsimpleprop} Suppose that $\pi$ and $A$ are s-unital and that for every $s \in S \setminus E(S)$ the ring $D_s$ is non-zero. If $A \rtimes_{\pi} S$ is simple, then $\pi$ is faithful. \end{prop} \begin{proof} Suppose that $\pi$ is not faithful. Then there is $s \in S \setminus E(S)$ such that $\pi_s = {\rm id}_{D_{s^*}}$. Take a non-zero $a \in D_s$ and consider the non-zero element $x = \overline{ a \delta_s } - \overline{ a \delta_{ss^*} }$ in $A \rtimes_{\pi} S$. Let $I$ denote the non-zero ideal in $A \rtimes_{\pi} S$ generated by $x$. We claim that $t(I) = 0$. If we assume that the claim holds, then it follows that $I$ is a proper ideal of $A \rtimes_{\pi} S$, since e.g. $t( \overline{ a \delta_s } ) = a \neq 0$, and thus $A \rtimes_{\pi} S$ is not simple. Now we show the claim. Take $r,t \in S$, $b \in D_r$ and $c \in D_t$. Then $$\overline{b \delta_r} \cdot x = \overline{ \pi_r ( \pi_{r^*}(b) a ) \delta_{rs} } - \overline{ \pi_r ( \pi_{r^*}(b) a ) \delta_{rs s^*} }$$ and $$x \cdot \overline{c \delta_t} = \overline{ a c \delta_{st} } - \overline{ ac \delta_{s s^* t} } $$ and $$\overline{b \delta_r} \cdot x \cdot \overline{c \delta_t} = \overline{ \pi_r ( \pi_{r^*}(b) ac ) \delta_{rst} } - \overline{ \pi_r ( \pi_{r^*}(b) ac ) \delta_{rs s^* t} } $$ from which the claim follows. 
\end{proof} \section{Steinberg algebras}\label{sectionsteinbergalgebras} In this section, we recall from \cite{beuter2017} the description of Steinberg algebras as skew inverse semigroup rings. We use this description and the previous results to prove Theorem \ref{thirdmaintheorem} (see Theorem \ref{newthirdmaintheorem}). At the end of this section, we specialize this result to the case when the topology on the groupoid is discrete (see Theorem \ref{fourthmaintheorem}). Note that we closely follow the presentation from \cite{beuter2017}, in particular in the proofs of Propositions \ref{gen2} and \ref{gen1}. Let $G$ be a {\it groupoid}. By this we mean that $G$ is a small category in which every morphism is an isomorphism. The objects of $G$ will be denoted by $G_0$ and the set of morphisms of $G$ will be denoted by $G_1$. The {\it domain} and {\it codomain} of $g \in G_1$ will be denoted by $d(g)$ and $c(g)$, respectively. Objects will be identified with the corresponding units so that in particular, for all $g \in G_1$, the identities $d(g) = g^{-1} g$ and $c(g) = g g^{-1}$ hold. Let $G_2$ denote the set of {\it composable pairs} of $G_1$, that is, all $(g,h) \in G_1 \times G_1$ such that $d(g) = c(h)$. We say that $G$ is a {\it topological groupoid} if $G_1$ is a topological space making inversion and composition continuous as maps $G_1 \rightarrow G_1$ and $G_2 \rightarrow G_1$, respectively, where the set $G_2$ is equipped with the relative topology induced from the product topology on $G_1 \times G_1$. A {\it bisection} of $G$ is a subset $U$ of $G_1$ such that both $c|_U$ and $d|_U$ are homeomorphisms. We call $G$ {\it \'{e}tale} if $G_0$ is locally compact and Hausdorff, and $d$ is a local homeomorphism (in that case, $c$ is also a local homeomorphism). An \'{e}tale groupoid $G$ is said to be {\it ample} if $G_1$ has a basis of compact open bisections.
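A simple class of examples of the above notions, which we record here only as an illustration, is provided by the discrete case. \begin{exa} If $G_1$ is equipped with the discrete topology, then $G$ is \'{e}tale and ample. Indeed, $G_0$ is then locally compact and Hausdorff, $d$ is trivially a local homeomorphism, and the singletons $\{ g \}$, for $g \in G_1$, are compact open bisections which form a basis for the topology on $G_1$. \end{exa}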
One can show that a Hausdorff \'{e}tale groupoid is ample if and only if $G_0$ is totally disconnected. A subset $U$ of $G_0$ is called {\it invariant} if $d(c^{-1}(U)) = U$. The groupoid $G$ is called {\it minimal} if $G_0$ and $\emptyset$ are the only open invariant subsets of $G_0$. We let ${\rm Iso}(G)$ denote the isotropy subgroupoid of $G$, that is the set of all $g \in G_1$ with $d(g) = c(g)$. If $G$ is Hausdorff and ample, then $G$ is said to be {\it effective} if the interior of ${\rm Iso}(G)$ is $G_0$, or equivalently, for every compact open bisection $U \subseteq G_1 \setminus G_0$, there exists $a \in U$ such that $d(a) \neq c(a)$. We let $G^a$ denote the set of compact open bisections of $G_1$. The set $G^a$ is an inverse semigroup under the operations defined by $UV = \{ gh \mid g \in U, \ h \in V, (g,h) \in G_2 \}$ and $U^* = U^{-1} = \{ a^{-1} \mid a \in U \}$, for $U,V \in G^a$. The inverse semigroup partial order in $G^a$ is the inclusion of sets. The set of idempotents $E(G^a)$ is given by the set of $U \in G^a$ such that $U \subseteq G_0$. From now on we assume the following: \begin{itemize} \item $K$ is a non-zero associative (but not necessarily commutative or unital) ring and $G$ is a Hausdorff and ample groupoid. \end{itemize} The {\it Steinberg algebra} $A_K(G)$ is defined to be the set of compactly supported locally constant functions from $G_1$ to $K$ with pointwise addition and convolution product, that is, if $f,g \in A_K(G)$ and $a \in G_1$, then $(f * g)(a) = \sum_{bc=a} f(b)g(c)$. In \cite{steinberg2010} it is shown that $A_K(G)$ is associative in the case when $K$ is a commutative unital ring. It is clear that the associativity of $A_K(G)$ also holds for general associative rings $K$. If $k \in K$ and $U \in G^a$, then we let $k_U : G_1 \rightarrow K$ denote the function defined by $k_U(a) = k$, if $a \in U$, and $k_U(a)=0$, otherwise. The algebra $A_K(G)$ can be realised as the additive span of functions of the form $k_U$, for $U \in G^a$.
Convolution of such functions is nicely behaved in the sense that $k_U * l_V = (kl)_{UV}$, for $k,l \in K$ and $U,V \in G^a$. The product of two subsets $U$ and $V$ of $G_1$ is defined as $UV = \{ gh \mid g \in U, \ h \in V, (g,h) \in G_2 \}$. Now we will describe the translation of Steinberg algebras to skew inverse semigroup rings from \cite{beuter2017}. From now on, let $G$ be a Hausdorff and ample groupoid. Given $U \in G^a$, define a map $\theta_U : d(U) \rightarrow c(U)$ by $\theta_U(u) = c_U ( d_U^{-1}(u) )$, for $u \in d(U)$. Here $c_U$ and $d_U$ denote the corresponding restrictions of the maps $c$ and $d$. The correspondence $U \mapsto \theta_U$ gives a partial action of $G^a$ on $G_0$. Let $L_c( G_0 )$ denote the ring of all locally constant, compactly supported, $K$-valued functions on $G_0$. Given $U \in G^a$, let $D_U$ denote the ideal $\{ f \in L_c(G_0) \mid \mbox{if $x \in G_0 \setminus c(U)$, then $f(x)=0$} \} = L_c(c(U))$ of $L_c(G_0)$ and define a ring isomorphism $\pi_U : D_{U^*} \rightarrow D_U$ in the following way. If $f \in D_{U^*}$ and $x \in G_0$, then let $\pi_U(f)(x) = f ( \theta_{U^*}(x) )$, if $x \in c(U)$, and $\pi_U(f)(x) = 0$, otherwise. Define the map $\alpha : L_c(G_0) \rtimes_{\pi} G^a \rightarrow A_K(G)$ by $\alpha( \overline{f \delta_B} )(x) = f(c(x))$, if $x \in B$, and $\alpha( \overline{f \delta_B} )(x) = 0$, otherwise. Define the map $\beta : A_K(G) \rightarrow L_c(G_0) \rtimes_{\pi} G^a$ in the following way. Let $f = \sum_{j=1}^n (k_j)_{B_j} \in A_K(G)$, where the $B_j$ are pairwise disjoint compact open bisections of $G$. Then let $\beta(f) = \sum_{j=1}^n \overline{ (k_j)_{c(B_j)} \delta_{B_j} }$. From \cite[Theorem 5.2]{beuter2018} it follows that $\alpha \circ \beta = {\rm id}_{A_K(G)}$ and $\beta \circ \alpha = {\rm id}_{ L_c(G_0) \rtimes_{\pi} G^a }$.
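The following elementary computation, included as an illustration, shows how the convolution product interacts with the idempotents of $G^a$. \begin{exa} Take $U, V \in E(G^a)$, that is, compact open subsets of $G_0$. Then $UV = U \cap V$ and, for $k,l \in K$ and $a \in G_1$, the sum $(k_U * l_V)(a) = \sum_{bc=a} k_U(b) l_V(c)$ has at most one non-zero term, namely the one given by $b = c = a$ in the case when $a \in U \cap V$. Hence $k_U * l_V = (kl)_{U \cap V}$, in accordance with the formula $k_U * l_V = (kl)_{UV}$. \end{exa}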
\begin{defi} Let $T$ denote $(L_c(G_0) \rtimes_{\pi} G^a)_0$, that is, the set of all finite sums of elements in $L_c(G_0) \rtimes_{\pi} G^a$ of the form $\overline{g \delta_U}$, for $U \in E(G^a)$ and $g \in L_c(U)$. \end{defi} Now we will describe some topological properties of $G$ in terms of algebraic properties of $L_c(G_0) \rtimes_{\pi} G^a$. We first consider effectiveness. \begin{prop}\label{effectiveprop} The groupoid $G$ is effective if and only if $\pi$ is faithful. \end{prop} \begin{proof} Suppose that $G$ is not effective. Then there exists $U \in G^a \setminus E(G^a)$ such that for all $g \in U$, the relation $d(g) = c(g)$ holds. Then $\theta_U = {\rm id}_{d(U)} = {\rm id}_{c(U)}$ and hence $\pi_U = {\rm id}_{D_{U^*}}$. Thus $\pi$ is not faithful. Now suppose that $\pi$ is not faithful. Then there exists $V \in G^a \setminus E(G^a)$ such that $\pi_V = {\rm id}_{D_{V^*}}$. Take $g \in V^*$. Take a non-zero $k \in K$ and define $f \in D_{V^*}$ by saying that $f( d(g) ) = k$ and $f( a ) = 0$, for $a \in G_0 \setminus \{ d(g) \}$. Then $k = f( d(g) ) = \pi_V(f)(d(g)) = f ( \theta_{V^*} ( d(g) ) ) = f ( c_{V^*}( d_{V^*}^{-1}( d(g) ) ) ) = f(c(g))$ which implies that $c(g)=d(g)$. Thus $G$ is not effective. \end{proof} \begin{prop}\label{propeffective} If $K$ is s-unital and $C_{ L_c(G_0) \rtimes_{\pi} G^a }( Z(T) ) \subseteq T$, then $G$ is effective. \end{prop} \begin{proof} This follows from Proposition \ref{faithfulprop} and Proposition \ref{effectiveprop}. \end{proof} \begin{defi} We say that $Z(K)$ contains a set of s-units for $K$ if for all $k \in K$ there exists $k' \in Z(K)$ such that $k k' = k$. \end{defi} \begin{prop}\label{gen2} If $Z(K)$ contains a set of s-units for $K$, then the following are equivalent: \begin{itemize} \item[(i)] $G$ is effective; \item[(ii)] $\pi$ is faithful; \item[(iii)] $C_{ L_c(G_0) \rtimes_{\pi} G^a }( Z(T) ) \subseteq T$. \end{itemize} \end{prop} \begin{proof} From Proposition \ref{effectiveprop} it follows that (i)$\Leftrightarrow$(ii).
The implication (iii)$\Rightarrow$(i) follows from Proposition \ref{propeffective}. Now we show the implication (i)$\Rightarrow$(iii). To this end, take a non-zero $f = \sum_{i=1}^n \overline{ (k_i)_{c(B_i)} \delta_{B_i} } \in L_c(G_0) \rtimes_{\pi} G^a$ where the $k_i \in K \setminus \{ 0 \}$ and the $B_i$ are pairwise disjoint compact open bisections of $G$. Suppose that $f \in C_{ L_c(G_0) \rtimes_{\pi} G^a }( Z(T) )$. We wish to show that $f \in T$. Since $G$ is effective, it suffices to show that each $B_i$ is a subset of ${\rm Iso}(G)$. Seeking a contradiction, suppose that there is $l \in \{ 1, \ldots , n \}$ and $b \in B_l$ such that $c(b) \neq d(b)$. From the Hausdorff property of $G$ it follows that there is a compact open bisection $U \subseteq G_0$ with $c(b) \in U$ but $d(b) \notin U$. Since $Z(K)$ contains a set of s-units for $K$, there exists $\epsilon \in Z(K)$ such that for all $i \in \{ 1,\ldots,n \}$, the equality $k_i \epsilon = k_i$ holds. Since, clearly, $\overline{ \epsilon \delta_U } \in Z(T)$, it follows that $\overline{ \epsilon \delta_U } f = f \overline{ \epsilon \delta_U }$. By mimicking the calculations in the proof of \cite[Proposition 4.8]{beuter2017}, we get that $(*) \ \sum_{i=1}^n (k_i)_{C_i} = \sum_{i=1}^n (k_i)_{D_i}$, where $C_i = c_{B_i}^{-1} ( U \cap c(B_i) )$ and $D_i = c_{B_i}^{-1}( c(B_i) \cap \theta_{B_i}( d(B_i) \cap U ) )$, for $i = 1,\ldots,n$. By evaluating (*) on $b$, we get the equality $(k_l)_{C_l}(b) = (k_l)_{D_l}(b)$. This equality yields the contradiction $k_l = 0$. \end{proof} Next, we consider minimality. \begin{prop}\label{minimalpropagain} If $L_c(G_0)$ is $G^a$-simple, then $G$ is minimal. \end{prop} \begin{proof} It is clear that the second part of the proof of \cite[Proposition 5.4]{beuter2017} works in our generality also. \end{proof} \begin{lem}\label{thelemma} Suppose that $K$ is an s-unital ring.
If $I$ is an ideal of $L_c(G_0)$ such that for all $k \in K$ and all $x \in G_0$ there exists a compact open $V \subseteq G_0$ such that $x \in V$ and the map $k_V$ belongs to $I$, then $I = L_c(G_0)$. \end{lem} \begin{proof} Seeking a contradiction, suppose that $I \subsetneq L_c(G_0)$. Since every function in $L_c(G_0)$ is a sum of functions of the form $k_U$, for $k \in K$ and compact open subsets $U$ of $G_0$, it follows that there must exist a non-zero $k \in K$ and a non-empty compact open subset $U$ of $G_0$ with $k_U \notin I$. Amongst all such maps, we may, from compactness, choose one $k_U$ with $U$ minimal. Since $K$ is s-unital there exists $k' \in K$ such that $kk' = k$. Take $x \in U$. From the assumptions it follows that there exists a compact open subset $V$ of $G_0$ such that $x \in V$ and $k'_V \in I$. But then $k_{U \cap V} = k_U * k_V' \in I$. Since $x \in U \cap V$ it follows, in particular, that $U \cap V \neq \emptyset$. Thus, from minimality of $U$, it follows that $U \subseteq V$ (otherwise $U \setminus V$ would be a non-empty compact open proper subset of $U$ with $k_{U \setminus V} = k_U - k_{U \cap V} \notin I$). But then $k_U = k_{U \cap V} \in I$ which is a contradiction. \end{proof} \begin{prop}\label{gen1} If $K$ is simple and s-unital, then $G$ is minimal if and only if $L_c(G_0)$ is $G^a$-simple. \end{prop} \begin{proof} The ``if'' statement follows from Proposition \ref{minimalpropagain}. Now we show the ``only if'' statement. Let $I$ be a non-zero $G^a$-invariant ideal of $L_c(G_0)$. Define $U = \{ u \in G_0 \mid \mbox{there exists $f \in I$ with $f(u) \neq 0$} \}.$ Then, clearly, $U$ is non-empty. We claim that $U$ is invariant. Assume, for a moment, that the claim holds. From minimality of $G$ it follows that $U = G_0$. Take $x \in G_0$ and $f \in I$ with $f(x) \neq 0$. Then there is a compact open subset $V$ of $G_0$ with $x \in V$ such that $f|_V = f(x)_V$. Take $k \in K$. Since $K$ is simple, there exists $n \in \mathbb{N}$ and $k_i,k_i' \in K$, for $i=1,\ldots,n$, such that $k = \sum_{i=1}^n k_i f(x) k_i'$.
Then $k_V = \sum_{i=1}^n (k_i)_V * f * (k_i')_V \in I$. From Lemma \ref{thelemma} it follows that $I = L_c(G_0)$. Now we show the claim. Let $x \in G_1$ be such that $d(x) \in U$. Then there exists $g \in I$ with $g(d(x)) \neq 0$. Take a compact open bisection $V$ with $x \in V$. Take $n \in \mathbb{N}$ and $k_1,\ldots,k_n \in K$ such that the image of $g$ equals $\{ k_1 ,\ldots , k_n \}$. Since $K$ is s-unital there is $k \in K$ with $k_i k = k_i$, for $i = 1,\ldots,n$. Then $h := g * k_{d(V)} \in I \cap L_c(d(V))$. Since $I$ is $G^a$-invariant it follows that $\pi_V(h) \in I$. Finally, since $\pi_V(h)( c(x) ) = h( \theta_{V^*}(c(x)) ) = h( d ( c_V^{-1} ( c(x) ) ) ) = h( d ( x ) ) = g(d(x)) \neq 0,$ we get that $c(x) \in U$ and thus that $U$ is invariant. \end{proof} We are now ready to state and prove the main result of this section (which in Section \ref{introduction} was named Theorem \ref{thirdmaintheorem}). \begin{thm}\label{newthirdmaintheorem} Suppose that $K$ has the property that $Z(K)$ contains a set of s-units for $K$. If $G$ is a Hausdorff and ample groupoid, then the Steinberg algebra $A_K(G)$ is simple if and only if $G$ is effective and minimal, and $K$ is simple. \end{thm} \begin{proof} The ``if'' statement follows from Theorem \ref{newsecondmaintheorem}, Proposition \ref{gen2} and Proposition \ref{gen1}. Now we show the ``only if'' statement. Suppose that $K$ is not simple. Then there is a nonzero proper ideal $J$ of $K$. Then $A_J(G)$ is a non-zero proper ideal of $A_K(G)$ and hence $A_K(G)$ is not simple. If $G$ is not effective, then, from Proposition \ref{gen2}, it follows that $\pi$ is not faithful. From Proposition \ref{faithfulsimpleprop} it thus follows that $A_K(G)$ is not simple. Finally, suppose that $G$ is not minimal. From Proposition \ref{gen1} we get that $L_c(G_0)$ is not $G^a$-simple. From Proposition \ref{propsystemsimple} it now follows that $A_K(G)$ is not system simple.
Thus, from Remark \ref{simpleremark}, we get that $A_K(G)$ is not simple. \end{proof} In the last part of this section, we consider the case when the topology on $G$ is discrete. It is easy to see (see \cite[Remark 3.10]{steinberg2010}) that the corresponding Steinberg algebra $A_K(G)$ coincides with the classical {\it groupoid ring} $K[G]$ of $G$ over $K$, defined in the following way. The elements of $K[G]$ are finite sums of formal elements of the form $k g$ where $k \in K$ and $g \in G_1$. Addition is defined point-wise, i.e. by the relations $(k g) + (k' g) = (k+k')g$, for $k,k' \in K$ and $g \in G_1$. Multiplication is defined by the biadditive extension of the relations $(k g)(k' g') = (kk')(gg')$, if $d(g)=c(g')$, and $(kg)(k'g') = 0$, otherwise, for $k,k' \in K$ and $g,g' \in G_1$. Before we can state our result, we need an example and a definition. \begin{exa} Suppose that $I$ is a set. Recall that the induced matrix groupoid $\overline{I}$, defined by $I$, is constructed in the following way. Let $\overline{I}_0 = I$ and $\overline{I}_1 = I \times I$. Given $(i,j) \in I \times I$ put $d( (i,j) ) = j$ and $c( (i,j) ) = i$. The groupoid ring $K[\overline{I}]$ is called {\it the ring of row and column finite matrices over $K$ defined by $I$}. Note that if $n \in \mathbb{N}$ and $I = \{ 1,\ldots,n \}$, then $K[\overline{I}]$ coincides with the ring $M_n(K)$ of square matrices of size $n$ over $K$. \end{exa} \begin{defi} The groupoid $G$ is called {\it connected (thin)} if for all $u,v \in G_0$ there exists at least (at most) one $g \in G_1$ with $d(g)=u$ and $c(g)=v$. \end{defi} \begin{lem}\label{lemmaconnected} If $G$ is a discrete groupoid, then \begin{itemize} \item[(a)] $G$ is minimal if and only if $G$ is connected; \item[(b)] $G$ is effective if and only if $G$ is thin; \item[(c)] $G$ is minimal and effective if and only if $G$ equals the matrix groupoid defined by $G_0$. \end{itemize} \end{lem} \begin{proof} (a) Suppose that $G$ is not minimal.
Then there is a nonempty invariant $U \subsetneq G_0$ with $d(c^{-1}(U)) = U$. Take $u \in U$ and $v \in G_0 \setminus U$. Seeking a contradiction, suppose that there is $g \in G_1$ with $d(g)=v$ and $c(g)=u$. Then $g \in c^{-1}(U)$ so that $v = d(g) \in d(c^{-1}(U)) = U$ which is a contradiction. Suppose that $G$ is not connected. Take $u \in G_0$ and define $U$ to be the non-empty set of $u' \in G_0$ such that there exists $g \in G_1$ with $d(g)=u'$ and $c(g)=u$. Then, clearly, $d(c^{-1}(U))=U$ and, since $G$ is not connected, $U \subsetneq G_0$. Therefore, $G$ is not minimal. (b) It is clear that $G$ is effective if and only if ${\rm Iso}(G) = G_0$ and the latter is equivalent to $G$ being thin. (c) It follows from (a) and (b) that $G$ is minimal and effective if and only if $G$ is connected and thin. In that case, for all $u,v \in G_0$ there is precisely one $g \in G_1$ with $d(g)=u$ and $c(g)=v$. This is equivalent to $G = \overline{G_0}$. \end{proof} \begin{thm}\label{fourthmaintheorem} Suppose that $K$ is a non-zero and associative (but not necessarily commutative or unital) ring with the property that $Z(K)$ contains a set of s-units for $K$. If $G$ is a discrete groupoid, then the Steinberg algebra $A_K(G)$ is simple if and only if $K$ is simple and $A_K(G)$ equals the ring of row and column finite matrices over $K$ defined by the objects $G_0$ of $G$. \end{thm} \begin{proof} This follows from Theorem \ref{newthirdmaintheorem} and Lemma \ref{lemmaconnected}. \end{proof} \begin{rem} In the case when $K$ is locally unital, the ``if'' statement in Theorem \ref{fourthmaintheorem} follows from the fact that $K$ and $A_K(G)$ are Morita equivalent (see \cite[p. 14]{anh1987}). \end{rem} \section{Groupoid graded rings}\label{gradedrings} In this section, we specialize the results in the previous sections to groupoid (and group) graded rings. Let $G$ be a groupoid. We assume the same notation for groupoids as in Section \ref{sectionsteinbergalgebras}.
Let $R$ denote a ring. For the rest of this section, we assume that $R$ is {\it graded} by $G$. Recall from \cite{lundstrom2004} that this means that for each $g \in G_1$ there is an additive subgroup $R_g$ of $R$ such that $R = \oplus_{g \in G_1} R_g$ and for all $g,h \in G_1$, the inclusion $R_g R_h \subseteq R_{gh}$ holds, if $(g,h) \in G_2$, and $R_g R_h = \{ 0 \}$, otherwise. Note that if $G$ only has one object, then $G$ is a group and we recover the usual notion of a {\it group graded ring} (see e.g. \cite{cohen1983}). Recall that an ideal $I$ of $R$ is called {\it graded} if $I = \oplus_{g \in G_1} ( R_g \cap I )$; $R$ is called {\it graded simple} if $\{ 0 \}$ and $R$ are the only graded ideals of $R$. The grading on $R$ is called left (right) {\it non-degenerate} if for all $g \in G_1$ and all non-zero $r \in R_g$, the relation $R_{g^{-1}} r \neq \{ 0 \}$ ($r R_{g^{-1}} \neq \{ 0 \}$) holds. The grading on $R$ is called {\it s-unital epsilon-strongly graded} if for all $g \in G_1$ the $R_g R_{g^{-1}}$-$R_{g^{-1}} R_g$-bimodule $R_g$ is s-unital. Throughout this section, let $S$ denote the inverse semigroup {\it induced} by $G$. By this we mean that $S = G_1 \cup \{ o \}$, where $o$ is a symbol not contained in $G_1$. Put $o^* = o$ and if $g \in G_1$, then put $g^* = g^{-1}$. The corresponding binary operation on $S$ is defined as follows. Take $g,h \in S$. If $(g,h) \in G_2$, then let $gh$ denote the ordinary multiplication in $G_1$. If $(g,h) \notin G_2$, then put $gh = o$. If we put $R_o = \{ 0 \}$, then it is clear that $R$ is graded by $S$. The following result is immediate.
\begin{prop}\label{translation} Using the above notations, we get: \begin{itemize} \item[(a)] $E(S) = G_0 \cup \{ o \}$; \item[(b)] $R_0 = \oplus_{e \in G_0} R_e$; \item[(c)] $Z(R_0) = \oplus_{e \in G_0} Z(R_e)$; \item[(d)] $C_R( Z(R_0) ) = \cap_{e \in G_0} C_R( Z(R_e) )$; \item[(e)] $R$ is idempotent coherent; \item[(f)] $R$ is $S$-graded simple if and only if $R$ is $G$-graded simple; \item[(g)] $R$ is left (right) non-degenerate as an $S$-graded ring if and only if $R$ is left (right) non-degenerate as a $G$-graded ring. \end{itemize} \end{prop} \begin{prop}[Lundstr\"{o}m and \"{O}inert \cite{lundstrom2012}] If the grading on $R$ is left (right) non-degenerate, then $R / \cap_{e \in G_0} C_R( Z(R_e) )$ has the ideal intersection property. \end{prop} \begin{proof} This follows from Proposition \ref{minimalprop} and Proposition \ref{translation}. \end{proof} \begin{thm}\label{oinertepsilon} If $R$ is graded simple, the grading on $R$ is non-degenerate and $\cap_{e \in G_0} C_R( Z(R_e) ) \subseteq R_0$, then $R$ is simple. \end{thm} \begin{proof} This follows from Theorem \ref{newmaintheorem} and Proposition \ref{translation}. \end{proof} \begin{cor}\label{oinertepsilonagain} If $R$ is s-unital epsilon-strongly groupoid graded and $R_0$ is a maximally commutative subring of $R$, then $R$ is simple if and only if $R$ is graded simple. \end{cor} \begin{proof} This follows from Remark \ref{simpleremark}, Proposition \ref{translation} and Theorem \ref{oinertepsilon}. \end{proof} \begin{rem} Corollary \ref{oinertepsilonagain} generalizes \cite[Proposition 29]{NOP2016} from the group graded case to the groupoid graded s-unital case. \end{rem} We will now specialize our previous results to partial groupoid actions on rings. \begin{defi} For the rest of the section, let $A$ be a ring. 
Recall from \cite{bagio2012} that a {\it partial action} of $G$ on $A$ is a collection $\alpha = ( A_g , \alpha_g )_{g \in G_1}$, where for each $g \in G_1$, $A_g$ is an ideal of $A_{c(g)}$, $A_{c(g)}$ is an ideal of $A$, and $\alpha_g : A_{g^{-1}} \to A_g$ is a ring isomorphism and the following conditions hold \begin{itemize} \item $A = \sum_{e \in G_0} A_e$; \item $\alpha_e = {\rm id}_{A_e}$, for $e \in G_0$; \item $\alpha_h^{-1}( A_{g^{-1}} \cap A_h ) \subseteq A_{(gh)^{-1}}$, for $(g,h) \in G_2$; \item $\alpha_g ( \alpha_h (x) ) = \alpha_{gh}(x)$, for $(g,h) \in G_2$ and $x \in \alpha_h^{-1}(A_{g^{-1}} \cap A_h)$. \end{itemize} The partial action $\alpha$ is called {\it global} if $\alpha_g \alpha_h = \alpha_{gh}$, for $(g,h) \in G_2$. In that case, $A_g = A_{c(g)}$, for $g \in G_1$. We say that $\alpha$ is {\it s-unital} if for every $g \in G_1$, the ring $A_g$ is s-unital. An ideal $J$ of $A$ is said to be {\it $G$-invariant} if for all $g \in G_1$ the inclusion $\alpha_g ( J \cap A_{g^{-1}} ) \subseteq J$ holds. The ring $A$ is called {\it $G$-simple} if $\{ 0 \}$ and $A$ are the only $G$-invariant ideals of $A$. To each partial action $\alpha$ on $A$ one can define the associated {\it partial skew groupoid ring} $A *_{\alpha} G$ in the following way. As an additive group $A *_{\alpha} G$ is defined as $\oplus_{g \in G_1} A_g \delta_g$, for some formal symbols $\delta_g$. The multiplication is defined by the relations $(a_g \delta_g) (b_h \delta_h) = \alpha_g( \alpha_{g^{-1}}(a_g) b_h) \delta_{gh}$, if $(g,h) \in G_2$, and $(a_g \delta_g) (b_h \delta_h) = 0$, otherwise, for $g,h \in G_1$, $a_g \in A_g$ and $b_h \in A_h$. It is clear that $\alpha$ induces a partial action $\pi$ of $S$ on $A$ and that the corresponding skew inverse semigroup ring $A \rtimes_{\pi} S$ coincides with $A *_{\alpha} G$. 
If $\alpha$ is global, then the multiplication simplifies to $(a_g \delta_g) (b_h \delta_h) = a_g \alpha_g( b_h) \delta_{gh}$, if $(g,h) \in G_2$, and $A *_{\alpha} G$ is called a {\it skew groupoid ring}. In that case, if all $A_e$, for $e \in G_0$, coincide with a ring $B$, and all $\alpha_g = {\rm id}_B$, for $g \in G_1$, then $A *_{\alpha} G$ equals the groupoid ring $B[G]$. \end{defi} \begin{thm}\label{oinertgen} Suppose that $\alpha$ is an s-unital partial action of a groupoid $G$ on an $s$-unital ring $A$. If $A$ is $G$-simple and $C_{A *_{\alpha} G}( Z(A) ) \subseteq A$, then $A *_{\alpha} G$ is simple. If $A$ is commutative, then $A *_{\alpha} G$ is simple if and only if $A$ is $G$-simple and $A$ is a maximal commutative subring of $A *_{\alpha} G$. \end{thm} \begin{proof} This follows from Theorem \ref{newsecondmaintheorem} and Proposition \ref{translation}. \end{proof} \begin{rem} The second part of Theorem \ref{oinertgen} generalizes \cite[Theorem 2.3]{goncalvesoinert2014} where the corresponding result for groups $G$ was obtained. \end{rem} \begin{exa} We will now apply Theorem \ref{oinertgen} to a concrete situation where we have a global groupoid action. It seems to the author of the present article that this construction was first introduced in \cite{lundstrom2005}. Namely, let $L/K$ be a finite separable (not necessarily normal) field extension. Let $N$ denote a normal closure of $L/K$ and let Gal denote the Galois group of $N/K$. Furthermore, let $L_1,\ldots,L_n$ denote the different conjugate fields of $L$ under the action of Gal. If $1 \leq i,j \leq n$, then let $G_{ij}$ denote the set of field isomorphisms from $L_j$ to $L_i$. Let $G$ denote the groupoid defined in the following way. Put $G_0 = \{ 1,\ldots,n \}$ and let $G_1$ be the union of the $G_{ij}$, for $1 \leq i,j \leq n$. If $g \in G_{ij}$, then we put $d(g) = j$ and $c(g) = i$. 
Define $A$ to be $L_1 \times \cdots \times L_n$ and put $e_i = (0,\ldots,0,1,0,\ldots,0)$, with 1 in the $i$th position, for $i=1,\ldots,n$. Take $g \in G_{ij}$. Put $A_g = A e_i$. Define $\alpha_g : A e_j \to A e_i$ in the following way. If $x \in A e_j$, then $x = (0,\ldots,0,y,0,\ldots,0)$, for some $y \in L_j$ in the $j$th position. Put $\alpha_g(x) = (0,\ldots,0,g(y),0,\ldots,0)$, where $g(y)$ is in the $i$th position. It is clear that $\alpha$ defines a global groupoid action of $G$ on $A$. The corresponding skew groupoid ring $A *_{\alpha} G$ will, from now on, be denoted by $L * G$. Note that if $L/K$ is a {\it Galois} field extension, then $G = {\rm Gal}$ and $L * G$ is the classical skew group ring. \end{exa} \begin{thm}\label{simplicityLK} If $L/K$ is a finite separable field extension, then the corresponding skew groupoid ring $L * G$ is simple. \end{thm} \begin{proof} We wish to use Theorem \ref{oinertgen}. First we show that $A$ is $G$-simple. To this end, suppose that $J$ is a non-zero $G$-invariant ideal of $A$. Since $J = J1 = Je_1 \oplus \cdots \oplus Je_n$ it follows that there is $i \in G_0$ with $J e_i \neq \{ 0 \}$. Since $L_i$ is a field it is clear that $J e_i = A e_i \ni e_i$. Take $j \in \{ 1,\ldots,n \}$ and $g \in G_{ji}$. Since $J$ is $G$-invariant it follows that $e_j = \alpha_g(e_i) \in J$. Since $j$ was arbitrarily chosen from $G_0$ it follows that $1 = e_1 + \cdots + e_n \in J$. Therefore $J = A$. Now we show that $A$ is maximally commutative in $L * G$. First of all, it is clear that $A$ is commutative. Next, let $X$ be a finite subset of $G_1$. Suppose that $z = \sum_{g \in X} a_g \delta_g \in C_{L * G}(A)$ for some non-zero $a_g \in L_{c(g)}$. Since $z e_i = e_i z$ for all $i \in G_0$ it follows that $d(g)=c(g)$ for all $g \in X$. Take $h \in X$ and put $j = d(h) = c(h)$. Seeking a contradiction, suppose that $h$ is not equal to the identity element in the group $G_{jj}$. Then there is $t \in L_j$ such that $h(t) \neq t$. 
From the relation $z t \delta_j = t \delta_j z$ we now get the contradiction $h(t) = t$. Therefore $z \in A$ and we are done. \end{proof} \begin{rem} Note that the simplicity of $L * G$ in Theorem \ref{simplicityLK} also follows from \cite[Theorem 4]{lundstrom2005} where a more general result was obtained, albeit via different techniques (separability of ring extensions). \end{rem} \section*{Acknowledgement} The author is indebted to the anonymous referee for many valuable comments on the manuscript.
\section{Introduction} One of the legacies of every X-ray mission is the production of source catalogues. They always include a significant fraction of unidentified sources, whose nature remains unknown or controversial for several years, awaiting further investigations. This is the case of AX~J1714.1$-$3912, a source discovered during ASCA observations (performed in 1996) of the Galactic supernova remnant (SNR) RX~J1713.7$-$3946\ \citep{Uchiyama2002}, a shell-like SNR site of production of synchrotron X-ray emission \citep{Koyama1997, Slane1999}. AX~J1714.1$-$3912\ is located beyond the northern rim of the shell, and at first it was suggested to be associated with a molecular cloud \citep{Uchiyama2002}. The ASCA spectrum is well modeled by a hard powerlaw with a photon index $\Gamma$=0.98$^{+0.44} _{-0.34}$ and an absorbing column density \nh=$1.28^{+1.00} _{-0.70}$ $\times$10$^{22}$~cm$^{-2}$. This absorption was very similar to the one measured from other regions of the SNR, consistent with the total Galactic one in the source direction (\nh=1.5$\times$10$^{22}$~cm$^{-2}$; \citealt{nhcol2016}). The absorption-corrected flux was 4$\times10^{-11}$~erg cm$^{-2}$ s$^{-1}$ (1-10 keV). Given the spatial overlap of the ASCA source with the cloud, \citet{Uchiyama2002} interpreted the flat X-ray spectrum as produced by non-thermal bremsstrahlung from particles accelerated in the SNR, then impacting the molecular gas as a target. This hypothesis assumes the physical proximity of the SNR with the molecular cloud, and that AX~J1714.1$-$3912\ is an extended X-ray source. An observation performed in 2015 with the $Chandra$\ telescope proved the point-like character of AX~J1714.1$-$3912\ (named CXOU~J171343.9$-$391205, \citealt{Miceli2018}), excluding the association with the molecular cloud. A relatively low X-ray flux was observed, 7($\pm{3})\times$10$^{-14}$~erg cm$^{-2}$ s$^{-1}$ (2-10 keV; corrected for the absorption). 
\citet{Miceli2018} reported also on a {\em Suzaku}\ observation of the field containing AX~J1714.1$-$3912\ performed in 2011, where the source showed a variable X-ray emission, with a short ($\sim$2~ks) hard X-ray flare. The separate spectroscopy of the quiescent emission and the flare resulted in a quite high absorption (\nh\ in the range 3.6-12.2 $\times$10$^{22}$~cm$^{-2}$ for emission in quiescence, and \nh=6$-$11$\times$10$^{22}$~cm$^{-2}$ during the flare) and flat power law spectra (photon index, $\Gamma$, in the range 0.6-2.3 during quiescence, and $\Gamma$ ranging from 0.7 to 1.6 during the flare, at 90\% confidence level). The observed (not corrected for the absorption) fluxes (2-10 keV) were F=6$\times$10$^{-13}$~erg cm$^{-2}$ s$^{-1}$\ and F=3.6$\times$10$^{-12}$~erg cm$^{-2}$ s$^{-1}$\ for the quiescent and flare emission, respectively. On this basis, \citet{Miceli2018} proposed that AX~J1714.1$-$3912\ is a high mass X-ray binary (HMXB) belonging to the sub-class of the Supergiant Fast X-ray Transients (SFXTs; \citealt{Sguera2005, Sguera2006, Negueruela2005a}). The positional overlap with the near-infrared (NIR) point source 2MASS~17134391-3912055 further supported the identification with a massive X-ray binary \citep{Miceli2018}. \begin{figure} \includegraphics[width=\columnwidth]{ax1714_xmm_image_mos_color.ps} \caption{EPIC MOS2 image of the $XMM$-$Newton$\ observation of the northern region of the SNR RX~J1713.7$-$3946. The sky position of AX~J1714.1$-$3912\ is marked with the black, dashed circle. The northern rim of the SNR shell is evident in the lowest part of the image.} \label{fig:image} \end{figure} \section{Observations and Data Analysis} \label{sect:data} The sky position of AX~J1714.1$-$3912\ was serendipitously covered by $XMM$-$Newton$\ \citep{Jansen2001} during an observation performed in 2017, from 29 August (at 15:28, UTC) to 30 August (at 04:08), with an on-time exposure of 41.6\,ks ({pn}) and 45.5\,ks (MOS). 
The observation (Obs.ID 0804300901) was targeted at the northern region of the SNR RX~J1713.7$-$3946\ and imaged AX~J1714.1$-$3912\ at an off-axis angle of about 5 arcmin. In Fig.\,\ref{fig:image} we show the {MOS2}\ field-of-view (FOV), where AX~J1714.1$-$3912\ is marked by a dashed black circle. The three European Photon Imaging Cameras ({EPIC}) \citep{Struder2001, Turner2001} operated with the medium filter, with the pn in full frame extended window, and the two MOS in full frame mode. EPIC data were reprocessed using version 18 of the $XMM$-$Newton$\ Science Analysis Software (SAS), with standard procedures. The tools {\em rmfgen} and {\em arfgen}, available in the SAS, were used to generate the response and ancillary matrices, respectively. High background levels were filtered out before extracting EPIC spectra. Light curves and spectra were extracted from circles centered on the source emission, adopting a 30\hbox{$^{\prime\prime}$}\ radius, selecting patterns from 0 to 4 ({EPIC}\ {pn}), and from 0 to 12 ({MOS}). Regions of similar size, offset from the source position but lying on the same CCD, were used to extract background spectra. Source spectra from the {pn}, {MOS1}\ and {MOS2}\ were simultaneously fitted using {\sc xspec} (version 12.10.1; \citealt{Arnaud1996}) in the energy range 0.3-12 keV, allowing for free cross-calibration constants, to take into account calibration uncertainties. All fluxes were estimated in the 1-10 keV range. The models {\sc TBabs} and {\sc TBpcf} were adopted to account for the absorbing column density along the line of sight, assuming the photoelectric absorption cross sections of \cite{Verner1996} and the interstellar abundances of \cite{Wilms2000}. The spectra were rebinned to have at least 20 counts per bin, to apply the $\chi^{2}$ statistics. All uncertainties are computed at 90\% confidence level, for one interesting parameter. The uncertainties on the X-ray fluxes were calculated using {\sc cflux} in {\sc xspec}. 
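The minimum-counts rebinning applied before the $\chi^{2}$ fitting can be sketched as follows. This is a minimal stand-alone Python illustration, not the actual SAS grouping implementation; the function name and interface are ours.

```python
def group_min_counts(counts, min_counts=20):
    """Group consecutive spectral channels so that every bin holds at
    least `min_counts` counts, the criterion adopted here before
    applying the chi^2 statistics.  Returns a list of (start, stop)
    channel-index ranges; a trailing group that falls short of the
    threshold is merged into the previous one."""
    groups = []
    start, total = 0, 0
    for i, c in enumerate(counts):
        total += c
        if total >= min_counts:
            groups.append((start, i + 1))
            start, total = i + 1, 0
    if total > 0:  # leftover channels below the threshold
        if groups:
            s, _ = groups.pop()
            groups.append((s, len(counts)))
        else:
            groups.append((0, len(counts)))
    return groups
```

For instance, ten channels with 5 counts each are grouped into two bins holding 20 and 30 counts, since the two leftover channels are folded into the last full bin.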
\section{Results} \label{sect:res} \subsection{Temporal analysis} \label{sect:timing} The EPIC source light curve is reported in Fig.\,\ref{fig:lc}, before and after filtering for high background levels. The source displays a significant variability on timescales of a few hundred seconds, with a dynamic range of $\sim$50. Although formally the energy range is 0.3-12 keV, we note that most of the source counts lie in the 2-12 keV energy band. We have identified two source states: a high state (or ``flare'', hereafter) and a low one, named ``quiescence'', indicated in the lowest panel in Fig.\,\ref{fig:lc}. We searched the data for periodic signals by means of Fourier transforms and Rayleigh periodograms, but we did not find any statistically significant signal. The 3$\sigma$ upper limit on the pulsed fraction, computed by extensive Monte Carlo simulations, is 25\% for a sinusoidal signal between 0.4 and 5.4\,s, using only the pn data (2-12 keV), and 20\% between 5.4 and 1000\,s, using also the data from the MOS cameras. Above $\approx$1000 s, the strong red noise does not allow us to set meaningful limits. \begin{figure} \includegraphics[width=10.0cm, height=8.5cm, angle=-90]{fig2_ls.ps} \caption{AX~J1714.1$-$3912\ light curve in the energy band 0.3-12 keV (bin time = 256 s) observed by $XMM$-$Newton$. From top to bottom, we show the MOS1 source light curve (the MOS2 one is similar), the cleaned MOS1 one (where time intervals with high background levels have been filtered out), the EPIC pn light curve and the cleaned one (lowest panel). The horizontal lines in the lowest panel indicate the time intervals for the extraction of the flare and quiescent EPIC spectra (the same for pn, MOS1 and MOS2). } \label{fig:lc} \end{figure} \subsection{Spectroscopy} \label{sect:spec} We performed the spectroscopic analysis during the flare and in quiescence separately, extracting two sets of EPIC spectra, covering the time intervals shown in Fig.\,\ref{fig:lc} (bottom panel). 
The EPIC spectra extracted during the flare are highly absorbed and show large positive residuals around 6.4 keV, very evident when fitting the spectra with a simple, absorbed power law model (Fig.\,\ref{fig:spec_flare}), resulting in a reduced $\chi^{2}_{\nu}$=1.928 (for 117 degrees of freedom, dof; see Table\,\ref{tab:spec} for the spectral parameters). The addition of a narrow Gaussian line accounted for these residuals ($\chi^{2}_{\nu}$/dof=1.445/114; Table\,\ref{tab:spec}). However, a mild soft excess remained below 4 keV. This suggested the inclusion of an additional absorption component, a partial covering fraction absorption ({\sc TBpcf} in {\sc xspec}), where the additional column density is applied to a fraction of the power law emission. This final model provides a good description of the flare spectrum (Model 3 in Table\,\ref{tab:spec}). The counts spectra and the residuals to this best fit model are shown in Fig.\,\ref{fig:spec_flare}. We note that the model {\sc TBpcf} resulted in an almost complete covering (98$\pm{1}$\%) of the X-ray emission, leading to an absorption of 1.5$\times$10$^{24}$\,cm$^{-2}$ during the flare, considering both absorbing components ({\sc TBabs} and {\sc TBpcf}). This is 2 dex larger than the Galactic absorption (\nh=1.5$\times$10$^{22}$~cm$^{-2}$; \citealt{nhcol2016}). The centroid of the emission line is very well constrained in a narrow range around 6.4 keV in both states, clearly indicative of fluorescence from neutral iron. The Fe K$_{\alpha}$ line is produced when direct hard X-ray radiation illuminates neutral matter around the source. This reprocessing results in fluorescent iron line emission together with a Compton component. The final spectrum is thus a combination of a reflection component and the direct (power law) X-ray emission. However, adopting reflection models like {\sc pexrav} \citep{pexrav} and {\sc pexmon} \citep{Nandra2007} in {\sc xspec} did not yield better fits. 
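The partial covering configuration can be written compactly as $M(E) = e^{-N_{\rm H}\sigma(E)}\,[f\,e^{-N_{\rm H,pcf}\sigma(E)} + (1-f)]\,K E^{-\Gamma}$, where $f$ is the covering fraction. A minimal numerical sketch is given below; note that the $E^{-3}$ scaling of the cross-section is a schematic stand-in for the tabulated \cite{Verner1996} values used by {\sc TBabs}, and the function names are ours.

```python
import math

def sigma_phot(E_keV):
    # Schematic photoelectric cross-section per H atom (cm^2), using a
    # rough E^-3 scaling as a stand-in for the tabulated values.
    return 2.0e-22 * E_keV ** -3.0

def pcf_powerlaw(E_keV, K, Gamma, nH_gal, nH_pcf, f):
    """TBabs * TBpcf * POW, schematically: a power law seen through a
    fully covering column nH_gal, with a fraction f of the emission
    further absorbed by the local column nH_pcf (all in cm^-2)."""
    powerlaw = K * E_keV ** -Gamma
    galactic = math.exp(-nH_gal * sigma_phot(E_keV))
    partial = f * math.exp(-nH_pcf * sigma_phot(E_keV)) + (1.0 - f)
    return galactic * partial * powerlaw
```

With $f$ slightly below unity, the uncovered fraction of the power law leaks through at soft energies, which is the behaviour that reproduces a mild soft excess a fully covered model cannot fit.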
We note that disentangling the incident and the reflected component is made difficult by the limited energy band and the low counting statistics. This results in a quite hard power law slope measured in the total spectrum below 10 keV, also because of a reflection hump \citep{Fabian1990} that is expected beyond the $XMM$-$Newton$\ energy band. The spectrum in quiescence shows a prominent positive excess around 6.4 keV as well, when fitted with a simple absorbed power law ($\chi^{2}_{\nu}$/dof=2.85/6). The addition of a Gaussian line to the absorbed power law model resulted in a good fit to the data, with no need to adopt a further partial covering absorption (Fig.\,\ref{fig:spec_quiesc}). During the fit, a narrow iron emission line was assumed, fixing its width to zero. The best fit parameters are listed in Table\,\ref{tab:spec} (last column). \begin{table*} \caption{Spectroscopy of the two source states ({EPIC}\ {pn}, {MOS1}\ and {MOS2}). We show spectral results obtained using three models for the flare emission: Model 1 is a simple absorbed power law, Model 2 includes an iron line in emission, while the best fit model is Model 3, a partially absorbed power law, together with a Gaussian line in emission ({\sc const * TBabs * TBpcf * (POW+GAU)} in {\sc xspec} syntax). The last column lists the best fit parameters for the spectroscopy during quiescence ({\sc const * TBabs * (POW+GAU)}). F$_{1-10 keV}$ is the absorbed flux, UF$_{1-10 keV}$ the flux corrected for the absorption. 
} \label{tab:spec} \vspace{0.0 cm} \begin{center} \begin{tabular}{lcccc} \hline \hline \noalign {\smallskip} Parameters & \multicolumn{3}{c}{Flare} & Quiescence \\ & Model 1 & Model 2 & Model 3 & \\ \hline \noalign {\smallskip} N$_{\rm H}$ (10$^{22}$ cm$^{-2}$) & $127^{+16} _{-15}$ & $99^{+18} _{-17}$ & $5.8^{+6.3} _{-4.0}$ & $< 80$ \\ \multicolumn{2}{c}{----- Partial covering fraction absorption -------} \\ N$_{\rm H TBpcf}$ (10$^{22}$ cm$^{-2}$) & $-$ & $-$ & $145\pm{20}$ & $-$ \\ covering fraction & $-$ & $-$ & $98\pm{1}\%$ & $-$ \\ \multicolumn{2}{c}{----- POWER LAW-------} \\ $\Gamma$ & $0.21^{+0.32} _{-0.32}$ & $-0.40^{+0.39} _{-0.39}$ & $0.06^{+0.37} _{-0.38}$ & $-0.8^{+1.6} _{-1.7}$ \\ norm & $10(^{+12} _{-6})\times10^{-4}$ & $2.0(^{+3.3} _{-1.2})\times10^{-4}$ & $8.4(^{+13.5}_{-5.3})\times10^{-4}$ & $1.1(^{+50.}_{-1.0})\times10^{-6}$ \\ \multicolumn{2}{c}{----- GAUSSIAN LINE-------} \\ E$_{\rm line}$ (keV) & $-$ & $6.409^{+0.025} _{-0.027}$ & $6.402^{+0.024} _{-0.026}$ & $6.41^{+0.06} _{-0.05}$ \\ $\sigma$ (keV) & $-$ & $<$0.10 & $<$0.08 & 0.0 (fixed) \\ norm (photons cm$^{-2}$ s$^{-1}$) & $-$ & $1.3^{+0.4} _{-0.3} \times10^{-4}$ & $2.0^{+0.7} _{-0.5} \times10^{-4}$ & $4.4^{+6.3} _{-1.8} \times10^{-6}$ \\ EW (eV) & $-$ & $320^{+80} _{-70}$ & $265^{+30} _{-80}$ & $900^{+1300} _{-400}$ \\ \hline F$_{1-10 keV}$ (\mbox{erg cm$^{-2}$ s$^{-1}$}) & 9.2 $\pm{0.5}$ $\times10^{-12}$ & 9.3 $\pm{0.5}$ $\times10^{-12}$ & 9.4 $\pm{0.5}$ $\times10^{-12}$ & 4.3$^{+0.9} _{-1.2}$ $\times10^{-13}$ \\ UF$_{1-10 keV}$ (\mbox{erg cm$^{-2}$ s$^{-1}$}) & 5.4 $^{+1.6} _{-1.2}$ $\times10^{-11}$ & 3.4 $^{+1.1} _{-0.8}$ $\times10^{-11}$ & 6.2 $^{+2.5} _{-1.7}$ $\times10^{-11}$ & 4.4$^{+3.1} _{-0.1}$ $\times10^{-13}$ \\ \hline L$_{1-10 keV}$ (\hbox{erg s$^{-1}$}) & 6.5$\times10^{35}$ d$_{10kpc}^2$ & 4.1$\times10^{35}$ d$_{10kpc}^2$ & 7.4$\times10^{35}$ d$_{10kpc}^2$ & 5.3$\times10^{33}$ d$_{10kpc}^2$ \\ \hline $\chi^{2}_{\nu}$/dof & 1.928/117 & 1.445/114 & 1.076/112 & 0.854/4 \\ 
\hline \hline \end{tabular} \end{center} \end{table*} \begin{figure} \includegraphics[width=12.0cm, height=8.5cm, angle=-90]{4panels_lda_del_del_del_final.ps} \caption{EPIC spectra extracted during the flare. Counts spectra are shown in the top panel (EPIC {pn}\ is marked with solid squares, {MOS1}\ with open circles, {MOS2}\ with crosses), when fitted with the best fit reported in Table~\ref{tab:spec}. Residuals (in units of standard deviation) with respect to three models are shown in the bottom panel: from top to bottom, the residuals are with respect to a single absorbed power law, with respect to a power law with a Gaussian line at 6.4 keV, and including a partial covering absorption model (i.e. the best fit reported in Table~\ref{tab:spec}). } \label{fig:spec_flare} \end{figure} \begin{figure} \includegraphics[width=9.0cm, height=8.5cm, angle=-90]{fig4_ls.ps} \caption{EPIC spectra extracted during quiescence. Counts spectra are shown in the top panel (EPIC {pn}\ is marked with solid squares, {MOS1}\ with open circles, {MOS2}\ with crosses), when fitted with the best fit reported in Table~\ref{tab:spec}, an absorbed power law together with an emission line at 6.4 keV. The lower panel shows the residuals in units of standard deviation. } \label{fig:spec_quiesc} \end{figure} \section{Discussion} We have reported on the discovery of a prominent FeK$\alpha$ line and a large intrinsic absorption (\nh=1.5$\times$10$^{24}$\,cm$^{-2}$) from AX~J1714.1$-$3912\ during an $XMM$-$Newton$\ observation performed in 2017. The source shows a variable X-ray emission with a brighter state (flare) at the beginning of the observation, followed by a fainter state (quiescence). Spectra extracted from both states are well described by hard power law models with similar slopes, within the uncertainties. 
The X-ray emission is significantly more absorbed during the flare than during the following fainter state, suggesting a variability in the circumstellar absorbing matter on short timescales, correlated with the X-ray flux. The FeK$\alpha$ line is detected in both states, with a larger equivalent width during the quiescent emission. This might suggest that the unflared state is due to the eclipse by the companion star: the less absorbed X-ray spectrum can be due to scattering of the central (eclipsed) X-ray radiation into the line of sight by the stellar wind matter (e.g. \citealt{Haberl1991}). However, the flux of the iron line is significantly different in the two states and correlates with the continuum flux, disfavouring this hypothesis and suggesting an intrinsic variability. Moreover, this correlation indicates a close proximity of the reprocessing matter to the compact object. The source is also variable on long timescales of years: previous X-ray observations (ASCA in 1996, {\em Suzaku}\ in 2011, $Chandra$\ in 2015) caught different X-ray fluxes (Fig.\,\ref{fig:flux}) and a significantly lower absorbing column density (\nh\ in the range 10$^{22}$-10$^{23}$\,cm$^{-2}$) than during the $XMM$-$Newton$\ observation. This indicates long-term changes in the absorbing matter local to the source, possibly with an inhomogeneous distribution or a variability due to the orbital motion in a binary system. In light of the new $XMM$-$Newton$\ results, we discuss the possible source nature in the following sub-sections. \begin{figure} \includegraphics[height=\columnwidth, angle=-90]{ax1714_flux.ps} \caption{AX~J1714.1$-$3912\ long term light curve (fluxes in the energy range 2-10 keV are corrected for the absorption). 
} \label{fig:flux} \end{figure} \begin{figure} \includegraphics[width=\columnwidth, angle=0]{axj1714_sed_nufnu.ps} \caption{Spectral energy distribution of AX\,J1714.1$-$3912 (full circles: GAIA in cyan, 2MASS in green, \textit{Spitzer} in red and WISE in orange). The lines represent various templates normalized at the observed 3.6$\mu$m flux with no reddening (dotted) or with different amounts of reddening (solid), as annotated, applied to reproduce the observed optical-NIR SED. Templates of various types of AGN are shown in the \textit{top panel}~\citep{polletta07}, and of a B and M-type star~\citep{kurucz93} in the \textit{bottom panel}. } \label{fig:sed} \end{figure} \subsection{Is AX~J1714.1$-$3912\ an active galactic nucleus?} The point-like appearance, the large intrinsic absorption and the presence of an FeK$\alpha$ line in AXJ\,1714.1$-$3912 are reminiscent of the X-ray properties of obscured active galactic nuclei~\citep[AGN; see e.g.,][]{guainazzi05}. To investigate such a hypothesis we examine the broad-band spectral energy distribution (SED) of the optical-infrared (IR) counterpart to AXJ\,1714.1$-$3912. To build the broadband SED we collected optical data from GAIA DR3~\citep{gaia16_mission,gaia21_dr3}, near-IR data from 2MASS~\citep{skrutskie06}, and mid-IR data from WISE~\citep{wright10} and $Spitzer$~\citep{werner04,glimpse}. The optical-IR SED, shown in Fig.~\ref{fig:sed}, peaks at $\sim$2$\mu$m and decreases steadily towards longer wavelengths up to $\lambda{\leq}$10$\mu$m. Such a behaviour is inconsistent with what is observed in AGN, whose SEDs typically rise long-ward of 1--5$\mu$m~\citep{polletta07,hickox17} due to the emission from AGN-heated hot dust. Therefore, the broad-band SED rules out the AGN hypothesis. The mid-IR ($\lambda$\,=\,3--10$\mu$m) SED of AXJ\,1714.1$-$3912 is instead consistent with stellar radiation. The full optical-IR SED of AXJ\,1714.1$-$3912 can be reproduced with various reddened stellar templates~\citep{kurucz93}. 
We apply a standard Galactic reddening law~\citep{cardelli89}. The amount of required optical extinction depends on the stellar type: for example, an A$_\mathrm{V}$ of 13\,mag, corresponding to N$_\mathrm{H}{\simeq}$2.3$\times$10$^{22}$\,cm$^{-2}$, is required for a B-type star, and an A$_\mathrm{V}$ of 8\,mag (N$_\mathrm{H}{\simeq}$1.4$\times$10$^{22}$\,cm$^{-2}$) for a red-giant M-type star. The estimated column densities are consistent with, or slightly larger than, the Galactic value measured along the line of sight towards AXJ\,1714.1$-$3912~\citep[i.e., 1.5$\times$10$^{22}$\,cm$^{-2}$;][]{nhcol2016}. Spectroscopic observations and a more detailed analysis would be necessary to better characterise the stellar type and determine whether intrinsic dust absorption is present in AXJ\,1714.1$-$3912. \subsection{Is AX~J1714.1$-$3912\ an SFXT?} \citet{Miceli2018} excluded an extragalactic origin as well, based on the rapid timescale (thousands of seconds) of the X-ray flux variability. They suggested that AX~J1714.1$-$3912\ is a Galactic HMXB belonging to the sub-class of SFXTs, based on the point-like appearance, the amplitude of the X-ray variability and the hard power law spectrum, indicative of accretion of matter onto a compact object. In light of the flare caught by $XMM$-$Newton$, the source dynamic range (F$_{\rm max}$/F$_{\rm min}$) increases to $\sim$900, compared with the faint flux detected by $Chandra$\ (Fig.\,\ref{fig:flux}). This range of variability is not as extreme as the one shown by the prototypical members of the SFXT class (F$_{\rm max}$/F$_{\rm min}$ from 10$^{4}$ to 10$^{6}$) but still consistent with less variable, ``intermediate'' SFXTs (see Table\,2 in \citealt{Sidoli2018}). However, it is possible that we missed the brightest flares, since the duty cycle of SFXT outbursts is very small (lower than 5\%, see Table\,1 in \citealt{Sidoli2018}). 
On the other hand, the source field has been monitored by $INTEGRAL$/IBIS (above 20 keV) for a total exposure time of 6.7\,Ms \citep{Bird2016}, with no detections reported in the literature. If we assume a typical flare duration of $\sim$2\,ks, a duty cycle lower than 0.03\% (percentage of time spent in bright flaring activity, i.e. with a peak flux F$_{18-50 keV}\geq$1.5-3$\times$10$^{-10}$~\mbox{erg cm$^{-2}$ s$^{-1}$}; \citealt{Sidoli2018}) is required for AX~J1714.1$-$3912\ to reconcile with the lack of reported outbursts with $INTEGRAL$. This would imply that AX~J1714.1$-$3912\ is the SFXT with the lowest duty cycle: to date, the SFXT with the rarest outbursts is IGR\,J08408-4503, showing a duty cycle of 0.09\% \citep{Sidoli2018}. Alternatively, AX~J1714.1$-$3912\ could be located at a large distance: a short (duration $\sim$2\,ks) flare with a peak luminosity of $\sim$10$^{36}$~\hbox{erg s$^{-1}$}\ (18-50 keV) implies a source distance d$\gtrsim$8~kpc, in order not to be detected by $INTEGRAL$. The non-detection with $INTEGRAL$\ also poses a 3$\sigma$ upper limit to the persistent (quiescent) emission F$<$2.3$\times10^{-12}$\,\mbox{erg cm$^{-2}$ s$^{-1}$}\ (20-40 keV; \citealt{Bird2016}). If we assume that the quiescent $XMM$-$Newton$\ spectrum is representative of the long-term source state, we can use this upper limit to constrain the presence of a high energy cutoff, despite the large uncertainty in the measured power law slope. In particular, if the true photon index of the quiescent spectrum is $\Gamma$$\sim$0.8, the extrapolation of the power law model at higher energies leads to F$=$1.3$\times10^{-12}$\,\mbox{erg cm$^{-2}$ s$^{-1}$}\ (20-40 keV), with no need for a cutoff. If harder power law photon indices are assumed, variable cutoff values are needed to reconcile with the $INTEGRAL$\ upper limit. 
For instance, for a photon index $\Gamma$$\sim$$-1$ in the quiescent spectrum, a cutoff E$_{cut}$$\sim$10 keV is needed, to match the upper limit to the 20-40 keV flux (where E$_{cut}$ is the e-folding energy of exponential rolloff in the {\sc cutoffpl} model in {\sc xspec}). \subsection{Is AX~J1714.1$-$3912\ a supergiant B[e] HMXB?} Although the long-term X-ray light curve is compatible with an SFXT with very rare outbursts (and/or located at large distance), we note that the $XMM$-$Newton$\ spectral properties reported here for the first time are unusual for an SFXT \citep{Sidoli2017review, Martinez2017, Kretschmar2019}: so far, no members of this class are known to be so highly absorbed. Even in the SFXT IGR\,J18410$-$0535, where an intense flare was suggested to be produced by accretion of a very massive wind clump, the associated absorbing column density was significantly lower \citep{Bozzo2011}. The most extreme absorption among SFXTs has been observed in SAX\,J1818.6$-$1703 (5$\times$10$^{23}$\,cm$^{-2}$; \citealt{Boon2016}), but it is a unique case. In general, SFXTs show a circumstellar environment less dense than in persistent HMXBs \citep{Kretschmar2019}. We note that large absorbing column densities, variable on timescales of ten days, have been measured in the Be X-ray transient SXP\,1062 \citep{Gonzalez2018} during the decline of the outburst, but with no detection of FeK$\alpha$ line emission. On the other hand, the AX~J1714.1$-$3912\ spectrum observed with $XMM$-$Newton$\ strongly resembles those of the so-called ``highly obscured sources'' \citep{Walter2003}. The latter are HMXBs where the compact object is enshrouded in a dense circumstellar environment produced by the outflowing matter from an evolved, early type massive star, such as a sgB[e]. 
In particular, the huge absorbing column density of $\sim$1.5$\times$10$^{24}$\,cm$^{-2}$ we deduce from the flare emission makes AX~J1714.1$-$3912\ one of the most absorbed sources ever observed in our Galaxy, together with IGR J16318-4848 \citep{Ibarra2007}. SgB[e] stars \citep{Zickgraf1985, Kraus2019} are evolved massive stars characterized by disk-like, dusty circumstellar envelopes fed by dense outflows from the B supergiant. Their optical spectra show a twofold behaviour: broad Balmer emission lines plus narrow emission lines from permitted and forbidden transitions. HMXBs with a supergiant B[e] donor star are a rare type of X-ray binaries, with CI Camelopardalis (CI Cam, aka XTE\,J0421+560; \citealt{Bartlett2013}) as a prototype, being the first Galactic sgB[e] star observed during an X-ray outburst whose X-ray luminosity clearly implied a binary nature. \citet{Chaty2019} suggest that sgB[e]\ HMXBs are at the short evolutionary stage when a binary system is entering a common envelope phase of binary evolution. At present, this is a small class of rare HMXBs that, besides the Galactic sources CI\,Cam, IGR J16318$-$4848, and Wd1-9, includes a couple of candidates in the Magellanic Clouds and, remarkably, two ultraluminous X-ray sources, Holmberg II X-1 and NGC300 ULX-1/supernova imposter SN2010da \citep{Bartlett2019}. CI Cam and IGR J16318$-$4848 show variable absorbing column densities in the range 10$^{23}$-10$^{24}$\,cm$^{-2}$, intense FeK$\alpha$ line emission and X-ray flux variability \citep{Bartlett2019}. But while IGR J16318-4848 is bright above 20 keV \citep{Bird2016} with some level of flaring activity \citep{Sidoli2018}, CI Cam has never been detected by $INTEGRAL$\ \citep{Bird2016}. 
Nevertheless, IGR J16318$-$4848 has never undergone an X-ray outburst similar to the one experienced by CI Cam in 1998: CI Cam displayed a dynamic range in excess of 500 in 8 days (from $\sim$5$\times$10$^{-8}$\,\mbox{erg cm$^{-2}$ s$^{-1}$}\ to 9$\times$10$^{-11}$\,\mbox{erg cm$^{-2}$ s$^{-1}$}), with a decline of five orders of magnitude, back to quiescence, in a few months \citep{Belloni1999, Orlandini2000}. Their X-ray luminosities cannot be determined as their distances are very uncertain. The nature of the compact object is unknown as X-ray pulsations have not been observed. \section{Conclusions} We have reported here on an $XMM$-$Newton$\ observation of AX~J1714.1$-$3912, leading to the discovery of a remarkable new behavior of this source. The new findings can be summarized as follows: \begin{itemize} \item a high intrinsic obscuration ($\sim$1.5$\times$10$^{24}$\,cm$^{-2}$) is observed during the flaring emission, implying a large variability of two orders of magnitude in the absorbing column density towards the source, on timescales of years; \item a prominent FeK$\alpha$ line emission is evident both during the flare and the unflared (quiescent) emission, with variable fluxes in the two source states; \item the flare caught by $XMM$-$Newton$\ increases the source range of flux variability to $\sim$900, when compared to a $Chandra$\ observation performed two years before. \end{itemize} In view of these new findings we have discussed different viable scenarios for the source nature. AX~J1714.1$-$3912\ was previously suggested to be a SFXT. The short term variability during the $XMM$-$Newton$\ observation is consistent with a SFXT nature, as well as the long-term dynamic range. On the other hand, the $XMM$-$Newton$\ spectrum is remarkable, as no SFXT has ever shown an obscuration as large as 10$^{24}$\,cm$^{-2}$. 
This spectrum shows many similarities with those typically observed in the so-called ``highly obscured sources'', a rare sub-class of HMXBs with a sgB[e]\ companion. This might call the SFXT identification into question and leads us to propose an alternative origin for the X--ray emission: a sgB[e]\ HMXB. To confirm its membership, further investigations of the companion star are needed. \section*{Acknowledgements} Based on observations (ObsID 0804300901) obtained with $XMM$-$Newton$, a European Space Agency science mission with instruments and contributions directly funded by ESA Member States and NASA. This work has made use of data and software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), which is a service of the Astrophysics Science Division at NASA/GSFC. This work has made use of data from the ESA mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. This publication makes use of data products from the Two Micron All Sky Survey, the Wide-field Infrared Survey Explorer, and \textit{Spitzer} Space Telescope. 2MASS is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. WISE is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. \textit{Spitzer} was operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. 
PE acknowledges financial support from the Italian Ministry for Education and Research through the PRIN grant 2017LJ39LM. \section*{Data Availability} The $XMM$-$Newton$\ data analysed here are publicly available by means of the HEASARC (ObsID 0804300901) and the ESA archive at the link https://doi.org/10.5270/esa-nai97jb \bibliographystyle{mnras}
\section{Introduction}\vspace*{-1mm} With the ever-increasing demand for wireless services along with a need for green communications, spectral efficiency and energy efficiency have become important criteria in the design of future wireless systems. Energy harvesting (EH) cognitive radio \cite{sultan,park1,niyato2014,lee1,jeya} is a promising solution to improve the spectrum utilization; in particular, spectral efficiency is improved by spectrum sharing, while achieving self-sustaining green communications. In cognitive radio, a secondary user (SU) may share the spectrum with a primary user (PU) provided that the interference it causes to PU remains below a given threshold \cite{luo}. The use of cooperative relays in cognitive radio has gained significant attention as they have the potential to improve the coverage and reliability of SU's transmission while sharing the spectrum with PU \cite{zhang1,guo,zou,si1,luo,duong1,tourki2012,zhong,lee,si10,weiw}. However, the relays may have limited battery reserves, and recharging or replacing the battery frequently may be inconvenient. This invokes the need for an external power source to keep relays active in the network. The EH relays can overcome such energy shortage while exploiting the spatial diversity~\cite{nasir,aissa,mehta2,krikidis,yener1}. As for EH relays in cognitive radio, \cite{mousa,sanket} consider an EH secondary relay which helps relay the secondary data, and perform the secondary outage analysis for Rayleigh fading channels under the interference constraint at the primary receiver; while in~\cite{van}, cooperative communication via multiple EH relays is considered. In this paper, we consider the case where SU uses the best relay from multiple EH relays for its own transmission over Nakagami-$m$ channels, given that PU's outage probability remains below a given threshold$-$we characterize the interference to PU by its outage probability. For EH relays, the optimal use of available energy is crucial. 
Low transmission power to conserve energy may prolong the lifetime of a relay, but at the cost of increased outage; whereas higher transmission power improves the transmission quality, but at the expense of a higher energy consumption rate, reducing the future chances of transmission. Due to this EH nature of relays, the best relay selection becomes tricky as only relays having sufficient energy to forward the data to the destination, called \textit{active relays}, can be considered for the selection, making energy a crucial factor in the relay selection. Additionally, in spectrum sharing, the secondary communication via EH relays differs from that in a non-spectrum sharing environment; because, in spectrum sharing, the EH relay's transmit power depends not only on the energy available to it, but also on the maximum power allowed by PU's outage constraint. For example, in a case where a relay has harvested less energy, it may not transmit with the maximum power allowed by PU's outage constraint. On the contrary, even if the relay has harvested a large amount of energy, a tight PU outage constraint may not allow the relay to transmit with higher power. Thus, there exists an interesting tussle between these two constraints, putting a stronger restriction on the relay's transmit power than in the case with only one constraint, i.e., spectrum sharing without energy harvesting or non-spectrum sharing energy harvesting. Intrigued by the aforementioned tussle, in this work, we investigate its impact on the secondary network's performance, which is missing in~\cite{mousa,sanket,van}. The main contributions of this paper are as follows: \begin{itemize} \item Firstly, with the best relay selection scheme that maximizes SU's end-to-end signal-to-interference-plus-noise ratio (SINR), we specifically derive a closed-form expression for the outage probability of an EH decode-and-forward (DF) cognitive relay network in Nakagami-$m$ channels under PU's outage constraint. 
We also consider the interference from PU while deriving the outage probability expression. \item Secondly, for better utilization of the harvested energy, we calculate the probability of a relay being active. We show that, besides energy harvesting and consumption rates, the probability of a relay being active depends on PU's outage probability threshold. We then couple the energy constraint due to the EH nature of relays with PU's outage constraint. We investigate which of the two constraints dominates the performance of EH relays and find the respective regions of dominance that regulate the transmit powers of EH relays. \item Finally, we investigate the effects of fading severity parameter, number of relays, and the average energy harvesting rate on the secondary outage probability and tradeoff between the energy constraint and PU's outage constraint. \end{itemize} \vspace*{-2mm} \section{Energy Harvesting and Spectrum Sharing Model}\vspace*{-1mm} As shown in Fig.\,\ref{fig:syst}, the network consists of a primary transmitter (PT), a primary destination (PD), a secondary transmitter (ST), a secondary destination (SD), and $M$ energy harvesting DF secondary relays (SRs). The PT, PD, ST, and SD are conventional nodes with constant energy supply (e.g., battery). The ST-SD direct link is assumed to be unavailable~\cite{zhong,lee,nasir,aissa,si10,weiw}. The ST communicates with SD over $i$th half-duplex EH relay ($\mathrm{SR}_i$), $i \in \lbrace1, 2, \dotsc, M \rbrace$. The channel between a transmitter $p \in \lbrace \mathrm{PT, ST, SR}_i \rbrace$ and a receiver $q \in \lbrace \mathrm{PD, SD, SR}_i\rbrace$, is a Nakagami-$m$ fading channel; $h_{p-q}$ denotes the channel coefficient. Thus, the channel power gain $|h_{p-q}|^2$ is Gamma-distributed with mean $\Omega_{p-q}$ and fading severity parameter $m_{p-q}$. 
We can write the probability density function and cumulative distribution function\,(CDF) of a random variable $U = |h_{p-q}|^2$ as\vspace*{-1mm} \begin{eqnarray} f_{U}(u) &=& \frac{\alpha_{p-q}^{m_{p-q}}}{\Gamma(m_{p-q})}u^{m_{p-q}-1}\exp(-\alpha_{p-q} u), \label{eq:pdf}\vspace*{-1mm} \end{eqnarray}\vspace*{-2mm} \begin{eqnarray} F_{U}(u) &=& \frac{\Upsilon(m_{p-q}, \alpha_{p-q} u)}{\Gamma(m_{p-q})} = 1 - \frac{\Gamma(m_{p-q}, \alpha_{p-q} u)}{\Gamma(m_{p-q})}, \label{eq:cdf}\vspace*{-3mm} \end{eqnarray}\vspace*{-4mm} \noindent respectively, where $\Gamma(\cdot)$, $\Gamma(\cdot, \cdot)$, and $\Upsilon(\cdot, \cdot)$ are the complete, upper incomplete, and lower incomplete Gamma functions \cite{gradshteyn}, respectively; $\alpha_{p-q} = m_{p-q}/\Omega_{p-q}$. The channels are independent of each other. For the PT-PD, ST-PD, and SR$_{i}$-PD links, we assume knowledge of the mean channel power gains due to limited feedback; while SR$_{i}$ and SD have instantaneous channel gain knowledge for their respective receiving links, i.e., the ST-SR$_{i}$ and PT-SR$_i$ links at SR$_{i}$, and the SR$_{i}$-SD and PT-SD links at SD~\cite{zou,peter:2013}. The secondary communication happens over two phases, each of $T$-second duration. All channels experience block-fading and remain constant over the $\mathrm{2}T$ seconds spanning the two phases of secondary communication, as in \cite{mehta2,aissa,nasir}. In phase $\mathrm{1}$, ST transmits to the EH secondary relays, while in phase $\mathrm{2}$, the received signal from ST is forwarded by one of the relays to SD after decoding. Note that in phase $\mathrm{2}$, no relay might be active due to the lack of energy required to forward the signal. 
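The Gamma channel-power-gain model in \eqref{eq:pdf}-\eqref{eq:cdf} is easy to sketch numerically. The Python snippet below (an illustrative sketch, not part of the paper) evaluates $f_U$ and, for integer $m_{p-q}$, the CDF via the finite-sum form of the upper incomplete Gamma function, $F_U(u) = 1 - e^{-\alpha u}\sum_{k=0}^{m-1}(\alpha u)^k/k!$.

```python
from math import exp, factorial, gamma as Gamma

def power_gain_pdf(u, m, omega):
    """f_U(u): Gamma pdf with shape m_{p-q} and rate alpha = m/omega (mean omega)."""
    a = m / omega
    return a**m / Gamma(m) * u**(m - 1) * exp(-a * u)

def power_gain_cdf(u, m, omega):
    """F_U(u) for integer m: the upper incomplete Gamma reduces to a finite sum."""
    a = m / omega
    return 1.0 - exp(-a * u) * sum((a * u)**k / factorial(k) for k in range(m))

# Sanity check: integrating the pdf over [0, u] reproduces the cdf (midpoint rule).
m, omega, u = 2, 1.5, 2.0
n = 20000
riemann = sum(power_gain_pdf((i + 0.5) * u / n, m, omega) for i in range(n)) * u / n
print(abs(riemann - power_gain_cdf(u, m, omega)))  # small discretization error
```

The finite-sum CDF is the same reduction used throughout the paper's closed-form derivations, which assume integer fading severity parameters.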
\begin{figure} \centering \includegraphics[scale=0.2]{sys3}\vspace*{-3mm} \caption{Secondary transmissions via EH relays with underlay spectrum sharing.} \label{fig:syst}\vspace*{-6mm} \end{figure} \vspace*{-1mm} \subsection{Energy Harvesting Model}\vspace*{-1mm} The energy harvesting process of a relay $i$ is stationary and ergodic~\cite{mehta2}, with mean $H_{\mathrm{av}, i}$ Joules per second\,($\mathrm{J/s}$). This model encompasses different energy harvesting sources like solar, vibrations, radio frequency\,(RF) in the surroundings~\cite{lu}, and different energy harvesting profiles \cite{mehta1}. The EH relay stores the harvested energy in a battery with negligible leakage. For analytical tractability, we assume the capacity of the energy storage to be large\cite{aissa,mehta2,krikidis}. In addition, the energy consumption occurs only in data transmission; any other energy expenditure, e.g., energy consumption in signal reception and processing, is not considered for the purpose of exposition~\cite{mehta2,aissa,mousa}.\vspace*{-1mm} \subsection{Maximum Secondary Transmit Powers in Spectrum Sharing} In this work, we characterize the quality of service (QoS) of PU by its outage probability. For constant transmit power of PT ($P_{\mathrm{PT}}$), the PU outage probability should be below a certain threshold $\Theta_{\mathrm{p}}$ given the interference from the secondary transmitter and the relay. This constraint limits the transmit powers of ST and SR to $P_{\mathrm{ST}}$ and $P_{\mathrm{SR}}$, respectively. 
In phase $\mathrm{1}$, the outage probability of PU $\mathrm{P_{p, out, ST}}$ when ST is transmitting, is given as\vspace*{-1mm} \begin{align} \mathrm{P_{p, out, ST}} = \mathrm{Pr}\left(\log_{2}\left(1+ \gamma_{\mathrm{PD}}\right)\leq\mathcal{R}_\mathrm{p}\right) \leq \Theta_{\mathrm{p}},\vspace*{-2mm} \label{eq:3}\vspace*{-2mm} \end{align}\vspace*{-4mm} \noindent where $\gamma_{\mathrm{PD}} = \frac{P_{\mathrm{PT}}|h_{\mathrm{PT-PD}}|^2}{P_{\mathrm{ST}}|h_{\mathrm{ST-PD}}|^2 + N_0}$ is SINR at PD, $N_0$ being the noise power of additive white Gaussian noise (AWGN) at all receivers, and $\mathcal{R}_\mathrm{p}$ is the desired data rate on the primary link.\vspace*{-2mm} \begin{proposition} We write $\mathrm{P_{p, out, ST}}$ as follows: \vspace*{-4mm} {{\small \begin{align} \mathrm{P_{p, out, ST}}&= 1-\Bigg[\frac{\alpha_{\mathrm{ST-PD}}^{m_{\mathrm{ST-PD}}}\exp\left(\frac{-\alpha_{\mathrm{PT-PD}}\theta_\mathrm{p}N_0}{P_{\mathrm{PT}}}\right)}{\Gamma(m_{\mathrm{ST-PD}})} \nonumber \\ &\times\sum_{k=0}^{m_{\mathrm{PT-PD}}-1}\left(\frac{\alpha_{\mathrm{PT-PD}}\theta_\mathrm{p}N_0}{P_{\mathrm{PT}}}\right)^k\frac{1}{k!} \sum_{t=0}^{k}{\binom{k}{t}}\left(\frac{P_{\mathrm{ST}}}{N_0}\right)^t \nonumber \\ & \times\frac{\Gamma(m_{\mathrm{ST-PD}} + t)}{\left(\frac{\alpha_{\mathrm{PT-PD}}\theta_\mathrm{p} P_{\mathrm{ST}}}{P_{\mathrm{PT}}} + \alpha_{\mathrm{ST-PD}}\right)^{m_{\mathrm{ST-PD}} + t}}\Bigg]. \label{eq:power} \end{align}}} \end{proposition}\vspace*{-1mm} \begin{proof} The proof is given in Appendix \ref{sec:der_out}. \end{proof} Using \eqref{eq:power}, the value of maximum ST power $P_{\mathrm{ST}}$ allowed by PU's outage constraint $\Theta_{\mathrm{p}}$ can be numerically found. 
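As a concrete illustration of this numerical step, the Python sketch below (ours, with hypothetical normalized parameter values) implements \eqref{eq:power} for integer $m_{\mathrm{PT-PD}}$ and bisects for the largest $P_{\mathrm{ST}}$ satisfying $\mathrm{P_{p, out, ST}} \leq \Theta_{\mathrm{p}}$, taking $\theta_\mathrm{p} = 2^{\mathcal{R}_\mathrm{p}} - 1$ as implied by \eqref{eq:3}; the analogous computation with the relay's channel parameters yields $P_{\mathrm{SR}}$.

```python
from math import comb, exp, factorial, gamma as Gamma

def pu_outage_st(P_ST, P_PT, N0, theta_p, m_pp, om_pp, m_sp, om_sp):
    """PU outage (4) when ST transmits; m_pp (PT-PD severity) must be an integer."""
    a_pp, a_sp = m_pp / om_pp, m_sp / om_sp
    pre = a_sp**m_sp * exp(-a_pp * theta_p * N0 / P_PT) / Gamma(m_sp)
    s = 0.0
    for k in range(m_pp):
        ck = (a_pp * theta_p * N0 / P_PT)**k / factorial(k)
        for t in range(k + 1):
            s += (ck * comb(k, t) * (P_ST / N0)**t * Gamma(m_sp + t)
                  / (a_pp * theta_p * P_ST / P_PT + a_sp)**(m_sp + t))
    return 1.0 - pre * s

def max_st_power(Theta_p, **ch):
    """Largest P_ST with pu_outage_st <= Theta_p; the outage grows with P_ST."""
    lo, hi = 1e-12, 1e9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if pu_outage_st(mid, **ch) <= Theta_p else (lo, mid)
    return lo

# Hypothetical numbers in normalized units, for illustration only.
ch = dict(P_PT=10**1.5, N0=1e-3, theta_p=2**0.4 - 1,
          m_pp=2, om_pp=1.0, m_sp=1, om_sp=1.0)
P_ST = max_st_power(0.05, **ch)
```

Bisection suffices here because the interference term makes the PU outage monotonically increasing in $P_{\mathrm{ST}}$.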
Similarly, in phase $\mathrm{2}$, the maximum allowable transmit power $P_{\mathrm{SR}}$ for a relay can be numerically found from \eqref{eq:power1}, replacing the role of secondary transmitter in \eqref{eq:power} by the secondary relay and replacing corresponding channel parameters.\vspace*{-4mm} {{\small \begin{align} \mathrm{P_{p, out, SR}} &= 1-\Bigg[\frac{\alpha_{\mathrm{SR-PD}}^{m_{\mathrm{SR-PD}}}\exp\left(\frac{-\alpha_{\mathrm{PT-PD}}\theta_\mathrm{p}N_0}{P_{\mathrm{PT}}}\right)}{\Gamma(m_{\mathrm{SR-PD}})}\nonumber \\ & \times \sum_{k=0}^{m_{\mathrm{PT-PD}}-1}\left(\frac{\alpha_{\mathrm{PT-PD}}\theta_\mathrm{p}N_0}{P_{\mathrm{PT}}}\right)^k\frac{1}{k!} \sum_{t=0}^{k}{\binom{k}{t}}\left(\frac{P_{\mathrm{SR}}}{N_0}\right)^t \nonumber \\ & \times \frac{\Gamma(m_{\mathrm{SR-PD}} + t)}{\left(\frac{\alpha_{\mathrm{PT-PD}}\theta_\mathrm{p} P_{\mathrm{SR}}}{P_{\mathrm{PT}}} + \alpha_{\mathrm{SR-PD}}\right)^{m_{\mathrm{SR-PD}} + t}}\Bigg]. \label{eq:power1} \end{align}}}\vspace*{-4mm} \subsection{Active Relays and Best Relay Selection}\vspace*{-1mm} Assume that out of the total $M$ relays, $N$ relays are active $\left(N \in \lbrace 0, 1, \dotsc, M\rbrace\right)$ due to energy availability, and a relay has to be selected from active $N$ relays. An \textit{active} relay is the relay having sufficient energy to forward the received data from ST. For an opportunistic DF relaying, the relay with the largest end-to-end SINR at SD, called \textit{the best relay}, is selected to forward the signal. When $N$ relays are available for selection, the largest end-to-end SINR at SD is given by\vspace*{-1mm} \begin{equation} \gamma_{\mathrm{tot}}^{N}= \max_{\mathrm{SR}_i \in \mathbb{R}}(\min(\gamma_{\mathrm{SR}_i}, \gamma_{\mathrm{R}_i\mathrm{D}})), \label{eq:rn1}\vspace*{-1mm} \end{equation} where $\mathbb{R}$ is the set of active relays. Note that $\mathbb{R}$ is an empty set when no relay is active. 
$\gamma_{\mathrm{SR}_i}$ and $\gamma_{\mathrm{R}_i\mathrm{D}}$ are the SINRs at the $i$th relay and at SD over the $\mathrm{R}_i-\mathrm{D}$ channel, respectively, and are given as\vspace*{-2mm} \begin{equation} \gamma_{\mathrm{SR}_i} = \frac{P_{\mathrm{ST}}|h_{\mathrm{ST}-\mathrm{SR}_i}|^2}{P_{\mathrm{PT}}|h_{\mathrm{PT}-\mathrm{SR}_i}|^2 + N_0}, \label{eq:pst11}\vspace*{-1mm} \end{equation} \begin{equation} \gamma_{\mathrm{R}_i\mathrm{D}} = \frac{P_{\mathrm{SR}_i}|h_{\mathrm{SR}_i-\mathrm{SD}}|^2}{P_{\mathrm{PT}}|h_{\mathrm{PT-SD}}|^2 + N_0},\vspace*{-1mm} \label{eq:psr11} \end{equation} where we obtain $P_{\mathrm{ST}}$ and $P_{\mathrm{SR}_i}$ from \eqref{eq:power} and \eqref{eq:power1}, respectively. \section{Secondary Outage Analysis} Now, when we select the best relay out of $N$ active relays, the secondary outage probability $\mathrm{P}^{N}_{\mathrm{s,out}}$ can be given as\vspace*{-2mm} \begin{equation} \mathrm{P}^{N}_{\mathrm{s,out}}(\gamma) \!=\! \mathrm{Pr}(\gamma_{\mathrm{tot}}^{N} \leq \gamma) \!=\! \mathrm{Pr}\!\left(\!\max_{\mathrm{SR}_i \in \mathbb{R}}(\min(\gamma_{\mathrm{SR}_i}, \gamma_{\mathrm{R}_i\mathrm{D}})) \!\leq\! \gamma\!\right)\!, \label{eq:psout_basic}\vspace*{-1mm} \end{equation} where the desired secondary rate is $\mathcal{R}_{\mathrm{s}} = \frac{1}{2}\log_2\left(1 + \gamma\right)$. For ease of representation and without compromising the insight into the analysis, we consider $m_{\mathrm{ST}-\mathrm{SR}_i}\!=\!m_{\mathrm{ST-SR}}$, $m_{\mathrm{SR}_i-\mathrm{SD}} \!=\! m_{\mathrm{SR-SD}}$, $m_{\mathrm{PT}-\mathrm{SR}_i}\!=\! m_{\mathrm{PT-SR}}$, $m_{\mathrm{SR}_i-\mathrm{PD}}\! = \!m_{\mathrm{SR-PD}}$, $\Omega_{\mathrm{ST}-\mathrm{SR}_i} \!=\! \Omega_{\mathrm{ST-SR}}$, $\Omega_{\mathrm{SR}_i-\mathrm{SD}} \!= \!\Omega_{\mathrm{SR-SD}}$, $\Omega_{\mathrm{PT}-\mathrm{SR}_i} \!=\! \Omega_{\mathrm{PT-SR}}$, and $\Omega_{\mathrm{SR}_i-\mathrm{PD}}\! =\! \Omega_{\mathrm{SR-PD}}$. Then, $P_{\mathrm{SR}_i} = P_\mathrm{SR}$. 
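The max--min selection rule in \eqref{eq:rn1} is straightforward to sketch in code; the toy Python function below (illustrative only) returns the best relay index and its end-to-end SINR over the active set, or \texttt{None} when $\mathbb{R}$ is empty.

```python
def best_relay(gamma_sr, gamma_rd):
    """Opportunistic DF selection: argmax over active relays of the
    bottleneck SINR min(gamma_SR_i, gamma_RiD); None if no relay is active."""
    if not gamma_sr:
        return None
    e2e = [min(g1, g2) for g1, g2 in zip(gamma_sr, gamma_rd)]
    best = max(range(len(e2e)), key=e2e.__getitem__)
    return best, e2e[best]

# Two active relays: relay 0 is bottlenecked at 3.0, relay 1 at 1.0.
print(best_relay([3.0, 10.0], [5.0, 1.0]))  # (0, 3.0)
```

The min captures the DF bottleneck (a relay that cannot decode cannot help), while the max implements the opportunistic choice among active relays.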
Below we give the closed-form expression for $\mathrm{P}^{N}_{\mathrm{s,out}}$. \begin{proposition} We write $\mathrm{P}^{N}_{\mathrm{s,out}}(\gamma)$ as follows:\vspace*{-2mm} \begin{align} {\mathrm{P}}^{N}_{\mathrm{s, out}}(\gamma) &=\sum_{r_0=0}^{N}\sum_{r_1=0}^{r_0}\!\dotsc\!\!\!\!\!\! \sum_{r_{m_{\mathrm{SR-SD}}-1}=0}^{r_{m_{\mathrm{SR-SD}}-2}}\!\!{\binom{N}{r_0}}\!\!{\binom{r_0}{r_1}}\!\dotsc\!{\binom{r_{m_{\mathrm{SR-SD}}-2}}{r_{m_{\mathrm{SR-SD}}-1}}}\nonumber\\ &\!\!\times\mathcal{A}^{N-r_{m_\mathrm{SR-SD}-1}} (-1)^{N+r_{m_{\mathrm{SR-SD}}-1}} \nonumber \\ &\times \Bigg[\mathrm{exp}\left( -\frac{\alpha_{\mathrm{SR-SD}}\gamma N_0(N-r_{m_\mathrm{SR-SD}-1})}{P_{\mathrm{SR}}} \right) \nonumber\\ &\times\left(\frac{\alpha_{\mathrm{SR-SD}}\gamma N_0}{P_{\mathrm{SR}}}\right)^{R_{m_\mathrm{SR-SD}}} \prod_{k=1}^{m_\mathrm{SR-SD}-1} \left( \frac{1}{k!}\right)^{r_{k-1}- r_{k}} \Bigg] \nonumber \\ &\times \Bigg[\sum_{p =0}^{R}{\binom{R}{p}}\left(\frac{P_{\mathrm{PT}}}{N_0}\right)^p\frac{1}{(m_{\mathrm{PT-SD}}-1)!}\nonumber\\ &\!\times\!\! \frac{\alpha_{\mathrm{PT-SD}}^{m_{\mathrm{PT-SD}}}(m_{\mathrm{PT-SD}}+p-1)!}{\!\left(\!\alpha_{\mathrm{PT-SD}}+\frac{\alpha_{\mathrm{SR-SD}}\gamma P_{\mathrm{PT}} (N-r_{m_{\mathrm{SR-SD}} -1})}{P_{\mathrm{SR}}}\!\right)^{m_{\mathrm{PT-SD}}+p}}\Bigg], \label{eq:psout} \end{align}\vspace*{-5mm} \noindent where $\mathcal{A}$ is given by \eqref{eq:A}. \end{proposition}\vspace*{-2mm} \begin{proof} See Appendix \ref{appen:2}. A key idea in the proof is to consider the dependency between $\gamma_{\mathrm{R}_i\mathrm{D}}$ and $\gamma_{\mathrm{R}_j\mathrm{D}}$ $(i \neq j, j \in \lbrace 1, 2, \dotsc, M\rbrace)$, originating due to the common term $|h_{\mathrm{PT-SD}}|^2$. \end{proof} For an EH relay, its operation is subject to the energy neutrality constraint, which states that a relay cannot spend more energy than it has harvested. Thus, it is possible that the relay might remain inactive for some time due to the lack of energy. 
Let us denote the probability of a relay $i$ being active by $\eta_i \geq 0$. In a non-spectrum sharing scenario, $\eta_i$ depends on the relay's energy harvesting and consumption rates. Based on these two factors, the relay $i$ operates in two regions as follows: \begin{itemize} \item \textit{Energy constrained region ($\eta_i < 1$)} \item \textit{Energy unconstrained region ($\eta_i = 1$)}. \end{itemize} A relay operates in the energy unconstrained region if its average energy consumption rate is less than the average energy harvesting rate, i.e., the relay is always active. We assume $H_{\mathrm{av},i} = H_{\mathrm{av}} $ without loss of generality. Then, we have $\eta_i = \eta$. The energy available at a relay depends on how frequently the relay is selected, how much energy it has harvested so far, and when that energy was harvested. As we will show later, in the case of spectrum sharing with PU, the probability of a relay being active, i.e., $\eta$, depends not only on the energy harvesting rate, the total number of relays in the system, and the energy consumed by a relay in each transmission, but also on PU's outage constraint. Using the following proposition given in~\cite{mehta2}, we show the dependency of $\eta$ on PU's outage constraint.\vspace*{-2mm} \begin{proposition} \label{prop:main} Let the probability of selecting a relay be $\omega$. Then, $\omega = \frac{2H_{\mathrm{av}}}{P_{\mathrm{SR}}}$. The relays remain active with the probability\vspace*{-1mm} \begin{equation} \eta = 1 - \left[(1 - M\omega)^{+}\right]^{\frac{1}{M}}.\vspace*{-1mm} \label{eq:rel}\vspace*{-1mm} \end{equation} All the relays become energy unconstrained, i.e., $\eta = 1$, when $\omega \geq 1/M$. We denote $(x)^{+} = \max(0, x)$. \end{proposition}\vspace*{-2mm} The expression for $\omega$ in Proposition $\mathrm{3}$ is obtained from the energy neutrality constraint and the stationarity and ergodicity of the energy harvesting process. 
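Proposition \ref{prop:main} translates directly into code; the short Python sketch below (ours, illustrative, with hypothetical parameter values) computes $\eta$ from $H_{\mathrm{av}}$, $P_{\mathrm{SR}}$, and $M$.

```python
def active_probability(H_av, P_SR, M):
    """eta = 1 - [(1 - M*omega)^+]^(1/M) with omega = 2*H_av/P_SR (Proposition 3)."""
    omega = 2.0 * H_av / P_SR
    return 1.0 - max(0.0, 1.0 - M * omega) ** (1.0 / M)

# Energy unconstrained: P_SR <= 2*M*H_av implies the relays are always active.
print(active_probability(H_av=2.0, P_SR=10.0, M=3))   # 1.0
# Energy constrained: raising P_SR lowers the probability of being active.
print(active_probability(H_av=2.0, P_SR=100.0, M=3))
```

Note how the transmit power $P_{\mathrm{SR}}$, itself capped by PU's outage constraint, enters $\eta$ through $\omega$; this is the coupling exploited in the discussion that follows.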
From Proposition~\ref{prop:main}, one can notice that the probability of a relay being active depends on the power $P_{\mathrm{SR}}$ with which the relay performs a transmission. Equations \eqref{eq:power1} and \eqref{eq:rel} together show that $P_{\mathrm{SR}}$, and in turn the probability of a relay being active, depends on the primary outage constraint. Given $N$ out of $M$ relays are active, each with the probability $\eta$, we obtain the final expression for the secondary outage probability with EH relays by averaging over the number of active relays as\vspace*{-1mm} \begin{equation} \mathrm{P_{s,out}} = \sum_{N = 1}^{M} {\binom{M}{N}} \eta^{N}(1 - \eta)^{M-N}{\mathrm{P}}^{N}_{\mathrm{s, out}} + (1-\eta)^{M}\mathrm{P_{s,out}^0},\vspace*{-2mm} \label{eq:finale} \end{equation}\vspace*{-1mm} \noindent where ${\mathrm{P}}^{N}_{\mathrm{s, out}}$ given by \eqref{eq:psout} is the secondary outage probability when we select the best relay among the $N$ active relays to forward the signal from ST; ${\mathrm{P}}^{0}_{\mathrm{s, out}}$ is the secondary outage probability when no relay is active, and is equal to 1. \section{Discussions and Results} In spectrum sharing, PU's outage constraint governs the maximum transmit power of relays. In addition, if relays are energy harvesting, due to the limited available harvested energy, the probability of a relay being active plays an important role in the performance of the secondary system. In this section, we will first discuss the effect of PU's outage constraint on SU's outage performance for the case when an EH relay, on selection, uses the maximum power allowed by the primary, aiming to reduce the secondary outage probability, even though it might also reduce the relay's probability of being active. 
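Before turning to the results, the averaging in \eqref{eq:finale} over the binomially distributed number of active relays can be made concrete with a short Python sketch (ours; a hypothetical per-$N$ outage function stands in for the closed form \eqref{eq:psout}).

```python
from math import comb

def secondary_outage(M, eta, p_out_given_n):
    """Eq. (11): average P^N_{s,out} over N ~ Binomial(M, eta);
    with no active relay (N = 0) the secondary link is in outage for sure."""
    total = (1.0 - eta) ** M  # N = 0 term, P^0_{s,out} = 1
    for n in range(1, M + 1):
        total += comb(M, n) * eta**n * (1.0 - eta)**(M - n) * p_out_given_n(n)
    return total

# Hypothetical per-N outage q**n (independent relays) collapses the mixture
# to (1 - eta + eta*q)**M, a quick consistency check.
M, eta, q = 3, 0.6, 0.2
print(abs(secondary_outage(M, eta, lambda n: q**n) - (1 - eta + eta * q) ** M))
```

In the paper the per-$N$ term is \eqref{eq:psout}, which is not a simple product because of the common interference term $|h_{\mathrm{PT-SD}}|^2$; the independent-relay form here is only a test harness.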
Then, we will consider the case when, along with PU's outage constraint, EH relays aim to keep their probability of being active at one ($\eta = 1$)$-$which we will call the \textit{energy constraint}{\footnote{Note that the constraint $\eta = 1$ is different from the \textit{energy neutrality constraint}. With the latter, the energy consumed by a relay cannot exceed its harvested energy, whereas the constraint of always being active does not allow the relay's energy consumption rate to exceed its energy harvesting rate. With a higher energy consumption rate, the relays will eventually consume the energy harvested in the past before acquiring sufficient newly harvested energy to keep them active.}}. In this case, we will show that the transmit power of the relay is regulated by the dominant of the two constraints$-$PU's outage constraint and the energy constraint$-$and we will find the region of dominance for each constraint. Finally, we will see from the results that in both the above cases, relaxing PU's outage constraint beyond a level does not offer any benefit to the secondary system with EH relays. \subsection{System Parameters and Simulation Setup} We consider the following parameter values: PU transmit power, $P_\mathrm{PT} = \mathrm{15}$\,$\mathrm{dB}$; the desired primary rate, $\mathcal{R}_{\mathrm{p}}$ = $\mathrm{0.4}$\,$\mathrm{bits/s/Hz}$; the desired secondary rate, $\mathcal{R}_{\mathrm{s}}$ = $\mathrm{0.2}$\,$\mathrm{bits/s/Hz}$; noise power, $N_0$ = $-\mathrm{60}~\mathrm{dBm}$. Denote the fading severity parameters on the forward and interference channels by $m_{\mathrm{f}}$ and $m_{\mathrm{int}}$, respectively. We consider a 2-D topology, where ($x_{i}$, $y_{i}$) defines the coordinate of the $i$th user. 
The mean channel gain between $i$th user with coordinate ($x_{i}$, $y_{i}$) and $j$th user with coordinate ($x_{j}$, $y_{j}$) is $d_{ij}^{-\Delta}$, where $d_{ij}$ is the distance between users $i$ and $j$ in meters and $\Delta$ is the path-loss coefficient which is assumed to be $\mathrm{4}$. Without any loss of generality, ST is placed at ($\mathrm{0}$, $\mathrm{0}$), $M$ relays are clustered and collocated at ($\mathrm{50}$, $\mathrm{0}$), and SD is placed at ($\mathrm{100}$, $\mathrm{0}$). Also, PT and PD are located at ($\mathrm{50}$, $\mathrm{50}$) and ($\mathrm{100}$, $\mathrm{50}$), respectively.\vspace*{-1mm} \subsection{Effect of PU's Outage Constraint} \label{sec22} Figs.\,\ref{fig:11} and \ref{fig:22} show the effect of PU's outage constraint on the outage probability of the secondary system with EH relays.\footnote{Simulation results validate the analysis. The number of iterations is up to $\mathrm{10^6}$.} The selected EH relay transmits with the maximum power allowed by PU's outage constraint. We notice that the increase in the primary outage threshold $\Theta_{\mathrm{p}}$ increases the maximum transmit power $P_{\mathrm{SR}}$ allowed for the relay, which initially reduces the secondary outage probability $P_{\mathrm{s, out}}$. However, with the increase in the threshold $\Theta_{\mathrm{p}}$ beyond a level, a \textit{tipping point} will be reached after which $P_{\mathrm{s, out}}$ will increase even with the increase in $P_{\mathrm{SR}}$ as relays will consume energy at a higher rate than they will harvest, i.e., relays will become \textit{energy constrained} (see plots for $H_\mathrm{av}$\,=\,$\mathrm{1}, \mathrm{2}$ in Fig.\,\ref{fig:11}). This will reduce the probability of a relay being active, thereby reducing the number of relays available to forward the data to SD. 
As long as $P_{\mathrm{SR}}$ is below a certain level so that $\omega$\,=\,$\frac{2H_{\mathrm{av}}}{P_{\mathrm{SR}}}$\,$\geq$\,$1/M$ as shown in Proposition $\mathrm{3}$, the relays operate in the \textit{energy unconstrained region}, i.e., the harvested power is more than the transmit power $P_{\mathrm{SR}}$. But, with relaxation of PU's outage constraint, eventually the value of $P_{\mathrm{SR}}$ increases such that $\omega$\,$<$\,$1/M$, making the relays energy constrained and increasing $P_{\mathrm{s, out}}$. Also, an increase in the harvesting rate $H_{\mathrm{av}}$ delays the occurrence of the \textit{tipping point} as expected, and at high harvesting rates, the relays might operate completely in the energy unconstrained region due to the availability of abundant energy (see the plot for $H_\mathrm{av}$\,=\,$\mathrm{4}$ in Fig.\,\ref{fig:11}).\vspace*{-1mm} \begin{remark} Relaxing PU's outage constraint may not always improve the performance of EH secondary relays in spectrum sharing. That is, unlike the conventional non-EH case, due to the lack of energy, the relays with EH capability may not transmit with the maximum allowed power even though they are allowed to do so. \end{remark}\vspace*{-3mm} \begin{figure} \centering \includegraphics[scale=0.39]{Revised_results_globecom/fig_1_cam_ready}\vspace*{-3mm} \caption{Secondary outage probability ($P_{\mathrm{s, out}}$) vs. primary outage probability threshold ($\Theta_{\mathrm{p}}$), $M = 3$, $m_{\mathrm{f}} = \mathrm{2}$, $m_{\mathrm{int}} = \mathrm{1}$.} \label{fig:11}\vspace*{-4mm} \end{figure} \begin{figure} \centering \includegraphics[scale=0.39]{Revised_results_globecom/fig_2_cam_ready}\vspace*{-3mm} \caption{Secondary outage probability ($P_{\mathrm{s, out}}$) vs. 
primary outage probability threshold ($\Theta_{\mathrm{p}}$), effect of fading severity parameter and number of relays $M$, $m_{\mathrm{int}} = \mathrm{1}$, $H_{\mathrm{av}} = \mathrm{2}$\,$\mathrm{J/s}$.} \label{fig:22}\vspace*{-5mm} \end{figure} \subsection{Effect of Fading Severity Parameter} Fig.\,\ref{fig:22} shows the effect of the fading severity parameter on $P_{\mathrm{s, out}}$. We notice that, before the \textit{tipping point}, i.e., in the energy unconstrained region, $P_{\mathrm{s, out}}$ is lower for a higher fading severity parameter $m_{\mathrm{f}}$ on the forward channels. However, the trend reverses after the \textit{tipping point}. This is because, with the increase in $m_{\mathrm{f}}$, the fading effect subsides over the primary link between PT and PD, providing an extra margin for the maximum secondary relay transmit power $P_{\mathrm{SR}}$ for a given $\Theta_{\mathrm{p}}$. This helps in achieving lower $P_{\mathrm{s, out}}$ for higher $m_{\mathrm{f}}$ in the energy unconstrained region, where the energy harvesting rate is higher than the energy consumption rate. As shown in Fig.\,\ref{fig:22}, for a given harvesting rate, due to the higher allowed $P_{\mathrm{SR}}$ (higher energy consumption rate) for higher $m_{\mathrm{f}}$, the \textit{tipping point} arrives earlier than for lower $m_{\mathrm{f}}$. After the \textit{tipping point}, since relays enter the energy constrained region, higher $m_{\mathrm{f}}$, and in turn a higher energy consumption rate, reduces the probability of a relay being active. This often leads to non-availability of relays for transmission, increasing $P_{\mathrm{s, out}}$ for higher $m_{\mathrm{f}}$. Also, an increase in the number of relays $M$ increases the probability of being active (see \eqref{eq:rel}) as the number of candidate relays for cooperation increases (increased diversity), due to which a certain relay is chosen less frequently. This reduces $P_{\mathrm{s, out}}$. 
\begin{figure} \centering \includegraphics[scale=0.39]{Revised_results_globecom/fig_3_cam_ready}\vspace*{-3mm} \caption{Secondary outage probability ($P_{\mathrm{s, out}}$) vs. primary outage probability threshold ($\Theta_{\mathrm{p}}$), $M$ = $\mathrm{3}$, $m_{\mathrm{f}}= \mathrm{3}$, $m_{\mathrm{int}} = \mathrm{2}$, $H_{\mathrm{av}} = \mathrm{2}$\,$\mathrm{J/s}$.} \label{fig:33}\vspace*{-7mm} \end{figure} \subsection{Joint Effect of PU's Outage Constraint and Energy Constraint} From the discussions of Figs.\,\ref{fig:11} and \ref{fig:22}, we note that EH relays disregarding their probability of being active leads to SU's inferior outage performance beyond the tipping point. Now, for instance, assume that EH relays try to remain always active ($\eta = \mathrm{1}$), i.e., try to satisfy the energy constraint, irrespective of PU's outage constraint, and transmit with power $P_{\mathrm{s,a}}$. From Proposition $\mathrm{3}$, we can see that satisfying the energy constraint corresponds to $\omega$\,$\geq$\,$1/M$, i.e., $P_{\mathrm{s,a}}$\,$\leq$\,$2MH_{\mathrm{av}}$. That is, as long as EH relays transmit with power no greater than $2MH_{\mathrm{av}}$, they always remain active. Now, if we combine the energy constraint with PU's outage constraint, Fig.\,\ref{fig:33} shows that in the \textit{energy unconstrained region}, though an EH relay may transmit with maximum power $2MH_{\mathrm{av}}$ maintaining $\eta = \mathrm{1}$, the power $2MH_{\mathrm{av}}$ does not satisfy PU's outage constraint, i.e., $2MH_{\mathrm{av}} > P_{\mathrm{SR}}$. This leads to higher $P_{\mathrm{s, out}}$ in the EH relay spectrum sharing scenario governed by both the energy constraint and PU's constraint than it would have been in the EH non-spectrum sharing scenario governed by the energy constraint alone. In the \textit{energy constrained region}, the energy constraint becomes dominant, i.e., $2MH_{\mathrm{av}}$\,$<$\,$P_{\mathrm{SR}}$.
Thus, even though PU's outage constraint is satisfied, and allows EH relays to transmit with the maximum power $P_{\mathrm{SR}}$, the energy constraint is violated, causing $\eta < \mathrm{1}$ and increasing $P_{\mathrm{s, out}}$ as discussed for Figs.\,\ref{fig:11} and \ref{fig:22}. Therefore, we can see from the above discussion that, to satisfy both constraints, the maximum power with which EH relays may transmit is $\min(2MH_\mathrm{av}, P_\mathrm{SR})$. As shown in Fig.\,\ref{fig:33}, in the energy constrained region (after the tipping point), though transmitting with power $\min(2MH_\mathrm{av}, P_\mathrm{SR})$ avoids the increase in $P_{\mathrm{s, out}}$, relaxing PU's outage constraint does not improve SU's outage performance.\vspace*{-1mm} \section{Conclusions}\vspace*{-1mm} Under the primary outage constraint, this paper has analyzed the outage performance of the secondary communication via energy harvesting relays in Nakagami-$m$ channels. In a spectrum sharing scenario, the results show that, besides the energy harvesting nature of relays, the primary outage constraint also strongly influences the probability of a relay being active. We note that relays should keep their probability of being active at one; otherwise, obeying only the primary outage constraint may lead to inferior secondary outage performance. That is, in energy harvesting spectrum sharing, due to the energy constraint, relaxing the primary outage constraint may not always improve the secondary outage performance, unlike in the non-energy harvesting case. Further, we have found the region of dominance for each of the constraints and proposed the optimal transmit power for relays to mitigate their inferior performance in the energy constrained region.
We observe that an increase in the number of relays and a lower fading severity parameter delay the entry of relays into the energy constrained region, improving the secondary outage performance.\vspace*{-1mm} \appendices \section{Proof of \eqref{eq:power}} \label{sec:der_out} From \eqref{eq:3}, conditioned on $|h_{\mathrm{ST-PD}}|^2 = x$ we can write \vspace*{-5mm} {{\small\begin{equation} \mathrm{P_{p, out, ST}}\bigg|_{|h_{\mathrm{ST-PD}}|^2 = x}\!\!= \frac{\Upsilon\bigg(\!\!m_{\mathrm{PT-PD}}, \alpha_{\mathrm{PT-PD}} \frac{\theta_\mathrm{p} (P_{\mathrm{ST}}x + N_0)}{P_{\mathrm{PT}}}\!\!\bigg)}{\Gamma(m_{\mathrm{PT-PD}})}. \label{eq:out1}\vspace*{-4mm} \end{equation}}} \noindent When $m_{\mathrm{PT-PD}}$ is a positive integer, we can write the lower incomplete Gamma function as \cite[8.352]{gradshteyn}\vspace*{-3mm} {{\small\begin{equation} \Upsilon(a, b) = (a-1)!\left(1-\exp(-b)\sum_{k=0}^{a-1}\frac{b^k}{k!}\right). \label{eq:finite}\vspace*{-4mm} \end{equation}}} \noindent Then, using \eqref{eq:finite} in \eqref{eq:out1} and unconditioning over $|h_{\mathrm{ST-PD}}|^2$, we can write \eqref{eq:3} as\vspace*{-1mm} \begin{eqnarray} \mathrm{P_{p, out, ST}} \!\!\!\!\!\!&=&\!\!\!\! \int_0^{\infty}\mathrm{P_{p, out, ST}}\bigg|_{|h_{\mathrm{ST-PD}}|^2 = x}\frac{\alpha_{\mathrm{ST-PD}}^{m_{\mathrm{ST-PD}}}}{\Gamma(m_{\mathrm{ST-PD}})}x^{m_{\mathrm{ST-PD}}-1}\nonumber\\ && \times \exp(-\alpha_{\mathrm{ST-PD}} x)\mathrm{d}x.\vspace*{-2mm} \label{eq:rand1} \end{eqnarray}\vspace*{-6mm} \noindent Simplifying \eqref{eq:rand1} and using binomial expansion, we get\vspace*{-4mm} {{\small \begin{align} \mathrm{P_{p, out, ST}} &= 1-\frac{\alpha_{\mathrm{ST-PD}}^{m_{\mathrm{ST-PD}}}\exp\left(\frac{-\alpha_{\mathrm{PT-PD}}\theta_\mathrm{p}N_0}{P_{\mathrm{PT}}}\right)}{\Gamma(m_{\mathrm{ST-PD}})} \nonumber \\ &\!\!\times\!
\sum_{k=0}^{m_{\mathrm{PT-PD}}-1}\left(\frac{\alpha_{\mathrm{PT-PD}}\theta_\mathrm{p}N_0}{P_{\mathrm{PT}}}\right)^k\frac{1}{k!} \sum_{t=0}^{k}{\binom{k}{t}}\left(\frac{P_{\mathrm{ST}}}{N_0}\right)^t \nonumber \\ &\!\!\times\!\! \int_{0}^{\infty}\!\!\!x^{m_{\mathrm{ST-PD}}+t-1}\!\exp\!\left(\!\!-x\!\left(\!\!\frac{\alpha_{\mathrm{PT-PD}}\theta_\mathrm{p} P_{\mathrm{ST}}}{P_{\mathrm{PT}}} \!+\! \alpha_{\mathrm{ST-PD}}\!\!\right)\!\right)\!\mathrm{d}x. \label{eq:long1} \end{align}}}\vspace*{-3mm} \noindent Solving \eqref{eq:long1}, we get the required expression in \eqref{eq:power}.\vspace*{-1mm} \section{Proof of \eqref{eq:psout}} \label{appen:2} From\,\eqref{eq:psr11}, we can see that $\gamma_{\mathrm{R}_i\mathrm{D}}$ and $\gamma_{\mathrm{R}_j\mathrm{D}}$ $(i \neq j, j \in \lbrace 1, 2, \dotsc, M\rbrace)$ contain the common term $|h_{\mathrm{PT-SD}}|^2$, which makes them dependent. Thus, conditioned on $|h_{\mathrm{PT-SD}}|^2 = x$, we can write the CDF of $\gamma_{\mathrm{tot}}^{N}$ as \begin{eqnarray} \mathrm{Pr}\left(\gamma_{\mathrm{tot}}^{N} \leq \gamma \big|_{|h_{\mathrm{PT-SD}}|^2 = x}\right) \!\!\!\!&=&\!\!\!\! \prod_{i = 1}^{N} \big[1 - \left(1 -\mathrm{Pr}\left(\gamma_{\mathrm{SR}_i} \leq \gamma \right) \right) \nonumber \\ && \hspace*{-20mm}\times \underbrace{\left(1 - \mathrm{Pr} \left(\gamma_{\mathrm{R}_i\mathrm{D}} \leq \gamma\big|_{|h_{\mathrm{PT-SD}}|^2 = x} \right)\right)}_{\mathcal{I}}\bigg]. \vspace*{-3mm} \label{eq:main} \end{eqnarray}\vspace*{-4mm} \noindent Using \eqref{eq:cdf}, we can write $\mathcal{I}$ in \eqref{eq:main} as\vspace*{-2mm} \begin{eqnarray} \mathcal{I}&=& \frac{\Gamma\left(m_{\mathrm{SR-SD}}, \alpha_{\mathrm{SR-SD}} \frac{\gamma(P_{\mathrm{PT}}x + N_0)}{P_{\mathrm{SR}}}\right)}{\Gamma\left(m_{\mathrm{SR-SD}}\right)}. 
\label{eq:I}\vspace*{-1mm} \end{eqnarray} Using \cite[8.352]{gradshteyn}\vspace*{-1mm} \begin{equation} \Gamma(k, t) = (k-1)!\exp(-t)\sum_{n=0}^{k-1}\dfrac{t^n}{n!}, k=1,2,\ldots, \vspace*{-2mm} \end{equation} we can write \eqref{eq:I} as\vspace*{-5mm} {{\small \begin{eqnarray} \mathcal{I} &=& \exp\left(-\alpha_{\mathrm{SR-SD}} \frac{\gamma(P_{\mathrm{PT}}x + N_0)}{P_{\mathrm{SR}}}\right)\nonumber \\ && \times \!\sum_{k=0}^{m_{\mathrm{SR-SD}}-1}\! \dfrac{1}{k!} {\left(\alpha_{\mathrm{SR-SD}} \frac{\gamma(P_{\mathrm{PT}}x + N_0)}{P_{\mathrm{SR}}}\right)^k}\!\!. \label{eq:I1}\vspace*{-2mm} \end{eqnarray}}}\vspace*{-4mm} \noindent Now, let\vspace*{-2mm} \begin{equation} \!\!\mathcal{A} =1 -\mathrm{Pr}(\gamma_{\mathrm{SR}_{i}} \leq \gamma ) = \mathrm{Pr}\!\left(\!\frac{P_{\mathrm{ST}}|h_{\mathrm{ST}-\mathrm{SR}_{i}}|^2}{P_{\mathrm{PT}}|h_{\mathrm{PT}-\mathrm{SR}_{i}}|^2 + N_0}> \gamma \!\right). \label{eq:AA} \end{equation} Using the procedure to derive \eqref{eq:power}, we can write \eqref{eq:AA} as\vspace*{-4mm} {{\small \begin{eqnarray} \mathcal{A} &=&\frac{\alpha_{\mathrm{PT-SR}}^{m_{\mathrm{PT-SR}}}\exp\left(\frac{-\alpha_{\mathrm{ST-SR}}\gamma N_0}{P_{\mathrm{ST}}}\right)}{\Gamma(m_{\mathrm{PT-SR}})}\nonumber \\ &&\hspace*{-10mm}\times \sum_{k=0}^{m_{\mathrm{ST-SR}}-1}\left(\frac{\alpha_{\mathrm{ST-SR}} \gamma N_0}{P_\mathrm{ST}}\right)^k\frac{1}{k!} \times \left(\sum_{t=0}^{k}{\binom{k}{t}}\left(\frac{P_{\mathrm{PT}}}{N_0}\right)^t \right.\nonumber \\ &&\left.\hspace*{-10mm}\times\frac{\Gamma(m_{\mathrm{PT-SR}} + t)}{\left(\frac{\alpha_{\mathrm{ST-SR}}\gamma P_{\mathrm{PT}}}{P_{\mathrm{ST}}} + \alpha_{\mathrm{PT-SR}}\right)^{m_{\mathrm{PT-SR}} + t}}\right). 
\label{eq:A} \end{eqnarray}}}\vspace*{-3mm} \noindent Substituting \eqref{eq:I1} and \eqref{eq:A} in \eqref{eq:main} and using the multinomial theorem \cite{gradshteyn}, we get \vspace*{-3mm} {{\small \begin{eqnarray} &&\hspace*{-7mm}\mathrm{Pr}\left(\gamma_{\mathrm{tot}}^{N} \leq \gamma \big|_{|h_{\mathrm{PT-SD}}|^2 = x}\right) \nonumber\\ &\!\!\!\!\hspace*{-3mm} =&\hspace*{-3mm} \!\!\!\! \sum_{r_0=0}^N \sum_{r_1=0}^{r_0} \ldots \sum_{r_{m_{\mathrm{SR-SD}}-1}=0}^{r_{m_{\mathrm{SR-SD}}-2}} {\binom{N}{r_0}} {\binom{r_0}{r_1}} \ldots {\binom{r_{m_{\mathrm{SR-SD}}-2}}{r_{m_{\mathrm{SR-SD}}-1}}}\nonumber \\ &\!\!\!\!\hspace*{-3mm} \times & \hspace*{-3mm} \!\!\!\mathcal{A}^{N-r_{m_\mathrm{SR-SD}-1}} (-1)^{N+r_{m_\mathrm{SR-SD}-1}} \nonumber \\ &\!\!\!\!\hspace*{-3mm} \times&\hspace*{-3mm} \!\!\!\mathrm{exp}\!\left(\! -\frac{\alpha_{\mathrm{SR-SD}}\gamma N_0(N-r_{m_\mathrm{SR-SD}-1})}{P_{\mathrm{SR}}} \right)\left( \frac{\alpha_{\mathrm{SR-SD}}\gamma N_0}{P_{\mathrm{SR}}} \right)^{R_{m_\mathrm{SR-SD}}} \nonumber\\ &\!\!\!\!\hspace*{-3mm}\times&\!\!\!\!\hspace*{-3mm} \prod_{k=1}^{m_\mathrm{SR-SD}-1} \left( \frac{1}{k!}\right)^{r_{k-1}- r_{k}} \left( 1 + \frac{x P_{\mathrm{PT}}}{N_0}\right)^{R_{m_\mathrm{SR-SD}}} \nonumber \\ &\!\!\!\!\hspace*{-3mm}\times&\hspace*{-3mm}\!\!\! \mathrm{exp}\left( -\frac{\alpha_{\mathrm{SR-SD}}\gamma P_{\mathrm{PT}} x(N-r_{m_\mathrm{SR-SD}-1})}{P_{\mathrm{SR}}} \right), \end{eqnarray}}}\vspace*{-3mm} \noindent where $R_{m_\mathrm{SR-SD}} = \sum_{k=1}^{m_{\mathrm{SR-SD}}-1}k(r_{k-1}-r_k)$. In the following step, we use binomial expansion of $\left( 1 + \frac{x P_{\mathrm{PT}}}{N_0}\right)^{R_{m_\mathrm{SR-SD}}}$ and take expectation over $|h_{\mathrm{PT-SD}}|^2$. Then, we can write\vspace*{-4mm} {{\small \begin{eqnarray} {\mathrm{P}}^{N}_{\mathrm{s, out}}(\gamma)\!\!\!\!\!\! &=&\!\!\!\!\!\! \sum_{r_0=0}^{N}\sum_{r_1=0}^{r_0}\!\dotsc\!\!\!\!\!\!
\sum_{r_{m_{\mathrm{SR-SD}}-1}=0}^{r_{m_{\mathrm{SR-SD}}-2}}\!\!{\binom{N}{r_0}}\!\!{\binom{r_0}{r_1}}\!\dotsc\!{\binom{r_{m_{\mathrm{SR-SD}}-2}}{r_{m_{\mathrm{SR-SD}}-1}}}\nonumber\\ &&\hspace*{-15mm}\!\!\!\!\!\times \mathcal{A}^{N-r_{m_\mathrm{SR-SD}-1}} (-1)^{N+r_{m_{\mathrm{SR-SD}}-1}} \nonumber \\ &&\hspace*{-15mm}\!\!\!\!\!\times \left[\mathrm{exp}\left( -\frac{\alpha_{\mathrm{SR-SD}}\gamma N_0(N-r_{m_\mathrm{SR-SD}-1})}{P_{\mathrm{SR}}} \right)\right. \nonumber\\ &&\left.\hspace*{-15mm}\!\!\!\!\!\times \left(\frac{\alpha_{\mathrm{SR-SD}}\gamma N_0}{P_{\mathrm{SR}}}\right)^{R_{m_\mathrm{SR-SD}}} \prod_{k=1}^{m_\mathrm{SR-SD}-1} \left( \frac{1}{k!}\right)^{r_{k-1}- r_{k}} \right] \nonumber \\ &&\hspace*{-15mm} \!\!\!\!\!\times \Bigg[\sum_{p =0}^{R}{\binom{R}{p}}\left(\frac{P_{\mathrm{PT}}}{N_0}\right)^p \nonumber\\ &&\hspace*{-15mm}\!\!\!\!\!\times \int_{x=0}^\infty x^p \mathrm{exp}\left( -\frac{\alpha_{\mathrm{SR-SD}}\gamma P_{\mathrm{PT}} x(N-r_{m_\mathrm{SR-SD}-1})}{P_{\mathrm{SR}}} \right) \nonumber \\ &&\hspace*{-15mm}\!\!\!\!\! \times \frac{\alpha_{\mathrm{PT-SD}}^{m_{\mathrm{PT-SD}}}}{(m_{\mathrm{PT-SD}}-1)!} x^{m_{\mathrm{PT-SD}}-1} \mathrm{exp} \left(-\alpha_{\mathrm{PT-SD}} x\right) \mathrm{d}x\Bigg]. \label{eq:at1} \end{eqnarray}}}\vspace*{-3mm} \noindent Solving the integration in \eqref{eq:at1}, we get the required closed-form expression for the secondary outage probability given by \eqref{eq:psout}.\vspace*{-1mm} \bibliographystyle{ieeetr}
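The finite-sum Gamma identities from \cite[8.352]{gradshteyn} invoked in the two appendix proofs can be sanity-checked numerically. The pure-Python sketch below compares a trapezoidal evaluation of $\Upsilon(a,b)$ against the closed forms for integer order; the chosen orders, arguments, and tolerances are illustrative only.

```python
import math

def lower_gamma(a, b, n=100000):
    """Upsilon(a, b) = int_0^b t^(a-1) exp(-t) dt, trapezoid rule (integer a >= 1)."""
    f = lambda t: t ** (a - 1) * math.exp(-t)
    h = b / n
    s = 0.5 * (f(0.0) + f(b)) + sum(f(i * h) for i in range(1, n))
    return s * h

def lower_gamma_series(a, b):
    """Closed form for integer a: (a-1)! * (1 - exp(-b) * sum_{k<a} b^k / k!)."""
    return math.factorial(a - 1) * (
        1.0 - math.exp(-b) * sum(b ** k / math.factorial(k) for k in range(a)))

def upper_gamma_series(a, b):
    """Gamma(a, b) = (a-1)! * exp(-b) * sum_{k<a} b^k / k!  for integer a >= 1."""
    return math.factorial(a - 1) * math.exp(-b) * sum(
        b ** k / math.factorial(k) for k in range(a))

for a, b in [(2, 1.5), (3, 0.7), (4, 2.0)]:
    # Series form must match the numerically integrated lower incomplete Gamma.
    assert abs(lower_gamma(a, b) - lower_gamma_series(a, b)) < 1e-6
    # Lower and upper parts must sum to the complete Gamma(a) = (a-1)!.
    assert abs(lower_gamma_series(a, b) + upper_gamma_series(a, b)
               - math.factorial(a - 1)) < 1e-12
print("identities verified")
```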
\section{Introduction} \vspace{-1mm} For safety assessment and structural health monitoring of infrastructures such as buildings and bridges and their components, their vibrations and deformations need to be recorded and evaluated. Traditional structural sensors such as Linear Variable Displacement Transducers (LVDTs), dial gauges, accelerometers and other advanced sensors are used to measure deformations and vibrations. However, in some cases, conventional sensors may not be a good option for accessing the desired instrumentation locations or for working in a timely and cost-efficient way. Most importantly, traditional sensors measure the displacement or vibration at a discrete location, i.e., one sensor is used to measure one quantity at a single point. On the other hand, a high definition video camera can record the movement of a component or an entire structure rather than a single point. Recent developments in vision- and vibration-based technologies have led to measurement applications of low-cost, non-contact sensors for deformation and vibration monitoring of infrastructures \cite{dong2020review}, especially because unusual and extreme deformations or vibrations in aging bridges and buildings may be an indication of significant serviceability or safety issues. Cameras can collect high-quality images or high-speed videos in lab or field tests as non-contact and non-destructive sensors. With computer vision technologies and various deep learning techniques, the vision sensors can not only be used as eyes for Artificial Intelligence (AI) vehicles and machines, but also provide opportunities for scientific measurements to researchers and engineers. In a laboratory experiment, cameras can be fixed near the testing station and record live motion of the monitored structural members. They can be placed at a stationary location some distance away from an in-service bridge or building to record its structural movements remotely in the field.
The collected visual data can be processed further to identify potential damage and assess the motion of the observed structures precisely. To better understand and efficiently use displacement and vibration measurement data from cameras, we used Mask R-CNN \cite{he2017mask} with High-resolution network (HRNet) \cite{sun2019high} to track a target attached to beam specimens in the laboratory and to measure their deflections in the first study. Then the Mask R-CNN was applied in a shaking table test to track the dynamic motion of four targets simultaneously in the second study. \vspace{-1mm} \section{Literature Review} Computer Vision (CV) and deep learning techniques are very useful for gaining high-level understanding, extracting desired information, and obtaining precise motion measurements from images and videos. Traditional CV techniques have been widely used by researchers for displacement or vibration measurements with cameras. These techniques include image processing \cite{lee2006real}, up-sampled cross correlation \cite{feng2015vision}, the adaptive Region of Interest algorithm \cite{lee2017computer}, modified Taylor approximation \cite{liu2016vision}, and contour extraction with Speeded-Up Robust Features (SURF) \cite{yin2014concrete}. In addition, the Lucas-Kanade template tracking algorithm \cite{guo2016dynamic, Dong2019mark-free} and Digital Image Correlation (DIC) \cite{chen2021homography} are employed to track the displacement and vibration of structural members. Furthermore, Hu and Pai \cite{doi:10.1061/(ASCE)EM.1943-7889.0000374} utilized a camera-based 3D motion analysis system to measure the resonant vibration of steel cables. Chen et al. \cite{chen2017video} described an application of a video camera-based technique to test the vibration of an antenna tower on a tall building, with the camera placed 175 meters away from it. Hoskere et al.
\cite{hoskere2019vision} used an Unmanned Aerial Vehicle (UAV) to measure the modal properties and dynamic response of a full-scale structure. Deep learning is a relatively new research area for visual measurement applications. Dong et al. \cite{dong2020structural} implemented a full field optical flow algorithm named FlowNet2 to measure the displacement and vibration of structures. They also used Spatio-Temporal Context Learning to track targets and utilized a Taylor approximation to gain subpixel level precision for displacement measurement \cite{dong2019robust}. Also, Dong et al. \cite{dong2019non} applied the Visual Geometry Group (VGG) network to extract features of the target for monitoring and measurement during traffic. These methods indicate how we can efficiently use cameras to monitor and measure the displacement and vibration of structural members or a structure in the laboratory or in the field. \vspace{-4mm} \section{Methodology} In our previous studies \cite{bai-2021-isprs, Bai1835end}, new variants of the latest Mask R-CNNs were successfully applied for structural damage detection with high accuracy. Therefore, we used one of the variants, Mask R-CNN with HRNet, to track and measure the displacement and vibration in this research. Fig. \ref{fig:maskrcnn} shows the framework of this Mask R-CNN. \begin{figure}[h!] \centering \vspace{-1mm} \includegraphics[width=\linewidth, height=2.4cm]{./mask_rcnn.pdf} \vspace{-8mm} \caption{The framework of Mask R-CNN for tracking and measuring the displacement of a reinforced concrete beam} \vspace{-8mm} \label{fig:maskrcnn} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.9\linewidth, height=3.2cm]{./how_it_works.pdf} \vspace{-8mm} \caption{Translation of a target measured by a bounding box or by matching keypoints.} \vspace{-6mm} \label{fig:workflow} \end{figure} \begin{figure}[h!]
\vspace{-2mm} \centering \includegraphics[width=0.9\linewidth, height=1.8cm]{./Flowchart_for_mask_rcnn+SIFT.pdf} \vspace{-4mm} \caption{A flowchart of Mask R-CNN and SIFT for automated displacement and vibration measurement in a lab experiment with a stationary camera.} \vspace{-4mm} \label{fig:flowchart} \end{figure} As shown in Fig. \ref{fig:maskrcnn} and \ref{fig:setup}, a wood frame is attached near the midspan of the tested beam so that it moves downward or upward when the beam is loaded or unloaded. The motion of the frame represents the deflection of the point where it is attached. Mask R-CNN is used to track the top of this wood frame (i.e., the yellow dashed line on the left image in Fig. \ref{fig:maskrcnn}), which is marked by a bounding box and a mask in purple on the right image. Since the tracking target is a rigid body, its motion can be represented by any point on it or by the bounding box. On the other hand, as shown in Fig. \ref{fig:workflow} for the image plane of a stationary camera, the translation of a target between the first frame and the $j$th frame, $du\textsuperscript{j}$ and $dv\textsuperscript{j}$, can be calculated as the position change of the bounding box or the average motion of the matching keypoints with SIFT. We observed that the mask for a target may not be exactly the same as its real shape in some cases. Therefore, SIFT is introduced into the pipeline to eliminate this drawback when the bounding box does not accurately represent the target. Furthermore, subpixel precision can be achieved by matching and using the average motion of these keypoints on the target. In our pipeline (see Fig. \ref{fig:flowchart}), the good-match ratio test is applied as commonly done with SIFT in practice, and the range of coordinate change for each matching keypoint is also constrained so that only the best matching keypoints are retained. Therefore, mismatching is reduced dramatically and the measurement accuracy is improved.
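The match-filtering step just described can be sketched in pure Python. Here the matched keypoint coordinates, ratio values, and both thresholds are hypothetical; a real pipeline would obtain the matches and their ratio scores from a SIFT detector and matcher.

```python
# Hypothetical matched keypoints between frame 1 and frame j:
# each entry is ((u1, v1), (uj, vj), ratio), where `ratio` is the
# nearest/second-nearest distance ratio of the match (Lowe's ratio test).
matches = [
    ((100.0, 50.0), (100.2, 54.1), 0.45),
    ((120.0, 52.0), (120.1, 54.0), 0.50),
    ((110.0, 60.0), (110.0, 64.2), 0.55),
    ((130.0, 55.0), (180.0, 90.0), 0.60),   # outlier: implausible jump
    ((115.0, 58.0), (114.9, 62.1), 0.92),   # rejected by the ratio test
]

RATIO_MAX = 0.75   # ratio-test threshold (assumed value)
RANGE_MAX = 10.0   # max plausible coordinate change in pixels (assumed value)

# Keep matches that pass both the ratio test and the coordinate-change constraint.
good = [(p, q) for p, q, r in matches
        if r < RATIO_MAX
        and abs(q[0] - p[0]) <= RANGE_MAX and abs(q[1] - p[1]) <= RANGE_MAX]

# Sub-pixel translation = average motion of the retained keypoints.
du = sum(q[0] - p[0] for p, q in good) / len(good)
dv = sum(q[1] - p[1] for p, q in good) / len(good)
print(round(du, 2), round(dv, 2))
```

Averaging over several surviving keypoints is what yields the subpixel precision mentioned above, since individual keypoint locations are themselves estimated to fractions of a pixel.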
Finally, the measurement is converted from pixels to length units (inches or millimeters) using a conversion factor, $s$, also called the scale factor. The horizontal and vertical displacements $dx\textsuperscript{j}$ and $dy\textsuperscript{j}$ of the target can be obtained as follows: \vspace{-2mm} \begin{equation} {dx\textsuperscript{j} = s \times du\textsuperscript{j} \label{eq1}} \vspace{-2mm} \end{equation} \begin{equation} {dy\textsuperscript{j} = s \times dv\textsuperscript{j}\label{eq2}} \vspace{-2mm} \end{equation} As a comparison, an optical flow method called the Lucas-Kanade (LK) tracker \cite{bouguet2001pyramidal} was used to track and measure the same targets in our experiments. For the LK tracker, the relative displacement is measured between two adjacent frames in a video. The final displacement is the sum of all the relative measurements. In addition, the Savitzky-Golay filter \cite{press1990savitzky} and the Butterworth filter \cite{bianchi2007electronic} were employed to handle the inconsistency and noise of the measurements. The Fast Fourier Transform (FFT) \cite{cooley1965algorithm} was applied to extract the frequencies of the vibrating targets. \vspace{-2mm} \section{Implementation} Two types of indoor experiments were utilized to verify our proposed methods. The first one is a small-scale experiment involving three reinforced concrete (RC) beams (see Fig. \ref{fig:setup}) conducted on the main campus of The Ohio State University in Columbus, Ohio. These beams are loaded and deflected until failure. Another test is an application on a video of a shaking table test (see Fig. \ref{fig:shaking}) \cite{quanser_2017}. \vspace{-4mm} \begin{figure}[h!]
\centering \includegraphics[width=0.6\linewidth, height=2.4cm]{./setup_for_beam_test.pdf} \vspace{-2mm} \caption{A flexural test of an RC beam with an LVDT and a camera.} \label{fig:setup} \vspace{-4mm} \end{figure} \begin{figure}{} \centering \vspace{-4mm} \includegraphics[width=0.8\linewidth, height=2.4cm]{./example_shaking_table.pdf} \vspace{-2mm} \caption{Original image (left) and label (right) in the shaking table video \cite{quanser_2017}.} \label{fig:shaking} \end{figure} \vspace{-2mm} \subsection{Deflection Measurement of RC Beams in Laboratory Tests} In this experiment, three RC beams were subjected to a monotonically increasing point load at the midspan. A displacement sensor (an LVDT or dial gauge) and a wood frame were used to measure the deflection near the midspan of the beams, while a camera was placed 3 feet away from the midspan (see Fig. \ref{fig:setup}). Its resolution was $1600\times1200$ and the frame rate was 15 frames per second. The wood frame was clamped to the beam to represent the deflection of the targeted point on the tested beam. The data processing for the videos of the three tests is as follows: since these are static tests, in which the loading and deflection of the beams are slow, one image per second is selected from each video as the visual data. There are a total of 500 to 600 images for each test. To train the Mask R-CNN, only 50 images are randomly selected and labeled for detecting and tracking the top of the wood frame (purple mask) as shown in Fig. \ref{fig:maskrcnn}. The LK tracker is used to track the same object and measure the deflection. The testing results are shown in Fig. \ref{fig:plot}. Compared to the LK tracker, this Mask R-CNN with SIFT can provide a measurement closer to the ground-truth beam deflections measured by a dial gauge. In addition, the Savitzky-Golay filter is applied to smooth the measurements so that they become consistent (see Fig. \ref{fig:MAE}). Table \ref{tab1} shows the MAE (mean absolute error) for both methods.
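The pixel-to-length conversion of Eqs. (1)--(2) and the MAE metric reported in Table 1 can be sketched as follows; the scale factor and all readings below are hypothetical illustration values, not data from these tests.

```python
s = 0.01  # hypothetical scale factor, inches per pixel

def to_length(d_pixels, s):
    """Apply Eq. (1)/(2): convert a pixel displacement to length units."""
    return s * d_pixels

def mae(measured, ground_truth):
    """Mean absolute error between camera and dial-gauge readings."""
    return sum(abs(m - g) for m, g in zip(measured, ground_truth)) / len(measured)

dv_pixels = [0.0, 5.2, 10.1, 15.3]             # tracked vertical motion, pixels
camera = [to_length(d, s) for d in dv_pixels]  # camera measurement, inches
gauge = [0.000, 0.050, 0.100, 0.155]           # hypothetical dial-gauge readings
print(mae(camera, gauge))
```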
It can be inferred that SIFT and the Savitzky-Golay filter can effectively readjust the position of the bounding box predicted by the Mask R-CNN and smooth the measurements; hence, the proposed method can outperform the LK tracker. \vspace{-4mm} \begin{table}[htbp] \caption{MAE for two methods (inches)} \vspace{-5mm} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \cline{2-4} \textbf{Methods} & \textbf{\textit{Test 1}}& \textbf{\textit{Test 2}}& \textbf{\textit{Test 3}} \\ \hline Mask R-CNN + SIFT & 0.005&0.005 &0.005 \\ LK tracker & 0.030&0.012 &0.012 \\ \hline \end{tabular} \label{tab1} \end{center} \vspace{-7mm} \end{table} \vspace{-1mm} \subsection{Vibration Measurement of A Shaking Table Test} \vspace{-1mm} Our proposed method was also applied on a shaking table test \cite{quanser_2017} to check its applicability to monitoring the dynamic movement of objects. In this test, there are three rectangles (masses) fixed on the shaking table at different heights (see Fig. \ref{fig:shaking}). Each rectangle, supported by two sticks like a simple structure, has its own resonant frequency in the horizontal direction. This is due to differences in the lateral stiffness of each pair of sticks. The frequency of the applied shaking is increased from 4 Hz to 13.65 Hz to excite these masses and cause their harmonic vibrations. From the recorded video \cite{quanser_2017}, 150 frames are randomly selected from a total of 6,674 frames and labeled for training the Mask R-CNN. The video has an image size of 640$\times$480 and a frame rate of 30 frames per second. SIFT is not applied to smooth the measurements here, since the goal of this test is to detect the frequencies rather than accurate vibration amplitudes, which are in pixel units in this test. Thus, the motion of the bounding box represents the translation of each object. The LK tracker is utilized to verify our method by tracking the same vibration of the shaking table.
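The frequency-extraction step used in this test (filtering followed by FFT) can be illustrated with a plain DFT on a synthetic tracked signal. The sampling rate matches the video frame rate (30 fps), but the signal, its amplitude, and duration are synthetic, and the Butterworth pre-filtering is omitted for brevity.

```python
import math

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) with the largest DFT magnitude (plain O(N^2) DFT)."""
    n = len(signal)
    best_f, best_mag = 0.0, -1.0
    for k in range(1, n // 2):          # skip DC, keep positive frequencies
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
        im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_f, best_mag = k * fs / n, mag
    return best_f

fs = 30.0                               # video frame rate, frames per second
t = [i / fs for i in range(300)]        # 10 s of tracked motion
x = [3.0 * math.sin(2 * math.pi * 4.0 * ti) for ti in t]   # synthetic 4 Hz shaking
print(dominant_frequency(x, fs))
```

With 300 samples at 30 fps the frequency resolution is 0.1 Hz, so a 4 Hz component falls exactly on a DFT bin; in practice an FFT would replace this O(N^2) loop.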
On one hand, all the raw data are processed by the Butterworth filter, and FFT is applied to extract the frequencies for each tracking target. The filtered vibrations of the shaking table with the LK tracker and Mask R-CNN are shown in the left figures of Fig. \ref{fig:table_1}. There are three frequencies of vibration at approximately 4 Hz, 6.35 Hz and 11.35 Hz excited by the table. Both methods capture these frequencies (yellow captions in the right figures of Fig. \ref{fig:table_1}) with less than $2.6\%$ error. On the other hand, the vibrations of the three rectangles are measured by the Mask R-CNN and the raw data are processed following the same procedure as for the shaking table. As shown in Fig. \ref{fig:table_2}, their resonant frequencies are very close to the intended frequencies (i.e., 4 Hz, 6.35 Hz and 11.35 Hz). The error rate for this measurement is $0\%$, $0.3\%$ and $2.6\%$, respectively. This indicates that the proposed Mask R-CNN can be used alone to track multiple objects and capture their vibration characteristics precisely. \vspace{-4mm} \begin{figure}[h!] \centering \includegraphics[width=1\linewidth, height=4.0cm]{./Lk_and_mask_for_table.pdf} \vspace{-8mm} \caption{Vibration and frequencies of the shaking table measured by Mask R-CNN and the LK tracker.} \label{fig:table_1} \end{figure} \begin{figure}[h!] \vspace{-8mm} \centering \includegraphics[width=1\linewidth, height=6.5cm]{./mask_for_reactangles.pdf} \vspace{-8mm} \caption{Vibrations of three rectangles (the highest to the lowest one from top to bottom on the left hand side) measured by Mask R-CNN and the corresponding frequencies calculated by FFT (on the right hand side).} \label{fig:table_2} \vspace{-2mm} \end{figure} \begin{figure*}[h!]
\centering \includegraphics[width=1\linewidth, height=3.cm]{./plot_for_beam_test.pdf} \vspace{-8mm} \caption{The deflection-time relationship measured by a camera and a dial gauge for the three testing beams.} \label{fig:plot} \vspace{-2mm} \end{figure*} \begin{figure*}[h!] \centering \includegraphics[width=1\linewidth, height=3.cm]{./plot_for_MAE1.pdf} \vspace{-8mm} \caption{The filtered deflection-time relationship measured by a camera and a dial gauge for the three testing beams.} \label{fig:MAE} \vspace{-4mm} \end{figure*} \vspace{-6mm} \section{Conclusions} A deep learning method (i.e., Mask R-CNN with HRNet) and techniques such as SIFT and the Savitzky-Golay filter are applied to automatically track the targets and provide accurate measurements of their motions with a stationary camera. In our first experiment, Mask R-CNN and SIFT were used for precise deflection measurement of the tested RC beams, since SIFT can utilize the keypoints on the targets to refine the measurement. Our method produces measurements closer to those from traditional structural sensors and outperforms the LK tracker. The Mask R-CNN was also used alone to track the vibration of multiple targets in a shaking table experiment and capture the resonant frequencies of these targets via the Butterworth filter and FFT. These preliminary tests show that the proposed method is robust and has the potential for measuring displacements and vibrations of structural specimens precisely and automatically in laboratory experiments. Our ongoing work involves application of the proposed method on actual buildings tested in the field to confirm its applicability for outdoor environments. Other deep learning methods are also being explored and tested. \vspace{18mm} \bibliographystyle{ieeetr}
\section{Introduction} \label{S1} The discovery of a relation between the masses of super-massive black holes and the velocity dispersion of the stars in the bulge of their host galaxies \citep{Lynden-Bell1969,Magorrian1998,Ferrarese2000,Haring2004,Gultekin2009,Graham2011} suggests that the formation process of galaxies could be influenced by the active phase of their black holes (BHs). Pushing this idea forward, \citet{Silk1998} developed, in some detail, the concept of energetic AGN gas outflows, which, in principle, could regulate the growth in mass of the bulge of galaxies by quenching their star formation. Over the past twenty years, intense efforts (theoretical and observational) have been invested by the international community to detect and study such AGN outflows (or AGN winds) and better understand the effects (feedback) that they could have on their host galaxies \citep{Harrison2018}. Yet, the results are still controversial and the whole subject open to debate. One of the most important of these debates took place in October 2017 in Leiden, where twenty of the most active researchers in the field met to take stock of the subject.\footnote{The proceedings of the debate were published online on 27 February 2018 at https://doi.org/10.1038/s41550-018-0407-2} The consensus was that, ``...there is currently no strong direct evidence for the impact of AGN on star formation in the overall galaxy population when different approaches and selection effects are taken into account'' \citep{Harrison2017}. However, the discussions of why this could be so were also important. For example, in their intervention \cite{Cresci2018} concluded that although massive outflows in luminous ``active'' galaxies seem ubiquitous \citep[see also][and references therein]{Fiore2017}, observations that they suppress star formation on a large scale are inconclusive.
To date, no global ``shut down'' of star formation has been reported, with observations favoring instead local effects (either quenching or triggering of star formation), and data are usually too scarce to produce meaningful statistics. The authors also added that, in a way, a negative result connecting AGN outflows to the star formation in their hosts was not totally unexpected, considering the different timescales of the two phenomena; the AGN activity happening over a short time period ($\sim 10^8$ yrs), while, once the gas is ejected from the central region of a galaxy, its effect on the interstellar gas could be delayed for a longer interval of time. Actually, assuming long delays, it could even become difficult to distinguish quenching triggered by AGN feedback from secular quenching, namely, a natural decrease of star formation due to the limited gas reservoir of galaxies \citep{Kennicutt1992,Kennicutt1994,Bait2017}. However, and as \cite{Cresci2018} also explicitly recognized, studies over larger samples of galaxies with unbiased star formation tracers are needed, ideally during the peak of the feedback epoch, which should be when the AGN and star formation activities reach their maximum \citep[that is, $1 < z < 2$;][]{Madau1998,Madau2014}. On the other hand, one could add that, assuming delayed feedback, evidence could appear long after the maximum peak of activity, that is, in large samples at lower redshifts. Another problem with AGN feedback is that outflows are multiphase, that is, they appear at different wavelengths, in X-rays, in the optical, in the infrared, and even in the radio \citep{Cicone2018}, and since those bands cover different ranges of temperature and density, corresponding to different regions of the host galaxies, obtaining full coverage is technically demanding. Moreover, the task of integrating all these different observational aspects into one consistent view can be theoretically exacting.
In their intervention \cite{Cicone2018} gave two examples. One was IC 5063, a Seyfert~2 (Sy2) where the outflow manifestations at different wavelengths seem to have similar kinematics and spatial extents, suggesting they are part of the same feedback event. The other one was Mrk 231, an ultraluminous infrared galaxy (ULIRG), which has a complex activity type (a mixture of AGN and starburst) and a perturbed morphology (due to galaxy interactions or mergers), and where no clear interrelations between the different outflow phases could be established. However, these authors were also quick to note that their sample was small and biased toward specific cases, favoring high luminosity AGN with starbursts or galaxies with unusually high molecular (H$_2$) contents, complicating the statistical analysis. What are badly needed, they recommended, are larger samples spanning a wider range of intrinsic population properties: AGN bolometric luminosities, BH masses, and Eddington ratios, in different galaxy hosts (with different morphologies) and, ``...exploring alternate tracers of star formation that can be applied to larger range in redshift''. One remarkable effort to extend the multiphase analysis is the statistical study made by \cite{Fiore2017}, which was based on a compilation, from the literature, of multiwavelength observations of outflows (in 94 galaxies) detected in molecular (CO and OH) and ionized (H$\beta$, [OIII], H$\alpha$, and [CII]) gas, and in X-rays. Among their most robust results, they found for all these outflow phases strong correlations between the mass outflow rate, $\dot{\rm M}_{OF}$, the kinetic power, $\dot{\rm E}_{OF}$, and the bolometric luminosity, although with different slopes (that converge at high luminosity). Another significant result was that the mass loading factor, $\dot{\rm M}_{OF}$/SFR, seemed relatively high compared to starburst galaxies, which the authors suggested could be due to quenching.
However, recognizing their sample was biased toward extreme starbursts and massive galaxies, the authors had to conclude that the connection between outflows and SFR they observed in their small selective sample might not apply to less active and less massive galaxies. This is where the study made by \cite{Woo2016} about the prevalence of outflows in 39,000 type~2 AGNs becomes significant. Using spectra from the Sloan Digital Sky Survey (SDSS) these authors were able to detect [OIII]$\lambda5007$ outflows in as much as 43.6\% of the galaxies in their samples, a remarkably high fraction considering the low resolution of the SDSS spectra ($\sim 69$\ km s$^{-1}$). They also showed that the fraction of detected outflows increases with both the AGN luminosity and the Eddington ratio, from which they concluded that, since they found no connection with the radio luminosity \citep[making a search based on FIRST;][]{Helfand2015}, the outflows in their sample were most probably radiatively launched, consistent with AGN winds. In the present study we extend the search for evidence of outflows based on the [OIII] emission line to Sy1s, namely, type~1 AGNs with a higher average luminosity than Sy2s, Log(L$_{bol}) = 45.0$ in our Sy1 sample compared to Log(L$_{bol}) = 44.4$ in Sy2s up to $z \sim 0.25$ \citep[][]{Torres-Papaqui2012}, but lower than QSOs \citep[Log(L$_{bol}) \ge 45.5$ up to z = 0.3;][]{Coziol2017}, and without evidence of starbursts (that is, no ULIRGs). Our final sample is composed of 3,896 SDSS spectra of Sy1s, with high signal-to-noise ratio (S/N $> 10$) in the continuum and high quality MIR photometry in WISE. From the SDSS spectra we extracted information about the outflow velocity, V$_{max}$, and the AGN characteristics, namely, BH mass, power-law index, bolometric luminosity, and Eddington ratio.
The WISE data are used, on the other hand, to estimate the intensity of star formation in their host galaxies, based on a new calibration that relates their W2-W3 colors to their SFRs. Limiting our observations to low redshifts ($z < 0.4$) we can also determine their morphology, by applying an automatic classification method that uses the SDSS photometric parameters. Putting all these data together allows us to compare the outflows with the AGN and galaxy host characteristics in a meaningful statistical way. The organization of the paper is the following. In Section~\ref{S2} we describe our sample of Sy1s as detected in SDSS and WISE, explain our spectral analysis method, and expound how the outflows were detected and quantified. In Section~\ref{S3} we explain how we determined the main characteristics of the BHs and also describe how the parameters characterizing their host galaxies, SFR and morphology, were estimated. In Section~\ref{S4} we discuss the results of our statistical analysis. Our conclusions can be found in Section~\ref{S5}. In our study all the physical parameters that depend on the proper distance were calculated assuming a $\Lambda$CDM cosmology, adopting the generic parameters: $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{DM} = 0.30$, and $\Omega_{\Lambda} = 0.70$. \section{Data analysis}\label{S2} \subsection{Sample Selection}\label{SS2a} Our spectroscopic sample was obtained from the Sloan Digital Sky Survey Data Release 7 \citep[SDSS DR7;][]{Abazajian2009} by cross-correlating this list with a target list of AGNs identified as Sy1 in the ``Catalog of quasars and active nuclei: 13th edition'', as compiled by \cite{VeronCetty2010}. In this catalog, galaxies are classified as Sy1s when their spectra show prominent broad permitted lines, which is the standard definition \citep{Osterbrock2006}.
Intermediate types identified as Sy1.2 and Sy1.5, where narrow Balmer lines appear over the broad lines (in both H$\alpha$ and H$\beta$), were also included in our sample to complete the panorama of Sy1 galaxies. Other intermediate types where the broad lines are less conspicuous, like the Sy1.9 and Sy1.8, were not included. Based on their definition \citep{Rakshit2017}, there are also no narrow-line Sy1s (NLSy1s) in our sample. As a supplementary selection criterion, we kept only the galaxies with spectra that have a S/N~$\ge 10$ in the continuum and emission lines with S/N~$\ge 3$. Keeping only those entries with redshift $z \leq 0.4$ gave us a preliminary sample of 4,000 Sy1s. To retrieve the MIR data of these AGNs, we cross-correlated their positions, as found in SDSS DR7, with the positions of the entries in the AllWISE Data Release 2012\footnote{http://wise2.ipac.caltech.edu/docs/release/allsky/} \citep{Wright2010}. This was done using the CDS X-Match pipeline in VizieR \citep{Ochsenbein2000}, applying a search radius of 5$\arcsec$ around the position of each galaxy \citep[e.g.,][]{Clemens2013}. Keeping only the matches that have a contamination and confusion flag that is clear ($cc\_flags = 0$) and that have WISE fluxes with quality flags, ``ph\_qual'', equal to A in all the first three bands (W1 $ = 3.353\ \mu {\rm m}$, W2 $= 4.603\ \mu {\rm m}$, W3 $= 11.561\ \mu {\rm m}$), and A or B in the last band (W4 $= 22.088\ \mu {\rm m}$), we obtained high quality photometric data in the MIR for 3,896 Sy1s (97.4\% of our original spectroscopic sample). These Sy1s constitute our final sample. \subsection{Spectral analysis}\label{SS2b} Our spectral analysis can be summarized as follows. First, we applied a correction for Galactic extinction using the extinction map of \citet{Schlegel1998} and the reddening law of \cite{Cardelli1989}.
Then, we corrected for the redshift and rectified the spectra by fitting a two-component template to the continuum, containing: 1) an AGN power-law, $f_{\lambda} \propto \lambda^{\beta}$, and 2) a Fe~{\rm II} template for the multiple iron lines. For this last correction we used the synthetic Fe~{\rm II} template constructed by \citet{Veron2004} using high resolution spectra of {\rm I}~Zw~1, a NLSy1 with strong Fe~{\rm II} lines \citep{Vestergaard2001}. The fitting method consists of scaling and velocity-broadening the Fe~{\rm II} template \citep[a method described in detail in][]{Greene2005}. Since in the Sy1s the continuum is largely dominated by the AGN (we see no stellar features), no correction for the host galaxy was applied. After subtraction of our power-law and Fe~{\rm II} template, the average rms in the residual is 0.03 dex, which is comparable to the uncertainty introduced by this phase of the reduction as estimated by \citet{Greene2005}. Once the continuum was subtracted we searched for evidence of outflows. These usually appear as extended wings to the blue of the core line of [OIII], consistent with a broad component, blue-shifted by a few \AA\ \citep[e.g.,][]{Dunn2010,Sturm2011,Mullaney2013,Woo2016,Perna2017}. However, before working on the oxygen line itself, we fitted and subtracted the H$\beta$ line, eliminating any possible contamination of the blue wing of the [OIII] line (in general, such contamination, once the Fe~{\rm II} lines were eliminated, was found to be negligible). This was done by fitting Gaussian profiles to the Balmer lines, which were then subtracted from the spectra. \begin{figure*} \epsscale{1.15} \plotone{figure01.png} \caption{Examples of our fitting method applied to (a) H$\beta$ and (b) H$\alpha$, distinguishing between the two main Sy1 subgroups (Sy1B and Sy1N). In each panel the different colors identify the different components and the red curve corresponds to their sum (the direct fit in the Sy1B).
In (c) we show two examples of fits on [OIII]: in the upper panel the outflow is spectrally resolved, that is, its center is separated from the core by more than 69 km s$^{-1}$, which is the SDSS spectral resolution, while in the lower panel the outflow is unresolved, its center being separated from the core by less than 69 km s$^{-1}$. \label{f01}} \end{figure*} During this part of the analysis we separated our sample of Sy1s into two: those requiring only one Gaussian were identified as Sy1B (36\% of our sample) whereas those requiring at least two Gaussians (the Sy1.2 and Sy1.5) were identified as Sy1N. Note that we did not fit outflow components to the Balmer lines, since, being limited by the low resolution of the SDSS spectra ($\sim 69$\ km s$^{-1}$), we had no way to constrain such a fit. The fit of the various Gaussian profiles was done automatically by using the Levenberg-Marquardt fitting algorithm \citep{Manquardt1963} MPFIT in IDL \citep{Markwardt2009}. As initial parameters, this routine requires the central wavelengths of the line components and estimates of their amplitudes and dispersions. Iterations are then done to minimize the $\chi^2$ value in the residual, that is, after subtracting the fitted profiles from the observed lines. Two examples of Gaussian fits on H$\beta$ are shown in Figure~\ref{f01}a for the Sy1B (upper panel) and for the Sy1N (lower panel). All the fitted solutions have S/N~$\ge 3$ (the fits do not change the S/N). The fitting routine gives us the flux intensity of each component as well as its FWHM. Note that whenever there is more than one component the routine is allowed to slightly shift each Gaussian peak relative to the systemic wavelength. However, since these shifts are very small compared to the spectral resolution of the SDSS spectra, they have no measurable effect.
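To illustrate this kind of decomposition, the fit can be sketched with SciPy's Levenberg-Marquardt based \texttt{curve\_fit} in place of MPFIT (the synthetic H$\beta$-like profile and starting values below are ours, for illustration only):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def narrow_plus_broad(x, a1, m1, s1, a2, m2, s2):
    # sum of a narrow core and a broad component, as in the Sy1N fits
    return gauss(x, a1, m1, s1) + gauss(x, a2, m2, s2)

# synthetic H-beta profile (illustrative values, not our data)
wave = np.linspace(4761.0, 4961.0, 400)
rng = np.random.default_rng(1)
flux = narrow_plus_broad(wave, 3.0, 4861.0, 3.0, 1.0, 4861.0, 25.0)
flux += rng.normal(0.0, 0.02, wave.size)

# initial guesses for amplitudes, centers, and dispersions, then
# chi-square minimization (Levenberg-Marquardt, as in MPFIT)
p0 = (2.5, 4860.0, 2.0, 0.8, 4860.0, 20.0)
popt, pcov = curve_fit(narrow_plus_broad, wave, flux, p0=p0)

fwhm_broad = 2.3548 * abs(popt[5])   # FWHM = 2 sqrt(2 ln 2) sigma
```

With reasonable starting values the routine recovers the flux, center, and FWHM of each component, exactly the quantities used in our analysis.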
The flux uncertainties computed on the basis of the S/N residuals are lower than 15\%, which is consistent with the mean uncertainty of the flux calibration. To double-check our Gaussian fitting method, the same analysis was applied to H$\alpha$ (see examples in Figure~\ref{f01}b for both Sy1B and Sy1N). For the Sy1N, this means adding two Gaussian profiles in MPFIT to fit the nitrogen doublet, [NII]$\lambda\lambda$6548,6584. Note that the most intense line of the doublet, [NII]$\lambda$6584, is almost always visible in our spectra, since the line ratio [NII]$\lambda$6584/H$\alpha$ is high in AGNs. This means that we can well constrain its position, and the fact that the ratio of the intensities of the doublet must be $I_{NII6548}/I_{NII6584} = 1/3$ \citep{Osterbrock2006} also allows us to constrain the intensity of the weakest line (when too blended to be seen). According to \citet{Greene2005} the FWHMs of the broad line components of H$\beta$ and H$\alpha$ are correlated. We verified this with our data, obtaining the relation: \begin{equation}\label{eq01} {\rm FWHM}_{\rm H\beta} = (1.03\pm 0.06) \times 10^3 \left(\frac{\textrm {FWHM}_{{\textrm H}\alpha}}{10^3\, \textrm {km\, s}^{-1}}\right)^{(1.01\pm 0.03)} {\rm km}\ {\rm s}^{-1} \end{equation} Although our relation is comparable to the one obtained by \citet{Greene2005}, our uncertainties are consistent with a one-to-one relation, which reinforces the conclusion that either line can be used to determine the virial mass of the SMBHs. More important for our study, however, this last result also implies that we have cleanly removed the broad H$\beta$ component, suggesting that any residual structure we detect in [OIII] must be intrinsic to this line.
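As a quick numerical check of Eq.~\ref{eq01} (the function name and input value below are ours):

```python
def fwhm_hbeta_from_halpha(fwhm_ha):
    """FWHM(H-beta) predicted from FWHM(H-alpha), both in km/s (Eq. 1)."""
    return 1.03e3 * (fwhm_ha / 1.0e3) ** 1.01

# a broad H-alpha of 3000 km/s predicts a nearly equal H-beta width,
# consistent with a one-to-one relation within the quoted uncertainties
predicted = fwhm_hbeta_from_halpha(3000.0)   # ~3.1e3 km/s
```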
\subsection{Outflow detection}\label{SS2c} \begin{figure*} \gridline{\fig{figure02a.png}{0.56\textwidth}{(a)} \fig{figure02b.png}{0.3\textwidth}{(b)} } \caption{(a) Tukey box-whisker plots comparing the FWHMs of the cores of the [OIII] lines in the four subgroups of Sy1s. (b) Results for the stacking of the [OIII] lines in the Sy1B and Sy1N without resolved outflows. In green we traced the line core, and in blue a broad outflow component, the sum of which is shown in red. \label{f2}} \end{figure*} In this section we concentrate on searching for outflows in [OIII]. To quantify our search, we systematically fitted two Gaussian components: ($G$ = $G_C$ + $G_{OF}$), centering the narrow core ($G_C$) at 5007 \AA\ and fitting the broad outflow component ($G_{OF}$), observed a few \AA\ to the blue of this position. Note that our fitting method is limited by the resolution of the SDSS spectra, since we can only securely fit (no degeneracy) a broad component when the peak of the Gaussian is blue-shifted by more than 69 km s$^{-1}$. In Figure~\ref{f01}c in the upper panel we show one example of such a resolved outflow, while in the lower panel we show one example of an unresolved solution. Taking this limit into account we found clear evidence of outflows, namely, spectrally resolved outflows, in 37\% of the total sample: 39\% in the Sy1N and 35\% in the Sy1B, identifying these cases as Sy1Nw and Sy1Bw, respectively. Since the frequencies of resolved outflows in the Sy1B and Sy1N are comparable, we can conclude that whatever causes this spectral difference, it has apparently no effect on the production of an outflow. From our parametric decomposition analysis, it is obvious that the low resolution of the SDSS spectra limits the detection of outflows \citep[as also noted by][]{Woo2016}.
Actually, comparing in Figure~\ref{f2}a the FWHMs of the cores of the oxygen lines in the four subgroups, it seems clear that spectrally unresolved outflows must also be present, since the FWHMs are systematically larger in the Sy1B and Sy1N than in the Sy1Bw and Sy1Nw, where the outflow components were subtracted. To verify the statistical significance of the differences observed, a non-parametric Kruskal-Wallis ANOVA test with Dunn's multiple comparisons test was performed.\footnote{All the statistical tests used in this article were done using GraphPad Prism version 6.00 for Mac OS X, GraphPad Software, La Jolla California USA, www.graphpad.com; a description of each test can be found on their exhaustive guide page: https://www.graphpad.com/guides/prism/8/statistics/index.htm.} The Kruskal-Wallis test is a non-parametric test: it does not assume Gaussian distributions (normality was indeed rejected by three different tests applied to our samples), but assumes their shapes are similar, which is the case for the data in our samples. The p-value answers the question (the null hypothesis): if the groups are sampled from populations with identical distributions, what is the chance that random sampling would result in a sum of ranks as far apart as observed in this experiment? This implies that when the p-value is small (we choose a level of significance $\alpha = 0.05$), one can conclude that the populations have different distributions. Dunn's multiple comparisons test, then, compares the difference in the sum of ranks between two groups with the expected average difference (based on the number of groups and their size). The p-value takes into account the number of comparisons. If the null hypothesis is true (all differences between the groups are due to random sampling), then there is a 5\% chance that at least one of the post tests will have p $< 0.05$, the 5\% chance applying to the entire family of comparisons.
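The rank-based logic of the Kruskal-Wallis statistic can be sketched in a few lines (a simplified version assuming no tied values; the toy samples are ours):

```python
def kruskal_wallis_H(*groups):
    """Kruskal-Wallis H statistic (simplified: assumes no tied values)."""
    pooled = sorted(x for g in groups for x in g)
    rank = {x: i + 1 for i, x in enumerate(pooled)}   # ranks 1..N
    n_tot = len(pooled)
    # H compares the per-group rank sums with their expected values
    h = sum(sum(rank[x] for x in g) ** 2 / len(g) for g in groups)
    return 12.0 / (n_tot * (n_tot + 1)) * h - 3.0 * (n_tot + 1)

# three toy samples; H is compared to the chi-square critical value
# with k - 1 = 2 degrees of freedom at alpha = 0.05, i.e., 5.991
H = kruskal_wallis_H([1, 2, 3, 4, 5], [6, 7, 8, 9, 10], [11, 12, 13, 14, 15])
# H = 12.5 > 5.991: the three distributions differ significantly
```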
To ease the interpretation of the p-values, we used the significance scale described in Table~\ref{STAT}. \begin{deluxetable*}{ccc}\label{STAT} \tablecaption{Significance levels of statistical tests} \tablehead{ \colhead{p-value} & \colhead{Interpretation} &\colhead{Summary} } \startdata \hspace*{1.3cm} $p < 0.0001$ & Extremely significant & **** \\ $0.0001\le p < 0.001$ & Highly significant & *** \\ $0.001\le p < 0.01$ & Very significant & ** \\ $0.01\le p < 0.05$ & Significant & * \\ \hspace*{0.9cm} $p \ge 0.05$ & Not significant & ns \\ \enddata \end{deluxetable*} Applying the statistical test to the FWHMs of the cores of the [OIII] lines in Figure~\ref{f2}a, extremely significant differences are confirmed between the Sy1s with and without resolved outflows (the groups Sy1B-Sy1Bw and Sy1N-Sy1Nw). To put the hypothesis of unresolved outflows on a more robust observational footing, we stacked the [OIII] spectra in the Sy1B and Sy1N subgroups. The results are shown in Figure~\ref{f2}b. In each case, we were able to fit a broad component consistent with an outflow with a S/N $\sim 60$. This result suggests that outflows are ubiquitous in Sy1s. The Gaussian fit for the broad component of the [OIII] line gives us two parameters that are important to qualify outflows as AGN winds \citep{Karouzos2016}: the drifting velocity, $V_{OF}$ (positive), that is, how much the Gaussian peak is shifted to the blue (after correcting for resolution), and the velocity dispersion of the line, $\sigma_{OF}$. Following \citet{Rupke2013} and \citet{Fiore2017} we define the maximum velocity of the wind as: \begin{equation}\label{Vmax} {\rm V}_{max} = {\rm V}_{OF} + (2 \times \sigma_{OF})\ {\rm km}\ {\rm s}^{-1} \end{equation} In Figure~\ref{f3}a, we compare V$_{max}$ in the Sy1Bw and Sy1Nw. There seems to be a difference in the distributions, the Sy1Bw having slightly higher V$_{max}$ than the Sy1Nw.
Performing a Mann-Whitney test, a non-parametric test that is mostly sensitive to a change in the median, the difference is not confirmed as significant. However, applying a Kolmogorov-Smirnov test, another non-parametric test that is sensitive to any difference between the two distributions (in particular shape or spread), the difference is highly significant, with a p-value of 0.0002. Considering the definition of V$_{max}$ (Eq.~\ref{Vmax}), a difference in the velocity dispersion of the line, $\sigma_{OF}$, as shown in Figure~\ref{f3}b, explains this difference. This time a Mann-Whitney test finds the difference in $\sigma_{OF}$ to be extremely significant. Finding a possible difference in $V_{max}$ is interesting, because in terms of the AGN wind model it could suggest different physical conditions in the ISM of the Sy1B and Sy1N \citep{KingPounds2015}. In particular, if the Sy1N have more gas in their narrow line regions (NLRs) than the Sy1B, then the outflows could have had more difficulty expanding (because of the higher density), which would explain the lower wind velocities. \begin{figure*} \gridline{\fig{figure03a.png}{0.5\textwidth}{(a)} \fig{figure03b.png}{0.5\textwidth}{(b)} } \gridline{ \fig{figure03c.png}{0.5\textwidth}{(c)} } \caption{ (a) Tukey box-whisker plots comparing the velocity of the outflows, $V_{max}$, in the Sy1Bw and Sy1Nw. (b) Tukey box-whisker plots comparing the velocity dispersion, $\sigma_{OF}$, in the Sy1Bw and Sy1Nw. (c) BPT-VO diagnostic diagram for the Sy1Nw and Sy1N. The different regions identify galaxies with NLRs excited by different sources: SFG, star forming galaxy; TO, transition type object, excited by both an AGN and star formation; Sy2, excited by an AGN; LINER, excited by a low luminosity AGN. Note that a correction was applied to each of the line ratios by estimating the amount of internal extinction based on the ${\rm H}\alpha/{\rm H}\beta$ ratio and using an extinction law with $R_V = 3.1$ \citep{Cardelli1989}.
\label{f3}} \end{figure*} \begin{deluxetable*}{ccccccc}\label{Ap} \tablecaption{Size of regions covered by the aperture at different redshifts} \tablehead{ \colhead{z} & \colhead{scale} &\colhead{diameter} & \colhead{Sy1B} & \colhead{Sy1Bw} & \colhead{Sy1N} & \colhead{Sy1Nw} \\ \colhead{ } & \colhead{(kpc/$\arcsec$)} &\colhead{(kpc)} & \colhead{(\%)} & \colhead{(\%)} & \colhead{(\%)} & \colhead{(\%)} } \decimalcolnumbers \startdata 0.1 & 1.8 & 5.5 & 5 & 4 & 13 & 9 \\ 0.2 & 3.3 & 9.9 & 30 & 24 & 39 & 29 \\ 0.3 & 4.4 & 13.4 & 41 & 45 & 36 & 36 \\ 0.4 & 5.4 & 16.1 & 24 & 28 & 12 & 25 \\ \enddata \end{deluxetable*} Although we cannot check for a difference in the NLRs of the Sy1N and Sy1B, we can check whether the presence of an outflow affects the physical state of the gas in the NLR. We do this by tracing the BPT-VO diagram \citep{Baldwin1981,Veilleux1987} in Figure~\ref{f3}c, which is used to define the level of excitation of the gas in the NLR and determine the main source of this excitation, either AGN or star formation. In Figure~\ref{f3}c we can see that the line ratios in the Sy1N and Sy1Nw are similar to those observed in Sy2s. Taken at face value, this result is consistent with the standard AGN unification model, which states that Sy1s and Sy2s have similar engines \citep{Antonucci1985, Antonucci1993, Urry1995}. But more relevant for our analysis, the fact that we see no difference in the line ratios of the Sy1N and Sy1Nw suggests that the presence of an outflow has no effect on the level of excitation of the gas in the NLR \citep[in particular, LINER-like line ratios would have been expected for shocks with AGN winds;][]{Veilleux1987}. At this point of our study, it seems necessary to examine the possible impact of the aperture on our results. The aperture of the SDSS fiber has a diameter of $\sim 3\arcsec$ in the sky.
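The projected scales listed in Table~\ref{Ap} follow from the angular-diameter distance in the adopted cosmology, and can be reproduced with a minimal numerical sketch (the function name is ours):

```python
import numpy as np

H0, OM, OL = 70.0, 0.30, 0.70        # adopted cosmology
C_KMS = 299792.458                   # speed of light in km/s

def kpc_per_arcsec(z, n=20001):
    """Projected scale at redshift z in a flat LCDM cosmology."""
    zs = np.linspace(0.0, z, n)
    inv_e = 1.0 / np.sqrt(OM * (1.0 + zs) ** 3 + OL)
    # comoving distance via trapezoidal integration (Mpc)
    d_c = (C_KMS / H0) * np.sum(0.5 * (inv_e[1:] + inv_e[:-1]) * np.diff(zs))
    d_a = d_c / (1.0 + z)            # angular-diameter distance (Mpc)
    return d_a * 1.0e3 * np.pi / (180.0 * 3600.0)   # kpc per arcsec

# the ~3 arcsec SDSS fiber covers ~5.5 kpc at z = 0.1,
# matching the first row of the aperture table
fiber_diameter_kpc = 3.0 * kpc_per_arcsec(0.1)
```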
As the redshift increases, the spatial regions covered by the spectra increase, and thus different parts of the galaxies are sampled: mostly the bulge at low redshifts, but larger portions of the disks at higher redshifts. To quantify this change, we compiled in Table~\ref{Ap} the projected diameters of these regions based on the cosmology used in our study.\footnote{Values calculated using Ned Wright's Cosmology Calculator: http://www.astro.ucla.edu/~wright/CosmoCalc.html} From this table, we can deduce that even at low redshifts the spectra cover regions of the order of a kpc, that is, already extending into the NLR. In order to see if these changes had any effect on the Sy1 subgroups (for example, favoring the detection of Sy1N at high redshifts, or influencing the detection frequency of outflows), we calculated in the last four columns the fraction of galaxies at different redshifts in each spectral subgroup. From this table we can infer that there are no significant differences between these distributions, which suggests that the finite aperture of the fiber cannot explain the differences we observe between the Sy1 subgroups. In other words, these differences are most probably real, due to different physical characteristics. \section{Characterization of the AGNs and their hosts} \label{S3} \subsection{Determining the BH characteristics}\label{SS3a} \begin{figure*} \gridline{\fig{figure04a.png}{0.35\textwidth}{(a)} \fig{figure04b.png}{0.35\textwidth}{(b)} } \gridline{\fig{figure04c.png}{0.35\textwidth}{(c)} \fig{figure04d.png}{0.35\textwidth}{(d)} } \gridline{\fig{figure04e.png}{0.4\textwidth}{(e)} } \caption{Tukey box-whisker plots comparing in the four subgroups of Sy1s (a) the black hole mass, (b) the AGN luminosity, (c) the exponent of the power-law of the continuum, (d) the Eddington ratio, and (e) the FWHM of the broad component of the H$\beta$ line.
\label{f4}} \end{figure*} To characterize the AGNs we measured three important parameters: the luminosity of the continuum at 5100 \AA, L$_{AGN}$, the power-law index, $\beta$, and the mass of the BH, M$_{BH}$ \citep{Greene2005}: \begin{equation}\label{BH} \left(\frac{{\rm M}_{BH}}{10^6\ \textrm {M}_{\odot}}\right) = (3.6\pm 0.2) \left(\frac{{\rm L}_{AGN}}{10^{44}\, \textrm {ergs\, s}^{-1}}\right)^{0.56\pm 0.02} \left(\frac{{\rm FWHM}_{\rm H\beta}}{10^3\, \textrm {km\, s}^{-1}}\right)^2 \end{equation} where FWHM$_{{\rm H}\beta}$ is the full width at half maximum of the fitted broad Gaussian in H$\beta$. Note that before using Eq.~\ref{BH}, we also applied a K-correction on the luminosity \citep{Weedman1988}, ${\rm L}_{\nu_e} = {\rm L}_{\nu_o} (\nu_o/\nu_e)^\alpha ={\rm L}_{\nu_o} (1 + z)^{-\alpha}$, where we assumed $\alpha = -0.5$, similar to QSOs at low redshifts \citep[the majority being radio quiet;][]{Coziol2017}. From these measurements we also derived two other important parameters: the bolometric luminosity, using the relation L$_{bol} = 9.8 \times {\rm L}_{AGN}$ \citep{McLure2004}, and the Eddington ratio: \begin{equation}\label{N_Edd} {\rm N}_{Edd} = \log({\rm L}_{bol}/{\rm L}_{Edd}) \end{equation} where L$_{Edd}$ is the Eddington luminosity. In Figure~\ref{f4}a we compare M$_{BH}$ in the different Sy1 subgroups. The Sy1Bw and Sy1B have more massive black holes than the Sy1Nw and Sy1N. The mean values are reported in Table~\ref{stat1}. It is important to note that this difference is only related to the spectral groups, B vs. N, independently of the outflow. The statistical significance of these differences was confirmed using a Kruskal-Wallis test with Dunn's multiple comparisons test, as summarized in Table~\ref{stat2}. In Figure~\ref{f4}b we observe a continuous decline of L$_{AGN}$ along the sequence Sy1Bw:Sy1B:Sy1Nw:Sy1N.
The notable difference here is the fact that the two Sy1 subgroups with resolved outflows (Sy1Bw and Sy1Nw) have higher luminosities than those without a resolved outflow (also true on average in Table~\ref{stat1}). These differences establish a direct connection between the outflow and the bolometric luminosity, which is related to the AGN activity. Once again, a Kruskal-Wallis test with Dunn's multiple comparisons test confirmed that the differences are statistically significant (Table~\ref{stat2}). \begin{deluxetable*}{lccccccccc} \tablecaption{Mean characteristics of the Sy1s with and without outflows. \label{stat1}} \tablewidth{0pt} \tablehead{ \colhead{Sy1} & \colhead{Sample} & \colhead{\% of } &\colhead{V$_{max}$} & \colhead{$\log ({\rm M}_{BH})$} & \colhead{$\log ({\rm L}_{bol})$} & \colhead{$\beta$} & \colhead{N$_{Edd}$} & \colhead{T} & \colhead{$\log({\rm SFR})$} \\ \colhead{Subgroups} & \colhead{sizes} & \colhead{total} &\colhead{(km s$^{-1}$)} & \colhead{(M$_\odot$)} & \colhead{(ergs s$^{-1}$)} & \colhead{} & \colhead{} & \colhead{} & \colhead{(M$_\odot$ yr$^{-1}$)} } \decimalcolnumbers \startdata Sy1Bw & 483 & 12.4 & 1099 & 8.11 & 45.1 & -0.766 & -1.11 & 2.08 & -0.054 \\ Sy1B & 905 & 23.2 & \nodata & 8.09 & 45.0 & -0.833 & -1.20 & 2.08 & -0.099 \\ Sy1Nw & 974 & 25.0 & 1052 & 7.95 & 44.9 & 0.068 & -1.17 & 3.16 & 0.004 \\ Sy1N & 1534 & 39.4 & \nodata & 7.92 & 44.8 & 0.198 & -1.23 & 3.07 & -0.026 \\ \enddata \end{deluxetable*} \begin{deluxetable*}{lccccccc} \tablecaption{Summaries of Dunn's multiple comparisons test \label{stat2}} \tablewidth{0pt} \tablehead{ \colhead{Pairs} & \colhead{M$_{BH}$} & \colhead{ L$_{AGN}$} & \colhead{$\beta$} & \colhead{N$_{Edd}$} & \colhead{T} & \colhead{SFR} & \colhead{FWHM H$\beta$} } \decimalcolnumbers \startdata Sy1Bw vs. Sy1B & ns & **** & ns & **** & ns & **** & * \\ Sy1Bw vs. Sy1Nw & **** & **** & **** & ** & **** & **** & * \\ Sy1Bw vs. Sy1N & **** & **** & **** & **** & **** & ** & ns \\ Sy1B vs. Sy1Nw & **** & **** & **** & ns & **** & **** & ****\\ Sy1B vs. Sy1N & **** & **** & **** & * & **** & **** & * \\ Sy1Nw vs. Sy1N & ns & **** & * & **** & ns & *** & ****\\ \enddata \tablecomments{Star code explained in Table~\ref{STAT}.} \end{deluxetable*} Considering the difference in luminosity, the first result, namely that the BH masses are comparable in Sy1s within the same spectral group (B or N), seems less trivial than first thought. This is because in the equation for the BH mass (Eq.~\ref{BH}) two parameters must balance to make the BH masses equal in the subgroups. In particular, this implies that the FWHM in the Sy1s with outflows must be smaller than the FWHM in their counterparts without an outflow. Although very weak in Figure~\ref{f4}e, the trend seems to be there. In Table~\ref{stat2}, the results reported for Dunn's post test confirm that, except for the groups Sy1Bw-Sy1N, all the differences are statistically significant, the difference between the Sy1Nw and Sy1N being more obvious than between the Sy1Bw and Sy1B. Finding such a difference was not expected, and the physical reason why this happens is unclear, since this would be related to differences in the structures of the broad line regions (BLRs). In terms of the Virial theorem (the basis of Eq.~\ref{BH}), smaller BLRs, closer to the nucleus, would produce larger FWHMs, which suggests that outflows pushing the gas further out would produce more extended BLRs and hence smaller FWHMs. Note, however, that we do not observe evidence of outflows in the broad Balmer components (due to our low spectral resolution), and the observation of smaller FWHMs in Sy1s with outflows runs contrary to what we observed in [OIII] (where the FWHM increases with the presence of an unresolved outflow). However, the effect of outflows on small scales (below a pc for the BLRs) is not necessarily expected to be similar to that observed on larger scales (pc to kpc for the NLRs).
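For reference, the virial mass of Eq.~\ref{BH}, the K-correction, and the Eddington ratio of Eq.~\ref{N_Edd} can be sketched numerically (the function names and example values are ours):

```python
import math

def bh_mass(l5100, fwhm_hb, z, alpha=-0.5):
    """Virial BH mass in solar masses (Eq. 3), with the K-correction
    L_e = L_o (1+z)^(-alpha) applied to the 5100 A luminosity (erg/s)."""
    l_k = l5100 * (1.0 + z) ** (-alpha)
    return 3.6e6 * (l_k / 1.0e44) ** 0.56 * (fwhm_hb / 1.0e3) ** 2

def eddington_ratio(l5100, mbh):
    """N_Edd = log(L_bol / L_Edd), with L_bol = 9.8 L_5100 and
    L_Edd = 1.26e38 (M_BH / M_sun) erg/s."""
    return math.log10(9.8 * l5100 / (1.26e38 * mbh))

# e.g., L_5100 = 1e44 erg/s and FWHM(H-beta) = 3000 km/s at z ~ 0
mbh = bh_mass(1.0e44, 3000.0, 0.0)      # ~3.2e7 M_sun
nedd = eddington_ratio(1.0e44, mbh)     # ~ -0.6
```

These toy values fall in the range of the means reported in Table~\ref{stat1}.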
In Figure~\ref{f4}c we compare the power-law index, $\beta$. As with $M_{BH}$, the main differences are between the spectral groups, B vs. N. In Table~\ref{stat2}, Dunn's post test finds no significant difference between the Sy1B and Sy1Bw, and while the difference between the Sy1N and Sy1Nw is statistically significant, it is at the lowest level. Consequently, this result confirms that the relevant physical differences are between the groups B and N: in general, the Sy1B have harder continua, which means they emit more UV photons than the Sy1N, a characteristic that does not depend on the presence of an outflow. Finally, in Figure~\ref{f4}d we compare the Eddington ratios, N$_{\rm Edd}$. What is remarkable here is the fact that the Sy1s with resolved outflows (Sy1Bw and Sy1Nw) have higher ratios than those without an outflow (Sy1B and Sy1N). This particular trait of the Sy1s with resolved outflows explains why Dunn's post test in Table~\ref{stat2} finds no significant statistical difference between the Sy1B and Sy1Nw, despite the former having a higher L$_{AGN}$ than the latter (note that the difference between the Sy1B and Sy1N is also at the lowest level of statistical significance). This result directly connects the presence of outflows to higher AGN activity. It implies that the Sy1Bw and Sy1Nw have higher luminosities for a given BH mass, which can only mean they have higher accretion rates ($\dot{\rm m}_{acc}$) than the Sy1B and Sy1N (since, assuming $\eta$ is the same, L$_{AGN} = \eta\, \dot{\rm m}_{acc} c^2$). \subsection{Determining the star formation rates} \label{SS3b} \begin{figure*} \gridline{\fig{figure05a.png}{0.45\textwidth}{(a)} \fig{figure05b.png}{0.45\textwidth}{(b)} } \caption{MIRDD for the Sy1s distinguishing between those with resolved outflows (a), and those with unresolved outflows (b).
The different regions, lines, and values are determined as in \citet{Coziol2014}; the diagram makes it possible to distinguish between AGN and star forming galaxies, based on low or high levels of star formation. \label{f5}} \end{figure*} The above comparisons show clear evidence of physical differences between the Sy1s with and without resolved outflows that are consistent with AGN winds: radiatively launched outflows related to higher accretion rates \citep{KingPounds2015}. Therefore, it would be natural to also expect differences in SFRs due to AGN feedback, in particular, a quenching effect of star formation in their galaxy hosts. Consequently, our main goal in computing the SFR and morphology of the AGN hosts in this section is to determine whether the SFRs of the galaxies with an outflow in our sample are somewhat peculiar relative to their morphology. To determine the SFR we used the W2 and W3 colors in WISE. The idea came from \citet{Coziol2014}, where a new diagnostic diagram separating AGN from SFGs, in a way similar to what can be achieved using the BPT-VO diagram, was constructed by combining the W3$-$W4 and the W2$-$W3 colors. As shown in that study, the working principle of this MIR diagnostic diagram (MIRDD) turned out to be the sensitivity of the W2$-$W3 color to the level of star formation in galaxies. This sensitivity was observed before and explained based on the specific MIR SEDs of star forming galaxies in \citet[][]{Jarrett2013}. A reddening of MIR colors, consistent with an increase of star formation, was also found to be a common trait of type~1 AGNs at high redshifts \citep[e.g.,][]{Donoso2012,Delvecchio2014}. One advantage of the MIRDD over the BPT-VO diagram is that we can compare on the same scale the level of star formation in narrow-emission line galaxies (NELGs) with the level of star formation in broad-line AGNs \citep{Coziol2015}. In Figure~\ref{f5}, we show the MIRDD for the Sy1s in each subgroup.
The color distributions of the subgroups are almost identical. Based on the analysis done by \citet{Coziol2015}, the Sy1 host galaxies would have intermediate SFRs, lower than in the Sy2s (on the high SF side) but higher than in LINERs (on the low SF side). Comparing the different Sy1 subgroups in our sample, there is a definite trend for the Sy1Bw and Sy1B to be bluer in W2$-$W3 than the Sy1Nw and Sy1N. In terms of SFR this difference is consistent with slightly higher SFRs in the Sy1Nw and Sy1N than in the Sy1Bw and Sy1B. Once again, we find in the case of the WISE colors a specific difference that seems to depend only on the spectral distinction, B vs. N. Could this difference, therefore, be an observational artefact, due, for example, to predominantly bright MIR BHs in the B-type subgroups? Comparing the SEDs of QSOs and starburst galaxies \citep[e.g.,][]{Leipski2014}, such dust-hidden BHs would increase the flux in W2, reducing the difference with W3 and making the W2$-$W3 color slightly bluer (moving toward the left in the MIRDD). However, to increase the W2 flux in this way implies adding a lot of dust in the BLRs of the Sy1B and Sy1Bw, which contradicts our observation, according to the difference in $\beta$, that these galaxies produce a higher number of UV photons than the Sy1N and Sy1Nw. On the other hand, an increase in star formation in the Sy1s in the N group would increase the flux in W3, making their W2$-$W3 colors redder than those in the B group (moving toward the right in the MIRDD). Considering the possible MIR SEDs, the hypothesis of higher SFRs in the Sy1N and Sy1Nw than in the Sy1B and Sy1Bw seems much more probable. A more relevant question then is, how much higher? Using the SFRs measured in NELGs by \citet{Coziol2015}, which were established based on stellar population synthesis, we can calibrate the W2$-$W3 color in terms of SFR.
In Figure~\ref{f6}a, a linear regression yields the relation: \begin{equation}\label{eq04} \log\left(\frac{\textrm{SFR}}{\textrm{M}_{\odot}/\textrm{yr}}\right) = (0.52\pm 0.03) \cdot ({\rm W2-W3}) - (1.46\pm 0.07) \end{equation} This linear regression has an $r^2 = 0.89$ and a p-value of 0.0044. A statistical test finds no significant deviation from linearity (p-value = 0.4000). Comparing with the literature, Eq.~\ref{eq04} yields values that are in very good agreement with the SFRs estimated using other methods, in the optical and UV \citep[][]{Kennicutt2012}, in the NIR \citep[][]{Delvecchio2014} and based on SED fitting \citep[][]{Bait2017}. In Figure~\ref{f6}a, our calibration suggests that the SFR increases by a factor of 5 along the sequence LINER:Sy2:TO:SFG. In Figure~\ref{f6}a, we also added vertical bars indicating the median MIR colors in the different Sy1 subgroups. These medians imply that the SFR increases along the sequence Sy1B:Sy1Bw:Sy1N:Sy1Nw, but only by small amounts. This is also confirmed in Figure~\ref{f6}b, where we compare the distributions of SFR in the subgroups. A Kruskal-Wallis ANOVA test with Dunn's multiple comparisons test confirms the statistical significance of all the differences observed (see Table~\ref{stat2}). From the mean values reported in Table~\ref{stat1} we deduce that, on average, the Sy1B, the Sy1Bw and the Sy1N have SFRs that are 21\%, 13\% and 7\% lower, respectively, than in the Sy1Nw. More relevant for our study, however, the Sy1s with resolved outflows are found to have systematically higher SFRs than those without resolved outflows. This last result implies that in the Sy1s with resolved outflows both the AGN and star formation activities are high, which seems inconsistent with quenching. However, a clear interpretation in terms of AGN feedback is not possible until we can compare these SFRs with the morphology of their hosts.
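As an illustration, the calibration of Eq.~\ref{eq04} can be applied with a few lines of code; the function name and the example color are ours, and the uncertainties on the coefficients are ignored here:

```python
import numpy as np

# Hypothetical helper applying the central values of Eq. (4):
# log10(SFR / (Msun/yr)) = 0.52 * (W2 - W3) - 1.46
def log_sfr_from_w2w3(w2_w3, slope=0.52, intercept=-1.46):
    """Return log10(SFR) in Msun/yr from the WISE W2-W3 color (mag)."""
    return slope * np.asarray(w2_w3, dtype=float) + intercept

# Example: a W2-W3 color of ~2.81 mag corresponds to SFR ~ 1 Msun/yr.
sfr = 10 ** log_sfr_from_w2w3(2.81)
```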
\begin{figure*} \gridline{\fig{figure06a.png}{0.49\textwidth}{(a)} \fig{figure06b.png}{0.48\textwidth}{(b)} } \caption{SFR in the Sy1s. In (a) we show our calibration in terms of SFR of the W2$-$W3 colors of the NELGs; the points are the medians and the error bars the Tukey's whiskers (the farthest points where the data are not outliers). The vertical bars are the median MIR colors in the Sy1 subgroups: dashed blue, Sy1B; continuous blue, Sy1Bw; dashed red, Sy1N; continuous red, Sy1Nw. In (b) we show the Tukey box-whisker plots comparing the SFR in the four subgroups of Sy1s. \label{f6}} \end{figure*} \subsection{Determining the morphology}\label{SS3c} To determine the morphology, we first tried to classify the galaxies by eye, examining their SDSS images. However, this exercise proved inconclusive: although a great majority showed extended structures, consistent with spiral galaxies, discriminating what type of spiral (early or late) seemed impossible. For comparison, we cross-correlated the positions of the galaxies in our sample with those classified in the Galaxy Zoo project \citep{Lintott2011}. We could only recover 30\% of the galaxies in our sample, all at low redshift ($z < 0.15$). Moreover, for 75\% of the recovered Sy1Bw/Sy1B and 76\% of the Sy1Nw/Sy1N, the morphologies reported by the observers in the Galaxy Zoo were judged unclassifiable, although they could estimate a 54\% probability for these cases to be early-type. The few that were classifiable were estimated to be spirals, with only 5\% ellipticals (E). These results convinced us to try another method based on SDSS photometry, which was developed by \citet{Shimasaku2001} and \citet{Fukugita2007}. After downloading the photometric parameters from SDSS,\footnote{http://casjobs.sdss.org} examination of the data showed that the information for our whole sample was complete and of high quality (qualified in SDSS as detected $\ge 5$ sigma in the original image).
For each Sy1 in our sample we found the following photometric data: the colors $u-g$, $g-r$, $r-i$, and $i-z$ and the inverse concentration index $ C = r_{50}/r_{90}$, which is equivalent to comparing the Petrosian ratio, $R_p$, at 50\% and 90\% surface brightness ratios \citep{Blanton2001,Yasuda2001}. As was shown by \citet{Fukugita2007}, the inverse concentration index is tightly correlated with the morphological type, while the variations in SDSS colors are consistent with variations of star formation in spirals and color gradients in early-type galaxies, as reproduced by synthetic stellar models and SEDs. In \citet{Torres-Papaqui2012} we already used the photometric method to determine the morphology of NELGs up to $z \leq 0.25$, finding a high level of consistency with our stellar population synthesis study. Assuming, therefore, that the morphology of the Sy1s in our sample (and thus their SEDs) did not evolve over the time interval consistent with $z \sim 0.4$ ($\sim 4.3$\ Gyrs), applying a proper K-correction \citep[e.g.,][]{Blanton2007} would allow us to estimate the morphology of our galaxies. Our expectation, based on what we know of Sy1s at low redshifts, is that the majority should turn out to be early-type (big bulge) spirals \citep{Chen2017}. Our method to determine the morphology can be summarized as follows. First we correct the magnitudes for Galactic extinction and apply the K-correction determined by \citet{Blanton2007}, which was developed specially for the SDSS photometric system. Then, for each galaxy, we automatically determine weights for the five photometric parameters enumerated above, based on how close their values are to the characteristic values in galaxies with different morphologies.
Finally, we calculate the weighted mean of these parameters to assign a morphological classification to the galaxy on a discrete scale (identified as T), which ranges from 0 to 6 (0 = E and 6 = Irr; see Table~\ref{table_T}), with an uncertainty of T $ \pm\ 0.5$. One remaining concern in employing the photometric method for the Sy1s is the possible influence of a bright AGN. Three possible effects could be expected: 1) at low redshifts, with the photometric aperture covering only the central region of the galaxies, the broad component in emission may affect the colors, making them appear bluer than they really are, which would skew our classification toward late-type spirals; 2) a bright AGN in the nucleus of a galaxy might also make it look more compact, this time favoring earlier morphological types; 3) as the broad components of the emission lines pass from one band to another in the red, at certain redshifts the colors could abruptly change; for example, H$\beta$ passes from the $g$ to the $r$ band around $z \sim 0.13$, making the $g-r$ redder, and H$\alpha$ passes from the $i$ to the $z$ filter around $z \sim 0.25$, making the $r-i$ redder and $i-z$ bluer. We did observe on average bluer $u-g$ and $g-r$ colors (by 0.3 and 0.2 mag, respectively) for the Sy1B and Sy1Bw compared to the Sy1N and Sy1Nw, which might bias our classification toward late-type spirals for the former. We also observed the expected change in $r-i$ and $i-z$ colors as H$\alpha$ changes from one band to the next with redshift, but only for the Sy1B and Sy1Bw above $z \sim 0.25$. However, what the effect on our classification could be is not clear. On the other hand, these effects were observed before the K-correction, which, once applied, significantly reduced the variations between the subgroups.
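The weighted-mean classification described above can be sketched as follows; the characteristic colors and the inverse-distance weighting below are illustrative placeholders, not the actual calibration of \citet{Shimasaku2001} and \citet{Fukugita2007}:

```python
import numpy as np

# T scale in half steps: 0 = E ... 6 = Irr.
T_GRID = np.linspace(0.0, 6.0, 13)

# Placeholder characteristic values of (u-g, g-r, r-i, i-z, C) per T bin,
# interpolated linearly between assumed early-type and late-type values.
CHARACTERISTIC = np.linspace([1.9, 0.8, 0.4, 0.3, 0.33],
                             [1.0, 0.3, 0.1, 0.0, 0.45], 13)

def classify_T(params):
    """Weighted mean of T: each photometric parameter votes for the T
    bins whose characteristic value is closest to the observed one."""
    dist = np.abs(CHARACTERISTIC - np.asarray(params, dtype=float))
    weights = (1.0 / (dist + 1e-3)).sum(axis=1)  # inverse-distance weights
    return float(np.sum(T_GRID * weights) / np.sum(weights))
```

An early-type set of parameters then maps to a small T and a late-type set to a large T, with the weighting smoothing over the $\pm 0.5$ uncertainty of the discrete scale.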
As for the concentration parameter $C$, we observed no differences between the subgroups, all the values being consistent with early-type spirals (Sa/Sb) and staying constant as the redshift increases. Note that the absence of variation of $C$ with redshift for galaxies with similar morphology was a criterion used by \citet{Shimasaku2001} and \citet{Fukugita2007} to legitimize their photometric method. Similarly, we saw no variation of the $b/a$ parameter with the redshift ($b$ and $a$ being the semi-minor and semi-major axes, respectively), which suggests that most galaxies in our sample are seen face-on ($b/a \sim 0.8$). For Sy1s, this is also consistent with the standard AGN unification model, although one cannot assume that there is necessarily a connection between the orientation of the galaxy and the orientation of the obscuring torus \citep{WuHan2001}. \begin{figure}[ht!] \epsscale{0.9} \plotone{figure07.png} \caption{Tukey box-whisker plots comparing the morphology among the four subgroups of Sy1s separated in three redshift bins. \label{f07}} \end{figure} In Figure~\ref{f07} we compare the morphological index, T, obtained with the photometric method, separating each subgroup into three different redshift bins. These bins correspond to average spherical apertures 7.7, 11.7, and 14.8 kpc wide. Even at low redshifts, therefore, the aperture covers more than just the nucleus of the galaxies. There is a clear difference between the B-type and N-type Sy1s, the hosts of the former having slightly earlier morphological types than the latter. Note that this is a small difference on average, Sa instead of Sb in Table~\ref{stat1}, which is observed at any redshift. The Kruskal-Wallis ANOVA test with Dunn's multiple comparisons test (Table~\ref{stat2}) confirms the differences in morphology. Once again this difference appears to depend only on the spectral group, B vs. N, and not on the presence of a resolved outflow.
Note that the preference for early-type spirals in the Sy1B and Sy1Bw goes against the bias expected based on their bluer colors. Moreover, since the differences in morphology are observed in all the different redshift bins, this cannot be due to the passage of the broad line components to different filters (which would be expected to happen only around $z \sim 0.25$ for H$\alpha$). The only possible bias of our method could be the general increase of late-type spirals with the redshift. This implies bluer colors at high redshifts, which in the case of the Sy1s would be as expected if the AGN activity increases with the redshift. However, remembering the results shown in Table~\ref{Ap} for the spectral aperture, showing similar distributions of the Sy1 subgroups at different redshifts, and considering that the uncertainty on the morphology is T $ \pm\ 0.5$, this bias can be judged to be relatively mild, not affecting our results significantly; in particular, we can still differentiate the Sy1B and Sy1N morphologically. \begin{deluxetable*}{cccccccccccccc} \tablecaption{Frequencies (\%) of morphological types for the Sy1 subgroups (compared to Sy2).
\label{table_T}} \tablewidth{0pt} \tablehead{ \colhead{Hubble} & \colhead{E} & \colhead{E/S0} & \colhead{S0} &\colhead{S0/Sa} &\colhead{Sa} &\colhead{Sa/Sb} &\colhead{Sb} & \colhead{Sb/Sc} & \colhead{Sc} & \colhead{Sc/Sd} & \colhead{Sd} & \colhead{Sdm/Sm} &\colhead{Im} \\ \colhead{T} & \colhead{0.0} & \colhead{0.5} & \colhead{1.0} & \colhead{1.5} & \colhead{2.0} & \colhead{2.5} & \colhead{3.0} & \colhead{3.5} & \colhead{4.0} & \colhead{4.5} & \colhead{5.0} & \colhead{5.5} & \colhead{6.0} } \startdata Sy1Bw & 0.43 & 6.90 & {\color{blue} 9.91} & {\color{blue} 12.50} & {\bf 17.67} & {\bf 29.53} & {\bf 20.47} & 2.59 & & & & & \\ Sy1B & 0.58 & 5.61 & {\color{blue} 10.28} & {\color{blue} 13.90} & {\bf 17.52} & {\bf 28.27} & {\bf 21.26} & 2.34 & 0.23 & & & & \\ Sy1Nw & & & & 3.51 & 8.95 & {\bf 18.69} & {\bf 19.71} & {\bf 17.55} & {\color{blue} 16.99} & {\color{blue} 12.91} & 1.59 & 0.11 & \\ Sy1N & & & & 4.04 & {\color{blue} 12.72} & {\bf 18.49} & {\bf 20.28} & {\bf 19.31} & {\color{blue} 14.90} & 8.68 & 1.35 & 0.22 & \\ Sy2* & & 0.95 & 5.49 & {\color{blue} 10.91} & {\color{blue} 16.51} & {\bf 20.48} & {\bf 20.64} & {\bf 16.92} & 7.84 & 0.26 & & & \\ \enddata \tablecomments{*Data for the Sy2 come from \citet{Torres-Papaqui2013}. The three most populated bins are identified in bold, while the trends are identified in blue.} \end{deluxetable*} As a further test of our method, we compare in Table~\ref{table_T} our morphological classification for the Sy1s with the classification for a large sample of Sy2s, which was obtained by \citet{Torres-Papaqui2013} using the same photometric method. The three most frequent morphological types in each subgroup are marked in bold, while the trend, that is, the next two most frequent morphology bins, is marked in blue. It can be seen that the morphologies of the Sy1s and Sy2s are very similar.
The trend for the Sy1Nw and Sy1N is for these galaxies to be slightly later-type than the Sy2s, while the trend is the contrary for the Sy1Bw and Sy1B. Once again, these differences do not depend on the presence of a detected outflow. In general, therefore, the trend for the Sy1B to be slightly earlier-type than the Sy2s, with even this difference decreasing in the intermediate Sy1N, is as expected based on what we know about the different Seyfert galaxies. Highly relevant for our study, therefore, is that although the differences in morphology between the Sy1s and Sy2s are minimal, the differences in SFRs are significant, of the order of 40-50\% lower in the former than in the latter. Could this suggest unusually low SFRs in the Sy1s? However, we must also consider the possibility that their hosts do not have the same mass, since the SMBHs in the Sy2s have a median mass of only $10^{7.5}$ M$_\odot$ compared to $10^8$ M$_\odot$ for the Sy1s. Assuming the galaxy hosts are roughly a 1000 times more massive \citep[e.g.,][]{Alexander2008, KingPounds2015}, this would yield specific SFRs (sSFR; the star formation rate per unit of mass) of the order of $10^{-10.5}$\ yr$^{-1}$ in the Sy2s, compared to $10^{-11}$\ yr$^{-1}$ in the Sy1s. Both of these values are typical of early-type spirals in the green valley, which puts the Sy1s, and this is the most important point of our analysis, far from the quenched regime \citep[see Figure~8 in][]{Bait2017}. In other words, the SFRs of the Sy1s seem to be normal considering the morphology and typical masses of their hosts. \section{Discussion}\label{S4} Considered as a whole, the number of Sy1s with resolved outflows at redshifts below $z \sim 0.4$ represents 37\% of our sample, which is a remarkably high fraction. Moreover, their velocities have an average value V$_{max} \sim 1014$ km s$^{-1}$ that is fully consistent with AGN winds \citep{Woo2016}.
Finally, the outflows seem connected (in Table~\ref{stat1} and Table~\ref{stat2}) to higher AGN luminosities, suggesting they could be radiatively launched. The physical reason why this could be so can be seen in Figure~\ref{f08}, where we compare the BH mass with the AGN luminosity in each of the four Sy1 subgroups. In this figure the diagonals correspond to different Eddington ratios: \begin{equation}\label{eq05} {\rm N}_{Edd} \propto \frac{ {\rm L}_{AGN} }{ {\rm M}_{BH}} \propto \frac{ \eta\ \dot{\rm m}_{acc}}{ {\rm M}_{BH}} \end{equation} where $\dot{\rm m}_{acc}$ is the mass accretion rate and $\eta$ the efficiency (the fraction of mass transformed into light). From the positions of the Sy1s it is clear that those with detected AGN winds have higher N$_{\rm Edd}$ than their respective counterparts without wind (confirmed statistically in Table~\ref{stat2}), despite having similar BH masses (cf. Table~\ref{stat1}). Therefore, the interpretation of these differences based on Eq.~\ref{eq05} is unambiguous: it means that the Sy1s with detected AGN winds have higher accretion rates than the Sy1s without detected winds. Or, considering the results of stacking, the strength of the AGN wind increases with the level of accretion. Two questions that naturally come to mind, then, are what explains this higher accretion and what effect these winds could have or have had on their hosts. \begin{figure}[ht!] \epsscale{0.7} \plotone{figure08.png} \caption{Mass-luminosity ratios in the Sy1 subgroups. The values are the means and standard deviations. The diagonal lines are three different Eddington ratios, N$_{Edd}$. \label{f08}} \end{figure} Examining Table~\ref{stat1} and Table~\ref{stat2}, there are four physical characteristics that seem to be related only to the spectral difference B vs.
N: 1) in the Sy1B and Sy1Bw, the BHs are more massive than in the Sy1N and Sy1Nw, 2) their power-law indices, $\beta$, are also lower (more negative), implying they are emitting more intensely in the UV, 3) their host galaxies have a slightly earlier morphology, and 4) their SFRs are slightly lower. The simplest scenario to explain all these differences seems to point to different galaxy formation processes: 1- according to the relation M$_{\rm BH}-\sigma_{\star}$, we expect the most massive BHs to be found in galaxies with more massive bulges, 2- this implies higher accretion rates in the Sy1B and Sy1Bw (early-type spirals) than in the Sy1N and Sy1Nw (late-type spirals) \citep{Coziol2011,Calvi2018}, 3- this difference in morphology also implies higher astration rates, that is, higher rates of transformation of gas into stars to form the bulges \citep{TinsleyLarson1979,StruckMarcell1981,Sandage1986,Coziol1998}, and finally, 4- this implies that more metals and dust are locked into stars, leaving the bulges and disks of the early-type spirals Sy1B and Sy1Bw depleted in gas and dust, compared to the gas- and dust-rich bulges and disks of the later-type spirals Sy1N and Sy1Nw. This scenario would naturally explain why the Sy1B and Sy1Bw emit more UV photons and why they have lower SFRs than the Sy1N and Sy1Nw (there is possibly also a direct connection with the formation of the obscuring torus). However, the general scenario above, based on galaxy formation processes, does not explain why we find the same fraction of detected AGN winds in the Sy1B and Sy1N. The key to understanding the wind in these galaxies seems to be connected with the higher accretion rates and higher SFRs in the Sy1Bw and Sy1Nw. These observations imply more gas is falling onto the BH in these galaxies, and a good fraction of this ``free'' gas is sufficiently cold and dense to form more stars.
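The comparison behind Eq.~\ref{eq05} and the diagonals of Figure~\ref{f08} can be sketched numerically; the Eddington luminosity coefficient is the standard one for ionized hydrogen, and the sample values below are illustrative rather than measured:

```python
import numpy as np

# N_Edd = L_AGN / L_Edd, with the standard L_Edd ~ 1.26e38 (M_BH/Msun) erg/s.
def eddington_ratio(log_l_agn, log_mbh):
    """Eddington ratio from log10 L_AGN (erg/s) and log10 M_BH (Msun)."""
    log_l_edd = np.log10(1.26e38) + log_mbh
    return 10 ** (log_l_agn - log_l_edd)

# Two illustrative AGN with the same BH mass: at fixed M_BH, the more
# luminous one (i.e. the higher accretion rate) has the higher N_Edd.
n_without_wind = eddington_ratio(44.0, 8.0)
n_with_wind = eddington_ratio(44.5, 8.0)
```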
Note that we can eliminate shocks due to AGN winds in the NLR, since we found in Figure~\ref{f3}c the line intensity ratios to be consistent with photoionization, and found no difference between the Sy1Nw and Sy1N. This suggests that the only mandatory condition to form an AGN wind could be a sufficiently large reservoir of available gas, which is consistent with the bias noted by \citet{Cicone2018}. Therefore, independently of how the galaxy formed, and thus of the final morphological type of the host galaxy, any ``surplus'' amount of gas reaching the BH might be sufficient to trigger an AGN wind. What is intriguing in this conclusion is that the physical conditions for the gas reaching the BH would also need to correspond to the physical conditions favoring star formation, which might be difficult to understand without a star formation triggering mechanism like AGN wind feedback. \begin{deluxetable*}{lcccccc} \tablecaption{Spearman's correlation matrix for the whole sample of Sy1s \label{stat4}} \tablewidth{0pt} \tablehead{ \colhead{\textbf{Corr.
Coeff.}}&\colhead{z}&\colhead{M$_{BH}$}&\colhead{L$_{\rm AGN}$}&\colhead{N$_{Edd}$}&\colhead{SFR}&\colhead{T} } \startdata M$_{\rm BH}$ & \textbf{0.538} & & & & & \\ L$_{\rm AGN}$ & \textbf{0.849} & \textbf{0.654} & & & & \\ N$_{\rm Edd}$ & 0.278 & -0.370 & 0.273 & & & \\ SFR & \textit{0.059} & \textit{-0.051} & \textit{0.085} & 0.167 & & \\ T & 0.279 & \textit{0.081} & 0.223 & 0.152 & 0.339 & \\ $\beta$ & \textbf{-0.565} & -0.334 & \textbf{-0.654} & -0.274 & -0.114 & -0.426 \\ \hline \textbf{P $\alpha = 0.05$} & z & M$_{BH}$ & L$_{\rm AGN}$ & N$_{Edd}$ & SFR & T \\ \hline M$_{\rm BH}$ & $< 0.0001$ & & & & & \\ L$_{\rm AGN}$ & $< 0.0001$ & $< 0.0001$ & & & & \\ N$_{\rm Edd}$ & $< 0.0001$ & $< 0.0001$ & $< 0.0001$ & & & \\ SFR & 0.0002 & 0.0015 & $< 0.0001$ & $< 0.0001$ & & \\ T & $< 0.0001$ & $< 0.0001$ & $< 0.0001$ & $< 0.0001$ & $< 0.0001$ & \\ $\beta$ & $< 0.0001$ & $< 0.0001$ & $< 0.0001$ & $< 0.0001$ & $< 0.0001$ & $< 0.0001$ \\ \enddata \tablecomments {M$_{BH}$, L$_{\rm AGN}$, N$_{Edd}$ and SFR values are in dex (as in Table~\ref{stat1}); Strongest correlations ($rs \ge 0.5$) are in bold, weakest ($rs \le 0.1$) in italic.} \end{deluxetable*} \begin{deluxetable*}{lccccccc} \tablecaption{Spearman's correlation matrix for Sy1 with resolved outflows \label{stat5}} \tablewidth{0pt} \tablehead{ \colhead{\textbf{Corr. 
Coeff.}}&\colhead{z}&\colhead{M$_{BH}$}&\colhead{L$_{\rm AGN}$}&\colhead{N$_{Edd}$}&\colhead{SFR}&\colhead{T} & $\beta$ } \startdata M$_{\rm BH}$ & \textbf{0.517} & & & & & & \\ L$_{\rm AGN}$ & \textbf{0.837} & \textbf{0.627} & & & & & \\ N$_{\rm Edd}$ & 0.330 & -0.445 & 0.355 & & & & \\ SFR & \textit{0.072} & \textit{-0.057} & 0.108 & 0.216 & & & \\ T & 0.324 & \textit{0.087} & 0.261 & 0.212 & 0.327 & & \\ $\beta$ & \textbf{-0.550} & -0.317 & \textbf{-0.666} & -0.362 & -0.133 & -0.443 & \\ V$_{\rm max}$ & \textit{0.090} & 0.165 & 0.135 & \textit{-0.060} & \textit{-0.057} & -0.116 & 0.046\\ \hline \textbf{P $\alpha = 0.05$} & z & M$_{BH}$ & L$_{\rm AGN}$ & N$_{Edd}$ & SFR & T & $\beta$\\ \hline M$_{\rm BH}$ & $< 0.0001$ & & & & & & \\ L$_{\rm AGN}$ & $< 0.0001$ & $< 0.0001$ & & & & & \\ N$_{\rm Edd}$ & $< 0.0001$ & $< 0.0001$ & $< 0.0001$ & & & & \\ SFR & 0.0057 & 0.0287 & $< 0.0001$ & $< 0.0001$ & & & \\ T & $< 0.0001$ & 0.0009 & $< 0.0001$ & $< 0.0001$ & $< 0.0001$ & & \\ $\beta$ & $< 0.0001$ & $< 0.0001$ & $< 0.0001$ & $< 0.0001$ & $< 0.0001$ & $< 0.0001$ & \\ V$_{\rm max}$ & 0.0006 & $< 0.0001$ & $< 0.0001$ & 0.0210 & 0.0287 & $< 0.0001$ & 0.0764 \\ \enddata \tablecomments {M$_{BH}$, L$_{\rm AGN}$, N$_{Edd}$ and SFR values are in dex (as in Table~\ref{stat1}); Strongest correlations ($rs \ge 0.5$) are in bold, weakest ($rs \le 0.1$) in italic. Note that the correlation between V$_{max}$ and $\beta$ is not recognized as significant.} \end{deluxetable*} The second question, then, is what could be or could have been the effects of the AGN winds we observe on their host galaxies? Considering the stacking result, our analysis supports the idea that AGN winds are ubiquitous in Sy1s, which suggests this is an intrinsic aspect of the AGN phenomenon. But if this is so, then the effects expected of these winds must also be common. However, from our data we cannot distinguish what these effects could be. 
The only evidence we found is a weak trend for the Sy1s with winds to have broad Balmer lines with smaller FWHMs than in Sy1s without wind, which suggests winds could have affected their BLRs (see Section~\ref{SS3a}). But that would be a local effect, while on a larger scale we found no difference between the Sy1N and Sy1Nw in terms of excitation in the NLRs. In order to get more information on this wind feedback problem, we computed a non-parametric Spearman correlation matrix, which estimates the level of correlation for each pair of variables, without regard for the other variables. The Spearman correlation coefficient, $rs$, ranges from $-1$ to $+1$ (anticorrelation/perfect correlation), with $rs = 0$ meaning no correlation. The accompanying matrix contains the p-values (for $\alpha = 0.05$), which, when small, mean that we can reject the idea that the correlations we observed are due to random sampling. In Table~\ref{stat4} we present first the Spearman correlation and P-value matrices for the whole sample of 3,896 Sy1s. Because the matrices are symmetric, we present, for clarity's sake, only the lower triangular parts. In Table~\ref{stat4} the strongest correlations are shown in bold, while the weakest ones are shown in italic. Based on the P-values, all the correlations are statistically significant. However, the strength of these correlations is not equal. The strongest correlations are also the most physically obvious: positive between M$_{BH}$ and L$_{AGN}$, and negative for these same two parameters with $\beta$, the index becoming more negative as the mass and luminosity of the AGN increase. The other strong correlations are with the redshift, showing a rapid increase of BH mass and luminosity with the redshift (consistent with an increase of AGN activity at high redshift), which also explains the strong anticorrelation with $\beta$.
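For reference, the Spearman coefficient used throughout can be computed as the Pearson correlation of the ranks; this minimal sketch (pure NumPy, ignoring tied values) is ours, not the implementation actually used for the tables:

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.

    Assumes no tied values (ranks via double argsort); rs = +1 / -1 for
    perfectly monotonic increasing / decreasing relations, ~0 for none.
    """
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return float(np.corrcoef(rx, ry)[0, 1])
```

Applying such a function to every pair of parameters (z, M$_{BH}$, L$_{AGN}$, ...) yields the lower triangular matrices reported in Table~\ref{stat4} and Table~\ref{stat5}.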
The correlation of the Eddington ratio, N$_{Edd}$, is also as expected based on its definition (Eq.~\ref{eq05}), increasing with the luminosity and decreasing with the BH mass. The fact that both of these parameters increase equally rapidly at high redshift might also explain the lower correlation of N$_{Edd}$ with $z$. But more interesting are the results for the two parameters related to the hosts, the morphological type, T, and the SFR. What is remarkable is that their correlations with the parameters related to the AGN activity are significantly lower. As the morphology changes with the redshift toward later types, the BH mass almost does not vary, while both the AGN luminosity and Eddington ratio increase. This is consistent with the observational bias expected on the morphology related to an increase of AGN activity at high redshift. In fact, the only obvious physical correlation for T is the increase of SFR as the host galaxies change into later-type spirals. As for the SFR, the only other strong correlation is with the Eddington ratio, consistent with what we observed before, implying that the SFR increases as the AGN activity increases. Note that this correlation could also explain the correlation of the SFR with $\beta$ and the AGN luminosity. In general, therefore, there does not seem to be a strong correlation between the SFR and the AGN characteristics, or between the AGN activity and the morphological types (except for the bias). Now that we have the general behavior, we can look at what happens in those Sy1s with AGN winds. In Table~\ref{stat5} we computed the Spearman correlation matrix adding V$_{max}$ as a supplementary parameter. The matrices show mostly the same correlations as for the whole sample, with slightly higher coefficients, reinforcing the correlations of SFR with L$_{AGN}$ and N$_{Edd}$. This is consistent with the trend that Sy1s with winds show at the same time higher accretion rates and higher SFRs.
As for V$_{max}$, the strongest positive correlations are with the BH mass and luminosity, consistent with what we expect of AGN winds. There is also a significant negative correlation with the morphology, the velocity decreasing in late-type spirals. This last correlation is consistent with what we found before, V$_{max}$ being lower in the Sy1Nw, since the Sy1Nw are more numerous in late-type spirals. The correlations of V$_{max}$ are much weaker and negative with N$_{Edd}$ and the SFR, and in fact there is no correlation (the only correlation rejected by the p-value) with the power-law index. These last results seem to confirm that the outflows we observed are AGN winds, and that there is no direct physical connection between what causes these winds and the SFRs in their hosts. The significant anticorrelation of V$_{max}$ with T, which suggests that AGN winds become stronger in early-type galaxies, might be the only evidence suggesting a link between AGN winds and the bulges. However, as we have mentioned in Section~\ref{SS2c}, the difference in velocity between the Sy1Bw and Sy1Nw could be easily explained in terms of winds by a difference in the NLRs, which makes this characteristic of the wind the product of a difference in morphology and not a cause of this difference in morphology. The only direct effect of the winds we observe might have been local, affecting the BLRs, which would explain why the Balmer lines in those Sy1s with winds tend to have smaller FWHMs, independently of the spectral type (B versus N) and morphology. Our observations, in particular, show no evidence favoring the quenching of star formation by AGN winds. At least not for the winds we observe; but then, in what conditions would the winds, being ubiquitous and possibly intrinsic to the AGN phenomenon, be important?
In \citet{Bait2017}, the authors concluded that ``...the growth of the bulge plays an important role in quenching'', and that ``morphology most strongly correlates with sSFR, independent of the environment...''. Assuming AGN winds are not only ubiquitous, but transient and recurrent \citep[e.g.,][]{KingPounds2015}, could AGN winds play an active role in forming the bulges of the Sy1s in our sample? However, in Section~\ref{SS3c} we have determined that, assuming the mass of the bulge is 1000 times higher than the mass of their BHs, the sSFRs of the Sy1s would be typical of early-type spirals (S0s to Sa/Sb) in the green valley \citep[cf. Figure~8 in][]{Bait2017}, far from the quenching region. There is consequently no evidence that the SFRs we observe in the Sy1s are peculiar for the morphology and mass of their hosts. Thus if AGN winds played a role in the formation of these bulges, their effect would today be indistinguishable from the normal process of galaxy formation. But, then, what about the effects of the winds we observe now? Could they have been delayed, as many authors suggested? The winds that we observe, in that case, would have formed only recently, and would not have had enough time to interfere with the SF of their hosts. However, how long could this delay last? Note that our sample already covers a large range in redshift, up to $z \sim 0.4$, which corresponds cosmologically to a 4 Gyr look-back time, and surely after such a long time one should have expected feedback evidence to appear in a sufficiently large sample. Or could it be that the feedback happens when the galaxy is already out of its AGN phase, explaining why no such case appears in our sample? But then what form would this post-AGN phase take and how would this transition happen? Would it be gradual or sudden?
And how would that phase compare to other AGN types, the Sy2s, LINERs, or TOs, which also have outflows \citep[e.g.,][]{Woo2016}, but significantly different AGN and star formation characteristics \citep[e.g.,][]{Torres-Papaqui2012, Torres-Papaqui2013}? Or maybe the effect of AGN winds is more direct (local). We already had a suggestion of that possibly happening in the BLRs (cf. Section~\ref{SS3a}). However, since the winds are ubiquitous, such an effect would also need to be general, related to a common phenomenon that is obvious to observe (possibly something that we have already observed). Interestingly, there is one well-known phenomenon which, although characteristic of AGN, is still unexplained: the fact that most AGN are radio quiet. In \citet{Coziol2017} it was suggested that AGN become radio-loud only when the accretion process in their galaxies becomes chaotic, which, as the authors also demonstrated, is a rare event, explaining why most AGN are radio quiet. However, why this is a rare event was not explained. Could AGN winds have something to do with this fact? For example, by ejecting a huge quantity of gas out of the central region, these winds could act as a natural mechanism to regulate the accretion rate, possibly preventing the process from becoming chaotic. This would make AGN winds not only ubiquitous, but intrinsic to the accretion process, in good agreement with our observations. What role AGN winds could play in radio galaxies, therefore, is an open question that needs to be investigated further. \section{Conclusions}\label{S5} The most significant result of our study is the confirmation that outflows are ubiquitous in Sy1s. We also found clear evidence that these outflows could be radiatively launched and that they are related to higher gas accretion rates, consistent with AGN winds. 
This suggests that AGN winds are not only a common aspect of the AGN phenomenon, but most probably are intrinsic to the accretion process; they happen each time an extra amount of gas finds its way to the BH at the center of the galaxies. What is not clear, however, is what the consequences of these winds could be. Having determined the SFRs and morphology of the host galaxies, we have found these parameters to be only weakly correlated with the parameters related to the source of the winds (BH mass, AGN luminosity and Eddington ratios). We also found that Sy1s with detected winds have higher SFRs than those without winds, which contradicts the original quenching hypothesis. Furthermore, we found that their specific SFRs are typical of early-type spiral galaxies (S0s to Sa/Sb) in the green valley, far from the quenching regime, which suggests that the Sy1 host galaxies are following a normal evolutionary path for their morphology and mass. Other interesting observations related to the winds are: 1) the maximum velocity of the wind is higher in the Sy1Bw than in the Sy1Nw, which is consistent with denser NLRs in the Sy1Nw due to their later-type morphology, 2) Sy1s with wind, irrespective of their morphology, tend to have broad Balmer lines with smaller FWHMs than those without wind, which could be interpreted as a local effect of the wind in modifying the structures of the BLRs. Consequently, we propose that either the AGN winds in the Sy1s are recent occurrences, related to recurrent AGN events, and are thus too young to have had an observable influence on their galaxies (any such influence appearing only in a post-AGN phase), or the feedback effects are mostly local, modifying the BLR and possibly regulating the accretion process itself. \acknowledgments The authors thank an anonymous referee for comments and suggestions that helped us improve the clarity of our paper. 
They also want to thank Heinz Andernach for reading a first draft of this article and for his judicious comments. J. P. T.-P. acknowledges grant support from DAIP-UGto (0173/19). R. C. also wants to thank Dr. Cindia Reyes (MD) and the doctors and nursing staff of the Centro Medico La Presa, in Guanajuato, for their excellent care in a time of need during a difficult period (the COVID-19 pandemic), allowing him to continue working on this fascinating subject. The SDSS is managed by the Astrophysical Research Consortium (ARC) for the Participating Institutions. The Participating Institutions are: the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge (Cambridge University), Case Western Reserve University, the University of Chicago, the Fermi National Accelerator Laboratory (Fermilab), the Institute for Advanced Study, the Japan Participation Group, the Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), the New Mexico State University, the Ohio State University, the University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington. This publication also makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration, and of the cross-match service provided by CDS, Strasbourg.
\section*{Introduction} Learning machines aim to find statistical patterns in data that generalize to previously unseen samples \cite{hastie2009elements}. How well they perform in doing so depends on factors such as the size and the nature of the training data set, the complexity of the learning task, and the inductive bias of the learning machine. Identifying precisely how these factors contribute to the generalization performance has been a theoretical challenge. In particular, a definitive theory should be able to predict generalization performance on real data. Existing theories fall short of this goal, often providing impractical bounds and inaccurate estimates \cite{zhang2016understanding,belkin2019reconciling}. The need for a new theory of generalization is exacerbated by recent developments in deep learning \cite{lecun2015deep}. Experience in the field suggests that larger models perform better \cite{nakkiran2019deep,kaplan2020scaling,lepikhin2020gshard}, encouraging training of larger and larger networks with state-of-the-art architectures reaching hundreds of billions of parameters \cite{lepikhin2020gshard}. These networks work in an overparameterized regime \cite{belkin2019reconciling,nakkiran2019deep} with many more parameters than training samples, and are so highly expressive that they can even fit random noise \cite{zhang2016understanding}. Yet, they generalize well, contradicting the conventional wisdom from classical statistical learning theory \cite{hastie2009elements,belkin2019reconciling,Vapnik1998} according to which overparameterization should lead to overfitting and worse generalization. It must be that overparameterized networks have inductive biases that suit the learning task. Therefore, it is crucial for a theory of generalization to elucidate such biases. 
While addressing the full complexity of deep learning is as of now beyond the reach of theoretical study, a tractable, yet practically-relevant limit was established by recent work pointing to a correspondence between training deep networks and performing regression with various rotation invariant kernels. In the limit where the width of a network is taken to infinity (network is thus overparameterized), neural network training with a certain random initialization scheme can be described by ridgeless kernel regression with the Neural Network Gaussian Process kernel (NNGPK) if only the last layer is trained \cite{neal1996bayesian,cho2009kernel,matthews2018gaussian,lee2017deep}, or the Neural Tangent Kernel (NTK) if all the layers are trained \cite{jacot2018neural}. Consequently, studying the inductive biases of kernels arising from the infinite-width limit should give insight to the success of overparameterized neural networks. Indeed, key generalization phenomena in deep learning also occur in kernel methods, and it has been argued that understanding generalization in kernel methods is necessary for understanding generalization in deep learning \cite{belkin2018understand}. Motivated by these connections to deep networks and also by its wide use, in this paper, we present a theory of generalization in kernel regression \cite{wahba1990spline,evgeniou2000regularization,shawe2004kernel,mohri2018foundations,jacot2020kernel}. Our theory is generally applicable to any kernel and contains the infinite-width limit of deep networks as a special case. Most importantly, our theory is applicable to real datasets. We describe typical generalization performance of kernel regression shedding light onto practical uses of the algorithm, in contrast to the worst case bounds of statistical learning theory \cite{Vapnik1998,mohri2018foundations,cucker2002best,caponnetto2007optimal,liang2018just}. 
In the past, statistical mechanics provided a useful theoretical framework for such typical-case analyses for various algorithms \cite{gardner1988space,Hertz_1989, krogh, sompolinsky1992examples,optimalperceptron,statmechofrule,sollich1999learning,Malzahn_Opper,engel2001statistical,bahri2019statistical}. Here, using the replica method of statistical mechanics \cite{mezard1987spin}, we derive an analytical expression for the typical generalization error of kernel regression as a function of 1) the number of training samples, 2) the eigenvalues and eigenfunctions of the kernel, which define the inductive bias of kernel regression, and 3) the alignment of the target function with the kernel's eigenfunctions, which provides a notion of how compatible the kernel is for the task. We test our theory on various real datasets and kernels. Our analytical generalization error predictions fit experiments remarkably well. Our theory sheds light onto the various generalization phenomena. We elucidate a strong inductive bias: as the size of the training set grows, kernel regression fits successively higher spectral modes of the target function, where the spectrum is defined by solving an eigenfunction problem \cite{bordelon2020spectrum, jacot2020implicit,jacot2020kernel, liang_isometry}. Consequently, our theory can predict which kernels or neural architectures are well suited to a given task by studying the alignment of top kernel eigenfunctions with the target function for the task. Target functions that place most power in the top kernel eigenfunctions can be estimated accurately at small sample sizes, leading to good generalization. Finally, when the data labels are noisy or the target function has components not expressible by the kernel, we observe that generalization error can exhibit non-monotonic behavior as a function of the number of samples, contrary to the common intuition that more data should lead to smaller error. 
This non-monotonic behavior is reminiscent of the recently described ``double-descent'' phenomenon \cite{belkin2019reconciling,nakkiran2019deep,loog2020brief,montanari2019generalization}, where generalization error is non-monotonic as a function of model complexity in many modern machine learning models. We show that the non-monotonicity can be mitigated by increasing the implicit or explicit regularization. To understand these phenomena better, we present a detailed analytical study of the application of our theory to rotation invariant kernels, motivated by their wide use and relevance for deep learning. Besides NNGPK and NTK, this class includes many other popular kernels such as the Gaussian, Exponential and Matern kernels \cite{genton2001classes,RasmussenWilliams}. When the data generating distribution is also spherically symmetric, our theory is amenable to further analytical treatment. Our analyses provide a mechanistic understanding of the inductive bias of kernel regression and the possible non-monotonic behavior of learning curves. \section*{Results} \subsection*{Generalization Error of Kernel Regression from Statistical Mechanics} Kernel regression is a supervised learning problem where one estimates a function from a number of observations. For our setup, let $\mathcal{D} = \{\mathbf{x}^\mu, y^\mu\}_{\mu=1}^P$ be a sample of $P$ observations drawn from a probability distribution on $\mathcal{X} \times \mathbb{R}$, and $\mathcal X \subseteq \mathbb{R}^D$. The inputs ${\mathbf x}^{\mu}$ are drawn from a distribution $p({\mathbf x})$, and the labels $y^\mu$ are assumed to be generated by a noisy target $ y^\mu = \bar f(\mathbf{x}^\mu) + \epsilon^\mu$, where $\bar f$ is square integrable with respect to $p({\mathbf x})$, and $\epsilon^\mu$ represents zero-mean additive noise with covariance $\left< \epsilon^\mu \epsilon^\nu \right> = \delta_{\mu \nu} \sigma^2$. 
The kernel regression problem is \begin{linenomath*}\begin{equation}\label{eq:regression} f^* = \argmin_{f \in \mathcal{H}} \frac{1}{2\lambda} \sum_{\mu=1}^P ( f(\mathbf{x}^\mu) - y^\mu)^2 + \frac{1}{2}\left< f,f \right>_{\mathcal{H}}, \end{equation}\end{linenomath*} where $\lambda$ is the ``ridge'' parameter, $\mathcal{H}$ is a Reproducing Kernel Hilbert Space (RKHS) uniquely determined by its reproducing kernel $K(\mathbf{x},\mathbf{x}')$ and the input distribution $p({\mathbf x})$ \cite{scholkopf2002learning}, and $\left< \cdot, \cdot \right>_{\mathcal{H}}$ is the RKHS inner product. The Hilbert norm penalty controls the complexity of $f$. The $\lambda\to 0$ limit is referred to as the kernel interpolation limit, where the dataset is exactly fit: $f^* = \argmin_{f \in \mathcal{H}} \left< f,f \right>_{\mathcal{H}},\,{\rm s.t.}\, f(\mathbf{x}^\mu) = y^\mu, \mu = 1,\ldots, P$. We emphasize that in our setting the target function does not have to be in the RKHS. Our goal is to calculate generalization error, i.e., the mean squared error between the estimator, $f^*$, and the ground-truth (target) $\bar f(\mathbf{x})$ averaged over the data distribution and datasets: \begin{linenomath*}\begin{equation}\label{eq:genError} E_g = \left< \int d{\mathbf x} \, p({\mathbf x})\,\big(f^*({\mathbf x}) - \bar f(\mathbf{x})\big)^2 \right>_\mathcal{D}. \end{equation}\end{linenomath*} $E_g$ measures, on average, how well the learned function agrees with the target on previously unseen (and seen) data sampled from the same distribution. This problem can be analyzed using the replica method from statistical physics of disordered systems \cite{mezard1987spin}, treating the training set as a quenched disorder. Our calculation is outlined in Methods and further detailed in the Supplementary Information. Here we present our main results. 
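As a concrete illustration, the minimizer of Eq.~\eqref{eq:regression} has the standard dual (representer-theorem) form $f^*(\mathbf{x}) = \sum_\mu \alpha_\mu K(\mathbf{x}, \mathbf{x}^\mu)$ with $\bm{\alpha} = (\mathbf{K} + \lambda \mathbf{I})^{-1}\mathbf{y}$. A minimal sketch follows; the Gaussian RBF kernel, its bandwidth, and the one-dimensional toy target are illustrative assumptions, not the settings used in our experiments:

```python
import numpy as np

def kernel_ridge_fit(K, y, lam):
    """Dual solution of the kernel regression problem:
    f*(x) = sum_mu alpha_mu K(x, x^mu), with alpha = (K + lam*I)^{-1} y.
    lam -> 0 recovers the interpolation limit."""
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

# Toy example (illustrative): RBF kernel regression on a smooth 1-D target.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50, 1))
y = np.sin(3.0 * X[:, 0])
sq_dist = (X - X.T) ** 2              # pairwise squared distances
K = np.exp(-0.5 * sq_dist / 0.1)      # Gaussian RBF, bandwidth^2 = 0.1
alpha = kernel_ridge_fit(K, y, lam=1e-3)
train_pred = K @ alpha                # f* evaluated on the training inputs
```

In the interpolation limit $\lambda \to 0$ this reduces to solving $\mathbf{K}\bm{\alpha} = \mathbf{y}$ exactly, provided $\mathbf{K}$ is invertible.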
Our results rely on the Mercer decomposition of the kernel in terms of orthogonal eigenfunctions $\{\phi_\rho\}$, \begin{linenomath*}\begin{equation}\label{eq:eigenvalue} \int d \mathbf{x}'\,p(\mathbf{x}') K(\mathbf{x}, \mathbf{x}') \phi_\rho(\mathbf{x}') = \eta_\rho \phi_\rho(\mathbf{x}), \qquad \rho = 1,\ldots, N, \end{equation}\end{linenomath*} which form a complete basis for the RKHS, and eigenvalues $\{\eta_\rho\}$. $N$ is typically infinite. For ease of presentation, we assume that all eigenvalues are strictly greater than zero. In Supplementary Note 1 and 2, we fully address the case with zero eigenvalues. Working with the orthogonal basis set $\psi_\rho(\mathbf{x}) \equiv \sqrt{\eta_\rho} \phi_\rho(\mathbf{x})$, also called a feature map, we introduce coefficients $\{\overline{w}_\rho\}$ and $\{w_\rho^*\}$ that represent the target and learned functions respectively $\bar f(\mathbf{x}) = \sum_\rho \overline{w}_\rho \psi_\rho(\mathbf{x})$, and $f^*(\mathbf{x}) = \sum_\rho w_\rho^* \psi_\rho(\mathbf{x})$. With this setup, we calculate the generalization error of kernel regression for any kernel and data distribution to be (Methods and Supplementary Note 2): % \begin{linenomath*}\begin{equation}\label{eq:finitePgenErr} \begin{gathered} E_g = \frac{1}{1-\gamma}\sum_{\rho}\frac{\eta_\rho }{\big(\kappa+P\eta_\rho\big)^2}\big(\kappa^2\bar w_\rho^2+\tilde{\sigma}^2 P\eta_\rho\big), \\ \kappa = \lambda + \sum_\rho \frac{\kappa\eta_\rho}{\kappa+{P}\eta_\rho},\quad \gamma = \sum_\rho \frac{P\eta_\rho^2}{(\kappa+P\eta_\rho)^2}. \end{gathered} \end{equation}\end{linenomath*} We note that the generalization error is the sum of a $\sigma$-independent term and a $\sigma$-dependent term, the latter of which fully captures the effect of noise on generalization error. Formally, this equation describes the typical behavior of kernel regression in a thermodynamic limit that involves taking $P$ to infinity. 
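Eq.~\eqref{eq:finitePgenErr} is straightforward to evaluate numerically for any spectrum: solve the implicit equation for $\kappa$ by fixed-point iteration, which converges monotonically from the upper bound $\kappa = \lambda + \sum_\rho \eta_\rho$ when $\lambda > 0$, then assemble $E_g$. A minimal sketch (the equal-eigenvalue check at the end anticipates the band-limited model analyzed below; the parameter values are illustrative):

```python
import numpy as np

def solve_kappa(eigs, P, lam, tol=1e-12, max_iter=10000):
    """Fixed point of kappa = lam + sum_rho kappa*eta_rho/(kappa + P*eta_rho).

    Starting from the upper bound lam + sum(eigs), the iterates decrease
    monotonically to the solution for lam > 0."""
    kappa = lam + eigs.sum()
    for _ in range(max_iter):
        new = lam + np.sum(kappa * eigs / (kappa + P * eigs))
        if abs(new - kappa) < tol:
            break
        kappa = new
    return kappa

def generalization_error(eigs, wbar2, P, lam, sigma2):
    """E_g from the replica result: eigs = eta_rho, wbar2 = squared target
    coefficients, sigma2 = label-noise variance."""
    kappa = solve_kappa(eigs, P, lam)
    gamma = np.sum(P * eigs ** 2 / (kappa + P * eigs) ** 2)
    modal = eigs / (kappa + P * eigs) ** 2 * (kappa ** 2 * wbar2 + sigma2 * P * eigs)
    return np.sum(modal) / (1.0 - gamma)

# Check case: N equal eigenvalues 1/N and a normalized target, alpha = P/N.
N, P, lam, sigma2 = 2000, 1000, 0.1, 0.2
Eg_num = generalization_error(np.full(N, 1.0 / N), np.ones(N), P, lam, sigma2)
```

For $N$ equal eigenvalues this numerical evaluation reproduces the closed-form white band-limited result derived later in the text.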
In this limit, variations in kernel regression's performance due to the differences in how the training set is formed, which is assumed to be a stochastic process, become negligible. The precise nature of the limit depends on the kernel and the data distribution. In this work, we consider two different analytically solvable cases and identify natural scalings of $N$ and $D$ with $P$, which in turn govern how the kernel eigenvalues $\eta_\rho$ scale inversely with $P$. We further give the infinite-$P$ limits of Eq. \eqref{eq:finitePgenErr} explicitly for these cases. In practice, however, we find that our generalization error formula describes average learning curves very well at finite $P$, even for as few as a handful of samples. We observe that the variance in learning curves due to stochastic sampling of the training set is significant for low $P$, but decays with increasing $P$ as expected. We will demonstrate various generalization phenomena that arise from Eq. \eqref{eq:finitePgenErr} through simulations and analytical study. One immediate observation is the \textit{spectral bias}: faster rates of convergence of the error along eigenfunctions corresponding to higher eigenvalues in the noise free ($\sigma^2 = 0$) limit. The generalization error can be decomposed into a sum of modal errors $ E_g = \sum_\rho \eta_\rho \overline{w}_{\rho}^2 E_\rho$, where each normalized mode error $E_\rho$ represents the error contribution from the estimation of the coefficient for eigenfunction $\psi_\rho$ (Methods). The normalized mode errors are ordered according to their eigenvalues (Methods) \begin{linenomath*}\begin{equation} \eta_\rho > \eta_{\rho'} \implies E_{\rho} < E_{\rho'}, \end{equation}\end{linenomath*} which implies that modes $\rho$ with large eigenvalues $\eta_\rho$ are learned more rapidly as $P$ increases than modes with small eigenvalues. 
An important implication of this result is that target functions acting on the same data distribution with higher \textit{cumulative power distributions} $C(\rho)$, defined as the proportion of target power in the first $\rho$ modes \begin{linenomath*} \begin{equation}\label{eq:Ck} C(\rho) = \frac{\sum_{\rho' \leq \rho} \eta_{\rho'} \overline{w}_{\rho'}^2}{\sum_{\rho'} \eta_{\rho'} \overline{w}_{\rho'}^2}, \end{equation} \end{linenomath*} for all $\rho \geq 1$, will have lower generalization error normalized by total target power, $E_g(P)/E_g(0)$, for all $P$ (Methods). Therefore, $C(\rho)$ provides a measure of the compatibility between the kernel and the target, which we name {\it task-model alignment}. We further note that the target function enters the normalized generalization error only through the combinations $C(\rho)-C(\rho-1) = \eta_{\rho} \overline{w}_{\rho}^2/\sum_{\rho} \eta_\rho \overline{w}_\rho^2$. Hence, the kernel eigenvalues, the cumulative power distribution, and the noise parameter completely specify the normalized generalization error. Spectral bias, task-model alignment and noise explain generalization in kernel regression. Generalization error can exhibit non-monotonicity, which can be understood through the bias and variance decomposition \cite{montanari2019generalization,dascoli2020double,dascoli2020triple}, $E_g = B+V$, where $B = \int d{\mathbf x}\, p({\mathbf x}) \left(\left<f^*({\mathbf x}) \right>_{\mathcal{D}} - \bar f({\mathbf x}) \right)^2$ and $V = \left<\int d{\mathbf x} \, p({\mathbf x}) \left(f^*({\mathbf x}) - \left<f^*({\mathbf x}) \right>_{\mathcal{D}} \right)^2 \right>_{\mathcal D}$. We found that the average estimator is given by $\braket{f^*({\mathbf x};P)}_\mathcal{D}= \sum_\rho \frac{P\eta_\rho}{P\eta_\rho + \kappa}\bar w_\rho \psi_\rho({\mathbf x})$, which monotonically approaches the target function as $P$ increases, giving rise to a monotonically decreasing bias (Supplementary Note 2). 
However, the variance term arising from the variance of the estimator over possible sampled datasets $\mathcal{D}$ is potentially non-monotonic as the dataset size increases. Therefore, the total generalization error can exhibit local maxima. \subsection*{Applications to Real Datasets} Next, we evaluate our theory on realistic datasets and show that it predicts kernel regression learning curves with remarkable accuracy. We further elucidate various heuristic generalization principles. To apply our theory, we numerically solve the eigenvalue problem Eq. \eqref{eq:eigenvalue} on the dataset (Methods) and obtain the necessary eigenvalues and eigenfunctions. When solved on a finite dataset, Eq. \eqref{eq:eigenvalue} is an uncentered kernel PCA problem (Methods). We use these eigenfunctions (or eigenvectors for finite data) to express our target function, and the resulting coefficients and kernel eigenvalues to evaluate the generalization error. \begin{figure}[h] \centering \includegraphics[scale = 1]{Figure1_BinaryTask_MNIST.pdf} \caption{\textbf{Effect of task-model alignment on the generalization of kernel regression.} \textbf{a, b.} Projections of digits from MNIST along the top two (uncentered) kernel principal components of 2-layer NTK for 0s vs. 1s and 8s vs. 9s, respectively. \textbf{c.} Learning curves for both tasks. The theoretical learning curves (Eq. \eqref{eq:finitePgenErr}, dashed lines) show strong agreement with experiment (dots). \textbf{d.} The kernel eigenspectra for the respective datasets. \textbf{e.} The cumulative power distributions $C(\rho)$. Error bars show the standard deviation over $50$ trials.}\label{fig:easy_vs_hard_visualization} \end{figure} In our first experiment, we test our theory using a 2-layer NTK \cite{cho2009kernel,jacot2018neural} on two different tasks: discriminating between 0s and 1s, and between 8s and 9s from MNIST dataset \cite{MNIST}. 
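Concretely, the finite-dataset version of Eq.~\eqref{eq:eigenvalue} that we solve is an uncentered kernel PCA problem: eigendecompose $\mathbf{K}/M$ on $M$ samples, read off the eigenvalues, and project the target onto the empirical eigenvectors to obtain the coefficients entering $C(\rho)$. A minimal sketch; the synthetic Gram matrix in the check is an illustrative stand-in for a kernel evaluated on data:

```python
import numpy as np

def empirical_spectrum(K, ybar):
    """Uncentered kernel PCA version of the Mercer eigenvalue problem.

    K:    (M, M) Gram matrix K(x^mu, x^nu) on M samples.
    ybar: (M,) target values on the same samples.
    Returns eigenvalues eta_rho (descending) and squared target coefficients
    wbar2, using the empirical eigenfunctions phi_rho(x^mu) = sqrt(M)*U[mu, rho].
    """
    M = K.shape[0]
    eta, U = np.linalg.eigh(K / M)
    eta, U = eta[::-1], U[:, ::-1]          # sort descending
    proj = U.T @ ybar / np.sqrt(M)          # equals sqrt(eta_rho) * wbar_rho
    wbar2 = np.where(eta > 1e-12, proj ** 2 / np.maximum(eta, 1e-12), 0.0)
    return eta, wbar2

def cumulative_power(eta, wbar2):
    """Task-model alignment curve C(rho)."""
    power = eta * wbar2                      # modal power of the target
    return np.cumsum(power) / np.sum(power)
```

Note that $\sum_\rho \eta_\rho \overline{w}_\rho^2$ recovers the empirical second moment of the target, which provides a convenient consistency check.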
We formulate each of these tasks as a kernel regression problem by considering a vector target function which takes in digits and outputs one-hot labels. Our kernel regression theory can be applied separately to each element of the target function vector (Methods), and a generalization error can be calculated by adding the error due to each vector component. We can visualize the complexity of the two tasks by plotting the projection of the data along the top two kernel principal components (\figref{fig:easy_vs_hard_visualization}a,b). The projection for 0-1 digits appears highly separable compared to 8-9s, and thus simpler to learn to discriminate. Indeed, the generalization error for the 0-1 discrimination task falls more rapidly than the error for the 8-9 task (\figref{fig:easy_vs_hard_visualization}c). Our theory is in remarkable agreement with experiments. Why is 0-1 discrimination easier for this kernel? \figref{fig:easy_vs_hard_visualization}d shows that the eigenvalues of the NTK evaluated on the data are very similar for both datasets. To quantify the compatibility of the kernel with the tasks, we measure the cumulative power distribution $C(\rho)$. Even though in this case the data distributions are different, $C(\rho)$ is still informative. \figref{fig:easy_vs_hard_visualization}e illustrates that $C(\rho)$ rises more rapidly for the easier 0-1 task and more slowly for the 8-9 task, providing a heuristic explanation of why it requires a greater number of samples to learn. We next test our theory for Gaussian RBF kernel on the MNIST \cite{MNIST} and CIFAR \cite{CIFAR} datasets. \figref{fig:DoubleDescent_MNISTCIFAR}a shows excellent agreement between our theory and experiments for both. \figref{fig:DoubleDescent_MNISTCIFAR}b shows that the eigenvalues of the Gaussian RBF kernel evaluated on data are similar for MNIST and CIFAR-10. The cumulative powers $C(\rho)$ (\figref{fig:DoubleDescent_MNISTCIFAR}c), however, are very different. 
Placing more power in the first few modes makes learning faster. When the labels have nonzero noise $\sigma^2>0$ (\figref{fig:DoubleDescent_MNISTCIFAR}d,e), generalization error is non-monotonic with a peak, a feature that has been named ``double-descent'' \cite{belkin2019reconciling,loog2020brief}. By decomposing $E_g$ into the bias and the variance of the estimator, we see that the non-monotonicity is caused solely by the variance (\figref{fig:DoubleDescent_MNISTCIFAR}d,e). Similar observations about variance were made in different contexts before \cite{montanari2019generalization, dascoli2020double,nakkiran2020optimal}. \begin{figure} \centering \includegraphics[scale = 1]{Figure2_MNIST_CIFAR_RBF_BiasVariance.pdf} \caption{\textbf{Gaussian RBF kernel regression on MNIST and CIFAR-10 datasets.} Kernel is $K(\mathbf{x},\mathbf{x}')=e^{-\frac{1}{2 D \omega^2} ||\mathbf{x}-\mathbf{x}'||^2}$ with kernel bandwidth $\omega = 0.1$, ridge parameter $\lambda = 0.01$ and $D$ being the size of images. \textbf{a.} Generalization error $E_g(P)$ when $\sigma^2=0$: Solid lines are theory (Eq. \eqref{eq:finitePgenErr}), dots are experiments. \textbf{b.} Kernel eigenvalues and \textbf{c.} cumulative powers $C(\rho)$ for MNIST and CIFAR-10. \textbf{d, e.} Generalization error when $\sigma^2=0.5$ with its bias-variance decomposition for MNIST and CIFAR-10 datasets, respectively. Solid lines are theory, markers are experiments. Error bars represent standard deviation over $160$ trials. 
Bias and variance are obtained by calculating the mean and variance of the estimator over $150$ trials, respectively.} \label{fig:DoubleDescent_MNISTCIFAR} \end{figure} These experiments and discussion in the previous section provide illustrations of the three main heuristics about how dataset, kernel, target function, and noise interact to produce generalization in kernel regression: \begin{enumerate} \item \textit{Spectral Bias}: Kernel eigenfunctions $\phi_\rho$ with large eigenvalues $\eta_\rho$ can be estimated with kernel regression using a smaller number of samples. \item \textit{Task-Model Alignment}: Target functions with most of their power in top kernel eigenfunctions can be estimated efficiently and are compatible with the chosen kernel. We introduce cumulative power distribution, $C(\rho)$, as defined in Eq. \eqref{eq:Ck}, as a measure of this alignment. \item \textit{Non-monotonicity}: Generalization error may be non-monotonic with dataset size in the presence of noise (as in \figref{fig:DoubleDescent_MNISTCIFAR}), or when the target function is not expressible by the kernel (not in the RKHS). We provide a discussion of and examples for the latter kind in Supplementary Notes 3 and 4. We show that modes of the target function corresponding to zero eigenvalues of the kernel act effectively as noise on the learning problem. \end{enumerate} To explore these phenomena further and understand their causes, we study several simplified models where the kernel eigenvalue problem and generalization error equations can be solved analytically. \subsection*{Double-Descent Phase Transition in a Band-Limited RKHS} An explicitly solvable and instructive case is the white band-limited RKHS with $N$ equal nonzero eigenvalues, a special case of which is linear regression. Later, we will observe that the mathematical description of rotation invariant kernels on isotropic distributions reduces to this simple model in each learning stage. 
In this model, the kernel eigenvalues are equal $\eta_\rho = \frac{1}{N}$ for a finite number of modes $\rho=1,\ldots,N$ and truncate thereafter: $\eta_\rho = 0$ for $\rho>N$. Similarly, the target power $\overline{w}_\rho^2$ truncates after $N$ modes and satisfies the normalization condition $\sum_{\rho=1}^N \overline{w}_\rho^2 = N$. In Supplementary Note 3, we relax these constraints and discuss their implications. Linear regression (or linear perceptron) with isotropic data is a special case when $D=N$, $\phi_{\rho}(\mathbf{x}) = x_{\rho}$, and $\braket{x_{\rho} x_{\rho'}}_{{\mathbf x}\sim p({\mathbf x})} = \delta_{\rho\rho'}$ \cite{krogh}. We study this model in the thermodynamic limit. We find that the natural scaling is to take $P \to \infty$ and $N \to \infty$ with $\alpha = P/N \sim \mathcal{O}(1)$, and $D \sim \mathcal{O}(1)$ (or $D=N\sim\mathcal{O}(P)$ in the linear regression case), leading to the generalization error: \begin{linenomath*}\begin{equation}\label{Egband} \begin{gathered} E_g(\alpha,\lambda,\sigma^2) = \frac{\kappa_\lambda(\alpha)^2+\sigma^2\alpha}{(\kappa_\lambda(\alpha)+\alpha)^2 - \alpha}, \\ \kappa_\lambda(\alpha) = \frac{1}{2} \left[(1+{\lambda}-\alpha) + \sqrt{(1+\lambda-\alpha)^2 + 4\lambda \alpha} \right]. \end{gathered} \end{equation}\end{linenomath*} Note that the result is independent of the teacher weights as long as they are properly normalized. The function $\kappa_\lambda(\alpha)$ appears in many contexts relevant to random matrix theory, as it is related to the resolvent, or Stieltjes transform, of a random Wishart matrix \cite{bai_silverstein, Advani_Ganguli_optimal} (Supplementary Note 3). This simple model shows interesting behavior, elucidating the role of regularization and under- vs. over-parameterization in learning machines. 
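The closed-form expressions in Eq.~\eqref{Egband} are easy to evaluate; the sketch below also locates the maximum of a noisy learning curve numerically (the ridge, noise level, and grid are illustrative choices):

```python
import numpy as np

def kappa_band(alpha, lam):
    """kappa_lambda(alpha) for the white band-limited spectrum."""
    a = 1.0 + lam - alpha
    return 0.5 * (a + np.sqrt(a * a + 4.0 * lam * alpha))

def Eg_band(alpha, lam, sigma2):
    """Closed-form generalization error E_g(alpha, lambda, sigma^2)."""
    k = kappa_band(alpha, lam)
    return (k * k + sigma2 * alpha) / ((k + alpha) ** 2 - alpha)

# Illustrative double-descent curve: small ridge, noisy labels.
alpha = np.linspace(0.1, 3.0, 581)
curve = Eg_band(alpha, lam=0.05, sigma2=1.0)
peak = alpha[np.argmax(curve)]   # the noisy curve peaks close to alpha = 1
```

For $\lambda = \sigma = 0$ this reproduces $E_g = 1 - \alpha$ below the interpolation threshold, and for $\lambda = 0$, $\alpha > 1$ it gives $E_g = \sigma^2/(\alpha - 1)$.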
\begin{figure} \centering \includegraphics[scale = 1]{Figure3_WhiteBandLimited.pdf} \caption{\textbf{Learning curves and double-descent phase diagram for kernels with white band-limited spectra.} We simulated $N=800$ dimensional uncorrelated Gaussian features $\bm\phi({\mathbf x}) = {\mathbf x} \sim \mathcal{N}(0,{\bf I})$ and estimated a linear function $\bar{f}({\mathbf x}) = \bm\beta^\top {\mathbf x}$ with $||\bm\beta||^2 = N$. Error bars describe the standard deviation over $15$ trials. Solid lines are theory (Eq. \eqref{Egband}), dots are experiments. \textbf{a.} When $\lambda = 0$ and $\sigma^2=0$, $E_g$ linearly decreases with $\alpha$ and when $\sigma^2> 0$ it diverges as $\alpha \to 1$. \textbf{b.} When $\sigma^2=0$, explicit regularization $\lambda$ always leads to slower decay in $E_g$. \textbf{c.} For nonzero noise $\sigma^2>0$, there is an optimal regularization $\lambda^* = \sigma^2$ which gives the best generalization performance. \textbf{d.} Double-descent phase diagram where the colored squares correspond to the curves with same color in panel \textbf{c}. Optimal regularization ($\lambda^* = \sigma^2$) curve is shown in yellow dashed line which does not intersect the double-descent region above the curve defined by $g(\lambda)$ (Eq. \eqref{eq:phase_boundary_white}).} \label{fig:white} \end{figure} First we consider the interpolation limit ($\lambda = 0$, \figref{fig:white}a). The generalization error simplifies to $E_g = (1-\alpha) \Theta(1-\alpha) + \frac{\sigma^2}{1-\alpha} \left[ \alpha \Theta(1-\alpha) - \Theta(\alpha-1) \right]$. There is a first order phase transition at $\alpha_c = 1$, when the number of samples $P$ is equal to the number of non-zero modes $N$ and therefore to the number of parameters, $\lbrace \bar w_{\rho}\rbrace$, that define the target function. The phase transition is signaled by the non-analytic behavior of $E_g$ and verifiable by computing the first-derivative of free energy (Supplementary Note 3). 
When $\sigma = 0$, $E_g$ linearly falls with more data and at the critical point generalization error goes to zero. With noise present, the behavior at the critical point changes drastically, and there is a singular peak in the generalization error due to its noise term (\figref{fig:white}a). At this point the kernel machine is (over-)fitting exactly all data points, including noise. Then, as the number of samples increases beyond the number of parameters ($\alpha > 1$), the machine is able to average over noise and the generalization error falls with asymptotic behavior $E_g \sim \sigma^2/\alpha$. Our results are consistent with those previously obtained for the linear perceptron with a noisy target \cite{krogh,sollich1994finite}, which is a special case of kernel regression with a white band-limited spectrum. When $\lambda > 0$ and $\sigma = 0$, $E_g$ decreases monotonically with $\alpha$ and is asymptotic to $E_g \sim {{\lambda}^2}/{\alpha^2}$ (\figref{fig:white}b). A sharp change at $\alpha = 1$ is visible for small $\lambda$, reminiscent of the phase transition at $\lambda = 0$. When $\sigma > 0$ is sufficiently large compared to $\lambda$, non-monotonicity is again present, giving maximum generalization error at $\alpha \approx 1 + \lambda$ (\figref{fig:white}c), with an asymptotic fall $E_g \sim \frac{\sigma^2}{\alpha}$. We find that $E_g(\alpha)$ is non-monotonic when the noise level in the target satisfies the following inequality (\figref{fig:white}d and Supplementary Note 3): \begin{linenomath*}\begin{align}\label{eq:phase_boundary_white} \sigma^2 > \begin{cases} g(\lambda) & \lambda< 1 \\ 2\lambda+1 & \lambda\geq 1 \end{cases}, \end{align}\end{linenomath*} where $g(\lambda) = 3\lambda\big[3\lambda+2-2\sqrt{1+\lambda}\sqrt{9\lambda+1}\cos\theta(\lambda)\big]$, and $ \theta(\lambda) = \frac{1}{3}\left(\pi+\tan^{-1}\frac{8\sqrt{\lambda}}{9\lambda(3\lambda+2)-1}\right)$. 
Although there is no strict phase transition (in the sense of non-analytic free energy) except at $\lambda=0$, Eq. \eqref{eq:phase_boundary_white} defines a phase boundary separating the monotonic and non-monotonic learning curve regions for a given regularization parameter and noise. For a given $\lambda$, double-descent occurs for sufficiently high $\sigma^2$. In the non-monotonic region, there is a single local maximum when $\sigma^2 > 2\lambda + 1$, and otherwise a local minimum followed by a local maximum (we refer only to this kind of peak as the double-descent peak). Based on this explicit formula, one could choose a large enough $\lambda$ to mitigate the peak and avoid overfitting for a given noise level (\figref{fig:white}d). However, larger $\lambda$ may imply slower learning (see \figref{fig:white}b and Supplementary Note 3), requiring more training samples to achieve the same generalization error. By solving $\frac{\partial E_g}{\partial \lambda}=0$, we find that $\lambda^* = \sigma^2$ (yellow dashed line in \figref{fig:white}d) is the optimal choice for the ridge parameter, minimizing $E_g(\alpha)$ for a given $\sigma^2$ at all $\alpha$ (\figref{fig:white}c). For $\lambda > \lambda^*$ the noise-free error term increases from the optimum, whereas $\lambda<\lambda^*$ gives a larger noise term. Our result agrees with a similar observation regarding the existence of an optimal ridge parameter in linear regression \cite{nakkiran2020optimal}. Further insight into the phase transition can be gained by looking at the bias and the variance of the estimator \cite{montanari2019generalization,dascoli2020double,dascoli2020triple}.
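The optimality of $\lambda^* = \sigma^2$ can be checked numerically. The sketch below (ours) assumes the ridge-regularized white band-limited error takes the form $E_g = (\tilde\kappa^2 + \sigma^2\alpha)/\big((\tilde\kappa+\alpha)^2-\alpha\big)$ with $\tilde\kappa = \frac{1}{2}(1+\lambda-\alpha)+\frac{1}{2}\sqrt{(\alpha+1+\lambda)^2-4\alpha}$; its $\lambda = 0$ limit reproduces the interpolation-limit formula given earlier:

```python
import math

def eg_white(alpha, lam, sigma2):
    """White band-limited E_g with ridge lam and noise sigma2 -- a sketch
    whose lam = 0 limit reproduces the ridgeless expression in the text."""
    kappa = (0.5 * (1 + lam - alpha)
             + 0.5 * math.sqrt((alpha + 1 + lam)**2 - 4 * alpha))
    return (kappa**2 + sigma2 * alpha) / ((kappa + alpha)**2 - alpha)

# lam* = sigma^2 minimizes E_g at every alpha (checked on a small grid)
sigma2 = 0.5
for alpha in [0.25, 0.5, 1.0, 2.0, 4.0]:
    best = eg_white(alpha, sigma2, sigma2)          # lam = lam* = sigma^2
    assert all(best <= eg_white(alpha, lam, sigma2) + 1e-12
               for lam in [0.05, 0.1, 0.25, 1.0, 2.0, 5.0])
```

The grid check illustrates that the optimal ridge is independent of the sample size $\alpha$, as stated above.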
The average estimator learned by kernel regression linearly approaches the target function as $\alpha \to 1$ (Supplementary Note 2): $\left< f^*({\mathbf x}) \right>_{\mathcal{D}} = \min\{\alpha,1\} \bar f({\mathbf x})$ (\figref{fig:bias_variance_estimator}a), which indicates that the bias ($B$) and variance ($V$) contributions to generalization error have the forms $B = \max\{0,1-\alpha\}^2$, $ V = \alpha(1-\alpha) \Theta(1-\alpha) + \frac{\sigma^2}{1-\alpha} \left[ \alpha \Theta(1-\alpha) - \Theta(\alpha-1) \right]$. In the absence of noise, $\sigma = 0$, the variance is initially low at small $\alpha$, reaches its maximum at $\alpha = 1/2$ and then decreases as $\alpha \to 1$ as the learned function concentrates around $\bar f$ (\figref{fig:bias_variance_estimator}b). When there is noise, the phase transition at $\alpha = 1$ arises from the divergence in the variance $V$ of the learned estimator (\figref{fig:bias_variance_estimator}c). \begin{figure}[h] \centering \includegraphics[scale = 1]{Figure4_bias_variance_predictor.pdf} \caption{\textbf{Bias-variance decomposition of generalization error.} \textbf{a.} Average estimator for kernel regression with $K(x,x') = \sum_{k=1}^N \cos(k(x-x'))$ on target function $\bar f(x) = e^{4(\cos x -1 )}$ with mean subtracted for different values of $\alpha = P/N$ when $\lambda = \sigma^2=0$. The estimator linearly approaches the target function and estimates it perfectly when $\alpha = 1$. Dashed lines are theory. \textbf{b.} With the same setting as in \figref{fig:white}, when $\lambda = 0$ and $\sigma^2=0$, the bias is a monotonically decreasing function of $\alpha$ while the variance has a peak at $\alpha = 1/2$, yet it does not diverge. \textbf{c.} When $\lambda = 0$ and $\sigma^2=0.2$, we observe that the divergence of $E_g$ is only due to the diverging variance of the estimator. In \textbf{b.} and \textbf{c.}, solid lines are theory, dots are experiments.
Error bars represent the standard deviation over $15$ trials.} \label{fig:bias_variance_estimator} \end{figure} \subsection*{Multiple Learning Episodes and Descents: Rotation Invariant Kernels and Measures}\label{Spherical} Next, we study kernel regression on high dimensional spheres focusing on rotation invariant kernels, which satisfy $K(\mathbf{O} \mathbf{x},\mathbf{O} \mathbf{x}') = K(\mathbf{x},\mathbf{x}')$, where $\mathbf{O}$ is an arbitrary orthogonal matrix. This broad class of kernels includes widely used radial basis function kernels $K(\mathbf{x},\mathbf{x}') = K(||\mathbf{x}-\mathbf{x}'||)$ (Gaussian, Laplace, Matern, rational quadratic, thin plate splines, etc.) and dot product kernels $K(\mathbf{x},\mathbf{x}') = K(\mathbf{x} \cdot \mathbf{x}')$ (polynomial kernels, NNGPK and NTK) \cite{ cho2009kernel,genton2001classes,RasmussenWilliams}. When the data distribution is spherically isotropic $p(\mathbf{x}) = p(||\mathbf{x}||)$, we can separate Mercer eigenfunctions for rotation invariant kernels into radial and angular parts. The radial parts depend on the radial distances of the data points $||{\mathbf x}||$, whereas the angular components admit a decomposition in terms of spherical harmonics of the unit vectors $Y_{km}(\hat x)$, where $k$ is the degree and $m$ is the order of the harmonic \cite{dai2013spherical}. A review of the basic properties of spherical harmonics is provided in Supplementary Note 6. Utilizing this spherical symmetry, we obtain the following Mercer decomposition $K(\mathbf{x},\mathbf{x}') = \sum_{z k m} \eta_{z,k} R_{z,k}(||\mathbf{x}||) R_{z,k}(||\mathbf{x}'||) Y_{km}(\mathbf{\hat{x}}) Y_{km}(\mathbf{\hat{x}'})$. Since the eigenvalues are independent of the spherical harmonic order $m$, the minimal degeneracy of the RKHS spectrum is the number of degree-$k$ harmonics, which in the limit $D \to \infty$ scales as $\sim {D^k}/{k!}$ \cite{bietti2019inductive,yang2020finegrained}.
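The degeneracy counting is standard; a quick numerical check (ours) of the $D^k/k!$ scaling, using the textbook formula for the number of degree-$k$ spherical harmonics in $D$ ambient dimensions:

```python
from math import comb, factorial

def num_harmonics(D, k):
    """Number of degree-k spherical harmonics in D ambient dimensions,
    N(D,k) = C(D+k-1, k) - C(D+k-3, k-2) (standard counting formula)."""
    if k == 0:
        return 1
    if k == 1:
        return D
    return comb(D + k - 1, k) - comb(D + k - 3, k - 2)

# Familiar case: on the sphere in D = 3 there are 2k+1 harmonics of degree k.
assert [num_harmonics(3, k) for k in range(4)] == [1, 3, 5, 7]

# Large-D scaling N(D,k) ~ D^k / k!: the ratio approaches 1 as D grows.
D, k = 2000, 3
print(num_harmonics(D, k) / (D**k / factorial(k)))  # close to 1
```

This rapid growth of $N(D,k)$ with $k$ is what makes the staged $P, D \to \infty$ limits considered next well separated.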
However, the degeneracy can be even larger if there are different $(z,k)$ indices with the same eigenvalue. For notational convenience, we denote degenerate eigenvalues as $\eta_{K}$ ($K\in \mathbb{Z}^+$) and corresponding eigenfunctions as $\phi_{K,\rho}$, where $\rho \in \mathbb{Z}^+$ indexes the degenerate indices. After finding the eigenvalues of a kernel on the basis $\phi_{K,\rho}$, one can evaluate Eq. \eqref{eq:finitePgenErr} to predict the generalization error of the kernel machine. We focus on the case where the degeneracy of $\eta_{K}$ is $N(D,K) \sim \mathcal{O}_D(D^K)$. Correspondingly, for finite kernel power $\left< K(\mathbf{x},\mathbf{x}) \right>_{\mathbf{x} \sim p(\mathbf{x})}$, the eigenvalues must scale with dimension like $\eta_{K} \sim \mathcal{O}_D(D^{-K})$ \cite{cohen2019learning,bordelon2020spectrum}. Examples include the widely used Gaussian kernel and dot product kernels such as NTK, which we discuss below and in Supplementary Note 4. This scaling from the degeneracy allows us to consider multiple $P, D \to \infty$ limits leading to different learning stages. We consider a separate limit for each degenerate eigenvalue $L$ while keeping $\alpha \equiv P/N(D,L)$ finite. With this setting, we evaluate Eq. \eqref{eq:finitePgenErr} with definitions $ \bar\eta_K \equiv N(D,K)\eta_K$, $\bar w_{K}^2 \equiv \frac{1}{N(D,K)}\sum_\rho \bar w_{K,\rho}^2$, to obtain the generalization error in learning stage $L$: \begin{linenomath*}\begin{equation}\label{eq:asympGenErr} \begin{gathered} E^{(L)}_g(\alpha)=\bar\eta_{L}\bar w_{L}^2 \frac{\tilde \kappa^2+\tilde\sigma_{L}^2\alpha}{(\tilde \kappa + \alpha)^2-\alpha}+\sum_{K>L}\bar\eta_{K}\bar w_{K}^2,\\ \tilde \kappa(\alpha) = \frac{1}{2}(1+\tilde\lambda_{L}-\alpha)+\frac{1}{2}\sqrt{(\alpha+1+\tilde\lambda_{L})^2-4\alpha},\\ \tilde \sigma_{L}^2 \equiv \frac {\sigma^2+E^{(L)}_g(\infty)}{\bar\eta_{L}\bar w_{L}^2},\quad \tilde\lambda_{L} \equiv \frac{\lambda+\sum_{K>L}\bar\eta_{K}}{\bar\eta_{L}}.
\end{gathered} \end{equation}\end{linenomath*} Several observations can be made: \begin{enumerate}[noitemsep,topsep=0pt,leftmargin=*] \item We note that $E_g^{(L)}(0) = \bar\eta_L\bar w_{L}^2 + \sum_{K> L}\bar\eta_{K}\bar w_{K}^2 = \bar\eta_L\bar w_{L}^2+E^{(L)}_g(\infty)$. In the learning stage $L$, the generalization error due to all target modes with $K<L$ has already decayed to zero. As $\alpha \to \infty$, the $K=L$ modes of the target function are learned, leaving the $K>L$ modes. This illustrates an inductive bias towards learning target function modes corresponding to higher kernel eigenvalues. \item $E_g^{(L)}(\alpha)-E^{(L)}_g(\infty)$ reduces, up to a multiplicative constant $\bar\eta_{L}\bar w_{L}^2$, to the generalization error in the band limited case, Eq. \eqref{Egband}, with the identification of an \textit{effective noise parameter}, $\tilde \sigma_L$, and an \textit{effective ridge parameter}, $\tilde\lambda_{L}$. Inspection of $\tilde \sigma_L$ reveals that target modes with $K>L$ ($E^{(L)}_g(\infty)$) act as noise in the current stage. Inspection of $\tilde\lambda_{L}$ reveals that kernel eigenvalues with $K>L$ act as a regularizer in the current stage. The role of the number of eigenvalues in the white band limited case, $N$, is played here by the degeneracy $N(D,L)$. \item Asymptotically, the first term in $E_g^{(L)}(\alpha)$ decreases monotonically as $\alpha^{-2}$, while the second term shows non-monotonic behavior with a maximum at $\alpha = 1+\tilde\lambda_{L}$. Similar to the white band-limited case, the generalization error diverges due to variance explosion at $\alpha = 1+\tilde\lambda_{L}$ when $\tilde\lambda_{L} = 0$ (possible when $\lambda = 0$ and the spectrum is band-limited with no modes beyond $L$), implying again a first order phase transition. Non-monotonicity caused by the noise term implies a possible peak in the generalization error in each learning stage. A phase diagram can be drawn, where phase boundaries are again defined by Eq.
\eqref{eq:phase_boundary_white} evaluated with the effective ridge and noise parameters, \figref{fig:dot_prod_phase}a. \item Similar to the white band limited case, optimal regularization happens when \begin{linenomath*}\begin{equation} \tilde\lambda_{L}=\tilde \sigma_L^2, \end{equation}\end{linenomath*} minimizing $E_g^{(L)}(\alpha)$ for a given $\tilde \sigma_L$ at all $\alpha$. This result extends the previous findings on linear regression \cite{nakkiran2020optimal} to the large class of rotation invariant kernels. \item When all stages are considered, it is possible to observe learning curves with \textit{multiple descents}, with at most one peak per stage. The presence and size of the descent peak depends on the level of noise in the data and the effective regularization, as shown in Eq. \eqref{eq:phase_boundary_white} and Eq. \eqref{eq:asympGenErr}. Similar observations of multiple peaks in the learning curves were made in \cite{liang_isometry} in the context of ridgeless regression on polynomial kernels. \end{enumerate} As an example of the effect of the kernel spectrum on double-descent, consider a power law $\bar\eta_K = K^{-s}$ where $s\geq 1$. Then $\tilde\lambda_L=L^s\big(\zeta(s,L)+\lambda\big)-1 \approx \frac{L}{s-1}+\lambda L^s,\;\;(L\gg 1)$, where $\zeta(s,L)$ is the Hurwitz zeta function. In the ridgeless $\lambda = 0$ case, faster decaying spectra (higher $s$, smaller $\tilde\lambda_L$) are more prone to double-descent than slower ones (\figref{fig:dot_prod_phase}a). We also observe that higher modes (higher $L$, higher $\tilde\lambda_L$) are more immune to overfitting, signaled by non-monotonicity, than the lower modes. \begin{figure} \centering \includegraphics[scale=1]{Figure5_RBF_Gen_std.pdf} \caption{\textbf{Gaussian RBF kernel regression on high-dimensional spherical data.} \textbf{a.} Phase diagram for non-monotonic learning curves obtained from the theory by counting the zeros of $\frac{\partial E_g}{\partial \alpha}$.
Colored squares and colored circles correspond to the curves in panels \textbf{c} and \textbf{d}, respectively. \textbf{b.} Kernel regression with Gaussian RBF $K(\mathbf{x},\mathbf{x}')=e^{-\frac{1}{2 D \omega^2} ||\mathbf{x}-\mathbf{x}'||^2}$ with $\omega = 3$, $D = 100$ and noise-free labels. Target is $\bar f({\mathbf x}) = \sum_{k,m}\bar w_{km}\sqrt{\eta_{km}}Y_{km}({\mathbf x})$ with random and centered weights $\bar w_{km}$ such that $\braket{\bar w_{km}^2} = \eta_{km}$ (Supplementary Note 5). Dashed lines represent the locations of $N(D,1)$ and $N(D,2)$, showing different learning stages. \textbf{c, d.} Generalization error for the Gaussian RBF kernel for various kernel widths $\omega$, corresponding to the specific $\tilde\lambda_L$'s and noise variances $\tilde\sigma_L$ indicated in the phase diagram, at $D = 100$. Solid lines - theory (Eq. \eqref{eq:finitePgenErr}). Larger regularization suppresses the descent peaks, which occur at $P^* \sim N(D,L)$, shown by the vertical dashed lines. \textbf{c.} Varying $\tilde\lambda_L$ with $\tilde\sigma_L$ fixed. \textbf{d.} Varying $\tilde\sigma_L$ with $\tilde\lambda_L$ fixed. For fixed noise, we observe an optimal $\tilde\lambda_1$ up to $P/N(D,1)\sim 10$, after which the next learning stage starts. Error bars indicate standard deviation over $300$ trials for \textbf{b} and $100$ trials for \textbf{c, d}.}\label{fig:dot_prod_phase} \end{figure} We apply our theory to Gaussian RBF regression on synthetic data in \figref{fig:dot_prod_phase}, where \figref{fig:dot_prod_phase}b demonstrates a perfect agreement between theory and experiment when no label noise is present. The vertical dashed lines represent the locations where $P = N(D,1)$ and $P = N(D,2)$, respectively. \figref{fig:dot_prod_phase}c shows the regression experiment with the parameters $(\tilde\sigma_1^2,\tilde\lambda_1)$ indicated by colored squares on the phase diagram (\figref{fig:dot_prod_phase}a).
When the parameters are chosen on the yellow dashed line in \figref{fig:dot_prod_phase}a, corresponding to the optimal regularization for fixed effective noise, the lowest generalization error is achieved in the first learning episode without a double-descent peak. Finally, \figref{fig:dot_prod_phase}d shows the theory and experiment curves with the parameters $(\tilde\sigma_1^2,\tilde\lambda_1)$ shown by the colored circles in \figref{fig:dot_prod_phase}a. As expected, for fixed effective regularization, increasing noise hurts generalization. For further experiments see Supplementary Note 4. \subsection*{Dot Product Kernels, NTK and Wide Neural Networks} Our theory allows the study of the generalization error of trained, wide feedforward neural networks by exploiting a correspondence with kernel regression. When the weights in each layer are initialized from a Gaussian distribution $\mathcal{N}(0,\sigma^2_W)$ and the size of the hidden layers tends to infinity, the function $f(\mathbf{x},\bm{\theta})$ learned by training the network parameters $\bm{\theta}$ with gradient descent on a squared loss to zero training error is equivalent to the function obtained from ridgeless ($\lambda=0$) kernel regression with the NTK: $ \mathbf{K}_{NTK} (\mathbf{x}_i, \mathbf{x}_j) = \nabla_{\bm{\theta}} f(\mathbf{x}_i, \bm{\theta}_0) \cdot \nabla_{\bm{\theta}} f(\mathbf{x}_j, \bm{\theta}_0)$ \cite{jacot2018neural}. For fully connected neural networks, the NTK is a dot product kernel $K_{NTK}(\mathbf{x}, \mathbf{x}') = K_{NTK}(\mathbf{x}\cdot \mathbf{x}')$ \cite{jacot2018neural,bordelon2020spectrum}. For such kernels and spherically symmetric data distributions $p(\mathbf{x})=p(\Vert\mathbf{x}\Vert)$, the kernel eigenfunctions do not have a radial part, and consequently the eigenvalues are free of a $z$-index.
Therefore, the $k$-th eigenvalue has the degeneracy of the degree-$k$ spherical harmonics, $\mathcal{O}_D(D^k)$, ($K,L\to k,l$ and $\rho \to m$) \cite{bordelon2020spectrum}, allowing recourse to the same scaling we used to analyze rotation invariant kernels in the previous section. The learning curves for infinitely wide neural networks will thus have the same form as in Eq. \eqref{eq:asympGenErr}, evaluated with NTK eigenvalues and with $\lambda = 0$. In \figref{fig:neural_network_exp}a, we compare the prediction of our theoretical expression for $E_g$, Eq. \eqref{eq:finitePgenErr}, to NTK regression and neural network training. The match to NTK regression is excellent. We can describe neural network training up to a certain $P$, after which the correspondence to NTK regression breaks down due to the network's finite width. For large $P$, the neural network operates in the under-parameterized regime, where the network initialization variance due to the finite number of parameters starts contributing to the generalization error \cite{belkin2019reconciling, montanari2019generalization, montanari2019surprises, dascoli2020double}. A detailed discussion of these topics is provided in Supplementary Note 4. Neural networks are thought to generalize well because of implicit regularization \cite{zhang2016understanding,bietti2019inductive,yang2020finegrained}. This can be addressed with our formalism. For spherical data, we see that the implicit regularization of a neural network for each mode $l$ is given by $\tilde\lambda_l = \frac{\sum_{k>l}\bar\eta_k}{\bar\eta_l}$. As an example, we calculate the spectrum of the NTK for rectifier activations, and observe that the spectrum whitens with increasing depth \cite{yang2020finegrained}, corresponding to larger $\tilde\lambda_{l}$ and therefore more regularization for small learning stages $l$ (\figref{fig:neural_network_exp}b).
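How spectral decay sets the effective regularization $\tilde\lambda_l$ can be made concrete with the power-law spectrum $\bar\eta_K = K^{-s}$ introduced earlier. A small self-contained check (ours) at $s = 2$, where the Hurwitz zeta function reduces to $\zeta(2,L) = \pi^2/6 - \sum_{k<L} k^{-2}$:

```python
import math

def eff_reg_sum(L, lam=0.0, n=500_000):
    """lambda_L = (lam + sum_{K>L} eta_K) / eta_L for eta_K = K^{-2},
    via a direct (truncated) tail sum over the higher modes."""
    tail = sum(k**-2.0 for k in range(L + 1, n))
    return (lam + tail) * L**2

def eff_reg_closed(L, lam=0.0):
    """Closed form L^s (zeta(s,L) + lam) - 1 at s = 2, using
    zeta(2,L) = pi^2/6 - sum_{k<L} k^{-2}."""
    hurwitz = math.pi**2 / 6 - sum(k**-2.0 for k in range(1, L))
    return L**2 * (hurwitz + lam) - 1

print(eff_reg_sum(5), eff_reg_closed(5))   # both ~ 4.53; approx. L/(s-1) = 5
# Higher modes carry a larger effective ridge, hence resist overfitting:
assert eff_reg_closed(10) > eff_reg_closed(5)
```

Slower-decaying (whiter) spectra, such as those of deeper NTKs, put more weight in the tail sum and thus yield larger $\tilde\lambda_l$ at small $l$.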
The trend for small degree harmonics $l$ is especially relevant since, as we have shown, approximately $D^l$ samples are required to learn degree-$l$ harmonics. In this small $l$ regime, we see that deep networks exhibit much higher effective regularization than shallow ones due to the slower decay of their eigenvalues. \begin{figure}[ht] \centering \includegraphics[scale=1]{Figure6_NeuralNetwork.pdf} \caption{\textbf{Comparison of our theory with finite width neural network experiments.} \textbf{a.} 2-layer NTK regression and corresponding neural network training using the NeuralTangents package \cite{neuraltangents2020} with 50000 hidden units for $D = 25$ with varying noise levels chosen according to $g(\lambda)$. The target function is a single degree mode $\bar f({\mathbf x}) = c_k Q^{(D-1)}_k(\bm{\beta}\cdot{\mathbf x})$, where $c_k$ is a constant, $\bm{\beta}$ is a random vector, and $Q_k^{(D-1)}$ is the $k$-th Gegenbauer polynomial (see Supplementary Notes 5 and 6). Here we picked $k=1$ (linear target). Solid lines are the theory predicted learning curves (Eq. \eqref{eq:finitePgenErr}), dots represent NTK regression and $\times$ represents $E_g$ after neural network training. The correspondence between NN training and NTK regression breaks down at large sample sizes $P$, since the network operates in the under-parameterized regime and finite-size effects become dominant in $E_g$. Error bars represent the standard deviation of $15$ averages for kernel regression and $5$ averages for neural network experiments. \textbf{b.} Dependence of $\tilde\lambda_l$ on mode $l$ for NTKs of various depths. The weight and bias variances for the neural network are chosen to be $\sigma^2_W = 1$ and $\sigma^2_b = 0$, respectively.} \label{fig:neural_network_exp} \end{figure} \section*{Discussion} We studied generalization in kernel regression using statistical mechanics and the replica method \cite{mezard1987spin}. We derived an analytical expression for the generalization error, Eq.
\eqref{eq:finitePgenErr}, valid for any kernel and any dataset. We showed that our expression explains generalization on real datasets, and provided a detailed analysis of its application to band-limited kernels with white spectra and the widely used class of rotation invariant kernels \cite{genton2001classes,RasmussenWilliams} operating on spherical data. For the latter case, we defined an effective regularization and an effective noise which govern the generalization behavior, including the non-monotonicity of learning curves. It will be interesting to see if analogues of these concepts can be defined for real datasets. Our results are directly applicable to the infinite-width limit of neural networks that admit a kernel description (including feedforward, convolutional and recurrent neural networks) \cite{jacot2018neural, neuraltangents2020, alemohammad2020recurrent, arora2019exact, yang2019tensor1, yang2020tensor2}, and explain their inductive bias towards simple functions \cite{neyshabur_inductivebias, lee2019wide, jacot2020implicit, chizat2020implicit, bietti2019inductive, ghorbani_twolayer, soudry_implicitbias}. We also note a closely related recent study \cite{jacot2020kernel}, which we discuss further in Supplementary Discussion, that utilizes random matrix theory to study generalization in kernel regression. One goal of our present work is to provide a framework that incorporates structural information about the data distribution into a realistic prediction of generalization performance that holds for real data and any kernel. Indeed, a recent study suggested that structure in data allows kernel methods to outperform pessimistic generalization expectations based on the high ambient dimension \cite{spigler2020asymptotic}. In a different setting, the authors of \cite{liao2020random} calculate the test error of a random Fourier features model using random matrix theory techniques without strong assumptions on the data distribution and obtain excellent agreement on real datasets.
Overall, our results demonstrate how data and the inductive biases of a model interact to shape generalization behavior, and in particular the importance of the compatibility of a learning task with the model for sample-efficient learning. Our findings elucidate three heuristic principles for generalization in kernel regression. First is the spectral bias. The eigendecomposition of the kernel provides a natural ordering of functions which are easiest to estimate. Decomposing the generalization error into modal errors, we found that errors in spectral modes with large eigenvalues decrease more rapidly with increasing sample size than modes with small eigenvalues, as also observed in \cite{bordelon2020spectrum}, illustrating a preference to fit certain functions over others. Our findings are consistent with other experimental and analytical results which derive error bounds on test risk to elucidate the spectral or frequency bias of NTK and NNGP \cite{cao2019understanding,rahaman2018spectral,luo2019frequencybias,valleprez2020generalization}. Consequently, how a given task decomposes in the eigenbasis, a heuristic that we name task-model alignment, determines the number of samples required to achieve good performance: tasks with most of their power in top eigenmodes can be learned in a sample-efficient manner. We introduced the cumulative power distribution as a metric for task-model alignment and proved that target functions with higher cumulative power distributions will have lower normalized generalization error for all $P$ under the same kernel and data distribution. A related notion of kernel compatibility with the target was defined in \cite{cristianini2002kerneltarget,cortes2012algorithms}, which we discuss in detail in Supplementary Discussion. The third phenomenon we explore is how non-monotonicity can appear in the learning curves when either the labels are noisy, or the target function has modes that are not expressible with the kernel.
Non-monotonicity is caused by the variance term in the bias-variance decomposition of the generalization error. In the analytically tractable models we considered, this is related to a phase transition appearing in separate learning stages for the rotation invariant kernels. Non-monotonicity can be mitigated with explicit or implicit regularization \cite{montanari2019generalization,dascoli2020double,nakkiran2019moredata}. We showed the existence of an optimal regularizer, independent of sample size, for our theoretical settings. When applied to linear regression, our optimal regularizer matches that previously given by \cite{nakkiran2020optimal}. Non-monotonicity in the generalization error has gathered a lot of interest recently. Many studies pointed to the absence of overfitting in overparameterized machine learning models, signaled by a peak and a subsequent descent in the generalization error as the model complexity, or the number of parameters, increases, and the model transitions from an underparameterized to an overparameterized (interpolating) regime \cite{belkin2019reconciling,nakkiran2019deep,jacot2020implicit,montanari2019surprises,montanari2019generalization,dascoli2020double,dascoli2020triple,spigler2019jamming,mezard2020generalisation, liao2020random}. Multiple peaks are also possible in this context \cite{adlam2020neural}. Our work provides an explanation for the lack of overfitting in overparameterized models by elucidating the strong inductive biases of kernel regression, valid even in the interpolation limit, which includes infinitely overparameterized limits of neural networks. Sample-wise non-monotonicity has also been observed previously in many models \cite{nakkiran2019deep,krogh,Hertz_1989, montanari2019surprises,nakkiran2019moredata}, including ones that show multiple peaks \cite{nakkiran2020optimal,dascoli2020triple,liang_isometry,chen2020multipledesign}.
A closely related study obtained an upper bound for the test risk in ridgeless regression which shows non-monotonic behavior with increasing sample size whenever $P\sim \mathcal{O}(D^L)$, consistent with our results on rotation invariant kernels and isotropic data. An interesting comparison can be made between the multiple peaks we observed in our analytically solvable models and the multiple peaks observed in random features models \cite{dascoli2020triple,adlam2020neural}. In these models, one of the peaks (termed ``nonlinear'' in \cite{dascoli2020triple}) happens when the number of samples reaches the number of features, and thus the number of parameters of the model, crossing the interpolation threshold. While the peak we observed in the white band limited case with nonlinear features also happens at the interpolation threshold ($P=N$), the mechanisms causing double descent are different. In random features models, this peak is due to variance in the initialization of the random feature vectors. In our example, such variance does not exist. The peak is due to overfitting the noisy labels and disappears when there is no noise. The peaks observed for the rotationally invariant case have a more subtle connection. In each learning stage, peaks occur when the number of samples reaches the degeneracy of the eigenfunctions in that stage. While kernel regression is non-parametric, one can think of this again as crossing an interpolation threshold defined by the dimensionality of the feature space in the large-$D$ limit. Like the white band limited case, these peaks are due to overfitting noise. While our theory is remarkably successful in its predictions, it has limitations. First, the theory requires the eigendecomposition of the kernel on the full dataset, which is computationally costly.
Second, its applicability to state-of-the-art neural networks is limited by the kernel regime of networks, which does not capture many interesting and useful deep learning phenomena \cite{fort2020deep,chizat2020implicit}. Third, our theory uses a Gaussian approximation \cite{Dietrich_Sompolinsky} and a replica symmetric ansatz \cite{mezard1987spin}. While these assumptions were validated by the remarkable fit to experiments, it will be interesting to see if their relaxations reveal new insights. \section*{Methods}\label{Methods} \subsection*{Statistical Mechanics Formulation} With the setting described in the main text, the kernel regression problem reduces to the minimization of the energy function \begin{linenomath*}\begin{equation}\label{eq:energy_fn} H({\mathbf w}) \equiv \frac{1}{2\lambda} \sum_{\mu=1}^P\left(\sum_{\rho=1}^N(\bar w_{\rho}-w_{\rho})\psi_{\rho}({\mathbf x}^\mu)+\epsilon^\mu\right)^2 + \frac{1}{2}\Vert{\mathbf w}\Vert^2_2. \end{equation}\end{linenomath*} The quantity of interest is the generalization error in Eq. \eqref{eq:genError}, which in matrix notation is \begin{linenomath*}\begin{equation} E_g = \left<({\mathbf w}^*-\bar{\mathbf w})^\top{\bm \Lambda}({\mathbf w}^*-\bar{\mathbf w})\right>_{\mathcal{D}}, \end{equation}\end{linenomath*} where ${\bm \Lambda}_{\rho\gamma} = \eta_\rho\delta_{\rho\gamma}$ represents a diagonal matrix with entries given by the RKHS eigenvalues $\eta_\rho$. In order to calculate the generalization error, we introduce a Gibbs distribution $p_G({\mathbf w})\equiv \frac{1}{Z}e^{-\beta H({\mathbf w})} $ with the energy function in Eq. \eqref{eq:energy_fn}. In the $\beta \to \infty$ limit, this Gibbs measure is dominated by the solution to the kernel regression problem. We utilize this fact to calculate the generalization error for kernel regression.
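In matrix form, $H({\mathbf w}) = \frac{1}{2\lambda}\Vert \mathbf y - \bm\Psi{\mathbf w}\Vert^2 + \frac{1}{2}\Vert{\mathbf w}\Vert^2$ is ridge regression in feature space, and its minimizer reproduces the kernel ridge regression predictor with Gram matrix $\mathbf K = \bm\Psi\bm\Psi^\top$. A small numerical check of this equivalence (our illustration; random Gaussian features stand in for $\psi_\rho({\mathbf x}^\mu)$):

```python
import numpy as np

rng = np.random.default_rng(0)
P, N, lam = 20, 40, 0.3

Psi = rng.normal(size=(P, N))                  # features psi_rho(x^mu)
w_bar = rng.normal(size=N)                     # teacher weights
y = Psi @ w_bar + 0.1 * rng.normal(size=P)     # noisy targets

# Minimizer of H(w) = (1/(2 lam)) ||y - Psi w||^2 + (1/2) ||w||^2
w_star = np.linalg.solve(Psi.T @ Psi + lam * np.eye(N), Psi.T @ y)

# Kernel ridge regression predictor evaluated on the training inputs
K = Psi @ Psi.T
f_train = K @ np.linalg.solve(K + lam * np.eye(P), y)

assert np.allclose(Psi @ w_star, f_train)      # the two solutions coincide
```

The agreement follows from the push-through identity $(\bm\Psi^\top\bm\Psi+\lambda\mathbf I)^{-1}\bm\Psi^\top = \bm\Psi^\top(\bm\Psi\bm\Psi^\top+\lambda\mathbf I)^{-1}$.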
This can be done by introducing a source term with strength $J$ to the partition function, \begin{linenomath*}\begin{equation} Z(J) = \int d{\mathbf w} e^{-\beta H({\mathbf w},\mathcal{D})+{J \beta P} ({\mathbf w}-\bar{\mathbf w})^\top{\bm \Lambda}({\mathbf w}-\bar{\mathbf w})} \ , \ E_g(\mathcal{D}) = \left.\lim_{\beta\to \infty}\frac{1}{\beta P} \frac{d}{d J}\ln Z(J)\right |_{J=0}, \end{equation}\end{linenomath*} where we recognize that the free energy $\beta F \equiv -\ln Z(J)$ is the relevant quantity for computing the generalization error for a given dataset, $E_g(\mathcal{D})$. In Supplementary Note 2, we introduce other source terms to calculate the training error, the average estimator and its variance. The free energy depends on the sampled dataset $\mathcal{D}$, which can be thought of as a quenched disorder of the system. Experience from the physics of disordered systems suggests that the free energy concentrates around its mean (is self-averaging) for large $P$ \cite{mezard1987spin}. Therefore, we calculate the typical behavior of the system by averaging the free energy over all possible datasets: $\beta F = \beta \left< F \right >_{\mathcal D}= - \braket{\ln Z(J)}_{\mathcal{D}}$ in the $P\to\infty$ limit. All calculations are detailed in Supplementary Notes 1 and 2. Here we provide a summary. To perform averages over the quenched disorder, we resort to the replica trick \cite{spinGlassReview} using $\braket{\ln Z(J)}_{\mathcal{D}} = \lim_{n\to 0}\frac{1}{n}(\braket{Z(J)^n}_{\mathcal{D}}-1)$. A key step is a Gaussian approximation to the average over the dataset in the feature space \cite{sompolinsky1999statistical}, which exploits the orthogonality of the feature vectors with respect to the input distribution $p({\mathbf x})$. These averages are expressed in terms of order parameters defining the mean and the covariance of the Gaussian.
The calculation proceeds by a replica symmetric ansatz \cite{mezard1987spin}, evaluating the saddle point equations and taking the $\beta\to\infty$ limit. \subsection*{Modal errors} The generalization error can be written as a sum of modal errors arising from the estimation of the coefficient for eigenfunction $\psi_\rho$: \begin{linenomath*}\begin{equation} E_g = \sum_\rho \eta_\rho \overline{w}_\rho^2 E_\rho, \end{equation}\end{linenomath*} where \begin{linenomath*}\begin{equation} E_\rho = \frac{1}{\overline{w}_\rho^2} \left< ( {w}^*_\rho - \overline{w}_\rho)^2 \right>_{\mathcal D} = \frac{1}{1-\gamma} \frac{\kappa^2 }{(\kappa+P\eta_\rho)^2} . \end{equation}\end{linenomath*} We now prove that, when $\sigma=0$, the mode error equation implies that the logarithmic derivatives of the mode errors are ordered inversely to the kernel eigenvalues: modes with larger eigenvalues decay faster. Assuming that $\eta_\rho > \eta_{\rho'}$, and noting that $ \kappa'(P) = - \frac{\kappa \gamma}{P(1 - \gamma)} < 0$ since $\kappa, P > 0$ and $0 < \gamma < 1$, we have \begin{linenomath*}\begin{equation} \frac{d}{dP} \log \left( \frac{E_\rho}{E_{\rho'}} \right) = - 2 \left[ \frac{\kappa'(P) + \eta_\rho}{\kappa+P\eta_\rho} - \frac{\kappa'(P) + \eta_{\rho'}}{\kappa + P\eta_{\rho'}} \right] <0. \end{equation}\end{linenomath*} Thus, we arrive at the conclusion \begin{linenomath*}\begin{equation} \frac{d}{dP} \log E_\rho < \frac{d}{dP} \log E_{\rho'} \quad \implies \quad \frac{1}{E_\rho} \frac{dE_\rho}{dP} < \frac{1}{E_{\rho'}} \frac{d E_{\rho'}}{dP}. \end{equation}\end{linenomath*} Let $u_{\rho,\rho'}(P) = \log\left( \frac{E_\rho}{E_{\rho'}} \right)$. The above derivation demonstrates that $\frac{d}{dP} u_{\rho,\rho'}(P) < 0$ for all $P$. Since $u_{\rho,\rho'}(0) = 0$, this implies that $u_{\rho,\rho'}(P) < 0$ or equivalently $E_\rho < E_{\rho'}$ for all $P$.
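The ordering $E_\rho < E_{\rho'}$ can also be seen in a direct simulation. A minimal Monte Carlo sketch (ours; all parameter values are illustrative) with three feature modes of decreasing eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(1)
eta = np.array([1.0, 0.1, 0.01])   # kernel eigenvalues, descending
w_bar = np.ones(3)                 # teacher weights
P, lam, trials = 10, 0.05, 2000

sq_err = np.zeros(3)
for _ in range(trials):
    phi = rng.normal(size=(P, 3))              # orthonormal-on-average features
    Psi = phi * np.sqrt(eta)                   # psi_rho = sqrt(eta_rho) phi_rho
    y = Psi @ w_bar                            # noise-free targets
    w_star = np.linalg.solve(Psi.T @ Psi + lam * np.eye(3), Psi.T @ y)
    sq_err += (w_star - w_bar) ** 2

E_mode = sq_err / trials / w_bar**2            # normalized mode errors E_rho

# Spectral bias: modes with larger eigenvalues are learned first.
assert E_mode[0] < E_mode[1] < E_mode[2]
```

The empirical mode errors reproduce the ordering derived above: the $\eta=1$ mode is fit almost perfectly at this sample size, while the $\eta=0.01$ mode remains dominated by the ridge.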
This result indicates that the mode errors have the opposite ordering of the eigenvalues, summarizing the phenomenon of \textit{spectral bias} for kernel regression: the generalization error falls faster for the eigenmodes with larger eigenvalues. If the target function has norm $T = \left<\bar f({\mathbf x})^2\right>=\sum_{\rho} \eta_{\rho} \overline{w}_{\rho}^2$ then the generalization error is a convex combination of $\{ T E_\rho(P) \}_{\rho=1}^{\infty}$. The quantities $T E_\rho(P)$ maintain the same ordering as the normalized mode errors $E_\rho$ for all $P$, and we see that re-allocations of target function power that strictly increase the cumulative power distribution curve $C(\rho) = \frac{1}{T} \sum_{\rho' \leq \rho} \eta_{\rho'} \overline{w}_{\rho'}^2$ must cause a reduction in generalization error. We emphasize that, for a fixed set of kernel eigenvalues, strictly higher $C(\rho)$ yields better generalization, but provide a caveat: for a fixed target function, comparison of different kernels requires knowledge of both the change in the spectrum $\eta_\rho$ as well as changes in the $C(\rho)$ curve. \subsection*{Diagonalizing the kernel on real datasets} Calculation of $E_g$ requires two inputs: kernel eigenvalues $\eta_\rho$ and teacher weights $\bar w_\rho$, both calculated using the underlying data distribution. For a dataset with finitely many samples, we assume a discrete uniform distribution over the data, $p(\mathbf{x}) = \frac{1}{M} \sum_{i=1}^{M} \delta(\mathbf{x}-\mathbf{x}_i)$, with $M$ being the size of the whole dataset (train+test). Then, the corresponding eigenvalue problem reads: \begin{linenomath*}\begin{equation}\label{eq:KPCA} \eta_k \phi_k({\mathbf x}') = \int p({\mathbf x}) K({\mathbf x},{\mathbf x}') \phi_k({\mathbf x}) d{\mathbf x} = \frac{1}{M} \sum_{i=1}^M K({\mathbf x}_i,{\mathbf x}') \phi_k({\mathbf x}_i).
\end{equation}\end{linenomath*} Given a kernel $K({\mathbf x},{\mathbf x}')$, one can evaluate the $M\times M$ kernel Gram matrix ${\mathbf K}_{ij} = K({\mathbf x}_i,{\mathbf x}_j)$ and solve for the eigenvalues ${\bm \Lambda}_{kl}=\eta_{k}\delta_{kl}$ and eigenfunctions $\mathbf{\Phi}_{ki} = \phi_k({\mathbf x}_i)$ of ${\mathbf K} = M\mathbf{\Phi}{\bm \Lambda}\mathbf{\Phi}^\top$. Note that both data indices and eigen-indices take values $i,k = 1,...,M$. For a target function with multiple classes $\bar f({\mathbf x}):\mathbb{R}^D\to\mathbb{R}^C$, we denote the one-hot encoded labels ${\mathbf Y} = [{\mathbf y}_1,...,{\mathbf y}_C]\in\mathbb{R}^{M\times C}$ and obtain the teacher weights for each class with $\bar{\mathbf w}_c = \frac{1}{M} \mathbf{\Lambda}^{-1/2} \mathbf\Phi^\top \mathbf y_c $. For solving kernel regression, each of the $C$ one-hot output channels can be treated as an individual target function $f_{t,c}({\mathbf x})$ where $f_{t,c}({\mathbf x}^\mu) = y_c^\mu$ for one-hot labels $y^\mu_c$. The weights $\overline{{\mathbf w}}_c$ obtained above allow the expansion of each teacher channel in the kernel's eigenbasis $f_{t,c}({\mathbf x}) = \sum_{k=1}^M \overline{w}_{c,k} \sqrt{\eta_k} \phi_k({\mathbf x})$. The total generalization error for the entire task is simply $E_g = \sum_{c=1}^C \left< (f_c^*({\mathbf x}) - f_{t,c}({\mathbf x}))^2 \right>_{{\mathbf x},\mathcal{D}}$ where $f_c^*$ is the kernel regression solution for output channel $c$. Note that, with the choice of one-hot labels, the total target power is normalized to $1$. After computing learning curves for each channel $c$, which requires plugging in $\{(\eta_k,\overline{w}_{c,k}^2)\}_{k=1}^M$ into our theory, the learning curves for each channel are simply summed to arrive at the final generalization error. In other cases, when we do not possess a priori knowledge about $p({\mathbf x})$, the underlying data distribution, solving the kernel eigenvalue problem in Eq.
\eqref{eq:eigenvalue} appears intractable. However, when we are provided with a large number of samples from the dataset, we may approximate the kernel eigenvalue problem by using a Monte-Carlo estimate of the data density, i.e., $p(\mathbf{x}) \approx \frac{1}{M} \sum_{i=1}^{M} \delta(\mathbf{x}-\mathbf{x}_i)$ with $M$ being the size of the dataset. Then Eq. \eqref{eq:KPCA} approximates the eigenvalues and eigenfunctions, and one obtains the exact eigenvalues in the limit of a large number of samples, $M\to\infty$ \cite{RasmussenWilliams}. \section*{Data Availability} All data generated and/or analyzed in this study, along with the analysis code, are available in the Github repository \cite{codeanddata2021canatar}, \href{https://github.com/Pehlevan-Group/kernel-generalization}{https://github.com/Pehlevan-Group/kernel-generalization}. \section*{Code Availability} All code used to perform numerical experiments, analyze data and generate figures are available in the Github repository \cite{codeanddata2021canatar}, \href{https://github.com/Pehlevan-Group/kernel-generalization}{https://github.com/Pehlevan-Group/kernel-generalization}.
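As a concrete illustration of the diagonalization recipe above, the sketch below builds a Gram matrix, extracts $(\eta_k, \phi_k({\mathbf x}_i))$, and computes the teacher weights for a scalar target. The Gaussian toy data, the RBF kernel, and the target $\sin(x_1)$ are all arbitrary illustrative choices, not the datasets used in this study:

```python
import numpy as np

rng = np.random.default_rng(0)
M, D = 200, 5
X = rng.standard_normal((M, D))              # toy dataset (illustrative)

# Gram matrix of an RBF kernel (any positive-definite kernel works here)
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)

# Eigendecompose K/M: eigenvalues estimate eta_k; eigenvectors, rescaled
# by sqrt(M), estimate phi_k(x_i) with (1/M) sum_i phi_k phi_l = delta_kl.
eta, V = np.linalg.eigh(K / M)
eta, V = eta[::-1], V[:, ::-1]               # order by decreasing eigenvalue
eta = np.clip(eta, 1e-12, None)              # guard tiny round-off negatives
Phi = np.sqrt(M) * V                         # Phi[i, k] = phi_k(x_i)

# Teacher weights for one output channel:
# w_k = (1 / (M * sqrt(eta_k))) * sum_i phi_k(x_i) y_i
y = np.sin(X[:, 0])                          # toy scalar target
w = (Phi.T @ y) / (M * np.sqrt(eta))

# Consistency check: the eigenbasis expansion reconstructs the target
y_rec = Phi @ (w * np.sqrt(eta))
print(np.allclose(y_rec, y))  # → True
```

For a multi-class task, the same weight computation is repeated for each one-hot column $\mathbf y_c$, and the resulting $\{(\eta_k, \overline{w}_{c,k}^2)\}$ pairs are fed into the theory channel by channel.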
\section{Derivation of hydrodynamic equations from GHD} The GHD evolution equation reads as \begin{equation} \frac \partial {\partial t} \rho _\mathrm{p} +\frac \partial {\partial z}\left( v^\mathrm{eff} \rho _\mathrm{p}\right) =0 \label{GHD.1} \end{equation} and is supplemented by two equations \begin{equation} v^\mathrm{eff} (\theta ) = \frac {\hbar \theta }m +\int _{-\infty }^\infty d\theta ^\prime \, \frac {2 c \rho _\mathrm{p} (\theta ^\prime )}{c^2+(\theta -\theta ^\prime )^2} \left[ v^\mathrm{eff} (\theta ^\prime )-v^\mathrm{eff} (\theta )\right] \; , \label{GHD.2} \end{equation} \begin{equation} \varpi (\theta ) =\frac 1{2\pi }+\frac 1\pi \int _{-\infty }^\infty d\theta ^\prime \, \frac c{c^2+(\theta -\theta ^\prime )^2} \rho _\mathrm{p} (\theta ^\prime ) \; , \label{GHD.3} \end{equation} where $m$ is the mass of the atoms, $c$ is the coupling strength, and the density of states \begin{equation} \varpi (\theta ) = \rho _\mathrm{p} (\theta ) + \rho _\mathrm{h} (\theta ) \label{GHD.4} \end{equation} is the sum of the densities of quasi-particles (p) and holes (h). We introduce the mean hydrodynamic velocity \begin{equation} V =\frac 1\varrho \int _{-\infty }^\infty d\theta \, \frac {\hbar \theta }m \rho _\mathrm{p} (\theta ), \label{GHD.5} \end{equation} where \begin{equation} \varrho =\int _{-\infty }^\infty d\theta \, \rho _\mathrm{p} (\theta ) \end{equation} is the linear density of atoms. It follows from Eq. (\ref{GHD.2}) that \begin{equation} V =\frac 1\varrho \int _{-\infty }^\infty d\theta \, v^\mathrm{eff} (\theta )\rho _\mathrm{p} (\theta ). 
\label{GHD.6} \end{equation} Consider the rapidity distribution with a co-ordinate-dependent Galilean boost \begin{subequations} \begin{align} \rho _\mathrm{p} (\theta ) &= \bar \rho _\mathrm{p} (\theta -\zeta ) \; ,\\ v^\mathrm{eff} (\theta ) &= \bar v^\mathrm{eff} (\theta -\zeta )+V \; , \label{GHD.7} \end{align} \end{subequations} where \begin{equation} \zeta (z) = m V(z)/\hbar \end{equation} is the rapidity characterizing the local boost and the barred quantities describe the system in the co-moving frame of reference. By definition, \begin{equation} \int _{-\infty }^\infty d\theta \, \bar v^\mathrm{eff} (\theta )\bar \rho _\mathrm{p} (\theta )=0 \; . \end{equation} The local density $\varrho (z)$ can also depend on the co-ordinate. Integrating Eq. (\ref{GHD.1}) over rapidities, we obtain the continuity equation \begin{equation} \frac \partial {\partial t}\varrho +\frac \partial {\partial z}(\varrho V)=0 \; . \label{GHD.8} \end{equation} Multiplying Eq. (\ref{GHD.1}) by $\hbar \theta /m$ and then integrating over $\theta $, we obtain the momentum balance equation \begin{equation} \frac \partial {\partial t}(\varrho V)+ \frac \partial {\partial z}\int _{-\infty }^\infty d\theta \, \frac {\hbar \theta }m v^\mathrm{eff}(\theta )\rho _\mathrm{p}(\theta ) + \frac \varrho m \frac {\partial U}{\partial z}=0 \; , \end{equation} which can be further transformed, using Eq.~(\ref{GHD.7}), to \begin{equation} \frac {\partial V}{\partial t}+V\frac {\partial V}{\partial z}=-\frac 1{m\varrho } \frac {\partial \mathcal{P}}{\partial z}-\frac 1m \frac {\partial U}{\partial z} \; , \label{GHD.9} \end{equation} where \begin{equation} \mathcal{P}=\int _{-\infty }^\infty d\theta \, \hbar \theta v^\mathrm{eff}(\theta )\rho _\mathrm{p} (\theta ) \label{press} \end{equation} is the 1D pressure. Eqs. (\ref{GHD.8}, \ref{GHD.9}) have the form of the standard equations of macroscopic hydrodynamics, supplemented by the equation of state (\ref{press}).
These equations are essentially classical: the quantum potential $-[\hbar ^2/(2m \sqrt \varrho )] \partial ^2 \sqrt \varrho /(\partial z^2)$ does not appear (added to $U$) in the r.h.s. of Eq. (\ref{GHD.9}), because the GHD is an essentially local theory. Using Eq. (\ref{GHD.3}), we can rewrite Eq. (\ref{GHD.2}) for the effective velocity in the co-moving frame as \begin{equation} \bar \varpi (\theta ) \bar v^\mathrm{eff}(\theta ) = \frac {\hbar \theta }{2\pi m} +\frac 1\pi \int _{-\infty }^\infty d\theta ^\prime \, \frac c{c^2 +(\theta -\theta ^\prime )^2} \bar v^\mathrm{eff}(\theta ^\prime )\bar \rho _\mathrm{p}(\theta ^\prime ) \; . \label{GHD.10} \end{equation} \section{Sampling of boostons} Fluctuations of the local Fermi points in the form of boostons translate into local fluctuations of density $\delta \varrho$ and boost $\zeta$, and vice versa. The density and boost (phase) fluctuations are quantized \cite{Haldane81,Cazalilla04} \begin{align} \delta\varrho (z) &= \sqrt{ \varrho_0 } \sum_j \left[ f_j^+ (z) b_j + \mathrm{H.c.} \right] \label{eq:density_fluct_mode} \\ \zeta (z) &= \frac{1}{\sqrt{ 4 \varrho_0 }} \sum_j \left[ k_j f_j^- (z) b_j + \mathrm{H.c.} \right] \; , \label{eq:boost_fluct_mode} \end{align} with the creation and annihilation operators of the collective (mode) excitations obeying the usual bosonic commutation relations $[ b_i , b_j^\dagger] = \delta_{ij}$. For simplicity the background density $\varrho_0$ is assumed to be homogeneous, while coherent variations in density and boost are added as part of the mode operators. The mode functions $f_j^\pm$ appearing in Eqs. (\ref{eq:density_fluct_mode}, \ref{eq:boost_fluct_mode}) are normalized as \begin{equation} \frac{1}{2} \int \mathrm{d}z \: \left[ \bar f_j^+ f_j^- + f_j^+ \bar f_j^- \right] = 1 \; , \end{equation} where the bar here denotes complex conjugation.
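A minimal sketch of sampling one thermal realization of $\delta\varrho(z)$ from these quantized modes is given below. It anticipates the periodic-box mode functions $f_j^+ = \frac{1}{\sqrt{L}}(\epsilon_j/E_j)^{-1/2}\mathrm{e}^{ik_jz}$ with Bogoliubov energies and Bose-Einstein occupations specified in the next paragraph, sets the coherent amplitudes $\alpha_j = 0$, and uses arbitrary illustrative parameter values:

```python
import numpy as np

def sample_density_fluctuation(z, L, rho0, vs, T, jmax=20, seed=1,
                               hbar=1.0, m=1.0, kB=1.0):
    """Draw one thermal realization of delta-rho(z) from Bogoliubov modes
    in a periodic box of length L (alpha_j = 0, i.e. no coherent part)."""
    rng = np.random.default_rng(seed)
    drho = np.zeros_like(z)
    for j in range(1, jmax + 1):
        for sgn in (+1, -1):                         # modes k = +/- 2*pi*j/L
            k = sgn * 2 * np.pi * j / L
            E = (hbar * k) ** 2 / (2 * m)
            eps = np.sqrt(E * (E + 2 * m * vs**2))   # Bogoliubov spectrum
            n = 1.0 / np.expm1(eps / (kB * T))       # Bose-Einstein occupation
            # thermal (incoherent) part of the mode operator b_j
            b = np.sqrt(n) * (rng.standard_normal()
                              + 1j * rng.standard_normal()) / np.sqrt(2)
            # mode function f_j^+ = (eps/E)^(-1/2) exp(i k z) / sqrt(L)
            f_plus = np.sqrt(E / eps / L) * np.exp(1j * k * z)
            drho += np.sqrt(rho0) * 2 * np.real(f_plus * b)
    return drho

L, Nz = 50.0, 256
z = np.arange(Nz) * L / Nz
drho = sample_density_fluctuation(z, L, rho0=10.0, vs=1.0, T=0.5)
```

The boost field $\zeta(z)$ is sampled analogously from the same operators $b_j$ using the $f_j^-$ mode functions.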
For the majority of this work we concern ourselves with periodic boundary conditions, in which case the mode functions take the form \begin{equation} f_j^\pm (z) = \frac{1}{\sqrt{L}} \left( \frac{\epsilon_j}{E_j} \right)^{\mp 1/2} \mathrm{e}^{ i k_j z } \; , \end{equation} with $k_j = 2 \pi j / L$, $E_j = \hbar^2 k_j^2 / 2 m$, and $\epsilon_j$ being given by the Bogoliubov spectrum \begin{equation} \epsilon_j = \sqrt{E_j (E_j + 2 m v_s^2)} \; . \end{equation} We assume the population of the individual modes to follow the Bose-Einstein distribution $n_j = 1/(\mathrm{e}^{\epsilon_j / k_B T} - 1)$. To achieve a thermal ensemble of boostons, the mode operators are sampled following \begin{equation} b_j = \alpha_j \mathrm{e}^{ i \varphi_j } + \sqrt{n_j} \frac{ X_1 + i X_2}{\sqrt{2}} \; , \label{eq:mode_operator_sample} \end{equation} where $X_1$ and $X_2$ are sampled from independent Gaussian distributions with zero mean and unit variance. The first term of Eq. (\ref{eq:mode_operator_sample}) accounts for any coherent population of the mode, shifting the expectation value of the mode operator $\langle b_j \rangle = \alpha_j \langle \mathrm{e}^{i \varphi_j} \rangle$. Given the thermally sampled mode operators, the corresponding density and boost fluctuations are computed via Eqs. (\ref{eq:density_fluct_mode}, \ref{eq:boost_fluct_mode}), which in turn are translated into fluctuations in the Fermi edge (boostons) by locally solving the (zero-temperature) Lieb-Liniger equations. To this end, significant computational advantage can be achieved by solving the equations beforehand for a range of interaction strengths $\gamma$ and then tabulating the results. \section{Numerical implementation of zero-temperature GHD with boostons} The zero-temperature GHD (+boostons) simulations in this work employ the algorithm originally featured in Ref.
\cite{Doyon17a}, while TBA and finite temperature GHD calculations were performed using the iFluid framework \cite{10.21468/SciPostPhys.8.3.041}. \begin{figure} \centering \includegraphics[width=78mm]{figures/algorithm_fluctuations.pdf} \caption{ Illustration of the algorithm for numerically propagating the state parameterized as the Fermi contour $\Gamma$, which in turn is discretized as a set of points (see text for details). The coherent part of the contour is drawn in black, while the grey lines mark the contours after adding incoherent fluctuations $\delta K$ in the form of boostons.} \label{fig:algorithm} \end{figure} For the GHD simulations, rather than propagating the quasi-particle distribution $\rho_\mathrm{p} (\theta)$, it is numerically more convenient to work with the occupation function $n (\theta)$, which for a zero-temperature state with Fermi rapidity $K$ is given by \begin{equation} n(\theta) = \begin{cases} 1, \qquad \text{for } -K \leq \theta \leq K \\ 0, \qquad \text{otherwise, } \end{cases} \end{equation} thus realizing a Fermi sea. In the presence of multiple local Fermi seas $K_1^- < K_1^+ < K_2^- < K_2^+ < \ldots$, the occupation function assumes the value 1 between the rapidity pairs $(K_i^- , K_i^+)$ and 0 anywhere else. It is thus sufficient to encode the state only by the local Fermi points, whose displacement during the dynamics is given by \begin{equation} \partial_t K_i^\pm + v_{\{ K \}}^\mathrm{eff} (K_i^\pm) \; \partial_z K_i^\pm = 0 \; . \label{eq:Fermi_point_propagation} \end{equation} Here the effective velocity is computed using \begin{equation} v_{\{ K \}}^\mathrm{eff} (\alpha) = \frac{ \mathrm{id}_{\{ K \}}^\mathrm{dr} (\alpha)}{ 1_{\{ K \}}^\mathrm{dr} (\alpha)} \; , \label{eq:veff_contour} \end{equation} which is equivalent to the expression in Eq. (\ref{GHD.2}). In Eq.
(\ref{eq:veff_contour}), $\mathrm{id}(\alpha)$ is the identity function and the dressing operation is defined as \begin{equation} f_{\{ K \}}^\mathrm{dr} (\alpha) = f(\alpha) + \sum_{i=1}^\mathcal{N} \int_{K_i^-}^{K_i^+} \frac{\mathrm{d}\theta}{2\pi} \frac{2 c}{ c^2 + (\alpha - \theta)^2} f_{\{ K \}}^\mathrm{dr} (\theta) \; , \end{equation} where $\mathcal{N}$ is the number of local Fermi seas. Following Ref. \cite{Doyon17a}, we can encode the time- and space-dependent state $n(\theta,z,t)$ as a Fermi contour $\Gamma$ containing all the Fermi points. Discretizing the contour yields a set of points ($z_j (t), K_j (t)$), whose displacement after a small time step $\delta t$ is \begin{equation} z_j (t+\delta t) = z_j (t) + \delta t \: v_{\{ K \}}^\mathrm{eff} (K_j) \; . \label{eq:dispacement} \end{equation} Note that, in the absence of any acceleration of the quasi-particles, which is the case in Eq. (\ref{eq:Fermi_point_propagation}), all rapidities are conserved, whereby all $K_j$ remain constant. The numerical propagation of the state follows from a simple algorithm: For each point $j$ in the contour $\Gamma$, find all local Fermi seas by searching for intersections between the vertical line at $z = z_j$ and the contour itself (see Fig. \ref{fig:algorithm}). Next, compute the effective velocity at each point using Eq. (\ref{eq:veff_contour}). Finally, displace each point of the contour according to Eq. (\ref{eq:dispacement}) and repeat. \section{Relaxation of coherently excited modes} \begin{figure} \centering \includegraphics[width=78mm]{figures/fermi_edge_illustration.pdf} \caption{ Illustration of the occupation function $n (\theta)$, which following the thermodynamic Bethe ansatz is given by a Fermi distribution, for zero temperature (solid line) and finite temperature (dashed line). At finite temperature the Fermi point (rapidity) $K$, and thereby the sound velocity following Eq. (\ref{eq:sound_velocity}), is ill-defined.
For small enough temperatures, thermal fluctuations can be treated as local fluctuations of the Fermi point in the form of boostons, thus leading to local fluctuations of the sound velocity.} \label{fig:fermi_edge_illustration} \end{figure} In Bogoliubov's theory for the interacting Bose gas, the equations of motion are linearized around a stationary solution, thus neglecting any back-action of the fluctuations. Hence, the Hamiltonian of the fluctuations is given by a sum of uncoupled harmonic oscillators, whereby the populations of all modes are conserved during evolution. The linearization step is reasonable for very low temperatures and small coherent populations of the modes; however, beyond this low-energy regime the back-action of fluctuations must be taken into account, which inevitably will lead to relaxation of excited modes. Within the thermodynamic Bethe ansatz the full thermodynamic state is characterized by the quasi-particle distribution, while thermal effects on dynamics are implicitly accounted for in GHD through the effective velocity. An example of this can be found in Ref.~\cite{cataldini2021emergent}, where GHD was employed to describe the relaxation of a single excited density mode. Although the excited mode was well-defined in $k$-space, its corresponding rapidity distribution spanned a range of rapidities by virtue of finite temperature. During evolution of the mode, its different rapidity components propagated at slightly different velocities, resulting in a gradual dephasing of the mode. While GHD does provide an incredibly powerful framework for treating finite temperature dynamics, the exact role of temperature is rather opaque within the TBA. As we have demonstrated in this work, the TBA quasi-particle distributions (at lower temperatures) can be described by a thermal ensemble of boostons. Hence, employing this framework we can study the apparent dephasing of a single excited density mode in greater detail.
First, consider the microscopic definition of the sound velocity $v_s \equiv \mathrm{lim}_{k \to 0} \frac{\partial \varepsilon}{\partial k} $, which is derived from the excitation spectrum with energy $\varepsilon (k)$ and momentum $k$. For a fermionic system near the ground state, all excitations are limited to momenta near the Fermi momentum. In the Bethe ansatz of the Lieb-Liniger model, whose quasi-particle excitations are fermionic, the sound velocity therefore reads~\cite{korepin_bogoliubov_izergin_1993} \begin{equation} v_s ={ \frac{\partial_\theta \varepsilon (\theta)}{\partial_\theta k(\theta)} \bigg\vert } _{\theta= K} \equiv v^{\mathrm{eff}} (K) \; . \label{eq:sound_velocity} \end{equation} Thus, for a zero-temperature state of the Lieb-Liniger model (with only a single local Fermi sea) the sound velocity is equal to the effective velocity at the Fermi point $K$. Accounting for thermal density fluctuations leads to local fluctuations in the Fermi point $\delta K$, and thereby local fluctuations in the sound velocity. In the presence of phase fluctuations, equivalent to a local boost of the ground state distributions, we obtain a fluctuating advection velocity $v = v_s + \hbar \zeta / m$ in the laboratory frame. These fluctuations are implicitly accounted for in the thermodynamic Bethe ansatz (TBA) by ``melting'' the edge of the Fermi sea of rapidities, as illustrated in Fig.~\ref{fig:fermi_edge_illustration}. At low enough temperatures, averaging over thermally sampled booston fluctuations reproduces the TBA equilibrium distributions, as demonstrated in the main text. Hence, at low temperatures the two approaches should predict the same relaxation rate of a single excited mode.
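Eq. (\ref{eq:sound_velocity}) can be evaluated numerically by solving the dressing equation on a rapidity grid over a single Fermi sea $[-K, K]$. The sketch below is illustrative only (units $\hbar = m = 1$; it assumes the convention in which the dressing kernel carries a $1/2\pi$, i.e. $f^{\mathrm{dr}}(\alpha) = f(\alpha) + \int \frac{\mathrm{d}\theta}{2\pi}\,\frac{2c}{c^2+(\alpha-\theta)^2}\,f^{\mathrm{dr}}(\theta)$), and it checks the Tonks-Girardeau limit, where $v^{\mathrm{eff}}(K) \to \hbar K/m$:

```python
import numpy as np

def effective_velocity(K=1.0, c=1000.0, N=400):
    """Solve the dressing equation on a single Fermi sea [-K, K] and
    return the rapidity grid and v^eff on it (units hbar = m = 1)."""
    theta, dth = np.linspace(-K, K, N, retstep=True)
    # scattering kernel phi(theta - theta') = 2c / (c^2 + (theta - theta')^2)
    phi = 2 * c / (c**2 + (theta[:, None] - theta[None, :]) ** 2)
    w = np.full(N, dth)                        # trapezoid quadrature weights
    w[0] = w[-1] = dth / 2
    A = np.eye(N) - phi * w[None, :] / (2 * np.pi)
    one_dr = np.linalg.solve(A, np.ones(N))    # dressed constant function 1^dr
    id_dr = np.linalg.solve(A, theta)          # dressed identity function id^dr
    return theta, id_dr / one_dr

theta, veff = effective_velocity()
# Tonks-Girardeau check: for c >> K the dressing is nearly trivial and
# v^eff(K) approaches the bare value hbar*K/m = K.
print(abs(veff[-1] - 1.0) < 1e-2)  # → True
```

Evaluating $v^{\mathrm{eff}}(K)$ on a state with sampled booston fluctuations of $K$ then yields the distribution of advection velocities compared in the figure below.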
\begin{figure*} \centering \includegraphics[width=\textwidth]{figures/mode_relaxation.pdf} \phantomsubfloat{\label{fig:mode_relaxation_a}} \phantomsubfloat{\label{fig:mode_relaxation_b}} \phantomsubfloat{\label{fig:mode_relaxation_c}} \phantomsubfloat{\label{fig:mode_relaxation_d}} \vspace{-2\baselineskip} \caption{ (\textbf{a}) Sampled distribution of advection velocities for a Tonks-Girardeau gas with interaction $\gamma = 100$ and temperature $k_B T = 0.11 m v_s^2$. For comparison, a Gaussian distribution with variance $D_{\delta v}^{\mathrm{TG}}$ of Eq. (\ref{eq:variance_sound_TG}) is plotted. (\textbf{b}) Corresponding relaxation of a single, coherently excited density mode. The dynamics of the $j = 4$ mode is computed using GHD with TBA initial state (blue, solid curve). Its relaxation is compared to the damping term $\mathrm{e}^{ - \frac{1}{2} D_{\delta v} k_j^2 t^2}$ (red, dashed lines) of Eq. (\ref{eq:mode_damping}). (\textbf{c}) Sampled distribution of advection velocities for a quasi-condensate with interaction $\gamma = 0.01$ and temperature $k_B T = 0.25 m v_s^2$. For comparison, a Gaussian distribution with variance $D_{\delta v}^{\mathrm{QC}}$ of Eq. (\ref{eq:variance_sound_QC}) is plotted. (\textbf{d}) Corresponding relaxation of the $j=4$ density mode.} \label{fig:mode_relaxation} \end{figure*} The local advection velocity is dependent on the local density and boost. Following the central limit theorem, the local fluctuations in density and boost are Gaussian. Further, we assume density and boost fluctuations to be independent, thus making fluctuations in the advection velocity Gaussian as well \begin{equation} w_{v}(\delta v) = \frac{\exp \left[ - \delta v^2 / (2 D_{\delta v}) \right]}{\sqrt{2 \pi D_{\delta v}}} \; .
\end{equation} Consider the evolution of a single coherent density mode following Bogoliubov theory \begin{equation} \delta\varrho_j (z,t) = \sqrt{ \varrho_0 } \left[ f_j^+ (z) \mathrm{e}^{-i \epsilon_j t / \hbar} \alpha_j + \mathrm{H.c.} \right] \; , \end{equation} where $\alpha_j = \langle b_j \rangle$ is the coherent amplitude. For simplicity we restrict ourselves to the phononic branch of the Bogoliubov spectrum where $\epsilon_j \approx \hbar k_j v_s$. To account for local fluctuations in the advection velocity we let $v = v_{s0} + \delta v$, where $v_{s0}$ is the sound velocity computed with respect to the homogeneous background density $\varrho_0$. Averaging over the fluctuations, we obtain the following expression for the evolution of the mode \begin{align} \delta\varrho_j (z,t) &= \int_{-\infty}^{\infty} \mathrm{d}\delta v \: w_{v}(\delta v) \sqrt{ \varrho_0 } \nonumber \\ &\qquad \qquad \times \left[ f_j^+ (z) \mathrm{e}^{-i k_j (v_{s0} + \delta v) t } \alpha_j + \mathrm{H.c.} \right] \nonumber \\ &= \sqrt{ \varrho_0 } \left[ f_j^+ (z) \mathrm{e}^{-i k_j v_{s0} t } \alpha_j + \mathrm{H.c.} \right] \mathrm{e}^{ - \frac{1}{2} D_{\delta v} k_j^2 t^2 } \; . \label{eq:mode_damping} \end{align} Indeed, in the presence of fluctuations the mode evolves as a damped oscillation, whose damping exponent scales quadratically with momentum and time. For comparison, in Ref.~\cite{cataldini2021emergent} the observed dynamics was fitted with damping exponents scaling as $k_j^{3/2}$ and $t^{3/2}$. However, from GHD simulations the exact power was found to be slightly temperature dependent, with lower temperature realizations tending towards quadratic scaling. Thus, the behavior of Eq.~(\ref{eq:mode_damping}) should be reproduced by GHD (with TBA initial states) in the low temperature limit. To obtain the scaling of the damping with temperature, we must compute the variance of the advection velocity $D_{\delta v}$.
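The Gaussian average performed in the last step is simply the characteristic function of a normal distribution, $\langle \mathrm{e}^{-ik_j\,\delta v\,t}\rangle = \mathrm{e}^{-\frac{1}{2}D_{\delta v}k_j^2t^2}$, which a quick Monte-Carlo estimate confirms (the values of $D_{\delta v}$, $k_j$ and $t$ below are arbitrary):

```python
import numpy as np

# Monte-Carlo check of the Gaussian average behind the damping factor:
# <exp(-i k dv t)> over dv ~ N(0, D) equals exp(-D k^2 t^2 / 2).
D, k, t = 0.04, 3.0, 2.0
dv = np.random.default_rng(2).normal(0.0, np.sqrt(D), 400_000)
mc = np.exp(-1j * k * dv * t).mean()
exact = np.exp(-0.5 * D * k**2 * t**2)
print(abs(mc - exact) < 1e-2)  # → True
```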
This can be achieved analytically in the Tonks-Girardeau and quasi-condensate regimes, where exact expressions for the sound velocity are known. Hence, in the Tonks-Girardeau (TG) regime we have \begin{align} v^{\mathrm{TG}} &= \frac{\pi \hbar \varrho}{m} + \frac{\hbar \zeta}{m} \\ D_{\delta v}^{\mathrm{TG}} &= \left(\frac{\pi \hbar}{m}\right)^2 D_{\delta \varrho} + \left(\frac{ \hbar}{m}\right)^2 D_{\zeta} \label{eq:variance_sound_TG} \; , \end{align} where the variances of the density and boost fluctuations are given in Eq. (\ref{dual.4}). Likewise, in the quasi-condensate (QC) regime, the variance reads \begin{align} v^{\mathrm{QC}} &= \frac{\hbar}{m} \sqrt{ c \varrho} + \frac{\hbar \zeta}{m} \\ D_{\delta v}^{\mathrm{QC}} &= \left(\frac{\hbar c }{2 m \sqrt{c \varrho}}\right)^2 D_{\delta \varrho} + \left(\frac{ \hbar}{m}\right)^2 D_{\zeta} \label{eq:variance_sound_QC} \; . \end{align} Figs.~\ref{fig:mode_relaxation_a} and \ref{fig:mode_relaxation_c} depict histograms of advection velocities calculated via Eq. (\ref{eq:sound_velocity}) for a state with thermally sampled boostons. The sampled velocities are compared to the analytic distributions of Eqs. (\ref{eq:variance_sound_TG}, \ref{eq:variance_sound_QC}) and a good agreement is found. Further, in Figs.~\ref{fig:mode_relaxation_b} and \ref{fig:mode_relaxation_d} GHD simulations of a single, coherently excited density mode are compared to the predicted relaxation from Eq. (\ref{eq:mode_damping}). The simulations are carried out both in the Tonks-Girardeau and quasi-condensate regimes, whereby the variance in advection velocity can be obtained analytically via Eqs. (\ref{eq:variance_sound_TG}, \ref{eq:variance_sound_QC}). As evident from the figure, in the regime of relatively small coherent excitations and low temperatures, the relaxation of the mode is well-described by Eq.
(\ref{eq:mode_damping}), illustrating how GHD based on the thermodynamic Bethe ansatz implicitly accounts for thermal fluctuations in the local velocity.
\section{Introduction} Over the past 15 years, light-matter strong coupling has been studied extensively with organic materials \cite{LidzeyNature1998,FujitaPRB1998,SchouwinkCPL2001,TakadaAPL2003,HolmesPRL2004,TischlerPRL2005,MichettiPRB2005,HakalaPRL2009,BellessaPRL2004,DintingerPRB2005,GuebrouPRL2012,AmbjornssonPRB2006,BerrierACSNano2011}, which can display very large splitting of the two hybrid light-matter states, also known as the polariton states. Recently, optical resonances with small mode volumes such as Fabry-Perot nanocavities or surface plasmons have been used to achieve the so-called ultra-strong coupling, where the Rabi splitting, approaching $\sim 1\mathrm{eV}$, becomes a significant fraction of the electronic transition energy \cite{CuitiPRB2005,SchwartzPRL2011}. For such large splittings, changes in bulk properties are observed, as already shown for the work-function \cite{HutchisonAdvMat2013} and the ground state energy \cite{CanaguierACIE2013}. It has also been noticed over the years that the lifetime of the lowest polariton state, denoted $\mathrm{C}^{-}$, is much longer than the lifetime of the photon in the cavity mode \cite{SongPRB2004,WiederrechtPRL2007,SalomonACIE2009,VasaACSNano2010,VirgiliPRB2011,HaoACIE2011,AgranovitchPRB2003,LitinskayaJLumin2004}. In recent experiments using resonant excitation, this $\mathrm{C}^{-}$ lifetime has even been shown to be longer than that of the bare excited molecules \cite{HutchisonACIE2012,SchwartzCPC2013}. These properties are counter-intuitive in the conventional picture where the dynamic properties of the coupled states are directly determined from those of the bare ones \cite{WeisbuchJLumin2000}. In the so-called Markov approximation, the effects of coupling and relaxation are simply added to each other in the master equation which describes the evolution of the system.
It follows that the relaxation rates in the diagram of dressed states are obtained from those of bare states through a mere change of basis \cite{CohenJPhys1977}. In the ultra-strong coupling limit in particular, the low- and high-energy dressed states $\mathrm{C}^{-}$ and $\mathrm{C}^{+}$ contain identical proportions of the bare states and their lifetimes are thus expected to be equal to each other. The experimental observation of very different lifetimes for these two dressed states reveals that the relaxation of the dressed system is deeply influenced by the strong coupling. In other words, the relaxation of coupled organic molecules corresponds to a non-Markovian regime where relaxation can only be studied after the effect of ultra-strong coupling has been taken into account \cite{ReynaudJPhys1982}. In the present article, we build up a new theoretical framework to understand the dynamics of ultra-strongly coupled organic molecules. In particular, we show that the relaxation in such a system is intrinsically non-Markovian. This new view on strongly coupled organic materials explains the most salient features experimentally observed in such systems, in particular the very long lifetime of the lower dressed state $\mathrm{C}^{-}$. \begin{figure}[tbp] \centerline{\psfig{figure=fig1.eps,width=9cm}} \caption{a) Illustration of molecules coupled to the fundamental optical mode of a $145$nm thick Fabry-Perot cavity made of two $30$nm thick Ag mirrors. b) Typical example of absorption spectrum of uncoupled (red line) and coupled (dark line) molecules. The data correspond to J-aggregate (TDBC) molecules dispersed in a polyvinyl alcohol (PVA) polymer host matrix inside the cavity sketched in a) \cite{SchwartzCPC2013}.} \label{fig1} \end{figure} \section{Non-Markovian dynamics} Figure 1 illustrates the case of organic molecules strongly coupled to a Fabry-Perot cavity mode.
The organic molecules are doping a host polymer matrix {at $0.1$ to $0.01$ molar concentration (mole per litre). Qualitatively, this corresponds to typical intermolecular separation distances of the order or larger than $3$nm within the host matrix, so that F\"orster-type energy transfer is expected to dominate over other intermolecular transfer mechanisms \cite{ScholesARPC2003}}. The absorption spectra on the right-hand side of the figure show the effect of strong coupling, which splits the molecular resonance in the coupled system (dark curve), as compared to the uncoupled one (red curve). We stress here that the widths of the molecular absorption peaks need not be directly related to the intrinsic molecular lifetimes, due to inhomogeneous broadening and the vibrational manifold. Inhomogeneous broadening is crucial for coupled and uncoupled organic molecules to coexist in the cavity in the model discussed further down, and this prevents one from drawing conclusions about intrinsic lifetimes from the measured spectral features. Inhomogeneous broadening is due to the distribution of orientations, locations and micro-environments of the organic molecules in the matrix. These features are essentially the same for coupled and uncoupled molecules, since the optical coupling does not affect the associated motions. In addition, the host matrix behaves as a vibrational relaxation reservoir in thermodynamic equilibrium with both coupled and uncoupled molecules. The vibrational reservoir spectra are characterized by a typical energy dispersion $k_{\mathrm{B}}T\simeq 25\mathrm{meV}$ at room temperature, or equivalently a correlation time $\tau _{c}\simeq \hbar /k_{\mathrm{B}}T$ $\simeq 25\mathrm{fs}$. The condition of validity of the Markov approximation \cite{ReynaudJPhys1982} would be that the Rabi splitting $\Omega _{\mathrm{R}}$ have a negligible effect during the correlation time, $\Omega _{\mathrm{R}}\tau _{c}\ll 1$, or equivalently $\hbar \Omega_{\mathrm{R}} \ll k_{\mathrm{B}}T$.
This condition is clearly not met for ultra-strong coupling of organic molecules, where the Rabi splitting $\hbar \Omega_{\mathrm{R}}$ is much larger than $k_{\mathrm{B}}T$ \cite{HoudrePRA1996,LienauNatPhot2013}. This implies that the system is intrinsically in the non-Markovian regime, with relaxation strongly influenced by the coupling. In other words, there is no reason to expect that the dressed states $\mathrm{C}^{+} $ and $\mathrm{C}^{-}$ have identical lifetimes, as would be the case for ultra-strong coupling in the Markovian approximation. Furthermore, we will see below that the hierarchy of lifetimes observed in experiments is naturally explained by the approach proposed in this paper. In our case, each individual molecule can only be weakly coupled to the electromagnetic mode of the cavity. The strong coupling mechanism necessarily involves a collective excitation of an extremely large number of molecules coherently coupled to the single mode of the cavity. It is important to stress that the strong coupling does not shield the molecules from intramolecular vibrational relaxation. This explains the extremely low emission quantum yields, as observed experimentally, with vibrational relaxation rates at least $100$ times larger than the radiative rate of $\mathrm{C}^{-}$. For instance, in the case of TDBC presented in Figure 1, the fluorescence quantum yield of $\mathrm{C}^{-}$, angularly integrated, is found to be $\sim4\times 10^{-3}$ (more numbers will be given below). We build up below the new framework which naturally allows us to analyze such situations. We show in particular that the non-Markovian character explains the otherwise counter-intuitive long lifetime of the lower dressed state $\mathrm{C}^{-}$. We stress at this point that a problem dominated by radiative relaxation would lead to different conclusions \cite{CohenJPhys1977}.
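To put numbers on this condition, the thermal scales can be evaluated directly (a back-of-the-envelope sketch using CODATA values of the constants; the $1\,\mathrm{eV}$ splitting is the representative value quoted in the introduction):

```python
hbar = 1.054_571_817e-34   # reduced Planck constant [J*s]
kB = 1.380_649e-23         # Boltzmann constant [J/K]
eV = 1.602_176_634e-19     # electron-volt [J]

T = 300.0                              # room temperature [K]
tau_c = hbar / (kB * T)                # reservoir correlation time [s]
ratio = 1.0 * eV / (kB * T)            # hbar*Omega_R / (kB*T) for a ~1 eV splitting
print(f"tau_c = {tau_c * 1e15:.1f} fs, hbar*Omega_R/(kB*T) = {ratio:.0f}")
# tau_c is ~25 fs and the ratio is ~39, deep in the non-Markovian regime
```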
Note also that, in what follows, the dark states which are formed in the coherent state manifold when coupling a large number of molecules to one cavity mode are ignored \cite{HoudrePRA1996}. Nevertheless, their mere presence in the energy diagram also contributes to the non-radiative decay discussed further down. \section{Bare and dressed states} We consider uncoupled (U) and coupled (C) states as two populations in a dynamical equilibrium, with the total concentration $[\mathrm{M}]=[\mathrm{U}]+[\mathrm{C}]$ fixed. This two-population model is a simplification of the real situation where there is a continuous distribution of molecules in the cavity mode with different positions, orientations or environments, which leads to spectral inhomogeneous broadening. The model is based on recent experimental observations which have been shown to be explained in terms of two populations coexisting at thermal equilibrium with well defined Gibbs free energies \cite{CanaguierACIE2013}. As usual, the model has to be tested by comparing its predictions to experimental observations. The relevant states of the uncoupled molecules are, on the one hand, the ground and excited states of the molecule, $\mathrm{U}$ and $\mathrm{U}^\ast$, and, on the other hand, the $0-$ and $1-$photon states of the cavity. The states of the hybrid ``molecule$+$cavity'' system are denoted $\mathrm{U}_{0}$ for the ground state, and $\mathrm{U}_{1}$ and $\mathrm{U}_{0}^{\ast }$ for the excited ones (see Figure 2). The energy difference between the two excited states is \begin{equation} \hbar \delta = \hbar\omega _{1}-\hbar\omega _{\ast }~, \end{equation}% with $\omega _{1}$ the frequency of photons in the cavity mode and $\omega _{\ast }$ the frequency of the molecular transition. The detuning $\delta $ (like the Rabi coupling discussed in the next paragraph) has a single value in the simplified two-population model, whereas it would have a distribution of values in a microscopic description. 
The relevant states are similar for coupled and uncoupled molecules, with differences caused by the effects of the coupling. They are denoted $\mathrm{C}_{0}$, $\mathrm{C}_{1}$ and $\mathrm{C}_{0}^{\ast }$, with the symbol C replacing U. The excited states $\mathrm{C}_{1}$ and $\mathrm{C}_{0}^{\ast }$ are coupled through the Rabi coupling $2\upsilon$ which is not zero for the coupled molecules. Note that this Rabi splitting has a large value, though the cavity has a low quality factor $Q$ and remains in low states with only 0 or 1 photon. This unusual feature is due to the already discussed fact that the cavity field is coupled to a giant dipole corresponding to the coherent superposition of an extremely large number of molecules. The dressed states, denoted $\mathrm{C}^{+}$ and $\mathrm{C}^{-}$, are obtained by diagonalizing the effect of the Rabi coupling between the states $\mathrm{C}_{1}$ and $\mathrm{C}_{0}^{\ast }$ \begin{eqnarray} \mathrm{C}^{+}=\cos \theta ~\mathrm{C}_{0}^{\ast }+\sin \theta ~\mathrm{C}_{1} &&~, \nonumber \\ \mathrm{C}^{-}=\cos \theta ~\mathrm{C}_{1}-\sin \theta ~\mathrm{C}_{0}^{\ast } &&~, \label{dressed} \end{eqnarray}% with the angle $\theta $ defined by \begin{equation} \tan (2\theta)=-2\frac\upsilon\delta \;,\quad 0\leq 2\theta \leq \pi~. \end{equation}% $\mathrm{C}^{+}$ is defined to have a higher energy than $\mathrm{C}^{-}$ and the splitting between the two states is \begin{equation} \Omega _{\mathrm{R}}=\sqrt{\delta ^{2}+4\upsilon ^{2}}~. \end{equation}% The projection factors in eq.(\ref{dressed}) are \begin{equation} \cos ^{2}\theta = \frac{\Omega _{\mathrm{R}}-\delta} {2\Omega_\mathrm{R}} \;,\quad \sin ^{2}\theta = \frac{\Omega_\mathrm{R}+\delta}{2\Omega _\mathrm{R}}~. \end{equation}% When the coupling is much larger than the detuning ($2\upsilon \gg \left\vert \delta \right\vert $), these projection factors are nearly equal $\cos ^{2}\theta \simeq\sin ^{2}\theta \simeq 1/2$. 
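As an illustrative check of these formulas, consider the resonant case $\delta =0$, which is the situation closest to the experiments discussed below:
\begin{equation}
\delta =0 \;\Rightarrow\; \Omega _{\mathrm{R}}=2\upsilon \;,\quad \cos ^{2}\theta =\sin ^{2}\theta =\frac{1}{2}~,
\end{equation}%
so that each dressed state is an equal-weight superposition of the photonic state $\mathrm{C}_{1}$ and the molecular excitation $\mathrm{C}_{0}^{\ast }$. With Rabi splittings of a fraction of an eV, as quoted in this paper, $\hbar \Omega _{\mathrm{R}}$ then exceeds $k_{\mathrm{B}}T\simeq 25~\mathrm{meV}$ by more than an order of magnitude, which is precisely the non-Markovian condition stated above.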
\begin{figure}[tbp] \centerline{\psfig{figure=fig2.eps,width=8cm}} \caption{Energy diagram of the bare states $\mathrm{U}$ of the ``molecule$+$cavity'' system and of the dressed states $\mathrm{C}$. The energy difference between $\mathrm{U}_{1}$ and $\mathrm{U}_{0}^{\ast }$ is $\hbar \delta $, with $\delta =\omega _{1}-\omega _{\ast }$ the detuning between the frequency $\omega _{1}$ of the cavity mode, and $\omega _{\ast }$ that of the molecular transition. The Rabi splitting $\Omega _{\mathrm{R}}$ between the dressed states $\mathrm{C}^{+}$ and $\mathrm{C}^{-}$ and the energy shift $\Delta_0$ of the ground state $\mathrm{C}_{0}$ are shown.} \end{figure} All molecular states are connected in the molecular Hamiltonian, so that the splitting of $\mathrm{C}^{+}$ and $\mathrm{C}^{-}$ has consequences on the other states. This causes in particular a shift $\Delta _{0}$ of the position of the ground state $\mathrm{C}_{0}$ \cite{FornDiazPRL2010,CuitiPRB2005,CanaguierACIE2013}. With an observed Rabi splitting $2\upsilon\sim 1~\mathrm{eV}$ and a distance between the ground and excited states $\Delta \sim 2~\mathrm{eV}$, this shift cannot be neglected. A naive estimate $\left( 2\upsilon \right) ^{2}/2\Delta $ from second order perturbation theory leads to a value consistent with the result of recent measurements, $\Delta _{0}\sim 0.1\mathrm{eV}$. Indeed, as this shift changes the energy differences between the states of uncoupled and coupled molecules, it can be measured in a thermodynamic approach as the standard Gibbs free energy difference between the ground states of the uncoupled and coupled molecules \cite{CanaguierACIE2013}. \section{Cavity relaxation processes} We now discuss the radiative relaxation processes, which correspond to the emission of a photon by the cavity while leaving the molecular state unaffected. The basis of the method is the application of Fermi's golden rule to dressed states \cite{CohenJPhys1977}. 
For the uncoupled molecules, there is only one relaxation channel corresponding to the transition $\mathrm{U}_{1} \rightarrow \mathrm{U}_{0}$. Simple rate equations describe the evolution of the populations $[\mathrm{U}_{1}]$ and $[\mathrm{U}_{0}]$ due to this process \begin{equation} \frac{\mathrm{d}[\mathrm{U}_{1}]}{\mathrm{d}t} = - \frac{\mathrm{d}[\mathrm{U}_{0}]}{\mathrm{d}t} = - \Gamma_{\mathrm{U}_{0}\mathrm{U}_{1}}[\mathrm{U}_{1}] ~, \end{equation}% and they preserve the sum of the two populations. The transition rate $\Gamma _{\mathrm{U}_{0}\mathrm{U}_{1}}$, defined for the transition $\mathrm{U}_{1}\rightarrow \mathrm{U}_{0}$, is the product of a reduced rate $\gamma $ and a spectral density of optical modes evaluated at the frequency of the transition. The absorption rate on the same transition is the product of the spontaneous emission rate $\Gamma _{\mathrm{U}_{0}\mathrm{U} _{1}}$ by a photon flux $\Phi _{\mathrm{U}_{0}\mathrm{U}_{1}}$ at the relevant frequency \begin{equation} \frac{\mathrm{d}[\mathrm{U}_{1}]}{\mathrm{d}t} = A_{\mathrm{U}_{1}\mathrm{U}_{0}}[ \mathrm{U}_{0}] \,,\; A_{\mathrm{U}_{1}\mathrm{U}_{0}}=\Gamma _{\mathrm{U} _{0}\mathrm{U}_{1}}\Phi _{\mathrm{U}_{0}\mathrm{U}_{1}} ~. \label{pumping} \end{equation} Note that the low $Q$ factor favors absorption events in the cavity and thereby strong coupling. For the coupled states, there are two radiative transition channels $\mathrm{C}^\pm \rightarrow \mathrm{C}_{0}$ with rate equations \begin{equation} \frac{\mathrm{d}[\mathrm{C}^{\pm }]}{\mathrm{d}t} = - \Gamma_{\mathrm{C}_{0}\mathrm{C}^{\pm }}[\mathrm{C}^{\pm }]~. \end{equation} The rates are proportional to the squared projection factors, $\Gamma _{\mathrm{C}_{0}\mathrm{C}^{+}}\propto \sin ^{2}\theta$ and $\Gamma _{\mathrm{C}_{0}\mathrm{C}^{-}}\propto \cos ^{2}\theta$, and to the spectral densities of optical modes at the transition frequencies. 
As these frequencies differ from the bare ones, the values of the emission and absorption rates differ from the expectations deduced from the Markov approximation. We note that the thermodynamical equilibrium is only slightly modified by the absorption processes. The total population of excited states does not exceed a fraction of the order of $10^{-7}$ in the case of static spectroscopic experiments ($\sim 10^{-2}$ for pump-probe measurements), so that the depletion of the ground states remains negligible. This means that the populations $[\mathrm{U}_{0}]$ and $[\mathrm{C}_{0}]$ remain close to their values in vacuum, and it also explains why stimulated emission processes can be disregarded. \section{Vibrational relaxation processes} We now study vibrational relaxation processes, which are the dominant relaxation mechanism for most organic molecules. They correspond to internal conversion of energy via a rapid cascade down the vibrational ladder of the molecule. Typical organic molecules used in strong-coupling experiments have over $100$ fundamental vibration modes. Another non-radiative relaxation process is the F\"orster energy transfer between different molecules with conservation of energy. Well known in molecular photophysics \cite{ScholesARPC2003}, these processes correspond to a transfer of excitation due to F\"orster dipole-dipole coupling between molecules over distances of a few nm to a few tens of nm. The energy excess, required for energy conservation, is dissipated by a vibrational cascade down to the lowest level of the corresponding electronic multiplicity, as sketched in Figure 3. Though they involve the Coulomb interaction, these energy transfer mechanisms can be considered as non-radiative since they do not couple to the free radiation field. It is also worth noting that, at the small intermolecular distance scales where they occur, they are not expected to efficiently perturb the coherence of the collective dipole. 
We do not enter into a detailed microscopic description of these processes, well-known in molecular science, which leave the cavity state unaffected. We give qualitative descriptions which are sufficient for our purpose. A crucial feature in our case is that the thermal energy $k_\mathrm{B}T$ is much smaller than the energy differences, so that downward transitions are dominant. The only exception to this rule is the case of transitions between ground states, which correspond to the smaller energy shift $\Delta _{0}$ and determine the thermodynamical equilibrium of the ground states of the coupled and uncoupled molecules \cite{CanaguierACIE2013}. For uncoupled states, there is only one non-radiative transition $\mathrm{U}_{0}^{\ast }\rightarrow \mathrm{U}_{0}$. As previously, this process is described by a rate equation \begin{equation} \frac{\mathrm{d}[\mathrm{U}_{0}^\ast]}{\mathrm{d}t} = - W_{\mathrm{U}_{0}\mathrm{U}_{0}^\ast}[\mathrm{U}_{0}^\ast] ~. \end{equation} The rate $W_{\mathrm{U}_{0}\mathrm{U}_{0}^\ast}$ is the product of a reduced rate $w^\ast$ by a spectral density $\mathcal{S}$ which represents the coupling of the two vibronic multiplicities and depends on the energy difference. This reduced rate is relatively small as this energy difference is much larger than $k_{\mathrm{B}}T$. For coupled states, there are similar transitions $\mathrm{C}^\pm \rightarrow \mathrm{C}_{0}$ \begin{equation} \frac{\mathrm{d}[\mathrm{C}^\pm]}{\mathrm{d}t} = - W_{\mathrm{C}_{0} \mathrm{C}^\pm}[\mathrm{C}^\pm] ~, \end{equation} with $W_{\mathrm{C}_{0}\mathrm{C}^{+}}$ and $W_{\mathrm{C}_{0}\mathrm{C}^{-}}$ proportional to $\cos^2\theta$ and $\sin^2\theta$ respectively. \begin{figure}[tbp] \centerline{\psfig{figure=fig3.eps,width=8cm}} \caption{Schematic representation of uncoupled $\mathrm{U}$ and coupled $\mathrm{C}$ states in the non-Markovian regime. 
The vibrational ladders associated with each molecular configuration are represented in grey shadows and the non-radiative relaxation paths as red vertical arrows. Transitions occurring between uncoupled and coupled molecules are represented by horizontal black arrows. } \end{figure} There exists one relaxation channel which is opened by the strong coupling and could never be seen in the absence of this effect. It corresponds to the transition between the dressed excited states $\mathrm{C}^{+}\rightarrow \mathrm{C}^{-}$ \begin{equation} \frac{\mathrm{d}[\mathrm{C}^{+}]}{\mathrm{d}t} = - W_{\mathrm{C}^{-}\mathrm{C}^{+}}[\mathrm{C}^{+}] ~, \end{equation} with a rate proportional to $\cos ^{2}\theta \sin ^{2}\theta $. This new channel has a maximal rate when $\cos ^{2}\theta \simeq \sin ^{2}\theta \simeq 1/2$ and is very similar to the collision-induced transitions studied in \cite{ReynaudJPhys1982}. Note that, as the energy difference is smaller, the rate is larger than for the transitions studied in the preceding paragraph. We come now to a second category of transitions, occurring between coupled and uncoupled molecules, schematized in Figure 3. Such transitions are observed experimentally as energy transfer processes with well defined signatures \cite{VirgiliPRB2011,SchwartzCPC2013}. In the study of ground states, we consider transitions in both directions, $\mathrm{C}_{0}\rightarrow \mathrm{U}_{0} $ and $\mathrm{U}_{0}\rightarrow \mathrm{C}_{0}$, because the energy difference $\Delta _{0}$ is not so large with respect to $k_{\mathrm{B}}T$. These transitions produce the thermodynamical equilibrium between populations of coupled and uncoupled molecules \begin{equation} \frac{[\mathrm{C}_{0}]}{[\mathrm{U} _{0}]} = \exp\frac{\Delta_0}{k_{\mathrm{B}}T} ~. \end{equation} This equilibrium favors coupled molecules for a downward shift $\Delta _{0}>0$ of the coupled state. 
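With the measured shift $\Delta _{0}\sim 0.1~\mathrm{eV}$ and $k_{\mathrm{B}}T\simeq 25~\mathrm{meV}$ at room temperature, this equilibrium ratio can be estimated (as an order of magnitude only, since $\Delta _{0}$ is known only roughly):
\begin{equation}
\frac{[\mathrm{C}_{0}]}{[\mathrm{U}_{0}]} \sim \exp\frac{0.1~\mathrm{eV}}{25~\mathrm{meV}} = e^{4}\simeq 55~,
\end{equation}%
so that the ground-state equilibrium favors the coupled molecules by more than one order of magnitude.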
For similar transitions between the excited states of coupled and uncoupled molecules, energy differences are large, and we consider only downward transitions $\mathrm{C}^{+}\rightarrow \mathrm{U}_{1}$, $\mathrm{U}_{1}\rightarrow \mathrm{C}^{-}$, $\mathrm{C}^{+}\rightarrow \mathrm{U}_{0}^{\ast }$, $\mathrm{U} _{0}^{\ast }\rightarrow \mathrm{C}^{-}$. The rates $W_{\mathrm{U}_{1}\mathrm{C} ^{+}}$, $W_{\mathrm{C}^{-}\mathrm{U}_{1}}$, $W_{\mathrm{U}_{0}^{\ast } \mathrm{C}^{+}}$, $W_{\mathrm{C}^{-}\mathrm{U}_{0}^{\ast }}$ are respectively proportional to $\sin ^{2}\theta $, $\cos ^{2}\theta $, $\cos ^{2}\theta $ and $\sin ^{2}\theta $ and to spectral densities at the relevant frequencies. Hence, they can only be calculated on the diagram of dressed states and are not determined by rates known for bare molecules. This situation, typical for a non-Markovian regime, is in sharp contrast with the Markov approximation where the downward and upward rates would be similar. \section{Orders of magnitudes} Magnitudes of the various rates are known from the experiments (see for instance \cite{SchwartzCPC2013}). The largest rate corresponds to the radiative transition between uncoupled states $\Gamma_{\mathrm{U}_{0} \mathrm{U}_{1}}\sim 4\times 10^{13}\mathrm{s}^{-1}$ which is strongly favored by the cavity. In particular it is much larger than other radiative rates which have values in the range of $10^{11}\mathrm{s}^{-1}$. Large values are also obtained for non-radiative transition rates between excited states $W_{\mathrm{C}^{-}\mathrm{C}^{+}},W_{\mathrm{U}_{1}\mathrm{C} ^{+}},W_{\mathrm{U}_{0}^{\ast }\mathrm{C}^{+}},W_{\mathrm{C}^{-}\mathrm{U} _{1}},W_{\mathrm{C}^{-}\mathrm{U}_{0}^{\ast }}\sim 10^{13}\mathrm{s}^{-1}$, which arise as consequences of strong coupling. The first one $W_{\mathrm{C} ^{-}\mathrm{C}^{+}}$ has a dependence $\propto \cos ^{2}\theta \sin ^{2}\theta $ which makes it large for molecules with a Rabi coupling larger than the detuning. 
A similar discussion applies to the products of rates along the cascades $\mathrm{C}^{+}\rightarrow \mathrm{U}_{1}\rightarrow \mathrm{C}^{-}$ and $\mathrm{C}^{+}\rightarrow \mathrm{U}_{0}^{\ast }\rightarrow \mathrm{C}^{-}$. They correspond to two-step relaxation processes $\mathrm{C}^{+}\rightarrow \mathrm{C}^{-}$ which are large when $\cos ^{2}\theta \sin ^{2}\theta $ has its maximum value. These processes offer possibilities to explain a selection of strongly coupled molecules among a diverse population. The other non-radiative rates have smaller values, $W_{\mathrm{U}_{0}\mathrm{U} _{0}^{\ast }}\sim 10^{12}\mathrm{s}^{-1}$ and $W_{\mathrm{C}_{0}\mathrm{C} ^{-}}\sim 10^{12}\mathrm{s}^{-1}\gg \Gamma _{\mathrm{C}_{0}\mathrm{ C}^{-}}$. These orders of magnitude allow one to write down a simplified system of rate equations. The largest absorption rate is indeed the rate $A _{\mathrm{U}_{1}\mathrm{U}_{0}}$ associated with the absorption from $\mathrm{U}_{0}$ to $\mathrm{U}_{1}$, and the main relaxation channel is then the non-radiative relaxation from $ \mathrm{U}_{1}$ to $\mathrm{C}^{-}$. 
The populations of the states $\mathrm{C }^{+}$ and $\mathrm{U}_{0}^{\ast }$ remain negligible at all times and can be ignored, which leads to the following simplified solutions \begin{eqnarray} &&\left[ \mathrm{U}_{1}\right] \left( t\right) \simeq \int_{0}^{t}\mathrm{d} t^{\prime }~e^{-R_{\mathrm{U}_{1}}t^{\prime }} A _{\mathrm{U}_{1}\mathrm{U}_{0}} \left( t-t^{\prime }\right) [ \mathrm{U}_{0}] ~, \\ &&\left[ \mathrm{C}^{-}\right] \left( t\right) \simeq \int_{0}^{t} \mathrm{d}t^{\prime }~e^{-R_{\mathrm{C}^{-}}t^{\prime }}W_{\mathrm{C}^{-} \mathrm{U}_{1}}[\mathrm{U}_{1}]\left( t-t^{\prime }\right) ~, \notag \end{eqnarray}% where $R_{\mathrm{U}_1}$ and $R_{\mathrm{C}^-}$ are the total relaxation rates for the states $\mathrm{U}_1$ and $\mathrm{C}^-$ \begin{eqnarray} &&R_{\mathrm{U}_{1}}\simeq \Gamma_{\mathrm{U}_{0}\mathrm{U}_{1}} + W_{\mathrm{ C}^{-}\mathrm{U}_{1}} ~, \\ &&R_{\mathrm{C}^{-}} \simeq \Gamma_{\mathrm{C}_{0} \mathrm{C}^{-}} + W_{\mathrm{C}_{0}\mathrm{C}^{-}} ~. \notag \end{eqnarray}% The population of $\mathrm{U}_{1}$ follows the pumping rate (\ref{pumping}), with a delay determined by $R_{ \mathrm{U}_{1}}$. As already stated, the population of $\mathrm{U}_{0}$ is not significantly depleted and can be considered as constant. The population of $\mathrm{ C}^{-}$ follows the feeding from $\mathrm{U}_{1}$, with a delay determined by $R_{\mathrm{C}^{-}}$. As $R_{\mathrm{U}_{1}}\sim 5\times 10^{13} \mathrm{s}^{-1}$ is $\sim50$ times larger than $R_{\mathrm{C}^{-}}\sim 10^{12}\mathrm{s}^{-1}$, it follows that $[\mathrm{U}_{1}]$ reaches a quasi-stationary value $A _{\mathrm{U}_{1}\mathrm{U}_{0}} [\mathrm{U}_{0}] / R_{\mathrm{U}_{1}}$ after a very short time $R_{\mathrm{U}_{1}}^{-1}\sim 20\mathrm{fs}$. Then $[\mathrm{C}^{-}]$ shows a quasi-stationary behavior for a much longer time $ R_{\mathrm{C}^{-}}^{-1}\sim 1\mathrm{ps}$ during which it is by far the most populated excited state and determines all observables. 
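The hierarchy of time scales described above can be illustrated by a minimal numerical integration of the simplified cascade $\mathrm{U}_{0}\rightarrow \mathrm{U}_{1}\rightarrow \mathrm{C}^{-}$, using the order-of-magnitude rates quoted in the text; the constant pump amplitude is an arbitrary normalization, not an experimental value:

```python
import numpy as np

# Order-of-magnitude rates quoted in the text (s^-1)
R_U1 = 5e13    # total relaxation rate of U1 (~20 fs lifetime)
R_Cm = 1e12    # total relaxation rate of C-  (~1 ps lifetime)
W_CmU1 = 1e13  # feeding rate U1 -> C-
pump = 1.0     # A_{U1 U0} [U0], constant pumping (arbitrary normalization)

dt = 1e-16                      # time step (s), well below 1/R_U1
t = np.arange(0.0, 5e-12, dt)   # integrate up to 5 ps
U1 = np.zeros_like(t)
Cm = np.zeros_like(t)
for i in range(1, len(t)):      # explicit Euler integration
    U1[i] = U1[i-1] + dt * (pump - R_U1 * U1[i-1])
    Cm[i] = Cm[i-1] + dt * (W_CmU1 * U1[i-1] - R_Cm * Cm[i-1])

# U1 reaches its quasi-stationary value pump/R_U1 after ~20 fs,
# while C- keeps accumulating population on the much longer ~1 ps scale
# and ends up as the dominant excited-state population.
print(U1[-1] * R_U1 / pump)     # ~1: U1 is quasi-stationary
print(Cm[-1] / U1[-1])          # C- dominates the excited-state population
```

The same qualitative behavior is obtained for any pump normalization, since the equations are linear in the populations.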
This explains the main feature observed in the experiments, that is the extremely long lifetime of the lower dressed state $\mathrm{C}^{-}$, which is much longer than that of the other excited states. \section{Discussion} The decay of $\mathrm{C}^{-}$ is dominated by internal vibrational relaxation, whereas the radiative decay (fluorescence) is a negligible pathway. Even if the fluorescence rate is not suppressed, it is overwhelmed by the non-radiative rate enhanced in the strong coupling regime due to internal conversion via the vibrational overlap between $\mathrm{C}^{-}$ and $\mathrm{C}_{0}$. This increase of the non-radiative decay with respect to the radiative one is confirmed by the small emission quantum yield measured at the level of strongly coupled molecules (numbers given below). Meanwhile, the higher dressed state $\mathrm{C}^{+}$ is much shorter lived, due to the extremely rapid vibrational decay to $\mathrm{C}^{-}$ and to energy transfer to uncoupled molecules (see Fig.3). The lifetime of $\mathrm{C}^{+}$ turns out to be less than $150$ fs while the lifetime of $\mathrm{C}^{-}$ is, at resonance, of the same order as that of the bare molecule \cite{SongPRB2004,HutchisonACIE2012,SchwartzCPC2013}. In fact, the strong dissymmetry between the $\mathrm{C }^{-}$ and $\mathrm{C}^{+}$ lifetimes is a direct proof of the importance of the vibrational coupling for the decay process of the polaritons, as well as of the non-Markovian character of the associated relaxation. As is also known for the lowest excited level of most molecules, $\mathrm{C}^-$ has a very long lifetime precisely because the vibrational overlap of the lowest excited level with the ground state is much smaller than its overlap with the higher excited states. Let us discuss here two examples. 
For merocyanine strongly coupled ($\Omega _\mathrm{R}\sim 0.7$eV) to a Fabry-Perot cavity of low $Q$ factor ($\sim 10$), the half-life of $\mathrm{C}^{-}$, $\sim 10$ps, is much longer than the photon lifetime in the bare cavity ($\Gamma _{\mathrm{U}_{0} \mathrm{U}_{1}} ^{-1} \sim 25$fs) while being shorter than that of the bare molecules ($30$ps) \cite{HutchisonACIE2012}. In the case of the TDBC J-aggregate strongly coupled ($\Omega _\mathrm{R}\sim 0.35$eV) to a similar low $Q$ cavity, $\mathrm{C}^{-}$ has a half-life of $4$ps, which is even longer than the $1$ps half-life of the bare organic material, as shown in Fig.4. Note that these lifetime values are the same whether $\mathrm{C}^{-}$ is excited resonantly or not. When the pump reaches higher electronic levels of uncoupled or coupled molecules, the same transient spectrum and lifetime are observed, confirming that $\mathrm{C}^{-}$ determines the observables because the population accumulates in this longest-lived state. We also emphasize that, for both types of molecules, the quantum yields in the strong coupling regime are remarkably low. Indeed, for merocyanine, a highly efficient organic dye, the measured quantum yield associated with $\mathrm{C}^{-}$ falls below $10^{-4}$ \cite{HutchisonACIE2012,SchwartzCPC2013}. For TDBC, we measure a quantum yield $\sim 4\times 10^{-3}$ \cite{WangPrep}. In fact, this can also be found by simply remembering that for molecules with high oscillator strength, such as merocyanine and TDBC, the radiative rate is at best $\sim10^{9}$s$^{-1}$ so that, given the observed $\mathrm{C}^{-}$ lifetimes of the order of picoseconds, quantum yields are expected to be less than $10^{-2}$. \begin{figure}[tbp] \centerline{\psfig{figure=fig4.eps,width=7cm}} \caption{Temporal evolution of the total change in absorption recorded immediately after a $150$fs pump pulse at $590$nm for a bare film of TDBC molecules (red data) and for TDBC molecules coupled to the cavity (black data; see Fig.1 b). 
After the pumping rise time, the relaxation appears exponential over this time scale, in agreement with Eq.(3). The half-life of $\mathrm{C}^{-}$, $\sim 4$ps (black), is thus longer than that of the bare molecule (red). At this time scale, the radiative lifetime ($\sim 25$fs) of the low $Q$ cavity appears as instantaneous.} \label{fig4} \end{figure} \section{Conclusion} In conclusion, we have shown that the observed lifetimes of the polariton states are naturally explained by the non-Markovian relaxation approach developed in this letter. The lifetimes of the excited states are determined by vibrational relaxation phenomena and they are strongly affected by the large Rabi splitting, which changes the overlaps of the vibrational reservoirs. In particular, the lifetime of the lower dressed state $\mathrm{C}^{-}$ is much longer than that of the other excited states, and its value is disconnected from the photon decay rate in the bare cavity as well as from the relaxation rates of the bare molecular states. This explains the main features observed in experiments and also opens new possibilities to influence chemical dynamics by controlling organic strong coupling. \paragraph{Acknowledgment -} The authors are grateful to Claude Cohen-Tannoudji, James A. Hutchison and Jean-Marie Lehn for fruitful discussions. This work was funded in part by the ERC (grant 227577) and the ANR (Equipex Union).
\section{Introduction} \label{sec:1} Quantum key distribution (QKD) allows two remote users (Alice and Bob) to establish a secure key by transmitting quantum states through an untrusted channel \cite{Gisin02, Scarani09, Lo14, Diamanti16}. The generated secure key can be further applied in various cryptographic protocols to achieve long-term proven security against adversaries with unlimited computational power. QKD protocols are commonly divided into two families based on encoding schemes: discrete-variable (DV) QKD or continuous-variable (CV) QKD. In contrast, classical optical communications are typically grouped into two categories based on detection schemes: direct detection or coherent detection. In principle, combining both encoding (DV or CV) and detection (direct or coherent) could lead to four families of QKD protocols. Among them, DV-QKD with direct detection (single photon detection) \cite {BB84, E91} and CV-QKD with coherent detection (optical homodyne detection) \cite{Ralph99, Hillery00, GMCSQKD} are dominant, although other protocols, such as CV-QKD using single photon detection \cite{Qi06}, do exist. For simplicity, in this paper we use the term CV-QKD to refer to CV-QKD using coherent detection. CV-QKD's most distinguishing feature is coherent detection. It enables high speed optical homodyne detection with no dead time, which can yield high secure key rates over short distances. Further, the intrinsic filtering provided by the local oscillator in a coherent receiver can effectively suppress background noise and enable QKD through conventional dense wavelength division multiplexed fiber networks in the presence of strong classical traffic \cite{Qi10, Kumar15, Eriksson19}. The similarity between a CV-QKD system and a classical coherent communication system opens the door for simultaneous quantum and classical communications \cite{Qi16}, with a technological pathway towards fully integrated, on-chip, photonic implementation \cite{Zhang19}. 
Integrating CV-QKD on a chip may have several benefits. It allows for the integration of multiple photonic functions into a single compact circuit. In particular, the phase-sensitive optical circuits commonly used for CV-QKD can be made more robust to temperature-induced phase drifts by reducing path-length differences on chip. Furthermore, silicon photonic devices are compatible with complementary metal-oxide-semiconductor (CMOS) processes that enable monolithic integration of electronics and photonics, potentially leading to significant cost reduction, which would enable the widespread utilization of QKD. One important CV-QKD protocol is the Gaussian-modulated coherent states (GMCS) QKD protocol \cite{GMCSQKD}, which has been implemented with standard off-the-shelf telecom components, such as laser sources, optical homodyne detectors, and optical intensity and phase modulators \cite{Lodewyck07, Qi07, Jouguet13, Huang16, ZLC19}. In the GMCS protocol, Alice first generates Gaussian distributed random numbers $x_A$ and $p_A$ and then prepares a coherent state $|x_A+ip_A\rangle$ sent to Bob through a channel controlled by the adversary (Eve). The quantum state preparation is commonly implemented \emph{actively} using amplitude and phase modulation. High speed modulation with high extinction is extremely challenging in chip-scale silicon photonics. While on-chip modulation with an extinction ratio above 65 dB has been demonstrated recently \cite{Liu2017}, the high speed on-chip modulators required for active QKD encoding add significant cost, manufacturing time and complexity, and are the principal source of loss in most integrated photonic circuits. Therefore, removing the modulators used for encoding may yield significant reductions in cost, manufacturing time, and on-chip loss. 
One could simplify the chip-scale implementation by adopting a passive CV-QKD protocol, where the amplitude and phase modulators in the GMCS QKD are replaced by a thermal source, beam splitters, optical attenuators and homodyne detectors \cite{Qi18}. As we have shown in \cite{Qi18}, given that Alice's QKD transmitter is trusted, the passive CV-QKD protocol is equivalent to GMCS QKD. This means that the well-established security proofs for GMCS QKD can be applied to passive CV-QKD directly. More recently, this passive state preparation scheme has also been extended to measurement-device-independent CV-QKD \cite{Bai19}. It could also be applied in other CV quantum communication protocols, such as quantum secret sharing \cite{Grice19} and quantum digital signatures \cite{Croal16}. In this paper, we conduct experimental studies of passive CV-QKD using a practical multi-mode thermal source. As we will show below, the excess noise due to the passive state preparation scheme can be effectively suppressed by Alice applying optical attenuation, and a secure key can be generated over metro-area distances using an off-the-shelf amplified spontaneous emission source operated in continuous-wave (cw) mode. This paper is organized as follows: in Sec. II, we review the passive CV-QKD protocol and compare it with the conventional GMCS QKD \cite{GMCSQKD, Weedbrook04} as well as entanglement-based CV-QKD using a two-mode squeezed vacuum state \cite{Grosshans03}. In Sec. III, we present our experimental setup and develop a corresponding noise model. In Sec. IV, we present the experimental results. Finally, we conclude this paper with a brief summary in Sec. V. \section{The protocol and its security} \label{sec:2} In the GMCS QKD protocol \cite{GMCSQKD}, Alice generates Gaussian distributed random numbers $x_A$ and $p_A$ using a trusted random number generator, prepares a coherent state $|x_A+ip_A\rangle$ accordingly, and transmits it to Bob, as shown in Fig. 1(a). 
At Bob's end, he can either measure a randomly chosen quadrature of the incoming quantum state by conducting single homodyne detection \cite{GMCSQKD} or measure both quadratures simultaneously by conducting conjugate homodyne detection (which is called the heterodyne protocol) \cite{Weedbrook04}. Alice and Bob further estimate the channel-induced noise and other QKD parameters by comparing a subset of their data through an authenticated classical channel. This allows them to upper bound the information that could be gained by a third party, Eve. Provided the correlation between Alice and Bob's data is above a certain threshold, they can perform reconciliation and privacy amplification to generate a final secure key. There are different reconciliation algorithms in GMCS QKD. In this paper, we consider the heterodyne protocol \cite{Weedbrook04} using reverse reconciliation \cite{GMCSQKD}, where Alice tries to determine Bob's measurement results from her data. In this case, a secure key can be generated as long as the mutual information between Alice and Bob is larger than the information Eve could have on Bob's measurement results. More details about the GMCS QKD can be found in recent reviews \cite{Diamanti15, Laudenbach18}. \begin{figure}[t] \includegraphics[width=.45\textwidth]{Fig1.pdf} \captionsetup{justification=raggedright, singlelinecheck=false } \caption{Three equivalent quantum state preparation schemes. (a) The Gaussian-modulated coherent states (GMCS) QKD \cite{GMCSQKD}. TRNG, true random number generator; A$\&$P mod., amplitude and phase modulators. (b) Entanglement-based CV-QKD \cite{Grosshans03}. TMSV, two-mode squeezed vacuum state; HOM, homodyne detector; BS, beam splitter. (c) Passive CV-QKD \cite{Qi18}. 
Att., optical attenuator.} \label{fig:1} \end{figure} Note that from Eve's point of view, the quantum state from Alice is a mixture of all possible coherent states, which is simply a thermal state with an average photon number $n_0=V_A/2$, where $V_A$ is Alice's modulation variance. We remark that throughout this paper, all the noise variances and modulation variances are defined in shot-noise units. There are different ways (corresponding to different QKD protocols) for Alice to prepare the outgoing thermal state, as shown in Fig. 1. As long as Alice's QKD transmitter is within a trusted location, Eve cannot tell which protocol is actually carried out by Alice. This suggests that all these QKD protocols are equivalent in terms of security. In fact, the security of the GMCS QKD is commonly analyzed based on an entanglement-based CV-QKD protocol using a two-mode squeezed vacuum state \cite{Grosshans03}, as shown in Fig. 1(b). In this entanglement-based protocol, by performing conjugate homodyne detection on one mode of a properly chosen two-mode squeezed vacuum state, Alice acquires Gaussian distributed random numbers $x_A$ and $p_A$ with the desired variance of $V_A$. In the meantime, Alice transmits the other mode, which from her perspective is projected to a coherent state $|x_A+ip_A\rangle$, to Bob through the quantum channel. From Eve's point of view, since she has no information about $x_A$ and $p_A$, the state from Alice is thermal, as in the case of GMCS QKD. In passive CV-QKD \cite{Qi18}, the intrinsic field fluctuations of a thermal source are utilized to generate the secure key, as shown in Fig. 1(c). In this protocol, Alice splits the output of a thermal source into two spatial modes using a beam splitter. One mode is sent to Bob after being attenuated using an optical attenuator, while the other mode is measured locally by Alice using conjugate homodyne detection. 
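The statement that Gaussian modulation produces a thermal state can be checked at the level of quadrature statistics. In shot-noise units (vacuum quadrature variance equal to one), modulating a quadrature with variance $V_A$ on top of the vacuum noise yields a total quadrature variance $V_A+1=2n_0+1$ with $n_0=V_A/2$, which is the variance of a thermal state with mean photon number $n_0$. A minimal Monte-Carlo sketch of this identity follows; the value of $V_A$ is an arbitrary choice for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
V_A = 4.0                               # modulation variance (arbitrary choice)

x_A = rng.normal(0.0, np.sqrt(V_A), N)  # Gaussian modulation of one quadrature
x_vac = rng.normal(0.0, 1.0, N)         # vacuum (shot) noise, unit variance
x_out = x_A + x_vac                     # quadrature of the emitted mixture

n_0 = V_A / 2                           # expected thermal mean photon number
print(np.var(x_out))                    # ~ V_A + 1 = 2*n_0 + 1
```

The same check applies to the $p$ quadrature, since the modulation is symmetric in phase space.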
Alice further numerically scales down her measurement results by a factor of $\alpha_A$, acquiring Gaussian-distributed random numbers $x_A$ and $p_A$ as her estimates of the quadrature values of the outgoing mode (see Section III for the details about how to determine the optimal $\alpha_A$). By choosing a proper combination of source intensity and optical attenuation, the quadrature variance of the outgoing mode can be set to the desired value $V_A$. Again, from Eve's point of view, she cannot distinguish the quantum state sent in passive CV-QKD from the one in the GMCS QKD (or, for that matter, entanglement-based CV-QKD), so all three protocols are equivalent in terms of security. We remark that the combination of the balanced beam splitter and the optical attenuator in Fig. 1(c) can be replaced by a single unbalanced beam splitter with a suitable splitting ratio. The latter configuration can lead to a more efficient use of the thermal source. While the security is not dependent on which of the above schemes is employed, the secure key rate is sensitive to the excess noise generated in the quantum state preparation process. Since the quantum states sent by Alice are identical in the three schemes, the information Eve could gain on Bob's measurement results is also identical. The mutual information between Alice and Bob directly connects to Alice's uncertainties on the quadrature values of the outgoing mode. In GMCS QKD (and the above entanglement-based CV-QKD), the minimum uncertainty Alice could achieve on either quadrature of the outgoing mode is equal to one, as determined by the uncertainty principle in quantum mechanics.
In Appendix A, we present a detailed noise model of the passive CV-QKD protocol and show that Alice's uncertainty on the outgoing mode is given by \begin{eqnarray}\label{eq1} \Delta=\dfrac{2V_A(\upsilon_{a}+1)}{V_A \eta_{a}+2\eta_0(\upsilon_{a}+1)}\eta_0+1, \end{eqnarray} where $\eta_0$ is the transmittance of the optical attenuator, $\eta_{a}$ and $\upsilon_{a}$ are the efficiency and noise variance of Alice's detector, respectively. See Eq. (A5) in Appendix A. From Eq. (1), the excess noise due to the passive state preparation scheme (which equals $\Delta-1$) can be effectively suppressed by introducing optical attenuation at Alice (while keeping $V_A$ at the desired value). Equation (1) suggests that regardless of the amount of detector noise, as $\eta_0$ approaches zero, so does the excess noise. This is convenient in practice, since Alice's detector does not need to be shot-noise limited. To further illustrate the role of the optical attenuation, in Appendix A, we explicitly show that under the beam-splitting attack, the information advantage between Alice and Bob, when compared to that between Eve and Bob, can be improved by introducing optical loss at Alice. In practice, $\eta_0$ cannot be zero; otherwise, to have a non-zero average photon number in the outgoing mode, the intensity of the thermal source would have to be infinite. Security proofs of practical CV-QKD, taking into account the excess noise contributed by both Alice and Bob, are well developed \cite{Diamanti15, Laudenbach18}. In the next section, we will present our experimental setup, develop a corresponding noise model, and quantify the QKD performance using the established security proof. \section{Experimental setup and noise model} \label{sec:3} The experimental setup is shown in Fig. 2. A fiber amplifier (PriTel, Inc.) operated in cw mode (Amplified Spontaneous Emission: ASE in Fig. 2) with vacuum state input is employed as a broadband thermal source.
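The attenuation behavior predicted by Eq. (1) is easy to check numerically. The sketch below is a minimal Monte Carlo of the single-mode passive preparation step (perfect mode overlap assumed; the parameter values are illustrative, not the experimental ones): Gaussian quadratures are propagated through the beam splitter and attenuator of Fig. 1(c), Alice's noisy measurement is rescaled by the empirically optimal factor of Eq. (5), and the resulting conditional variance is compared with $\Delta$ from Eq. (1).

```python
import numpy as np

def passive_prep_uncertainty(n0, eta0, eta_a, v_a, samples=400_000, seed=1):
    """Monte Carlo estimate of Alice's uncertainty on the outgoing
    X-quadrature in the passive scheme (single mode, perfect overlap a = 1)."""
    rng = np.random.default_rng(seed)
    x_in = rng.normal(0.0, np.sqrt(2 * n0 + 1), samples)      # thermal quadrature
    xv1, xv2, xv3, xva = rng.normal(0.0, 1.0, (4, samples))   # vacuum modes
    e_a = rng.normal(0.0, np.sqrt(v_a), samples)              # detector electronic noise
    # Outgoing mode after the balanced splitter and attenuator, Eq. (3)
    x1 = np.sqrt(eta0 / 2) * (x_in - xv1) - np.sqrt(1 - eta0) * xv3
    # Alice's heterodyne record, Eq. (4) with a = 1, b = 0
    x2 = (np.sqrt(eta_a) / 2) * (x_in + xv1) + np.sqrt(eta_a / 2) * xv2 \
         - np.sqrt(1 - eta_a) * xva + e_a
    alpha = np.mean(x1 * x2) / np.mean(x2 ** 2)               # optimal scaling, Eq. (5)
    return np.mean((x1 - alpha * x2) ** 2)

# Illustrative numbers: bright source, strong attenuation, noisy detector
n0, eta0, eta_a, v_a = 100.0, 0.01, 0.43, 0.17
VA = eta0 * n0
delta = 2 * VA * (v_a + 1) * eta0 / (VA * eta_a + 2 * eta0 * (v_a + 1)) + 1  # Eq. (1)
print(passive_prep_uncertainty(n0, eta0, eta_a, v_a), delta)
```

With these numbers the excess noise $\Delta-1$ is only about 0.05 shot-noise units even though Alice's detector is far from shot-noise limited, illustrating the suppression provided by $\eta_0$.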
Previous studies have shown that the output from an unseeded fiber amplifier is thermal \cite{WH98, VV00, Qi18, Qi17}. To select a single polarization mode, a fiber pigtailed polarizer (Pol in Fig. 2) is placed right after the light source. Since the spectral bandwidth of the source is about 30 nm, a 0.8 nm optical bandpass filter centered at 1542 nm (BP in Fig. 2) is employed to reduce the optical power of unused light. Note that the actual bandwidth of this filter is not crucial since the optical coherent detection is mode selective: only photons in the same spectral-temporal and polarization mode as the local oscillator will be detected. Nevertheless, the optical power of unused modes should be suppressed so that they are well below the power of the local oscillator. \begin{figure}[t] \includegraphics[width=.45\textwidth]{Fig2.pdf} \captionsetup{justification=raggedright, singlelinecheck=false } \caption{Experimental setup. ASE, broadband amplified spontaneous emission source; L, narrow-band laser source as local oscillator; BP, optical band pass filter; BS, balanced beam splitter; Pol, fiber polarizer; Att1/Att2, variable optical attenuator; $90^\textup{o}$ OH, 90-degree optical hybrid; BD, balanced photodetector.} \label{fig:2} \end{figure} The filtered thermal light is split by a balanced beam splitter into two modes: one is sent to Bob after passing through an optical attenuator while the other is measured locally by Alice using conjugate homodyne detection. Note that there are two optical attenuators shown in Fig. 2: Att1 represents a ``trusted'' attenuator inside Alice's system which cannot be controlled by Eve. It provides the desired attenuation $\eta_0$ shown in Eq. (1); Att2 represents the channel loss which is fully controlled by Eve. In this proof-of-principle experiment, a single optical attenuator is employed to provide the combined attenuation of Att1 and Att2. 
A cw laser source with a central wavelength of 1542 nm (Clarity-NLL-1542-HP from Wavelength Reference) is employed to provide local oscillators for both Alice's and Bob's conjugate homodyne detection systems. For long distance applications, instead of transmitting a local oscillator from Alice to Bob, it is more appropriate to generate Bob's local oscillator locally \cite{Qi15, Soh15, Huang15}. The conjugate homodyne detection is implemented using a 90-degree optical hybrid and two balanced photodetectors. Limited by the detectors available, different types of balanced photodetectors are employed in Alice's and Bob's systems. The corresponding bandwidths are 100 MHz (Alice) and 75 MHz (Bob). The outputs of all the balanced photodetectors are sampled by a real time oscilloscope. One important deviation of the setup shown in Fig. 2 from the ideal passive QKD protocol discussed in the previous Section is that a multi-mode (rather than a single mode) thermal source is employed in the actual experiment. On one hand, this modification greatly simplifies the implementation since it is experimentally challenging to prepare a single mode thermal state; on the other hand, the existence of unused modes could contribute additional noise, as we will discuss below. Within the integration time of the homodyne detector, the output of the light source (even after the 0.8 nm spectral filter) contains many spectral-temporal modes of independent thermal states. To generate correlated keys, Alice and Bob have to measure the same mode. In practice, the modes measured by Alice and Bob (which are determined by the modes of their local oscillators) may not be perfectly overlapped. We define the mode measured by Bob as mode $\vert B\rangle$.
The mode measured by Alice may be decomposed as \begin{eqnarray}\label{eq2} \vert A\rangle=a \vert B\rangle + b \vert B'\rangle \end{eqnarray} where $\vert B'\rangle$ represents the mode orthogonal to mode $\vert B\rangle$, and $\vert a \vert^2 + \vert b \vert^2 =1$. Without loss of generality, we assume the mode overlap coefficient $a$ is a real number. \begin{figure}[t] \includegraphics[width=.5\textwidth]{Fig3.pdf} \captionsetup{justification=raggedright, singlelinecheck=false } \caption{Noise model of the experimental setup. S, thermal source; $\textup{BS}_{1-3}$, balanced beam splitter; Att, optical attenuator; HD, homodyne detector; T, a beam splitter emulating channel loss; $\eta_{ax}$, $\eta_{bx}$, $\eta_{ap}$, $\eta_{bp}$, beam splitters emulating losses of homodyne detectors.} \label{fig:3} \end{figure} For simplicity, we only consider the X-quadrature below. The P-quadrature can be studied in a similar way. The X-quadrature of the outgoing mode (see Fig. 3) is given by \begin{eqnarray}\label{eqa3} x_1=\sqrt{\frac{\eta_0}{2}}x_{in}-\sqrt{\frac{\eta_0}{2}}x_{v1}-\sqrt{1-\eta_0}x_{v3}, \end{eqnarray} where $x_{in}$ is the X-quadrature of mode $\vert B\rangle$ of the source, $\eta_0$ is the transmittance of the optical attenuator, $x_{v1}$ and $x_{v3}$ represent vacuum noises introduced by beam splitter one and the optical attenuator, respectively. 
Similarly, Alice's measurement result of the X-quadrature is given by \begin{equation} \begin{split} &x_2=\dfrac{\sqrt{\eta_{ax}}}{2}(ax_{in}+bx'_{in})+\dfrac{\sqrt{\eta_{ax}}}{2}(ax_{v1}+bx'_{v1})\\ &+\sqrt{\frac{\eta_{ax}}{2}}x_{v2}-\sqrt{1-\eta_{ax}}x_{va}+E_{ax}, \end{split} \end{equation} where $\eta_{ax}$ and $E_{ax}$ are the efficiency and noise of Alice's homodyne detector for X-quadrature measurement; $x'_{in}$ is the X-quadrature of mode $\vert B'\rangle$ of the source; $x'_{v1}$ is the vacuum noise in mode $\vert B'\rangle$ from beam splitter one; $x_{v2}$ and $x_{va}$ are vacuum noises associated with beam splitter two and $\eta_{ax}$. Given $x_2$, Alice's optimal estimation of $x_1$ is \begin{eqnarray}\label{eqa5} x_{opt}=\alpha_A x_2, \end{eqnarray} where $\alpha_A=\langle x_1 x_2 \rangle/\langle x_2^2 \rangle$ \cite{Poizat94, Grangier98}. Using Eqs. (3) and (4), we can determine $\alpha_A$ as \begin{eqnarray}\label{eqa6} \alpha_A=\dfrac{n_0 a \sqrt{2\eta_0 \eta_{ax}}}{n_0\eta_{ax}+2\upsilon_{ax}+2}, \end{eqnarray} where $n_0$ is the average photon number per mode from the source and $\upsilon_{ax}=\langle E_{ax}^2 \rangle$ is the detector noise variance. The relation $\langle x_{in}^2 \rangle=\langle (x'_{in})^2 \rangle=2n_0+1$, which is a good approximation for the broadband light source used in our experiment, has been used in the above derivation. Using Eqs. (3) to (6), Alice's uncertainty on $x_1$ given her measurement result of $x_2$ can be determined as \begin{equation} \begin{split} &V_{x1|x2}=\langle (x_1-\alpha_A x_2)^2 \rangle=\\ &\dfrac{2V_A \eta_0 (\upsilon_{ax}+1)+V_A^2 \eta_{ax}(1-a^2)}{V_A \eta_{ax}+2\eta_0(\upsilon_{ax}+1)}+1, \end{split} \end{equation} where $V_A=\eta_0 n_0$ is the equivalent modulation variance of the outgoing mode. The excess noise due to the passive state preparation scheme is $\varepsilon_A=V_{x1|x2}-1$, which can be determined from Eq. 
(7) as \begin{eqnarray}\label{eq8} \varepsilon_A=\dfrac{2V_A \eta_0 (\upsilon_{ax}+1)+V_A^2 \eta_{ax}(1-a^2)}{V_A \eta_{ax}+2\eta_0(\upsilon_{ax}+1)}. \end{eqnarray} Once the QKD excess noise has been appropriately quantified, we can apply the standard security proof for GMCS QKD to calculate the secure key rate. In the next section, we will experimentally determine the mode overlap coefficient $a$ and other system parameters to evaluate the performance of our system. \section{Experimental results} \label{sec:4} In the experiment, both the broadband light source and the local oscillator are operated in cw mode. A 2 GHz bandwidth real time oscilloscope is employed to sample the outputs of Alice and Bob's detectors. The modes sampled by Alice and Bob are determined by lengths of optical paths and also the electrical frequency responses of the optical homodyne detectors. Due to experimental imperfections, the modes measured by Alice and Bob are not perfectly overlapped, i.e., $a<1$. To determine $a$, we calculate the correlation between Alice and Bob's measurement results at different output photon numbers. Alice's X-quadrature measurement result, $x_2$, is given by Eq. (4). Similarly, when no optical attenuation is applied ($\eta_0=1$, $T=1$, see Fig. 3), Bob's X-quadrature measurement result, $x_3$, is given by \begin{equation} \begin{split} &x_3=\dfrac{\sqrt{\eta_{bx}}}{2}x_{in}-\dfrac{\sqrt{\eta_{bx}}}{2}x_{v1}\\ &+\sqrt{\frac{\eta_{bx}}{2}}x_{v5}-\sqrt{1-\eta_{bx}}x_{vb}+E_{bx}, \end{split} \end{equation} where $\eta_{bx}$ and $E_{bx}$ are the efficiency and noise of Bob's homodyne detector for X-quadrature measurement; $x_{v5}$ and $x_{vb}$ are vacuum noises associated with beam splitter 3 and $\eta_{bx}$ (see Fig. 3). Using Eqs.
(4) and (9), it is easy to show \begin{eqnarray}\label{eq10} \langle x_2^2 \rangle=\frac{\eta_{ax}}{2}n_0+\upsilon_{ax}+1, \end{eqnarray} \begin{eqnarray}\label{eq11} \langle x_3^2 \rangle=\frac{\eta_{bx}}{2}n_0+\upsilon_{bx}+1, \end{eqnarray} \begin{eqnarray}\label{eq12} \langle x_2 x_3 \rangle=\frac{\sqrt{\eta_{ax}\eta_{bx}}}{2}n_0a. \end{eqnarray} From Eqs. (10)-(12), the correlation coefficient between $x_2$ and $x_3$ is given by \begin{equation} \begin{split} &Corr=\frac{\langle x_2 x_3 \rangle}{\sqrt{\langle x_2^2 \rangle \langle x_3^2 \rangle }}=\\ &\frac{n_0\sqrt{\eta_{ax}\eta_{bx}}}{\sqrt{(\eta_{ax}n_0+2\upsilon_{ax}+2)(\eta_{bx}n_0+2\upsilon_{bx}+2)}}a. \end{split} \end{equation} Eq. (13) shows that as $n_0$ approaches infinity, the correlation coefficient approaches the mode overlap coefficient $a$. Thus, we can determine $a$ experimentally by measuring the correlation between Alice and Bob's measurement results at high output photon number. \begin{figure}[t] \includegraphics[width=.45\textwidth]{Fig4.pdf} \captionsetup{justification=raggedright, singlelinecheck=false } \caption{Raw data of Alice and Bob's X-quadrature measurement results ($n_0$=880 and no optical attenuation applied).} \label{fig:4} \end{figure} \begin{figure}[t] \includegraphics[width=.45\textwidth]{Fig5.pdf} \captionsetup{justification=raggedright, singlelinecheck=false } \caption{The correlation coefficient of Alice and Bob's X-quadrature measurement results at different output photon numbers of the light source. Blue dots represent experimental results (error bars represent one standard deviation). The red line is a fitting curve using Eq. (13) with $a$ = 0.96.} \label{fig:5} \end{figure} We adjust $n_0$ by changing the pump power of the fiber amplifier and calculate the correlation coefficient from experimental data. In this experiment, no optical attenuation is applied ($\eta_{tot}=\eta_0 T=1$). The local oscillator power is 2 mW.
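The saturation of $Corr$ at the mode overlap coefficient $a$ can be made concrete with a few lines of code. The sketch below simply evaluates Eq. (13); the detector parameters are taken from the measurements reported in the text, but the script itself is illustrative rather than part of the analysis pipeline.

```python
import math

def corr_coeff(n0, a, eta_ax, eta_bx, v_ax, v_bx):
    """Correlation coefficient between Alice's and Bob's X-quadrature
    data, Eq. (13)."""
    num = n0 * math.sqrt(eta_ax * eta_bx) * a
    den = math.sqrt((eta_ax * n0 + 2 * v_ax + 2) * (eta_bx * n0 + 2 * v_bx + 2))
    return num / den

# Detector parameters as reported in the text; a = 0.96 from the fit
pars = dict(a=0.96, eta_ax=0.43, eta_bx=0.54, v_ax=0.17, v_bx=0.24)
for n0 in (10, 100, 1000, 1e6):
    print(n0, corr_coeff(n0, **pars))  # climbs toward a = 0.96 as n0 grows
```

At low photon numbers the detector noise terms $2\upsilon+2$ dominate the denominator and wash out the correlation, which is why the fit of Fig. 5 is anchored by the high-$n_0$ points.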
The detection efficiency and noise variance of Alice and Bob's conjugate homodyne detectors are $\eta_{ax}=0.43\pm0.01$, $\eta_{ap}=0.38\pm0.01$, $\eta_{bx}=0.54\pm0.01$, $\eta_{bp}=0.51\pm0.01$, $\upsilon_{ax}=0.17\pm0.01$, $\upsilon_{ap}=0.19\pm0.01$, $\upsilon_{bx}=0.24\pm0.01$, and $\upsilon_{bp}=0.23\pm0.01$. At each photon level, we collect 500,000 data samples, which are further divided into ten data sets of 50,000 samples each. The correlation coefficient is calculated for each data set. Figure 4 shows the raw data of Alice and Bob's X-quadrature measurement results when the average photon number per mode from the source is $n_0=880$. Figure 5 shows the correlation coefficient of Alice and Bob's X-quadrature measurement results at different $n_0$ (error bars represent one standard deviation). The red line is a curve fit using Eq. (13) with $a$ = 0.96 and the detector parameters given above. The theory and experimental results match well. To further justify the above noise model, we conduct experiments using a constant $n_0$ of 900 and different optical attenuations $\eta_{tot}$. This experiment can also be modeled using Eq. (13), with $\eta_{bx}$ replaced by $\eta_{tot} \eta_{bx}$. Note that when the optical loss is very high, the correlation between Alice and Bob's data is barely visible in the raw data, as evidenced by the case of $\eta_{tot}=-42.2$ dB shown in Fig. 6. Nevertheless, by calculating the correlation coefficient from the raw data and comparing it with the theoretical prediction, the proposed noise model is well justified, as shown in Fig. 7.
\begin{figure}[t] \includegraphics[width=.45\textwidth]{Fig6.pdf} \captionsetup{justification=raggedright, singlelinecheck=false } \caption{Raw data of Alice and Bob's X-quadrature measurement results ($n_0=900$ and $\eta_{tot}=-42.2$ dB).} \label{fig:6} \end{figure} \begin{figure}[t] \includegraphics[width=.45\textwidth]{Fig7.pdf} \captionsetup{justification=raggedright, singlelinecheck=false } \caption{The correlation coefficient of Alice and Bob's X-quadrature measurement results as a function of optical attenuation $\eta_{tot}=\eta_0 T$. Blue dots represent experimental results (error bars represent one standard deviation). The red line is a fit curve obtained from Eq. (13) by replacing $\eta_{bx}$ with $\eta_{tot} \eta_{bx}$.} \label{fig:7} \end{figure} As we have shown in \cite{Qi18}, the well-established security proof of GMCS QKD can be applied to passive CV-QKD, as long as the excess noise of Alice is properly quantified using Eq. (8). We calculate the secure key rate as a function of the channel length using experimentally determined system parameters. Here, we assume the quantum channel is a single mode fiber with an attenuation coefficient of $\gamma=0.2$ dB/km. As shown in Fig. 8, a practical distance above 80 km could be achieved. The QKD distance could be further extended by improving the mode overlap factor $a$. Details of the secure key formulas are summarized in Appendix B. To compare the simulated secure key rates with the experimental results shown in Fig. 7, we use Eq. (A11) in Appendix A to determine the mutual information between Alice and Bob from the experimentally determined correlation coefficient as $I_{AB}=\log_2\left( \dfrac{1}{1-Corr^2}\right)$. Two data points shown in Fig. 7, corresponding to $\eta_{tot}=-32.1$ dB and $-42.2$ dB, are used in the calculation.
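The conversion from a measured correlation coefficient to mutual information is a one-liner, and the loss-scaled version of Eq. (13) used for Fig. 7 is equally simple. The sketch below puts the two together; the attenuation values are taken from the text, but the resulting $Corr$ and $I_{AB}$ numbers are model predictions for illustration only (the measured values are read off Fig. 7 rather than quoted in the text).

```python
import math

def mutual_info_bits(corr):
    """I_AB = log2(1 / (1 - Corr^2)) for jointly Gaussian data (Eq. (A11))."""
    return math.log2(1.0 / (1.0 - corr ** 2))

def corr_with_loss(n0, a, eta_ax, eta_bx, v_ax, v_bx, eta_tot):
    """Eq. (13) with eta_bx -> eta_tot * eta_bx to model the combined
    attenuator/channel loss."""
    num = n0 * math.sqrt(eta_ax * eta_tot * eta_bx) * a
    den = math.sqrt((eta_ax * n0 + 2 * v_ax + 2)
                    * (eta_tot * eta_bx * n0 + 2 * v_bx + 2))
    return num / den

pars = dict(a=0.96, eta_ax=0.43, eta_bx=0.54, v_ax=0.17, v_bx=0.24)
for loss_db in (0, 10, 32.1, 42.2):
    c = corr_with_loss(900, eta_tot=10 ** (-loss_db / 10), **pars)
    print(loss_db, c, mutual_info_bits(c))
```

The steep drop of $I_{AB}$ with loss is what ultimately limits the key rate in Fig. 8, since Eve's information does not fall off as quickly.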
Since only one attenuator is applied in the experiment, we artificially divide the total attenuation $\eta_{tot}$ into Alice's attenuation $\eta_0$ and channel transmittance $T$ as $\lbrace \eta_0=0.0009, T=0.69\rbrace$ (for $\eta_{tot}=-32.1$ dB) and $\lbrace \eta_0=0.0004, T=0.15\rbrace$ (for $\eta_{tot}=-42.2$ dB). The calculated secure key rates are shown in Fig. 8 as two points, with error bars determined by one standard deviation of the correlation coefficient. \begin{figure}[t] \includegraphics[width=.45\textwidth]{Fig8.pdf} \captionsetup{justification=raggedright, singlelinecheck=false } \caption{The secure key rate as a function of channel loss. Simulation parameters: $a$ = 0.96, $n_0$=900, $\eta_{ax}=0.43$, $\eta_{ap}=0.38$, $\eta_{bx}=0.54$, $\eta_{bp}=0.51$, $\upsilon_{ax}=0.17$, $\upsilon_{ap}=0.19$, $\upsilon_{bx}=0.24$, $\upsilon_{bp}=0.23$. The efficiency of the reconciliation algorithm is $f=0.95$ (see details in Appendix B). The two data points are secure key rates calculated using the experimental results shown in Fig. 7 (error bars are determined by one standard deviation of the correlation coefficient).} \label{fig:8} \end{figure} \section{Summary} \label{sec:5} One common question on the passive CV-QKD scheme is whether we can trust the randomness from a thermal source. It is a common practice to apply a quantum random number generator \cite{Ma16, Herrero17} in prepare-and-measure QKD for state preparation and/or measurement basis selection. As we have discussed in \cite{Qi17}, while quantum randomness is ultimately connected to quantum superposition states, in the fully trusted device scenario, the quantum state received by the detector does not need to be a pure state. One illustrative example is the first quantum random number generator, where electrons from a radioactive source such as $^{90}\textup{Sr}$ are detected by a Geiger-Mueller tube at random times \cite{Isida56, Schmidt70}.
In this process, while the whole system (the radioactive nuclei and electrons) is in a pure state, the state received by the detector is a mixture of 0-electron and 1-electron emission states. True randomness can be generated as long as Eve cannot access (or control) the radioactive source. Similar arguments can also be applied to randomness generated from spontaneous emission using a thermal source. The security of QKD is only as good as its underlying assumptions \cite{Qi072}. In this paper, we have adopted a commonly used assumption in QKD that the QKD systems employed by Alice and Bob are fully trusted and cannot be accessed by Eve. In practice, no real-life QKD system is perfect. It is thus important to scrutinize all of the implementation details to identify potential side channels and develop the corresponding countermeasures. The investigation of loopholes and countermeasures in practical QKD systems plays a complementary role to security proofs. In summary, we conduct experimental studies on the recently proposed passive CV-QKD protocol \cite{Qi18}, which is appealing for chip-scale implementation. When implemented with a practical multi-mode thermal source, one important issue is how to determine the excess noise contributed by photons in the unwanted modes. In this paper, we develop a noise model based on a practical setup, and conduct experiments to verify the above model using a commercial off-the-shelf amplified spontaneous emission source. Our results suggest that passive CV-QKD could be a cost-effective solution for metro-area QKD. This work was performed at Oak Ridge National Laboratory (ORNL), operated by UT-Battelle for the U.S. Department of Energy (DOE) under Contract No. DE-AC05-00OR22725. The authors acknowledge support from the DOE Office of Cybersecurity, Energy Security, and Emergency Response (CESER) through the Cybersecurity for Energy Delivery Systems (CEDS) program.
\section{Introduction} Observations of radio-loud quasars are important for investigating some of the most interesting and fundamental problems of contemporary astrophysics. The foremost of these is the investigation of causes of the ``extinction'' of luminous quasars. The space density of luminous quasars has declined by about 3 orders of magnitude from the epoch z=2--3 to the present (e.g., Hartwick \& Schade 1989). What processes led to such a dramatic decrease in co-moving space density? Are these processes related to their galactic and/or cluster scale environments? The answers to these questions will be important in furthering our understanding of galaxy evolution. Studies of the environments of quasars can also provide insight into the AGN phenomenon in general. Of contemporary interest is the relationship between quasars and radio galaxies. A variety of schemes have been proposed to link quasars and radio galaxies through differences in their environments, viewing angle, or evolutionary state (e.g., Norman \& Miley 1984; Barthel 1989; Neff \& Hutchings 1990). In particular, the \lq\lq viewing angle\rq\rq \ scheme of Barthel (in which radio-loud quasars and radio galaxies are drawn from the same parent population but viewed preferentially at small or large angles, respectively, to the radio axis) predicts that the luminosity and color of the quasar fuzz should be identical to those of the radio galaxies at similar radio powers and redshifts (as do some of the evolutionary unification schemes). Also, radio galaxies at high redshifts (z $\gtrsim$ 0.6) exhibit the so-called \lq\lq alignment effect\rq\rq \ (McCarthy et al. 1987; Chambers et al. 1987) namely that the radio, and the rest-frame UV and optical axes are all roughly co-linear. If indeed quasars and radio galaxies are objects differentiated only by viewing angle, then quasars might also be expected to exhibit such an alignment effect over the same redshift range as the radio galaxies.
Investigating the above problems with ground-based data has been hampered by the inability to separate marginally extended ``fuzz'' from the ``blinding'' light of the quasar nucleus. But despite these difficulties, limited information about the properties of the host galaxies of quasars spanning a range of redshifts and radio properties has been obtained (see e.g., Hutchings, Crampton, \& Campbell 1984; Smith et al. 1986; Stockton \& MacKenty 1987; Veron-Cetty \& Woltjer 1990; Heckman et al. 1991). The Hubble Space Telescope is well-suited for investigating quasar host galaxies. Because of its high spatial resolution, imaging with the HST allows us to remove the contribution of the quasar nucleus and investigate the properties of the host galaxies on scales of 0.1 to a few kpc, depending on the redshift of the source. This was one of the reasons that motivated us to undertake a ``snapshot survey'' of sources in the 3CR catalogue. The ``snapshot'' mode of observing, in which gaps in the primary HST schedule are filled in with short integrations of selected targets, greatly enhances the overall efficiency of the HST and is well-suited to observing large samples of objects. The results presented here should be compared and contrasted with recent HST results by other groups. Images of low/modest redshift quasars obtained with the WFPC-2 on HST show a wide variety of host galaxy morphologies including elliptical and spiral systems as well as highly disturbed galaxies which may be interacting with and/or accreting close companions (Bahcall et al. 1995a,b; Disney et al. 1995; Hutchings \& Morris 1995; Hooper et al. 1997).
The 3CR data set allows the properties of matched samples of radio-loud quasars and radio galaxies to be compared and should ultimately allow us to investigate the properties of ``fuzz'' over a wide range of redshifts (0.3 $\lesssim$ z $\lesssim$ 2), radio luminosities (P$_{178 MHz}$ $\sim$ 10$^{27}$ to 10$^{29.3}$ W Hz$^{-1}$) and radio types (e.g., lobe-dominated to core-dominated). Since low-frequency radio emission from radio-loud AGN is thought to be emitted isotropically, the low-frequency selection of the 3CR (178 MHz) implies that it is relatively unbiased by anisotropic emission (whether it is due to relativistic beaming, emission from an optically thick accretion disk, or due to an obscuring torus), thus the 3CR is particularly well suited for investigating the relationship between radio galaxies and quasars. Here we present and describe the image analysis and data reduction for 43 quasars from this ``3CR snapshot survey''. Other papers in this series have described the properties of the radio galaxies (de Koff et al. 1996; Baum et al. 1999; McCarthy et al. 1997) and future work will compare matched samples of radio galaxies and quasars over a wide range of redshifts. \section{Sample Selection} To select objects for the snapshot survey, we used the revised 3CR sample as defined by Bennett (1962a,b), having selection constraints of (i) flux density at 178 MHz, S(178) $>$ 9 Jy, (ii) declination $>$ 5$^\circ$, and (iii) galactic latitude, $|b| >$ 10$^\circ$. All of the sources have been optically identified and have measured redshifts (Spinrad et al. 1985; Djorgovski et al. 1988). In this paper, we report on the properties of the quasars observed during the snapshot survey of a total of 267 3CR sources. The classification of these sources as quasars is based on the compilations of Spinrad et al. (1985) and Djorgovski et al. (1988). Of the total of 53 quasars listed in Spinrad et al.
(1985), we have observed 41 (3C 179 and 3C 279, which we have observed, were not listed in the Spinrad et al. quasar identifications). Inevitable scheduling constraints and conflicts with other observing programs account for the 12 3CR quasars that were not imaged as a part of this program. Table 1 gives a summary of the observations. \section{Characteristics of the Observed Sample} The characteristics of the observed sample of quasars are provided in Table 2 and summarized graphically in Figures 1 through 3. The redshift distribution of the observed sample is fairly flat and ranges from 0.3 to about 2.1 (Table 2 and Fig. 1) and is broadly similar to that of the entire sample of 3CR quasars; the major difference is that a few of the lowest redshift quasars are excluded. Adopting a cosmology with a Hubble constant of H$_0$ = 75 km s$^{-1}$ Mpc$^{-1}$ and a deceleration parameter of q$_0$=0.5 and using the 178 MHz radio fluxes of the sample as listed in Spinrad et al. (1985), the radio (178 MHz) power distribution of the observed sample lies in the range log P$_{178 MHz}$ (W Hz$^{-1}$) = 27 -- 29.3. The distribution has a strong peak at log P$_{178 MHz}$ $\approx$ 28.5 W Hz$^{-1}$ (Figure 2 and Table 2). Hence the observed quasars are amongst the most powerful known radio sources. Within the sample, the radio emission from these sources also exhibits a wide variety of physical sizes. Again adopting the cosmology of H$_0$ = 75 km s$^{-1}$ Mpc$^{-1}$ and q$_0$=0.5 and measuring the largest angular sizes of the radio sources using radio maps available in the literature (see Table 2 and \S 7), we find that these sources range from compact, galaxy-sized radio sources (largest projected linear sizes $\lesssim$10 kpc) to cluster-scale sources with sizes of a few hundred kpc (Figure 2).
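For the cosmology adopted here (H$_0$ = 75 km s$^{-1}$ Mpc$^{-1}$, q$_0$ = 0.5), the luminosity distance has the closed form $D_L = (2c/H_0)(1+z)(1-1/\sqrt{1+z})$, so the quoted 178 MHz powers can be reproduced directly from the catalogued flux densities. The sketch below is illustrative only; the spectral index $\alpha = 0.8$ used in the K-correction is an assumption, not a value taken from this paper.

```python
import math

C_KM_S = 2.998e5   # speed of light, km/s
MPC_M = 3.086e22   # metres per megaparsec
JY = 1e-26         # W m^-2 Hz^-1 per jansky

def lum_distance_mpc(z, h0=75.0):
    """Luminosity distance for a matter-dominated q0 = 0.5 universe."""
    return 2 * C_KM_S / h0 * (1 + z) * (1 - 1 / math.sqrt(1 + z))

def log_power_178(s_jy, z, alpha=0.8, h0=75.0):
    """log10 of the rest-frame 178 MHz power in W/Hz, assuming S_nu ~ nu^-alpha."""
    dl = lum_distance_mpc(z, h0) * MPC_M
    p = 4 * math.pi * dl ** 2 * s_jy * JY * (1 + z) ** (alpha - 1)
    return math.log10(p)

# A 10 Jy source at z = 1 falls in the quoted log P = 27--29.3 range
print(log_power_178(10.0, 1.0))
```

Plugging in the 9 Jy flux limit across the 0.3--2.1 redshift span reproduces roughly the quoted span of the power distribution.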
\section{Observations and Pipeline Reduction} The observations were taken throughout HST Cycle 4 (early 1994 to mid 1995) and the integration times ranged from 2 to 10 minutes, with 5 or 10 minutes being typical (see Table 1). All the quasars were observed close to the center of the Planetary Camera (PC) and imaged through the F702W filter, whose bandpass corresponds approximately to that of the Cousins R filter. The camera/filter combination was chosen to give the maximum spatial and photometric sensitivity. The large width of the F702W filter bandpass means that every quasar image has some contribution to its brightness and morphology due to one or two prominent emission lines. From the shape of the system bandpass for the F702W filter available in the HST WFPC2 Instrument Handbook (1995), the prominent lines that contribute to the emission within the images and the redshift range over which they contribute are: [OIII]$\lambda$5007 (0.19 $<$ z $<$ 0.64), H$\beta$ (0.22 $<$ z $<$ 0.67), [OII]$\lambda$3727 (0.60 $<$ z $<$ 1.20), MgII$\lambda$2798 (1.13 $<$ z $<$ 1.93). Only 3C 9 has a redshift (z=2.012) that avoids having prominent emission lines within the bandpass of the F702W filter. The images were reduced using the standard pipeline (see the HST Data Handbook, Version 2, 1995). The standard pipeline includes bias subtraction, dark count correction, flat-field correction, and a determination of the absolute sensitivity. For those objects that had two or more individual exposures (see Table 1), the separate images were combined using the STSDAS task CRREJ. This task constructs an average of the input frames and iteratively removes highly deviant pixels from the average. For those quasars with only one exposure, we used the IRAF task ``cosmicrays'' to remove the effects of cosmic rays. This task detects pixels that have significantly different value than the surrounding pixels and replaces the deviant value with the average of the surrounding pixels.
Weak cosmic rays that were missed using this technique were subsequently removed by fitting the background to the area immediately surrounding the suspected cosmic ray hit and using this new value as a substitute for the old value of the counts in the affected pixel. The data were flux-calibrated using the inverse sensitivity for the F702W filter of 1.834 $\times$ 10$^{-18}$ ergs s$^{-1}$ cm$^{-2}$ \AA$^{-1}$ dn$^{-1}$ and a zero point of 22.469 (Whitmore 1995). This puts the magnitudes on the ``Vega system''. To convert to the STMAG system, which assumes a flat spectral energy distribution and has a zero point of 23.231, 0.762 magnitudes should be subtracted from the magnitudes given here. \section{Image Reduction} \subsection{PSFs and EEDs} One of the most important steps of the analysis is to quantify the amount of extended emission from these quasar images. We have attempted this using two methods. To do this we collected images of the standard stars used to calibrate the F702W filter that were near the dates of the observations. Unfortunately, there were only a few exposures (5) that were useful. These were then used in two ways. We constructed an empirical point-spread function (PSF) using these observations by adding up the individual exposures after they had been aligned to a common center. This empirical PSF was then compared with the model PSF constructed using the PSF modeling program, Tiny Tim. However, as we discuss below, the close agreement is limited to azimuthal averages -- Tiny Tim does not reproduce the detailed two-dimensional structure of the PSF. Next, we measured the encircled energy diagrams (EEDs), defined as the fraction of the flux from a point source interior to a radius r, as a function of r. We then intercompared all the EEDs (including the PSF generated by Tiny Tim) taken through a given filter to determine the reproducibility of the EED. 
There was very good agreement between the shape of the EED from the sum of the observations of standard stars and that of the Tiny Tim PSF but insufficient data were available to perform this comparison for individual stars/PSFs taken with the F702W filter. Fortunately, we also carried out this analysis for another of the WFPC2 filters, F555W, in the course of another HST program (Lehnert et al. 1999). For this intercomparison, we used observations of approximately 20 stars. This intercomparison implies that we can robustly detect fuzz that contributes more than about 5\% as much light as the quasar itself (within a radius of about 1.4 arcsec). This limit is consistent with the known temporal variations in the HST PSF due to effects like the gentle change in focus over time scales of months and shorter (orbital) time scale variations due to the so-called ``breathing'' of the telescope (see WFPC-2 Instrument Handbook). We have restricted ourselves to radii less than about 1.5 arc seconds due to the poorly understood large angle scattering which becomes important beyond a radius of about 2''. To quantify how much of the quasar light is extended, we compared the EED of each quasar to that of the PSF (both empirical and model, although it makes little difference which one is used). We accomplished this by scaling the PSF EED so that the total flux difference between the PSF EED and that of each quasar is zero inside a radius of 2 PC pixels ($\approx$0.09''). The EED analysis is conservative in that the actual underlying fuzz will certainly have a finite non-zero central surface brightness. Therefore, this analysis provides a lower limit to the amount of extended flux coming from each quasar. Moreover, the choice of using a radius of 2 pixels was not arbitrary. Experimenting with the effect of scaling by the central pixel led to a much higher variation in the structure of each stellar EED compared to the EED of the model PSF.
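The scaling step described above can be written down compactly. The sketch below uses made-up growth curves, not the authors' data: it scales the PSF growth curve to match the quasar's cumulative flux inside 2 pixels and quotes the excess inside 30 pixels. Because the toy host has a little light inside 2 pixels, the recovered fraction falls short of the true host fraction, illustrating why the method yields a lower limit:

```python
import numpy as np

def extended_fraction(q_cum, psf_cum, r_scale=2, r_max=30):
    """Scale the PSF growth curve so that it matches the quasar's
    cumulative flux inside r_scale pixels, then report the excess
    flux inside r_max as a fraction of the total quasar light there."""
    scale = q_cum[r_scale - 1] / psf_cum[r_scale - 1]
    return (q_cum[r_max - 1] - scale * psf_cum[r_max - 1]) / q_cum[r_max - 1]

# Toy cumulative-flux curves indexed by radius in pixels (index 0 = 1 px).
r = np.arange(1, 31)
psf_cum = 1.0 - np.exp(-r / 2.0)            # pure point source
host_cum = 0.3 * (1.0 - np.exp(-r / 15.0))  # host: little light in the core
q_cum = psf_cum + host_cum                  # quasar = nucleus + host

frac = extended_fraction(q_cum, psf_cum)
true_frac = host_cum[-1] / q_cum[-1]
# frac < true_frac: the EED-style estimate is a lower limit.
```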
This variation is due to our limited ability in determining the exact location of the peak of emission and also due to the fact that the central point source never falls exactly in the center of one pixel. Enlarging the area over which the quasar and PSF are scaled greatly reduced the variation in the structure seen in the PSF by the EED analysis. However, selecting a large area leads to very limited sensitivity to extended emission and thus a radius of 2 pixels was chosen as the most suitable compromise, maintaining the sensitivity to mildly extended fuzz while also minimizing systematic problems. In addition, we limited the EED analysis to a radius $\leq$30 PC pixels ($\approx$1.4''). This minimizes the problem of the large angle scattering which may not be well represented in the empirical or model PSF but may contribute to the extended emission in the quasar images. Therefore, in Table 3, we quote the amount of extended emission as a fraction of the total quasar emission within a radius of 30 PC pixels (1.4''). In Figure 4, we show all of the profiles from the EED analysis. The EED procedure assumes that the peak flux of the underlying galaxy is zero. Of course, we know that this assumption yields only a lower limit. To investigate how much flux we may miss in the course of such a procedure, we conducted the EED analysis for a small sample of 10 radio galaxies also observed as part of the snapshot survey. These galaxies were chosen to represent the full range of morphologies and (most importantly) central surface brightnesses seen in the data on radio galaxies (see de Koff et al. 1996; McCarthy et al. 1997). Although the images reveal non-thermal nuclei in a few objects, none are as ``PSF dominated'' as any of the quasars. Under the assumption that radio galaxies are similar to quasar hosts in morphology and central surface brightness and that the central surface brightnesses observed are due to the underlying galaxies and not the AGNs, we determined the EED fluxes for the radio galaxies.
These fluxes were smaller than the measured total fluxes by amounts ranging from about 8\% to 50\% (with about 30\% being the average) of the total extended light. We regard this as a reasonable estimate of how much the EED analysis underestimates the quasar host fluxes. \subsection{PSF Subtraction} There are obvious limitations to using the EED analysis described above. The foremost of these is that it does not provide information on the morphology of the host galaxy. Also, it in fact provides only a lower limit on the amount of extended emission since we have assumed that the contribution to the fuzz from the central 0.03 $\sq\arcsec$ is zero. To study the host galaxy morphology, it is necessary to first remove the contribution of the quasar nuclei from the images. Therefore, to provide morphological information about the host galaxies, we next attempted subtraction of a scaled PSF from each of the quasar images. We used two different PSFs, a model of the HST PSF constructed using Tiny Tim and an empirical PSF constructed by averaging several images of standard stars. We would have preferred to construct a PSF using exposures of open clusters or of outer regions of globular clusters but no such images were available from Cycle 4 observations taken through the F702W filter. Also only a limited number of exposures of standard stars were available and some of the stellar images were far (up to about 300 pixels) from the center of the PC. If there were two or more images of the same standard taken at the same position close in time, the individual exposures were averaged. Images of different standards were then summed after being aligned to the nearest pixel. The model and empirical PSFs were scaled such that their peaks were about 5 - 15\% of the highest valued pixel in the quasar image and iteratively subtracted until emission due to the diffraction of the secondary support became negligible or the flux in the central pixel of the quasar image was too small to be measured.
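The iterative scaled-PSF subtraction just described can be mimicked numerically. The sketch below is a crude, hypothetical stand-in for the by-eye diffraction-spike criterion of the paper; the toy frames, the scale grid, and the noise threshold are all invented for illustration:

```python
import numpy as np

def subtract_psf(quasar, psf, spike_mask, noise, scales=None):
    """Subtract an increasingly scaled PSF; stop when the mean residual in
    the diffraction-spike region drops into the noise (pushing further
    would drive the spikes negative, i.e. over-subtraction)."""
    if scales is None:
        scales = np.arange(0.0, 5.0, 0.05)
    for s in scales:
        resid = quasar - s * psf
        if resid[spike_mask].mean() < noise:
            return s, resid
    return scales[-1], quasar - scales[-1] * psf

# Toy frames: a PSF with a horizontal "diffraction spike" and a quasar
# consisting of twice that PSF sitting on a faint, flat host.
psf = np.zeros((8, 8))
psf[4, :] = 1.0
quasar = 2.0 * psf + 0.1
scale, resid = subtract_psf(quasar, psf, psf > 0, noise=0.12)
```

In this toy case the procedure recovers the nuclear scaling (2.0) while leaving the flat host untouched; on real frames the stopping point is the subjective part, which is why the paper brackets it between under- and over-subtraction.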
During this procedure we noted the subtraction level where residual diffraction spikes were just above the background noise level. Also, we continued the subtraction until a negative image of the diffraction spikes appeared above the level of the noise. This procedure allowed us to estimate the uncertainty in the fraction of extended emission by observing the values where the diffraction spikes in the residual image became negative due to over-subtraction of the PSF or were still present due to under-subtraction. The relatively subjective method described above allows us to parameterize the uncertainty in the PSF subtraction procedure as a function of the total brightness of the quasar. We found that for most of the quasars, the uncertainty in the flux of the host relative to that of the total (quasar plus host) was about $\pm$7\%. For about 20\% of the sample (8 quasars), the uncertainty was larger, about $\pm$15\% of the total quasar flux. We used these estimates to categorize the quasars into three groups according to the uncertainties in the fluxes of the remnant hosts. The three categories correspond to $\sim$$\pm$0.2, $\sim$$\pm$0.4, and $\sim$$\pm$0.7 magnitudes. After conducting the subtraction process with both the model PSF and the empirical PSF for about 10 of the quasar images, we concluded that the model PSF was inadequate for PSF subtraction. There is an asymmetry in the intensity of the diffraction spikes within the PC point-spread function. The diffraction spikes along the positive direction of the U3 axis (see HST WFPC2 Instrument Handbook 1995) and in a direction $-$45$^\circ$ relative to the U3 axis are more intense than those along the other two directions (the spike along the $+$U3 axis being the most intense). The model PSF does not characterize this asymmetry accurately.
On the other hand, while the empirical PSF characterized this asymmetry well, because of the limited number of standard star exposures available, the empirical PSF only accurately characterized the PSF over a limited radius. Hence the empirical PSF was good for representing and removing the PSF structure from the quasars that did not have highly saturated nuclei. For the quasar images that appeared to be only mildly saturated, a small residual was sometimes present along the brightest diffraction spike (the spike along the +U3 direction), $\approx$1'' from the nucleus and occupying a few pixels in diameter. This distinct residual was easily identifiable and removed using fits to the surrounding background. This correction needed to be applied to about 12 of the quasar images and those quasars are noted in Table 3. This residual removal was carried out mainly for cosmetic reasons. The residuals contained very little flux and since the residual was in every case well separated from the quasar host, it has only a small effect on the final host morphology. For four quasars (3C 205, 3C 279, 3C 351, and 3C 454.3) the nuclei were highly saturated and we do not present the results of the PSF subtraction due to their unreliability. Generally, since the exposure times for all the quasars were roughly the same, it was more difficult to determine reliable host morphologies and brightnesses through PSF subtraction for quasars with brighter nuclei. After PSF subtraction, the images were rotated so that north is at the top of the frame and east is to the left. Then each image was smoothed with a 4 $\times$ 4 pixel median filter to remove the effects of ``hot'' pixels and residual cosmic rays, to emphasize low surface brightness features, and to reduce the additional noise in the final image due to the noise in the image of the empirical PSF used for subtraction. Contour plots of the final images are displayed in Figures 5 and 6.
We note that the relatively ``clean'' appearance of the contour plots is due to the way in which they were constructed. PSF subtracting the images leads to an increase in the overall noise of the image near the quasar. After the image was smoothed by a 4 $\times$ 4 pixel median filter we then selected the lowest contour to be at about the 3$\sigma$ noise level in the region affected by the PSF subtraction, but well away from the host galaxy. Therefore, the minimum level is relatively high compared to the noise level of the entire displayed image. Picking such a relatively high minimum contour level has the benefit of only showing morphological features that have a high certainty of being real and not artifacts of the PSF subtraction. As noted above, any additional ``cleaning'' of the images was strictly limited to the removal of the residual along the +U3 direction approximately 1'' from the nucleus in the quasars as noted in Table 3. Even this procedure had only a marginal influence on the final displayed morphology. In addition, in the one case (3C 179.0) where we subtracted individually two images taken at different times, there was close morphological agreement between the two images of the host galaxy in spite of the rather dramatic change in the total magnitude of the nucleus. \subsection{On the Differences Between the EED Analysis and PSF Subtraction} As can be seen in Table 3, there are rather large differences between the amount of resolved fluxes estimated using the EED analysis and the PSF subtraction. These differences are not surprising. First and perhaps most importantly, the EED analysis is basically an integral process and thus it is sensitive to low signal-to-noise, smoothly distributed light, whereas the PSF subtraction analysis is inherently differential and highlights the very small scale features lost in the growth-curves. 
Second, the EED analysis will always underestimate the amount of resolved flux since it assumes that the fraction of host light in the unresolved core (central $\sim$0.1 $\sq\arcsec$) is negligible and then scales the contribution of ``fuzz'' under this assumption. Thus one cannot simply measure the amount of flux from the central $\sim$0.1 $\sq\arcsec$ to reconcile the estimates from the EED analysis with that from the PSF analysis. Moreover, it is known that as the focus of the HST changes, different parts of the PSF are affected in different ways (C. Burrows and M. McMaster, private communication). Therefore, using the diffraction spikes to gauge when to stop subtracting scaled PSFs from the image may not give the proper subtraction of the flux from the nuclear region. This was clearly evident for some of the quasars where the flux would almost reach zero near the nucleus as emission from the diffraction spikes disappeared into the noise. Since there is some disagreement in the amount of resolved flux between the PSF subtraction and the EED analysis, in Figure 6, we provide images of the quasars that are not classified as extended by the EED analysis, but apparently have some extended flux in the PSF subtraction analysis. There are 8 such objects. \subsection{On the Consistency of Our Results with Other Investigations} Since it is difficult to robustly validate our procedure for determining to what extent the quasar images are extended, it is important to compare our results with those obtained by other investigators. Of course, this is challenging given the variety of individual circumstances (HST versus ground-based data, use of adaptive optics, different filters, optical versus IR images, etc.) by which quasar hosts have been observed.
Limiting ourselves to comparisons with other HST programs to image quasar hosts in the optical, we can say that we find broad consistency between the results presented here and those of other programs investigating the hosts of radio loud quasars. For example, Boyce, Disney, \& Bleaken (1999) and Boyce et al. (1998), for small samples of low-z radio-loud and radio-quiet quasars, found extended to total flux ratios of roughly ten to a few tens of percent for the radio-loud quasars in their studies which imply host magnitudes consistent with our results. At moderate redshifts, 0.4 $<$ z $<$ 0.5, in a study of radio-loud and radio-quiet quasar hosts, Hooper, Impey, \& Foltz (1997) again found extended to total flux ratios of roughly ten to a few tens of percent for their radio-loud subsample which also imply host magnitudes consistent with our results. At the high redshifts (z$>$1), in two small samples of radio-loud quasars, both Ridgway \& Stockton (1997; which also included radio galaxies) and Lehnert et al. (1999) find host galaxy magnitudes similar to those we find here for similarly high redshift quasars. Given the broad agreement between the results of the study presented here and those of other studies of radio-loud quasars using the WFPC2, we are confident that our analysis is robust. However, this statement needs to be made more quantitative and we plan to make detailed comparisons between our results and those of other studies in subsequent papers. \section{The ``Alignment Effect'' for 3CR Quasars} The ``Alignment Effect'', in which the axes of the optical and radio emission roughly align, is a well known effect observed in high redshift radio galaxies (Chambers et al. 1987; McCarthy et al. 1987). An interesting test of various schemes attempting to relate the properties of radio galaxies and quasars is to determine whether or not a sample of quasars also exhibits a similar alignment.
To this end, we measured the position angles of the radio images from the literature as given in Table 2. These position angles were determined from the core of the radio emission along the position angle of the jet. If a jet was not obvious in the radio image used, we then measured the position angle of the highest surface brightness ``hot spot'' relative to the core. For the HST images, we measured the position angle by fitting a series of ellipses as a function of surface brightness using the STSDAS program ``ellipse'' on the PSF subtracted, rotated and median smoothed quasar images. The position angle of the optical emission was taken to be the position angle at a surface brightness of 21.5 m$_{F702W}$ arcsec$^{-2}$ for all of the quasars. This value was chosen because it was bright enough that the ellipses fitted to the data gave believable results with small uncertainties for all of the quasar images (i.e., the uncertainty in the ellipticity was $<$0.07 and the uncertainty in the PA was $<$20$^\circ$). We present the measured radio and optical position angles in Tables 2 and 3 respectively and a histogram of the difference in the radio and optical position angles in Figure 7. Figure 7 shows a tendency for the difference between the position angles of the principal axes of the radio and optical emission to be less than 20$^\circ$. We present this result for two reasons. One of course is because this is an important result for schemes that attempt to unify quasars and radio galaxies based on viewing angle, evolution, or environment (e.g., Norman \& Miley 1984; Barthel 1989; Neff \& Hutchings 1990). Our main reason for presenting this result is that it lends support to our methodology for determining if the quasar image is resolved and, if so, for determining the morphology of the underlying host galaxy. Of course it is important not to overstate this proposition.
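Since a position angle is defined only modulo 180$^\circ$, the radio--optical differences entering a histogram like Figure 7 must be folded into the range 0$^\circ$--90$^\circ$. A small helper for this folding (illustrative, not taken from the paper):

```python
def pa_difference(pa1, pa2):
    """Acute difference between two position angles in degrees.
    Position angles are defined modulo 180 deg, so the result
    always lies in [0, 90]."""
    d = abs(pa1 - pa2) % 180.0
    return min(d, 180.0 - d)
```

For example, position angles of 350$^\circ$ and 10$^\circ$ differ by 20$^\circ$, and $-$5$^\circ$ versus 170$^\circ$ differ by only 5$^\circ$.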
When comparing the ``alignment'' properties of radio galaxies and quasars one obviously needs to worry about projections (especially for the quasars), since the UV and radio are not likely to be perfectly aligned in three dimensions. In spite of this caveat, we would not expect to see any correspondence between the radio and optical if the morphology revealed through PSF subtraction happened by chance. However, we also note that there is a possible weakening of the alignment with redshift, and this may be an indication of the difficulty in resolving high redshift hosts with short exposures through the WFPC2 (hence we urge some due caution in over-interpreting our results). A more robust analysis of the importance of this result and a comparison of the strength of the ``alignment effect'' in quasars and radio galaxies will be presented in a subsequent paper. \section{Individual Source Descriptions} In this section, we shall describe the morphologies of the individual sources focusing on the following questions. Are there any artifacts in the images related to the PSF subtraction? What is the morphology of the extended emission (isophotal size, ellipticity, orientation)? Are there signs of interactions with nearby companions? How does the radio structure relate to the optical morphology of the host as seen in the HST data? We shall also discuss the positional offsets between the quasar nucleus and any nearby (in projection) companions seen in the contour plots and the magnitudes of these companions. \noindent {\bf 3C 9, z=2.012} 3C 9 has the highest redshift of the quasars in our sample. A short (10 minute) ground-based U-band exposure of 3C 9 in Heckman et al. (1991) did not reveal any extended structure. Our HST observations do not detect extended flux. \noindent {\bf 3C 14, z=0.469} The image of this source is approximately round. At the faintest isophotes it is preferentially extended along PA$\approx$120$^\circ$.
At higher isophotal levels, the orientation of the image is close to PA$\approx$180$^\circ$. There is a diffuse galaxy about 0.7'' south and 2.6'' west of the nucleus with a total magnitude of about m$_{F702W}$ $\approx$ 23.2. The radio source has a triple morphology and is extended along PA$\approx$$-$5$^\circ$ and 170$^\circ$ (Akujor et al. 1994) --- very similar to the PA of the highest surface brightness extended optical emission. \noindent {\bf 3C 43, z=1.47} 3C 43 is optically one of the most compact objects in our sample. The PSF subtracted image has a faint extension along PAs$\approx$ 30$^\circ$ and 180$^\circ$. Emission is extended to only about 1'' down to a surface brightness of 22.9 m$_{F702W}$ arcsec$^{-2}$. 3C 43 has a compact (LAS$\sim$2.6'') and complex radio structure (Sanghera et al. 1995; Akujor et al. 1991). This complex morphology makes a direct comparison between the optical and radio emission difficult, but it is interesting to note that the ``{\bf U}-like'' structure seen in the faintest isophote to the north of the nucleus is roughly mimicked in the radio map of Sanghera et al. (1995). The most northern component of the complex radio structure has been identified as the nucleus by Spencer et al. (1991). If this is the case then the extended optical emission to the south of the optical nucleus is roughly aligned with the curved jet seen in the radio images of 3C 43 (e.g., Sanghera et al. 1995). There is a nearby (in projection) companion galaxy about 3'' north and 0.2'' east of the quasar nucleus with a total magnitude, m$_{F702W}$ $\approx$ 23.5. There is another galaxy just visible on the edge of the contour plot shown in Figure 5, which is almost certainly a foreground galaxy. \noindent {\bf 3C 47, z=0.425} 3C 47 shows signs of interaction with a small galaxy approximately 1.7 arc seconds to the northeast of the nucleus. There is a second galaxy along this same direction approximately 3.5 arc seconds from the nucleus. 
These galaxies have total magnitudes of 21.6 m$_{F702W}$ and 21.7 m$_{F702W}$ respectively. The elongated, off-center (i.e., not centered on the nucleus) isophotes strongly suggest that the host galaxy is interacting with one or both of the nearby (perhaps only in projection) galaxies. The 5 GHz radio map of Bridle et al. (1994) shows a core, jet, and two-lobe morphology. The jet is at PA$\approx$210$^\circ$, which corresponds closely with a linear feature seen in the HST image presented in Figure 5. \noindent {\bf 3C 68.1, z=1.238} The HST data are consistent with a point source and thus we do not detect any extended flux around 3C 68.1. \noindent {\bf 3C 93, z=0.357} 3C 93 possesses a host with a large angular size --- approximately 3'' in diameter (at a surface brightness of 22.9 m$_{F702W}$ arcsec$^{-2}$). The isophotes are approximately round, with the brighter isophotes being extended along the north-south direction and the faintest isophotes oriented along PA$\approx$40$^\circ$. The 1.5 and 8.4 GHz radio maps of Bogers et al. (1994) show a ``core + double lobe'' radio source with a relatively large angular size ($\sim$ 41''). The radio source is oriented along PA$\approx$45$^\circ$ and thus roughly coincident with the principal axis of the outer isophotes. The morphology of the host galaxy agrees well with the R-band image of 3C 93 presented in Hes, Barthel, and Fosbury (1996). \noindent {\bf 3C 138, z=0.759} 3C 138 is a flattened system oriented preferentially along PA$\approx$130$^\circ$. The inner isophotes become irregular with extensions along PA$\approx$ 70$^\circ$ and along PA$\approx$290$^\circ$. 3C 138 is also a so-called compact steep spectrum (CSS) radio source. The high resolution radio map of Rendong et al. (1991) shows a compact source whose linear and triplet structure is extended on scales of a few tenths of an arc second along PA$\approx$70$^\circ$.
This axis of emission is similar to the extension of the isophotes on scales of a few tenths of an arc second seen in the optical image. \noindent {\bf 3C 147, z=0.545} The isophotes of 3C 147 are fairly flat in the high surface brightness regions and become rounder at fainter isophotal levels. The main axis of the optical emission is at PA$\approx$55$^\circ$. The galaxy is about 2'' across down to about 22 m$_{F702W}$ arcsec$^{-2}$. 3C 147 is also a well known compact steep spectrum radio source whose size is 0.5'' and has a jet-like structure pointing out from the nucleus at PA$\approx$240$^\circ$ (van Breugel et al. 1992). Moreover, there is a blob of emission 0.4'' from the nucleus at PA$\approx$25$^\circ$. In the HST image, we also see two blobs of emission about 1.4'' to the south of 3C 147. The total magnitude of these two blobs is about 21.7 m$_{F702W}$. There is also another bright galaxy visible 3.3'' east of the nucleus. The total magnitude of this galaxy is 19.4 m$_{F702W}$. \noindent {\bf 3C 154, z=0.580} The HST data are consistent with a point source in the EED analysis. However, there is evidence for extended emission from the PSF subtraction. In Figure 6, we show the morphology of this extended emission. The F702W image of 3C 154 is extended by about 1.5'' down to a surface brightness of 22 m$_{F702W}$ arcsec$^{-2}$. Radio maps of 3C 154 show a classical core, double lobe morphology (e.g., Bogers et al. 1994). The source is oriented along PA$\approx$100$^\circ$ and is large (LAS$\sim$53''; Bogers et al. 1994). The position angle of the radio source corresponds to a faint extension in the HST image that reaches about 1.2'' from the nucleus down to a surface brightness of 22 m$_{F702W}$ arcsec$^{-2}$. \noindent {\bf 3C 175, z=0.768} The HST image of 3C 175 is marginally resolved. The PSF subtracted image shows a complex morphology. The inner, brighter isophotes are oriented preferentially east-west.
The fainter isophotes show a ``plume'' of emission to the south-east and south of the nucleus. This ``plume'' reaches about 2'' from the nucleus (down to 22 m$_{F702W}$ arcsec$^{-2}$). The radio source also has a ``classical'' radio structure of core and two radio lobes. This triple structure is oriented along PA$\approx$240$^\circ$ (jet side). The size of the radio source is large --- LAS$\sim$56''. In the optical image there is an extension in the isophotes along PA$\approx$240$^\circ$. There is no evidence from ground-based optical images of 3C 175 that it is resolved (Malkan 1984; Hes et al. 1996). \noindent {\bf 3C 179, z=0.856} The images of 3C 179 were reduced using a different method than the rest of the sample. Two 300 second images were taken of 3C 179 separated by a period of a month. Over that time, 3C 179 increased by about 0.5 magnitudes in brightness. We therefore PSF subtracted each image individually, aligned and rotated both residual images, and then averaged the two images. The magnitude and fraction of the extended emission were obtained by comparing the average quasar brightness with that of the average host brightness. Of course the fraction of extended to total flux was different in the two images. The brightest isophotes of the fuzz are oriented preferentially in the east-west direction. There is a bright ``knot'' of emission about 0.8'' from the nucleus along a PA$\approx$270$^\circ$. The radio maps of Reid et al. (1995) show a complex morphology. The radio maps show a jet along PA$\approx$270$^\circ$ and a double lobe morphology. \noindent {\bf 3C 181, z=1.382} The image of 3C 181 does not appear to be spatially resolved in these HST data. \noindent {\bf 3C 186, z=1.063} The image of 3C 186 is not resolved according to the EED analysis. However, the PSF subtraction suggests that it might be resolved. We show the possible morphology of the host in Figure 6.
The image of 3C 186 shows ``fuzz'' about 1.8'' across down to a surface brightness of 22 m$_{F702W}$ arcsec$^{-2}$. There are two significant position angles of extended emission, PA$\approx$30$^\circ$ and PA$\approx$110$^\circ$. There is a nearby (in projection) galaxy about 2.3'' from the nucleus along PA=65$^\circ$. The total magnitude of this nearby companion is m$_{F702W}$=22.2. The radio morphology is compact and is oriented along PA$\approx$140$^\circ$ (Rendong et al. 1991; Spencer et al. 1991). \noindent {\bf 3C 190, z=1.195} The PSF subtracted image of 3C 190 reveals a complex morphology. The principal axis of the optical emission is along PA$\approx$140$^\circ$. There is a clump of emission approximately 0.8'' to the east of the nucleus. The radio morphology of 3C 190 is compact (LAS $\sim$3'') and is linear having a chain of several hot spots (Spencer et al. 1991) along PA$\approx$ 240$^\circ$. In the HST image, we see a distortion in the isophotes along the PA of the radio structure. \noindent {\bf 3C 191, z=1.956} There is no evidence for a resolved component in 3C 191 according to the EED analysis, but the PSF subtraction analysis suggests that it is resolved. The PSF subtracted image of 3C 191 reveals a complex morphology. The general orientation of the fuzz is along PA$\approx$0$^\circ$ and $\approx$200$^\circ$. There is a ``plume'' of emission to the southeast of the nucleus along PA$\sim$140$^\circ$. The radio morphology is a core plus double lobe morphology with a principal axis of emission along PA$\approx$165$^\circ$ (Akujor et al. 1991). \noindent {\bf 3C 204, z=1.112} 3C 204 has a relatively high percentage of extended to total emission (almost 50\%). The host galaxy is a flat system (e$\sim$0.25) and its major axis is oriented preferentially along PA$\approx$150$^\circ$. The radio emission has a core, jet, double lobe morphology with the jet oriented along PA$\approx$275$^\circ$ (Reid et al. 1995).
There appears to be a faint extension of emission along PA$\approx$275$^\circ$ in the HST image of the host galaxy. This ``finger'' of emission may be related to the radio jet seen in the radio image of Reid et al. (1995). \noindent {\bf 3C 205, z=1.534} The HST image of this quasar was saturated. No PSF subtraction or EED analysis was attempted. \noindent {\bf 3C 207, z=0.684} 3C 207 does not appear to be extended in these HST data. \noindent {\bf 3C 208, z=1.110} 3C 208 does not appear to be extended in these HST data. \noindent {\bf 3C 215, z=0.412} The optical counterpart of 3C 215 appears to have a flattened elliptical structure with its major axis oriented along PA$\approx$ 135$^\circ$. The 5 GHz radio image of Bridle et al. (1994) shows a complex structure with an inner region consisting of several high surface brightness knots of emission along an approximately east-west line, engulfed in larger, more diffuse emission oriented along PA$\approx$150$^\circ$. The radio emission is seen over a large scale, LAS$\sim$ 1'. A ground-based V-band image presented by Hes et al. (1996) shows a similar morphology to the image presented here. \noindent {\bf 3C 216, z=0.67} The HST image of 3C 216 indicates that the host galaxy is an interacting system. The faint isophotes of the quasar fuzz are not centered on the quasar nucleus, but are offset to the northeast along PA$\approx$30 --- 45$^\circ$. A nearby (in projection) galaxy is about 1.6'' to the north of the nucleus and has a magnitude of 21.9. There is another brighter galaxy, just off the edge of the contour plot presented here, about 4'' to the east and 2.5'' north of the nucleus. The total magnitude of this galaxy is m$_{F702W} \approx$ 20.4. The 1.7 and 5 GHz radio images of Reid et al. (1995) show a compact radio source (LAS $\sim$ 6'') oriented along PA$\approx$40$^\circ$. Along this PA lies a radio ``hotspot'' about 1'' from the nucleus.
This hotspot is approximately coincident with the ``plume'' of optical emission we see in the HST image. \noindent {\bf 3C 220.2, z=1.157} 3C 220.2 does not appear to be extended in these HST data. \noindent {\bf 3C 249.1, z=0.313} The HST image of 3C 249.1 is spectacular. The extended emission comprises about 70\% of the total light from the quasar. A narrow-band HST image centered on [OIII]$\lambda$5007 (Sparks, private comm.) shows that most of the emission to the east of the nucleus is probably [OIII] emission within the bandpass of the F702W filter (see also Stockton \& MacKenty 1987). However, the comparison with the narrow-band [OIII] image suggests that much of the light from the inner parts of the nebula is likely to be continuum emission from the host galaxy. A 5 GHz radio image of Bridle et al. (1994) reveals a core, jet, double lobe morphology oriented preferentially along PA$\approx$ 100$^\circ$. \noindent {\bf 3C 254, z=0.734} The EED analysis of 3C 254 shows no evidence for resolution, but the PSF subtraction indicates that it is probably resolved. The HST image of 3C 254 has a complex morphology. Its host galaxy is oriented approximately in the east-west direction. There are ``plumes'' of emission along PA$\approx$45$^\circ$ and 285$^\circ$. The second of these plumes corresponds to the direction of the most distant radio lobe seen in the 5 GHz radio map of Reid et al. (1995). This radio map reveals that 3C 254 has a double-lobed radio morphology with a central core. \noindent {\bf 3C 263, z=0.646} The host galaxy of 3C 263 appears to be a flat system (e$\sim$0.3), with its major axis aligned along PA$\approx$350$^\circ$. There is a nearby galaxy in projection along the major axis of the galaxy ($\approx$1.9'' from the nucleus) with a magnitude of 22.2. There is also another nearby galaxy about 0.2'' south and 1.5'' west of the nucleus. This galaxy has a total magnitude of about 22.3. The 5 GHz radio map in Bridle et al. 
(1994) shows a large scale (LAS$\sim$51'') core, jet, double lobe source with the jet having an orientation of $\approx$110$^\circ$. The HST image shows a ``finger'' of extended emission in the counter-jet direction (PA$\sim$300$^\circ$). \noindent {\bf 3C 268.4, z=1.400} The image of 3C 268.4 does not appear to be extended in the EED analysis. PSF subtracting the image suggests that 3C 268.4 might be extended. In Figure 6, we show the possible resolved structure of the quasar. The HST image of 3C 268.4 reveals that it is another source with a complex morphology. The long axis ($\approx$1.5'') of the host is approximately along PA = 230$^\circ$. In the faintest isophotes there is also a ``finger'' of emission pointing approximately to the south. In this direction there is a nearby (in projection) galaxy that is about 2.6'' from the nucleus and has a total magnitude of 21.1. The 1.4 and 5 GHz radio images of Reid et al. (1995) show a core, jet, double lobe source with a largest angular size of about 12''. The principal axis of the radio emission is approximately along PA = 215$^\circ$ and corresponds roughly to the principal axis of the host galaxy. \noindent {\bf 3C 270.1, z=1.519} According to the EED analysis, 3C 270.1 does not appear to be extended. However, in Figure 6, we show the morphology of the possible extended emission obtained by PSF subtraction, which suggests that the image of 3C 270.1 is resolved. The host galaxy of 3C 270.1 is oriented preferentially in the east-west direction (PA=100$^\circ$). There are ``plumes'' of emission to the south and to the north-west. The high resolution radio image of Akujor et al. (1994) shows a compact (LAS$\sim$10''), ``bent'', triple source (core $+$ two lobes) along PA$\approx$180$^\circ$ and 320$^\circ$. \noindent {\bf 3C 277.1, z=0.321} The host galaxy of 3C 277.1 is large compared to those in the rest of the sample (emission is seen over 3'' down to 22 m$_{F702W}$ arcsec$^{-2}$).
The overall morphology of the host is round, but distortions to the inner isophotes are seen in the directions of PA$\approx$170$^\circ$ and 310$^\circ$. The high resolution 1.7 and 5 GHz images of Reid et al. (1995) show a compact (LAS$\sim$1.5'') double oriented along PA$\approx$310$^\circ$. The position of the radio ``hotspot'' to the northwest roughly corresponds to the distortion we see in the HST image of the host galaxy. \noindent {\bf 3C 279, z=0.538} The image of this quasar was saturated. No PSF subtraction was attempted. \noindent {\bf 3C 280.1, z=1.659} The host galaxy of 3C 280.1 is compact, and compared to most other hosts in the sample, its surface brightness increases rapidly with decreasing distance from the nucleus. The isophotes are round (e$\sim$0.05) and are oriented along PA$\approx$350$^\circ$. The 5 GHz radio map of Swarup, Sinha, \& Saikia (1982; but see also Lonsdale et al. 1992 and Akujor et al. 1994) shows a core, two hotspots (one close to the nucleus, $\sim$ 1'' to the southeast, and a more distant one, about 12'' to the west-northwest) and then a wiggly chain of emission to the southeast along PA$\approx$120$^\circ$. This ``chain'' of radio emission is seen from about 4'' to about 11'' from the nucleus. The host galaxy, as seen in our HST image, does have an outward bending of the isophotes along the direction of and over the region of the southeastern hotspot seen in the radio maps of Swarup et al. (1982). \noindent {\bf 3C 287, z=1.055} The HST data are not spatially resolved. \noindent {\bf 3C 288.1, z=0.961} The HST data are not spatially resolved. \noindent {\bf 3C 298, z=1.436} The host galaxy of 3C 298 is dominated by two morphological features: an ``arm'' of emission that projects from the nucleus to the south-west (PA=225$^\circ$) and then bends around to the east, and a ``plume'' of emission to the north-northeast of the nucleus (PA$\approx$20$^\circ$). The radio images of Rendong et al. (1991) and van Breugel et al.
(1992) show a compact triple source (LAS$\sim$1.8'') with an east-west orientation. \noindent {\bf 3C 309.1, z=0.905} The host galaxy of 3C 309.1 is a flat elliptical galaxy. The major axis of the host galaxy is oriented along PA=130$^\circ$. The high resolution radio image of Rendong et al. (1991) shows 3C 309.1 to be a compact source, with a nuclear region oriented along PA$\approx$145$^\circ$ with a LAS$\sim$0.1'', and a ``lobe'' about 1'' from the nucleus along PA$\approx$95$^\circ$. The highest surface brightness isophotes of the host have an orientation roughly like that of the nuclear radio emission. \noindent {\bf 3C 334, z=0.555} The HST image of 3C 334 shows a host galaxy that is distorted, having twisted and off-center isophotes. The general orientation of the host is PA$\approx$120$^\circ$, and it is 1.5'' across along its major axis (down to a surface brightness of 21 m$_{F702W}$ arcsec$^{-2}$). The 5 GHz radio image of Bridle et al. (1994) shows a large (LAS$\sim$57'') triple (core, jet, two radio lobes) source. The radio jet emerges from the nucleus at PA$\approx$140$^\circ$ and curves around to the north. The orientation of the optical image of the host galaxy is approximately the same as that of the radio (5 GHz) image. Several ground-based images of 3C 334 show that it is extended and has a morphology similar to that presented here. For example, an [OII]$\lambda$3727 image in Hes et al. (1996) shows that 3C 334 is extended along PA$\approx$10$^\circ$ and an [OIII]$\lambda$5007 image of Lawrence (1990) shows extended line emission along PA$\approx$150$^\circ$. Both results have some morphological similarity to the image presented in this study. \noindent {\bf 3C 343, z=0.988} The image of 3C 343 is most unusual for the quasars imaged in this sample; it did not require any PSF subtraction! It appears to be a flat system with a major axis about 2'' long oriented along PA$\approx$60$^\circ$. 3C 343 is another CSS radio source.
The radio map of Rendong et al. (1991) reveals a compact (LAS $\approx$0.3'') complex radio source. There is a finger of emission (a jet?) pointing out along PA$\approx$320$^\circ$. From an analysis of an optical spectrum, Aldcroft, Bechtold, \& Elvis (1994) suggest that 3C 343 is a Seyfert 2 galaxy (i.e., the galaxy has narrow permitted and forbidden lines). Given the high radio luminosity of this galaxy, a more appropriate classification is as a radio galaxy. The characteristics of the optical spectrum from Aldcroft et al. support our imaging data and our contention that 3C 343 does not appear to be a quasar. \noindent {\bf 3C 351.0, z=0.371} The image of this quasar was saturated. No PSF subtraction was attempted. \noindent {\bf 3C 380, z=0.692} 3C 380 appears to be marginally resolved and is perhaps an interacting system. It has two companion galaxies that appear to be immersed in common isophotes with the host galaxy of 3C 380. These galaxies are approximately 0.6'' west and 0.5'' north, and 0.8'' west and 0.7'' north of the nucleus, respectively. The total magnitudes of these two galaxies are m$_{F702W}$ $\approx$ 20.7 and m$_{F702W}$ $\approx$ 21.9. The radio image of van Breugel et al. (1992) at 1.4 GHz shows a very diffuse radio morphology that is strikingly similar to but larger than the optical HST image shown in Figure 5. The interacting companions are engulfed in this radio emission and there is a surface brightness enhancement of the radio emission over the area of these companions (Reid et al. 1995). \noindent {\bf 3C 418, z=1.686} 3C 418 appears to have a compact host galaxy with several nearby (in projection) galaxies. The long axis of the host is only about 0.9'' across (down to a surface brightness of 22.9 m$_{F702W}$ arcsec$^{-2}$) and oriented preferentially along PA$\approx$225$^\circ$. The two nearby galaxies are 0.9''W, 0.2''N and 1.2''W, 1.2''N from the nucleus and have magnitudes of 23.8 and 23.0 m$_{F702W}$.
A 15 GHz radio image of O'Dea, Barvainis, \& Challis (1988) shows a complex radio morphology with several bends in a radio ``jet'' pointing approximately along PA=330$^\circ$. This twisted jet is approximately 2'' long. The positions of the bends in the radio jet seem to correspond roughly to the positions of these nearby ``companions'' seen in the PSF subtracted HST image. However, given the magnitudes of these ``companions'', it seems very unlikely that they would be at the redshift of the quasar. \noindent {\bf 3C 432, z=1.785} The HST data for 3C 432 are not extended according to the EED analysis. However, PSF subtraction suggests that 3C 432 may in fact be resolved. The HST image of 3C 432 shows a compact host about 1.2'' in diameter down to a surface brightness of 22 m$_{F702W}$ arcsec$^{-2}$. The host is preferentially oriented along PA$\approx$ 45$^\circ$. There is a secondary ``plume'' of emission along PA$\approx$ 135$^\circ$. A 4.9 GHz radio map of Bridle et al. (1994) shows a classical core, jet, double lobed source extended over 15'' and oriented along PA$\approx$135$^\circ$. \noindent {\bf 3C 454, z=1.757} The HST image of 3C 454 reveals a round and compact ($\sim$1'' at a surface brightness of 22 m$_{F702W}$ arcsec$^{-2}$) host galaxy. The radio morphology is also compact, with a largest angular size of about 0.8'' (Rendong et al. 1991; Spencer et al. 1991). The radio emission is oriented preferentially in the north-south direction (Spencer et al. 1991). As is the case for 3C 181 and 3C 288.1, we urge caution in interpreting several of the features in the PSF subtracted image shown in Figure 5. Some of the structure seen along PA$\approx$205$^\circ$ may be due to incomplete removal of the most intense diffraction spike (i.e., the one in the +U3 direction). The questionable accuracy of the PSF removal is noted in Table 3. \noindent {\bf 3C 454.3, z=0.860} The image of this quasar was saturated. No PSF subtraction was attempted. 
\noindent {\bf 3C 455, z=0.5427} The image of 3C 455 is interesting. It has one of the highest extended/total brightness ratios of the entire sample ($\sim$70\%). It has a simple diffuse morphology with an embedded high surface brightness nucleus. Down to a surface brightness of about 22 mag arcsec$^{-2}$, 3C 455 is about 2'' in diameter along its major axis at PA$\approx$60$^\circ$. The radio morphology in the 8 GHz map of Bogers et al. (1994) is a core, double radio lobe type oriented along PA$\approx$245$^\circ$ (very similar to the optical major axis seen in the HST image). The spectrum of this object shown in Gelderman \& Whittle (1994) shows narrow permitted lines. Combined with our result that very little point-source subtraction was necessary, this suggests that this object should be re-classified as a compact radio galaxy rather than a quasar. \section{Concluding Remarks} We can draw some general conclusions from an analysis of the images presented in Figures 5 and 6. We present HST ``snapshot'' images of a sample of 43 quasars. From a close inspection and analysis of these data we draw the following conclusions: \noindent $\bullet$ Our analysis suggests that the quasar fuzz contributes from $<$5\% to nearly 100\% in the most extreme case (about 20\% being typical) of the total light from the quasar. Although a large fraction of the objects do not appear to be resolved in the EED analysis ($\sim$40\%), in about 1/2 of those sources, the more sensitive PSF subtraction indicates the presence of a resolved component. \noindent $\bullet$ Many of the resolved sources show complex morphology with twisted, asymmetric, and/or distorted isophotes and irregular extensions. \noindent $\bullet$ In almost every case of the quasars with spatially resolved ``fuzz'', there are similarities between the radio and optical morphologies.
In many cases there are features with similar radio and optical morphologies and/or the principal axes of radio and optical (continuum and line) emission are roughly aligned. This is further evidence for the reality of the structures detected in the PSF analysis. \noindent $\bullet$ A significant fraction ($\sim$25\%) of sources show galaxies nearby in projection (within 5'') and some ($\sim$10\% of the sources) show obvious signs of interactions with these nearby companions. These results show that the generally complex morphologies of host galaxies of quasars are influenced by the radio emitting plasma and by the presence of nearby companions. We must be cautious in interpreting these data since all of the images have a contribution from emission lines to their morphologies and brightnesses. Separation of the continuum and line contributions and constraints on the various mechanisms involved await new snapshot surveys of 3C sources that are being carried out with the linear ramp filters and another broad-band filter. A more robust and quantitative analysis of the data and the relationship between high redshift radio galaxies and quasars will be presented in a future paper. \acknowledgements This research is partially supported by HST GO grant number GO-5476.01-93A, by a program funded by the Dutch Organization for Research (NWO) and by a NATO research grant. M. D. L. would like to thank Chris Burrows, Tim Heckman, and Matt McMaster for useful suggestions. We would also like to thank an anonymous referee for her/his remarks that improved the presentation of our results. \newpage
\section{Introduction} The apple industry relies heavily on manual labor. For instance, in the United States alone, it is estimated that the seasonal labor needed for apple harvesting exceeds 10 million worker hours each year, accounting for about 15\% of the total production costs \cite{gallardo2012}. The growing labor shortage and increased labor cost have thus become major concerns for the long-term sustainability and profitability of the apple industry. In the meantime, the past decade has seen great transitions in apple production systems; traditional unstructured orchards have been replaced with high-density orchard systems where trees are smaller and more uniformly structured (i.e., v-trellis, vertical fruiting wall, etc.). These modern tree structures can greatly facilitate orchard automation, and thus there has been a renewed interest in pursuing robotic harvesting as a promising solution to reduce the harvesting cost and dependence on manual labor. Over the past few years, several robotic systems have been designed to autonomously harvest different horticultural crops, including sweet pepper \cite{LehnertRAL2017}, strawberry \cite{Xiong2020}, apple \cite{silwal2017}, and kiwifruit \cite{Williams2020}. For apple harvesting, the automation system designs can be mainly grouped into two categories. The first category is \textit{shake-and-catch} harvesting \cite{ZhangASABE2020}, where vibrations are applied to the tree trunk and/or branches to detach the fruits. Although shake-and-catch harvesting systems are efficient in detaching fruits from trees, they often result in a high rate of apple bruising that is not acceptable for the fresh market. The other category is \textit{fruit-by-fruit} harvesting, where manipulators are used to pick fruits in a controlled manner and thus can substantially reduce fruit damage. However, designing such systems with high picking efficiency and practical viability presents a great challenge.
So far, several fruit-by-fruit robotic apple harvesting systems have been developed \cite{baeten2008,silwal2017,Hohimer2019,Zhang2021MECH,Bulanon2021}. For instance, Baeten et al. combined a 7 degree-of-freedom (DOF) industrial manipulator with a vacuum-activated, funnel-shaped gripper for apple detachment, with a harvesting cycle time of 8--10 s/fruit \cite{baeten2008}. In \cite{silwal2017}, both hardware and software designs of an apple harvester are presented. Field tests conducted in a v-trellis orchard show that this system is able to pick 84\% of the 150 apples attempted, with an overall harvesting time of 7.6 s/fruit. In \cite{Hohimer2019}, Hohimer et al. developed a harvesting robot based on a pneumatic soft-robotic end-effector, and the average time the system takes from apple detachment to transport to the storage bin is 7.3 s/fruit. Despite this progress, the low picking efficiencies of existing systems remain unsatisfactory for practical use in the real orchard environment \cite{lu2017innovative}. Towards the goal of developing a practically and economically viable robotic harvesting system, we have been developing an efficient automated apple harvesting system over the past three years. Tests in an orchard field and in an indoor simulated orchard environment demonstrated a promising picking rate of $\sim$3.6 s/fruit, a significant improvement over the existing systems reviewed above. While the mechanical and preliminary control designs have been reported in \cite{Zhang2021MECH}, this paper presents the algorithm design and integration of the developed system, focusing on four major modules -- calibration, perception, planning, and control -- where several advancements have been made. First, we develop a robust extrinsic calibration scheme by detecting and removing data outliers via random sample consensus (RANSAC) \cite{FischlerCACM1981,RaguramTPAMI2013}.
Second, we develop a deep learning-based multi-view fruit detection and localization framework by fusing two RGB-D sensors facing the scene from different angles. Compared to the single camera-based algorithm we developed earlier \cite{Chu2020PRL}, this multi-view fusion offers enhanced performance in both detection and localization. Third, a unified planning algorithm that simultaneously optimizes the picking sequence and dropping spots is developed, which significantly improves harvesting efficiency. Lastly, two computationally efficient nonlinear controllers are synthesized to enable accurate and smooth manipulator movement. Experiments in both an indoor simulated orchard environment and a real orchard field were conducted to illustrate the performance of the integrated system. The remainder of this paper is organized as follows. Section \ref{sec_systemOverview} provides an overview of the developed robotic apple harvesting system. The algorithm designs of the calibration, perception, planning, and control modules are detailed in Section \ref{sec_algorithm}, and experimental results are discussed in Section \ref{sec_perfEva}. Finally, conclusions are drawn in Section \ref{sec_conclusion}. \section{System Overview} \label{sec_systemOverview} The developed robotic apple harvesting system is shown in Fig. \ref{fig_appleRob}. It comprises four primary hardware components: a perception module consisting of two Intel RealSense D435i RGB-D cameras, a 3-DOF manipulator, a vacuum-based end-effector, and a dropping module. All components are affixed to a Segway mobility platform for ease of movement in the orchard. The RGB-D cameras, the manipulator, and all communication devices are connected to an industrial computer (Xeon E2176G CPU and 64 GB RAM) residing in the mobility platform. The robot operating system (ROS) is used to fully integrate the software and facilitate the communication and control of the different components.
\begin{figure}[!h] \centering \includegraphics[width=0.85\linewidth]{figures/roboticSystem_v4-eps-converted-to.pdf} \caption{The developed robotic apple harvesting prototype.}\label{fig_appleRob} \end{figure} \vspace{-10pt} \subsection{Hardware Design} \label{sub_hardware} For automated apple harvesting, the first and foremost task is orchard perception, which detects and localizes the fruits to guide robotic manipulations. Different from existing works (e.g., \cite{baeten2008,Bulanon2021}) that attach the camera to the manipulator or the end-effector, the RGB-D cameras are installed on the Segway mobility platform to provide a global view of the scene, facilitating the use of multiple manipulator arms planned in our future versions. Moreover, the multi-camera setup is introduced to provide multi-view sensing from different perspectives, which is intended to achieve enhanced perception accuracy and robustness through sensor fusion to alleviate the impact of occlusions and challenging lighting conditions. To efficiently approach the target fruits, a 3-DOF manipulator with simple and compact mechanical structure is designed and assembled. Specifically, the manipulator is comprised of one prismatic joint and two revolute joints. The two revolute joints are linked using an $L$-shaped aluminum plate, which creates a pan-and-tilt module. The prismatic joint is assembled as the base of the pan-and-tilt module to extend the depth of the manipulator's workspace. A hollow aluminum link is installed on the pan-and-tilt module to ensure that the end-effector can reach the apple locations, and it also acts as a vacuum tube for grasping fruits in the harvesting process. Instead of relying on a hybrid pneumatic/motor actuation mechanism in our previous design \cite{Zhang2021MECH}, all the joints of the current manipulator are driven by servo motors, which not only reduces actuation complexity but also facilitates integrated control scheme design. 
In our system, a vacuum-based end-effector is designed to grasp and detach fruits. A soft silicone vacuum cup is attached to the front end of the aluminum tube. The vacuum cup, with a special geometric configuration, has shown satisfactory performance in conforming to various apple contours \cite{Dickinson2022}. Meanwhile, the rear end of the aluminum tube is connected to an electric-powered wet/dry vacuum via a flexible and expandable tube. The vacuum-based end-effector can reduce potential damage to fruits. Moreover, if the manipulator does not reach the apple accurately, the vacuum-based end-effector can tolerate some approaching inaccuracy, since it can attract the fruit within a certain distance (about 1.5 cm in our current prototype) when adequate vacuum flow is provided. For ease of collecting and transporting picked apples, a dropping module is assembled and affixed to the mobility platform. The base of the dropping module is a rectangular aluminum plate with a foam cushion covering. The manipulator can stop at any spot above the dropping module and then release the harvested fruit, thus reducing the harvesting cycle time. After the apple has fallen on the sloped surface of the dropping module, it rolls down to the rear end of the dropping module, where a screw-driven conveyor is installed to transport the apple to a bin \cite{zhang2021PBT}. \subsection{Software Design} The software suite is designed and integrated in the ROS framework. Different software components communicate primarily via custom messages sent through ROS actions and services. Fig. \ref{fig_flowDiagram} shows the main algorithm flow of the software system during an apple harvesting cycle. The algorithm structure mainly consists of four modules: calibration, perception, planning, and control. The logic flow of the apple harvesting cycle is detailed in the following.
\begin{figure}[!h] \centering \includegraphics[width=0.75\linewidth]{figures/flowDiagram_v3-eps-converted-to.pdf} \caption{Algorithm flowchart in an apple harvesting cycle.}\label{fig_flowDiagram} \end{figure} At the beginning of each harvesting cycle, the RGB-D cameras are triggered to acquire images. With the obtained image information, the perception algorithm (Section~\ref{subsec_visualSensing}) is used to detect and localize the fruits within the system's workspace. A list of 3D apple locations is then generated and subsequently transformed into 3D positions expressed in the coordinate frame of the manipulator. To enable the coordinate transformation, offline calibrations are conducted to obtain the transformation parameters between the perception system and the manipulator system (Section~\ref{subsec_calibration}). Based on the apple location list, the planning algorithm is utilized to optimize the apple picking sequence and its corresponding dropping spots (Section~\ref{subsec_planning}). The detected apples will be chosen as the targeted fruits by following the planned picking sequence, and a reference trajectory will be generated to guide the motion of the manipulator. The target apple location and its corresponding reference trajectory are passed to the control module, which then actuates the manipulator to follow the reference trajectory to reach the fruit. Once the fruit is successfully attached to the end-effector (detected by a pressure sensor mounted inside the tube), the rotation mechanism is triggered to rotate the whole aluminum tube by a certain angle to detach the apple (Section~\ref{subsec_control}). Finally, the manipulator returns to a dropping spot and releases the fruit. It is apparent that the software design of our robotic system requires multi-disciplinary advances to enable various synergistic functionalities and coordination for achieving reliable automated apple harvesting.
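The harvesting cycle described above can be summarized as a simple control loop. In the sketch below, all function and method names (\texttt{perceive}, \texttt{plan}, \texttt{follow\_trajectory}, etc.) are hypothetical stand-ins for the ROS actions and services used in the actual system, and, for simplicity, one dropping spot is paired with each apple:

```python
# Illustrative sketch of one harvesting cycle; all callables below are
# hypothetical stand-ins for the ROS actions/services described in the text.

def harvesting_cycle(perceive, transform_to_manipulator, plan, controller):
    # 1. Detect and localize apples in the camera frames.
    apples_cam = perceive()                      # 3D points, camera frame
    # 2. Express the detections in the manipulator frame using the
    #    offline extrinsic calibration.
    apples = [transform_to_manipulator(p) for p in apples_cam]
    # 3. Jointly plan the picking sequence and the dropping spots.
    sequence, drop_spots = plan(apples)
    # 4. Execute: approach, detach (tube rotation), and release each fruit.
    for target, drop in zip(sequence, drop_spots):
        controller.follow_trajectory(target)
        if controller.fruit_attached():          # pressure-sensor check
            controller.rotate_to_detach()
            controller.move_to(drop)
            controller.release()
```

The loop mirrors the module boundaries in Fig.~\ref{fig_flowDiagram}: perception and calibration feed the planner, and the controller consumes one target at a time.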
The next section describes each of the software components in more detail. \section{Algorithm Design and Integration} \label{sec_algorithm} In this section, we describe our software components on calibration, perception, planning, and control in detail. \subsection{Robust extrinsic calibration with outlier removal} \label{subsec_calibration} A new robotic system generally requires various calibration tasks necessary to coordinate different components (e.g., sensors, manipulators). In our system, we need to calibrate the intrinsic/extrinsic parameters of the two RGB-D cameras, as well as the extrinsic parameters between the perception system and the manipulator. The camera calibration toolbox \cite{Bougue2013cali} was used to obtain the intrinsic/extrinsic parameters of the two RGB-D cameras. In this subsection we focus on introducing how we perform robust calibration of the extrinsic transformation parameters between the perception system and the manipulator. More specifically, the extrinsic parameter calibration is aimed at obtaining the rotation matrix $R_{c}^{m}\in \mathbb{SO}^{3}$ and translation vector $t_{c}^{m}\in \mathbb{R}^{3}$ between the camera and manipulator, which plays a crucial role in transforming camera-centered measurements into the manipulator's coordinate frame, and thus enables the robot to move accurately based on camera measurement feedback. Our calibration procedure includes two steps, and the Qualisys motion capture system is utilized to facilitate a precise calibration. In the first step, we calibrate the extrinsic parameters between the manipulator and the Qualisys motion capture system. Specifically, a spherical marker is attached to the manipulator's end-effector, which can be detected and localized by the Qualisys motion capture system stably and precisely.
We then move the end-effector to $n$ different positions and record the marker positions $p_{q,i}\in \mathbb{R}^{3}$ ($i=1, \cdots, n$) measured by the Qualisys motion capture system (here the subscript $q$ represents Qualisys). Meanwhile, based on the kinematic model of the manipulator, the corresponding end-effector positions $p_{m,i}\in \mathbb{R}^{3}$ ($i=1, \cdots, n$) under the manipulator coordinate frame can be obtained (here the subscript $m$ represents manipulator). Let $R_{q}^{m}\in \mathbb{SO}^{3}$, $t_q^{m}\in \mathbb{R}^{3}$ be the rotation matrix and translation vector from the Qualisys frame to the manipulator frame. In the ideal case, $p_{q,i}$ and $p_{m,i}$ should satisfy the geometric relationship $p_{m,i} = R_{q}^{m} p_{q,i} + t_q^{m}$. Based on the above geometric relationship and following \cite{Horaud1995IJRR}, $R_{q}^{m}$ and $t_q^{m}$ can be estimated by solving the following two optimization problems sequentially: \abovedisplayskip= 2.5pt \belowdisplayskip= 2.5pt \begin{equation} \label{calibration_opt1} \begin{aligned} & \min_{R_{q}^{m}} f_{1}(R_{q}^{m}) = \sum_{j=1}^{n-1}\left\| \bar{p}_{m,j} - R_{q}^{m} \bar{p}_{q,j} \right\|^{2}, \\ & \text{s.t.} \quad (R_{q}^{m})^{\top} R_{q}^{m} = I_{3\times 3},\, \det(R_{q}^{m})=1, \\ & \qquad \, \bar{p}_{m,j} = p_{m,j}-p_{m,n}, j=1, \cdots, n-1, \\ & \qquad \, \bar{p}_{q,j} = p_{q,j}-p_{q,n}, j=1, \cdots, n-1, \end{aligned} \end{equation} and \begin{equation} \label{calibration_opt2} \begin{aligned} \min_{t_{q}^{m}} f_{2}(t_{q}^{m}) = \sum_{i=1}^{n}\left\| p_{m,i}-R_{q}^{m}p_{q,i} - t_{q}^{m} \right\|^{2}. \end{aligned} \end{equation} Note that the minimization problem \eqref{calibration_opt1} has a closed-form solution, and once $R_{q}^{m}$ is determined, the minimization of $f_{2}$ over the translation vector $t_q^{m}$ is a linear least-squares problem \cite{Horaud1995IJRR} that can be solved very efficiently.
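The closed-form solution of the rotation subproblem and the subsequent least-squares translation can be illustrated with the standard SVD-based construction on centered point sets (the Kabsch/Horn construction). The sketch below centers the points at their centroids, a common formulation closely related to the pairwise differencing in \eqref{calibration_opt1}; it is illustrative rather than the exact routine of our implementation.

```python
import numpy as np

def estimate_rigid_transform(p_q, p_m):
    """Estimate R, t such that p_m[i] ~ R @ p_q[i] + t (both arrays n x 3).

    The rotation is obtained in closed form from the SVD of the
    cross-covariance of the centered point sets; the translation then
    follows from the difference of centroids (the least-squares solution).
    """
    cq, cm = p_q.mean(axis=0), p_m.mean(axis=0)
    H = (p_q - cq).T @ (p_m - cm)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Enforce det(R) = +1 (a proper rotation, not a reflection).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cm - R @ cq
    return R, t
```

The determinant correction is what realizes the constraint $\det(R_q^m)=1$ in \eqref{calibration_opt1}.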
However, the data $\left\lbrace p_{m,1}, \cdots, p_{m,n} \right\rbrace$ and $\left\lbrace p_{q,1}, \cdots, p_{q,n} \right\rbrace$ are generally corrupted by noise and may contain outliers that do not satisfy the geometric relationship $p_{m,i} = R_{q}^{m} p_{q,i} + t_q^{m}$. These outliers can severely influence the calibration accuracy and thus need to be removed. Towards that end, we adopt the random sample consensus (RANSAC) methodology \cite{FischlerCACM1981,RaguramTPAMI2013} to extract credible data from $\left\lbrace p_{m,1}, \cdots, p_{m,n} \right\rbrace$ and $\left\lbrace p_{q,1}, \cdots, p_{q,n} \right\rbrace$. The RANSAC method performs outlier removal in an iterative, randomized, and data-driven manner, which ensures that robust extrinsic parameter calibration can be achieved. In the second step, the same process is performed to obtain the extrinsic parameters (i.e., rotation matrix $R_{c}^{q}\in \mathbb{SO}^{3}$ and translation vector $t_{c}^{q}\in \mathbb{R}^{3}$) between the Qualisys system and the perception system. Specifically, within the common field of view of the Qualisys system and the RGB-D camera, a spherical marker is placed at $n'$ different positions in sequence, and the corresponding 3D coordinates $p_{q,i}^{'}$, $p_{c,i}^{'}\in \mathbb{R}^{3}$ ($i=1, \cdots, n'$) are measured, respectively, by the Qualisys system and the RGB-D camera. According to these position measurements, optimization problems similar to \eqref{calibration_opt1} and \eqref{calibration_opt2} are formulated and solved to obtain $R_{c}^{q}$ and $t_{c}^{q}$. The RANSAC method is also applied to remove outliers for improved robustness.
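The RANSAC step can be sketched as follows. Given any fitter for the rigid transform (such as the closed-form solution of \eqref{calibration_opt1}--\eqref{calibration_opt2}), the loop repeatedly fits on a random minimal subset, counts the correspondences whose residual falls below a threshold, and finally refits on the largest consensus set. The threshold and iteration count below are illustrative placeholders, not the values used in our system.

```python
import numpy as np

def ransac_rigid(p_q, p_m, fit, n_iters=200, min_pts=4, thresh=0.01, seed=0):
    """RANSAC wrapper around a rigid-transform fitter fit(p_q, p_m) -> (R, t).

    Scores each candidate model by how many correspondences satisfy
    ||p_m[i] - (R p_q[i] + t)|| < thresh, then refits on the best inlier set.
    """
    rng = np.random.default_rng(seed)
    n = len(p_q)
    best_inliers = np.zeros(n, dtype=bool)
    for _ in range(n_iters):
        # Hypothesize a model from a random minimal sample.
        idx = rng.choice(n, size=min_pts, replace=False)
        R, t = fit(p_q[idx], p_m[idx])
        # Score: residual of every correspondence against this model.
        resid = np.linalg.norm(p_m - (p_q @ R.T + t), axis=1)
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the full consensus set of the best model.
    return fit(p_q[best_inliers], p_m[best_inliers]), best_inliers
```

In practice the iteration count is chosen from the expected outlier ratio so that an all-inlier sample is drawn with high probability.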
Finally, the resultant calibration parameters $\left(R_{q}^{m}, t_q^{m} \right)$ and $\left(R_{c}^{q}, t_{c}^{q} \right)$ are combined to determine the extrinsic parameters $\left(R_{c}^{m}, t_c^{m} \right)$ (i.e., $R_{c}^{m}=R_{q}^{m}R_{c}^{q}$ and $t_{c}^{m}=t_{q}^{m} + R_{q}^{m}t_{c}^{q}$) between the manipulator and the perception system. \subsection{Multi-view fusion for robust detection and localization} \label{subsec_visualSensing} One of the key tasks in robotic apple harvesting is fruit detection and localization, where the former is to segment apples from the background areas whereas the latter subsequently calculates the 3D positions of the detected apples. In our preliminary work, a network with a Mask R-CNN backbone and a suppression end was developed in \cite{Chu2020PRL} using a single RGB-D camera. In this new version, we extend the perception system to systematically fuse two RGB-D cameras to enhance the detection performance. This is motivated by the two major challenges in orchard perception identified in our previous field tests: leaf/branch occlusion and varying lighting conditions. Exploiting multiple cameras from different views can alleviate the impact of occlusion and challenging lighting conditions, as the two cameras provide complementary views for enhanced performance. The network architecture of the proposed detection approach is shown in Fig. \ref{fig_appleDetStructure}. The raw images captured by the two RGB-D cameras from different perspectives are fed into identical but separate deep learning networks, each consisting of two components: a feature learning backbone and a feature suppression end. The feature learning backbone adopts the classical backbone designed in Mask R-CNN \cite{he2017mask} to extract apple features and generate region proposals.
Since the feature learning backbone may generate erroneous inferences, the image patches inside the bounding boxes are then passed to a feature suppression end to remove mis-classified candidates. Once the images from the two RGB-D cameras are processed by the deep learning networks, the bounding boxes of apple candidates are obtained. This suppression Mask R-CNN design has been reported in \cite{Chu2020PRL}. In the new perception module, we further fuse the detection results from the two camera channels using a fuzzy logic unit. In particular, we first match the bounding boxes from the two camera frames based on the extrinsic calibration of the two cameras. For the matched and unmatched bounding boxes, a fuzzy logic unit is utilized to combine the detection results to further enhance the accuracy of the labeled candidates. The code for the network and the fuzzy logic implementation is open-sourced (\url{https://github.com/pengyuchu/DualCamFusion}). The detection algorithm with the two-camera setup achieves an F1-score of 93.92\%, while the one with a single camera achieves an F1-score of 90.5\%. \begin{figure}[!t] \centering \includegraphics[width=8.6cm]{figures/suppression_MaskRCNN_v2-eps-converted-to.pdf} \caption{Apple detection structure based on two-camera setup.}\label{fig_appleDetStructure} \end{figure} After detecting the fruits, apple localization is performed by employing the depth information provided by the RGB-D camera. More precisely, for each bounding box, the image pixels of the detected apple are extracted to generate a range matrix by utilizing the disparity map. We then take the mean value of the range matrix as the apple's depth. Combining this depth with the center of the bounding box pixels, back-projection \cite{Hartley2003} is used to calculate the 3D position of the apple. This process is conducted for each of the bounding boxes to obtain the positions of all detected apples in the image.
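The localization step just described — mean range over the bounding box followed by pinhole back-projection — can be sketched as follows; the function name and the intrinsic parameters $(f_x, f_y, c_x, c_y)$ are illustrative assumptions, not the camera's actual API.

```python
import numpy as np

def localize_apple(depth_patch, box_center, fx, fy, cx, cy):
    """Back-project a detected apple to 3D camera coordinates.
    depth_patch: depth values (metres) of the pixels inside the bounding box;
    box_center: (u, v) pixel coordinates of the bounding-box centre;
    fx, fy, cx, cy: pinhole intrinsics (assumed known from calibration)."""
    z = float(np.mean(depth_patch))   # mean range over the box
    u, v = box_center
    x = (u - cx) * z / fx             # standard pinhole back-projection
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```

The returned position is in the camera frame; the extrinsic parameters from the calibration step transform it into the manipulator frame.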
\subsection{Unified picking/dropping and motion planning} \label{subsec_planning} In our system, there are two levels of planning tasks. At a high level, we need to plan the apple picking sequence and dropping spots based on the list of detected apple locations provided by the perception module. At a lower level, we need to generate a reference trajectory for a selected target apple for the manipulation control. The high-level harvesting sequence planning plays a crucial role in reducing the harvesting cycle time. Different from existing works \cite{silwal2017,Edan1991} that only focus on optimizing the fruit picking sequence, we take both the apple picking and dropping spot sequences into consideration. This flexibility is enabled by our dropping module design, where the end effector does not need to return all the way to the home position to release the picked fruits. As shown in Fig.~1, the dropping module allows the end effector to release the detached apple in a large area, offering additional flexibility and optimization freedom for improved harvesting efficiency. More specifically, consider $N$ detected apples with 3D positions $p_{i}\in \mathbb{R}^{3}$, $i=1, \cdots, N$, expressed in the manipulator frame. The apple picking sequence and the dropping spot sequence are defined as follows: \begin{itemize} \item The apple picking sequence $S= \left\lbrace s_{1}, \cdots, s_{N} \right\rbrace$ is a permutation of $ \left\lbrace 1, \cdots, N\right\rbrace$, which determines the order in which apples are picked: the manipulator travels through the corresponding position sequence $\left\lbrace p_{s_{1}}, \cdots, p_{s_{N}} \right\rbrace$, and each apple is visited only once. \item The dropping spot sequence $S_{d}= \left\lbrace \bar{p}_{s_{1}}, \cdots, \bar{p}_{s_{N-1}} \right\rbrace$ is a list of ordered 3D positions where the manipulator will stop by and release the harvested fruit.
As discussed in Section~\ref{sub_hardware}, the dropping module provides a specific domain (denoted by $\bar{\mathbb{P}}$) for the manipulator to release the fruit, and hence the dropping spots $\bar{p}_{s_{i}}$ should be generated from this domain, i.e., $\bar{p}_{s_{i}}\in \bar{\mathbb{P}}$. \end{itemize} In the planning phase, the manipulator starts from its home position $p_{0}$ and approaches the detected apples by following the sequence $S$ defined above. For $i=1, \cdots, N-1$, once the apple located at $p_{s_{i}}$ is harvested, the manipulator moves to the position $\bar{p}_{s_{i}}$ to release the fruit and then heads to the next apple located at $p_{s_{i+1}}$. Once the last apple in the picking sequence is harvested, the manipulator returns to the home position $p_{0}$ for fruit release. According to the above description, the manipulator's maneuver follows the sequence: \begin{equation} \label{planning_seq} p_{0} \!\rightarrow\! p_{s_{1}} \!\rightarrow\! \bar{p}_{s_{1}} \!\rightarrow\! \cdots \!\rightarrow\! p_{s_{N-1}} \!\rightarrow\! \bar{p}_{s_{N-1}} \!\rightarrow\! p_{s_{N}} \!\rightarrow\! p_{0}. \end{equation} The planning objective is to determine the picking sequence $S$ and its corresponding dropping spot sequence $S_{d}$ by optimizing the travel cost along the maneuver sequence \eqref{planning_seq}. We use the Euclidean distance to define the travel cost, as follows: \begin{equation} \label{eq_g} g = \| p_{s_{1}}-p_{0} \| + \sum_{i=1}^{N-1} g_{s_{i}, s_{i+1}} + \|p_{0}-p_{s_{N}} \|, \end{equation} where $g_{s_{i}, s_{i+1}} = \| \bar{p}_{s_{i}}-p_{s_{i}} \| + \|p_{s_{i+1}}-\bar{p}_{s_{i}}\| \in \mathbb{R}$ ($i=1, \cdots, N-1$) is the travel cost between two adjoining apples in $S$.
Given $p_{s_{i}}$ and $p_{s_{i+1}}$, the optimal dropping spot $\bar{p}^{*}_{s_{i}}$ can be determined by solving the following problem: \begin{equation} \label{planning_opt1} \begin{aligned} &\min_{\bar{p}_{s_{i}}} g_{s_{i}, s_{i+1}}(\bar{p}_{s_{i}}) = \| \bar{p}_{s_{i}}-p_{s_{i}} \| + \|p_{s_{i+1}}-\bar{p}_{s_{i}}\|, \\ &\text{s.t.} \quad \bar{p}_{s_{i}}\in \bar{\mathbb{P}}. \end{aligned} \end{equation} Let $g_{s_{i}, s_{i+1}}^{*} = \| \bar{p}_{s_{i}}^{*}-p_{s_{i}} \| + \|p_{s_{i+1}}-\bar{p}_{s_{i}}^{*}\| \in \mathbb{R}$ be the minimal value of $g_{s_{i}, s_{i+1}}$. Then, the optimization problem over the picking sequence $S$ is formulated as \begin{equation} \label{planning_opt2} \begin{aligned} &\min_{S} g(S) = \| p_{s_{1}}-p_{0} \| + \sum_{i=1}^{N-1} g_{s_{i}, s_{i+1}}^{*} + \|p_{0}-p_{s_{N}} \|, \\ &\text{s.t.} \quad s_{i} \in \left\lbrace 1, \cdots, N \right\rbrace, i=1, \cdots, N, \\ & \qquad \, s_{i} \neq s_{j}, \text{for any}\; i\neq j\; \text{and}\; i, j = 1, \cdots N. \end{aligned} \end{equation} To determine the apple picking sequence and the dropping spot sequence, we first calculate the minimal travel cost between any two apple positions via \eqref{planning_opt1}. With the obtained minimal travel cost for any pair of apples, the optimization problem \eqref{planning_opt2} can be reformulated as a traveling salesman problem (TSP). The nearest neighbor algorithm~\cite{gutin2006TSP} is utilized to address the TSP, and then the apple picking sequence $S$ and the dropping spot sequence $S_{d}$ can be determined. Based on the sequences $S$ and $S_{d}$, the apple location $p_{s_{i}}$ and the dropping spot $\bar{p}_{s_{i}}$ will be assigned in turn as the target position $p_{d}=\begin{bmatrix} x_{d}, y_{d}, z_{d} \end{bmatrix}^{\top}$ that the manipulator needs to reach. In our implementation, the planning algorithm described above is performed whenever the perception system sends an updated list of detected apple locations.
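A minimal sketch of this planning pipeline is given below. For illustration only, the dropping domain $\bar{\mathbb{P}}$ is discretized into a finite set of candidate spots, so \eqref{planning_opt1} reduces to a minimum over that set; the actual system may solve the continuous problem instead, and the function names are assumptions.

```python
import numpy as np

def best_drop(p_a, p_b, drop_pts):
    """Cheapest dropping spot between apples p_a and p_b, searching over a
    discretized dropping domain drop_pts (rows are candidate 3D spots)."""
    cost = np.linalg.norm(drop_pts - p_a, axis=1) + np.linalg.norm(p_b - drop_pts, axis=1)
    k = int(np.argmin(cost))
    return drop_pts[k], float(cost[k])

def plan_sequence(home, apples, drop_pts):
    """Greedy nearest-neighbour TSP over the detected apples, where the
    edge cost between two apples is the optimal pick-then-drop detour."""
    remaining = list(range(len(apples)))
    # start at the apple nearest to the home position
    cur = min(remaining, key=lambda i: np.linalg.norm(apples[i] - home))
    order, drops = [cur], []
    remaining.remove(cur)
    while remaining:
        nxt = min(remaining, key=lambda j: best_drop(apples[cur], apples[j], drop_pts)[1])
        drops.append(best_drop(apples[cur], apples[nxt], drop_pts)[0])
        order.append(nxt)
        remaining.remove(nxt)
        cur = nxt
    return order, drops
```

The nearest-neighbour rule is a heuristic, not an exact TSP solver; the final return to the home position after the last apple is omitted for brevity.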
To facilitate the manipulation control, given a target position $p_{d}$ (e.g., the top of the picking list), we use the quintic function \cite{siciliano2010robotics} to generate a corresponding reference trajectory $p_{r}(t)=\begin{bmatrix} x_{r}(t),\, y_{r}(t),\, z_{r}(t) \end{bmatrix}^{\top}$. This reference trajectory is a function of time whose terminus is the target position $p_{d}$. The quintic function-based reference trajectory $p_{r}$ brings the following advantages. First, the reference trajectory is continuously differentiable and its terminal velocity and acceleration are zero, which ensures that the end-effector approaches the desired position along a smooth path. Second, by adjusting the function parameters, the velocity profile of the reference trajectory can be modified, and thus the end-effector can reach the desired position within a specific time interval. \subsection{Efficient nonlinear control for accurate reference tracking} \label{subsec_control} Given a target apple and the reference trajectory generated by the planning algorithm discussed above, we next introduce the control algorithms that drive the manipulator to follow the reference trajectory. As shown in Fig. \ref{fig_flowDiagram}, one key requirement of the control module is to drive the manipulator to the detected fruits or dropping spots with high accuracy and flexibility. To achieve this goal, two motion control strategies are developed by fully exploiting the mechanical structure of the developed 3-DOF manipulator. \begin{figure}[!b] \centering \includegraphics[width=8.5cm]{figures/kineModel_v3-eps-converted-to.pdf} \caption{Kinematic description of the 3-DOF manipulator.}\label{fig_kineModel} \end{figure} The kinematic description of the 3-DOF manipulator is shown in Fig. \ref{fig_kineModel}. Denote $p = \begin{bmatrix} x, y, z \end{bmatrix}^{\top} \in \mathbb{R}^{3}$ as the position of the end-effector.
Based on the Denavit–Hartenberg convention \cite{siciliano2010robotics} and the kinematical diagram presented in Fig. \ref{fig_kineModel}, the forward kinematics function of the manipulator can be derived, as follows: \begin{equation} \label{eq_xyz} \begin{aligned} x &= d_{x3}\cos(\theta) \cos(\varphi) + d_{x2}\cos(\theta) + d_{z2}\sin(\theta) + d_{x1} + D, \\ y &= d_{x3}\sin(\varphi) + d_{y2} + d_{y1}, \\ z &= -d_{x3}\sin(\theta) \cos(\varphi) - d_{x2}\sin(\theta) + d_{z2}\cos(\theta) + d_{z1}, \end{aligned} \end{equation} where $d_{x1}$, $d_{x2}$, $d_{x3}$, $d_{y1}$, $d_{y2}$, $d_{z1}$, $d_{z2} \in \mathbb{R}$ are the link lengths, and $\begin{bmatrix} \varphi, \theta, D \end{bmatrix}^{\top} \in \mathbb{R}^{3}$ are the joint variables. \iffalse The values of link length and joint variables are listed in Table \ref{table model parameter}. It is clear that \eqref{eq_xyz} characterizes the position of the end-effector as a function of the joint parameters $\begin{bmatrix} \varphi, \theta, D \end{bmatrix}^{T}$. From \eqref{eq_xyz} and the facts that $\varphi$, $\theta \in \left(-25^{\circ}, 25^{\circ} \right)$, it turns out that \begin{equation} \label{eq_inverKin} \begin{aligned} \varphi &= \arcsin \left( \frac{y-d_{y1}-d_{y2}}{d_{x3}} \right), \\ \theta &= \arcsin \left( \frac{-b-\sqrt{b^{2}-4ac}}{2a} \right), \\ D &= x-d_{x3}\cos(\theta) \cos(\varphi) - d_{x2}\cos(\theta) - d_{z2}\sin(\theta) - d_{x1}, \end{aligned} \end{equation} where $a = (d_{x3}\cos(\varphi)+d_{x2})^{2}+d_{z2}^{2}$, $b = 2(z-d_{z1})(d_{x3}\cos(\varphi)+d_{x2})$, and $c = (z-d_{z1})^{2}-d_{z2}^{2}$. \eqref{eq_inverKin} characterizes the inverse kinematics to calculate the joint parameters $\begin{bmatrix} \varphi, \theta, D \end{bmatrix}^{T}$ from the position of the end-effector $\begin{bmatrix} x, y, z \end{bmatrix}^{T}$. Generally, gradient-based optimization solvers \cite{wang1991,SugiharaTRO2011} can be used to compute the inverse kinematics of a manipulator. 
However, since the developed 3-DOF manipulator has a simple and exploitable structure, its inverse kinematics can be determined by the analytical expression in \eqref{eq_inverKin}, which avoids an iterative and complex optimization procedure that is time-consuming and can induce numerical errors. \begin{table}[!htbp] \caption{Model parameters of the 3-DOF manipulator} \label{table model parameter} \begin{center} \begin{tabular}{c c} \hline \hline Parameter & Value \\ \hline Link $d_{1}$ & 0.0635 $m$ \\ Link $d_{2}$ & 0.0889 $m$ \\ Link $d_{3}$ & 0.6985 $m$ \\ Revolute joint $\varphi$ & $\left(-25^{\circ}, 25^{\circ} \right)$ \\ Revolute joint $\theta$ & $\left(-25^{\circ}, 25^{\circ} \right)$ \\ Prismatic joint $D$ & $\left(0 m, 0.61 m\right)$\\ \hline \hline \end{tabular} \end{center} \end{table} \fi As described in Section \ref{subsec_planning}, the planning module provides the reference trajectory $p_{r}(t)=\begin{bmatrix} x_{r}(t),\, y_{r}(t),\, z_{r}(t) \end{bmatrix}^{\top}$ for the target position $p_{d}$. The objective of the manipulation control is to regulate the end-effector to follow the reference trajectory $p_{r}$ and finally approach the target position $p_{d}$. The revolute joints $\varphi$, $\theta$ and the prismatic joint $D$ are all driven by electrical motors, and two velocity-based control schemes are employed to generate explicit speed commands that smoothly adjust the joints based on real-time position feedback.
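For reference, the forward kinematics \eqref{eq_xyz} translate directly into code; the link-length dictionary keys and the function name used below are illustrative assumptions.

```python
import numpy as np

def forward_kinematics(phi, theta, D, link):
    """End-effector position from joint variables (phi, theta, D), following
    the forward-kinematics expressions of the 3-DOF manipulator; `link` is a
    dict of link lengths d_x1, d_x2, d_x3, d_y1, d_y2, d_z1, d_z2."""
    dx1, dx2, dx3 = link["dx1"], link["dx2"], link["dx3"]
    dy1, dy2 = link["dy1"], link["dy2"]
    dz1, dz2 = link["dz1"], link["dz2"]
    x = dx3 * np.cos(theta) * np.cos(phi) + dx2 * np.cos(theta) + dz2 * np.sin(theta) + dx1 + D
    y = dx3 * np.sin(phi) + dy2 + dy1
    z = -dx3 * np.sin(theta) * np.cos(phi) - dx2 * np.sin(theta) + dz2 * np.cos(theta) + dz1
    return np.array([x, y, z])
```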
Specifically, based on \eqref{eq_xyz}, the time derivative of $\begin{bmatrix} x, y, z \end{bmatrix}^{\top}$ can be calculated as \begin{equation} \label{eq_dot_yz} \begin{aligned} \dot{x} &= -d_{x3}(\sin(\theta)\cos(\varphi)\omega_{\theta} + \cos(\theta)\sin(\varphi)\omega_{\varphi} ) \\ &\quad - d_{x2}\sin(\theta)\omega_{\theta} + d_{z2}\cos(\theta)\omega_{\theta} + v_{D}, \\ \dot{y} &= d_{x3}\cos(\varphi) \omega_{\varphi}, \\ \dot{z} &= -d_{x3}(\cos(\theta) \cos(\varphi) \omega_{\theta} - \sin(\theta)\sin(\varphi) \omega_{\varphi}) \\ &\quad -d_{x2}\cos(\theta)\omega_{\theta} - d_{z2}\sin(\theta)\omega_{\theta}, \end{aligned} \end{equation} where $\omega_{\varphi}$, $\omega_{\theta} \in \mathbb{R}$ are the angular velocities of the revolute joints $\varphi$ and $\theta$, respectively, and $v_{D}\in \mathbb{R}$ is the linear velocity of the prismatic joint $D$. Furthermore, the error signals $\begin{bmatrix} e_{x}, e_{y}, e_{z} \end{bmatrix}^{\top} \in \mathbb{R}^{3}$ are constructed as \begin{equation} \label{eq_ey_ez} e_{x} = x-x_{r}, \quad e_{y} = y-y_{r}, \quad e_{z} = z-z_{r}. \end{equation} Based on \eqref{eq_dot_yz}, \eqref{eq_ey_ez}, and by virtue of Lyapunov-based control techniques \cite{khalil2002nonlinear}, the first velocity controller is designed as \begin{equation} \label{eq_omega} \begin{aligned} \omega_{\varphi} & = \frac{-k_{y}e_{y} + \dot{y}_{r}}{d_{x3}\cos(\varphi)}, \\ \omega_{\theta} & = \frac{k_{z}e_{z} + d_{x3}\sin(\theta)\sin(\varphi)\omega_{\varphi} - \dot{z}_{r}}{d_{x3}\cos(\theta) \cos(\varphi) + d_{x2}\cos(\theta) + d_{z2}\sin(\theta)}, \\ v_{D} & = -k_{x}e_{x} +d_{x3}(\sin(\theta)\cos(\varphi)\omega_{\theta} + \cos(\theta)\sin(\varphi)\omega_{\varphi} ) \\ &\quad + d_{x2}\sin(\theta)\omega_{\theta} - d_{z2}\cos(\theta)\omega_{\theta} + \dot{x}_{r}, \end{aligned} \end{equation} where $k_{x}$, $k_{y}$, $k_{z} \in \mathbb{R}^{+}$ are positive constant gains.
The velocity controller \eqref{eq_omega} can ensure that the end-effector position tracks the reference trajectory $\begin{bmatrix} x_{r}, y_{r}, z_{r} \end{bmatrix}^{\top}$ asymptotically, where the detailed stability analysis is given in Appendix A. Considering the existence of external disturbances, we further design a robust controller on the basis of \eqref{eq_omega}. More precisely, inspired by \cite{Xian2004TAC,PatilCSL2022}, the nonlinear integral term is introduced to facilitate the controller design, as follows: \begin{equation} \label{eq_omega_v2} \begin{aligned} \omega_{\varphi} & = \frac{-k_{y}e_{y} + \dot{y}_{r} + \eta_{y}}{d_{x3}\cos(\varphi)}, \\ \omega_{\theta} & = \frac{k_{z}e_{z} + d_{x3}\sin(\theta)\sin(\varphi)\omega_{\varphi} - \dot{z}_{r} + \eta_{z}}{d_{x3}\cos(\theta) \cos(\varphi) + d_{x2}\cos(\theta) + d_{z2}\sin(\theta)}, \\ v_{D} & = -k_{x}e_{x} +d_{x3}(\sin(\theta)\cos(\varphi)\omega_{\theta} + \cos(\theta)\sin(\varphi)\omega_{\varphi} ) \\ &\quad + d_{x2}\sin(\theta)\omega_{\theta} - d_{z2}\cos(\theta)\omega_{\theta} + \dot{x}_{r} + \eta_{x}, \end{aligned} \end{equation} where $\eta_{x}$, $\eta_{y}$, $\eta_{z} \in \mathbb{R}$ are given by \begin{equation} \label{eq_integral} \begin{aligned} \eta_{x} &= \iota_{x} \int_{0}^{t} \left( e_{x}(\tau) + \text{sgn}(e_{x}(\tau)) \right)d\tau, \\ \eta_{y} &= \iota_{y} \int_{0}^{t} \left( e_{y}(\tau) + \text{sgn}(e_{y}(\tau)) \right)d\tau, \\ \eta_{z} &= \iota_{z} \int_{0}^{t} \left( e_{z}(\tau) + \text{sgn}(e_{z}(\tau)) \right)d\tau. \end{aligned} \end{equation} In \eqref{eq_integral}, $\iota_{x}$, $\iota_{y}$, $\iota_{z} \in \mathbb{R}^{+}$ are positive constant gains and $\text{sgn}(\cdot)$ is the standard signum function. The controller designed in \eqref{eq_omega_v2} has advantages in eliminating the influence of bounded disturbances while ensuring asymptotic error convergence. 
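One step of the baseline velocity controller \eqref{eq_omega} can be sketched as follows. Substituting the returned joint velocities into \eqref{eq_dot_yz} yields the closed-loop error dynamics $\dot{e}_{x} = -k_{x}e_{x}$, $\dot{e}_{y} = -k_{y}e_{y}$, $\dot{e}_{z} = -k_{z}e_{z}$; the function and argument names are illustrative.

```python
import numpy as np

def velocity_control(e, ref_vel, phi, theta, gains, link):
    """One step of the Lyapunov-based velocity controller:
    e = (e_x, e_y, e_z) are the tracking errors, ref_vel the reference
    velocities (dx_r, dy_r, dz_r), gains = (k_x, k_y, k_z), and link holds
    the link lengths d_x2, d_x3, d_z2.  Returns (omega_phi, omega_theta, v_D)."""
    ex, ey, ez = e
    xr_dot, yr_dot, zr_dot = ref_vel
    kx, ky, kz = gains
    dx2, dx3, dz2 = link["dx2"], link["dx3"], link["dz2"]
    w_phi = (-ky * ey + yr_dot) / (dx3 * np.cos(phi))
    w_theta = (kz * ez + dx3 * np.sin(theta) * np.sin(phi) * w_phi - zr_dot) / (
        dx3 * np.cos(theta) * np.cos(phi) + dx2 * np.cos(theta) + dz2 * np.sin(theta))
    v_D = (-kx * ex
           + dx3 * (np.sin(theta) * np.cos(phi) * w_theta + np.cos(theta) * np.sin(phi) * w_phi)
           + dx2 * np.sin(theta) * w_theta
           - dz2 * np.cos(theta) * w_theta
           + xr_dot)
    return w_phi, w_theta, v_D
```

The robust variant \eqref{eq_omega_v2} adds the nonlinear integral terms $\eta_{x}$, $\eta_{y}$, $\eta_{z}$ to the same expressions.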
Rigorous stability analysis can be conducted based on the results in Appendix A and by leveraging the techniques developed in \cite{Xian2004TAC,PatilCSL2022}. The proof is omitted here due to space limitations. \section{Experiment and Results} \label{sec_perfEva} In this section, both indoor and field experiments are presented to demonstrate the performance of the developed system. The indoor tests with artificial trees focus on validating the planning and control schemes, and the integrated system is further evaluated in the orchard field. \subsection{Indoor test validation} Indoor tests were carried out in a simulated orchard environment, which consists of artificial trees with real apples hung on them, a flat-panel lighting system, and a Qualisys motion capture system. \begin{figure}[!h] \centering \includegraphics[width=6cm]{figures/scenePlanning_v2-eps-converted-to.pdf} \caption{The scenario of Trial 1 for planning evaluation, where green bounding boxes represent detected apples.}\label{fig_scenePlanning} \end{figure} We first validate the performance of the planning algorithm that generates the picking sequence and dropping spots based on the apple locations provided by the perception module. Three trials with different apple configurations are carried out for thorough validation. The three trials have 5, 7, and 9 apples, respectively, randomly hung on the artificial tree, and Fig. \ref{fig_scenePlanning} depicts the scenario of Trial 1 for reference. The perception algorithm is used to detect and localize these fruits, and then the planning module is triggered to determine the picking sequence and dropping spots. The travel cost defined in \eqref{eq_g} is calculated with the obtained picking sequence and dropping spots. We also actuate the manipulator to reach the apple positions and the dropping spots by following the planning results, and the total travel time (i.e., the maneuver time of the manipulator) is recorded.
Moreover, to better demonstrate the effectiveness of the planning algorithm, the non-planning case is introduced for comparison. In the non-planning case, we do not optimize the dropping spots and consider that once the manipulator reaches a detected apple, it will always move to the home position to release the fruit. The same testing scenarios are used to obtain the travel cost and travel time under the non-planning case. The results are summarized in Table \ref{table planning}. It is clear that the proposed planning algorithm can significantly reduce the travel cost by optimizing the picking sequence and dropping spots. Furthermore, the travel time is an intuitive indicator to show the effect of the planning module in reducing the harvesting cycle time. Compared to the non-planning case, the proposed planning algorithm can efficiently reduce the travel time. \iffalse \begin{table}[!htbp] \caption{Comparison of travel cost ($m$) and travel time ($s$) between the proposed planning scheme and non-planning case} \label{table planning} \begin{center} \resizebox{0.48\textwidth}{!}{ \begin{tabular}{c c c c} \hline \hline & & Non-planning case & Proposed planning scheme \\ \hline Trial 1 (5 apples) & Travel cost & 4.1174 & 3.1835 \\ & Travel time & 12.5453 & 10.7027 \\ Trial 2 (7 apples) & Travel cost & 5.6113 & 4.2846 \\ & Travel time & 17.1137 & 14.5085 \\ Trial 3 (9 apples) & Travel cost & 6.8059 & 4.8641 \\ & Travel time & 21.0673 & 17.6509 \\ \hline \hline \end{tabular} } \end{center} \end{table} \fi \begin{table}[!t] \caption{Comparison of travel distance and travel time between the proposed planning scheme and non-planning case} \label{table planning} \begin{center} \begin{threeparttable} \begin{tabular}{c c c c} \hline \hline Trial & Fruit number & Travel distance NP (\textbf{P}) & Travel time NP (\textbf{P}) \\ \hline 1 & 5 & 4.12 (\textbf{3.18}) [m] & 12.55 (\textbf{10.70}) [s] \\ 2 & 7 & 5.61 (\textbf{4.28}) [m] & 17.11 (\textbf{14.51}) [s] \\ 3 & 9 & 6.81 
(\textbf{4.86}) [m] & 21.07 (\textbf{17.65}) [s] \\ \hline \hline \end{tabular} \begin{tablenotes} \footnotesize \item where NP $=$ non-planning case and P $=$ proposed planning scheme. \end{tablenotes} \end{threeparttable} \end{center} \end{table} To thoroughly validate the control performance, the manipulator is driven to multiple target positions, and then the Qualisys motion capture system is utilized to evaluate the motion accuracy. Specifically, as discussed in Section \ref{subsec_control}, the developed manipulator includes three joints, i.e., $\begin{bmatrix} \varphi, \theta, D \end{bmatrix}^{\top}$. The desired joint values are selected from the following sets: $\varphi_{d}, \theta_{d} \in \left\lbrace -20^{\circ}, -10^{\circ}, 10^{\circ}, 20^{\circ} \right\rbrace$ and $D_{d} \in \left\lbrace 0.1 \text{m}, 0.3 \text{m}, 0.5 \text{m} \right\rbrace$, and the corresponding target positions can be computed based on \eqref{eq_xyz}. A total of 48 target positions are generated, which are evenly distributed in the workspace of the manipulator. Furthermore, a spherical marker is attached to the end-effector, ensuring that the Qualisys motion capture system can measure the end-effector position precisely through marker identification. The manipulator is actuated from the home position to each of the target positions, and the final position of the end-effector is recorded. Based on the 48 pairs of final position records and the corresponding target positions, the average errors along the $x$-axis, $y$-axis, and $z$-axis and the average distance errors are calculated. Both controllers designed in \eqref{eq_omega} and \eqref{eq_omega_v2} are tested, and the results are shown in Table \ref{table distance error}.
It can be seen that the average distance errors of these two controllers are all less than 1 cm, and owing to the nonlinear integral term, the robust controller designed in \eqref{eq_omega_v2} can achieve better control accuracy compared to the controller designed in \eqref{eq_omega}. \begin{table}[!t] \caption{Comparison of average absolute error between target position and manipulator final positions with the two designed controllers} \label{table distance error} \begin{center} \begin{tabular}{c c c} \hline \hline & Controller \eqref{eq_omega} & Controller \eqref{eq_omega_v2} \\ \hline $x$-axis error [cm] & 0.3905 & 0.3442 \\ $y$-axis error [cm] & 0.3324 & 0.2089 \\ $z$-axis error [cm] & 0.2742 & 0.2365 \\ Distance error [cm] & 0.6566 & 0.5185 \\ \hline \hline \end{tabular} \end{center} \end{table} \vspace{-10pt} \vspace{-5pt} \subsection{Field test and validation} To further evaluate the performance of the integrated prototype, field experiments are conducted in the Horticultural Teaching and Research Center of Michigan State University during the 2021 harvest season\footnote{A video of the robotic apple harvester demonstrating the field tests is available at \url{https://www.youtube.com/watch?v=\_6-5qbZplZo}}. In the field test, the robotic apple harvesting system is run autonomously and continuously to harvest fruits within its workspace with fully integrated perception, planning, and control functionalities. On average, the duration for the manipulator to approach a target apple or move back to its corresponding dropping spot ranges between 0.75 s and 1.4 s. Detaching and releasing the fruit roughly take 1 s and 0.5 s, respectively. For the successfully harvested apples, the average cycle time is approximately 3.6 s, including software algorithm processing and hardware execution. 
Compared to our previous prototype \cite{Zhang2021MECH} and the existing literature \cite{baeten2008,silwal2017,Hohimer2019,Bulanon2021}, which took 7-10 seconds to harvest an apple, the current robotic apple harvesting prototype has clearly made significant advancement in terms of harvesting efficiency, thanks to the simple yet efficient mechanism as well as the integrated algorithm design. However, there is still a considerable gap in achieving a satisfactory picking rate. In this field test, a total of 142 apples were attempted and 74 of them were picked successfully, for a picking rate of 52.1\%. We studied the failed cases and identified the following major causes. First, we found that the depth measurement of the RealSense RGB-D camera is susceptible to varying lighting conditions and to cases where the fruit is partially obscured by foliage or branches, and sometimes provides inaccurate depth information with an error as large as 10 cm. Second, unlike the v-trellis structured orchard used in \cite{silwal2017}, where most of the apples are well exposed, the orchard where we conducted the experiment does not have a well-structured fruiting system, and a high percentage of fruits are occluded by leaves and branches, which creates challenges for the robotic system in approaching the target fruits. Third, there were also many occurrences in which the end-effector failed to detach the target fruit due to inadequate vacuum power. Fig. \ref{fig_failedCase} illustrates two failed harvesting scenarios. \begin{figure}[!t] \centering \subfigure[] {\label{img_failedCase1} \includegraphics[width=0.22\textwidth]{figures/failedCase1-eps-converted-to.pdf} } \subfigure[] {\label{img_filedCase2} \includegraphics[width=0.22\textwidth]{figures/failedCase2-eps-converted-to.pdf} } \caption{Examples of failed apple harvesting.} \label{fig_failedCase} \end{figure} These findings provide useful insights towards further improvement of our system.
The two-camera setup enhances fruit detection and localization in the indoor environment but appears to have a limited contribution to improving the accuracy of fruit localization in the field. Fusing additional sensing modalities such as LiDAR could be a good way to achieve robust fruit localization and is currently under investigation. We also need to design an object segmentation algorithm to identify obstacles (trunks, branches) and develop a path planning scheme to avoid obstruction. Finally, the vacuum system and the fruit detachment strategy should also be further improved for reliable fruit picking. \section{Conclusion} \label{sec_conclusion} The algorithm design and integration for a newly-developed robotic apple harvesting prototype was introduced in this paper. The algorithm component is comprised of four core modules: calibration, perception, planning, and control. Indoor and field experiments demonstrated that the developed algorithm component can synergistically work with the hardware component to achieve the primary apple harvesting functionalities, offering a promising picking cycle time of 3.6 seconds. Guided by lessons learned from these experiments, future work will include improving fruit localization accuracy and robustness, developing object segmentation algorithms for obstacle detection, and designing an optimal path planning scheme for obstacle avoidance. \appendices \section{Stability Analysis of Controller \eqref{eq_omega}} \begin{theorem} The velocity controller developed in \eqref{eq_omega} ensures that the end-effector position, i.e., $\begin{bmatrix} x, y, z \end{bmatrix}^{\top}$, converges to the reference trajectory $\begin{bmatrix} x_{r}, y_{r}, z_{r} \end{bmatrix}^{\top}$ asymptotically.
\end{theorem} \begin{proof} To prove Theorem 1, a Lyapunov function $V \in \mathbb{R}$ is defined as \begin{equation} \label{eq_V} V = \frac{1}{2}e_{x}^{2} + \frac{1}{2}e_{y}^{2} + \frac{1}{2}e_{z}^{2}, \end{equation} where $e_{x}$, $e_{y}$ and $e_{z}$ are the error signals given in \eqref{eq_ey_ez}. Taking the time derivative of \eqref{eq_V} and utilizing \eqref{eq_dot_yz}-\eqref{eq_omega}, it can be further derived that \begin{equation} \label{eq_dot_V} \begin{aligned} \dot{V} &= e_{x}\dot{e}_{x} + e_{y}\dot{e}_{y} + e_{z}\dot{e}_{z} \\ &= e_{x}\left( -d_{x3}(\sin(\theta)\cos(\varphi)\omega_{\theta} + \cos(\theta)\sin(\varphi)\omega_{\varphi} ) \right. \\ &\quad \left. - d_{x2}\sin(\theta)\omega_{\theta} + d_{z2}\cos(\theta)\omega_{\theta} + v_{D} -\dot{x}_{r} \right) \\ &\quad + e_{y}\left( d_{x3}\cos(\varphi) \omega_{\varphi} - \dot{y}_{r} \right) \\ &\quad + e_{z} \left( -d_{x3}(\cos(\theta) \cos(\varphi) \omega_{\theta} - \sin(\theta)\sin(\varphi) \omega_{\varphi}) \right. \\ &\quad \left. -d_{x2}\cos(\theta)\omega_{\theta} - d_{z2}\sin(\theta)\omega_{\theta} - \dot{z}_{r} \right) \\ &= -k_{x}e_{x}^{2} -k_{y}e_{y}^{2} - k_{z}e_{z}^{2} \le 0. \end{aligned} \end{equation} According to \eqref{eq_V} and \eqref{eq_dot_V}, Lyapunov's stability theorem \cite{khalil2002nonlinear} can be used to conclude that $\begin{bmatrix} e_{x}, e_{y}, e_{z} \end{bmatrix}^{\top} = \begin{bmatrix} 0, 0, 0 \end{bmatrix}^{\top}$ is asymptotically stable, which indicates that $\begin{bmatrix} x, y, z \end{bmatrix}^{\top}$ converges to $\begin{bmatrix} x_{r}, y_{r}, z_{r} \end{bmatrix}^{\top}$ asymptotically. \end{proof} \iffalse \section*{Acknowledgment} This research was supported by the U.S. Department of Agriculture, Agricultural Research Service. The findings and conclusions in this paper are those of the authors and should not be construed to represent any official USDA or U.S. Government determination or policy.
Mention of commercial products in the paper does not imply endorsement by USDA over those not mentioned. \fi \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} Numerous binary black hole merger events have now been observed by gravitational wave detectors \cite{Abbott:2016blz,TheLIGOScientific:2016pea,LIGOScientific:2018mvr,Green:2017voq,Zackay:2019tzo,Nitz:2018imz,Venumadhav:2019lyq}. The general features of the gravitational wave signal from such events are now well known. The first is the inspiral regime where the signal is a chirp of increasing amplitude and frequency, and the system is effectively modeled as two point particles orbiting around each other and emitting gravitational waves as the orbit decays. As the two black holes approach each other and coalesce to form a final common black hole, the inspiral description is no longer valid, and non-perturbative aspects of general relativity become important; this is the merger regime. Eventually, as the final black hole reaches equilibrium, the gravitational wave signal can be well modeled as a superposition of damped sinusoids (and, in principle, much weaker power-law tails). Corresponding to this behavior of the gravitational wave signals, one visualizes the black holes themselves separately in the three different regimes. The inspiral regime consists of two disjoint black hole horizons slightly distorted by each other's gravitational field. The merger is visualized as two horizons very close to each other and merging to form a single horizon which is initially very distorted. Finally, the ringdown is modeled as a perturbed Kerr horizon settling down to a final equilibrium Kerr black hole. These features of the waveform must be correlated in some way with properties of the gravitational field in the strong field region. In particular, the three regimes must correspond in some way to properties of the black hole horizons. 
The details of the correlations between the different portions of the gravitational wave signal and the behavior of the horizons, and the precise demarcations between the three regimes are yet to be fully quantified. A full understanding of these correlations is obviously necessary to have a complete picture of a binary black hole merger (see e.g. \cite{Gupta:2018znn,Jaramillo:2011rf,Jaramillo:2012rr,Kamaretsos:2012bs,Kamaretsos:2011um,Bhagwat:2017tkm}). It is also of interest to understand further quantitative features of the merger, such as the evolution of physical quantities across the merger. This includes, among other things, the fluxes of energy and angular momentum, and the evolution of higher order multipoles during the merger. These might be correlated with interesting features of the radiative multipoles found in \cite{Borhanian:2019kxt}. Numerical simulations are capable of solving the Einstein equations with high accuracy for binary black hole mergers (see e.g. \cite{Pretorius:2005gq,Campanelli:2005dd,Baker:2005vv,Szilagyi:2009qz}). Such simulations provide an obvious avenue for exploring such questions. To understand the correlations between the gravitational wave signal and the black hole horizons, we need to first decide precisely what geometrical quantities on the horizon should be considered. In fact, we need to take a further step back and decide what kind of horizons should be considered. There are two different ways of visualizing horizons, using either event horizons or marginally trapped surfaces. Both of these descriptions are in good agreement in the inspiral and ringdown regimes, but differ substantially during the merger, where non-linear and non-perturbative effects of general relativity are especially important. Consider first the event horizon description. An event horizon is the boundary of the region which is causally disconnected from an asymptotically far away observer.
It is clear that locating an event horizon requires knowledge of the global properties of the spacetime infinitely far into the future. It is possible, though not trivial, to locate event horizons in numerical binary black hole simulations \cite{Hughes:1994ea,Diener:2003jc,PhysRevLett.74.630,Thornburg:2006zb}, and this yields the well known ``pair of pants'' picture \cite{Matzner:1995ib}. The cross-sections of the ``pair of pants'' correspond to the expectations described above. At early times, the cross-sections of the event horizon consist of two disjoint surfaces corresponding to the two separate black holes, and at late times of a single spherical surface. There are several interesting features of the event horizon in the merger, including the existence of a toroidal phase early in the merger and the non-differentiability of the event horizon \cite{PhysRevD.60.084019}; the non-differentiability is in fact a general feature of event horizons \cite{Chrusciel:2000gj,Chrusciel:1996tw}. The ``pair of pants'' picture is intuitively appealing and moreover it seems to provide a complete picture of the black hole merger in accordance with our physical expectations. In reality however, this picture is not so useful, both as a matter of principle and, in consequence, for any detailed quantitative studies. The problems can be traced back to the global and teleological nature of event horizons: to locate them, one needs to know what happens in the spacetime far in the future. In perturbative situations and when the end-state is known or assumed, it is indeed possible to obtain expressions for the fluxes of energy and angular momentum through the event horizon \cite{Hawking:1972hy}. In general dynamical situations however, this is not true. There are simple examples, even in spherical symmetry, when the area of the event horizon grows without any corresponding flux of energy \cite{Ashtekar:2004cn}.
Due to these teleological properties, there is no possible local expression of general validity for, say, the fluxes of energy and angular momentum through event horizons. It is thus not clear how to carry out the program of understanding the merger and relating it to gravitational wave observations outlined at the beginning of the previous paragraph. As a side remark, the teleological property also makes it difficult to locate event horizons in numerical simulations in real time, but in any case, it is certainly possible to locate them once the simulations are complete. There is an alternate way of visualizing a binary black hole merger which, for both conceptual and practical reasons, is of much greater importance in numerical simulations. The starting point is an unusual property of certain surfaces in the black hole region, first pointed out by Penrose \cite{Penrose:1964wq}. This requires the notion of the expansion $\Theta$ of a congruence of light rays; $\Theta$ is the logarithmic rate of change of an infinitesimal cross-section transverse to the null geodesics. A round sphere in flat space has $\Theta>0$ for the outgoing light rays and $\Theta< 0$ for the ingoing ones. In the black hole region, there exist spheres (the trapped surfaces) for which both sets of light rays have negative expansion. The outermost such sphere at any given time has vanishing outgoing expansion; these are the marginally trapped surfaces. In stationary situations such as for a Schwarzschild or Kerr black hole, cross-sections of the event horizon are also marginally trapped surfaces, but this correspondence is not true in non-stationary situations. Thus, cross-sections of the event horizon are marginally trapped surfaces very early in the inspiral regime or at very late times. At intermediate times, especially near the merger, the two notions are very different. 
Furthermore, unlike event horizons, marginal surfaces are not teleological and can be located at any given time without reference to any future properties of spacetime. It is possible to define physical quantities such as mass, angular momentum, multipole moments, and fluxes of energy and angular momentum quasi-locally, i.e. on the marginal surfaces. For this reason, marginal surfaces are widely used in numerical simulations when referring to the properties of black holes. There is a large literature on these quasi-local definitions and their applications to various problems in classical and quantum black hole physics (see \cite{Ashtekar:2004cn,Booth:2005qc,Faraoni:2015pmn,Krishnan:2013saa} for reviews). Despite this progress, there is still a missing ingredient, namely a unified treatment of inspiral, merger and ringdown. Thus far, all studies of binary black hole coalescence using marginal surfaces have considered the pre- and post-merger regimes separately. The reason for this is that, until recently, it was not known how marginal surfaces behave across the merger; near the merger the marginal surfaces are extremely distorted and previous numerical methods were not successful in tracking such highly distorted surfaces. Using improved numerical methods \cite{Pook-Kolb:2018igu}, we have recently shown the first evidence for the existence of a continuous sequence of marginal surfaces which interpolates between the two disjoint initial black holes and the single final remnant black hole \cite{Pook-Kolb:2019iao}. This is the analog of the ``pair of pants'' picture for event horizons. In the present work, with further improvements in numerical methods for locating marginal surfaces, we shall provide further unambiguous evidence for this scenario. We shall also show the existence of marginal surfaces with self-intersections. 
In a companion paper we shall study physical characteristics of the world-tube of marginal surfaces, which is the other important ingredient for physical applications. The scenario we obtain for the merger is summarized in Fig.~\ref{fig:merger1}. The details showing how these results are obtained will be explained in the next sections. The figure shows four snapshots of the MOTSs at various times\footnote{% We define the factor $\Munit[] := M_{\rm ADM} / 1.3$ to be able to state our coordinate quantities in terms of the ADM mass, which in our simulations was chosen to be $1.3$. } in a head-on binary black hole merger starting with Brill-Lindquist initial data. We initially have only the two individual MOTSs without a common horizon. As the black holes get closer, a common MOTS is formed which immediately bifurcates into outer and inner portions visible in the second snapshot. The outer portion loses its distortions as it approaches its equilibrium state, while the inner MOTS becomes increasingly distorted. At some point, just shortly after the third snapshot, the two individual MOTSs touch each other exactly at the time when they merge with the inner common MOTS. After this merger, the two individual MOTSs go through each other. Surprisingly, it turns out that the inner common MOTS continues to exist after the merger and now has self-intersections as shown in the last snapshot. The remainder of this paper will be devoted to explaining how we arrive at this result. A detailed study of the physical aspects of this scenario will be presented elsewhere. \begin{figure*}[h] \includegraphics[width=0.8\textwidth]{figs/BL-5_overview} \caption{% MOTS structure of a simulation of Brill-Lindquist initial data shown at different simulation times. The self-intersection of $\Surf_{in}$ is present from the first instance it is found after $\Surf_{1}$ and $\Surf_{2}$ touch at $\ensuremath{T_{\rm touch}} \approx 5.5378\Munit$. 
The upper left panel shows the initial condition and the upper right panel a time shortly after the two common MOTSs $\Surf_{out}$ and $\Surf_{in}$ have formed. The lower left panel shows the last time we were able to locate $\Surf_{in}$ before $\Surf_{1}$ and $\Surf_{2}$ touch and then start to intersect, while the lower right panel shows a time well after $\ensuremath{T_{\rm touch}}$. } \label{fig:merger1} \end{figure*} Sec.~\ref{sec:motsdefn} summarizes the basic definitions and concepts that we shall need for this paper. The improved numerical algorithm for locating marginal surfaces is described in Sec.~\ref{sec:motsfinder} and Sec.~\ref{sec:validation} shows various numerical tests to validate the method. Sec.~\ref{sec:numerics} discusses our modifications to the numerical methods used to evolve Cauchy data using the Einstein equations. These modifications allow us to reach the required numerical accuracy and convergence, and to carry out our simulations more efficiently. Sec.~\ref{sec:selfintersect} puts together all these ingredients and presents our main results. For a particular initial configuration (the head-on collision of comparable-mass non-spinning black holes), the merger of marginally trapped surfaces is demonstrated with high numerical accuracy. The merger involves the formation of a marginally trapped surface with self-intersections, showing topology change in a binary black hole merger. \section{Marginally outer trapped surfaces} \label{sec:motsdefn} Let $\ell^a$ be a congruence of future-directed null geodesics, and let $n^a$ be another such congruence satisfying $\ell^a n_a = -1$. Let $q_{ab}$ be the Riemannian metric in the 2-dimensional space transverse to both $\ell^a$ and $n^a$. The divergences of $\ell^a$ and $n^a$ are, respectively, \begin{equation} \Theta_{(\ell)} = q^{ab}\nabla_a\ell_b\,,\quad \Theta_{(n)} = q^{ab}\nabla_an_b\,.
\end{equation} Let $\ensuremath{\mathcal{S}}$ be a closed spacelike 2-surface with null normal fields $\ell^a$ and $n^a$. We assume that it is possible to assign outgoing and ingoing directions on $\ensuremath{\mathcal{S}}$, and by convention, $\ell^a$ and $n^a$ are the outgoing and ingoing null normals respectively. The classification of $\ensuremath{\mathcal{S}}$ based on conditions on the expansions is the following: \begin{itemize} \item Trapped: $\Theta_{(n)}< 0$, $\Theta_{(\ell)} < 0$ \item Un-trapped: $\Theta_{(n)}< 0$, $\Theta_{(\ell)} > 0$ \item Marginally trapped: $\Theta_{(n)}< 0$, $\Theta_{(\ell)} = 0$ \item Marginally outer-trapped: $\Theta_{(\ell)} = 0$ (no condition on $\Theta_{(n)}$) \end{itemize} All of these refer to future-directed $\ell^a$. Thus we should say future-trapped rather than just trapped, but we shall only consider future-directed cases. The most important case for us is the marginally outer trapped surface (MOTS) lying within a spatial slice $\Sigma$. As mentioned in the introduction, there is a large literature on the application of MOTSs to study black holes in various contexts (see e.g. \cite{Ashtekar:2004cn,Booth:2005qc,Gourgoulhon:2005ng,Hayward:2004fz,Jaramillo:2011zw,Krishnan:2013saa,Krishnan:2007va}). They are regularly used in numerical relativity simulations to compute physical quantities \cite{Dreyer:2002mx,Schnetter:2006yt,Gupta:2018znn}, and this formalism leads naturally to various versions of quasi-local black hole horizons. While we shall not delve into the mathematical and physical characteristics of MOTSs here, it shall be useful to understand the stability operator for a MOTS and its relevance for time evolution. For a given MOTS $\ensuremath{\mathcal{S}}$ consider a smooth one-parameter family of closed spherical surfaces $\ensuremath{\mathcal{S}}_\lambda$ which are \emph{variations} of $\ensuremath{\mathcal{S}}$ in the normal direction \cite{Newman1987} within the spatial hypersurface $\Sigma$.
On each $\ensuremath{\mathcal{S}}_\lambda$, just as for $\ensuremath{\mathcal{S}}$, we can define the null normals and calculate the expansion $\Theta_{(\ell)}(\lambda)$, which will of course generally not vanish. The differentiation of $\Theta_{(\ell)}(\lambda)$ leads to an operator $L$ on $\ensuremath{\mathcal{S}}$: \begin{equation} \delta_{fr}\Theta_{(\ell)} =: Lf\,. \end{equation} Here $r^a$ refers to the unit outward-pointing spacelike normal to $\ensuremath{\mathcal{S}}$ (within $\Sigma$) and $f$ is a scalar function on $\ensuremath{\mathcal{S}}$. Along the 1-parameter family $\ensuremath{\mathcal{S}}_\lambda$, every point on $\ensuremath{\mathcal{S}}$ traces out a curve with tangent vector $fr^a$. The variation of the expansion, i.e. the left-hand side of the above equation, is the derivative of the expansion along these curves. This procedure defines an elliptic operator $L$ on a MOTS and the precise expression for $L$ can be worked out. Generically it is of the form \begin{equation} \label{eq:stability_operator} Lf = -\Delta f + \gamma^a \partial_a f + \beta f\,. \end{equation} Here $\Delta$ is the Laplace-Beltrami operator on $\ensuremath{\mathcal{S}}$ compatible with $q_{ab}$, $\gamma^a$ is a vector field on $\mathcal{S}$ related to black hole spin, and $\beta$ is a scalar related to the intrinsic (two-dimensional) Ricci scalar of $\ensuremath{\mathcal{S}}$. Thus, $L$ is not necessarily a self-adjoint operator due to the presence of $\gamma^a$, and its eigenvalues are not necessarily real. Nevertheless, its principal eigenvalue $\Lambda_0$, i.e. the eigenvalue with the smallest real part, is indeed real. In this paper we shall restrict ourselves to non-spinning black holes with vanishing $\gamma^a$ so that all eigenvalues are real. The primary utility of $L$ is that it determines the behavior of $\ensuremath{\mathcal{S}}$ under time evolution.
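To build intuition for the spectrum of \eqref{eq:stability_operator} in the non-spinning case ($\gamma^a = 0$), one can discretize the operator numerically. The following sketch (our own illustration, not part of our finder) takes the simplest setting of an axisymmetric round unit sphere with constant $\beta$, for which the exact eigenvalues are $l(l+1)+\beta$:

```python
import numpy as np

def stability_spectrum(beta, N=200):
    """Eigenvalues of L f = -Laplacian_{S^2} f + beta f for axisymmetric f
    on the unit round sphere, via a conservative finite-difference
    discretization on theta_j = (j + 1/2) pi / N.  Exact spectrum:
    l*(l+1) + beta for l = 0, 1, 2, ...
    """
    h = np.pi / N
    theta = (np.arange(N) + 0.5) * h
    s = np.sin(theta)                      # sin(theta) at cell centers
    s_face = np.sin(np.arange(N + 1) * h)  # sin(theta) at faces; 0 at the poles
    L = np.zeros((N, N))
    for j in range(N):
        L[j, j] = (s_face[j] + s_face[j + 1]) / (s[j] * h**2) + beta
        if j > 0:
            L[j, j - 1] = -s_face[j] / (s[j] * h**2)
        if j < N - 1:
            L[j, j + 1] = -s_face[j + 1] / (s[j] * h**2)
    return np.sort(np.linalg.eigvals(L).real)

eigs = stability_spectrum(beta=-1.0)
print(eigs[:3])   # close to [beta, 2 + beta, 6 + beta]
```

With $\beta < 0$ this toy model already exhibits the situation discussed below: a negative principal eigenvalue while the rest of the spectrum stays away from zero.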
It was shown that if the principal eigenvalue is positive, then the MOTS evolves smoothly in time \cite{Andersson:2005gq,Andersson:2007fh,Andersson:2008up}. This stability condition is equivalent to saying that an outward deformation of $\mathcal{S}$ makes it untrapped, which is what we expect to happen for the apparent horizon. While not emphasized in \cite{Andersson:2005gq,Andersson:2007fh,Andersson:2008up}, the condition for the existence of $\ensuremath{\mathcal{S}}$ under time evolution is the invertibility of $L$. Thus, if 0 is not in the spectrum of $L$, then $\ensuremath{\mathcal{S}}$ continues to evolve smoothly. In the case when $\Lambda_0<0$ (which will happen in our case), we must consider the next eigenvalue $\Lambda_1$ and check that it does not vanish. See e.g. \cite{Booth:2017fob,Sherif:2018scu,Mach:2017peu} for examples of studies which consider this notion of stability in specific cases. \section{Numerical methods for locating highly distorted MOTSs} \label{sec:motsfinder} Consider a Cauchy surface $\Sigma$ on which we wish to locate a MOTS $\ensuremath{\mathcal{S}}$. Let $\Sigma$ be equipped with a Riemannian metric $h_{ij}$ with the associated Levi-Civita connection $D_a$, and let the extrinsic curvature of $\Sigma$ be $K_{ij}$. Let $r^a$ be the unit spacelike normal to $\ensuremath{\mathcal{S}}$ within $\Sigma$ and let $\tau^a$ be the unit timelike normal to $\Sigma$. Then, a suitable choice of null normals to $\ensuremath{\mathcal{S}}$ is \begin{equation} \ell^a = \frac{1}{\sqrt{2}} \left(\tau^a+ r^a\right)\,,\quad n^a = \frac{1}{\sqrt{2}}\left(\tau^a - r^a\right)\,. \end{equation} The condition $\Theta_{(\ell)}=0$ is rewritten as \begin{equation} \label{eq:motsequation} D_ar^a + K_{ab}r^ar^b - K = 0\,. \end{equation} This is the equation that we must solve to find $\ensuremath{\mathcal{S}}$.
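A simple symbolic sanity check of \eqref{eq:motsequation} (our own illustration, not part of the finder): for a single time-symmetric Brill-Lindquist puncture one has $K_{ij}=0$ and $h_{ij}=\psi^4\delta_{ij}$ with $\psi = 1 + m/2r$, so the condition reduces to $D_a r^a = 0$ for coordinate spheres, with the MOTS at $r = m/2$:

```python
import sympy as sp

r, m = sp.symbols('r m', positive=True)

# Time-symmetric single-puncture Brill-Lindquist data:
# h_ij = psi^4 delta_ij with psi = 1 + m/(2r), K_ij = 0,
# so the MOTS condition reduces to D_a r^a = 0.
psi = 1 + m / (2 * r)

# Unit outward normal of a coordinate sphere: r^r = psi^(-2);
# sqrt(det h) = psi^6 r^2 sin(theta) (the sin(theta) factor cancels).
sqrt_h = psi**6 * r**2
expansion = sp.simplify(sp.diff(sqrt_h * psi**-2, r) / sqrt_h)

print(expansion)
print(sp.simplify(expansion.subs(r, m / 2)))   # 0: the MOTS sits at r = m/2
```

The sign of the result also confirms the classification above: coordinate spheres with $r > m/2$ are untrapped and those with $r < m/2$ are trapped.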
The conventional approach \cite{Thornburg:2006zb,Thornburg:2003sf} assumes that the surface is defined by a level-set function \begin{equation} \label{eq:starshaped} F(r,\theta,\phi) = r - h(\theta,\phi)\,, \end{equation} where $(r,\theta,\phi)$ are spherical coordinates on $\Sigma$. This assumes that $\ensuremath{\mathcal{S}}$ is \emph{star-shaped} with respect to the origin in the chosen coordinate system. In other words, any ray drawn from the origin must intersect the surface exactly once. This assumption will not hold for the surfaces of interest for us. A variant of this method was proposed in \cite{Pook-Kolb:2018igu} and shown to be capable of locating extremely distorted surfaces. This new method is based on using a \emph{reference} surface $\sigma_R$, and representing $\ensuremath{\mathcal{S}}$ in terms of distances $h(\lambda, \mu)$ from $\sigma_R$, where $\lambda, \mu$ parameterize $\sigma_R$. As long as the reference surface is chosen appropriately, the method can be used to locate almost arbitrarily distorted surfaces. For example, in a numerical evolution, one could choose $\sigma_R$ to be the MOTS located in the previous time step. The problem of locating $\ensuremath{\mathcal{S}}$ then translates to solving a nonlinear partial differential equation for the horizon function $h$. This can be done e.g. via a pseudospectral method, which is what we chose. For our present application, we have implemented two additional features compared to what was used in \cite{Pook-Kolb:2018igu}. These features address two additional complications that necessarily arise here: i) surfaces which have a very narrow ``neck'' (almost like a figure-eight), and in some instances have features like cusps and self-intersections. For this purpose, motivated by the methods used in \cite{Jaramillo:2009zz}, we employ bi-spherical coordinates \cite{Ansorg:2005bp}.
ii) Unlike in \cite{Pook-Kolb:2018igu} where the MOTS finder was applied to analytical initial data, we now have to deal with numerically generated data on a finite mesh. This requires the use of interpolation schemes, some of which were already used in \cite{Pook-Kolb:2019iao}. We now describe in turn both of these additional features. We shall still be restricted to axisymmetry in this work, reducing the task of finding the horizon function $h$ to a one-dimensional problem. However, no in-principle difficulties are foreseen for general non-axisymmetric cases. \subsection{Bi-spherical coordinates} \label{subsec:bipolar} \begin{figure}[h] \centering \includegraphics[width=0.48\textwidth]{figs/bipolar-t3_0} \includegraphics[width=0.48\textwidth]{figs/bipolar-t5_5} \includegraphics[width=0.48\textwidth]{figs/bipolar-t6_5} \caption{% Visualizations of $\Surf_{in}$ in bipolar coordinates at different simulation times $T$. The left column shows the MOTS and lines of constant $s$ and $t$ in the $(x,z)$ plane, while the right column contains $\Surf_{in}$ in the $(t,s)$ plane. Note that only positive values of $s$ are shown, though the full MOTS is of course symmetric about $s=0$. The first row shows a slightly distorted MOTS in both representations. At $T=5.5\Munit$ (second row), $\Surf_{in}$ is highly distorted in the $(x,z)$ plane and only slightly distorted in the bi-spherical coordinates. The last row shows a case of a self-intersecting $\Surf_{in}$. The dot marks the location of the ``neck'' in all cases. } \label{fig:bipolar1} \end{figure} For axisymmetric surfaces, choosing the symmetry axis to be the $z$ axis, we can restrict ourselves to the $(x,z)$ plane, and it is often convenient to characterize any point using polar coordinates, i.e. using the distance from the origin and the angle of the position vector with the $z$ axis. However, these coordinates are not optimal for describing surfaces with a very narrow neck connecting two spherical portions, i.e.
close to a figure-eight in shape. We use instead the bipolar coordinates $(s,t)$ which are based on two foci located at $x=0$, $z=c \pm a$: \begin{equation}\label{eq:bipolar} x = \frac{a\sin s}{\cosh t - \cos s}\,,\quad z = \frac{a\sinh t}{\cosh t - \cos s} + c\,. \end{equation} The $(s,t)$ coordinates make the highly distorted inner common MOTS $\Surf_{in}$ much easier to parameterize. Examples demonstrating the effect of this coordinate transformation for three different simulation times are shown in Fig.~\ref{fig:bipolar1}. The three snapshots are at times i) $T=3\Munit$, which is a bit after the top right panel of Fig.~\ref{fig:merger1}, when $\Surf_{in}$ does not have extreme distortions; ii) $T=5.5\Munit$, shortly before the bottom left panel in Fig.~\ref{fig:merger1}, where $\Surf_{in}$ has a very narrow neck; and finally iii) $T=6.5\Munit$, a little bit before the bottom right panel of Fig.~\ref{fig:merger1}, where $\Surf_{in}$ has self-intersections. The bi-spherical coordinates are employed only for $\Surf_{in}$; none of the other horizons have the narrow neck and these coordinates are unnecessary to locate them. To determine the value of $c$ in \eqref{eq:bipolar}, we first find the two individual MOTSs $\Surf_{1}$ and $\Surf_{2}$ and choose $c$ to lie at the coordinate midpoint between the lowest point of $\Surf_{1}$ and the uppermost point of $\Surf_{2}$. As detailed below, we find the various MOTSs in a series of time slices produced by the numerical simulation. During this {\em tracking} of $\Surf_{in}$, we numerically approximate the optimal value for $a$ as a post-processing step once the MOTS is located. In practice, this is done by representing $\Surf_{in}$ in bi-spherical coordinates and expressing the coordinate functions $s(\lambda), t(\lambda)$ as a truncated series of sines and cosines, respectively, which have the correct symmetry for the problem.
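For reference, the map \eqref{eq:bipolar} has a closed-form inverse, which the following sketch (our own illustration; the values of $a$ and $c$ are arbitrary inputs here) implements and round-trips:

```python
import math

def bipolar_to_cartesian(s, t, a, c=0.0):
    """Forward map of the bipolar coordinates; foci at x = 0, z = c +- a."""
    d = math.cosh(t) - math.cos(s)
    return a * math.sin(s) / d, a * math.sinh(t) / d + c

def cartesian_to_bipolar(x, z, a, c=0.0):
    """Closed-form inverse, valid away from the foci."""
    zp = z - c
    rho2 = x * x + zp * zp
    t = math.atanh(2.0 * a * zp / (rho2 + a * a))
    s = math.atan2(2.0 * a * x, rho2 - a * a)
    return s, t

# Round trip for a point in the "neck" region between the two foci.
s0, t0 = 2.5, 0.3
x, z = bipolar_to_cartesian(s0, t0, a=1.2, c=0.4)
s1, t1 = cartesian_to_bipolar(x, z, a=1.2, c=0.4)
print(abs(s1 - s0), abs(t1 - t0))   # both at round-off level
```

The narrow neck of a figure-eight-like curve sits near $s = \pm\pi$, $t = 0$, where the forward map spreads nearby Cartesian points over a wide coordinate range, which is what makes the representation of $\Surf_{in}$ so much milder in $(s,t)$.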
We use a slightly lower number of basis functions than necessary to obtain convergence and check the residual expansion of the now imperfect representation. Varying the parameter $a$, we repeat this process to find the value resulting in the lowest residual. The value for $a$ determined this way is then used for finding the MOTS in the next slice, assuming the optimal parameter varies slowly with simulation time. A further optimization is to re-parameterize the reference surface $\sigma_R$ prior to finding the MOTS. A natural parameterization would be by arc length, either proper or in coordinate space, the latter obviously being better suited for our numerical representation of the surface. If the curve representing $\sigma_R$ in coordinate space is $\lambda \mapsto \gamma_R(\lambda)$, this would mean that $\Vert\gamma_R'(\lambda)\Vert_2 \equiv \text{const}$. However, we obtained faster convergence by taking a non-constant speed function such that $\Vert\gamma_R'(\lambda)\Vert_2$ is roughly\footnote{% We smooth the speed function along the MOTS by exponentially damping the coefficients of a cosine series representation. This reduces higher frequencies in the density of collocation points along $\ensuremath{\mathcal{S}}$. } proportional to $1/k_{AB}k^{AB}$, where $k_{AB}$ is the second fundamental form of $\sigma_R$ embedded in coordinate space. Utilization of the bi-spherical coordinates together with the above re-parameterization has led to convergent solutions for $\Surf_{in}$ with about one order of magnitude fewer collocation points compared to the previous method. \subsection{Interpolating numerical data} \label{subsec:interpolation} In each time step, our axisymmetric numerical simulations produce data on a 2-dimensional grid of points lying equidistant in the $(x,z)$ coordinate plane.
However, for the nonlinear search for a MOTS $\ensuremath{\mathcal{S}}$, the expansion $\Theta_{(\ell)}$ and its derivatives have to be computed on a set of points $x_n\in\mathbb{R}^2$ along trial surfaces $\ensuremath{\mathcal{S}}^{i}$, cf. \cite{Pook-Kolb:2018igu}, Section III.B. This requires evaluating the components of the metric $h_{ij}$, its first and second spatial derivatives, the extrinsic curvature $K_{ij}$ and its first spatial derivatives at the points $x_n$, which generally do not coincide with any of the grid points of the simulation. In \cite{Pook-Kolb:2019iao} we used $4$th order accurate $5$-point Lagrange interpolation. Derivatives were obtained by evaluating $4$th order accurate finite differencing derivatives using the data on the grid and then interpolating the results using $5$-point Lagrange interpolation. For the present paper, however, we switched to quintic Hermite interpolation, which allows us to control the values as well as the first and second derivatives of the interpolant at the grid points. These derivatives are evaluated using $6$th order accurate finite differencing. Derivatives between the grid points are then computed by analytically differentiating the interpolating polynomial. The advantage is that now first and second derivatives are continuous throughout, which is not the case with Lagrange interpolation. Interpolation of discrete data becomes more accurate with increased grid resolution. However, it will never be exact, and even floating-point round-off cannot be neglected, especially near the punctures at computationally feasible resolutions. These additional inaccuracies may limit the numerical convergence as they move the plateau we see below in Fig.~\ref{fig:spectral_convergence} up---for example when moving closer to the punctures or reducing the grid resolution---or down.
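A minimal one-dimensional sketch of quintic Hermite interpolation (our own illustration, using exact endpoint derivatives instead of the finite-difference values employed in the simulations): on each grid interval one builds the unique degree-5 polynomial matching the value and first two derivatives at both endpoints, so the interpolant and its first two derivatives are continuous across grid points by construction.

```python
import numpy as np

def quintic_hermite_coeffs(x0, x1, f0, d0, s0, f1, d1, s1):
    """Coefficients c[0..5] (ascending powers) of the degree-5 polynomial
    with value f, first derivative d and second derivative s at x0 and x1."""
    rows, rhs = [], [f0, d0, s0, f1, d1, s1]
    for x in (x0, x1):
        rows.append([x**k for k in range(6)])                                  # p(x)
        rows.append([k * x**(k - 1) if k >= 1 else 0.0 for k in range(6)])     # p'(x)
        rows.append([k * (k - 1) * x**(k - 2) if k >= 2 else 0.0
                     for k in range(6)])                                       # p''(x)
    return np.linalg.solve(np.array(rows), np.array(rhs))

def peval(c, x):
    """Evaluate the polynomial with ascending coefficients c at x."""
    return sum(ck * x**k for k, ck in enumerate(c))

# Interpolate sin on one grid interval from endpoint data.
x0, x1 = 0.0, 0.5
c = quintic_hermite_coeffs(x0, x1,
                           np.sin(x0), np.cos(x0), -np.sin(x0),
                           np.sin(x1), np.cos(x1), -np.sin(x1))
print(abs(peval(c, 0.25) - np.sin(0.25)))   # tiny: interpolation error ~ h^6
```

By construction the scheme reproduces any quintic polynomial exactly, and with endpoint derivatives from $6$th order finite differencing the overall accuracy of the interpolated fields is limited by the differencing, not by the interpolation.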
To account for this effect while tracking a MOTS through simulation time, we compute the expansion between the collocation points each time the expansion drops below a pre-set tolerance {\em at} the collocation points. After this, we increase the spectral resolution and continue until the tolerance is met at the now larger set of collocation points. This is repeated until the expansion between the collocation points no longer improves, signaling that we have reached the plateau. A second criterion for stopping the increase of the spectral resolution is derived from the absolute values of the coefficients $a_n$ of the spectral representation of the horizon function $h$. In a pseudospectral method using a basis of cosines, one expects these coefficients to fall off exponentially for large $n$ if the solution exists. We hence stop increasing the resolution if sub-exponential fall-off of the $a_n$ is found following a region of exponential convergence. This prevents our code from overfitting $\ensuremath{\mathcal{S}}$ to features introduced by the interpolation method, which happens especially for lower-resolution simulations. \section{Validating the MOTS finder} \label{sec:validation} With the addition of numerical simulations, the task for our MOTS finder has become more general compared to the purely time-symmetric cases considered in \cite{Pook-Kolb:2018igu}. Therefore, and in light of the surprising result of a self-intersecting MOTS, it is important to validate the method and test it for correctness in an analytic case where the result is known. We shall later present convergence results for further validation. For this purpose we construct a non-time-symmetric slice with analytically known horizon shape. We choose a slice of the Schwarzschild spacetime in Kerr-Schild coordinates \cite{Matzner:1998pt}, i.e.
\begin{eqnarray}\label{eq:schwarzschildKS} h_{ij} &=& \delta_{ij} + \frac{2m}{r} \frac{x_i x_j}{r^2}\,,\\ \label{eq:schwarzschildKS_curv} K_{ij} &=& \frac{2m}{r^4} \frac{1}{\sqrt{1+2m/r}} \left[ r^2 \delta_{ij} - \left( 2 + \frac{m}{r} \right) x_i x_j \right] \,, \end{eqnarray} where $\delta_{ij}$ is the flat metric, $x_i$ are the standard Cartesian coordinates for the flat metric, and we shall often use $(x,y,z)$ instead of $x_i$ when no confusion can arise. For Schwarzschild, the radial coordinate is just $r^2= x^2+y^2+z^2$. These data have nontrivial extrinsic curvature with the horizon being located at $r=2m$. To make the horizon non-star-shaped and thus the task more difficult (but still axisymmetric), we transform the coordinates $(x,z) \rightarrow (\bar{x},\bar{z})$ via \begin{eqnarray}\label{eq:schwarzschildCoordTransform} \bar x = x \left(1 - \frac{\beta}{\cosh((z-z_0)/\gamma)}\right) \,,\quad \bar z = z\,. \end{eqnarray} These equations are used to sample $h_{ij}$ and $K_{ij}$ on grids of various resolutions from $1/h = 30$ to $1/h = 1920$. We choose a reference shape that is close but not identical to the horizon. The MOTS $\ensuremath{\mathcal{S}}$ and the reference shape $\sigma_R$ are shown in the first panel of Fig.~\ref{fig:schwarzschildKW-plot}. For this test we compute the area $A$ of $\ensuremath{\mathcal{S}}$ and compare it to the exact area $A_{\rm exact} = 16\pi m^2$, where $m=1$. We also compute the maximum coordinate distance $\Vert\ensuremath{\mathcal{S}}-\ensuremath{\mathcal{S}}_{\rm exact}\Vert_\infty$ of the numerical solutions to the exact horizon. The second panel demonstrates that our numerical solutions converge to the expected solutions as the resolution of the numerical grid is increased. 
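Before the coordinate transformation \eqref{eq:schwarzschildCoordTransform} is applied, the data \eqref{eq:schwarzschildKS}--\eqref{eq:schwarzschildKS_curv} are spherically symmetric, and the horizon location $r=2m$ can be checked symbolically against \eqref{eq:motsequation}. The following sketch (our own check, reducing the Cartesian expressions to their radial and tangential components) does this:

```python
import sympy as sp

r, m = sp.symbols('r m', positive=True)
f = 1 + 2 * m / r   # h_rr of the Kerr-Schild slice; the angular part is flat

# Components of K_ij contracted with the flat-unit radial direction n^i = x^i/r
# and a flat-unit tangential direction e^i (x_i e^i = 0):
K_nn = (2 * m / r**2) / sp.sqrt(f) * (1 - (2 + m / r))
K_ee = (2 * m / r**2) / sp.sqrt(f)

# D_a r^a = (1/sqrt(h)) d/dr (sqrt(h) r^r) with sqrt(h) = r^2 sqrt(f) sin(theta)
# (sin(theta) cancels) and unit normal component r^r = 1/sqrt(f):
div_r = sp.diff(r**2 * sp.sqrt(f) / sp.sqrt(f), r) / (r**2 * sp.sqrt(f))
K_rr_term = K_nn / f            # K_ab r^a r^b
trK = K_nn / f + 2 * K_ee       # h^{ij} K_ij

Theta = sp.simplify(div_r + K_rr_term - trK)
print(Theta)                               # proportional to (1 - 2m/r)
print(sp.simplify(Theta.subs(r, 2 * m)))   # 0: the MOTS sits at r = 2m
```

The transformation \eqref{eq:schwarzschildCoordTransform} only relabels points, so the transformed horizon remains the image of $r=2m$, which is the exact surface the finder is tested against.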
\begin{figure}[h] \centering \includegraphics[width=0.4\textwidth]{figs/SchwarzschildTransformed-plot} \includegraphics[width=0.45\textwidth]{figs/SchwarzschildTransformed-convergence} \caption{% {\em Top}: Horizon $\ensuremath{\mathcal{S}}$ and reference shape $\sigma_R$ for the transformed slice of Schwarzschild spacetime. The parameters for the transformation via \eqref{eq:schwarzschildCoordTransform} are $\beta = 0.97$, $\gamma = 0.7\Munit$ and $z_0 = 0.8\Munit$. {\em Bottom}: Convergence of the area (dashed) and surface coordinate shape (solid) with increased grid resolution. In each case, the spectral resolution was chosen such that a further increase does not result in a lower residual expansion (see section~\ref{sec:convergence}). This shows the error introduced by the spatial discretization and interpolation. } \label{fig:schwarzschildKW-plot} \end{figure} \section{The numerical evolutions} \label{sec:numerics} \subsection{Formulations, Discretization, and Implementation} We set up initial conditions for the spacetime geometry as two puncture black holes using the method of Brill and Lindquist \cite{PhysRev.131.471}. To evolve the geometry, we use the BSSN formulation of the Einstein equations with a $1+\log$ slicing and a $\Gamma$-driver shift condition \cite{Alcubierre:2000xu, Alcubierre:2002kk}. We also impose axisymmetry throughout the calculation. For our setup (see below), we choose a domain with $x \in [0;10]$, $z \in [-10;10]$, and $T \in [0;7]$. (Due to axisymmetry, we only consider the hyperplane $y = 0$.) For simplicity, we use Dirichlet boundary conditions to set all time derivatives to zero at the outer boundary. We check that the errors introduced by the artificial boundary conditions do not affect the geometry near the MOTSs. We choose a Cartesian basis for the tangent space, i.e. we represent vectors and tensors via their $x, y, z$ components.
Although axisymmetry requires that certain components or linear combinations of components must vanish, we do not explicitly impose such conditions. Instead, we only impose axisymmetry on spatial derivatives: We require that the Lie derivatives of all quantities in the $\phi$ direction be zero, and we use this to remove all $y$ derivatives. ($y$ derivatives are then either $0$, or are replaced by combinations of various $x$ derivatives.) We use l'H\^opital's rule to regularize these expressions on the axis. This closely follows the approach described in \cite{Pretorius_2005}, extended to handle second derivatives as well. The set of expressions for handling first and second $y$ derivatives for all tensor ranks appearing in the BSSN formulation is lengthy, and is available in a Mathematica script as part of Kranc \cite{Husa:2004ip, Kranc:web}. In our discretization, we also require a small region ``on the other side'' of the axis (where $x<0$), which we calculate by rotating the region with $x>0$ by $\pi$. We also experimented with the \emph{Cartoon} method \cite{Alcubierre:1999ab} to impose axisymmetry. Cartoon uses a spatial rotation in the $\phi$ direction and then spatial interpolation to populate points away from the $y$ axis, so that $y$ derivatives can be calculated in the standard manner. We found that the Cartoon method does not work well with higher order (higher than $4$th) finite differencing: The result of a Lagrange interpolation is not continuous, which leads to large oscillations when derivatives are taken near the axis where the Cartoon rotation angle is large. In our setup, the punctures are located on the $z$ axis and are initially at $z_\pm = \pm 0.65$. The puncture masses are $m_+ = 0.5$ and $m_- = 0.8$ (i.e. the ``upper'' black hole is smaller). The punctures have no linear or angular momentum. Details of initial and gauge conditions are described in \cite{wardell_barry_2016_155394}. 
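The on-axis regularization described above can be illustrated on the simplest instance, the second $y$-derivative of an axisymmetric scalar evaluated in the $y = 0$ plane; this is a minimal sketch of the idea only, while the full tensor-rank machinery is contained in the cited Kranc scripts:

```python
def d2f_dy2_axisym(x, f_x, f_xx):
    """Second y-derivative of an axisymmetric scalar f, evaluated
    in the y = 0 plane: by the chain rule it equals f_x / x away
    from the axis, and by l'Hopital's rule it tends to f_xx as
    x -> 0 (f_x vanishes on the axis for smooth axisymmetric f)."""
    return f_xx if x == 0.0 else f_x / x
```

For example, for $f = x^2 + y^2$ one has $\partial_y^2 f = 2$ everywhere, which the formula reproduces both on and off the axis.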
Our exact parameter settings are available in the parameter files in the repository \cite{pook_kolb_daniel_2019_3260171}. We use $6$th order finite differencing to discretize space. We also add a $6$th order Kreiss-Oliger artificial dissipation, which reduces our spatial accuracy to $5$th order. We use a $6$th order accurate Runge-Kutta time integrator. Our discretization is globally $5$th order accurate, as we demonstrate below in section \ref{sec:convergence}. We do not use mesh refinement or multiple grid patches as these would not be beneficial for our calculations that span only a short time and a small region of space, compared to systems of orbiting binary black holes. Compared to $4$th and $8$th order discretizations, $6$th order is most efficient for us. $4$th order calculations require significantly higher resolutions, and $8$th order calculations are significantly slower since they use larger stencils and require more integrator substeps. $8$th order calculations also require higher resolutions before their error falls below that of $6$th order calculations. We perform our calculation via the \emph{Einstein Toolkit} \cite{Loffler:2011ay, EinsteinToolkit:web}. We use \emph{TwoPunctures} \cite{Ansorg:2004ds} to set up initial conditions and an axisymmetric version of \emph{McLachlan} \cite{Brown:2008sb} to solve the Einstein equations, which uses \emph{Kranc} \cite{Husa:2004ip, Kranc:web} to generate efficient C++ code. \subsection{Accuracy, Convergence} \label{sec:convergence} To demonstrate the accuracy of our discretization, we plot in Fig.~\ref{fig:constr} the Hamiltonian constraint \begin{equation}\label{eq:ham} \mathcal{H} = K_{ab} K^{ab} - K^2 - R \end{equation} on grid points close to the inner common MOTS at two different times for different grid resolutions. Here, $R$ is the Ricci scalar of the slice $\Sigma$. There is no significant difference between the two times.
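For reference, the interior stencils behind the discretization described above are the standard centered $6$th-order first derivative and a Kreiss--Oliger term built from the $6$th finite difference. The following is a minimal periodic-grid sketch; the sign and normalization conventions of the dissipation term vary between codes:

```python
import numpy as np

def d1_6th(f, h):
    """Centered 6th-order first derivative on a periodic grid."""
    offs = [-3, -2, -1, 1, 2, 3]
    coef = [-1.0, 9.0, -45.0, 45.0, -9.0, 1.0]
    return sum(c * np.roll(f, -k) for k, c in zip(offs, coef)) / (60.0 * h)

def ko_dissipation(f, h, eps=0.1):
    """Kreiss-Oliger term proportional to h^5 d^6f/dx^6, built from
    the 7-point 6th-difference stencil; added to the right-hand side
    it damps high-frequency noise and reduces the scheme from 6th to
    5th order accuracy."""
    coef = [1.0, -6.0, 15.0, -20.0, 15.0, -6.0, 1.0]
    d6 = sum(c * np.roll(f, -(k - 3)) for k, c in enumerate(coef))
    return -eps / (64.0 * h) * d6
```

On a smooth periodic test function the derivative stencil exhibits the expected $h^6$ truncation error, and the dissipation operator annihilates constants.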
Note that in coordinate space, $\Surf_{in}$ lies closer to the punctures in its upper than in its lower half, compare also Fig.~\ref{fig:merger1}. In terms of the curve's proper length parameter $\bar\lambda$ (scaled to $\bar\lambda\in[0,\pi]$), this corresponds to $\bar\lambda \lesssim \pi/2$ and $\bar\lambda \gtrsim \pi/2$, respectively, where our representation only covers half of the plotted MOTS (say for positive $x$ values) due to axisymmetry. The results have been scaled to account for $5$th order convergence. We indeed find $5$th order convergence for $1/h \geq 240$ closer to the punctures and for $1/h \geq 120$ further away from the punctures. In that latter region, the highest resolution results with $1/h = 960$ show slightly larger errors than expected from $5$th order accuracy. This is caused by round-off errors starting to dominate the finite difference derivatives, as is demonstrated in Fig.~\ref{fig:differentiation_roundoff}. Here, the different curves represent the results obtained using stencils of $3$ to $9$ points for the derivatives of the metric components, corresponding to $2$nd to $8$th order accuracy. We see the typical behavior of convergence up to the resolution at which the round-off error becomes dominant. This happens at lower resolutions for the higher order methods as these reach the round-off limit earlier. Note that the optimal resolution depends on the function being approximated and in our case becomes larger the closer we get to the puncture. This explains the different behavior in the first and second half of the plots in Fig.~\ref{fig:constr}. \begin{figure}[h] \centering \includegraphics[width=0.48\textwidth]{figs/ham_inner_5_35_proper_mult} \includegraphics[width=0.48\textwidth]{figs/ham_inner_5_73_proper_mult} \caption{% Convergence of the Hamiltonian constraint for increasing resolutions $1/h = 60$, $120$, $240$, $480$, $960$ at one time step before (upper panel) and after (lower panel) the individual horizons touch. 
The constraint is computed at grid points close to $\Surf_{in}$ and plotted over the proper length (scaled to $[0,\pi]$) of the curve representing $\Surf_{in}$ in the $(x,z)$ plane. } \label{fig:constr} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.48\textwidth]{figs/ham_over_res} \caption{% Hamiltonian constraint computed at one point of a slice of the Schwarzschild spacetime in Kerr-Schild coordinates as defined in \eqref{eq:schwarzschildKS}, \eqref{eq:schwarzschildKS_curv} for grid resolutions $1/h = 20$ to $1/h = 10^5$. Since this is an exact solution of the Einstein equations, we expect $\mathcal{H} \equiv 0$, and this figure thus shows the discretization error. The constraint is evaluated at a coordinate distance of $r \approx 0.24\,m$ from the puncture. } \label{fig:differentiation_roundoff} \end{figure} \section{The existence of self-intersecting MOTSs} \label{sec:selfintersect} With the technical improvements at hand, we now turn to the main result of this paper, namely the merger of the inner MOTS with the two individual horizons, and the occurrence of self intersecting MOTSs just after this merger (see Fig.~\ref{fig:merger1}). We will study a single configuration with high resolution. We focus primarily on numerical accuracy and convergence to confirm the merger scenario and the existence of self-intersecting MOTSs. There are obviously numerous physical and geometrical properties of great interest. First however, we need to prove this scenario numerically beyond any reasonable doubt, which is what we shall do here. A detailed discussion of the interesting physical and geometrical properties of the world tube of MOTSs will be postponed to a forthcoming paper. Similarly, we shall not discuss here the various extensions to non-time symmetric and non-axisymmetric data. As mentioned previously, we start with Brill-Lindquist initial data with the bare masses $m_+ = 0.5$ and $m_- = 0.8$. 
The initial coordinate separation between the punctures is $1.3\Munit$ (i.e. $1$ in units of the total ADM mass $M_{\rm ADM} = m_+ + m_-$). Simulations are performed at various grid resolutions: $1/h = 60$, $120$, $240$, $480$, $960$. We have already shown in the previous section that the numerical solution of the Einstein equations for the given initial data is sufficiently accurate and all constraint violations converge at the expected rate when $h$ is varied. Given this numerical spacetime, we can use our horizon finder to locate the various MOTSs. It remains to be shown now that the surfaces thus found are indeed MOTSs. Before proceeding further, it might be useful to clarify the nature of the MOTS with self-intersections shown in the bottom right panel of Fig.~\ref{fig:merger1}. Viewed as a submanifold of the 3-dimensional Riemannian spatial slice $\Sigma$, this manifold might appear to be non-differentiable at the point of self-intersection and one might be concerned that there is no well defined normal to the manifold at that point (and hence no well defined expansion either). This is however incorrect, and formally the curve is simply understood as an \emph{immersion} instead of an embedding. In the present case, because of axisymmetry, we can restrict ourselves to a two-dimensional section (say the $x$-$z$ plane as we have been using so far). Then the horizon is simply a parameterized curve, i.e. a mapping of the circle $S^1$ into $\Sigma$, $f:S^1\rightarrow \Sigma$ (this is precisely how this curve is defined numerically). Using the map $f$, we can push forward tangent vectors to $\Sigma$ and thus we have well defined normals depending on which direction one traverses the point of self-intersection (see Fig.~\ref{fig:knot}). The relevant topological property of the curve is the winding number, i.e. the number of rotations that a tangent vector undergoes when we go all the way around the curve; each loop adds +1 to the winding number. 
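For a sampled closed curve, the winding (turning) number just described can be estimated by accumulating the signed rotation of the discrete tangent; a minimal sketch, not part of the authors' finder:

```python
import numpy as np

def winding_number(curve):
    """Turning number of a closed planar curve sampled as an (N, 2)
    array: the total signed rotation of the discrete tangent divided
    by 2*pi. Each loop contributes +1 (or -1, depending on
    orientation)."""
    t = np.diff(curve, axis=0, append=curve[:1])   # closing tangent segments
    ang = np.arctan2(t[:, 1], t[:, 0])
    dang = np.diff(ang, append=ang[:1])
    dang = (dang + np.pi) % (2.0 * np.pi) - np.pi  # wrap to (-pi, pi]
    return int(round(dang.sum() / (2.0 * np.pi)))
```

A simple closed curve has winding number $\pm 1$; a curve traversing a loop twice has winding number $\pm 2$, which is why the transition cannot happen through a smooth deformation alone.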
Curves with different winding numbers cannot be smoothly deformed into each other \cite{CM_1937__4__276_0}. This is why in order to get the self-intersections, it is necessary to go through the cusp at the merger. In non-axisymmetric situations, we have to necessarily deal with mappings of $S^2$ into the 3-manifold $\Sigma$ (which are in fact simpler \cite{10.2307/1993205}), but we shall not discuss this here. \begin{figure}[h] \centering \includegraphics[width=0.5\columnwidth]{figs/normals_self_crossing.png} \caption{ Tangent vectors at a regular crossing-point of a curve. As we traverse the curve following the arrows from the top-right, we push-forward tangent vectors in the usual way. Thus, the first time the self-intersection is crossed, the tangent vector is $V$. The second time, i.e. after traversing the loop in the clockwise direction, the tangent vector is $W$. Normal vectors are also well defined along the curve and uniquely specified once an outward direction is specified at any point. In our specific example, we say that at the north pole, the outward direction is the $+z$ direction. } \label{fig:knot} \end{figure} \subsection{Convergence} \label{subsec:convergence} Except for the modifications introduced earlier in Sec.~\ref{sec:motsfinder}, we employ the same basic Newton-Kantorovich search as in \cite{Pook-Kolb:2018igu} with each step being performed using a pseudospectral method. If the nonlinear search converges, we expect the exponential convergence of the individual pseudospectral steps to carry over to the solution of the full nonlinear problem. \begin{figure}[h] \centering \includegraphics[width=0.48\textwidth]{figs/spectral_convergence_T5_35} \includegraphics[width=0.48\textwidth]{figs/spectral_convergence_T5_73} \caption{% Convergence of the residual expansion of $\Surf_{in}$ at one time before (upper panel) and after (lower panel) the individual horizons touch. Note that the inner common MOTS has self-intersections in the latter case. 
We plot the maximum absolute residual expansion between the collocation points over the pseudospectral resolution used to find the MOTS. This is independent of the grid resolution ${\rm res} = 1/h$ of the simulation. Exponential convergence is clearly visible up to reaching the plateau in the various cases. The plots also show that the plateau moves downward with increased grid resolution and that at lower resolution, we can identify a nonzero negative slope within the plateau, indicating the overfitting effect mentioned in the text. } \label{fig:spectral_convergence} \end{figure} This is indeed the case, as can be seen in Fig.~\ref{fig:spectral_convergence}. It shows the maximum residual expansion between the collocation points for $\Surf_{in}$ at two different times of the simulation: one at $T=5.35\Munit$, where the MOTS is already highly distorted, and one at $T\approx5.7333\Munit$. This second case is {\em after} the individual MOTSs touch. At this stage, $\Surf_{in}$ lies inside $\Surf_{1} \cup \Surf_{2}$ and intersects itself. There is no qualitative difference in convergence and the plateau is approximately at the same level for the same grid resolution. We also see in Fig.~\ref{fig:spectral_convergence} that the negative slope continues into the plateau region. This effect is more pronounced for lower grid resolutions and not noticeable for $1/h=960$. It is caused by fitting the horizon to features introduced by the interpolation. We avoid this unphysical effect in practice by limiting the pseudospectral resolution as described at the end of Sec.~\ref{subsec:interpolation}. Instead of varying the pseudospectral resolution, we can test convergence for different grid resolutions $1/h$ of the simulation. The quantity we use here is the convergence of the coordinate shapes of the curves representing the MOTSs. Fig.~\ref{fig:curve_distances} shows that we indeed find convergence of the shapes.
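A simple proxy for such a coordinate-shape distance is the discrete one-sided Hausdorff distance between the sampled curves; this is an illustrative sketch, not necessarily the exact measure used for the figure:

```python
import numpy as np

def max_coord_distance(c1, c2):
    """One-sided Hausdorff distance between two curves sampled as
    (N, 2) and (M, 2) arrays: for each sample of c1, the distance
    to the nearest sample of c2, maximized over c1."""
    d = np.linalg.norm(c1[:, None, :] - c2[None, :, :], axis=-1)
    return d.min(axis=1).max()
```

For two concentric circles of radii $1$ and $1.1$ sampled at the same angles, this returns the expected value $0.1$.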
\begin{figure}[h] \centering \includegraphics[width=0.48\textwidth]{figs/curve_distances_inner} \caption{% Convergence of the coordinate shapes of the MOTSs for increasing numerical resolutions $1/h = 60$, $120$, $240$, $480$, $960$. Shown is the maximum coordinate distance of the horizons found in lower resolution simulations to the respective horizon found for $1/h = 960$. } \label{fig:curve_distances} \end{figure} We show in Fig.~\ref{fig:residual-expansion}, as a function of time, the residual expansion of the various MOTSs for the highest resolution that we have considered, namely $1/h = 960$. The residual expansion is one of the key ingredients which gives us confidence that the surfaces we find are indeed MOTSs. Note first that for all the ``easy'' cases, namely for the two individual MOTSs $\ensuremath{\mathcal{S}}_{1,2}$ and for the apparent horizon, the residual expansion is no more than $\mathcal{O}(10^{-11})$. These horizons do not have any portions with extreme curvatures and there is no difficulty in locating them. In fact, the residual expansion is largest for the smaller horizon, and is $\mathcal{O}(10^{-12})$ for the larger horizon and the apparent horizon. The difficult case is of course the inner common horizon, which required the various technical improvements detailed earlier. The most difficult cases are those which have the narrow neck and correspondingly highly curved portions. There is a small duration of time near $\ensuremath{T_{\rm touch}}$ where we are not able to locate $\Surf_{in}$. At all the other times shown in the plot, the residual expansion is no more than $\mathcal{O}(10^{-9})$. In fact, away from $\ensuremath{T_{\rm touch}}$, the residual expansion is as good as for the other MOTSs. In particular, this is true after $T \sim 5.7\Munit$. At these times $\Surf_{in}$ has developed self-intersections. 
Thus, our confidence in the existence of self-intersecting MOTSs is the same as our confidence in the existence of the other MOTSs, which of course are already well established. \begin{figure*}[h] \centering \includegraphics[width=\textwidth]{figs/residual-expansion-bl5-res960} \caption{ The residual expansion of the various MOTSs ($\ensuremath{\mathcal{S}}_{1,2}$ and $\ensuremath{\mathcal{S}}_{in,out}$) for the highest resolution $1/h=960$ as a function of time. We plot the absolute maximum expansion sampled between the collocation points used by our pseudospectral method. } \label{fig:residual-expansion} \end{figure*} \subsection{Area and stability} \label{subsec:area} Some quantitative numbers for this evolution are: \begin{itemize} \item The common horizon forms at $\ensuremath{T} \approx 1.37460222\Munit$. \item The two individual horizons touch at $\ensuremath{T_{\rm touch}} \approx 5.5378176\Munit$. \item The area of the inner horizon reaches a minimum at $\ensuremath{T_{\rm min}} \approx 5.50592\Munit$, i.e. just a little bit before $\ensuremath{T_{\rm touch}}$. This behavior of $\Surf_{in}$ was previously noted in \cite{Pook-Kolb:2019iao}. \end{itemize} These values were computed at the various resolutions up to $1/h = 960$ and converge up to the shown number of decimal places; compare also Fig.~\ref{fig:t_convergence}. \begin{figure}[h] \centering \includegraphics[width=0.48\textwidth]{figs/time_convergence} \caption{% Convergence of the various characteristic times with increased grid resolution. Shown is the difference between the value found at the finest resolution $1/h = 960$ and the respective lower resolution result. $\ensuremath{T}$ is the time when the common horizon forms, $\Surf_{1}$ and $\Surf_{2}$ touch at $\ensuremath{T_{\rm touch}}$, and the inner common horizon has a local minimal area at $\ensuremath{T_{\rm min}}$. 
} \label{fig:t_convergence} \end{figure} The areas of the various horizons are plotted as functions of time in Fig.~\ref{fig:area-bl5}. The bottom-right panel presents a useful picture of the merger process. It shows the areas of the apparent horizon, the inner common horizon and the sum of the areas of the individual horizons. It shows the formation and bifurcation of the apparent horizon and it also shows the merger, i.e. the crossing of the curves for the inner horizon and the individual horizons. \begin{figure*}[h] \centering \includegraphics[width=\textwidth]{figs/area-bl5} \caption{% Areas of the various horizons as functions of time. The top-left panel shows the area of the apparent horizon which, as expected, asymptotes to a final constant value as the black hole reaches equilibrium. The top-right and bottom-left plots show the areas of the smaller and larger black holes respectively, both showing large increases at late times. The area of the inner-common horizon is shown in the bottom-right panel. This panel also shows the apparent horizon, and the sum of the individual horizon areas. } \label{fig:area-bl5} \end{figure*} The principal eigenvalue of the stability operator for the various horizons is shown in Fig.~\ref{fig:stability-bl5}. We see that $\Lambda_0$ is always positive for $\ensuremath{\mathcal{S}}_{1,2}$ and for the apparent horizon, and that it is not strongly varying. $\Surf_{in}$ is more interesting. When it is initially born, it coincides with the apparent horizon and has $\Lambda_0=0$. At all subsequent times, $\Surf_{in}$ has $\Lambda_0<0$; to understand its stability we need to consider the next eigenvalue $\Lambda_1$. But already from Fig.~\ref{fig:stability-bl5}, we see interesting behavior of $\Lambda_0$ for $\Surf_{in}$, namely a cusp at $\ensuremath{T_{\rm touch}}$. Fig.~\ref{fig:stability1-inner-bl5} shows $\Lambda_1$ for the inner horizon, and it is seen to be positive thus demonstrating stability. 
Again, we see a cusp-like behavior near $\ensuremath{T_{\rm touch}}$. \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{figs/stability-bl5-res960} \caption{% First stability parameters $\Lambda_0$ (i.e. the principal eigenvalue of the stability operator) for the various horizons. $\Lambda_0$ is positive for all horizons except $\Surf_{in}$, for which we instead plot $-\Lambda_0$. A cusp is clearly seen for the inner horizon. All the other horizons show unremarkable behavior in this respect; they remain stable as far as they can be reliably tracked. } \label{fig:stability-bl5} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{figs/stability1-inner-bl5-res960} \caption{% The stability parameter for the inner horizon (upper panel) and zoom in around $\ensuremath{T_{\rm touch}}$ (lower panel). We lose numerical precision very close to the merger time as the pseudospectral resolution becomes very large thereby increasing the condition number of the matrix of the discretized problem. } \label{fig:stability1-inner-bl5} \end{figure} \section{Conclusions} \label{sec:conclusions} In this paper we examined in detail the scenario for the merger of MOTSs outlined previously in \cite{Pook-Kolb:2019iao}. We have done this by evolving a particular Brill-Lindquist setup and finding all MOTSs at various times. We have tracked the inner common horizon with high accuracy. In particular, we present strong numerical evidence that the inner horizon merges with the two individual horizons precisely at the time when they touch. Moreover, we find that the inner horizon develops self-intersections just after the merger. This provides then a connected sequence of MOTSs taking us from the two disjoint initial horizons to the final apparent horizon. We have also studied some basic properties of the MOTSs including their area and stability. 
There are numerous other interesting physical and geometric properties of the world tube of MOTSs which shall be studied in detail in forthcoming work. \acknowledgments We thank Abhay Ashtekar, Bernd Brugmann, Luis Lehner, and Andrey Shoom for valuable discussions and comments. We are especially grateful to Jose-Luis Jaramillo for extensive discussions and for suggesting the use of bipolar coordinates. The MOTS finder \cite{pook_kolb_daniel_2019_3260171} used in this research is developed in Python with \emph{SimulationIO} \cite{erik_schnetter_2019_3258858} being used for reading the numerical simulation data. The libraries \emph{SciPy} \cite{Jones_SciPy}, \emph{NumPy} \cite{van_der_Walt_NumPy}, \emph{mpmath} \cite{mpmath}, \emph{SymPy} \cite{meurer2017sympy} and \emph{Matplotlib} \cite{Hunter:2007,michael_droettboom_2018_1202077} were used for certain numerical, validation and plotting tasks. O.B. acknowledges the National Science Foundation (NSF) for financial support from Grant No. PHY-1607520. This research was also supported by the Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Economic Development, Job Creation and Trade. This research was enabled in part by support provided by SciNet (www.scinethpc.ca) and Compute Canada (www.computecanada.ca). Computations were performed on the Niagara supercomputer at the SciNet HPC Consortium \cite{Loken_2010}. SciNet is funded by: the Canada Foundation for Innovation; the Government of Ontario; Ontario Research Fund -- Research Excellence; and the University of Toronto.
\section{Introduction}\label{sect1} Many important problems in variational analysis and optimization can be modelled by an {inclusion} $y\in F(x)$, where $F$ is a set-valued mapping. The behavior of the solution set $F^{-1}(y)$ when $y$ and/or $F$ are perturbed is of special interest. The concepts of \emph{metric regularity} and \emph{subregularity} (cf., e.g., \cite{DonRoc14,Mor06.2,Iof17}) have been the key tools when studying stability of solutions. In the next definition, we use the names \emph{$\alpha-$regularity} and \emph{$\alpha-$subregularity}, fixing the main quantitative parameter in the conventional definitions of the properties. \begin{definition}\label{D1.1} Let $X$ and $Y$ be metric spaces, $F:X\rightrightarrows Y$, $(\bar x,\bar y)\in \gph F$, and $\alpha>0$. The mapping $F$ is \begin{enumerate} \item $\alpha-$regular at $(\bar x,\bar y)$ if there exist $\delta\in]0,+\infty]$ and $\mu\in]0,+\infty]$ such that \begin{align}\label{D1.1-1} \alpha d(x,F^{-1} (y))\le d(y,F(x)) \end{align} for all $x\in B_\delta(\bar x)$ and $y\in B_\delta(\bar y)$ with $d(y,F(x))<\alpha\mu$; \item $\alpha-$subregular at $(\bar x,\bar y)$ if there exist $\delta\in]0,+\infty]$ and $\mu\in]0,+\infty]$ such that \begin{align}\label{D1.1-2} \alpha d(x,F^{-1} (\bar y))\le d(\bar y,F(x)) \end{align} for all $x\in B_\delta(\bar x)$ with $d(\bar y,F(x))<\alpha\mu$. \end{enumerate} \end{definition} In the above definition and throughout the paper, $B_\delta(\bar x)$ and $B_\delta(\bar y)$ stand for open balls with radius $\delta$ around respective points in appropriate spaces. Note that $\delta$ and $\mu$ can take infinite values; thus, the definition covers local as well as global properties. This remark applies also to the subsequent definitions. The technical conditions $d(y,F(x))<\alpha\mu$ and $d(\bar y,F(x))<\alpha\mu$ can be dropped (cf. \cite{Iof00,Mor06.1}), particularly because the value $\mu=+\infty$ is allowed. 
This does not affect the properties themselves, but can have an effect on the value of $\delta$. Inequalities \eqref{D1.1-1} and \eqref{D1.1-2} provide linear estimates of the distance from $x$ to the solution set of the respective inclusion via the `residual' $d(y,F(x))$ or $d(\bar y,F(x))$. As commented by Dontchev and Rockafellar \cite[p.178]{DonRoc14} `in applications, the residual is typically easy to compute or estimate, whereas finding a solution might be considerably more difficult'. Besides their importance in studying stability of solutions to inclusions, regularity type estimates are involved in constraint qualifications for optimization problems, qualification conditions in subdifferential and coderivative calculus, and convergence analysis of computational algorithms \cite{Chi10,AraDonGeo07,DonVel09,AraDonGeoVel11,AdlCibNga15, AspChaLuk16,HesLuk13,LukThaTam18,CibPreRou19}. The name `metric regularity' was coined by Borwein in 1986 \cite{Bor86}, but the concept itself can be traced back to the Banach--Schauder open mapping theorem for linear operators, and its nonlinear extensions due to Lyusternik \& Graves \cite{Lyu34,Gra50} and Robinson \& Ursescu \cite{Rob76,Urs75}; see, for instance, \cite{Don96,DonRoc14,DonLewRoc03,RocWet98,Mor06.1,Iof16} for historical comments. Unlike the `full' regularity in part (i) of \cref{D1.1}, the weaker subregularity property in part (ii) (as well as closely related properties like \emph{calmness, error bounds} and \emph{weak sharp minima}) is not stable under small perturbations of the data. It has also been well studied; see, for instance, \cite{BorZhu88,DonLewRoc03, Mor06.1,YenYaoKie08,LiMor12,ApeDurStr13,DonRoc14, Kru15,ZheNg10}. Fortunately, the subregularity property is satisfied automatically in finite dimensions when the graph of $F$ is the union of finitely many polyhedral convex sets; cf. \cite{DonRoc14,Iof17}. When $y$ is not fixed and can be any point in a neighbourhood\ of a given point $\bar y$, it represents \emph{canonical perturbations} of the inclusion $\bar y\in F(x)$. For some applications it can be important to allow also perturbations in the right-hand side. This leads to the need to consider parametric inclusions $\bar y\in F(p,x)$ (or even $y\in F(p,x)$, thus, combining the two types of perturbations), where $F$ is a set-valued mapping\ of two variables, with (nonlinear) perturbations in the right-hand side\ given by a parameter $p$ from some fixed set $P$. Along with the mapping $F:P\times X\rightrightarrows Y$, which is our main object in this paper, given a point $p\in P$, we consider the mapping $F_p:=F(p,\cdot):X\rightrightarrows Y$. Given a $y\in Y$, the mapping \begin{align}\label{G} p\mapsto G(p):=F_p^{-1} (y)=\{x\in X\mid y\in F(p,x)\} \end{align} can be interpreted as an \textit{implicit multifunction} corresponding to the parametric {inclusion} $y\in F(p,x)$. When studying implicit multifunctions, it is common to consider `uniform' versions of the properties in \cref{D1.1} (cf., e.g., \cite[Definition~3.1]{Iof17.1}). \begin{definition}\label{D1.2} Let $X$ and $Y$ be metric spaces, and $P$ be a set, $F:P\times X\rightrightarrows Y$, $\bar x\in X$, $\bar y\in Y$, and $\alpha>0$.
The mapping $F$ is \begin{enumerate} \item $\alpha-$regular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ if there exist $\delta\in]0,+\infty]$ and $\mu\in]0,+\infty]$ such that \begin{align}\label{D1.2-1} \alpha d(x,F_p^{-1} (y))\le d(y,F(p,x)) \end{align} for all $p\in P$, $x\in B_\delta(\bar x)$ and $y\in B_\delta(\bar y)$ with $d(y,F(p,x))<\alpha\mu$; \item $\alpha-$subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ if there exist $\delta\in]0,+\infty]$ and $\mu\in]0,+\infty]$ such that \begin{align}\label{D1.2-2} \alpha d(x,F_p^{-1} (\bar y))\le d(\bar y,F(p,x)) \end{align} for all $p\in P$ {and} $x\in B_\delta(\bar x)$ with ${d(\bar y,F(p,x))}<\alpha\mu$. \end{enumerate} \end{definition} If $P$ is a singleton, then the properties in \cref{D1.2} reduce to the corresponding conventional regularity properties in \cref{D1.1}. Moreover, the subregularity property in \cref{D1.2}(ii) coincides in this case with the subregularity property of the mapping \eqref{G} considered in \cite{ChuKim16}. \begin{remark}\label{R1.1} \begin{enumerate} \item If $Y$ is a linear metric space with a shift-invariant metric, in particular, a normed space, then the property in part (i) of \cref{D1.2} reduces to the one in part (ii) with the extended parameter set $\widehat{P}:=P\times Y$ and set-valued mapping\ $\widehat{F}((p,y),x):={F(p,x)-y}$, $((p,y),x)\in\widehat{P}\times X$, in place of $P$ and~$F$, respectively. Moreover, in both parts of the definition, it is sufficient to consider the case $\bar y:=0$: the general case reduces to it by replacing $F$ with $F-\bar y$. \item Unlike \cref{D1.1}, in \cref{D1.2} the reference point $(\bar x,\bar y)$ is not associated with the graph of $F$. This is a technical relaxation caused by the fact that $\gph F$ is a subset of a product of three spaces $P\times X\times Y$, and at this stage there is no reference point in $P$. \cref{D1.3} below is formulated in a more conventional way. 
\item There exist other concepts of uniform regularity in the literature. For instance, it is not uncommon to talk about uniform regularity when inequality \eqref{D1.1-1} holds for all $(\bar x,\bar y)$ in a compact subset of $X\times Y$ with the same parameters $\alpha$, $\delta$ and $\mu$; cf. \cite{CibPreRou19}. \end{enumerate} \end{remark} Local (in $p$) versions of the properties in \cref{D1.2} are of special interest. They correspond to $P$ being a neighbourhood\ of a point $\bar p$ in some metric spaces; cf., e.g., \cite{NgaTroThe13,Iof17.1}. \begin{definition}\label{D1.3} Let $P$, $X$ and $Y$ be metric spaces, $F:P\times X\rightrightarrows Y$, $(\bar p,\bar x,\bar y)\in\gph F$, and $\alpha>0$.
The mapping $F$ is \begin{enumerate} \item $\alpha-$regular in $x$ uniformly in $p$ at $(\bar p,\bar x,\bar y)$ if there exist $\eta\in]0,+\infty]$, $\delta\in]0,+\infty]$ and $\mu\in]0,+\infty]$ such that inequality \eqref{D1.2-1} is satisfied for all $p\in B_\eta(\bar p)$, $x\in B_\delta(\bar x)$ and $y\in B_\delta(\bar y)$ with $d(y,F(p,x))<\alpha\mu$; \item $\alpha-$subregular in $x$ uniformly in $p$ at $(\bar p,\bar x,\bar y)$ if there exist $\eta\in]0,+\infty]$, $\delta\in]0,+\infty]$ and $\mu\in]0,+\infty]$ such that inequality \eqref{D1.2-2} is satisfied for all $p\in B_\eta(\bar p)$ and $x\in B_\delta(\bar x)$ with ${d(\bar y,F(p,x))}<\alpha\mu$. \end{enumerate} \end{definition} We often simply say that $F$ is regular or subregular if the exact value of $\alpha$ in the above definitions is not important. The exact upper bound of all $\alpha>0$ such that a property in the above definitions is satisfied with some $\delta\in]0,+\infty]$ and $\mu\in]0,+\infty]$ (and $\eta\in]0,+\infty]$), is called the \emph{modulus} (or rate) of the property. Apart from the main parameter $\alpha$, providing a quantitative measure of the respective property, the properties in above definitions depend also on the auxiliary parameters $\delta$, $\eta$ and $\mu$. They control (directly and indirectly) the size of the neighbourhood s of $\bar x$ and $\bar p$ involved in the definitions. As discussed above, the last parameter can be dropped (together with the corresponding constraints). We keep all the parameters to emphasize their different roles in the definitions and corresponding characterizations. The necessary and sufficient regularity conditions presented in the paper normally involve the same collection of parameters. The properties in \cref{D1.2,D1.3} can be interpreted as kinds of Lipschitz-like properties of the implicit multifunction (solution mapping) \eqref{G}. 
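Inequality \eqref{D1.2-2} can also be probed numerically on toy examples. The sketch below uses hypothetical mappings, not taken from the text: it estimates the best uniform constant $\alpha$ by sampling the ratio $d(\bar y,F(p,x))/d(x,F_p^{-1}(\bar y))$, and for $F(p,x)=px$ with $p\in\{1,1.5,2\}$ and $\bar y=0$ it recovers $\alpha=1$:

```python
import numpy as np

def subreg_ratio(F, solset, xs, ps, ybar=0.0):
    """Smallest sampled ratio d(ybar, F(p, x)) / d(x, F_p^{-1}(ybar));
    a candidate value for the uniform subregularity constant alpha.
    F and solset (p -> list of solutions F_p^{-1}(ybar)) are toy
    single-valued inputs for illustration."""
    ratios = []
    for p in ps:
        for x in xs:
            dist_sol = min(abs(x - s) for s in solset(p))
            if dist_sol > 0.0:
                ratios.append(abs(ybar - F(p, x)) / dist_sol)
    return min(ratios)

# F(p, x) = p*x with p in [1, 2]: subregular with alpha = 1, uniformly in p
alpha = subreg_ratio(lambda p, x: p * x, lambda p: [0.0],
                     xs=np.linspace(-1.0, 1.0, 201), ps=[1.0, 1.5, 2.0])
```

By contrast, for $F(p,x)=px^2$ the sampled ratio tends to zero as $x\to 0$, reflecting the failure of subregularity at $(0,0)$.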
This observation opens a way for numerous applications of the characterizations established in this and many other papers; cf. \cref{S6}. Regularity properties of implicit multifunctions were first considered by Robinson \cite{Rob75.2,Rob76,Rob76.2} when studying stability of solution sets of generalized equations. This initiated a great deal of research by many authors, mostly in normed spaces (and with $\bar y:=0$). Dontchev et al. \cite[Theorem~2.1]{DonQuiZla06} gave a sufficient condition for regularity of implicit multifunctions in terms of graphical derivatives. Ngai et al. \cite{NgaThe04,NgaTroThe13} employed the theory of error bounds to characterize the property in metric and Banach spaces. In \cite{LedZhu99,LeeTamYen08,YenYao09,HuyYao09,ChuKruYao11,HuyKimNin12,Ngh14,ChuKim16,GfrOut16.2} dual sufficient conditions were established in finite and infinite dimensions in terms of Fr\'echet, limiting, directional limiting and Clarke coderivatives.
Chieu et al. \cite{ChiYaoYen10} established connections between regularity and Lipschitz-like properties of implicit multifunctions. The regularity properties of the type given in \cref{D1.2,D1.3} are often referred to in the literature as \textit{metric regularity} \cite{ChuKim16,HuyYao09,LeeTamYen08}, \emph{metric regularity in Robinson's sense} \cite{YenYao09,Ngh14}, and \textit{Robinson metric regularity} \cite{ChiYaoYen10,HuyKimNin12} (of implicit multifunctions). Following Ioffe \cite{Iof17,Iof17.1}, we prefer to talk about \textit{uniform regularity}. We refer the reader to \cite{LedZhu99,AzeCorLuc02,AzeBen08,Ngh14,Iof17.1,DonRoc14} for more discussions and historical comments. The metric properties in \cref{D1.3} admit equivalent geometric characterizations. This is illustrated by the next proposition providing a characterization for the property in \cref{D1.2}(ii). \begin{proposition} Let $X$ and $Y$ be metric spaces, $P$ be a set, $F:P\times X\rightrightarrows Y$, $\bar x\in X$, $\bar y\in Y$, and $\alpha>0$. The mapping $F$ is $\alpha-$subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ with some $\delta\in]0,+\infty]$ and $\mu\in]0,+\infty]$ if and only if \begin{align}\label{P2-1} F_p^{-1} (\bar y)\cap B_\rho(x)\ne \emptyset \end{align} for all $\rho\in]0,\mu[$, $p\in P$ and $x\in B_\delta(\bar x)$ with $d(\bar y,F(p,x))<\alpha\rho$. \end{proposition} \begin{proof} Suppose $F$ is $\alpha-$subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ with some ${\delta\in]0,+\infty]}$ and $\mu\in]0,+\infty]$. Let ${\rho\in]0,\mu[}$, $p\in P$ and $x\in B_\delta(\bar x)$ with $d(\bar y,F(p,x))<\alpha\rho$. Then $d(\bar y,F(p,x))<\alpha\mu$. By \cref{D1.2}(ii), $d(x,F_p^{-1} (\bar y))\le\alpha^{-1} d(\bar y,F(p,x))<\rho$. Hence, condition \eqref{P2-1} is satisfied.
\sloppy Conversely, suppose $\delta\in]0,+\infty]$ and $\mu\in]0,+\infty]$, and condition \eqref{P2-1} is satisfied for all ${\rho\in]0,\mu[}$, $p\in P$ and $x\in B_\delta(\bar x)$ with $d(\bar y,F(p,x))<\alpha\rho$. Let $p\in P$ and $x\in B_\delta(\bar x)$ with $d(\bar y,F(p,x))<\alpha\mu$. Choose a $\rho$ satisfying $\alpha^{-1} d(\bar y,F(p,x))<\rho<\mu$. Then, by \eqref{P2-1}, $d(x,F_p^{-1} (\bar y))<\rho$. Letting $\rho\downarrow \alpha^{-1} d(\bar y,F(p,x))$, we arrive at \eqref{D1.2-2}, i.e. $F$ is $\alpha-$subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ with $\delta$ and $\mu$. \end{proof} The aim of this paper is not to add new sufficient or necessary conditions for regularity properties of general set-valued mappings or implicit multifunctions to the large volume of existing ones (although some conditions in the subsequent sections are indeed new), but to propose a unifying general (i.e. not assuming the mapping $F$ to have any particular structure and not using tangential approximations of $\gph F$) view on the theory of regularity, and clarify the relationships between the existing conditions including their hierarchy. We expose the typical sequence of regularity assertions, often hidden in the proofs, and the roles of the assumptions involved in the assertions, in particular, those on the underlying space: general metric, normed, Banach or Asplund. We present a series of necessary and sufficient regularity conditions with the main emphasis (in line with the current trend in the literature) on the latter ones.
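To fix the ideas, we record a toy example (constructed here purely for illustration; it plays no role in the sequel) exhibiting the uniform subregularity property of \cref{D1.2}(ii) in its simplest form.

```latex
Let $P=X=Y:=\R$, $F(p,x):=\{x-p\}$ and $\bar y:=0$, so that
$F_p^{-1} (\bar y)=\{p\}$ for every $p\in P$. Then, for all $p\in P$ and
$x\in X$,
\begin{align*}
d(\bar y,F(p,x))=|x-p|=d\big(x,F_p^{-1} (\bar y)\big),
\end{align*}
and inequality \eqref{D1.2-2} holds with $\alpha:=1$ and arbitrary
$\delta,\mu\in]0,+\infty]$, i.e. $F$ is $1-$subregular in $x$ uniformly in
$p$ over $P$ at every $(\bar x,0)$ with $\bar x\in\R$. For any $\alpha>1$
the property fails, since the inequality $\alpha|x-p|\le|x-p|$ is violated
whenever $x\ne p$.
```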
The (typical) sequence of sufficient regularity conditions is represented by the following chain of assertions, each subsequent assertion being a consequence of the previous one: \begin{enumerate} \item nonlocal primal space conditions in complete metric spaces (\cref{P1}(ii)); \item local primal space conditions in complete metric spaces (\cref{C2.2}(ii)); \item subdifferential conditions in Banach and Asplund spaces (\cref{P5}); \item normal cone conditions in Banach and Asplund spaces (\cref{T2}); \item coderivative conditions in Banach and Asplund spaces (\cref{C3.3,C3.4}). \end{enumerate} Even if one targets coderivative conditions, one still has to go through the five steps listed above, with details often hidden in long proofs. Apart from making the whole process more transparent, which is our main objective, the assertions in (i)--(iv) can be of independent interest, at least theoretically, especially the slope-type conditions in (ii) and normal cone conditions in (iv). In combination with tangential approximations of $\gph F$, they are likely to lead to verifiable regularity conditions. The implications (i) $ \Rightarrow\ $ (ii) and (iv) $ \Rightarrow\ $ (v) in the above list follow immediately from the definitions. The main assertions are the sufficiency of condition (i), and implications (ii) $ \Rightarrow\ $ (iii) $ \Rightarrow\ $ (iv). They employ the following fundamental tools of variational analysis: \begin{itemize} \item \emph{Ekeland variational principle} (sufficiency of condition (i)); \item \emph{sum rules} for respective subdifferentials (implications (ii) $ \Rightarrow\ $ (iii) $ \Rightarrow\ $ (iv)). \end{itemize} Thus, all the sufficient conditions on the list are consequences of the Ekeland variational principle, and as such, they are `outer' conditions, i.e. they need to be checked at points outside the solution set $F_p^{-1} (\bar y)$. Most of the sufficient conditions are accompanied by the corresponding necessary ones.
The necessary conditions do not require the underlying spaces to be complete and are generally easy consequences of the definitions. With the exception of the general nonlocal condition in \cref{P1}(i), such conditions are formulated in normed spaces and assume the graph of $F$ to be convex. In \cref{S4.2}, we provide a series of dual necessary regularity conditions for set-valued mappings with closed convex graphs acting between Banach spaces, some of which are also sufficient. In the setting of complete metric spaces, and assuming that $\gph F_p$ is closed for all $p\in P$, the gap between the nonlocal necessary and sufficient subregularity conditions in \cref{P1} is not big: they share the same inequality \eqref{P1-1}; with all the other parameters coinciding, the sufficiency part naturally requires it to hold for all $x$ in a larger set. Unfortunately, unlike the `full' regularity possessing the well-known coderivative criterion (see, e.g., \cite{Kru88,Mor06.1}), this is not the case in general with local subregularity conditions unless the graph of $F$ is convex. The sufficient subregularity conditions presented in the paper are the weakest possible in each group, but can still be far from necessary. As has been discussed in the literature (see, e.g., a discussion of the equivalent subtransversality property in \cite{KruLukTha17}), the reason for this phenomenon lies in the fact that the subregularity property lacks robustness. The hot topic of regularity of a set-valued mapping\ $F$ with a special structure, particularly in the case, arising in numerous applications such as KKT systems and variational inequalities, when $F=g+G$ with $g$ single-valued and $G$ set-valued (typically a normal cone mapping), is outside the scope of the current paper. Computing `slopes' and coderivatives of such mappings (or normal cones to their graphs) is usually a difficult job and requires imposing additional assumptions on $g$ and $G$.
This is what people working in this area normally do. We want to emphasize that conditions of this type still fall into the five-point scheme described above. The rest of the paper is organized as follows. \cref{sect1.2} contains some preliminary facts used throughout the paper. \cref{S4,S5} are dedicated, respectively, to primal and dual sufficient and necessary conditions for the regularity properties. In \cref{S6}, we illustrate the theory by characterizing the conventional metric regularity and subregularity of set-valued mappings as well as stability properties of solution mappings to parametric inclusions. \section{Preliminaries}\label{sect1.2} Our basic notation is standard; see, e.g., \cite{Mor06.1,RocWet98,DonRoc14}. Throughout the paper, if not explicitly stated otherwise, $P$ is an arbitrary set, and $X$ and $Y$ are either metric or normed/Ba\-nach/Asplund spaces. Products of metric or normed spaces are assumed to be equipped with the maximum distance or norm. The topological dual of a normed space $X$ is denoted by $X^*$, while $\langle\cdot,\cdot\rangle$ denotes the bilinear form defining the pairing between the two spaces. In a primal space, the open and closed balls with center $x$ and radius $\delta\in]0,+\infty]$ are denoted, respectively, by $B_\delta(x)$ and $\overline{B}_\delta(x)$, while $\B$ and $\overline{\B}$ stand for, respectively, the open and closed unit balls. The open unit ball in the dual space is denoted by $\B^*$. Symbols $\R$, $\R_+$ and $\N$ stand for the real line, the set of all nonnegative reals, and the set of all positive integers, respectively. For a set $\Omega$ in a normed space, its closure is denoted by $\cl\Omega$. The distance from a point $x$ to $\Omega$ is defined by $d(x,\Omega):=\inf_{u\in\Omega}\|u-x\|$, and we use the convention $d(x,\emptyset)=+\infty$. The indicator function of $\Omega$ is defined by $i_\Omega(x)=0$ if $x\in\Omega$, and $i_\Omega(x)=+\infty$ if $x\notin\Omega$.
The dual conditions in the paper are formulated in terms of Fr\'echet\ and Clarke normals and subdifferentials; cf., e.g., \cite{Kru03,Cla83}. Given a point $\bar x\in \Omega$, the sets \begin{gather}\label{NC} N_{\Omega}^F(\bar x):= \left\{x^\ast\in X^\ast\mid \limsup_{\Omega\ni x\to\bar x,\,x\ne \bar x} \frac {\langle x^\ast,x-\bar x\rangle} {\|x-\bar x\|} \le 0 \right\}, \\\label{NCC} N_{\Omega}^C(\bar x):= \left\{x^\ast\in X^\ast\mid \ang{x^\ast,z}\le0 \qdtx{for all} z\in T_{\Omega}^C(\bar x)\right\} \end{gather} are, respectively, the \emph{Fr\'echet} and \emph{Clarke normal cones} to $\Omega$ at $\bar x$. In definition \eqref{NCC}, $T_{\Omega}^C(\bar x)$ stands for the \emph{Clarke tangent cone} to $\Omega$ at $\bar x$. The sets \eqref{NC} and \eqref{NCC} are nonempty closed convex cones satisfying $N_{\Omega}^F(\bar x)\subset N_{\Omega}^C(\bar x)$. If $\Omega$ is a convex set, both cones reduce to the normal cone in the sense of convex analysis: \begin{gather*}\label{CNC} N_{\Omega}(\bar x):= \left\{x^*\in X^*\mid \langle x^*,x-\bar x \rangle \leq 0 \qdtx{for all} x\in \Omega\right\}. \end{gather*} For an extended-real-valued function $f:X\to\R\cup\{+\infty\}$ on a normed space, its domain and epigraph are defined, respectively, by $\dom f:=\{x\in X\mid f(x)< +\infty\}$ and $\epi f:=\{(x,\alpha)\in X\times \mathbb{R}\mid f(x)\le\alpha\}$. The \emph{Fr\'echet} and \emph{Clarke subdifferentials} of $f$ at $\bar x\in\dom f$ are defined, respectively, as \begin{gather}\label{sdF} \partial^F f(\bar x):=\left\{x^*\in X^*\mid \liminf_{\substack{x\to \bar x,\,x\ne\bar x}} \dfrac{f(x)-f(\bar x)-\langle x^*,x-\bar x\rangle}{\|x-\bar x\|}\ge 0\right\}, \\\label{sdC} \partial^Cf(\bar x):=\left\{x^*\in X^*\mid \langle x^*,z\rangle \le f^\circ(\bar x,z) \quad\text{for all}\quad z\in X\right\}, \end{gather} where $f^\circ(\bar x,z)$ is the \emph{Clarke--Rockafellar directional derivative} \cite{Roc79,Roc80} of $f$ at $\bar x$ in the direction $z\in X$. 
The sets \eqref{sdF} and \eqref{sdC} are closed and convex, and satisfy $\partial^F{f}(\bar x)\subset\partial^C{f}(\bar x)$. If $f$ is convex, they reduce to the subdifferential in the sense of convex analysis: \begin{gather*} \partial{f}(\bar x):= \left\{x^\ast\in X^\ast\mid f(x)-f(\bar x)-\langle{x}^\ast,x-\bar x\rangle\ge 0 \qdtx{for all} x\in X \right\}. \end{gather*} It is easy to check that $N_{\Omega}^F(\bar x)=\partial^Fi_\Omega(\bar x)$, $N_{\Omega}^C(\bar x)=\partial^Ci_\Omega(\bar x)$, and \begin{gather*} \sd^F f(\bar x)=\left\{x^*\in X^*\mid (x^*,-1)\in N^F_{\epi f}(\bar x,f(\bar x))\right\},\\ \partial^C{f}(\bar x)= \left\{x^\ast\in X^\ast\mid (x^*,-1)\in N_{\epi f}^C(\bar x,f(\bar x))\right\}. \end{gather*} By convention, we set $N_{\Omega}^F(\bar x) =N_{\Omega}^C(\bar x):=\emptyset$ if $\bar x\notin \Omega$ and $\partial^F{f}(\bar x)=\partial^C{f}(\bar x):=\emptyset$ if $\bar x\notin\dom f$. We often use the generic notations $N$ and $\sd$ for Fr\'echet and Clarke objects, specifying wherever necessary the type of the object by an appropriate superscript, e.g., $N:=N^F$ or $N:=N^C$. The following fact is an immediate consequence of the definition of the Fr\'echet\ subdifferential; cf. \cite{Kru03,Mor06.1}. \begin{lemma}\label{L2.3} Suppose $X$ is a normed space and $f:X\to\R\cup\{+\infty\}$. If $\bar x\in\dom f$ is a point of local minimum of $f$, then $0\in\sd^Ff(\bar x)$. \end{lemma} The representation of the (convex) subdifferential of a norm in the next lemma is of importance; cf. \cite[Corollary~2.4.16]{Zal02}. \begin{lemma} \label{L3} Let $(Y,\|\cdot\|)$ be a normed space. Then \begin{enumerate} \item $\sd\|\cdot\|(0)=\{y^*\in Y^*\mid \|y^*\|\le 1\}$; \item $\sd\|\cdot\|(y)=\{y^*\in Y^*\mid \langle y^*,y\rangle=\|y\|\;\; \text{and} \;\; \|y^*\|= 1\}, \;\; y\ne 0$. \end{enumerate} \end{lemma} For an extended-real-valued function $f$ on a metric space, its \emph{slope} and \emph{nonlocal slope} (cf.
\cite{NgaThe08,Iof00,AzeCorLuc02,Kru15}) at $x\in\dom f$ are defined, respectively, by \begin{align*} |\nabla f|(x):=\limsup_{u\rightarrow x,u\ne x}\dfrac{ [f(x)-f(u)]_+}{d(x,u)} \quad\mbox{and}\quad |\nabla f|^\diamond(x):=\sup\limits_{u\ne x} \dfrac{[f(x)-f_+(u)]_+}{d(x,u)}, \end{align*} where $\alpha_+:=\max\{0,\alpha\}$ for any $\alpha\in\R$. If $x\notin\dom f$, we set ${|\nabla f|(x)=|\nabla f|^\diamond(x):=+\infty}$. The following simple facts are well known; cf., e.g., \cite{CuoKru21}. \begin{lemma}\label{P1.3} Let $X$ be a metric space, $f:X\to\R\cup\{+\infty\}$, $x\in\dom f$, and ${f(x)>0}$. \begin{enumerate} \item $|\nabla f|(x)\le |\nabla f|^\diamond(x)$. \item If $X$ is a normed space and $f$ is convex, then $|\nabla f|^\diamond(x)=|\nabla f|(x)=d(0,\partial f(x)).$ \end{enumerate} \end{lemma} A set-valued mapping $F:X\rightrightarrows Y$ between two sets $X$ and $Y$ is a mapping, which assigns to every $x\in X$ a subset (possibly empty) $F(x)$ of $Y$. We use the notations $\gph F:=\{(x,y)\in X\times Y\mid y\in F(x)\}$ and $\dom\:F:=\{x\in X\mid F(x)\ne\emptyset\}$ for the graph and the domain of $F$, respectively, and $F^{-1}:Y\rightrightarrows X$ for the inverse of $F$. This inverse (which always exists with possibly empty values at some $y$) is defined by $F^{-1}(y):=\{x\in X \mid y\in F(x)\}$, $y\in Y$. Obviously $\dom F^{-1}=F(X)$. If $X$ and $Y$ are normed spaces, the \emph{coderivative} of $F$ at $(x,y)\in\gph F$ is a set-valued mapping $D^*F(x,y):Y^*\rightrightarrows X^*$ defined by \begin{align}\label{coder} D^*F(x,y)(y^*):=\{x^*\in X^*\mid (x^*,-y^*)\in N_{\gph F}(x,y)\}, \quad y^*\in Y^*. \end{align} Depending on the type of the normal cone in \eqref{coder}, it can define various coderivatives. We use symbols $D_F^*$ and $D_C^*$ to denote, respectively, the Fr\'echet and Clarke coderivatives.
The key tools in the proofs of the main results are the celebrated Ekeland variational principle and several subdifferential sum rules; cf. \cite{DonRoc14,Mor06.1,Iof17,IofTik79,Zal02,Roc79,Kru03,Fab89}. \begin{lemma}\label{EVP} Suppose $X$ is a complete metric space, $f: X\to \mathbb{R} \cup \{ +\infty\}$ is lower semicontinuous, $x\in X$, $\varepsilon>0$ and $\lambda>0$. If $f(x)<\inf_{X} f+\varepsilon$, then there exists an $\hat x\in X$ such that \begin{enumerate} \item $d(\hat{x},x)<\lambda$; \item $f(\hat{x})\le f(x)$; \item $f(u)+(\varepsilon/\lambda)d(u,\hat{x})\ge f(\hat{x})$ for all $u\in X.$ \end{enumerate} \end{lemma} \begin{lemma} \label{SR} Suppose $X$ is a normed space, $f_1,f_2:X\to\R \cup\{+\infty\}$, and $\bar x\in\dom f_1\cap\dom f_2$. \begin{enumerate} \item Let $f_1$ and $f_2$ be convex and $f_1$ be continuous at a point in $\dom f_2$. Then $$\partial(f_1+f_2)(\bar x)=\sd f_1(\bar x)+\partial f_2(\bar x).$$ \item Let $f_1$ be Lipschitz continuous and $f_2$ be lower semicontinuous in a neighbourhood of $\bar x$. Then $$\partial^C(f_1+f_2)(\bar x)\subset\sd^C f_1(\bar x) +\partial^Cf_2(\bar x).$$ \item Let $X$ be Asplund, $f_1$ be Lipschitz continuous and $f_2$ be lower semicontinuous in a neighbourhood of $\bar x$.
Then, for any $x^*\in\partial^F(f_1+f_2)(\bar x)$ and $\varepsilon>0$, there exist $x_1,x_2\in X$ with $\|x_i-\bar x\|<\varepsilon$, $|f_i(x_i)-f_i(\bar x)|<\varepsilon$ $(i=1,2)$, such that $$x^*\in\partial^Ff_1(x_1) +\partial^Ff_2(x_2)+\varepsilon\B^\ast.$$ \end{enumerate} \end{lemma} Recall that a Banach space is \emph{Asplund} if every continuous convex function on an open convex set is Fr\'echet differentiable on a dense subset \cite{Phe93}, or equivalently, if the dual of each of its separable subspaces is separable. We refer the reader to \cite{Phe93,Mor06.1,BorZhu05} for discussions about and characterizations of Asplund spaces. All reflexive, in particular all finite-dimensional, Banach spaces are Asplund. \section{Slope Necessary and Sufficient Conditions}\label{S4} This section is dedicated to slope necessary and sufficient conditions. For simplicity, we focus on the uniform subregularity property in \cref{D1.2}(ii). The corresponding conditions for the property in \cref{D1.2}(i) can be formulated in a similar way. Besides, in view of \cref{R1.1}, in normed spaces (which is our setting in the next section) such conditions can be obtained as consequences of those for the subregularity.
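Before turning to the conditions themselves, we include a toy computation (added only to fix the ideas; it is not used below) illustrating the slope notions recalled in \cref{sect1.2}.

```latex
Let $X:=\R$ and $f(x):=|x|$. For any $x>0$ one has $f(x)>0$ and
$f_+(u)=|u|$, and, since $|x|-|u|\le|x-u|$ for all $u\in\R$ with equality
attained at $u:=0$,
\begin{align*}
|\nabla f|(x)=\limsup_{u\to x,\,u\ne x}\dfrac{[\,|x|-|u|\,]_+}{|x-u|}=1
\quad\mbox{and}\quad
|\nabla f|^\diamond(x)=\sup_{u\ne x}\dfrac{[\,|x|-|u|\,]_+}{|x-u|}=1.
\end{align*}
As $f$ is convex and $\partial f(x)=\{1\}$ for $x>0$, this agrees with
\cref{P1.3}(ii): $|\nabla f|^\diamond(x)=|\nabla f|(x)=d(0,\partial f(x))=1$.
```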
The necessary conditions are deduced directly from the definitions of the respective properties, while the sufficient ones come from the application of the Ekeland variational principle. In the convex case, the conditions are necessary and sufficient. In this section, $P$ is a nonempty set, $X$ and $Y$ are metric spaces, and $F:P\times X\rightrightarrows Y$. We assume the parameters $\bar x\in X$, $\bar y\in Y$, $\alpha>0$, ${\delta\in]0,+\infty]}$ and $\mu\in]0,+\infty]$ to be fixed. In what follows, we employ a collection of functions \begin{gather}\label{psi} \psi_p(u,v):={d(v,\bar y)}+i_{\gph F_{p}}(u,v), \quad u\in X,\;v\in Y \end{gather} depending on a parameter $p\in P$. Along with the standard maximum distance on $X\times Y$, we also use a metric depending on a parameter $\gamma>0$: \begin{gather}\label{pdist} d_\gamma((u,v),(x,y)) :=\max\left\{d(u,x),\gamma d(v,y)\right\},\quad u,x\in X,\;v,y\in Y. \end{gather} The next theorem plays a crucial role in the subsequent considerations. The slope and subdifferential/normal cone/coderivative conditions for uniform $\alpha-$subregularity in this paper are consequences of this theorem. \begin{theorem}\label{P1} \begin{enumerate} \item If $F$ is $\alpha-$subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ with ${\delta}$ and $\mu$, then \begin{align}\label{P1-1} \sup_{\substack{(u,v)\in\gph F_p,\,(u,v)\ne (x,y)\\ d(u,\bar x)<\delta+\mu,\,d(v,\bar y)<\alpha\mu}} {\dfrac{d(y,\bar y)-d(v,\bar y)}{d_\gamma((u,v),(x,y))}}\ge\alpha \end{align} for $\gamma:=\alpha^{-1} $, and all $p\in P$, $x\in B_{\delta}(\bar x)$ and $y\in Y$ satisfying \begin{align}\label{P4.1-1} x\notin F_p^{-1} (\bar y),\quad y\in F(p,x){\cap B_{\alpha\mu}(\bar y)}. \end{align} \item Suppose $X$ and $Y$ are complete, and $\gph F_p$ is closed for all $p\in P$.
If inequality \eqref{P1-1} holds for some $\gamma>0$, and all $p\in P$, $x\in B_{\delta+\mu}(\bar x)$ and $y\in Y$ satisfying \eqref{P4.1-1}, then $F$ is $\alpha-$subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ with $\delta$ and $\mu$. \end{enumerate} \end{theorem} \begin{proof} \begin{enumerate} \item Suppose $F$ is $\alpha-$subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ with $\delta$ and $\mu$. Let $p\in P$, $x\in B_{\delta}(\bar x)$ and $y\in Y$ satisfy \eqref{P4.1-1}, ${\gamma:=\alpha^{-1} }$, and $\eta>1$. By \eqref{P4.1-1} and \cref{D1.2}(ii), there exist a $\xi\in]1,\eta[$ such that $\xi{d(y,\bar y)}<\alpha\mu$, and a point $\hat{x}\in F_p^{-1} (\bar y)$ such that $\alpha{d(x,\hat{x})}<\xi{d(y,\bar y)}$. Thus, $(\hat{x},\bar y)\in\gph F_p$, $(\hat{x},\bar y)\ne(x,y)$, \begin{gather*} d(\hat{x},\bar x)\le d(\hat{x},x)+d(x,\bar{x})<\alpha^{-1} \xi d(y,\bar y)+\delta \le\mu+\delta,\quad\text{and}\\ d_\gamma((x,y),(\hat{x},\bar y))=\max\{ d(x,\hat{x}),\gamma d(y,\bar y)\}\le\alpha^{-1} \max\{\xi,1\}d(y,\bar y)=\alpha^{-1} \xi d(y,\bar y). \end{gather*} Hence, \begin{gather*} \sup_{\substack{(u,v)\in\gph F_p,\,(u,v)\ne (x,y)\\ d(u,\bar x)<\delta+\mu,\,d(v,\bar y)<\alpha\mu}} \dfrac{d(y,\bar y)-d(v,\bar y)}{d_\gamma((u,v),(x,y))}\ge \dfrac{d(y,\bar y)}{d_\gamma((\hat{x},\bar y),(x,y))} \ge\alpha\xi^{-1} >\alpha\eta^{-1} . \end{gather*} Letting $\eta\downarrow 1$, we arrive at \eqref{P1-1}. \item Suppose $F$ is not $\alpha-$subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ with $\delta$ and $\mu$. By \cref{D1.2}(ii), there exist points $p\in P$ and $x\in B_\delta(\bar x)$ such that \begin{align*} d(\bar y,F(p,x))<\alpha\min\{d(x,F_p^{-1} (\bar y)),\mu\}. \end{align*} Hence, $x\notin F_p^{-1} (\bar y)$, or equivalently, $\bar y\notin F(p,x)$. Set $\mu_0:=\min\{d(x,F_p^{-1} (\bar y)),\mu\}$. 
Choose a number $\varepsilon$ such that $d(\bar y,F(p,x))<\varepsilon<\alpha\mu_0$, and a point $y\in F(p,x)$ such that ${d(y,\bar y)}<\varepsilon$. The function $\psi_p:X\times Y\rightarrow\R_+\cup\{+\infty\}$, defined by \eqref{psi}, is lower semicontinuous on $\overline{B}_{\delta+\mu}(\bar x)\times{\overline{B}_{\alpha\mu}(\bar y)}$. Besides, \sloppy \begin{align*} \psi_p(x,y)={d(y,\bar y)}< \inf_{\overline{B}_{\delta+\mu}(\bar x)\times {\overline{B}_{\alpha\mu}(\bar y)}}\psi_p+\varepsilon. \end{align*} Let $\gamma>0$. Applying the Ekeland variational principle (\cref{EVP}) to the restriction of $\psi_p$ to the complete metric space $\overline{B}_{\delta+\mu}(\bar x)\times {\overline{B}_{\alpha\mu}(\bar y)}$ with the metric \eqref{pdist}, we can find a point $(\hat{x},\hat{y})\in \overline{B}_{\delta+\mu}(\bar x)\times {\overline{B}_{\alpha\mu}(\bar y)}$ such that \begin{gather}\label{P1-3} d_\gamma((\hat{x},\hat{y}),(x,y))< \mu_0,\quad \psi_p(\hat{x},\hat{y})\le\psi_p(x,y), \\\label{P1-5} \psi_p(\hat{x},\hat{y})\le\psi_p(u,v) +(\varepsilon/\mu_0) d_\gamma((u,v),(\hat{x},\hat{y})) \end{gather} for all $(u,v)\in \overline{B}_{\delta+\mu}(\bar x)\times {\overline{B}_{\alpha\mu}(\bar y)}$. By \eqref{P1-3}, we have $\hat{y}\in F(p,\hat{x})$, and \begin{gather*} d(\hat{x},\bar x)\le d(\hat{x},x)+d(x,\bar x)<\mu_0+\delta \le\mu+\delta,\\ d(\hat{y},\bar y)\le d(y,\bar y)<\varepsilon<\alpha\mu_0\le\alpha\mu. \end{gather*} Besides, $d(\hat{x},x)<\mu_0\le d(x,F_p^{-1} (\bar y))$. This implies $\hat{x}\notin F_p^{-1} (\bar y)$, and consequently, ${\hat{y}\ne \bar y}$. It follows from \eqref{P1-5} that \begin{align*} \sup_{\substack{(u,v)\in\gph F_p,\, (u,v)\ne(\hat{x},\hat{y})\\ d(u,\bar x)<\delta+\mu,\,d(v,\bar y)<\alpha\mu}} \dfrac{d(\hat y,\bar y)-d(v,\bar y)}{d_\gamma((u,v),(\hat{x},\hat{y}))} \le\dfrac{\varepsilon}{\mu_0}<\alpha. \end{align*} The last estimate contradicts \eqref{P1-1}. 
\end{enumerate} \end{proof} \begin{remark}\label{R2.1} \begin{enumerate} \item The expression on the left-hand side of inequality \eqref{P1-1} is the nonlocal $\gamma$-slope \cite[p.~60]{Kru15} at $(x,y)$ of the restriction of the function $\psi_p$, given by \eqref{psi}, to $\gph F_p\cap[B_{\delta+\mu}(\bar x)\times B_{\alpha\mu}(\bar y)]$. \item By the definition of the metric \eqref{pdist}, if inequality \eqref{P1-1} is satisfied with a $\gamma>0$, then it is also satisfied with any $\gamma'\in]0,\gamma[$. This observation is applicable to all slope inequalities in this section. \item The completeness and closedness assumptions in part (ii) of \cref{P1} (and the subsequent statements) can be relaxed: it suffices to require that $\gph F_p\cap [\overline{B}_{\delta+\mu}(\bar x)\times \overline{B}_{\alpha\mu}(\bar y)]$ is complete for all $p\in P$. \item The sufficient condition in part (ii) of \cref{P1} is often hidden in the proofs of dual sufficient conditions.
\item When $X$ and $Y$ are complete, and $\gph F_p$ is closed for all $p\in P$, the gap between the nonlocal necessary and sufficient regularity conditions in parts (i) and (ii) of \cref{P1} is not big: they share the same inequality \eqref{P1-1}; with all the other parameters coinciding, the necessity part (i) guarantees this inequality to hold for all $x\in B_{\delta}(\bar x)$, while the sufficiency part (ii) requires it to hold for all $x$ in a larger set $B_{\delta+\mu}(\bar x)$. \end{enumerate} \end{remark} We now illustrate \cref{P1} by applying it to the local (in $p$) setting in \cref{D1.3}(ii). The application is straightforward. We provide a single illustration of this kind, although the other statements in this and the next section are also applicable to this setting. \begin{corollary} Let $P$ be a metric space, $(\bar p,\bar x,\bar y)\in\gph F$ and $\eta\in]0,+\infty]$. \begin{enumerate} \item If $F$ is $\alpha-$subregular in $x$ uniformly in $p$ at $(\bar p,\bar x,\bar y)$ with $\eta$, $\delta$ and $\mu$, then inequality \eqref{P1-1} holds with $\gamma:=\alpha^{-1} $ for all ${p\in B_\eta(\bar p)}$, $x\in B_{\delta}(\bar x)$ and $y\in Y$ satisfying \eqref{P4.1-1}. \item Suppose $X$ and $Y$ are complete, and $\gph F_p$ is closed for all $p\in B_\eta(\bar p)$. If inequality \eqref{P1-1} holds for some $\gamma>0$, and all $p\in B_\eta(\bar p)$, $x\in B_{\delta+\mu}(\bar x)$ and $y\in Y$ satisfying \eqref{P4.1-1}, then $F$ is $\alpha-$subregular in $x$ uniformly in $p$ at $(\bar p,\bar x,\bar y)$ with $\eta$, $\delta$ and $\mu$. \end{enumerate} \end{corollary} The next statement presents a localized version of \cref{P1}. \begin{corollary}\label{C2.2} \begin{enumerate} \item Suppose $X$ and $Y$ are normed spaces, and $\gph F_p$ is convex for all $p\in P$.
If $F$ is $\alpha-$subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ with ${\delta}$ and $\mu$, then \begin{align}\label{C1-1} \limsup_{\substack{u\to x,\,v\to y,\,(u,v)\in\gph F_p,\,(u,v)\ne (x,y)\\ d(u,\bar x)<\delta+\mu,\,d(v,\bar y)<\alpha\mu}} {\dfrac{d(y,\bar y)-d(v,\bar y)}{d_\gamma((u,v),(x,y))}}\ge\alpha \end{align} for $\gamma:=\alpha^{-1} $, and all $p\in P$, $x\in B_{\delta}(\bar x)$ and $y\in Y$ satisfying \eqref{P4.1-1}. \item Suppose $X$ and $Y$ are complete, and $\gph F_p$ is closed for all $p\in P$. If inequality \eqref{C1-1} holds for some $\gamma>0$, and all $p\in P$, $x\in B_{\delta+\mu}(\bar x)$ and $y\in Y$ satisfying \eqref{P4.1-1}, then $F$ is $\alpha-$subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ with $\delta$ and $\mu$. \end{enumerate} \end{corollary} \begin{proof} In view of \cref{R2.1}(i) and \cref{R2.2}(i), assertion (i) follows from \cref{P1.3}(ii) and \cref{P1}(i), while assertion (ii) is a consequence of \cref{P1.3}(i) and \cref{P1}(ii). \end{proof} \begin{remark}\label{R2.2} \begin{enumerate} \item The expression on the left-hand side of inequality \eqref{C1-1} is the $\gamma$-slope \cite[p.~61]{Kru15} at $(x,y)$ of the restriction of the function $\psi_p$, given by \eqref{psi}, to $\gph F_p\cap[B_{\delta+\mu}(\bar x)\times B_{\alpha\mu}(\bar y)]$. \item The convexity assumption in part (i) of \cref{C2.2} (and the subsequent statements) can be relaxed: it suffices to require that $\gph F_p\cap [\overline{B}_{\delta+\mu}(\bar x)\times \overline{B}_{\alpha\mu}(\bar y)]$ is convex for all $p\in P$. \item In the particular case when $P$ is a neighbourhood of a point $\bar p$ in some metric space, part (ii) of \cref{C2.2} is a quantitative version of \cite[Proposition~3.5]{Iof17.1}. Ngai et al. \cite[Theorem~3]{NgaTroThe13} established a primal sufficient condition for the property under the assumption that the mapping $F(\cdot,\bar x)$ is lower semicontinuous at $\bar p$.
\end{enumerate} \end{remark} \section{Dual Necessary and Sufficient Conditions}\label{S5} In this section, we continue studying the mapping $F:P\times X\rightrightarrows Y$, where $P$ is a nonempty set, while $X$ and $Y$ are assumed to be normed spaces. We also assume the parameters $\bar x\in X$, $\bar y\in Y$, $\alpha>0$, ${\delta\in]0,+\infty]}$ and $\mu\in]0,+\infty]$ to be fixed, and the collection of functions $\psi_p$ to be defined by~\eqref{psi}. The primal and dual parametric product space norms, corresponding to the distance \eqref{pdist}, have the following form: \begin{gather}\label{pnorm} \|(x,y)\|_{\gamma}=\max\{\|x\|,{\gamma}\|y\|\},\quad x\in X,\;y\in Y, \\\label{dnorm} \|(x^*,y^*)\|_{\gamma}=\|x^*\|+\gamma^{-1} \|y^*\|,\quad x^*\in X^*,\;y^*\in Y^*. \end{gather} We denote by $d_\gamma$ the distance in $X^*\times Y^*$ determined by \eqref{dnorm}. \subsection{Dual Sufficient Conditions} In this subsection, we assume additionally that $X$ and $Y$ are Banach spaces, and $\gph F_p$ is closed for all $p\in P$. The next subdifferential sufficient condition for uniform $\alpha-$subregularity is a consequence of \cref{C2.2}(ii) thanks to the subdifferential sum rules in \cref{SR}. \begin{proposition}\label{P5} Let $\sd:=\sd^C$.
If \begin{align}\label{P5-1} d_\gamma\left(0,\sd\psi_p(x,y)\right)\ge\alpha \end{align} for some $\gamma>0$, and all $p\in P$, $x\in B_{\delta+\mu}(\bar x)$ and $y\in Y$ satisfying \eqref{P4.1-1}, then $F$ is $\alpha-$subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ with $\delta$ and~$\mu$. If $X$ and $Y$ are Asplund, then the above assertion is valid with $\sd:=\sd^F$. \end{proposition} \begin{proof} Suppose $F$ is not $\alpha-$subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ with $\delta$ and $\mu$. Let $\gamma>0$. By \cref{C2.2}(ii), there exist points $p\in P$, $x\in B_{\delta+\mu}(\bar x)$ and $y\in Y$ satisfying \eqref{P4.1-1}, and an $\alpha'\in]0,\alpha[$ such that \begin{align*} \|y-\bar y\|-\|v-\bar y\|\le\alpha'\|(u,v)-(x,y)\|_\gamma \end{align*} for all $(u,v)\in\gph F_p\cap [B_{\delta+\mu}(\bar x)\times B_{\alpha\mu}(\bar y)]$ near $(x,y)$. In other words, $(x,y)$ is a local minimizer of the function \begin{align}\label{P5P1} (u,v)\mapsto\psi_p(u,v)+\alpha'\|(u,v)-(x,y)\|_\gamma. \end{align} By \cref{L2.3}, its Fr\'echet\ and, as a consequence, Clarke subdifferential at this point contains $0$. Observe that \eqref{P5P1} is the sum of the function $\psi_p$ and the Lipschitz continuous convex function $(u,v)\mapsto\alpha'\|(u,v)-(x,y)\|_{\gamma}$, and, by \cref{L3}, at any point all subgradients $(x^*,y^*)$ of the latter function satisfy $\|(x^*,y^*)\|_{\gamma}\le\alpha'$. By \cref{SR}(ii), there exists a subgradient $(x^*,y^*) \in \sd^C\psi_p(x,y)$ such that $\|(x^*,y^*)\|_{\gamma}\le\alpha'<\alpha$, which contradicts~\eqref{P5-1}. Let $X$ and $Y$ be Asplund. Choose an $\varepsilon>0$ such that \begin{align*} \varepsilon<\min\left\{{\delta+\mu}-\|x-\bar x\|, {\alpha\mu-\|y-\bar y\|},\alpha-\alpha',\|y-\bar y\|/2,d(x,F_p^{-1} (\bar y))/2\right\}. 
\end{align*} By \cref{SR}(iii), there exist points $x'\in B_\varepsilon({x}),y'\in B_\varepsilon({y})$ with $(x',y')\in\gph F_p$, and a subgradient $(x^*,y^*)\in \sd^F\psi_p(x',y')$ such that \begin{align}\label{P3.1-1} \|(x^*,y^*)\|_{\gamma}<\alpha'+\varepsilon<\alpha. \end{align} Besides, $x'\in{B_{\delta+\mu}(\bar x)}\setminus F_p^{-1} (\bar y)$, $\bar y\ne y'\in B_{\alpha\mu}(\bar y)$ as \begin{gather*} \|y-\bar y\|/2<\|y'-\bar y\|,\quad d(x,F_p^{-1} (\bar y))/2<d(x',F_p^{-1} (\bar y)),\\ \|x'-\bar x\|\le\|x'-x\|+\|x-\bar x\|<{\delta+\mu},\quad \|y'-\bar y\|\le\|y'-y\|+\|y-\bar y\|<\alpha\mu. \end{gather*} It follows from \eqref{P3.1-1} that $d_\gamma\left(0,\sd^F\psi_p{(x',y')}\right)<\alpha$, which contradicts \eqref{P5-1}. \end{proof} \begin{remark}\label{R3.01} Condition \eqref{P5-1} with the Fr\'echet\ subdifferentials is obviously weaker (hence, more efficient) than its version with the Clarke ones. However, it is only applicable in Asplund spaces. \sloppy \end{remark} The key condition \eqref{P5-1} in \cref{P5} involves subdifferentials of the function $\psi_p$. Subgradients of this function belong to $X^*\times Y^*$ and have two component vectors $x^*$ and $y^*$. In view of the representation \eqref{dnorm} of the dual norm on $X^*\times Y^*$, the contributions of the vectors $x^*$ and $y^*$ to the condition \eqref{P5-1} are different. The next corollary exposes this difference. \begin{corollary}\label{C7} If there exists an $\varepsilon>0$ such that $\|x^*\|\ge\alpha$ for all $p\in P$, $x\in B_{\delta+\mu}(\bar x)$ and $y\in Y$ satisfying \eqref{P4.1-1}, and all $(x^*,y^*)\in \sd^C\psi_p(x,y)$ with $\|y^*\|<\varepsilon$; in particular, if \begin{align*} \liminf_{\substack{F_p^{-1} (\bar y)\not\ni x\to\bar x,\, F(p,x)\ni y\to\bar y,\,y^*\to0\\ p\in P,\,y\ne\bar y,\, (x^*,y^*)\in\sd^C\psi_p(x,y)}}\|x^*\|>\alpha, \end{align*} then $F$ is $\alpha-$subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ with $\delta$ and $\mu$.
If $X$ and $Y$ are Asplund, then the above assertion is valid with $\sd^F$ in place of $\sd^C$. \end{corollary} \begin{proof} Suppose $F$ is not $\alpha-$subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ with $\delta$ and $\mu$. Let $\varepsilon>0$ be as in the hypothesis, and set $\gamma:=\varepsilon/\alpha$. By \cref{P5}, there exist $p\in P$, $x\in B_{\delta+\mu}(\bar x)$ and $y\in Y$ satisfying \eqref{P4.1-1}, and a subgradient $(x^*,y^*)\in \sd^{C}\psi_p(x,y)$ ($(x^*,y^*)\in \sd^{F}\psi_p(x,y)$ if $X$ and $Y$ are Asplund) such that $\|(x^*,y^*)\|_{\gamma}<\alpha$. In view of the representation of the dual norm \eqref{dnorm}, this implies $\|x^*\|<\alpha$ and $\|y^*\|<\alpha\gamma=\varepsilon$, a contradiction. \end{proof} The function $\psi_p$ involved in the subdifferential sufficient conditions for the uniform $\alpha-$subregularity in \cref{P5} is itself a sum of two functions. We are now going to apply the sum rules again to obtain sufficient conditions in terms of Clarke and Fr\'echet normals to $\gph F_p$. \begin{theorem}\label{T2} The mapping $F$ is $\alpha-$subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ with $\delta$ and $\mu$ if, for some $ \gamma>0$, and all $p\in P$, $x\in B_{\delta+\mu}(\bar x)$ and $y\in Y$ satisfying \eqref{P4.1-1}, one of the following conditions is satisfied: \begin{enumerate} \item {with $N:=N^C$,} \begin{align}\label{T2-1} d_\gamma((0,-y^*),N_{\gph F_p}(x,y))\ge\alpha \end{align} for all $y^*\in Y^*$ satisfying \begin{align}\label{T1-1} \|y^*\|=1,\quad\langle y^*,y-\bar y\rangle=\|y-\bar y\|; \end{align} \item $X$ and $Y$ are Asplund, and there exists a $\tau\in]0,1[$ such that inequality \eqref{T2-1} holds with $N:=N^F$ for all $y^*\in Y^*$ satisfying \begin{align}\label{T1-2} \|y^*\|=1,\quad \langle y^*,y-\bar y\rangle>\tau\|y-\bar y\|.
\end{align} \end{enumerate} \end{theorem} \if{ \begin{proof} Recall from \eqref{psi} that $\psi_p$ is a sum of two functions: the Lipschitz continuous convex function $v\mapsto g(v):=\|v-\bar y\|$ and the indicator function of the closed set $\gph F_p$. \begin{enumerate} \item By \cref{SR}(ii), $\sd^C\psi_p(x,y)\subset\{0\}\times\sd g(y) +N^C_{\gph F_p}(x,y)$. Since $y\ne\bar y$, by \cref{L3}(ii), $\sd g(y)$ is a set of all $y^*\in Y^*$ satisfying \eqref{T1-1}. Hence, condition (i) implies \eqref{P5-1} with $\sd:=\sd^C$. \sloppy \item Let $X$ and $Y$ be Asplund, and $\tau\in]0,1[$. By \cref{SR}(iii), if $(u^*,v^*)\in\sd^F\psi_p(x,y)$, then, for any $\varepsilon>0$, there exist $x_1\in B_\varepsilon({x})$, $y_1,y_2\in B_\varepsilon(y)$ with $(x_1,y_1)\in\gph F_p$, such that $(u^*,v^*)\in\{0\}\times\sd g(y_2) +N^F_{\gph F_p}(x_1,y_1)+\varepsilon\B_{X^*\times Y^*}$. \end{enumerate} \end{proof} }\fi \begin{proof} Suppose $F$ is not $\alpha-$subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ with $\delta$ and $\mu$. Let $\gamma>0$. In view of \cref{P5}, there exist $p\in P$, $x\in B_{\delta+\mu}(\bar x)$ and $y\in Y$ satisfying \eqref{P4.1-1}, and a subgradient $(\hat x^*,\hat y^*)\in\sd\psi_p(x,y)$ such that $\|(\hat x^*,\hat y^*)\|_{\gamma}<\alpha$, where either $\sd:=\sd^C$ (if $X$ and $Y$ are general Banach spaces) or ${\sd:=\sd^F}$ (if $X$ and $Y$ are Asplund). Recall from \eqref{psi} that $\psi_p$ is a sum of two functions: the Lipschitz continuous convex function $v\mapsto g(v):=\|v-\bar y\|$ and the indicator function of the closed set $\gph F_p$. \begin{enumerate} \item By \cref{SR}(ii), there exist $y^*\in \sd g(y)$ and $(u^*,v^*)\in N^C_{\gph F_p}(x,y)$ such that $(\hat x^*,\hat y^*)=(0,y^*)+(u^*,v^*)$. Thus, \begin{align*} d_\gamma((0,-y^*),N^C_{\gph F_p} (x,y))\le\|(0,y^*)+(u^*,v^*)\|_\gamma=\|(\hat x^*,\hat y^*)\|_{\gamma}<\alpha, \end{align*} which contradicts \eqref{T2-1}. 
Since $y\ne\bar y$, by \cref{L3}, $y^*$ satisfies conditions \eqref{T1-1}. \item Let $X$ and $Y$ be Asplund, and $\tau\in]0,1[$. By \cref{SR}(iii), for any $\varepsilon>0$, there exist $x_1\in B_\varepsilon({x})$, $y_1,y_2\in B_\varepsilon(y)$ with $(x_1,y_1)\in\gph F_p$, and $y^*\in\sd g(y_2)$, $(u^*,v^*)\in N^F_{\gph F_p}(x_1,y_1)$ such that \begin{align}\label{T2-7} \|(0,y^*)+(u^*,v^*)-(\hat x^*,\hat y^*)\|_{\gamma}<\varepsilon. \end{align} The number $\varepsilon$ can be chosen small enough to ensure that $x_1\in B_{\delta+\mu}(\bar x)\setminus F_p^{-1} (\bar y)$, $y_1\in B_{\alpha\mu}(\bar y)$, $y_2\ne\bar y$, and \begin{gather*} \|y_1-\bar y\|\ge\frac{1}{2}\|y-\bar y\|,\; \|y_2-y_1\|<\frac{1-\tau}{4}\|y-\bar y\|,\; \|(\hat x^*,\hat y^*)\|_{\gamma}+\varepsilon<\alpha. \end{gather*} By \cref{L3}, we have $\|y^*\|=1$ and $\langle y^*,y_2-\bar y\rangle=\|y_2-\bar y\|$. Moreover, \begin{gather*} \|y_2-y_1\|<\frac{1-\tau}{4}\|y-\bar y\|\le\frac{1-\tau}{2} \|y_1-\bar y\|, \end{gather*} and consequently, \begin{align*} \langle y^*,y_1-\bar y\rangle &\ge\langle y^*,y_2-\bar y\rangle-\|y_2-y_1\|=\|y_2-\bar y\|-\|y_2-y_1\|\\ &\ge\|y_1-\bar y\|-2\|y_2-y_1\|>\tau\|y_1-\bar y\|. \end{align*} Making use of \eqref{T2-7}, we obtain \begin{align*} d_\gamma((0,-y^*),N^F_{\gph F_p} (x_1,y_1)) \le \|(0, y^*)+(u^*,v^*)\|_\gamma <\|(\hat x^*,\hat y^*)\|_{\gamma}+\varepsilon<\alpha. \end{align*} This contradicts \eqref{T2-1}. \end{enumerate} \end{proof} \begin{remark}\label{R3.02} \begin{enumerate} \item Condition \eqref{T2-1} with the Fr\'echet\ normal cones is obviously weaker (hence, more efficient) than its version with the Clarke ones; cf. \cref{R3.01}. However, the Asplund space sufficient condition for uniform $\alpha-$subregularity in part (ii) of \cref{T2} is not necessarily weaker than its general Banach space version in part~(i), as it replaces the equality in \eqref{T1-1} with a less restrictive inequality in \eqref{T1-2}, which involves an additional parameter $\tau$. 
Of course, $\tau$ can be chosen arbitrarily close to 1, making the difference between the constraints \eqref{T1-1} and \eqref{T1-2} less significant. The conditions \eqref{T1-2} employed in part (ii) of \cref{T2}, which are weaker than \eqref{T1-1}, are due to the approximate subdifferential sum rule (\cref{SR}(iii)) used in its proof. \item The following alternative sufficient condition has been established halfway through the proof of part (ii) of \cref{T2}: \smallskip \emph{$X$ and $Y$ are Asplund, and, given any $\varepsilon>0$, inequality \eqref{T2-1} holds with $N:=N^F$ for all $v\in B_\varepsilon(y)$ and all $y^*\in Y^*$ satisfying \eqref{T1-1} with $v$ in place of $y$.} \smallskip It employs the stronger equality conditions \eqref{T1-1} instead of \eqref{T1-2}, but involves an unknown vector $v$ (arbitrarily close to $y$). Conditions of this type are used by some authors, but we prefer more explicit ones in \cref{T2}(ii) and the statements derived from it. \end{enumerate} \end{remark} The qualitative sufficient conditions for uniform subregularity follow immediately. \begin{corollary}\label{C5.3} The mapping $F$ is subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ if one of the following conditions is satisfied: \begin{enumerate} \item $\displaystyle\sup_{\gamma>0}\liminf_{\substack{F_p^{-1} (\bar y)\not\ni x\to\bar x,\, F(p,x)\ni y\to\bar y\\p\in P,\,y\ne\bar y,\,\|y^*\|=1,\, \langle y^*,y-\bar y\rangle =\|y-\bar y\|}} d_\gamma((0,-y^*),N^C_{\gph F_p}(x,y))>0$; \smallskip \item $X$ and $Y$ are Asplund, and\\ $\displaystyle\sup_{\gamma>0,\tau\in]0,1[} \liminf_{\substack{F_p^{-1} (\bar y)\not\ni x\to\bar x,\, F(p,x)\ni y\to\bar y\\p\in P,\,y\ne\bar y,\,\|y^*\|=1,\, \langle y^*,y-\bar y\rangle>\tau\|y-\bar y\|}}d_\gamma((0,-y^*),N^F_{\gph F_p} (x,y))>0$. \end{enumerate} \end{corollary} \if{ \begin{proof} We provide the proof for item (i). The one of (ii) is analogous. Suppose condition \eqref{C9-2} is satisfied, i.e.
there exist $\gamma,\alpha,\eta\in]0,+\infty]$ such that for all $p\in B_\eta(\bar p),x\in B_{\eta}(\bar x)\setminus F_p^{-1} (\bar y)$, $y\in F(p,x)$ with $0<\|y\|<\eta$, and all $y^*\in Y^*$ satisfying \eqref{T1-1}, it holds \begin{align}\label{T2-4} d_\gamma((0,-y^*),N^C_{\gph F_p}(x,y))>\alpha, \end{align} and $\gph F_p\cap[B_{\eta}(\bar x)\times B_{\eta}(0)]$ is closed for all $p\in B_{\eta}(\bar p)$. Choose $\mu,\delta\in]0,+\infty]$ such that $\max\{\delta,\alpha\mu,\delta+\mu\}:=\eta$, inequality \eqref{T2-4} holds for all $p\in B_\delta(\bar p),x\in B_{\delta}(\bar x)\setminus F_p^{-1} (\bar y)$, $y\in F(p,x)$ with $0<\|y\|<\alpha\mu$, and $\gph F_p\cap[B_{\delta+\mu}(\bar x)\times B_{\alpha\mu}(0)]$ is closed for all $p\in B_{\delta}(\bar p)$. In view of \cref{T2}, $F$ is $\alpha-$subregular at $\bar x$ with $\delta$ and $\mu$. \end{proof} }\fi \if{ The conditions in \cref{C5.3} employs two assumptions: the closedness of $\gph F_p$, and either condition (i) or condition (ii). The next example shows that either condition (i) or condition (ii) cannot be dropped. }\fi The next example illustrates the sufficient conditions for subregularity in \cref{C5.3}. \begin{example} Let $P=X=Y:=\R$, $F(p,x):=\{(p-x)^2\}$ for all $p\in P$ and $x\in X$, and let $\bar y:=0$. By \eqref{G}, $F_p^{-1} (\bar y)=\{p\}$. Thus, $d(x,F_p^{-1} (\bar y))=|x-p|$ and $d(\bar y,F(p,x))=(x-p)^2$ for all $p\in P$ and $x\in X$. Hence, for any $\alpha>0$ and $p\in P$, inequality \eqref{D1.2-2} is violated when $x$ is sufficiently close to $\bar x:=0$, i.e. the mapping $F$ is not {subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$}. Observe that $\gph F_p$ is closed for all $p\in P$, and, for any $p\in P$ and $(x,y)\in\gph F_p$, \begin{align*} (2(x-p),-1)\in N^C_{\gph F_p}(x,y)=N^F_{\gph F_p}(x,y).
\end{align*} Let $p=0$, $x\ne0$, $y=x^2$, and $y^*\in\R$ satisfy \eqref{T1-1} {or \eqref{T1-2}}, hence, $y^*=1$, and, for any $\gamma>0$, $d_\gamma((0,-y^*),(2x,-1))=2|x|\to0$ as $x\downarrow0$. Hence, neither inequality in \cref{C5.3} is satisfied. \end{example} \cref{T2} yields sufficient conditions for uniform $\alpha-$subregularity in terms of coderivatives. \begin{corollary}\label{C3.3} The mapping $F$ is $\alpha-$subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ with $\delta$ and $\mu$ if, for some $\eta\in]0,+\infty]$, and all $p\in P$, $x\in B_{\delta+\mu}(\bar x)$ and $y\in Y$ satisfying \eqref{P4.1-1}, one of the following conditions is satisfied: \begin{enumerate} \item with $D^*:=D^*_C$, for all $y^*\in Y^*$ satisfying \eqref{T1-1}, it holds \begin{align}\label{C3.3-1} d(0,D^*F_p(x,y)(B_{\eta}(y^*)))\ge\alpha; \end{align} \item $X$ and $Y$ are Asplund, and there exists a $\tau\in]0,1[$ such that inequality \eqref{C3.3-1} holds with $D^*:=D^*_F$ for all $y^*\in Y^*$ satisfying \eqref{T1-2}. \end{enumerate} \end{corollary} \begin{proof} Given an $\eta\in]0,+\infty]$, set $\gamma:=\alpha^{-1} \eta$. In view of the representations \eqref{coder} of the coderivative and \eqref{dnorm} of the dual norm, condition \eqref{T2-1} means that $\|u^*\|+\gamma^{-1} {\|v^*-y^*\|}\ge\alpha$ for all $v^*\in Y^*$ and ${u^*\in D^*F_p(x,y)(v^*)}$. The last inequality is obviously satisfied if either $\|u^*\|\ge\alpha$ or $\|v^*-y^*\|\ge\eta$, or equivalently, if $\|u^*\|\ge\alpha$ when $v^*\in B_{\eta}(y^*)$. \sloppy \end{proof} The coderivative sufficient condition \eqref{C3.3-1} can be replaced by its `normalized' (and a little stronger!) version.
\begin{corollary}\label{C3.4} The mapping $F$ is $\alpha-$subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ with $\delta$ and $\mu$ if, for some $\eta\in]0,1[$, and all $p\in P$, $x\in B_{\delta+\mu}(\bar x)$ and $y\in Y$ satisfying \eqref{P4.1-1}, one of the following conditions is satisfied: \begin{enumerate} \item {with $D^*:=D^*_C$}, \begin{align}\label{C3.4-1} d\left(0,D^*F_p(x,y)\left(\frac{v^*}{\|v^*\|}\right) \right) \ge\frac{\alpha}{1-\eta} \end{align} for all $y^*\in Y^*$ satisfying \eqref{T1-1} and $v^*\in B_{\eta}(y^*)$; \item $X$ and $Y$ are Asplund, and there exists a $\tau\in]0,1[$ such that inequality \eqref{C3.4-1} holds with $D^*:=D^*_F$ for all $y^*\in Y^*$ satisfying \eqref{T1-2} and $v^*\in B_{\eta}(y^*)$. \end{enumerate} \end{corollary} \begin{proof} Let $\eta\in]0,1[$, $p\in P$, $x\in B_{\delta+\mu}(\bar x)$ and $y\in Y$ satisfy \eqref{P4.1-1}, and $y^*\in Y^*$ satisfy either \eqref{T1-1} or \eqref{T1-2}. We need to show that, if inequality \eqref{C3.4-1} holds for all $v^*\in B_{\eta}(y^*)$, then inequality \eqref{C3.3-1} holds. First note that, in view of \eqref{T1-1} or \eqref{T1-2}, $\|y^*\|=1$. Let $v^*\in B_{\eta}(y^*)$ and $u^*\in D^*F_p(x,y)(v^*)$. Then $\|v^*\|\ge\|y^*\|-\|v^*-y^*\|>1-\eta>0$. Thus, condition \eqref{C3.4-1} is well defined. Moreover, $u^*/\|v^*\|\in D^*F_p(x,y)(v^*/\|v^*\|)$ and, {in view of \eqref{C3.4-1}}, $\|u^*\|\ge\alpha\|v^*\|/(1-\eta)>\alpha$, i.e. inequality \eqref{C3.3-1} holds. \end{proof} The next qualitative assertion is an immediate consequence of \cref{C3.3}.
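Before turning to it, the coderivative condition \eqref{C3.3-1} can be made concrete on a one-dimensional linear mapping. The computation below is a hedged illustration only, not part of the formal development; it uses the normal cone $N_{\gph F_p}(x,y)=\{(t,t)\mid t\in\R\}$, which is computed for the same mapping in an example later in this section.

```latex
% Illustrative aside: take $P=X=Y:=\R$, $F(p,x):=\{p-x\}$,
% $\bar x=\bar y:=0$. Then $\gph F_p=\{(u,v)\mid v=p-u\}$ and
% $N_{\gph F_p}(x,y)=\{(t,t)\mid t\in\R\}$, so, by \eqref{coder},
% $D^*F_p(x,y)(v^*)=\{-v^*\}$ for every $v^*\in\R$. Hence, for any
% $y^*$ with $\|y^*\|=1$ and any $\eta\in]0,1[$,
\begin{align*}
  d\bigl(0,D^*F_p(x,y)(B_{\eta}(y^*))\bigr)
  =\inf_{\|v^*-y^*\|<\eta}\|v^*\|=1-\eta,
\end{align*}
% and condition \eqref{C3.3-1} holds whenever $\alpha\le1-\eta$.
% Letting $\eta\downarrow0$, \cref{C3.3} yields the uniform
% $\alpha-$subregularity of this $F$ for every $\alpha<1$, in
% agreement with the exact range $\alpha\in]0,1]$ obtained by the
% direct computation in the example below.
```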
\begin{corollary}\label{C3.7} The mapping $F$ is subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ if one of the following conditions is satisfied: \begin{enumerate} \item $\displaystyle\lim_{\delta\downarrow0}\; \inf_{\substack{x\in B_{\delta}(\bar x)\setminus F_p^{-1} (\bar y),\, \bar y\ne y\in F(p,x)\cap B_{\delta}(\bar y)\\p\in P,\, \|y^*\|=1,\,\langle y^*,y-\bar y\rangle=\|y-\bar y\|}} d(0,D^*_CF_p(x,y)(B_{\delta}(y^*)))>0;$ \smallskip \item $X$ and $Y$ are Asplund, and\\ $\displaystyle\lim_{\delta\downarrow0,\,\tau\uparrow1}\; \inf_{\substack{x\in B_{\delta}(\bar x)\setminus F_p^{-1} (\bar y),\, \bar y\ne y\in F(p,x)\cap B_{\delta}(\bar y)\\ p\in P,\, \|y^*\|=1,\,\langle y^*,y-\bar y\rangle>\tau\|y-\bar y\|}} d(0,D^*_FF_p(x,y)(B_{\delta}(y^*)))>0.$ \end{enumerate} \end{corollary} \begin{remark}\label{R3.1} \begin{enumerate} \item In the case when $P$ is a neighborhood of a given point $\bar p$ in a metric space, \cref{C3.7} (taking into account \cref{R3.02}(ii) in some instances) improves \cite[Theorem~3.6]{LedZhu99}, \cite[Theorem~3.4]{NgaThe04}, \cite[Theorem~3.2]{LeeTamYen08}, \cite[Theorem~3.5]{HuyYao09}, \cite[Theorem~3.1]{HuyKimNin12}, \cite[Corollary~2.2]{Ngh14}, \cite[Theorem~1]{ChuKim16} (in the linear setting), and \cite[Theorem~4.1(e)]{Iof17.1}. \if{ \NDC{5.1.20 Should we explain in details about the improvements?} }\fi \item Clarke normal cones in this section can be replaced by Ioffe's \emph{$G$-nor\-mal cones}~\cite{Iof17}. \end{enumerate} \end{remark} \subsection{Dual Necessary Conditions}\label{S4.2} In this subsection, $X$ and $Y$ are normed spaces, $F:P\times X\rightrightarrows Y$, $\bar x\in X$, $\bar y\in Y$, $\alpha>0$, ${\delta\in]0,+\infty]}$, $\mu\in]0,+\infty]$, and we assume that $\gph F_p$ is convex for all $p\in P$. The next statement provides a necessary condition for uniform $\alpha-$subregularity in terms of subdifferentials of the function $\psi_p$ defined by \eqref{psi}. 
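Before stating it, the following hedged illustration (not part of the formal development) shows what $\sd\psi_p$ and the bound \eqref{P5-1} look like in the simplest convex case, using the linear mapping $F(p,x):=\{p-x\}$ with $P=X=Y:=\R$ and $\bar x=\bar y:=0$, which is examined in an example later in this section.

```latex
% For $(x,y)\in\gph F_p$ with $y>0$, the convex sum rule gives
\begin{align*}
  \sd\psi_p(x,y)=\{0\}\times\sd\|\cdot-\bar y\|(y)+N_{\gph F_p}(x,y)
  =\{(t,t+1)\mid t\in\R\},
\end{align*}
% since $N_{\gph F_p}(x,y)=\{(t,t)\mid t\in\R\}$. With
% $\gamma:=\alpha^{-1}$ and $\alpha\in]0,1]$, the dual norm
% \eqref{dnorm} yields
\begin{align*}
  d_\gamma\bigl(0,\sd\psi_p(x,y)\bigr)
  =\inf_{t\in\R}\bigl(|t|+\alpha|t+1|\bigr)=\alpha,
\end{align*}
% so the lower bound $\alpha$ in \eqref{P5-1} is attained exactly,
% matching the fact that this $F$ is $\alpha-$subregular in $x$
% uniformly in $p$ for every $\alpha\in]0,1]$.
```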
\begin{proposition}\label{P5.1} If $F$ is $\alpha-$subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ with $\delta$ and $\mu$, then inequality \eqref{P5-1} is satisfied with $\gamma:=\alpha^{-1} $ for all $p\in P$, $x\in B_{\delta}(\bar x)$ and $y\in Y$ satisfying~\eqref{P4.1-1}. \end{proposition} \begin{proof} Under the assumptions made, the function $\psi_p$ is convex for all $p\in P$. Let ${\gamma:=\alpha^{-1} }$, and $p\in P$, $x\in B_{\delta}(\bar x)$ and $y\in Y$ satisfy \eqref{P4.1-1}. For any $(x^*,y^*)\in\sd\psi_p(x,y)$, we have \begin{align*} \|(x^*,y^*)\|_{\gamma} &=\sup_{\substack{(u,v)\ne(0,0)}} \dfrac{\ang{(x^*,y^*),(u,v)}} {\|(u,v)\|_{\gamma}}\\ &=\limsup_{\substack{u{\to}x,\, v\to y\\ (u,v)\ne(x,y)}} \dfrac{-\ang{(x^*,y^*),(u,v) - (x,y)}}{\|(u,v)-(x,y)\|_{\gamma}}\\ &\ge\limsup_{\substack{u{\to}x,\, v\to y\\ (u,v)\ne(x,y)}} \dfrac{\psi_p(x,y)-\psi_p(u,v)} {\|(u,v)-(x,y)\|_{\gamma}}\\ &=\limsup_{\substack{u\to x,\,v\to y\\(u,v)\in\gph F_p,\,(u,v)\ne(x,y)}} \dfrac{\|y-\bar y\|-\|v-\bar y\|}{\|(u,v)-(x,y)\|_\gamma}. \end{align*} By \cref{C2.2}(i), we have $\|(x^*,y^*)\|_{\gamma}\ge\alpha.$ Taking the infimum in the left-hand side\ of the last inequality over $(x^*,y^*)\in\sd\psi_p(x,y)$, we obtain inequality \eqref{P5-1}. \end{proof} Combining the above statement with \cref{P5}, we obtain a complete subdifferential characterization of the uniform subregularity in the convex setting. \begin{corollary} Let $X$ and $Y$ be Banach, and $\gph F_p$ be closed for all ${p\in P}$. The mapping $F$ is subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ if and only if \begin{align*} \sup_{\gamma>0} \liminf_{\substack{x\to\bar x,\, y\to\bar y\\p\in P,\,x\notin F_p^{-1} (\bar y),\,\bar y\ne y\in F(p,x)}} d_\gamma\left(0,\sd\psi_p(x,y)\right)>0. \end{align*} \end{corollary} \if{ \begin{remark} In view of the convexity of $\psi_p$ and \cref{P1.3}(ii), we have $|\nabla \psi_p|_\gamma(x,y)=d_\gamma\left(0,\sd\psi_p(x,y)\right)$.
Hence, \cref{C3.1} can be seen as the dual counterpart of \cref{T4.1}. \end{remark} }\fi The next corollary follows from \cref{P5.1} in view of the representation \eqref{dnorm} of the dual norm. \begin{corollary}\label{C7.1} If $F$ is $\alpha-$subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ with $\delta$ and ${\mu}$, then $\|x^*\|\ge\alpha(1-\|y^*\|)$ for all $p\in P$, $x\in B_{\delta}(\bar x)$ and $y\in Y$ satisfying \eqref{P4.1-1}, and all $(x^*,y^*)\in \sd\psi_p(x,y)$. As a consequence, \begin{align*} \liminf_{\substack{F_p^{-1} (\bar y)\not\ni x\to\bar x,\,F(p,x)\ni y\to \bar y,\,y^*\to0\\p\in P,\,y\ne\bar y,\, (x^*,y^*)\in\sd\psi_p(x,y)}} \|x^*\|\ge\alpha. \end{align*} \end{corollary} The next statement gives a partial converse to \cref{T2}. \begin{theorem}\label{C3.08} If $F$ is $\alpha-$subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ with $\delta$ and $\mu$, then for all $p\in P$, $x\in B_{\delta}(\bar x)$ and $y\in Y$ satisfying \eqref{P4.1-1}, and all $y^*\in Y^*$ satisfying \eqref{T1-1}, inequality \eqref{T2-1} is satisfied with $\gamma:=\alpha^{-1} $. \end{theorem} \begin{proof} Observe that $\psi_p$ is the sum of the convex continuous function $v\mapsto g(v):={\|v-\bar y\|}$ and the indicator function of the convex set $\gph F_p$. By \cref{SR}(i), $\sd\psi_p(x,y)=\{0\}\times\sd g(y)+N_{\gph F_p}(x,y)$. The assertion follows from \cref{P5.1} in view of \cref{L3}(ii). \sloppy \end{proof} Combining \cref{T2,C3.08}, we can formulate a necessary and sufficient characterization of the uniform subregularity in the convex setting. \begin{corollary}\label{C3.8} Let $X$ and $Y$ be Banach, and $\gph F_{p}$ be closed for all ${p\in P}$. 
The mapping $F$ is subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ if and only if \begin{align}\label{C3.9-1} \sup_{\gamma>0}\liminf_{\substack{F_p^{-1} (\bar y)\not\ni x\to\bar x,\, F(p,x)\ni y\to\bar y\\p\in P,\,y\ne\bar y,\,\|y^*\|=1,\, \langle y^*,y-\bar y\rangle=\|y-\bar y\|}} d_\gamma((0,-y^*),N_{\gph F_p}(x,y))>0. \end{align} \end{corollary} {The next example illustrates the necessary condition in \cref{C3.08}}. \begin{example} Let $P=X=Y:=\R$, $F(p,x):=\{p-x\}$ for all $p\in P$ and $x\in X$, and let $\bar x=\bar y:=0$. By \eqref{G}, $F_p^{-1} (\bar y)=\{p\}$. Thus, $d(x,F_p^{-1} (\bar y))=d(\bar y,F(p,x))=|x-p|$ for all $p\in P$ and $x\in X$. Hence, inequality \eqref{D1.2-2} is satisfied for all $p\in P$, $x\in X$, and $\alpha\in]0,1]$, i.e. the mapping $F$ is $\alpha-$subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ for any $\alpha\in]0,1]$. The set $\gph F_p=\{(x,y)\mid y=p-x\}$ is closed and convex for all $p\in P$, and $N_{\gph F_p}(x,y)=\{(t,t)\mid t\in\R\}$ for any $(x,y)\in\gph F_p$. Let $y^*\in\R$ satisfy \eqref{T1-1}. Then $y^*=1$ if $y>0$, and $y^*=-1$ if $y<0$. It is easy to check that, given a $\gamma>0$, in both cases the distance $d_\gamma((0,-y^*),N_{\gph F_p}(x,y))$ equals 1 if $\gamma\le1$, or $\gamma^{-1} $ if $\gamma>1$. Hence, condition \eqref{C3.9-1} is satisfied, confirming the uniform subregularity of $F$. \end{example} The next statement is a consequence of \cref{C3.08}. It is in a sense a partial converse to \cref{C3.3}. \begin{corollary}\label{C4.9} If $F$ is $\alpha-$subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ with ${\delta}$ and $\mu$, then $d(0,D^*F_p(x,y)(B_{\eta}(y^*)))\ge\alpha(1-\eta)$ for all ${\eta\in]0,1[}$, $p\in P$, $x\in B_{\delta}(\bar x)$ and $y\in Y$ satisfying \eqref{P4.1-1}, and all $y^*\in Y^*$ satisfying \eqref{T1-1}.
\sloppy \end{corollary} \begin{proof} In view of the representations \eqref{coder} of the coderivative and \eqref{dnorm} of the dual norm, condition \eqref{T2-1} with $\gamma:=\alpha^{-1} $ means that $\|u^*\|+\alpha{\|v^*-y^*\|}\ge\alpha$ for all $v^*\in Y^*$ and ${u^*\in D^*F_p(x,y)(v^*)}$. Hence, it yields $\|u^*\|>\alpha(1-\eta)$ if $\|v^*-y^*\|<\eta$. \sloppy \end{proof} Combining the above statement with \cref{C3.3}, we obtain a {complete} coderivative characterization of the uniform $\alpha-$subregularity in the convex setting. It improves \cite[Theorem~3]{ChuKim16} (in the linear case). \begin{corollary} Let $X$ and $Y$ be Banach, and $\gph F_{p}$ be closed for all $p\in P$. The mapping $F$ is $\alpha-$subregular in $x$ uniformly in $p$ over $P$ at $(\bar x,\bar y)$ if and only~if $$ \lim_{\delta\downarrow0}\;\inf_{\substack{x\in B_{\delta}(\bar x)\setminus F_p^{-1} (\bar y),\, \bar y\ne y\in F(p,x)\cap B_{\delta}(\bar y)\\ p\in P,\,\|y^*\|=1,\, \langle y^*,y-\bar y\rangle=\|y-\bar y\|}} d(0,D^*F_p(x,y)(B_{\delta}(y^*)))\ge\alpha. $$ \end{corollary} \if{ The next corollary provides quantitative dual characterizations of Robinson subregularity. \begin{corollary}\label{T4.2} Let $X$ and $Y$ be normed spaces, $\alpha>0$, $\delta\in]0,+\infty]$ and $\mu\in]0,+\infty]$. Suppose $\gph F_{\bar p}$ is convex, and the mapping $F$ is Robinson $\alpha-$subregular at $\bar x$ with $\delta$ and $\mu$. The following statements hold. \begin{enumerate} \item Inequality \eqref{T4-2} holds for $\gamma:=\alpha^{-1} $ and all $x,y$ satisfying \eqref{P4.2-1}, and $y^*\in Y^*$ satisfying \eqref{T1-1}. \item Let $\alpha<\frac{1}{\sqrt{2}}$. Inequality \eqref{T3.3-2} holds for some $\gamma>0$ and all $x,y$ satisfying \eqref{P4.2-1}, $y^*\in J_\gamma(y)$, and $x^*\in D^*F_{\bar p}(x,y)(y^*)$. \end{enumerate} \end{corollary} \begin{proof} Cf. the proof of (??).
\end{proof} }\fi \if{ Thanks to Corollaries~\ref{T4}, \ref{T5} and \ref{T4.2}, we can obtain complete dual characterizations of Robinson subregularity. \begin{corollary}\label{C3.9} Let $X$ and $Y$ be Banach, $\gph F_{\bar p}$ be closed and convex. The mapping $F$ is Robinson subregular at $(\bar p,\bar x)$ if and only if one of the following conditions is satisfied: \begin{enumerate} \item \begin{align*} \sup_{\gamma>0}\liminf_{\substack{x\to\bar x,\, y\to0,\,x\notin G(\bar p)\\y\in F_{\bar p}(x)\setminus\{0\},\,\|y^*\|=1,\, \langle y^*,y\rangle=\|y\|}}d_\gamma((0,-y^*),N_{\gph F_{\bar p}}(x,y))>0; \end{align*} \item \begin{align*} \liminf_{\lambda\downarrow0} \big\{\|x^*\|\mid x^*\in D^*F_{\bar p}(x)(y^*),\, p\in B_\lambda(\bar p),\,x\in B_\lambda(\bar x)\setminus G(\bar p),\\ y\in F_{\bar p}(x)\cap (B_{\lambda}(0)\setminus\{0\}),\, y^*\in J_\lambda(y)\big\}>0. \end{align*} \end{enumerate} \end{corollary} \begin{remark} The characterization in item (i) is new, while the one in item (ii) recaptures \cite[Theorem~4]{ChuKim16}. \end{remark} }\fi \section{Metric Subregularity, Metric Regularity and Implicit Multifunctions}\label{S6} In this section, we illustrate the necessary and sufficient conditions for the uniform subregularity established in the preceding sections by characterizing several conventional properties of set-valued mappings. \subsection{Metric Subregularity} As observed in the Introduction, the conventional regularity properties in \cref{D1.1} are particular cases of the uniform regularity properties in \cref{D1.2} corresponding to $P$ being a singleton, which practically means that the set-valued mapping\ $F$ does not involve a parameter. The next three statements, which are immediate consequences of the corresponding `parametric' ones in \cref{S4,S5}, illustrate this observation for the case of subregularity.
Here $X$ and $Y$ are normed spaces, $F:X\rightrightarrows Y$, $(\bar x,\bar y)\in\gph F$, $\alpha>0$, $\delta\in]0,+\infty]$ and $\mu\in]0,+\infty]$. \begin{proposition}\label{P5.2} \begin{enumerate} \item Suppose $\gph F$ is convex. If $F$ is $\alpha-$subregular at $(\bar x,\bar y)$ with $\delta$ and $\mu$, then \begin{align}\label{P5.1-1} \limsup_{\substack{u\to x,\,v\to y,\,(u,v)\in\gph F,\,(u,v)\ne (x,y)\\ d(u,\bar x)<\delta+\mu,\,d(v,\bar y)<\alpha\mu}} {\dfrac{\|y-\bar y\|-\|v-\bar y\|}{{\|(u-x,v-y)\|_\gamma}}}\ge\alpha \end{align} for $\gamma:=\alpha^{-1} $, and all $x\in B_{\delta}(\bar x)$ and $y\in Y$ satisfying \begin{align}\label{P5.1-2} x\notin F^{-1} (\bar y),\quad y\in F(x){\cap B_{\alpha\mu}(\bar y)}. \end{align} \item Suppose $X$ and $Y$ are Banach, and $\gph F$ is closed. If inequality \eqref{P5.1-1} holds for some $\gamma>0$, and all $x\in B_{\delta+\mu}(\bar x)$ and $y\in Y$ satisfying \eqref{P5.1-2}, then $F$ is $\alpha-$subregu\-lar at $(\bar x,\bar y)$ with $\delta$ and $\mu$. \end{enumerate} \end{proposition} \begin{proof} The statement is a consequence of \cref{C2.2}. \end{proof} \begin{proposition} \begin{enumerate} \item Suppose $\gph F$ is convex. If $F$ is $\alpha-$subregu\-lar at $(\bar x,\bar y)$ with $\delta$ and $\mu$, then \begin{align}\label{P5.2-1} d_\gamma((0,-y^*),N_{\gph F}(x,y))\ge\alpha \end{align} for $\gamma:=\alpha^{-1} $, all $x\in B_{\delta}(\bar x)$ and $y\in Y$ satisfying \eqref{P5.1-2}, and all $y^*\in Y^*$ satisfying~\eqref{T1-1}. \item Suppose $X$ and $Y$ are Banach, and $\gph F$ is closed.
The mapping $F$ is $\alpha-$subregu\-lar at $(\bar x,\bar y)$ with $\delta$ and $\mu$ if, for some $ \gamma>0$, and all ${x\in B_{\delta+\mu}(\bar x)}$ and $y\in Y$ satisfying \eqref{P5.1-2}, one of the following conditions is satisfied: \begin{enumerate} \item inequality \eqref{P5.2-1} holds with $N:=N^C$ for all $y^*\in Y^*$ satisfying \eqref{T1-1}; \item $X$ and $Y$ are Asplund, and there exists a $\tau\in]0,1[$ such that inequality \eqref{P5.2-1} holds with ${N:=N^F}$ for all $y^*\in Y^*$ satisfying \eqref{T1-2}. \end{enumerate} \end{enumerate} \end{proposition} \begin{proof} The statement is a consequence of \cref{T2,C3.08}. \end{proof} \begin{proposition}\label{P5.3} \begin{enumerate} \item Suppose $\gph F$ is convex. If $F$ is $\alpha-$subregular at $(\bar x,\bar y)$ with $\delta$ and $\mu$, then $d(0,D^*F(x,y)(B_{\eta}(y^*)))\ge\alpha(1-\eta)$ for any $\eta\in]0,1[$, all $x\in B_{\delta}(\bar x)$ and $y\in Y$ satisfying \eqref{P5.1-2}, and all $y^*\in Y^*$ satisfying~\eqref{T1-1}. \sloppy \item Suppose $X$ and $Y$ are Banach, and $\gph F$ is closed. The mapping $F$ is $\alpha-$subregular at $(\bar x,\bar y)$ with $\delta$ and $\mu$ if, for some $\eta\in]0,+\infty]$, and all $x\in B_{\delta+\mu}(\bar x)$ and $y\in Y$ satisfying \eqref{P5.1-2}, one of the following conditions is satisfied: \begin{enumerate} \item with $D^*:=D^*_C$, for all $y^*\in Y^*$ satisfying \eqref{T1-1}, it holds \begin{align}\label{P5.3-1} d(0,D^*F(x,y)(B_{\eta}(y^*)))\ge\alpha; \end{align} \item $X$ and $Y$ are Asplund, and there exists a $\tau\in]0,1[$ such that inequality \eqref{P5.3-1} holds with ${D^*:=D^*_F}$ for all $y^*\in Y^*$ satisfying \eqref{T1-2}. \end{enumerate} \end{enumerate} \end{proposition} \begin{proof} The statement is a consequence of \cref{C3.3,C4.9}. 
\end{proof} \begin{remark} \begin{enumerate} \item In \cref{P5.2}(ii), it is sufficient to assume that $X$ and $Y$ are complete metric spaces (with distances in place of norms in condition \eqref{P5.1-1}), or even that $\gph F$ is complete; cf. \cref{R2.1}(iii). In this setting, the sufficient condition in \cref{P5.2}(ii) can be viewed as a quantitative version of \cite[Corollary~5.8(d)]{Kru15} and \cite[Theorem~2.4(a)]{Iof17.1}. \item Proposition \ref{P5.3} improves \cite[Theorem~5.3]{LiMor12}. In the linear setting, part (ii) of this proposition improves \cite[Theorem~3.3]{LiMor12}, \cite[Theorem~6]{NgaTroThe13}, \cite[Theorem~8]{ChuKim16}, \cite[Theorem~2.6]{Iof17.1}, and the corresponding parts of \cite[Corollary~5.8]{Kru15}. \cref{P5.3}(ii) with condition (a) recaptures \cite[Theorem~3.2]{ZheNg10}. \end{enumerate} \end{remark} \subsection{Metric Regularity} The conventional metric regularity is a particular case of the uniform regularity property in \cref{D1.2}(i) corresponding to $P$ being a singleton. At the same time, as follows from the observation in \cref{R1.1}, in the normed space setting it can be treated as a particular case of the uniform subregularity property in \cref{D1.2}(ii) for the set-valued mapping\ $\widehat{F}(y,x):={F(x)-y}$, $(y,x)\in Y\times X$, with $y$ considered as a parameter. Obviously $(\bar x,\bar y)\in\gph F$ if and only if $(\bar y,\bar x,0)\in\gph\widehat{F}$. The next three statements, which are immediate consequences of the corresponding `parametric' ones in \cref{S4,S5}, illustrate the above observation. Here $X$ and $Y$ are normed spaces, $F:X\rightrightarrows Y$, $(\bar x,\bar y)\in\gph F$, $\alpha>0$, $\delta\in]0,+\infty]$ and $\mu\in]0,+\infty]$. \begin{proposition}\label{P5.4} \begin{enumerate} \item Suppose $\gph F$ is convex.
If $F$ is $\alpha-$regular at $(\bar x,\bar y)$ with $\delta$ and $\mu$, then \begin{align}\label{P5.4-1} \limsup_{\substack{u\to x,\,v\to z,\,(u,v)\in\gph F,\,(u,v)\ne (x,z)\\ d(u,\bar x)<\delta+\mu,\,d(v,y)<\alpha\mu}} {\dfrac{\|z-y\|-\|v-y\|}{{\|(u-x,v-z)\|_\gamma}}}\ge\alpha \end{align} for $\gamma:=\alpha^{-1}$, and all {$x\in B_\delta(\bar x)$}, $y\in B_{\delta}(\bar y)$ and $z\in Y$ satisfying \begin{align}\label{P5.4-2} {x\notin F^{-1} (y)},\quad z\in F(x){\cap B_{\alpha\mu}(y)}. \end{align} \item Suppose $X$ and $Y$ are Banach spaces, and $\gph F$ is closed. The mapping $F$ is $\alpha-$regular at $(\bar x,\bar y)$ with $\delta$ and $\mu$ if inequality \eqref{P5.4-1} holds with some $\gamma>0$ for all {$x\in B_{\delta+\mu}(\bar x)$}, $y\in B_{\delta}(\bar y)$ and $z\in Y$ satisfying \eqref{P5.4-2}. \end{enumerate} \end{proposition} \begin{proof} The statement is a consequence of \cref{C2.2}. \end{proof} \begin{proposition} \begin{enumerate} \item Suppose $\gph F$ is convex. If $F$ is $\alpha-$regular at $(\bar x,\bar y)$ with $\delta$ and $\mu$, then \begin{align}\label{T2-5} d_\gamma((0,-y^*),N_{\gph F} (x,z))\ge\alpha \end{align} for $\gamma:=\alpha^{-1}$, and all {$x\in B_\delta(\bar x)$}, $y\in B_{\delta}(\bar y)$ and $z\in Y$ satisfying \eqref{P5.4-2}, and $y^*\in Y^*$ satisfying \begin{align}\label{T2-8} \|y^*\|=1,\quad\langle y^*,z-y\rangle=\|z-y\|. \end{align} \item Suppose $X$ and $Y$ are Banach, and $\gph F$ is closed.
The mapping $F$ is $\alpha-$regular at $(\bar x,\bar y)$ with $\delta$ and $\mu$ if, for some $\gamma>0$, and all {$x\in B_{\delta+\mu}(\bar x)$}, {$y\in B_{\delta}(\bar y)$} and $z\in Y$ satisfying \eqref{P5.4-2}, one of the following conditions holds: \begin{enumerate} \item inequality \eqref{T2-5} holds with $N:=N^C$ for all $y^*\in Y^*$ satisfying \eqref{T2-8}; \item $X$ and $Y$ are Asplund, and there exists a $\tau\in]0,1[$ such that inequality \eqref{T2-5} holds with ${N:=N^F}$ for all $y^*\in Y^*$ satisfying \begin{align}\label{T2-9} \|y^*\|=1,\quad\langle y^*,z-y\rangle>\tau\|z-y\|. \end{align} \end{enumerate} \end{enumerate} \end{proposition} \begin{proof} The statement is a consequence of \cref{T2,C3.08}. \end{proof} \begin{proposition}\label{P5.6} \begin{enumerate} \item Suppose $\gph F$ is convex. If $F$ is $\alpha-$regular at $(\bar x,\bar y)$ with $\delta$ and $\mu$, then $d(0,D^*F(x,z)(B_{\eta}(y^*)))\ge\alpha(1-\eta)$ for all ${\eta\in]0,1[}$, {$x\in B_\delta(\bar x)$}, $y\in B_{\delta}(\bar y)$ and $z\in Y$ satisfying \eqref{P5.4-2}, and all $y^*\in Y^*$ satisfying \eqref{T2-8}. \sloppy \item Suppose $X$ and $Y$ are Banach, and $\gph F$ is closed. The mapping $F$ is $\alpha-$regular at $(\bar x,\bar y)$ with $\delta$ and $\mu$ if, for some $\eta\in]0,+\infty]$ and all {$x\in B_{\delta+\mu}(\bar x)$}, {$y\in B_{\delta}(\bar y)$} and $z\in Y$ satisfying \eqref{P5.4-2}, one of the following conditions holds: \begin{enumerate} \item with $D^*:=D^*_C$, for all $y^*\in Y^*$ satisfying \eqref{T2-8}, it holds \begin{align}\label{C3.3-3} d(0,D^*F(x,z)(B_{\eta}(y^*)))\ge\alpha; \end{align} \item $X$ and $Y$ are Asplund, and there exists a $\tau\in]0,1[$ such that inequality \eqref{C3.3-3} holds with ${D^*:=D^*_F}$ for all $y^*\in Y^*$ satisfying \eqref{T2-9}. \end{enumerate} \end{enumerate} \end{proposition} \begin{proof} The statement is a consequence of \cref{C3.3,C4.9}.
\end{proof} \begin{remark} \begin{enumerate} \item In the normed space setting, the sufficient condition in Propositi\-on~\ref{P5.4}(ii) can be viewed as a quantitative version of \cite[Theorem~1]{Iof00} and \cite[Theorem~2.4(a)]{Iof17.1}. \item \cref{P5.6}(ii) enhances \cite[Theorem~3.7]{Chu15}. \cref{P5.6}(ii) with condition (a) improves \cite[Corollary~3.1]{HuyKimNin12} and \cite[Theorem~3.5]{Chu15} (in the linear case), while with condition (b) it improves (in the linear case) \cite[Theorem~3.1]{Chu15} and \cite[Theorem~7]{ChuKim16}. \end{enumerate} \end{remark} \subsection{Implicit Multifunctions} Now we get back to the implicit multifunction \eqref{G} and consider its particular case corresponding to the parametric inclusion $\bar y\in F(p,x)$ (with fixed left-hand side), i.e. \begin{align}\label{G2} G(p):=\{x\in X\mid \bar y\in F(p,x)\},\quad p\in P, \end{align} where $F:P\times X\rightrightarrows Y$, and $P$, $X$ and $Y$ are metric spaces. Stability properties of implicit multifunctions, i.e. solution sets of parametric inclusions, are of great importance for many applications and have been the subject of numerous publications; cf., e.g., \cite{Rob75.2,Rob76.2,Bor86,LedZhu99,AzeCorLuc02,KlaKum02, NgaThe04,AzeBen08, LeeTamYen08,HuyYao09,YenYao09,ChiYaoYen10, ChuKruYao11, HuyKimNin12,NgaTroThe13,DonRoc14,Ngh14, Chu15,ChuKim16,GfrOut16.2,Iof17,Iof17.1,Ude21}. Here, for illustration, we focus on the best known perturbation stability property of set-valued mappings, called the \emph{Aubin property}; cf. \cite{DonRoc14,Mor06.1}. \begin{definition} A mapping $G:P\rightrightarrows X$ between metric spaces has the Aubin property at $(\bar p,\bar x)\in\gph G$ with rate $l>0$ if there exist $\eta\in]0,+\infty]$, $\delta\in]0,+\infty]$ and $\mu\in]0,+\infty]$ such that \begin{align*} d(x,G(p))\le l\;d(p,p') \end{align*} for all $p,p'\in B_\eta(\bar p)$ with $d(p,p')<\mu$, and $x\in G(p')\cap B_\delta(\bar x)$.
\end{definition} Similar to Definitions~\ref{D1.1}, \ref{D1.2} and \ref{D1.3}, the inequality $d(p,p')<\mu$ is not essential in the above definition and can be dropped together with the constant $\mu$. We keep them for consistency with the definitions and characterizations in the preceding sections. We also establish connections between the constant $\mu$ and the corresponding constants in the other definitions. Given a point ${(\bar p,\bar x,\bar y)\in\gph F}$ and a number $\alpha>0$, the uniform $\alpha-$subregula\-rity property of $F$ at $(\bar p,\bar x,\bar y)$ in \cref{D1.3}(ii) means that there exist $\eta\in]0,+\infty]$, ${\delta\in]0,+\infty]}$ and $\mu\in]0,+\infty]$ such that \begin{align}\label{43} \alpha d(x,G(p))\le d(\bar y,F(p,x)) \end{align} for all $p\in B_\eta(\bar p)$ and $x\in B_\delta(\bar x)$ with ${d(\bar y,F(p,x))}<\alpha\mu$. Several primal and dual sufficient and necessary conditions for this property have been formulated in the preceding sections. Inequality \eqref{43} provides an estimate for the distance from $x$ to the value of the implicit multifunction \eqref{G2} at $p$ in terms of the residual of the parametric inclusion. However, this estimate does not say much about the behaviour of the implicit multifunction. An additional assumption on the mapping $F$ is needed, which would allow one to get rid of $F$ in the right-hand side of the inequality \eqref{43}. This additional assumption is given in the next definition, which is a modification of the second part of \cite[Definition~3.1]{Iof17.1}, where we borrow the terminology from. A similar property was considered in \cite{KlaKum02}, where the authors used the name \emph{Lipschitz lower semicontinuity}. \begin{definition} Let $l>0$.
The mapping $F$ is said to $l-$recede in $p$ uniformly in $x$ at $(\bar p,\bar x,\bar y)$ if there exist $\eta\in]0,+\infty]$, $\delta\in]0,+\infty]$ and $\mu\in]0,+\infty]$ such that \begin{align}\label{D5.1-1} d(\bar y,F(p,x))\le l d(p,p') \end{align} for all $x\in B_\delta(\bar x)$ and $p,p'\in B_\eta(\bar p)$ with $d(p,p')<\mu$ and $\bar y\in F(p',x)$. \end{definition} In what follows we assume that ${(\bar p,\bar x,\bar y)\in\gph F}$, $\alpha>0$, $l>0$, $\eta\in]0,+\infty]$, $\delta\in]0,+\infty]$ and $\mu\in]0,+\infty]$. The next statement is a modification of \cite[Theorem~3.2]{Iof17.1}. \begin{proposition}\label{P5.7} Suppose that $F$ \begin{itemize} \item is $\alpha-$subregular in $x$ uniformly in $p$ at $(\bar p,\bar x,\bar y)$ with $\eta$, $\delta$ and $\mu$; \item $l-$recedes in $p$ uniformly in $x$ at $(\bar p,\bar x,\bar y)$ with $\eta$, $\delta$ and $\mu':=\alpha\mu/l$. \end{itemize} Then the mapping $G$ given by \eqref{G2} has the Aubin property at $(\bar p,\bar x)$ with rate $l/\alpha$, and $\eta$, $\delta$ and $\mu'$. \end{proposition} \begin{proof} Let $p,p'\in B_\eta(\bar p)$ with $d(p,p')<\mu'$, and $x\in G(p')\cap B_\delta(\bar x)$. By \eqref{G2} and \eqref{D5.1-1}, ${\bar y\in F(p',x)}$ and ${d(\bar y,F(p,x))}<\alpha\mu$. Using successively \eqref{43} and \eqref{D5.1-1}, we obtain \sloppy \begin{align*} d(x,G(p))\le\frac{1}{\alpha} d(\bar y,F(p,x))\le\frac{l}{\alpha}d(p,p'). \end{align*} The proof is completed. \end{proof} Combining \cref{P5.7} with the sufficient conditions for the uniform subregularity formulated in the preceding sections, we can immediately obtain various sufficient conditions for the Aubin property of the implicit multifunction \eqref{G2}. The next proposition collects three sufficient conditions arising from {\cref{C2.2}(ii)}, \cref{T2} and \cref{C3.3}, respectively.
\begin{proposition}\label{P5.8} Let $P$ be a metric space, $X$ and $Y$ be complete metric spaces, $F:P\times X\rightrightarrows Y$ and $G:P\rightrightarrows X$ be given by \eqref{G2}. Suppose that $\gph F_p$ is closed for all $p\in B_\eta(\bar p)$. The mapping $G$ has the Aubin property at $(\bar p,\bar x)$ with rate $l>0$, and $\eta$, $\delta$ and $\mu$ if, for some $l'>0$, the mapping $F$ $l'-$recedes in $p$ uniformly in $x$ at $(\bar p,\bar x,\bar y)$ with $\eta$, $\delta$ and $\mu$, and one of the following conditions holds true: \begin{enumerate} \item there exists a $\gamma>0$ such that \begin{align*} \limsup_{\substack{u\to x,\,v\to y,\,(u,v)\in\gph F_p,\,(u,v)\ne (x,y)\\ d(u,\bar x)<\delta+l\mu,\,d(v,\bar y)<l'\mu}} {\dfrac{d(y,\bar y)-d(v,\bar y)}{d_\gamma((u,v),(x,y))}}\ge\frac{l'}{l} \end{align*} for all $p$, $x$ and $y$ satisfying \begin{align}\label{P5.8-3} p\in B_\eta(\bar p),\;x\in B_{{\delta+\mu}}(\bar x)\setminus F_p^{-1} (\bar y),\; y\in F(p,x){\cap B_{l'\mu}(\bar y)}; \end{align} \item $X$ and $Y$ are {Banach}, and there exists a $\gamma>0$ such that, with $N:=N^C$, \begin{align}\label{P5.8-4} d_\gamma((0,-y^*),N_{\gph F_p}(x,y))\ge\frac{l'}{l} \end{align} for all $p$, $x$ and $y$ satisfying \eqref{P5.8-3}, and all $y^*\in Y^*$ satisfying \eqref{T1-1}; \item $X$ and $Y$ are {Asplund}, and there exist a $\gamma>0$ and a $\tau\in]0,1[$ such that condition \eqref{P5.8-4} is satisfied with $N:=N^F$ for all $p$, $x$ and $y$ satisfying \eqref{P5.8-3}, and all $y^*\in Y^*$ satisfying~\eqref{T1-2}; \item $X$ and $Y$ are {Banach}, and \begin{align}\label{P5.8-5} d(0,D^*F_p(x,y)(B_{\eta}(y^*)))\ge\frac{l'}{l} \end{align} {with ${D^*:=D^*_C}$} for all $p$, $x$ and $y$ satisfying \eqref{P5.8-3}, and all $y^*\in Y^*$ satisfying \eqref{T1-1}; \item $X$ and $Y$ are Asplund, and {there exists} a $\tau\in]0,1[$ such that condition \eqref{P5.8-5} is satisfied with $D^*:=D^*_F$ for all $p$, $x$ and $y$ satisfying \eqref{P5.8-3}, and all $y^*\in Y^*$
satisfying~\eqref{T1-2}. \sloppy \end{enumerate} \end{proposition} \begin{remark} Condition (i) in \cref{P5.8} can be seen as a quantitative version of \cite[Theorem~3.9]{Iof17.1}, while conditions (iv) and (v) improve \cite[Theorem~4.1]{Iof17.1} and \cite[Theorem~7.26]{Iof17}. Conditions (ii) and (iii) are new. Note that these two conditions are weaker than conditions (iv) and (v), respectively. \end{remark} \addcontentsline{toc}{section}{References} \nocite{*} \bibliographystyle{jnsao}
\section{Introduction}\label{sec:Introduction} The idea of modeling time-to-event data is well established in statistics and widely used in the medical sciences (in the context of survival analysis) and engineering (in the context of reliability analysis). In any of these situations, we are interested in representing the distribution of a non-negative random variable $T$ through one of its representative functions, such as the density, the cumulative distribution function, or the hazard function. Many authors have chosen to model survival data in the presence of covariates through the hazard function, a choice motivated by its interpretation: the hazard gives the instantaneous failure rate over time. Perhaps the best known model dedicated to hazard modeling is the Cox model \cite{Cox_72}, which brought this modeling possibility to light. The Cox proportional hazards model is quite flexible and used extensively in survival analysis. It can be easily extended to incorporate, for instance, the effect of time-dependent covariates. A strong, and probably the most problematic, assumption of this model is that the failure rates of any two individuals are proportional, which popularized the name Cox proportional hazards (PH) model. The assumption of proportionality of hazards does not always agree with the reality observed in the field, which motivates the study and development of models that relax such a hypothesis. Several techniques have been proposed as alternatives to PH modeling.
Among others, we can cite the use of covariate stratification \cite{kleinbaum2012stratified}, the adoption of time-dependent covariates \cite{kleinbaum2012extension}, the nonparametric accelerated failure time model \cite{prentice1978linear}-\cite{kalbfleisch2011statistical}, the hybrid hazard model \cite{etezadi1987extended}, the extension of hybrid hazard models \cite{louzada1997extended}-\cite{louzada1999polyhazard}, and the generalized survival models \cite{liu2017generalized}. Another approach is the generalized time-dependent logistic (GTDL) model introduced by MacKenzie in \cite{mackenzie1996regression}, proposed as a fully parametric competitor to the Cox model. More recently, Louzada-Neto {\it et al.} \cite{louzada2010bayesian} proposed a Bayesian approach to the GTDL model, Louzada-Neto {\it et al.} \cite{louzada2011interval} compared several techniques for building confidence intervals using parametric and non-parametric resampling methods, MacKenzie and Peng \cite{mackenzie2014statistical} extended the GTDL model by incorporating a random effect into the hazard function and using h-likelihood procedures that obviate the need to marginalize the risk and survival functions, and Milani {\it et al.} \cite{milani2015generalized} extended the GTDL model by including a gamma frailty term in the modeling. These models have been successfully applied to situations where all units are susceptible to the event of interest, i.e., the presence of a cure fraction in the population is not feasible. Calsavara {\it et al.} \cite{calsavara2020long} proposed an extension of the GTDL model with application in the medical field, where long-term survivors are observed. Since the GTDL and GTDL frailty models accommodate non-proportional hazards and allow for either the presence or the absence of long-term survivors (see Figure \ref{fluxograma}), they can be used for time-to-failure data with these characteristics.
\begin{figure}[h] \centering \includegraphics[scale=0.5]{fig01.pdf} \caption{Exemplification of the flexibility of the GTDL and GTDL frailty models.} \label{fluxograma} \end{figure} A typical assumption in reliability data analysis is that all of the study units or systems will eventually experience the event of interest if they are followed long enough. Nevertheless, the event may not occur for some units, even after a long period of time. In manufacturing, for instance, those items that neither failed nor malfunctioned during the examination time comprise the cured fraction \cite{vahidpour2016cure}. Nelson \cite{Nelson82} observed the life of insulation on electric motors, which were operated at several temperature levels; at low temperatures the motors lasted almost indefinitely, while at high temperatures breakdowns occurred quickly. From a Bayesian perspective, Lin and Zhu \cite{lin2008cure} proposed a new approach to the reliability analysis of complex systems, where a part of the subsystems is considered ``longevous'' compared with the entire system. Thus, the system will not fail due to these subsystems. Hence, usual survival models, such as the Cox PH model or the accelerated failure time model, are not suitable for such cured individuals. As a result, cure models have been developed for the manipulation and analysis of survival data with a cure fraction. Boag \cite{boag1949} introduced the standard cure rate model, which is the most widely used cure rate model. His objective was to study cases where there was a fraction of cured patients among those who had received treatment for mouth cancer; the modeling of the failure time of the susceptible group was made by adopting the lognormal distribution and assuming the cure probability to be constant.
The mixture cure model was further developed in \cite{berkson} and later studied extensively by various authors \cite{goldman84}, \cite{Farewell_86}, \cite{kuk1992mixture}, \cite{taylor1995semi}, \cite{Maller_Zhou}, \cite{peng_2000}, \cite{banerjee2004parametric}, \cite{Rodrigues}, among others. Usual models implicitly admit a homogeneous population for susceptible systems, but explanatory variables can be included to elucidate the observable heterogeneity. Nonetheless, genetic or environmental factors, or even information that for some reason was not considered at the planning stage, can cause a portion of the unobserved heterogeneity. Hougaard \cite{hougaard1991modelling} discussed the benefits of adopting two sources of heterogeneity, the observable (given by explanatory variables) and the unobservable, considering for the latter some distribution families. Unobserved heterogeneity can be controlled by introducing a random effect into the hazard function, known as frailty (the term ``frailty'' was introduced in \cite{vaupel1979impact}). In this situation, the frailty models are widely used; for more details, we refer the reader to \cite{wienke2010frailty}. The exclusion of a relevant explanatory variable from the modeling will increase the amount of unobservable heterogeneity; thus, the frailty makes it possible to evaluate the effects of the explanatory variables that were not considered in the modeling. Therefore, the frailty, besides explaining the heterogeneity between the systems, also helps to alleviate the absence of important covariates.
The DHSV (downhole safety valve) is a subsurface safety valve, which is positioned in the oil production pipeline column below the seabed; its function is to enable the production column to be closed almost instantly, preventing uncontrolled leakage of hydrocarbons into the environment in the event of a catastrophic wellhead accident. The failure of the DHSV (closing or opening unwantedly, among other unexpected actions) generates several unforeseen events, causing great financial losses. Demonstrating the reliability performance of DHSVs is an important activity related to risk assessment and management of offshore well systems \cite{selvik2015review}. The study of reliability associated with DHSVs encompasses many ramifications, even within statistics itself, including (but not limited to): (i) investigating current failures; (ii) evaluating their root causes, failure mechanisms and effects; (iii) estimating and improving the reliability of its components; (iv) developing degradation models as part of a testing strategy, among others. Selvik and Abrahamsen \cite{selvik2015review} studied and discussed the specific statistics for the period 2002--2013, focusing on reliability aspects of the collected data. Their study also included a literature review and some testing data collected directly from oil and gas companies, to provide a more nuanced picture of the reliability issues. Rausand and Vatn \cite{rausand1998reliability} discussed the impact of using a Weibull life distribution instead of an exponential distribution, based on a specific data set for surface-controlled subsurface safety valves used in offshore oil and gas production wells. Oliveira \cite{oliveira2016} compared the reliability of some control systems models taking into account the equipment positions throughout the system and their failure rates, with a focus more on loss of production than on safety.
Colombo {\it et al.} \cite{colombo2020regression} analyzed the behavior of several machine learning models to assess the reliability of DHSVs for further comparison against traditional statistical techniques, based on experimental evaluation over a data set collected from a Brazilian oil and gas company. In the context of this study, we would like to identify the association of the failure rate behavior with some environmental variables, such as the hydrogen sulfide (H2S) concentration, temperature, pressure, gas/oil ratio and water column. For this, we use the GTDL and GTDL frailty models, since the assumption of PH was not validated; consequently, the Cox model cannot be used. After fitting a model, it is necessary to check the validity of its assumptions, as well as to carry out robustness studies to detect possible influential or extreme observations that can provoke distortions in the results of the analysis. There are several works in the survival analysis setting that present such analysis \cite{yiqi2016influence}, \cite{leao2017birnbaum}, \cite{leao2018incorporation}. In this study, we discuss the global influence starting from case-deletion, in which the influence of the $i$-th observation on the parameter estimates is investigated by excluding it from the analysis. We propose diagnostic measures based on case-deletion for the GTDL and GTDL frailty regression models, in order to determine which units might be influential in the analysis. To motivate our research, we describe the following real data set related to DHSVs. \section{Motivating example in oil and gas industry} The motivation for our study came from a real-world reliability data set corresponding to the DHSVs used in the exploration of Petrobras' (abbreviation of \textit{Petr\'oleo Brasileiro S.A.}) oil wells in Brazil.
Illustrated in Figure \ref{valvula_dhsv}, the DHSV is a subsurface safety valve whose function is to prevent uncontrolled leakage of hydrocarbons into the environment in the event of a catastrophic wellhead accident. \begin{figure}[!ht] \centering \includegraphics[width=0.8\linewidth]{fig02.pdf} \caption{Tubing-retrievable charged - downhole safety valves (TRC-DHSV) illustration (taken from \cite{garner2002ready}).} \label{valvula_dhsv} \end{figure} The records show the time (in years) of the valve's life, whether or not there was a suspension of use, and some other explanatory variables, presented in Table \ref{variaveis}, which are divided into groups according to their characteristics. The type of variable is also highlighted. \begin{table}[!ht] \centering \caption{Explanatory variables divided by group.} \resizebox{\linewidth}{!}{ \setlength{\tabcolsep}{3pt} \begin{tabular}{llcc} \hline Group & Variable & Abbreviation & Type of variable \\ \hline Environmental & Closed well temperature & CWT & Continuous \\ & Closed well pressure & CWP & Continuous \\ & Operating unit & OU & Qualitative \\ & Water column & WC & Continuous \\ \hline Operation & Well flowing pressure & WFP & Continuous \\ & Flow rate & FR & Continuous \\ \hline Valve & Manufacturer & Mfr. & Qualitative \\ & Family & Family & Qualitative \\ & Dimension & Dim. & Qualitative \\ & Pressure class & PC & Qualitative \\ \hline Flow & H2S concentration & H2S & Continuous \\ & Basic sediment and water & BSW & Continuous \\ & Gas oil ratio & GOR & Continuous \\ \hline \end{tabular} \label{variaveis} } \end{table} Our objective is to study the time until the failure of the DHSV and to identify possible associations of the valve, environmental, operational and flow characteristics with the time-to-failure; for this, we adopt hazard modeling.
The assumption of PH is verified by means of the graph of the logarithm of cumulative hazard \textit{versus} time, for each covariate; \cite{colosimo2006, collett2015modelling, kleinbaum2012evaluating} present detailed discussions of how to assess the PH assumption. The log-cumulative hazard plots shown in Figure \ref{verificacao_proporcionalidade_1} indicate non-proportionality for the covariates: CWT, FR, Family and GOR. The remaining graphs are presented in Figure \ref{verificacao_proporcionalidade_2} of the Appendix. \begin{figure}[!ht] \includegraphics[width=1\linewidth]{fig03.pdf} \caption{Plots of the logarithm of estimated cumulative hazard function \textit{versus} time, for the covariates: (a) CWT, (b) FR, (c) Family, and (d) GOR.} \label{verificacao_proporcionalidade_1} \end{figure} In addition to the graphical verification, we also investigated the PH property through Schoenfeld residuals \cite{grambsch1994proportional}. The null hypothesis is proportionality of hazards. The obtained results are summarized in Table \ref{tab:schoenfeld}. Considering a significance level of 10\%, the variables CWT, FR, Family, Dim. and GOR present non-PH. The global analysis was not performed due to the large amount of missing data in the covariates. \begin{table}[h] \centering \caption{Result of the hypothesis test to check the PH assumption.} \begin{tabular}{ccccc} \midrule Variable & p-value & & Variable & p-value \\ \midrule CWT & 0.049 & & Family & 0.015 \\ CWP & 0.727 & & Dim. & 0.075 \\ OU & 0.513 & & PC & 0.293 \\ WC & 0.173 & & H2S & 0.250 \\ WFP & 0.596 & & BSW & 0.563 \\ FR & 0.065 & & GOR & 0.063 \\ Mfr. & 0.506 & & & \\ \midrule \end{tabular} \label{tab:schoenfeld} \end{table} Since the results of the PH analysis are not unanimous, we decided to use the GTDL and GTDL with gamma frailty models. As explained in \cite{mackenzie1996regression}, the GTDL model can approach the PH model.
When this occurs, the estimates of the regression parameters should be similar in both models. The inclusion of a frailty term in the traditional models may also be needed. As mentioned earlier, unobservable heterogeneity among units or systems may play an important role in assessing reliability, while its omission can cause biased results. Hence, this example serves as a motivation for the joint modeling of heterogeneity among valves by their frailties and the possible presence of a cured fraction of them when predicting their reliability. The remainder of the paper is organized as follows. In Section \ref{background}, we provide further details on the GTDL and GTDL frailty models, such as survival and hazard functions, their cure rate version and inference methods based on the likelihood function. Section \ref{influence} describes and discusses the influence diagnostics based on case-deletion. Section \ref{application} presents the models fitted to the groups of variables and the diagnostic analysis. Finally, Section \ref{conclusions} gives some concluding remarks. \section{Background} \label{background} In this section, we present the GTDL and GTDL with gamma frailty regression models, highlighting the hazard, reliability and probability density functions. These models are useful for data sets with covariates that do not satisfy the proportional hazards assumption. \subsection{GTDL model} \label{gtdl_model} Let $T>0$ be a random variable representing the failure time and $h(t)$ the instantaneous failure rate or baseline hazard function.
According to \cite{mackenzie1996regression}, the hazard function of the GTDL model is given by \begin{eqnarray} h_0\left(t \mid {\bm x}\right)=\lambda\displaystyle\frac{\exp\left\{\alpha t+{\bm x}^\top\bm{\beta}\right\}}{1+\exp\left\{\alpha t+{\bm x}^\top\bm{\beta}\right\}}, \label{risco_mackenzie} \end{eqnarray} where $\lambda>0$ is a scalar, $\alpha \in \mathbb{R}$ is a measure of the time effect, ${\bm x}^\top=(x_{1},\ldots,x_{p})$ is the vector of covariates and $\bm{\beta}=(\beta_1,\ldots,\beta_p)^\top$ is the vector of regression coefficients. The corresponding reliability function, $R(t|{\bm x})$, and probability density function, $f(t|{\bm x})$, are given, respectively, by \begin{eqnarray} R\left(t\mid{\bm x}\right)=\left(\frac{1+\exp\left\{\alpha t+{\bm x}^\top\bm{\beta}\right\}}{1+\exp\left\{{\bm x}^\top\bm{\beta}\right\}}\right)^{-\lambda/\alpha } \label{sobre_mackenzie} \end{eqnarray} and \begin{eqnarray*} \label{fdp_mack} f\left(t\mid{\bm x}\right)&=& \left(\lambda\displaystyle\frac{\exp\left\{\alpha t+{\bm x}^\top\bm{\beta}\right\}}{1+\exp\left\{\alpha t+{\bm x}^\top\bm{\beta}\right\}} \right) \\ &&\times \left(\frac{1+\exp\left\{\alpha t+{\bm x}^\top\bm{\beta}\right\}}{1+\exp\left\{{\bm x}^\top\bm{\beta}\right\}}\right)^{-\lambda/\alpha }. \end{eqnarray*} The hazard function (\ref{risco_mackenzie}) has monotonic behavior, determined by the value of the parameter $\alpha$. More specifically, when $\alpha>0$, the hazard function is increasing; when $\alpha<0$, it is decreasing; and finally, when $\alpha=0$, the hazard function is constant over time, that is, the resulting model is a PH model with exponential baseline hazard function, as highlighted in \cite{mackenzie2014statistical}. The GTDL model is said to be a non-PH model because the ratio of the hazard functions for two individuals is not constant over time.
Consider two systems $i$ and $j$, $i\neq j$, with covariate vectors ${\bm x}_{i}$ and ${\bm x}_{j}$, ${\bm x}_{i} \neq {\bm x}_{j}$, for $i,j=1,\ldots,n$. Then, the ratio of the hazard functions is given by \begin{eqnarray} \tau\left(t\mid{\bm x}_{i},{\bm x}_{j}\right)&=&\frac{h_0\left(t\mid{\bm x}_{i}\right)}{h_0\left(t\mid{\bm x}_{j}\right)} \nonumber\\ &=&\frac{1+\exp\left\{\alpha t+{\bm x}_{j}^\top\bm{\beta}\right\}}{1+\exp\left\{\alpha t+{\bm x}_{i}^\top\bm{\beta}\right\}} \nonumber \\ &&\times\exp\left\{\left({\bm x}_{i}-{\bm x}_{j}\right)^\top\bm{\beta}\right\}. \label{Prova_NP} \end{eqnarray} Note from (\ref{Prova_NP}) that the time effect does not disappear, and hence the non-PH condition becomes evident. As mentioned in \cite{mackenzie1996regression}, the GTDL model is neither a PH model nor an accelerated life model. The reliability function (\ref{sobre_mackenzie}) is proper for $\alpha > 0$, i.e., $R(0| {\bm x})=1$ and $\displaystyle \lim_{t\rightarrow\infty} R(t| {\bm x})=0$. But when the value of the parameter $\alpha$ is negative, the GTDL model naturally acquires an improper distribution, i.e., $R(0| {\bm x})=1$ and $\displaystyle \lim_{t\rightarrow\infty} R(t| {\bm x})=p > 0$; hence, the GTDL model is a cure rate model when $\alpha <0$. An advantage of the GTDL model over the mixture model \cite{berkson} is that the former makes no assumption about the existence of a cure rate, leaving the data to indicate the presence or not of a cure fraction. In the literature, models with this property have recently been called ``defective'' \cite{balka2009review}, \cite{rocha2016two} and \cite{scudilio2018defective}. In reliability, the event (failure or error) may not occur with some units, even after a very long period of time.
Thus, the cure rate in the population is calculated as the limit of the reliability function (\ref{sobre_mackenzie}) when $\alpha<0$, given by \begin{eqnarray*} \label{prop_GTDL} \displaystyle p({\bm x})=\lim_{t\rightarrow\infty}R\left(t \mid {\bm x}\right)= \left({1+\exp\left\{{\bm x}^\top\bm{\beta}\right\}}\right)^{\lambda/\alpha }\in(0,1). \end{eqnarray*} Let $T_i>0$ be a random variable denoting the failure time for the $i$-th unit, and $\delta_i$ a censoring indicator variable, which is $\delta_i=0$ if the observed time is censored and $\delta_i=1$ otherwise, for $i=1, \ldots, n$. Also, consider $\boldsymbol{\eta}=(\lambda, \alpha, \beta_1, \ldots, \beta_p)$ and assume that $T_i$'s are independent and identically distributed (IID) random variables with hazard and reliability functions specified, respectively, by (\ref{risco_mackenzie}) and (\ref{sobre_mackenzie}). Then, the likelihood function considering right-censored reliability data is given by \begin{eqnarray*} L(\boldsymbol{\eta})&=&\displaystyle\prod_{i=1}^n h_0\left(t_i\mid{\bm x}_i\right)^{\delta_i}R\left(t_i\mid{\bm x}_i\right) \nonumber \\ &=&\displaystyle\prod_{i=1}^n \left(\lambda\displaystyle\frac{\exp\left\{\alpha t_i+{\bm x}_i^\top\bm{\beta}\right\}}{1+\exp\left\{\alpha t_i+{\bm x}_i^\top\bm{\beta}\right\}} \right)^{\delta_i} \\ && \times \left(\frac{1+\exp\left\{\alpha t_i+{\bm x}_i^\top\bm{\beta}\right\}}{1+\exp\left\{{\bm x}_i^\top\bm{\beta}\right\}}\right)^{-\lambda/\alpha } \end{eqnarray*} and the log-likelihood function, $\ell(\boldsymbol{\eta})=\log\left( L(\boldsymbol{\eta})\right)$, is given by \begin{eqnarray*} \ell(\boldsymbol{\eta})&=& \log(\lambda)\sum_{i=1}^n \delta_i + \sum_{i=1}^n \delta_i\alpha t_i +\sum_{i=1}^n \delta_i{\bm x}_i^\top\bm{\beta} \\ &&- \sum_{i=1}^n \delta_i \log\left(1+\exp\left\{\alpha t_i+{\bm x}_i^\top\bm{\beta}\right\}\right) \\ &&-\frac{\lambda}{\alpha}\sum_{i=1}^n \log\left(1+\exp\left\{\alpha t_i+{\bm x}_i^\top\bm{\beta}\right\}\right) \\ 
&&+\frac{\lambda}{\alpha}\sum_{i=1}^n \log\left(1+\exp\left\{{\bm x}_i^\top\bm{\beta}\right\}\right). \end{eqnarray*} \subsection{GTDL frailty model} The frailty model is characterized by the use of an unobservable random effect, which represents information that cannot be, or has not been, observed, such as environmental factors, or information that was not considered in the planning. In conventional frailty models, the frailty variable is introduced in the modeling of the hazard function, with the aim of controlling the unobservable heterogeneity of the units under study, including the dependence among units that share the same factors. Based on the GTDL model, the hazard function of the $i$-th individual with the multiplicative frailty term $v_i$ is given by \begin{eqnarray*} h_0\left(t\mid{\bm x}_i,v_i\right)=v_i \displaystyle\frac{\lambda \exp\left\{\alpha t+{\bm x}_i^\top\bm{\beta}\right\}}{1+\exp\left\{\alpha t+{\bm x}_i^\top\bm{\beta}\right\}}, \label{risco_fragilidade} \end{eqnarray*} where $v_i$ represents a value of the random variable $V$, and $h_0\left(t|{\bm x}_i,v_i\right)$ is called the conditional hazard function of the $i$-th individual given $v_i$. When $v_i>1$, individual $i$ is more fragile, whereas when $v_i<1$ it is stronger; hence, the model's name ``frailty'' (or ``fragility''). It is necessary to adopt a known distribution for the random variable $V$; as it can only assume positive values, the natural candidates are: gamma, inverse Gaussian, Weibull, positive stable and power variance function (PVF) distributions, among others; for more details, see \cite{hougaard1995frailty} and \cite{wienke2010frailty}. In general, the restriction adopted is $\mathbb{E}[V]=1$ and $\mathbb{V}{\rm ar}[V]=\theta$, where $\theta$ is interpretable as a measure of unobserved heterogeneity; this restriction was proposed in \cite{vaupel1979impact}. In order to make inferences on frailty models, we have some options.
For instance, obtaining the marginal hazard and reliability functions and using the traditional likelihood function; or choosing other methods that obviate the need for marginalization, such as the h-likelihood approach proposed by Ha {\it et al.} \cite{ha2001hierarchical} and used in \cite{ha2010robust}. This paper considers the marginal hazard and reliability functions. \subsection{The GTDL gamma frailty model} The GTDL gamma frailty model was proposed in \cite{milani2015generalized}, wherein they added a frailty term to the hazard function in a multiplicative way and assumed a gamma distribution for it, i.e., $V \sim \text{Gamma}\left(1/\theta,1/\theta\right)$. This parametrization is considered to obtain $\mathbb{E}[V]=1$ and $\mathbb{V}{\rm ar}[V]=\theta$. The marginal reliability function is given by \begin{eqnarray} R\left(t \mid {\bm x}\right)=\left[ 1+\frac{\lambda\theta}{\alpha}\log\left(\frac{1+\exp\left\{\alpha t +{\bm x}^\top \bm{\beta}\right\}}{1+\exp\left\{{\bm x}^\top \bm{\beta}\right\}}\right) \right]^{-\frac{1}{\theta}}, \label{sobre_frag_g} \end{eqnarray} the corresponding marginal hazard function is given by \begin{eqnarray} h\left(t\mid{\bm x}\right)=\frac{h_0\left({t\mid\bm x}\right)}{\left[1+\frac{\lambda \theta}{\alpha}\log\left(\frac{1+\exp\left\{\alpha t+{\bm x}^\top \bm{\beta}\right\}}{1+\exp\left\{{\bm x}^\top \bm{\beta}\right\}} \right)\right]} \label{risco_frag_g} \end{eqnarray} and, finally, the density function is given by \begin{eqnarray*} f\left(t\mid{\bm x}\right)=\frac{h_0\left(t|{\bm x}\right)} {\left[1+\frac{\lambda \theta}{\alpha}\log\left(\frac{1+\exp\left\{\alpha t+{\bm x}^\top \bm{\beta}\right\}}{1+\exp\left\{{\bm x}^\top \bm{\beta}\right\}} \right)\right]^{(1+1/\theta)}}, \end{eqnarray*} where $h_0\left(t|{\bm x}\right)$ is the hazard function defined in (\ref{risco_mackenzie}). 
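As a quick numerical sanity check of the expressions above, the following sketch (in Python, for illustration only; the analyses in this paper were carried out in R) evaluates the GTDL reliability function (\ref{sobre_mackenzie}) and its gamma frailty counterpart (\ref{sobre_frag_g}) for illustrative parameter values, confirming that the frailty model collapses to the GTDL model as $\theta \rightarrow 0$, and that $\alpha < 0$ produces the cure fraction $p({\bm x})$.

```python
import math

def gtdl_reliability(t, lam, alpha, xb):
    # R(t|x) = [(1 + exp(alpha*t + x'b)) / (1 + exp(x'b))]^(-lam/alpha)
    num = 1.0 + math.exp(alpha * t + xb)
    den = 1.0 + math.exp(xb)
    return (num / den) ** (-lam / alpha)

def gtdl_gamma_frailty_reliability(t, lam, alpha, xb, theta):
    # Marginal reliability under gamma frailty:
    # R(t|x) = [1 + (lam*theta/alpha) * log((1+exp(alpha*t+x'b))/(1+exp(x'b)))]^(-1/theta)
    log_ratio = math.log((1.0 + math.exp(alpha * t + xb)) / (1.0 + math.exp(xb)))
    return (1.0 + lam * theta / alpha * log_ratio) ** (-1.0 / theta)

# Illustrative parameter values (not taken from the paper)
lam, alpha, xb, t = 0.5, 0.3, -1.0, 2.0

# As theta -> 0 the gamma frailty model collapses to the plain GTDL model
r_frail = gtdl_gamma_frailty_reliability(t, lam, alpha, xb, theta=1e-8)
r_gtdl = gtdl_reliability(t, lam, alpha, xb)
assert abs(r_frail - r_gtdl) < 1e-6

# With alpha < 0 the reliability plateaus at the cure fraction
# p(x) = (1 + exp(x'b))^(lam/alpha)
alpha_neg = -0.4
p = (1.0 + math.exp(xb)) ** (lam / alpha_neg)
assert abs(gtdl_reliability(1e6, lam, alpha_neg, xb) - p) < 1e-6
```

The same check can be repeated for any admissible parameter configuration; it only restates the limiting behavior already derived in the text.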
The hazard function (\ref{risco_frag_g}) takes unimodal and decreasing forms; therefore, the inclusion of frailty makes the GTDL hazard function more flexible; for more details, see \cite{eder2011}. It is evident that such a hazard function depends on time; consequently, the GTDL gamma frailty model also accounts for non-PH. When the parameter $\theta \rightarrow 0$, the frailty model approaches the traditional GTDL model. As advocated by Aalen {\it et al.} \cite{aalen2015understanding}, heterogeneity is ubiquitous and, if ignored, it can lead to misleading comparisons of hazard rates. Therefore, it is prudent to always carry out checks to detect the presence of unobserved heterogeneity. On the other hand, the reliability function (\ref{sobre_frag_g}) behaves similarly to the GTDL model, with cure fraction, $p(\bm{x})$, given by \begin{eqnarray} p({\bm x})=\left[1-\frac{\lambda\theta}{\alpha}\log\left(1+\exp\left\{{\bm x}^\top\bm{\beta}\right\}\right)\right]^{-\frac{1}{\theta}}\in(0,1). \nonumber \end{eqnarray} The explanatory variables can be incorporated in the model through the hazard function (\ref{risco_frag_g}) and the scale parameter $\alpha$. The use of regression in the $\alpha$ parameter is a more flexible approach, since it can directly reflect the influence of covariates on the effect of time-to-failure. Since the parameter $\alpha$ can be estimated to be negative or positive, the identity link function is used, i.e., \begin{eqnarray*} \alpha\left(\mathbf{x_*}\right)=\mathbf{x_*}^{\top}\bm{\alpha}, \end{eqnarray*} where $\mathbf{x_*}^{\top}=(1, x_{*_{1}}, x_{*_{2}}, \ldots, x_{*_{q}})$ is the vector of covariates and $\bm{\alpha}=(\alpha_0,\alpha_1, \ldots, \alpha_q)^{\top}$ is the vector of regression coefficients. In practice, the covariate vectors may be the same, i.e., $\mathbf{x}=\mathbf{x}_*$. Note that we can include the intercept in the vector ${\bm{x}}$, and with that, we also include the parameter $\beta_0$.
MacKenzie \cite{mackenzie2002logistic} presents a discussion of the parameters $\beta_0$ and $\lambda$; in short, only one of the two can be included for the model to be identifiable, that is, the parameters $\beta_0$ and $\lambda$ are interchangeable. We chose to include the parameter $\beta_0$ because we are interested in the interpretation of the explanatory variables. Hence, the model parameters are $\boldsymbol{\nu}=(\alpha_0, \ldots, \alpha_q, \beta_0, \ldots, \beta_p, \theta)$. Let $T_i>0$ and $\delta_i$ be as previously defined, for $i=1,\ldots,n$. Also, consider that $T_i$'s are IID random variables with reliability and hazard functions given, respectively, by (\ref{sobre_frag_g}) and (\ref{risco_frag_g}). Then, the likelihood function considering right-censored reliability data is given by \begin{eqnarray*} L(\boldsymbol{\nu})&=&\displaystyle\prod_{i=1}^n h\left(t_i\mid{\bm x}_i, \mathbf{x_{*_{i}}}\right)^{\delta_i}R\left(t_i\mid{\bm x}_i, \mathbf{x_{*_{i}}}\right) \nonumber \\ &=&\displaystyle\prod_{i=1}^n h_0^*(t_i)^{\delta_i} \left[1+\frac{\theta}{\mathbf{x_{*_{i}}}^{\top}\bm{\alpha}}\log\left(G_i\right) \right]^{-\left(1/\theta+\delta_i\right)}, \label{vero_gtdl_fra} \end{eqnarray*} where $h_0^*(t_i)=\frac{\exp\left\{\mathbf{x_{*_{i}}}^{\top}\bm{\alpha} t_i+{\bm x}_i^{\top}\bm{\beta}\right\}}{1+\exp\left\{\mathbf{x_{*_{i}}}^{\top}\bm{\alpha} t_i+{\bm x}_i^{\top} \bm{\beta}\right\}}$ and $G_i=\frac{1+\exp\left\{\mathbf{x_{*_{i}}}^{\top}\bm{\alpha} t_i+{\bm x}_i^{\top}\bm{\beta}\right\}}{1+\exp\left\{{\bm x}_i^{\top}\bm{\beta}\right\}}$. The log-likelihood function, $\ell({\bm \nu}) =\log\left(L( {\bm \nu})\right)$, is given by \begin{eqnarray*} \ell(\boldsymbol{\nu})&=&\sum_{i=1}^n \delta_i\mathbf{x_{*_{i}}}^{\top}\bm{\alpha} t_i +\sum_{i=1}^n\delta_i {\bm x}_i^{\top}\bm{\beta} \\ && -\sum_{i=1}^n\delta_i \log\left(1+\exp\left\{\mathbf{x_{*_{i}}}^{\top}\bm{\alpha} t_i +{\bm x}_i^{\top}\bm{\beta}\right\}\right) \nonumber \\ &&-\sum_{i=1}^n\left(\delta_i+\frac{1}{\theta}\right)\log\left(1+\frac{\theta}{\mathbf{x_{*_{i}}}^{\top}\bm{\alpha}}\log\left(G_i\right)\right).
\label{log_vero2} \end{eqnarray*} The maximum likelihood estimates (MLEs) for the parameters of the GTDL and GTDL gamma frailty models are obtained by numerical maximization of the corresponding log-likelihood functions. In this work, we use the \texttt{optimr} function of the ``optimx'' package \cite{optimx1}, \cite{optimx2}, which is implemented in the R software \cite{R1}. \section{Simulation study} A simulation study was conducted to evaluate the finite-sample behavior of the MLEs of the GTDL and GTDL gamma frailty models, under sample sizes $n=100$, $200$ and $300$ and censoring percentages of 70\%, 80\% and 85\%. Table \ref{tab_simu} reports the bias, root mean square error (RMSE), standard deviation (SD) and coverage probability (CP) of the estimates. \begin{table}[ht] \centering \caption{Bias, RMSE, SD and CP of the MLEs in the simulation study, for the GTDL model (top panel) and the GTDL gamma frailty model (bottom panel).} \label{tab_simu} \resizebox{\linewidth}{!}{ \setlength{\tabcolsep}{3pt} \begin{tabular}{llccccccccccccccc} \midrule & & \multicolumn{14}{c}{GTDL model} \\ \cline{3-16} n & & \multicolumn{4}{c}{70\% censoring} & & \multicolumn{4}{c}{80\% censoring} & & \multicolumn{4}{c}{85\% censoring} \\ \cline{3-6} \cline{8-11} \cline{13-16} & & Bias & RMSE & SD & CP & & Bias & RMSE & SD & CP & & Bias & RMSE & SD & CP \\ \midrule 100& $\alpha$ & -0.155 & 0.609 & 0.589 & 0.958 & &-0.210 & 0.957 & 0.934 & 0.969 & & -0.200 & 1.207 & 1.190 & 0.980 \\ &$\beta_0$ & 0.112 & 0.641 & 0.631 & 0.962 & &0.158 & 0.771 & 0.754 & 0.967 & & 0.173 & 0.897 & 0.880 & 0.984 \\ &$\beta_1$ & -0.049 & 0.682 & 0.680 & 0.951 & &-0.051 & 0.797 & 0.796 & 0.964 & & -0.073 & 0.878 & 0.875 & 0.972 \\ \\ 200 &$\alpha$ & -0.095 & 0.399 & 0.387 & 0.953 & &-0.101 & 0.506 & 0.496 & 0.971 & & -0.145 & 0.718 & 0.703 & 0.963 \\ &$\beta_0$ & 0.084 & 0.419 & 0.411 & 0.958 & &0.101 & 0.465 & 0.454 & 0.960 & & 0.103 & 0.543 & 0.533 & 0.965 \\ &$\beta_1$ & -0.021 & 0.423 & 0.423 & 0.967 & &-0.045 & 0.517 & 0.515 & 0.947 & & -0.034 & 0.592 & 0.591 & 0.953 \\ \\ 300 &$\alpha$ & -0.054 & 0.297 & 0.292 & 0.953 & &-0.060 & 0.389 & 0.385 & 0.960 & & -0.057 & 0.540 & 0.537 & 0.948 \\ &$\beta_0$ & 0.032 & 0.338 & 0.336 & 0.942 & &0.047 & 0.372 & 0.369 & 0.962 & & 0.056 & 0.426 & 0.423 & 0.953 \\ &$\beta_1$ & 0.007 & 0.365 & 0.365 & 0.949 & &-0.011 & 0.403 & 0.403 & 0.956 & & -0.038 & 0.465 & 0.463 & 0.952 \\ \midrule & & \multicolumn{14}{c}{GTDL gamma frailty model} \\ \cline{3-16} n & & \multicolumn{4}{c}{70\% censoring} & & \multicolumn{4}{c}{80\% censoring} & &
\multicolumn{4}{c}{85\% censoring} \\ \cline{3-6} \cline{8-11} \cline{13-16} & & Bias & RMSE & SD & CP & & Bias & RMSE & SD & CP & & Bias & RMSE & SD & CP \\ \midrule 100 &$\alpha$ & -1.006 & 1.721 & 1.397 & 0.973 & &-1.739 & 2.711 & 2.079 & 0.979& & -2.031 & 3.019 & 2.233 & 0.986 \\ &$\beta_0$ & 0.473 & 0.985 & 0.864 & 0.966 & &0.639 & 1.326 & 1.161 & 0.985& & 0.639 & 1.478 & 1.333 & 0.995 \\ &$\beta_1$ & -0.221 & 1.111 & 1.089 & 0.973 & &-0.348 & 1.528 & 1.488 & 0.975& & -0.321 & 1.747 & 1.717 & 0.983 \\ &$\theta$ & -0.458 & 0.832 & 0.694 & 0.977 & &-1.219 & 2.030 & 1.623 & 0.981& & -1.747 & 2.813 & 2.205 & 0.969 \\ \\ 200 &$\alpha$ & -0.519 & 0.925 & 0.765 & 0.965 & &-1.157 & 1.826 & 1.412 & 0.965& & -1.581 & 2.414 & 1.823 & 0.964 \\ &$\beta_0$ & 0.273 & 0.584 & 0.516 & 0.951 & &0.402 & 0.783 & 0.672 & 0.951& & 0.440 & 0.917 & 0.804 & 0.963 \\ &$\beta_1$ & -0.144 & 0.635 & 0.619 & 0.949 & &-0.245 & 0.862 & 0.827 & 0.961& & -0.282 & 1.058 & 1.020 & 0.972 \\ &$\theta$ & -0.271 & 0.607 & 0.543 & 0.952 & &-0.864 & 1.347 & 1.033 & 0.934& & -1.426 & 2.106 & 1.550 & 0.937 \\ \\ 300 &$\alpha$ & -0.315 & 0.648 & 0.566 & 0.966 & &-0.763 & 1.288 & 1.037 & 0.964& & -1.098 & 1.703 & 1.301 & 0.955 \\ &$\beta_0$ & 0.160 & 0.411 & 0.379 & 0.953 & &0.256 & 0.573 & 0.513 & 0.950& & 0.330 & 0.658 & 0.569 & 0.954 \\ &$\beta_1$ & -0.067 & 0.435 & 0.430 & 0.969 & &-0.145 & 0.629 & 0.612 & 0.960& & -0.206 & 0.705 & 0.674 & 0.966 \\ &$\theta$ & -0.181 & 0.470 & 0.434 & 0.958 & &-0.680 & 1.113 & 0.881 & 0.939& & -1.073 & 1.640 & 1.241 & 0.920 \\ \midrule \end{tabular} } \end{table} \section{Diagnostic analysis} \label{influence} In this section, we present two important tools to check the quality of the model's fit to the data. The residual analysis is performed using the randomized quantile residuals. The other tool is the global influence analysis. \subsection{Randomized quantile residuals} The randomized quantile (RQ) residuals were proposed in \cite{dunn1996randomized}.
They are used to check the overall fit quality of the model. As highlighted in \cite{rigby2005generalized}, the RQ residuals can be used in the presence of censored data. They are also widely used in generalized additive models for location, scale and shape (GAMLSS). The RQ residuals are defined by \begin{eqnarray} r_{i}=\Phi^{-1}\left(\widehat{R}(t_i \mid{\bm x}_i)\right), \quad i=1,2,\ldots,n, \end{eqnarray} where $\Phi^{-1}(.)$ is the inverse of the cumulative distribution function (or quantile function) of the standard normal and $\widehat{R}(t_i \mid{\bm x}_i)$ is the estimate of the reliability function obtained using the MLEs for the parameters. In this work, we analyze the RQ residuals using the quantile-quantile plot (QQ-plot). \subsection{Global influence} Global influence analysis consists of studying the effect of case-deletion from the data. It was introduced by Cook \cite{cook1977detection} and studied later by several authors \cite{cook1982residuals}, \cite{cook1986assessment}, \cite{yiqi2016influence}, \cite{leao2018incorporation}, among others. We denote by the subscript ``$(i)$'' the removal of the $i$-th observation from the original data set. The log-likelihood function of the parameter vector $\boldsymbol{\nu}$ is denoted by $\ell(\boldsymbol{\nu})$, as previously given; when we delete the $i$-th observation, we represent it by $\ell_{(i)}(\boldsymbol{\nu})$, with the respective MLE given by $\hat{\boldsymbol{\nu}}_{(i)}=(\hat{\boldsymbol{\alpha}}_{(i)}, \hat{\boldsymbol{ \beta}}_{(i)}, \hat{\theta}_{(i)})^\top$. In this study, we analyze two measures of global influence. The first is the generalized Cook's distance (GD), whose idea is to compare $\hat{\boldsymbol{\nu}}$ and $\hat{\boldsymbol{\nu}}_{(i)}$; if the deleted observation seriously influences the estimates, more attention should be paid to that observation. 
The GD is given by \begin{eqnarray*} \text{GD}_i\left(\boldsymbol{\nu}\right)=\left(\hat{{\boldsymbol \nu}}_{(i)}-\hat{{\boldsymbol \nu}}\right)^\top \left[\Sigma\left(\hat{\boldsymbol \nu}\right)\right]^{-1}\left(\hat{{\boldsymbol \nu}}_{(i)}-\hat{{\boldsymbol \nu}}\right), \end{eqnarray*} where ${\Sigma\left(\hat{\boldsymbol \nu}\right)}$ is the expected Fisher information matrix. For the models under study, this matrix is extremely complex, so in practice we use its observed version. An alternative way is to assess $\text{GD}_i\left({\boldsymbol \alpha}\right)$, $\text{GD}_i\left({\boldsymbol \beta}\right)$ and $\text{GD}_i\left(\theta\right)$, whose values reveal the impact of the case-deletion on the estimates of ${\boldsymbol \alpha}$, ${\boldsymbol \beta}$ and $\theta$, respectively. The second measure adopted here is the likelihood distance (LD), whose idea is to compare $\ell\left(\hat{{\boldsymbol \nu}}\right)$ and $\ell\left(\hat{{\boldsymbol \nu}}_{(i)}\right)$, similarly to the previous one; if the deleted observation seriously influences the value of the log-likelihood function, it deserves further attention. The LD is given by \begin{eqnarray*} \text{LD}_i\left(\boldsymbol{\nu}\right)= 2\left[\ell\left(\hat{{\boldsymbol \nu}}\right)-\ell\left(\hat{{\boldsymbol \nu}}_{(i)}\right)\right]. 
\end{eqnarray*} In order to investigate the impact of the detected influential cases, we calculate the relative change (RC), which is computed from parameter estimates with and without removing the influential cases, as follows: \begin{eqnarray*} \mbox{RC}_{\nu_{j(i)}}&=&\left | \frac{\hat{\nu}_j-\hat{\nu}_{j(i)}}{\hat{\nu}_j}\right |\times 100\% ,\\ \mbox{RC}_{\mbox{SE}(\nu_{j(i)})}&=&\left | \frac{\mbox{SE}(\hat{\nu}_j)-\mbox{SE}(\hat{\nu}_{j(i)})}{\mbox{SE}(\hat{\nu}_j)}\right |\times 100\%, \\ \end{eqnarray*} where $\hat{\nu}_{j(i)}$ and $\mbox{SE}(\hat{\nu}_{j(i)})$ are the MLEs and their respective estimated standard errors (SEs) when the $i$-th case is deleted, with $j=1, \ldots, p+q+3$ and $\nu_1=\alpha_0, \ldots, \nu_{q+1}=\alpha_q$, $\nu_{q+2}=\beta_0, \ldots, \nu_{p+q+2}=\beta_p$ and $\nu_{p+q+3}=\theta$. Note that this section was developed for the GTDL gamma frailty model, but if the adopted model is the GTDL one, it is only necessary to remove the parameter $\theta$. \section{Application} \label{application} In this section, we apply the proposed methodology to the DHSV data set. The data set is confidential due to the interests of the Petrobras company. The analyses were performed with the R software \cite{RCoreTeam}. A descriptive summary of the failure times or censoring (in years) provides the following main sample results: $n = 366$, ${\rm mean} = 5.0761$, ${\rm median} = 3.6082$, ${\rm minimum} = 0.0164$ and ${\rm maximum} = 28.8000$, with only $83$ $(22.68\%)$ failure times, while the rest are censored times. For better understanding, we present a descriptive statistical summary of the explanatory variables in Tables \ref{continuas} and \ref{qualitativas}. Table \ref{continuas} displays the minimum value (Min), median, mean, standard deviation (SD), coefficient of variation (CV), skewness (Sk) and kurtosis (K), maximum value (Max) and number of observations ($n$), for the continuous variables. 
The summary of failure times that are analyzed together with the covariates is also presented in this table. For the qualitative (categorical) variables, it is possible to observe the categories and the number of observations per category in an absolute and relative way, as shown in Table \ref{qualitativas}. \begin{table}[!h] \centering \caption{Descriptive summary of the continuous explanatory variables.} \resizebox{\linewidth}{!}{ \setlength{\tabcolsep}{3pt} \begin{tabular}{llccccccccc} \hline Characteristic & Variable & Min & Median & Mean & SD & CV & Sk & K & Max & $n$ \\ \hline Flow & Failure time & 0.87 & 3.03 & 4.04 & 2.67 & 0.66 & 0.27 & 0.28 & 11.32 & 21 \\ & Censoring time & 0.24 & 2.55 & 2.82 & 1.50 & 0.53 & 0.17 & 0.28 & 6.97 & 77 \\ & H2S & 0.00 & 2.45 & 18.57 & 34.14 & 1.84 & 0.38 & 0.03 & 90.00 & 98 \\ & BSW & 0.00 & 1.95 & 26.89 & 37.65 & 1.40 & 0.94 & 0.30 & 100.00 & 98 \\ & GOR & 0.00 & 234.70 & 203.54 & 123.03 & 0.60 & -0.62 & 0.28 & 612.30 & 98 \\ \hline Environment & Failure time & 0.02 & 5.72 & 8.21 & 6.87 & 0.84 & 0.40 & 0.33 & 26.60 & 65 \\ & Censoring time & 0.02 & 4.01 & 4.81 & 3.32 & 0.69 & 0.20 & 0.24 & 22.42 & 154 \\ & CWT & 0.00 & 10.00 & 20.00 & 22.89 & 1.14 & 0.62 & 0.33 & 109.00 &219\\ & CWP & 0.00 & 0.58 & 0.81 & 0.81 & 0.99 & 0.27 & 0.18 & 3.92 & 219 \\ & WC & 0.09 & 0.98 & 1.16 & 0.75 & 0.65 & 0.40 & 0.42 & 2.25 & 219 \\ \hline Operation & Failure time & 0.03 & 3.02 & 3.79 & 2.89 & 0.76 & 0.13 & 0.21 & 11.32 & 29 \\ & Censoring time & 0.88 & 3.13 & 3.78 & 2.31 & 0.61 & 0.24 & 0.26 & 13.38 & 151 \\ & WFP & 0.73 & 4.21 & 3.75 & 1.33 & 0.35 & -0.55& 0.23 & 6.71 & 180 \\ & FR & 0.00 & 2.83 & 2.93 & 2.40 & 0.82 & -0.15& 0.31 & 10.50 & 180 \\ \hline Valve & Failure time &0.03 & 3.00 & 3.50 & 2.84 & 0.81 & 0.10 & 0.21 & 11.32 & 34 \\ & Censoring time &0.10 & 3.20 & 3.94 & 2.84 & 0.72 & 0.21 & 0.23 & 14.05 & 258 \\ \hline \end{tabular} } \label{continuas} \end{table} \begin{table}[!h] \centering \caption{Descriptive summary of the 
qualitative explanatory variables.} \resizebox{\linewidth}{!}{ \setlength{\tabcolsep}{3pt} \begin{tabular}{llccc} \hline Characteristic & Variable & \multicolumn{3}{c}{Group/Basin} \\ \hline Environment & OU & Campos (CB) & Santos (SB) & Espírito Santo (ES) \\ & & 116 (52.96\%) & 91 (41.55\%) & 12 (5.49\%) \\ \hline Valve & Mfr. & A & B & Others \\ & & 88 (30.14\%) & 174 (59.59\%) & 30 (10.27\%) \\ & Family & A & B & Others \\ & & 87 (29.79\%) & 150 (51.37\%) & 55 (18.84\%) \\ & Dim. & 4.5'' & 5.5'' & \\ & & 111 (38.01\%) & 181 (61.99\%) & \\ & PC & 5,000 & 7,500 & 10,000 \\ & & 61 (20.89\%) & 49 (16.78\%) & 182 (62.33\%) \\ \hline \end{tabular} } \label{qualitativas} \end{table} The database contains 366 observations, but there is a lot of missing data in the explanatory variables. The removal of the cases with missing data reduces the database to only 54 observations, which makes multivariate analyses difficult. Hence, we decided to fit a model for each group of explanatory variables, thereby eliminating the observations with missing data inside the groups. The summary measures previously presented already consider this deletion. In order to choose/select the explanatory variables, we initially adopted the GTDL model with gamma frailty term and the following steps: \begin{description} \item[Step 1: ] \ For each group, select the significant covariates in the ${\bm x}^\top \bm{\beta}$ structure, using the stepwise method and the generalized likelihood ratio test, and also considering $\alpha$ as a scalar; \item[Step 2: ] \ For each group, select the significant covariates in the $\mathbf{x_*}^{\top}\bm{\alpha}$ structure, using the stepwise method and the generalized likelihood ratio test, considering for the ${\bm x}^\top\bm{\beta}$ structure the covariates obtained in Step 1. \end{description} At the end of Step 2, we perform a hypothesis test to verify whether there is observed heterogeneity ($H_0: \theta=0$). 
In this case, the generalized likelihood ratio test was adopted with the modification presented in \cite{Maller_Zhou}, because the value of the parameter under the null hypothesis is on the boundary of the parametric space. If the null hypothesis is not rejected, the adopted model will be the GTDL; otherwise, the GTDL model with gamma frailty term will be the chosen one. \subsection{Adjustment to the flow characteristics group} After applying the procedure previously described, we obtain in Step 1 that the statistically significant explanatory variables are H2S and BSW, while in Step 2 we do not identify any significant variables, and so only $\alpha_0$ is included in the model. We performed the hypothesis test for the parameter that measures the unobserved heterogeneity and obtained a p-value greater than the significance level of 10\%, so the null hypothesis is not rejected and the GTDL model (without frailty) is the adopted one. The MLEs, SEs and 90\% confidence intervals (90\% CIs) for the GTDL model parameters are presented in Table \ref{emv_fluxo}. By analyzing the confidence intervals, we conclude that all parameters are significant at the 10\% level. \begin{table}[!h] \centering \caption{Estimation results of the GTDL model fitted to the flow characteristics group.} \resizebox{\linewidth}{!}{ \setlength{\tabcolsep}{3pt} \begin{tabular}{l|ccc} \hline Parameter & MLE & SE & 90\% CI \\ \hline $\alpha_0$ & 0.7709 & 0.1969 & (0.4470; 1.0948)\\ $\beta_0$ & -5.5598 & 0.8784 & (-7.0048; -4.1148)\\ $\beta_1$ (H2S) & 0.0362 & 0.0084 & (0.0224; 0.0500) \\ $\beta_2$ (BSW) & -0.0202 & 0.0121 & (-0.0401; -0.0003)\\ \hline \end{tabular} } \label{emv_fluxo} \end{table} The QQ-plot of the RQ residuals is shown in Figure \ref{residuo_fluxo}. We observed a linear behavior of the residuals (with intercept 0 and slope 1), thus indicating an agreement between the residuals and the standard normal distribution. 
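As an illustration of how the fitted model is used, the sketch below (in Python, for illustration only; the analyses in this paper were carried out in R) evaluates the fitted GTDL reliability function with the MLEs of Table \ref{emv_fluxo}, taking $\lambda=1$ because $\beta_0$ is included in the model, at the median observed values of H2S and BSW from Table \ref{continuas}.

```python
import math

# MLEs from Table emv_fluxo (GTDL model, flow characteristics group);
# beta_0 is included, so lambda is fixed at 1
alpha0 = 0.7709
beta0, beta_h2s, beta_bsw = -5.5598, 0.0362, -0.0202

def reliability(t, h2s, bsw):
    # R(t|x) = [(1 + exp(alpha0*t + x'b)) / (1 + exp(x'b))]^(-1/alpha0)
    xb = beta0 + beta_h2s * h2s + beta_bsw * bsw
    ratio = (1.0 + math.exp(alpha0 * t + xb)) / (1.0 + math.exp(xb))
    return ratio ** (-1.0 / alpha0)

# Median covariate values from the descriptive summary (Table continuas)
h2s_med, bsw_med = 2.45, 1.95

# Reliability decreases with time since alpha0 > 0 (no cure fraction here)
rels = [reliability(t, h2s_med, bsw_med) for t in (1, 5, 10)]
assert rels[0] > rels[1] > rels[2]

# A higher H2S concentration lowers the reliability at any fixed time,
# in agreement with the positive sign of its estimated coefficient
assert reliability(5, 90.0, bsw_med) < reliability(5, 0.0, bsw_med)
```

The same function reproduces the qualitative behavior of the reliability curves discussed next for varying H2S and BSW.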
\begin{figure}[h] \centering \includegraphics[width=0.5\linewidth]{fig04.pdf} \caption{QQ-plot with envelope of 95\% for the RQ residuals of the adjustment to the flow characteristics group.} \label{residuo_fluxo} \end{figure} For the reliability functions and hazard ratios (HRs) illustrated hereinafter, we adopt the median value of the continuous explanatory variables whenever necessary. When the objective is to present reliability functions for different values of a continuous covariate, we vary it from the minimum to the maximum observed value. In order to illustrate the effect of an increase in the amount of H2S or BSW on the reliability function, we exhibit in Figure \ref{fig_conf_fluxo} (a) several reliability curves for different values of the H2S variable. We note that, with the increase in the value of the H2S variable, the reliability function shows a faster decreasing behavior, that is, the higher the concentration of H2S, the lower the reliability of DHSVs. It is known that the concentration of H2S is associated with failures of metallic components in the oil and gas exploration industry. This can be seen, for instance, in \cite{veritas2009recommended}, which presents a summary of common corrosion threats, some of them involving H2S; and \cite{iso2003iso}, which gives recommendations for material selection when H2S is present. In Figure \ref{fig_conf_fluxo} (b), we show the variation of the BSW variable being reflected in the reliability function. Observe that alterations in the BSW value changed the reliability curve: the higher the concentration of BSW, the greater the reliability. \begin{figure}[!] \centering \includegraphics[width=1\linewidth]{fig05.pdf} \caption{(a) Reliability function with variation in the value of the H2S variable.
(b) Reliability function with variation in the value of the BSW variable.} \label{fig_conf_fluxo} \end{figure} In Figure \ref{razao_risco_fluxo} (a), we present the HR curve for the first and third quartiles of the H2S variable. We note that before 10 years of age the HR is less than one, so the risk of valve failure is greater when the H2S variable assumes the value of the third quartile; but after 10 years the HR is approximately one, and, therefore, we have approximately equal hazards. In Figure \ref{razao_risco_fluxo} (b), we show the HR for the first and third quartiles of the BSW variable. From this plot, we observe that the HR is greater than one up to 14 years of age; therefore, in this period the valve has a higher risk of failure when the BSW variable takes on the value of the first quartile; after the first 14 years the HR is approximately one, so the risk of failure is approximately equal in both groups. We highlight that this result differs from the one that would be obtained with the Cox model, since the ratio of hazard functions in the Cox model is constant over time. \begin{figure}[!h] \centering \includegraphics[width=1\linewidth]{fig06.pdf} \caption{(a) Ratio of hazard functions between the first and third quartiles of the H2S variable. (b) Ratio of hazard functions between the first and third quartiles of the BSW variable.} \label{razao_risco_fluxo} \end{figure} In order to check for the presence of influential observations, we calculated the GD and LD measures. The obtained results are presented in Figure \ref{cooks_fluxo}, from which we can see the existence of four influential observations according to the generalized Cook's distance (cases $2,3,34$ and $70$); from the LD, we also observe four influential observations (cases $2,3,24$ and $70$). Hence, the detected influential observations are cases $2,3,24,34$ and $70$. \begin{figure}[!] \centering \includegraphics[width=1\linewidth]{fig07.pdf} \caption{(a) Generalized Cook's distance.
(b) Likelihood distance, considering the GTDL model fitted to the flow group data.} \label{cooks_fluxo} \end{figure} We checked the impact of the detected influential cases on the model inference. The RC values (in \%) and the p-values obtained after removing the influential observations are displayed in Table \ref{rc_fluxo}. At the 10\% level, note that the parameters $\alpha_0$, $\beta_0$ and $\beta_1$ remained significant in all scenarios, whereas the parameter $\beta_2$ was only significant in two scenarios (specifically, when excluding observation 24, and when excluding all influential observations). Therefore, the effect of time and the H2S variable were significant in explaining the time-to-failure, while the BSW variable became non-significant in some scenarios. The RC of the parameter $\alpha_0$ is the largest when case 2 is excluded (RC of 20.4872\%), while for the parameters $\beta_0$ and $\beta_1$ the RCs are the largest when case 3 is removed (RC of 11.1820\% and 13.3283\%, respectively), and, finally, for the parameter $\beta_2$ the RC is the largest when case 24 is removed (RC of 48.0736\%). When all influential observations are excluded, we find that the RCs are less than 6\%, indicating little change in the point estimates.
\begin{table}[!h] \centering \caption{The RC values (in \%) for the MLEs and SEs, in addition to the p-values, considering the deleted observations.} \resizebox{\linewidth}{!}{ \setlength{\tabcolsep}{3pt} \begin{tabular}{llcccc} \hline Deleted case & & $\hat{\alpha}_0$ & $\hat{\beta}_0$ & $\hat{\beta}_1$ & $\hat{\beta}_2$ \\ \hline \{2\} & $\mbox{RC}_{\nu_{j(i)}}$ & 20.4872 & 9.3523 & 4.0732 & 21.7665 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 27.0728 & 20.3730 & 9.6097 & 4.5475 \\ & p-value & 0.0002 & $<$0.0001 & $<$0.0001 & 0.1718 \\ \{3\} & $\mbox{RC}_{\nu_{j(i)}}$ &10.7815 & 11.1820 & 13.3283 & 8.0404 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ &19.7344 & 23.3812 & 15.1467 & 5.2880 \\ & p-value & 0.0003 & $<$0.0001 & $<$0.0001 & 0.1453 \\ \{24\} & $\mbox{RC}_{\nu_{j(i)}}$ &5.8829 & 3.1193 & 8.2916 & 48.0736 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ &1.2191 & 3.8244 & 9.0114 & 20.5358 \\ & p-value & $<$0.0001 & $<$0.0001 & $<$0.0001 & 0.0405 \\ \{34\} & $\mbox{RC}_{\nu_{j(i)}}$ &13.2155 & 5.0062 & 1.2822 & 10.6848 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ &32.4453 & 20.6371 & 7.0519 & 3.4001 \\ & p-value & 0.0008 & $<$0.0001 & $<$0.0001 & 0.1232 \\ \{70\} & $\mbox{RC}_{\nu_{j(i)}}$ &2.6705 & 6.6477 & 8.6163 & 13.3373 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ &5.9642 & 12.1104 & 8.4947 & 2.5371 \\ & p-value & 0.0001 & $<$0.0001 & $<$0.0001 & 0.1588 \\ \{2,3,24,34,70\} & $\mbox{RC}_{\nu_{j(i)}}$ &1.1559 & 1.0171 & 1.5947 & 5.7544 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ &1.9254 & 0.0629 & 0.0262 & 1.2419 \\ & p-value & 0.0001 & $<$0.0001 & $<$0.0001 & 0.0816 \\ \hline \end{tabular} } \label{rc_fluxo} \end{table} \subsection{Adjustment to the valve characteristics group} In the structure ${\bm x}^\top\bm{\beta}$ we identified only the variable Family as statistically significant, whereas in the structure $\mathbf{x_*}^{\top}\bm{\alpha}$ we found that the variables PC and Mfr. are meaningful. 
Therefore, the effect of time is different for each level of these two variables. The hypothesis test for the frailty distribution's parameter resulted in the rejection of the null hypothesis. Hence, the GTDL gamma frailty model is the one that best fits these data. The obtained MLEs are shown in Table \ref{ajuste_valvula}. Since the frailty parameter is significant, there is evidence that important variables were not included in the modeling, indicating that the variables PC, Mfr. and Family are not the only ones that impact the failure time of the valves. \begin{table}[!h] \centering \caption{Estimation results of the GTDL gamma frailty model fitted to the valve characteristics group.} \resizebox{\linewidth}{!}{ \setlength{\tabcolsep}{3pt} \begin{tabular}{l|ccc} \hline Parameter & MLE & SE & 90\% CI \\ \hline $\alpha_0$ & -5.3280 & 0.7101 & (-6.4961; -4.1599) \\ $\alpha_1$ (7,500) & 1.9430 & 0.7919 & (0.6403; 3.2457) \\ $\alpha_2$ (10,000) & 0.8336 & 0.3071 & (0.3284; 1.3388) \\ $\alpha_3$ (Mfr. B) & 5.6018 & 0.7466 & (4.3736; 6.8300) \\ $\alpha_4$ (Mfr. A) & 5.8969 & 0.7148 & (4.7211; 7.0727) \\ $\beta_0$ & -6.1317& 1.1002 & (-7.9415; -4.3219) \\ $\beta_1$ (family B) & 0.8631 & 1.2455 & (-1.1857; 2.9119) \\ $\beta_2$ (Others) & 5.8098 & 1.6673 & (3.0671; 8.5525) \\ $\theta$ &12.3951 & 3.5735 & (6.5166; 18.2735) \\ \hline \end{tabular} } \label{ajuste_valvula} \end{table} It is worth mentioning that when the Manufacturer is the ``Others'' class, the GTDL model with gamma frailty term yields a cure fraction, regardless of what the other explanatory variables are, since in this case $\mathbf{x_*}^{\top}\bm{\alpha}<0$, with the cure fraction value close to one. From a descriptive analysis, we identified that only two out of 30 observations in the ``Others'' class were failure times, occurring before one year of operation. In Figure \ref{residuo_valvula}, we present the QQ-plot of the RQ residuals.
Again, we observed a good agreement between the residuals and the standard normal distribution. Therefore, we can conclude that the GTDL gamma frailty model provided a good fit to the data. \begin{figure}[h] \centering \includegraphics[width=0.5\linewidth]{fig08.pdf} \caption{QQ-plot with envelope of 95\% for the RQ residuals of the adjustment to the valve characteristics group.} \label{residuo_valvula} \end{figure} Figure \ref{fig_conf_valvula} shows the reliability functions for the variables Family, PC and Mfr. Note that there is little difference between the reliability curves of the ``family A'' and ``family B'' classes; the same occurs with the curves of the ``Mfr. A'' and ``Mfr. B'' classes. By analyzing the curves of the PC variable, we see that the lowest reliability is for PC equal to ``7,500'', while the highest one is for PC equal to ``5,000''. The closeness between the reliability curves of the ``family B'' and ``family A'' levels is justified by analyzing the confidence interval of the ``family B'' level of the Family variable, since this level is not statistically different from the ``family A'' reference level. From this, we can conclude that the failure times showed no significant difference in relation to these two levels of the Family variable. The same conclusion can be drawn for the reliability curves considering the PC ``7,500'' and ``10,000'', and also the Manufacturers ``Mfr. A'' and ``Mfr. B''. For that, however, it was necessary to change the reference classes and refit the model. \begin{figure}[!h] \centering \includegraphics[width=1\linewidth]{fig09.pdf} \caption{(a) Reliability function for the Family variable, considering PC equal to ``7,500'' and ``Mfr. B'' for the Manufacturer. (b) Reliability function for the PC variable, adopting ``family B'' for Family and ``Mfr. B'' for Manufacturer. (c) Reliability function for the Mfr.
variable, assuming ``family B'' for Family and ``7,500'' for PC.} \label{fig_conf_valvula} \end{figure} Due to the architecture imposed for safety valves in deep water wells used by Petrobras, most of its valves have nitrogen chambers, a technology that mitigates the pressure sensitivity of the well and ensures, through a surface calibration to the individual condition of each well, the opening and closing pressures of a particular specification. In other words, the pressure class considered in the analysis is a control variable that is easy to handle for new DHSVs. Since the calibration is a feasible and low-cost action, with an appreciable impact on reducing the risks of that component, it is advisable to use a PC equal to 5,000 psi. Figure \ref{razao_risco_valvula} presents an analysis of the behavior of the hazard function using the GTDL gamma frailty model fitted to the data. From the HR of the Family variable shown in Figure \ref{razao_risco_valvula} (a), we observe that the ratios between ``family B'' and ``Others'', and between ``family A'' and ``Others'', start below one, thus indicating that the ``Others'' family has a higher risk; however, after approximately 3 years this relationship is reversed. For the ratio between ``family B'' and ``family A'', we see that it is greater than one until approximately 5 years, i.e., at the beginning ``family B'' shows a higher risk of failure compared to ``family A''; but after 5 years it is ``family A'' that exhibits a greater risk of failure. From the HRs for the PC variable displayed in Figure \ref{razao_risco_valvula} (b), we observe that, initially, the PC of ``10,000'' shows more risk of failure than the PC of ``5,000'', and the PC of ``7,500'' is more at risk of failure than the PCs of ``5,000'' and ``10,000''; nevertheless, these relationships are reversed over time. It is worth noting that the risk of failure of PC ``7,500'' reaches approximately 14 times the risk of failure of PC ``5,000''.
When the comparison is made with PC ``10,000'', the risk of PC ``7,500'' is still approximately 6 times as high. Finally, Figure \ref{razao_risco_valvula} (c) exhibits the HRs for the Mfr. variable, from which we note that the ratios between ``Others'' and ``Mfr. A'', ``Others'' and ``Mfr. B'' are always below one, thus indicating that the risk of failure is greater for the Manufacturers ``Mfr. B'' and ``Mfr. A''. When comparing ``Mfr. B'' and ``Mfr. A'', we observe that, initially, the ratio is less than one until the age of approximately 3 years, thus indicating that ``Mfr. A'' has a higher risk of failure than ``Mfr. B''. However, after 3 years the relationship is inverted and maintained over time. \begin{figure}[!] \centering \includegraphics[width=1\linewidth]{fig10.pdf} \caption{(a) Ratio of hazard functions of the Family variable, adopting PC equal to ``10,000'' and Manufacturer ``Mfr. B''. (b) Ratio of hazard functions of PC, adopting the ``family B'' class for Family and Manufacturer ``Mfr. B''. (c) Ratio of hazard functions of Manufacturer, adopting the ``family B'' class for Family and PC equal to ``7,500''.} \label{razao_risco_valvula} \end{figure} The GD measure identified 18 influential observations, while the LD indicated 9 influential observations; these indications can be seen in Figure \ref{cooks_valvula}. The cases $76, 99, 148, 164, 196$ and $290$ were detected by both metrics. \begin{figure}[!] \centering \includegraphics[width=1\linewidth]{fig11.pdf} \caption{(a) Generalized Cook's distance. (b) Likelihood distance, considering the GTDL gamma frailty model fitted to the valve group data.} \label{cooks_valvula} \end{figure} In Table \ref{rc_valvula}, we observe that the MLEs of the parameters $\alpha_0$, $\alpha_1$, $\alpha_2$, $\alpha_3$, $\alpha_4$, $\beta_0$, $\beta_2$ and $\theta$ are always statistically significant at the 10\% level, while the MLE of $\beta_1$ is only meaningful when removing all influential observations.
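The relative changes reported in Table \ref{rc_valvula} can be reproduced with a one-line routine. The sketch below (in Python, for illustration only; it is not the authors' code) assumes the usual case-deletion definition used in influence diagnostics, $\mbox{RC} = |(\hat{\nu}_j - \hat{\nu}_{j(i)})/\hat{\nu}_j| \times 100$, where $\hat{\nu}_{j(i)}$ denotes the estimate obtained after deleting the flagged case(s):

```python
# Relative change (RC, in %) of an estimate after case deletion, assuming the
# standard definition RC = |(full-data MLE - case-deleted MLE) / full-data MLE| * 100.

def relative_change(full, deleted):
    """RC in % between a full-data estimate and a case-deleted estimate."""
    return abs((full - deleted) / full) * 100.0

# Illustration with the alpha_1 estimates quoted in the text:
# 1.9430 (all data) versus 12.2712 (all influential cases removed),
# giving about 531.6%, which matches the {All} entry for alpha_1 in the
# table up to rounding of the published estimates.
rc_alpha1 = relative_change(1.9430, 12.2712)
```

The same function applies to the standard errors, yielding the $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ rows.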
We also note a large change in the parameter estimates when all the influential observations are deleted, e.g., the estimates of the parameters $\alpha_1$, $\alpha_2$, $\alpha_3$, $\alpha_4$, $\beta_1$ and $\theta$ change from 1.9430, 0.8336, 5.6018, 5.8969, 0.8631 and 12.3951, to 12.2712, 10.4262, -27.2694, -19.7312, 34.6029 and 64.3544, respectively. This makes us believe that the reason for this alteration is that all the removed observations correspond to failure times. \begin{table}[!h] \centering \caption{The RC values (in \%) for the MLEs and SEs, in addition to the p-values, considering the deleted observations.} \resizebox{\linewidth}{!}{ \setlength{\tabcolsep}{3pt} \begin{tabular}{llccccccccc} \hline Deleted case & & $\hat{\alpha}_0$ & $\hat{\alpha}_1$ & $\hat{\alpha}_2$ & $\hat{\alpha}_3$ & $\hat{\alpha}_4$ & $\hat{\beta}_0$& $\hat{\beta}_1$& $\hat{\beta}_2$ & $\hat{\theta}$ \\ \hline \{4\} & $\mbox{RC}_{\nu_{j(i)}}$ &30.4017 & 1.4089 & 0.6052 & 29.2493 & 27.4773 & 0.0645 & 5.5488 & 15.4158 & 0.1140 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ &87.8263 & 2.0345 & 1.4861 & 81.4457 & 87.2687 & 0.3251 & 0.8465 & 1.4755 & 6.7392 \\ & p-value &0.0054 & 0.0177 & 0.0056 & 0.0034 & 0.0014 & $<$0.0001 & 0.4607 & 0.0028 & 0.0011 \\ \{7\} & $\mbox{RC}_{\nu_{j(i)}}$ &0.4452 & 0.3912 & 1.5539 & 0.0453 & 0.2204 & 1.2104 & 10.1491 & 0.5057 & 7.3346 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ &37.6123 & 0.3408 & 1.7897 & 34.5184 & 38.0240 & 6.1609 & 5.1090 & 4.3746 & 7.1643 \\ & p-value &$<$0.0001 & 0.0134 & 0.0087 & $<$0.0001 & $<$0.0001 & $<$0.0001 & 0.5536 & 0.0009 & 0.0005 \\ \{10\} & $\mbox{RC}_{\nu_{j(i)}}$ &83.5834 & 0.0054 & 0.8636 & 79.3918 & 75.5341 & 0.1580 & 1.6602 & 4.8208 & 0.6080 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ &52.8768 & 0.4091 & 0.3253 & 47.7292 & 50.6990 & 0.2150 & 0.5921 & 2.1095 & 2.3203 \\ & p-value &$<$0.0001 & 0.0145 & 0.0060 & $<$0.0001 & $<$0.0001 & $<$0.0001 & 0.4785 & 0.0012 & 0.0006 \\ \{26\} & $\mbox{RC}_{\nu_{j(i)}}$ & 0.7032 & 0.1040 &
4.3077 & 0.5007 & 0.5018 & 0.8333 & 7.7694 & 1.4911 & 7.8598 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 2.7794 & 0.3888 & 7.7963 & 2.8082 & 2.6908 & 2.6268 & 1.4871 & 2.6859 & 8.3664 \\ & p-value & $<$0.0001 & 0.0144 & 0.0086 & $<$0.0001 & $<$0.0001 & $<$0.0001 & 0.4618 & 0.0006 & 0.0006 \\ \{35\} &$\mbox{RC}_{\nu_{j(i)}}$ & 0.6100 & 0.4441 & 5.0868 & 0.3712 & 0.4040 & 0.9923 & 7.4050 & 1.7257 & 8.1467 \\ &$\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 2.5757 & 0.3541 & 7.5707 & 2.6490 & 2.4911 & 2.7222 & 1.6881 & 2.7972 & 8.3420 \\ &p-value & $<$0.0001 & 0.0141 & 0.0080 & $<$0.0001 & $<$0.0001 & $<$0.0001 & 0.4642 & 0.0006 & 0.0005 \\ \{38\} & $\mbox{RC}_{\nu_{j(i)}}$ & 1.2109 & 1.6831 & 1.1012 & 1.2304 & 1.0573 & 0.2682 & 7.6990 & 0.0978 & 5.6408 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 3.7460 & 0.2957 & 5.3532 & 3.4772 & 3.6495 & 1.2140 & 0.2522 & 1.4383 & 7.5534 \\ & p-value & $<$0.0001 & 0.0162 & 0.0108 & $<$0.0001 & $<$0.0001 & $<$0.0001 & 0.4566 & 0.0006 & 0.0007 \\ \{48\} &$\mbox{RC}_{\nu_{j(i)}}$ & 0.6206 & 10.6976 & 26.2448 & 4.0939 & 0.9822 & 5.5191 & 48.3224 & 5.4630 & 1.1571 \\ &$\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 14.3356 & 3.3399 & 7.6441 & 13.8904 & 13.0563 & 5.4882 & 1.0541 & 3.4839 & 2.9844 \\ &p-value & $<$0.0001 & 0.0086 & 0.0015 & $<$0.0001 & $<$0.0001 & $<$0.0001 & 0.3091 & 0.0004 & 0.0009 \\ \{49\} &$\mbox{RC}_{\nu_{j(i)}}$ & 34.6316 & 2.1279 & 0.1467 & 33.3239 & 31.3236 & 0.0332 & 6.3699 & 18.1310 & 1.2663 \\ &$\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 107.5781 & 2.6918 & 1.8539 & 99.9393 & 106.9684 & 0.5716 & 1.1461 & 2.8460 & 6.0849 \\ &p-value & 0.0181 & 0.0194 & 0.0056 & 0.0123 & 0.0062 & $<$0.0001 & 0.4559 & 0.0033 & 0.0012 \\ \{52\} &$\mbox{RC}_{\nu_{j(i)}}$ & 1.1845 & 1.5895 & 0.7249 & 1.1900 & 1.0265 & 0.1917 & 7.8443 & 0.0108 & 5.8111 \\ &$\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 3.7452 & 0.3120 & 5.6929 & 3.4954 & 3.6449 & 1.3445 & 0.3291 & 1.5451 & 7.6575 \\ &p-value & $<$0.0001 & 0.0161 & 0.0108 & $<$0.0001 & $<$0.0001 & $<$0.0001 & 0.4563 & 
0.0006 & 0.0007 \\ \{53\} &$\mbox{RC}_{\nu_{j(i)}}$ & 1.3247 & 2.0415 & 2.9656 & 1.4077 & 1.1947 & 0.6467 & 6.5073 & 0.6328 & 4.7346 \\ &$\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 3.6747 & 0.2106 & 3.4059 & 3.3185 & 3.6000 & 0.5177 & 0.0978 & 0.8744 & 6.9317 \\ &p-value & $<$0.0001 & 0.0165 & 0.0109 & $<$0.0001 & $<$0.0001 & $<$0.0001 & 0.4600 & 0.0006 & 0.0007 \\ \{54\} &$\mbox{RC}_{\nu_{j(i)}}$ & 2.0334 & 9.6956 & 1.0775 & 2.6364 & 1.8700 & 0.2125 & 38.8591 & 1.3556 & 1.6908 \\ &$\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 2.9805 & 0.4837 & 0.2554 & 2.3465 & 2.9503 & 0.2574 & 2.8941 & 1.3314 & 1.0405 \\ &p-value & $<$0.0001 & 0.0074 & 0.0059 & $<$0.0001 & $<$0.0001 & $<$0.0001 & 0.6805 & 0.0005 & 0.0005 \\ \{60\} &$\mbox{RC}_{\nu_{j(i)}}$ & 3.2507 & 13.4936 & 5.3774 & 4.1729 & 3.0445 & 1.0967 & 49.4690 & 2.9123 & 2.7842 \\ &$\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 4.8711 & 0.5454 & 2.1287 & 3.8186 & 4.8151 & 1.4059 & 4.0903 & 2.6415 & 1.3689 \\ &p-value & $<$0.0001 & 0.0056 & 0.0051 & $<$0.0001 & $<$0.0001 & $<$0.0001 & 0.7366 & 0.0005 & 0.0004 \\ \{76\} &$\mbox{RC}_{\nu_{j(i)}}$ & 5.3734 & 19.7508 & 13.7541 & 6.8757 & 5.1136 & 2.8191 & 65.7080 & 5.8936 & 5.3690 \\ &$\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 7.8656 & 0.9865 & 7.5837 & 6.0300 & 7.7674 & 3.8681 & 6.5047 & 5.4044 & 2.9683 \\ &p-value & $<$0.0001 & 0.0036 & 0.0041 & $<$0.0001 & $<$0.0001 & $<$0.0001 & 0.8234 & 0.0005 & 0.0004 \\ \{91\} &$\mbox{RC}_{\nu_{j(i)}}$ & 0.4816 & 0.3599 & 1.3006 & 0.1340 & 0.9395 & 3.3248 & 24.9909 & 2.8898 & 6.3258 \\ &$\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 10.3862 & 0.3317 & 1.1331 & 9.6004 & 10.5617 & 1.5611 & 1.8674 & 1.9503 & 6.6533 \\ &p-value & $<$0.0001 & 0.0135 & 0.0081 & $<$0.0001 & $<$0.0001 & $<$0.0001 & 0.6099 & 0.0009 & 0.0005 \\ \{99\} &$\mbox{RC}_{\nu_{j(i)}}$ & 5.3029 & 2.7492 & 1.6940 & 5.3210 & 4.8801 & 0.3085 & 4.3383 & 3.6137 & 6.7477 \\ &$\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 6.6174 & 1.4876 & 1.7026 & 5.7547 & 6.5723 & 1.0905 & 1.0761 & 3.9151 & 6.1229 \\ &p-value & 
$<$0.0001 & 0.0105 & 0.0066 & $<$0.0001 & $<$0.0001 & $<$0.0001 & 0.5119 & 0.0005 & 0.0005 \\ \{136\} &$\mbox{RC}_{\nu_{j(i)}}$ & 12.2470 & 1.6784 & 0.2632 & 11.9606 & 11.0912 & 0.0027 & 5.1213 & 15.6160 & 1.1150 \\ &$\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 21.6595 & 2.0991 & 1.5249 & 19.1983 & 21.5928 & 0.3857 & 0.9612 & 4.2859 & 4.5029 \\ &p-value & $<$0.0001 & 0.0181 & 0.0057 & $<$0.0001 & $<$0.0001 & $<$0.0001 & 0.4620 & 0.0021 & 0.0010 \\ \{148\} &$\mbox{RC}_{\nu_{j(i)}}$ & 0.1586 & 7.5269 & 17.3875 & 0.4677 & 2.1281 & 18.1920 & 110.3501 & 19.2958 & 2.6873 \\ &$\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 12.7337 & 0.0151 & 6.5753 & 11.2412 & 12.8310 & 20.1945 & 12.3395 & 9.8633 & 2.5618 \\ &p-value & $<$0.0001 & 0.0083 & 0.0028 & $<$0.0001 & $<$0.0001 & $<$0.0001 & 0.1944 & 0.0002 & 0.0005 \\ \{164\} &$\mbox{RC}_{\nu_{j(i)}}$ & 18.0517 & 2.1956 & 1.9246 & 17.3682 & 16.4088 & 0.3663 & 2.5953 & 9.3629 & 7.0554 \\ &$\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 21.8948 & 0.8955 & 2.8491 & 18.3403 & 21.5007 & 1.4035 & 2.6764 & 13.1805 & 6.4181 \\ &p-value & $<$0.0001 & 0.0114 & 0.0071 & $<$0.0001 & $<$0.0001 & $<$0.0001 & 0.5109 & 0.0008 & 0.0005 \\ \{191\} &$\mbox{RC}_{\nu_{j(i)}}$ & 0.6231 & 2.7870 & 4.1934 & 0.4883 & 0.6648 & 2.1828 & 22.6979 & 1.6876 & 7.5923 \\ &$\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 3.8902 & 1.1116 & 11.5756 & 3.8539 & 3.8108 & 1.1182 & 0.9392 & 1.7019 & 7.7862 \\ &p-value & $<$0.0001 & 0.0126 & 0.0112 & $<$0.0001 & $<$0.0001 & $<$0.0001 & 0.5956 & 0.0008 & 0.0005 \\ \{196\} &$\mbox{RC}_{\nu_{j(i)}}$ & 21.9517 & 2.1653 & 0.7409 & 21.4201 & 19.9186 & 0.1242 & 7.4741 & 10.8301 & 7.2652 \\ &$\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 24.3956 & 0.8451 & 3.5498 & 20.6982 & 24.0475 & 1.3982 & 2.3772 & 14.0496 & 6.2292 \\ &p-value & $<$0.0001 & 0.0115 & 0.0083 & $<$0.0001 & $<$0.0001 & $<$0.0001 & 0.5311 & 0.0007 & 0.0005 \\ \{290\} &$\mbox{RC}_{\nu_{j(i)}}$ & 0.1946 & 1.0355 & 2.3766 & 0.2919 & 1.3768 & 11.3504 & 83.0588 & 12.0761 & 0.2652 \\ 
&$\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 5.0417 & 0.1510 & 0.0884 & 4.7237 & 5.1739 & 15.2285 & 9.9903 & 6.8935 & 1.0658 \\ &p-value & $<$0.0001 & 0.0153 & 0.0081 & $<$0.0001 & $<$0.0001 & $<$0.0001 & 0.2488 & 0.0003 & 0.0006 \\ \{All\} &$\mbox{RC}_{\nu_{j(i)}}$ & 639.0315 & 531.5499 & 1150.6768 & 586.7832 & 434.5975 & 849.2352 & 3909.2424 & 2257.1743 & 419.2168 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 115.8262 & 462.4476 & 1129.8854 & 245.8145 & 228.6784 & 1888.1650 & 1178.6618 & 1190.1720 & 946.7594 \\ &p-value & $<$0.0001 & 0.0059 & 0.0058 & $<$0.0001 & $<$0.0001 & 0.0078 & 0.0298 & $<$0.0001 & 0.0853 \\ \hline \end{tabular} } \label{rc_valvula} \end{table} \subsection{Adjustment to the environment characteristics group} In Step 1, the variables OU, CWT and WC are statistically significant; while in Step 2, only the intercept is relevant. The GTDL model (without frailty) is the one that best fits these data, since we do not reject the null hypothesis ($H_0: \theta=0$) at the 10\% significance level. From the obtained MLEs displayed in Table \ref{ajuste_ambiente}, we observe that all parameters are significant at the 10\% level, except for the parameter $\beta_1$, which measures the effect of the OU class ``SB''. Thus, we can say that there is no significant difference between the OU levels ``CB'' (reference) and ``SB''. 
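To make the role of the likelihood-ratio test $H_0: \theta=0$ concrete, the following sketch evaluates the reliability function with and without the frailty term. It assumes a common GTDL parameterization in which the hazard is logistic in time, $h(t\mid\mathbf{x}) = e^{at+b}/(1+e^{at+b})$ with $a=\mathbf{x}^{\top}\bm{\alpha}$ and $b=\mathbf{z}^{\top}\bm{\beta}$, and the standard gamma-frailty marginalization $S(t) = (1+\theta H(t))^{-1/\theta}$; the exact parameterization used in the paper may differ, and all symbols below are illustrative:

```python
import math

# Hedged sketch (not the authors' code) of the GTDL reliability function,
# assuming a logistic-in-time hazard h(t) = exp(a*t + b) / (1 + exp(a*t + b)),
# with a = x'alpha (time effect), b = z'beta, and gamma frailty variance theta.

def gtdl_cum_hazard(t, a, b):
    """Cumulative hazard H(t) of the GTDL model; requires a != 0."""
    return (math.log1p(math.exp(a * t + b)) - math.log1p(math.exp(b))) / a

def gtdl_reliability(t, a, b):
    """GTDL without frailty: S(t) = exp(-H(t)), the theta -> 0 limit."""
    return math.exp(-gtdl_cum_hazard(t, a, b))

def gtdl_gamma_reliability(t, a, b, theta):
    """GTDL with gamma frailty: S(t) = (1 + theta*H(t))^(-1/theta)."""
    return (1.0 + theta * gtdl_cum_hazard(t, a, b)) ** (-1.0 / theta)

def cure_fraction(a, b, theta):
    """Limiting survival S(infinity); strictly positive only when a < 0."""
    if a >= 0:
        return 0.0
    h_inf = -math.log1p(math.exp(b)) / a
    return (1.0 + theta * h_inf) ** (-1.0 / theta)
```

When the time effect is negative ($\mathbf{x}^{\top}\bm{\alpha}<0$), $H(t)$ is bounded, so the reliability levels off at a strictly positive plateau; this is the cure-fraction behaviour noted for the ``Others'' manufacturer class in the valve group.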
\begin{table}[!h] \centering \caption{Estimation results of the GTDL model fitted to the environment characteristics group.} \resizebox{\linewidth}{!}{ \setlength{\tabcolsep}{3pt} \begin{tabular}{l|ccc} \hline Parameter & MLE & SE & 90\% CI \\ \hline $\alpha_0$ & 0.1542 & 0.0277 & (0.1087; 0.1996) \\ $\beta_0$ & -5.0075 & 0.5202 & (-5.8632; -4.1518) \\ $\beta_1$ (OU-SB) & -0.5979 & 0.5185 & (-1.4507; 0.2550) \\ $\beta_2$ (OU-ES) & 2.1236 & 0.4910 & (1.3158; 2.9314) \\ $\beta_3$ (CWT) & 0.0203 & 0.0067 & (0.0093; 0.0314) \\ $\beta_4$ (WC) & 0.6172 & 0.3611 & (0.0233; 1.2111) \\ \hline \end{tabular} } \label{ajuste_ambiente} \end{table} Figure \ref{residuo_ambiental} shows the QQ-plot of the RQ residuals. In general, we observed a good agreement between the residuals and the standard normal distribution, but we noticed a slight deviation in the lower tail. \begin{figure}[h] \centering \includegraphics[width=0.5\linewidth]{fig12.pdf} \caption{QQ-plot with envelope of 95\% for the RQ residuals of the adjustment to the environment characteristics group.} \label{residuo_ambiental} \end{figure} From Figure \ref{fig_conf_ambiente}, we note that ``ES'' is the class with the lowest reliability among the three operating units, while ``SB'' is the one with the highest reliability. Moreover, it can be observed that the higher the value of the CWT and WC variables, the lower the reliability. \begin{figure}[!] \centering \includegraphics[width=1\linewidth]{fig13.pdf} \caption{(a) Reliability function for the operating units. (b) Reliability function with variation in the value of the CWT variable. (c) Reliability function with variation in the value of the WC variable.} \label{fig_conf_ambiente} \end{figure} Figure \ref{razao_risco_ambiente} (a) shows that the HRs between ``CB'' and ``SB'' with ``ES'' are below one all the time, so indicating that the risk of valve failure is lower in the ``CB'' and ``SB'' operating units. 
When analyzing the HR between ``CB'' and ``SB'', we note that it is always greater than one, thus indicating that ``SB'' has a lower risk of failure. Finally, by analyzing the HRs involving the first and third quartiles of the CWT and WC variables (Figure \ref{razao_risco_ambiente} (b) and Figure \ref{razao_risco_ambiente} (c), respectively), we observe that both are less than one, thus indicating that an increase in the value of these variables causes the risk to also increase. \begin{figure}[!] \centering \includegraphics[width=1\linewidth]{fig14.pdf} \caption{(a) Ratio of hazard functions of the OU variable. (b) Ratio of hazard functions between the first and third quartiles of the CWT variable. (c) Ratio of hazard functions between the first and third quartiles of the WC variable.} \label{razao_risco_ambiente} \end{figure} Lastly, Figure \ref{cooks_ambiente} exhibits the GD and LD measures, considering the GTDL model fitted to the environmental group data. In total, 17 influential observations are detected. In Table \ref{influencia_ambiente}, we present the RCs and p-values, from which we note that the effect of time, the CWT variable and the OU class ``ES'' are significant at the 10\% level in all arrangements. In contrast, the OU class ``SB'' is not significant in any of the scenarios, while the WC variable is significant in most of them. The RC is the largest when we remove all the influential observations, and this result holds for all parameters. These changes range from 29.1455\% (for the parameter $\beta_0$) to 65.9195\% (for the parameter $\beta_2$). \begin{figure}[!] \centering \includegraphics[width=1\linewidth]{fig15.pdf} \caption{(a) Generalized Cook's distance. (b) Likelihood distance, considering the GTDL model fitted to the environmental group data.} \label{cooks_ambiente} \end{figure} \begin{table}[!]
\centering \caption{The RC values (in \%) for the MLEs and SEs, in addition to the p-values, considering the deleted observations.} \resizebox{\linewidth}{!}{ \setlength{\tabcolsep}{3pt} \begin{tabular}{clcccccc} \hline Deleted case & & $\hat{\alpha}_0$ & $\hat{\beta}_0$ & $\hat{\beta}_1$ & $\hat{\beta}_2$ & $\hat{\beta}_3$ & $\hat{\beta}_4$\\ \hline \{1\} & $\mbox{RC}_{\nu_{j(i)}}$ & 2.9254 & 2.9205 & 2.5237 & 1.0057 & 7.2610 & 10.9126 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 1.7930 & 3.6059 & 0.4641 & 0.3209 & 1.5571 & 1.8379 \\ & p-value & $<$0.0001 & $<$0.0001 & 0.2393 & $<$0.0001 & 0.0014 & 0.0626 \\ \{3\} & $\mbox{RC}_{\nu_{j(i)}}$ & 4.2387 & 2.4626 & 1.4264 & 0.1815 & 1.8372 & 11.9369 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 1.5334 & 2.5395 & 1.6244 & 0.6835 & 1.8752 & 1.6002 \\ & p-value & $<$0.0001 & $<$0.0001 & 0.2633 & $<$0.0001 & 0.0035 & 0.0597 \\ \{6\} & $\mbox{RC}_{\nu_{j(i)}}$ & 1.1481 & 0.3146 & 2.0711 & 13.7774 & 0.0845 & 0.2721 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 1.0717 & 0.6794 & 0.0710 & 5.8548 & 0.1591 & 0.1516 \\ & p-value & $<$0.0001 & $<$0.0001 & 0.2591 & $<$0.0001 & 0.0025 & 0.0887 \\ \{9\} & $\mbox{RC}_{\nu_{j(i)}}$ & 6.6762 & 4.8086 & 1.3080 & 1.9930 & 10.5056 & 16.2842 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 2.3401 & 4.6059 & 0.7279 & 0.6454 & 1.8089 & 2.3332 \\ & p-value & $<$0.0001 & $<$0.0001 & 0.2461 & $<$0.0001 & 0.0010 & 0.0521 \\ \{11\} & $\mbox{RC}_{\nu_{j(i)}}$ & 5.9552 & 4.1589 & 0.9594 & 1.3080 & 6.8238 & 15.6473 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 2.1207 & 4.0516 & 1.0153 & 0.6422 & 1.8370 & 2.1400 \\ & p-value & $<$0.0001 & $<$0.0001 & 0.2491 & $<$0.0001 & 0.0015 & 0.0529 \\ \{14\} & $\mbox{RC}_{\nu_{j(i)}}$ & 0.3567 & 0.1849 & 19.1322 & 2.9679 & 4.6735 & 10.1193 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 0.2406 & 0.4304 & 3.5654 & 1.1133 & 1.3708 & 1.9766 \\ & p-value & $<$0.0001 & $<$0.0001 & 0.1847 & $<$0.0001 & 0.0018 & 0.1319 \\ \{28\} & $\mbox{RC}_{\nu_{j(i)}}$ & 3.8931 & 3.0017 & 0.7810 & 1.6211 
& 6.2633 & 9.4813 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 1.7758 & 3.2991 & 0.5321 & 0.4795 & 1.3797 & 1.6690 \\ & p-value & $<$0.0001 & $<$0.0001 & 0.2551 & $<$0.0001 & 0.0015 & 0.0657 \\ \{62\} & $\mbox{RC}_{\nu_{j(i)}}$ & 2.4544 & 0.2317 & 17.2524 & 4.0081 & 2.3116 & 9.7044 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 0.7470 & 0.4728 & 0.1128 & 0.7484 & 0.3410 & 0.3399 \\ & p-value & $<$0.0001 & $<$0.0001 & 0.3405 & $<$0.0001 & 0.0032 & 0.1240 \\ \{68\} & $\mbox{RC}_{\nu_{j(i)}}$ & 0.1492 & 0.3021 & 1.9317 & 10.0868 & 0.7457 & 2.3682 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 0.5829 & 0.0464 & 0.0464 & 5.8795 & 0.0634 & 0.0850 \\ & p-value & $<$0.0001 & $<$0.0001 & 0.2579 & $<$0.0001 & 0.0026 & 0.0949 \\ \{92\} & $\mbox{RC}_{\nu_{j(i)}}$ & 5.3858 & 3.4375 & 3.6343 & 2.3131 & 6.1674 & 9.3714 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 1.8626 & 3.3714 & 0.6349 & 0.6576 & 1.4118 & 1.7256 \\ & p-value & $<$0.0001 & $<$0.0001 & 0.2695 & $<$0.0001 & 0.0015 & 0.0661 \\ \{125\} & $\mbox{RC}_{\nu_{j(i)}}$ & 0.8984 & 0.6199 & 33.6173 & 4.5336 & 12.3489 & 11.8086 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 0.0543 & 0.0984 & 8.0266 & 1.4191 & 4.6786 & 2.8638 \\ & p-value & $<$0.0001 & $<$0.0001 & 0.1538 & $<$0.0001 & 0.0113 & 0.0632 \\ \{126\} & $\mbox{RC}_{\nu_{j(i)}}$ & 0.7894 & 0.3005 & 17.9258 & 2.3143 & 5.1040 & 5.9834 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 0.2181 & 0.0042 & 0.7860 & 0.4513 & 0.4750 & 0.1782 \\ & p-value & $<$0.0001 & $<$0.0001 & 0.3477 & $<$0.0001 & 0.0015 & 0.1074 \\ \{155\} & $\mbox{RC}_{\nu_{j(i)}}$ & 0.6267 & 1.1798 & 30.2296 & 0.1697 & 6.2293 & 4.4646 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 0.6618 & 1.7310 & 4.5377 & 0.8401 & 2.2139 & 2.5319 \\ & p-value & $<$0.0001 & $<$0.0001 & 0.1508 & $<$0.0001 & 0.0017 & 0.0816 \\ \{190\} & $\mbox{RC}_{\nu_{j(i)}}$ & 4.7315 & 2.6210 & 8.8509 & 3.5226 & 5.5838 & 2.0483 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 1.5743 & 2.6103 & 0.2028 & 0.6890 & 0.9269 & 1.2904 \\ & p-value & $<$0.0001 & $<$0.0001 & 
0.2942 & $<$0.0001 & 0.0015 & 0.0850 \\ \{203\} & $\mbox{RC}_{\nu_{j(i)}}$ & 0.4418 & 0.1479 & 19.1232 & 3.0229 & 4.8032 & 10.1304 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 0.2563 & 0.4610 & 3.5651 & 1.1227 & 1.3781 & 1.9903 \\ & p-value & $<$0.0001 & $<$0.0001 & 0.1847 & $<$0.0001 & 0.0018 & 0.1320 \\ \{205\} & $\mbox{RC}_{\nu_{j(i)}}$ & 0.0180 & 2.1064 & 7.0915 & 0.3799 & 6.2955 & 11.7476 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 2.4970 & 0.4491 & 0.2286 & 0.2335 & 0.6286 & 0.5922 \\ & p-value & $<$0.0001 & $<$0.0001 & 0.2851 & $<$0.0001 & 0.0048 & 0.1337 \\ \{209\} & $\mbox{RC}_{\nu_{j(i)}}$ & 1.8041 & 2.0214 & 9.9710 & 0.8394 & 7.2795 & 13.3072 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 3.6545 & 0.6415 & 0.2373 & 0.4213 & 0.4518 & 0.2111 \\ & p-value & $<$0.0001 & $<$0.0001 & 0.3003 & $<$0.0001 & 0.0052 & 0.1392 \\ \{All\} & $\mbox{RC}_{\nu_{j(i)}}$ & 60.1541 & 29.1455 & 42.3398 & 65.9195 & 49.7668 & 34.4222 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ & 36.5916 & 40.6563 & 39.2314 & 29.8468 & 31.7057 & 32.9403 \\ & p-value & $<$0.0001 & $<$0.0001 & 0.2384 & $<$0.0001 & 0.0006 & 0.0839 \\ \hline \end{tabular} } \label{influencia_ambiente} \end{table} \subsection{Adjustment to the operation characteristics group} In this last fitting, only the WFP variable is significant in Step 1, the effect of time is measured only by $\alpha_0$ and the GTDL model (without frailty) is the one that best fits these data. Its estimation results are presented in Table \ref{ajuste_operacao}, from which we note that all parameters are significant at the 10\% level. 
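Because the GTDL hazard is logistic in time, the hazard ratio between two covariate profiles varies with time, which is what produces the crossing and decaying HR curves discussed above. A minimal sketch, assuming an illustrative GTDL parameterization $h(t\mid\mathbf{x}) = e^{at+b}/(1+e^{at+b})$; the quartile values used below are hypothetical, since the WFP quartiles are not reported here:

```python
import math

# Sketch of a time-varying hazard ratio under the GTDL model, assuming the
# hazard h(t) = exp(a*t + b) / (1 + exp(a*t + b)) with a = x'alpha, b = z'beta.
# The linear predictors b_q1, b_q3 below are hypothetical, chosen only to
# illustrate a ratio that starts near 2 and decays slowly, as in the text.

def gtdl_hazard(t, a, b):
    """Logistic-in-time GTDL hazard."""
    e = math.exp(a * t + b)
    return e / (1.0 + e)

def hazard_ratio(t, a, b1, b2):
    """HR(t) between two profiles sharing the same time effect a."""
    return gtdl_hazard(t, a, b1) / gtdl_hazard(t, a, b2)

# Hypothetical linear predictors for the first and third quartiles.
b_q1, b_q3 = -2.6, -3.3
hr0 = hazard_ratio(0.0, 0.14, b_q1, b_q3)    # near exp(b_q1 - b_q3), about 2
hr20 = hazard_ratio(20.0, 0.14, b_q1, b_q3)  # smaller, but still above 1
```

With a common time effect $a>0$, $\mbox{HR}(t) = e^{b_1-b_2}\,(1+e^{at+b_2})/(1+e^{at+b_1})$ decays monotonically towards one, so a profile that is riskier early on remains riskier, but by a shrinking margin, consistent with the PH violations detected in the exploratory analysis.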
\begin{table}[H] \centering \caption{Estimation results of the GTDL model fitted to the operation characteristics group.} \resizebox{\linewidth}{!}{ \setlength{\tabcolsep}{3pt} \begin{tabular}{l|ccc} \hline Parameter & MLE & SE & 90\% CI \\ \hline $\alpha_0$ & 0.1403 & 0.0708 & (0.0238; 0.2568) \\ $\beta_0$ & -2.1940 & 0.5532 & (-3.1040; -1.2840) \\ $\beta_1$ (WFP) & -0.4236 & 0.1454 & (-0.6627; -0.1845) \\ \hline \end{tabular} } \label{ajuste_operacao} \end{table} Figure \ref{residuo_operacao} shows the QQ-plot of the RQ residuals. In general, we observed a good agreement between the residuals and the standard normal distribution. \begin{figure}[h] \centering \includegraphics[width=0.5\linewidth]{fig16.pdf} \caption{QQ-plot with envelope of 95\% for the RQ residuals of the adjustment to the operation characteristics group.} \label{residuo_operacao} \end{figure} Figure \ref{conf_operacao} (a) exhibits the behavior of the reliability function when we vary the value of the WFP variable. Note that as the value of the WFP variable increases, the valve reliability also increases. An example of the HR is shown in Figure \ref{conf_operacao} (b), using the first and third quartiles of the WFP variable, from which we see that the ratio is greater than one, so the risk of failure is greater when the WFP variable takes on the value of the first quartile. We also observe that the ratio is initially close to 2, indicating that the risk of valve failure when operated at the value of the first quartile is approximately twice that when operated at the value of the third quartile. Such a risk ratio decreases with time, but remains above 1.5 at 20 years. \begin{figure}[!] \centering \includegraphics[width=1\linewidth]{fig17.pdf} \caption{(a) Reliability function with variation in the value of the WFP variable.
(b) Ratio of hazard functions between the first and third quartiles of the WFP variable.} \label{conf_operacao} \end{figure} The total numbers of influential observations detected by the GD and LD measures are 16 and 8, respectively, as can be seen in Figure \ref{cooks_operacao}. Note, however, that only observations 14 and 47 were identified by both metrics. From the removal of influential observations, we can see in Table \ref{influencia_operacao} that the WFP variable remains significant in all configurations, and that the effect of time is almost always significant. The point estimates underwent few changes when excluding only one influential observation; this fact is quantified by RCs less than or equal to 20.0448\%. But when we removed all the influential observations, we noticed major changes in the estimates of the parameters $\alpha_0$ and $\beta_1$, with RCs of 266\% and 165\%, respectively. In this case, it is worth noting that the estimate of the parameter $\alpha_0$ changed from 0.1403 to 0.5145, so the effect of time is greater when removing the influential observations. \begin{figure}[!h] \centering \includegraphics[width=1\linewidth]{fig18.pdf} \caption{(a) Generalized Cook's distance.
(b) Likelihood distance, considering the GTDL model fitted to the operation group data.} \label{cooks_operacao} \end{figure} \begin{table}[H] \centering \caption{The RC values (in \%) for the MLEs and SEs, in addition to the p-values, considering the deleted observations.} \resizebox{\linewidth}{!}{ \setlength{\tabcolsep}{3pt} \begin{tabular}{clccccccc} \hline Deleted case & & $\hat{\alpha}_0$ & $\hat{\beta}_0$ & $\hat{\beta}_1$ & Deleted case & $\hat{\alpha}_0$ & $\hat{\beta}_0$ & $\hat{\beta}_1$\\ \hline \{3\} & $\mbox{RC}_{\nu_{j(i)}}$ &4.5078 & 7.2306 & 8.5400& \{56\} & 1.4690 & 4.7637 & 6.0048 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ &6.8230 & 0.9056 & 3.8450& & 0.0069 & 0.9197 & 1.3361 \\ & p-value &0.0765 & 0.0003 & 0.0003& & 0.0509 & 0.0002 & 0.0023 \\ \{4\} & $\mbox{RC}_{\nu_{j(i)}}$ &11.0495 & 2.7706 & 2.8200& \{62\} & 7.0448 & 7.6786 & 8.1094 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ &0.6797 & 1.4186 & 1.8685& & 0.7200 & 3.6519 & 1.8788 \\ & p-value &0.0289 & 0.0001 & 0.0033& & 0.0302 &$<$0.0001 & 0.0086 \\ \{7\} & $\mbox{RC}_{\nu_{j(i)}}$ &4.7590 & 0.3162 & 5.6847& \{64\} & 8.4898 & 6.2349 & 6.1292 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ &1.0370 & 0.8085 & 1.8114& & 4.1284 & 1.9374 & 4.1063 \\ & p-value &0.0399 & 0.0001 & 0.0025& & 0.0816 & 0.0003 & 0.0030 \\ \{14\} & $\mbox{RC}_{\nu_{j(i)}}$ &8.3125 & 4.2790 & 7.1574& \{68\} & 6.4199 & 5.7994 & 4.9107 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ &1.8564 & 0.2452 & 1.6713& & 0.7443 & 2.5868 & 1.4885 \\ & p-value &0.0745 & 0.0002 & 0.0021& & 0.0363 &$<$0.0001& 0.0063 \\ \{19\} & $\mbox{RC}_{\nu_{j(i)}}$ &1.6059 & 5.3560 & 6.6352& \{77\} & 3.4080 & 3.0574 & 7.1069 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ &0.9277 & 3.6788 & 2.0471& & 1.4973 & 0.3660 & 1.7326 \\ & p-value &0.0461 & 0.0001 & 0.0077& & 0.0593 & 0.0001 & 0.0022 \\ \{28\} & $\mbox{RC}_{\nu_{j(i)}}$ &16.0427 & 4.6498 & 3.9293& \{96\} & 13.9211 & 5.2348 & 5.9877 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ &3.1449 & 0.3587 & 0.9763& & 4.9171 & 3.4998 & 
1.2367 \\ & p-value &0.1068 & 0.0002 & 0.0027& & 0.0314 & 0.0001 & 0.0068 \\ \{29\} & $\mbox{RC}_{\nu_{j(i)}}$ &10.6234 & 0.6785 & 8.9663& \{106\} & 5.8258 & 1.2303 & 7.8611 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ &0.8946 & 0.6947 & 2.1941& & 1.0602 & 0.6240 & 2.0002 \\ & p-value &0.0298 & 0.0001 & 0.0019& & 0.0380 & 0.0001 & 0.0021 \\ \{33\} & $\mbox{RC}_{\nu_{j(i)}}$ &7.9757 & 7.7748 & 7.8140& \{114\} & 15.4408 & 3.3549 & 2.7208 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ &0.6730 & 3.4475 & 1.7762& & 4.1372 & 0.9527 & 0.5784 \\ & p-value &0.0336 &$<$0.0001& 0.0083& & 0.0281 &$<$0.0001& 0.0044 \\ \{39\} & $\mbox{RC}_{\nu_{j(i)}}$ &9.6157 & 6.5767 & 4.8055& \{125\} & 0.2446 & 1.6008 & 6.0331 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ &0.6014 & 2.5690 & 1.5782& & 1.2595 & 0.5882 & 1.7066 \\ & p-value &0.0309 &$<$0.0001& 0.0063& & 0.0498 & 0.0001 & 0.0024 \\ \{47\} & $\mbox{RC}_{\nu_{j(i)}}$ &12.0147 & 3.8095 & 1.3540& \{134\} & 15.7067 & 5.9306 & 6.8143 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ &0.5977 & 1.6467 & 1.8407& & 2.8567 & 0.3051 & 1.6271 \\ & p-value &0.0274 & 0.0001 & 0.0037& & 0.1014 & 0.0002 & 0.0022 \\ \{48\} & $\mbox{RC}_{\nu_{j(i)}}$ &20.0448 & 3.1934 & 8.9491& \{All\} & 266.6522 & 9.7659 & 165.3021 \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ &2.9911 & 0.9585 & 0.6384& & 123.4111 & 50.1084& 108.3383 \\ & p-value &0.0209 & 0.0001 & 0.0016& & 0.0011 & 0.0171 & 0.0002 \\ \{55\} & $\mbox{RC}_{\nu_{j(i)}}$ &1.2474 & 4.1259 & 5.0738& & & & \\ & $\mbox{RC}_{\mbox{SE}(\nu_{j(i)})}$ &0.0210 & 0.6495 & 0.9201& & & & \\ & p-value &0.0503 & 0.0002 & 0.0024& & & & \\ \hline \end{tabular} } \label{influencia_operacao} \end{table} \section{Conclusions} \label{conclusions} In this paper, we analyzed a real reliability data set on DHSVs used by the Brazilian oil company Petrobras. This kind of valve has high reliability, attested by the various technical standards that regulate the oil and gas production sector; consequently, few failures are expected during its use.
In the graphical analysis, we verified the presence of non-PH; however, for some covariates, the Schoenfeld test did not confirm this result. Our proposed modeling was then developed using the GTDL and GTDL gamma frailty models, with regression also in the parameter that measures the effect of time. The motivation for using frailty modeling is that the variance of the frailty distribution can indicate the absence of important covariates, that is, covariates that belong to other groups, or that for some reason were not recorded in the database, may be important to explain the time to failure. The modeling was divided into four groups due to the large amount of missing data. We identified that the variables H2S, BSW, PC, Mfr., Family, OU, CWT, WC and WFP are relevant to describe the time until failure. We also noted that the valve-characteristic variables alone are not enough to describe the time until failure, because the frailty model needed to be adopted for that group. The residual analysis indicated a good fit of the model to the data in all groups of covariates. The global influence analysis highlighted possible influential points in the adjustments made. We presented summaries of the adjustments without the influential observations, because further information and investigation of these observations are for the exclusive use of the company. In this way, we demonstrated the importance of diagnostic analysis in the model, since we detected inferential changes after eliminating potentially influential cases. The variables were divided into four groups according to their characteristics, and the proposed modeling adopted this division, since the amount of missing data is large. In this work, our interest was to identify possible factors that may influence the time until failure, and not to single out one model adjusted to a certain group of covariates as the one that best describes the failure times.
As future work, some modeling alternatives are: studying how to circumvent small-sample problems in the models used, through Bayesian methods or bias-correction approaches; adopting the premise that valves installed in the same production region share the same frailty, which characterizes shared frailty models; and using the well-known statistical method of Principal Component Analysis (PCA). \section{} \begin{figure}[H] \centering \includegraphics[width=1\linewidth]{fig19.pdf} \vspace*{0.5cm} \caption{Logarithm of the estimated cumulative hazard function for the variables: (a) CWP, (b) OU, (c) WC, (d) WFP, (e) Mfr., (f) Dim., (g) PC, (h) H2S and (i) BSW.} \label{verificacao_proporcionalidade_2} \end{figure}
\section{\label{sec:level1}Introduction} Large-scale collective pattern formation by self-motile elements is a widely studied phenomenon in physics and biology~\cite{dombrowski2004self, ballerini2008interaction, katz2011inferring, marchetti2013hydrodynamics, ramaswamy2010mechanics}. A prominent class of such collective behaviours, including flocking and swarming, is caused by direct local interactions among individual entities within a group, which influence their relative motion \cite{vicsek1995novel, gregoire2004onset, chate2008modeling}. The surrounding medium can also play a significant part in mediating interactions between active particles. In wet systems, particularly at low Reynolds numbers, hydrodynamic interactions are nearly instantaneous. Inhomogeneous and complex environments qualitatively change the individual, as well as collective, dynamics of such systems~\cite{bechinger2016active,elgeti2015physics,qi2020enhanced,mousavi2020wall,D0SM00797H,patteson2018propagation,theeyancheri2020translational,das2020aggregate}. In dry systems, particles may leave behind chemical or mechanical cues for other particles to follow, thereby increasing the level of medium-induced complexity. This phenomenon of stigmergy has been studied widely in the context of ants and termites leaving pheromone trails \cite{attygalle1985ant,jackson2004trail}. Recently, a mechanical form of stigmergy has been observed in the motile bacteria \textit{Pseudomonas aeruginosa} and \textit{Myxococcus xanthus} when cultured on soft hydrogel substrates~\cite{gloag2013self,gloag2015bacterial,gloag2016stigmergy}. Under favourable conditions, these bacteria form extensive networks of permanent furrows as they move collectively as a monolayer across the soft surface in the initial stages of biofilm formation. 
The rate of expansion of the colony is further observed to be intimately related to the morphology of the network and the cellular traffic within the furrows~\cite{gloag2013self,gloag2015bacterial,gloag2016stigmergy}. Several recent studies have explored collective phenomena in populations of rod-shaped particles using computer simulations. The expansion rate and morphology of colonies of growing and sub-dividing non-motile rods have been shown to strongly depend on the mechanical properties of the passive medium or the extracellular material secreted by the rods ~\cite{farrell2013mechanically, ghosh2015mechanically,tchoufag2019mechanisms, you2018geometry, acemel2018computer,farrell2017mechanical, you2019mono}. Studies of self-propelled rods in the absence of coupled interactions with a substrate \cite{peruani2006nonequilibrium,mccandlish2012spontaneous,abkenar2013collective,baskaran2008hydrodynamics,weitz2015self,prathyusha2018dynamically, velasco2018collective,kuan2015hysteresis,bar2020self,be2020phase,velasco2018collective,vliegenthart2020filamentous,bott2018isotropic} reveal self-organized formation of dynamic small or large clusters, polar lanes, and nematic defects. \begin{figure*}[t] \centerline{\includegraphics[width=0.8\linewidth]{fig_1_v4.pdf}} \vspace*{-1.3em} \caption{ \label{fig:colonizn} (A) Furrow network (in white) formation in a substrate of plasticity, $P = 0.0025$, at $\mathrm{Pe} = 100$. Active rods are red and substrate particles are blue. Adjacent figures show close-up images of two distinct types of motility-induced clusters. Rafts are arrow-head shaped clusters, while trains consist of rods arranged end-to-end. (B) Comparison of the normalised colonisation speed, $V_c^*$ (filled symbols), and the normalised speed of isolated rods, $V_1^*$ (open symbols). The speeds are normalised by the observed speed of a single rod in a substrate of zero stiffness. 
When thermal fluctuations are weak, the average speed of an isolated rod in a fluid-like medium consisting of fully plasticised substrate is $V_1^0 \approx F^\mathrm{a}/ (\gamma_\mathrm{s} + \gamma_\mathrm{r}) = \mathrm{Pe}/ [N_\mathrm{b}^2\, (\gamma_\mathrm{s}/\gamma_\mathrm{r} + 1)]$.} \label{fig1} \vspace{-1.9em} \end{figure*} In an attempt to understand the behaviour observed in experiments on colonies of motile rod-shaped cells of \textit{P. aeruginosa} growing on agar \citep{gloag2013self}, Zachreson \textit{et al.} \citep{zachreson2017emergent,zachreson2017network} used simulations of self-propelled spherocylinders interacting with a continuum model of a deformable substrate. While their results demonstrate that substrate stiffness and its viscoelastic relaxation time can strongly influence the morphodynamics of active furrowing, their parameters were chosen specifically for the experimental system at hand. Moreover, the simulations also included cell division and population growth. Here, we take a closer look by using 2D simulations to study the furrow structure that emerges as a single row of active self-propelled rods advances through a plastic substrate. We show that it is \textit{motility-induced clustering} that generically causes furrow networks to emerge, whose fractal dimension depends on substrate plasticity. Clustering further enhances the rate at which the colony edge advances, and this speed gain depends on cluster morphology. Our results also suggest that colonies must regulate their overall cell growth rate in order to sustain extended furrow networks. In our simulations, each rod of length $L \,= \, N_\mathrm{b} \, \sigma_\mathrm{b}$ is a rigid linear array of $N_\mathrm{b}$ beads of nominal diameter, $\sigma_b$. The substrate is discretised as randomly-packed isotropic particles (see the Supplemental Information for a detailed description \citep{SI}). 
The rods interact with other rods and substrate particles through repulsive excluded-volume forces modeled with the separation-shifted Lennard-Jones (SSLJ) potential \cite{abkenar2013collective}. The instantaneous configuration of rod $i$ is characterised by its position $\vec{r}_i(t)$ and the unit polarity vector $\hat{\vec{p}}_i(t)$. The time evolution of these variables is governed by Langevin equations that include random Brownian forces with an energy scale of $k_\textsc{b} \,T$ and a self-propulsion force of magnitude $F^\mathrm{a}$ that acts along $\hat{\vec{p}}_i$. Free rods experience frictional resistance to their motion characterised by the bare rod friction coefficient, $\gamma_\mathrm{r} \,=\, \gamma_\mathrm{b} N_\mathrm{b}$, where $\gamma_\mathrm{b}$ is the bare friction coefficient of a bead on a rod. A minimal model is used to represent the essential mechanical features of a semi-solid, plastic substrate. The $p$-th substrate particle is bound to its equilibrium coordinate $\vec{r}_{p,\,0}$ by a harmonic potential, $U_p \, =\, k \,\left(\vec{r}_{p} -\vec{r}_{p,\,0}\right)^2/ 2$, if $ \mid\vec{r}_p -\vec{r}_{p,\,0}\mid \,< l_\mathrm{max}$, where $k$ is the elastic stiffness of the substrate, and $l_\mathrm{max}$ is the maximum displacement of substrate particles beyond which they are plasticised (\textit{i.e.} $U_p \,=\,k\,l_\mathrm{max}^2/2$, a constant) and a permanent furrow is formed (see Fig.~S1 \citep{SI}). \begin{comment} \begin{gather} U_p(\vec{r}_{p}) = k \,\left(\vec{r}_{p} -\vec{r}_{p,\,0}\right)^2/ 2\,, \text{ if } \mid \vec{r}_p -\vec{r}_{p,\,0} \mid \,<l_\mathrm{max} \,, \label{elastic} \end{gather} \end{comment} Once broken off, the substrate particles remain as free particles in the system and continue to offer a frictional resistance to the rods. 
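The binding force derived from $U_p$, together with the irreversible plasticisation beyond $l_\mathrm{max}$, can be sketched as follows. This is a simplified illustration of the rule just described, not the actual simulation code; the function and argument names are ours.

```python
import numpy as np

# Sketch of the plastic substrate bond: a harmonic restoring force
# -k (r - r0) while the particle is within l_max of its anchor r0,
# and zero force once it has ever been displaced beyond l_max
# (the bond is broken and a permanent furrow is left behind).
def substrate_force(r, r0, k, l_max, broken):
    d = np.asarray(r, dtype=float) - np.asarray(r0, dtype=float)
    if broken or np.linalg.norm(d) >= l_max:
        return np.zeros(2), True   # plasticised: the particle is now free
    return -k * d, False
```

Carrying the returned `broken` flag forward between time steps makes the plasticisation irreversible, which is what leaves permanent furrows in the substrate.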
To model the high viscous resistance of the semi-solid substrate to displacements, the friction coefficient of a substrate particle is set to $\gamma_\mathrm{s} \,= \,10 \,\gamma_\mathrm{b}$ in our simulations. For the qualitative exploration here, we have neglected direct substrate-substrate interactions. The rod-substrate interaction forces and the binding force derived from the potential $U_p$ determine the time-evolution of the substrate. The simulations are performed using LAMMPS. The masses of the particles are chosen to be small enough for the simulations to be in the overdamped regime (see Supplemental Material \citep{SI}). Each simulation starts with two rows of vertically aligned rods, introduced at the top and bottom boundaries with propulsion forces directed towards the undisturbed substrate that fills the rest of the box. The height of the simulation box, $H$, is twice its width, $W$. Periodic boundary conditions are imposed on all the sides. The simulation is stopped when any rod from either side crosses the middle of the box to the other side. Each side of the box is then treated as a separate simulation instance. We set $\sigma_b$ as the length scale, and $\tau_\mathrm{b} \,=\, \sigma_\mathrm{b}^2 \gamma_\mathrm{b} / k_\textsc{b} \,T$ and $k_\textsc{b} \,T$ as the time and energy scales, respectively. We use rods with $N_\mathrm{b} \, =\, 5$ in simulation boxes of dimensions $H \,=\, 400 \,\sigma_\mathrm{b}$ and $W \,=\, 200 \,\sigma_\mathrm{b}$. The key dimensionless parameters are the P\'{e}clet number, $\mathrm{Pe} \, = \, F^\mathrm{a}\, L/ k_\textsc{b} \,T$ and the plasticity ratio, $ P \,=\, k \, l_\mathrm{max}/ F^\mathrm{a}$, with $l_\mathrm{max} \,=\, 0.6 \,\sigma_\mathrm{b}$. We focus here on collective behaviour of the rods and its effect on furrow formation at high P\'{e}clet numbers at which noise does not destroy structure formation. 
Our choice of parameters corresponds to values of $P \ll 1$, \textit{i.e.,} we consider plastic substrates for which the stiffness is chosen such that even single rods can displace the substrate to create permanent furrows. \begin{figure*}[!t] \centerline{ \includegraphics[width=0.95\textwidth]{fig_2_v4.pdf}} \vspace*{-1.4em} \caption{\label{fig:stats} (A) Fraction of train clusters, $f^\mathrm{T}$, out of all the clusters (the fraction of rafts, $f^\mathrm{R} \,= \,1 - f^\mathrm{T}$). (B) Distribution of rod orientation angle $p(\phi)$ with respect to the vertical axis ($y$ axis) for rafts (blue curve) and trains (orange curve) for $\mathrm{Pe} \,=\, 100$ and $P \, = \, 0.01$. (C) Individual averages of the $y$-component of cluster velocity of rafts and trains, and the overall population average, for $\mathrm{Pe}\,=\,100$. } \vspace{-2.0em} \end{figure*} Initially, all the rods individually create relatively straight and narrow furrows (see Fig.~S3; Movie $\S$1, \citep{SI}). Small orientational fluctuations then cause the rods to collide with neighbouring rods and form dynamic clusters. As the clusters propel through the substrate, they permanently displace substrate particles, forming a complex network of permanent furrows (Fig.~\ref{fig1}-A). To analyse the furrow formation more quantitatively, we define the colonised area $C$, measured by the $y$-coordinate of the outermost leader rod at any instant of time (see Fig.~S2 \citep{SI}). This area is observed to grow nearly linearly in time in all our simulations until they are terminated (see Fig.~S4 \citep{SI}). The colonisation speed $V_c$ is estimated by a linear least-squares fit through the $C$-vs-$t$ data, and is compared in Fig.~\ref{fig:colonizn}-B with $V_1$, the speed of an isolated rod through the same substrate. The speeds are normalised by the observed speed of a single rod in a substrate of zero stiffness. 
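The least-squares estimate of $V_c$ can be sketched as follows, using synthetic data in place of the measured leading-edge trajectory (the true slope and noise level below are illustrative, not simulation values):

```python
import numpy as np

# Sketch of the V_c estimate: a linear least-squares fit through C(t).
# The data here are synthetic (true slope 0.35 plus Gaussian noise),
# standing in for the measured y-coordinate of the outermost leader rod.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1000.0, 2001)
C = 0.35 * t + 5.0 + rng.normal(0.0, 1.0, t.size)
V_c, intercept = np.polyfit(t, C, 1)   # slope of the fit is V_c
```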
We observe that, at any given $\mathrm{Pe}$ and $P$, $V_c$ is systematically \textit{larger} than $V_1$, the speed at which an isolated rod moves through the substrate. Further, while $V_1$ decreases nearly linearly with $P$ at a fixed $\mathrm{Pe}$, $V_c$ varies non-monotonically with $P$, displaying a peak colonisation rate at a non-zero value of $P$. These interesting differences in colonisation speed from isolated rod speed clearly arise from collective effects and appear to be related to the behaviour of clusters of active rods that drive the formation of furrows. We used a clustering algorithm to identify separate clusters of beads. Individual clusters were then tracked in a frontier region of depth $8 \,L$ from the leading edge of the colony until they broke up or were joined by new members. Frequency distributions of quantities such as the number of rods in a cluster (\textit{i.e.} the cluster size), average cluster speed and vertical velocity component, and orientation angle relative to the vertical axis were determined. Based on the relative configurations of the rods in a cluster, each cluster is categorised as one of two mutually exclusive types: end-to-end \textit{trains} and non-train \textit{rafts} that are usually arrow-head shaped (Fig.~\ref{fig1}-A; see Supplementary Material for mathematical definitions \citep{SI}). We find that trains become more prevalent as the substrate becomes stiffer (Fig.~\ref{fig:stats}-A; Movie $\S$2, \citep{SI}). Both rafts and trains move more quickly through the substrate than isolated rods. Since the propulsive force on each rod is distributed uniformly over each bead, the total propulsive force on a tightly-packed cluster is proportional to its area. The resistance it experiences is however proportional to its frontal perimeter exposed to the substrate. Consequently, the resistance experienced per rod is smaller in a cluster. 
We therefore find that the average speed of clusters of either kind grows linearly with cluster size (Fig.~S5 in SI \citep{SI}). This effect is greater in trains, where the whole propulsive force of the train is resisted by only about one or two substrate particles. In addition, the orientational distribution of clusters shows that trains in the frontier region close to the leading edge of the colony tend to be strongly aligned in the outward direction (Figs.~\ref{fig:stats}-B and S7 in SI \citep{SI}). The rods in rafts are less well aligned, which gives rafts a broader range of orientations relative to the outward direction. The propulsive force on trains, on the other hand, is almost entirely directed along the train axis, which keeps them on course. The mean $y$-component of the velocity ($v_y$) of clusters within the frontier region is computed as $\overline{v}_y \,= \,\overline{v}^\mathrm{R}_{y}\,( 1 - f^\mathrm{T}) \,+\,\overline{v}^\mathrm{T}_{y}\, f^\mathrm{T}$, where $f^\mathrm{T}$ is the fraction of trains in the clusters. \begin{comment} \begin{gather} \overline{v}_y \,= \,\overline{v}^\mathrm{R}_{y}\,( 1 - f^\mathrm{T}) \,+\,\overline{v}^\mathrm{T}_{y}\, f^\mathrm{T} \,, \label{eq:clSpeed} \end{gather} \end{comment} In Fig.~\ref{fig:stats}-C, while the individual means, $\overline{v}^\mathrm{R}_{y}$ and $\overline{v}^\mathrm{T}_{y}$, decrease with $P$, the overall average $\overline{v}_y$ shows a maximum around the same $P$ at which the peak colonisation speed occurs in Fig.~\ref{fig:colonizn}-B. Therefore, the increase of the colonisation speed with stiffness at small values of $P$ appears to be because of the growth in the fraction of faster-moving trains that are also more aligned along the outward direction. When the fraction of trains begins to saturate at high stiffnesses, the decrease of their speeds with stiffness takes over, and the overall colonisation speed decreases with stiffness. 
The furrow network is created by clusters at the head of furrows repeatedly intersecting with previously created, empty, furrows. If the previous furrow a cluster encounters is narrow, the cluster can plough through it and continue unhindered. On the other hand, when the previous furrow has a width comparable to $L$ or greater, the cluster quickly breaks up, as the small orientational noise present causes rods at the edge of the cluster to escape easily before the cluster passes through the furrow (see Movie $\S$3, \citep{SI}). This is consistent with observations elsewhere that motility-induced clustering can lead to large clusters and lanes in systems of free active rods at sufficiently high densities \citep{abkenar2013collective}, but in an empty furrow, a cluster can quickly ``evaporate''. We observe visually that arrow-head shaped rafts are stable for longer times. The rods at the head of such pointy rafts have convergent orientations. This leads to a rectification of their propulsion forces along the longitudinal axis of the cluster. Any remaining transverse component of the net propulsion force causes pointy rafts to swerve and take curved trajectories. Large rafts can thus create wide and long furrows that serve as arterial conduits in the network (Fig.~S8-D; Movie $\S7$ \citep{SI}). Broad-headed rafts with rods of divergent orientations quickly break up into multiple smaller rafts and corresponding furrows. Collisions of free rods with existing clusters at the head of furrows play an important role in the process of network formation (Fig.~S8 A--C \citep{SI}). Free rods in empty furrows experience lower (bare) friction and move through the network very quickly. These rods sometimes encounter a furrow wall head-on and can push through to create thin furrows on their own. 
However, more typically, on colliding with furrow walls, they reorient along the furrow axis and tend to catch up with one of the other rafts at the head of a newly forming furrow (or exit the periodic boundary at the back of their colony to enter the colony on the other side and catch up with a cluster at a furrow head on that side). Free rods approaching a raft or a train from behind cannot overtake the cluster. They either collide with the cluster from behind, or may, at best, squeeze through the side of the furrow to align with other rods at the head (Fig.~S8-A and B; Movies $\S4$ and $\S5$ \citep{SI}). These frequent collisions change the nematic alignment of the rods in the cluster. Collisions thus lead to the break-up of clusters and the formation of new ones (see Fig.~S8-C; Movie $\S$6 \citep{SI}). \begin{figure} \centerline{\includegraphics[width=\columnwidth]{fract_furrow_new.pdf}} \vspace*{-1.4em} \caption{\label{fig:clusters} (A) Box-counting fractal dimension as a function of plasticity number $P$, measured at termination of the simulation. (B) The growth of total furrowed area with time, for different $\mathrm{Pe}$ and $P$. The furrowed area growth shows a non-trivial power-law behaviour. The symbols are as follows: $P = 0.00125$( \textcolor{blue1}{$\medsquare$}) and $P = 0.05$(\textcolor{blue1}{$\filledmedsquare$}) at $\mathrm{Pe} =200$; $P = 0.0025$(\textcolor{blue2}{$\medtriangledown$}) and $P = 0.075$ (\textcolor{blue2}{$\filledmedtriangledown$}) at $\mathrm{Pe}=100$; $P=0.00625$ (\textcolor{blue3}{$\medcircle$}) and $P=0.0625$(\textcolor{blue3}{ \Large $\bullet$}) at $\mathrm{Pe}=80$.} \vspace{-2.0em} \end{figure} The trajectories and collisions of rafts and trains lead to a distinctive network morphology. The highly-ramified structure of the networks is quantified here by determining the fractal dimension, $D$, of the furrow-substrate interface using a box-counting algorithm \cite{gagnepain1986fractal}. 
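A minimal sketch of the kind of box-counting estimate used for $D$ reads as follows (this is our illustrative implementation operating on a binary interface image, not the exact code used for the figure):

```python
import numpy as np

# Box-counting estimate of the fractal dimension of a binary interface image:
# count the boxes of edge eps that contain any interface pixel, then take
# minus the slope of log N(eps) against log eps.
def box_count(img, eps):
    n0, n1 = img.shape
    trimmed = img[: n0 - n0 % eps, : n1 - n1 % eps]
    blocks = trimmed.reshape(trimmed.shape[0] // eps, eps,
                             trimmed.shape[1] // eps, eps)
    return int(blocks.any(axis=(1, 3)).sum())

def fractal_dimension(img, sizes=(2, 4, 8, 16, 32)):
    counts = [box_count(img, e) for e in sizes]
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

As sanity checks, a straight one-pixel line yields $D \approx 1$ and a completely filled image yields $D \approx 2$, bracketing the non-integer values measured for the furrow-substrate interface.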
As expected, $D$ has a non-integer value between 1 and 2, and is observed to systematically decrease with $P$ (Fig.~\ref{fig:clusters}-A). This is associated with a change from networks created mostly by interactions of curved raft trajectories (\emph{e.g.} Fig.~\ref{fig:colonizn}-A) to those dominated by the thin and straight furrows created by trains (\emph{e.g.} Fig.~\ref{fig:colonizn}-A). This observation suggests a simple model for the growth of the area of the furrow network, $F$, with time. Furrow heads can form randomly at any point on the furrow interface within the network due to the constant scattering of free rods through the network by cluster breakup. The furrowing rate is then expected to be proportional to the length, $Z$, of the furrow interface. The fractal nature of the interface implies that $Z \sim F^{D/2}$ \cite{florio2019use}. The rate of furrow growth, $d F/ dt\, \sim\, Z\, R/F$, where $R$ is the total area occupied by rods in the furrows (a constant in our simulations), so that $d F/dt \sim R\, F^{D/2 - 1}$. Integrating, we obtain $ F \sim t^{ 2/ (4-D)}$ at large times. Since the fractal dimension satisfies $1 < D < 2$, we observe in Fig.~\ref{fig:clusters}-B that the power-law exponent for the growth of $F$ with $t$ is in the range $2/3 < 2/ (4-D) < 1$. If this can be sustained while the colonised area $C$ grows linearly with time, the colony morphology will become increasingly sparse. Based on the results above, one can expect similar furrow networks to generically form when colonies of motile cells spread through soft, plastic environments, and in three dimensions as well. The observations here are in line with many experiments with motile bacteria. Cell rafts similar to those discussed above have been observed to lead the formation of furrow networks in \textit{P. aeruginosa} and \textit{M. xanthus} \citep{gloag2013self, gloag2016stigmergy} cultured under confinement on semi-solid agar. 
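The scaling argument for the furrowed area can be checked by integrating the growth law directly. The sketch below folds the constant prefactor into a single coefficient `c` (an assumption of ours for illustration) and recovers the predicted exponent $2/(4-D)$ from the late-time behaviour:

```python
import numpy as np

# Numerical check of the scaling argument: with Z ~ F**(D/2) and a furrowing
# rate proportional to Z/F, the furrowed area obeys dF/dt = c * F**(D/2 - 1),
# which should give F ~ t**(2/(4 - D)) at large times.
def integrate_growth(D, c=1.0, F0=1.0, T=1.0e4, n=200000):
    dt = T / n
    F = F0
    t = np.linspace(dt, T, n)
    Fs = np.empty(n)
    for i in range(n):
        F += dt * c * F ** (D / 2.0 - 1.0)   # forward-Euler step
        Fs[i] = F
    return t, Fs

D = 1.5
t, Fs = integrate_growth(D)
late = t > t[-1] / 10.0                      # fit only the late-time regime
exponent, _ = np.polyfit(np.log(t[late]), np.log(Fs[late]), 1)
# expected exponent: 2 / (4 - D), i.e. 0.8 for D = 1.5
```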
These furrow networks extend across distances around two orders of magnitude greater than the length of a single cell. The networks are further lined with extracellular DNA (eDNA) and extracellular polysaccharides (EPS). Other experiments have further demonstrated that the three-dimensional lattices of eDNA are necessary for the mechanical integrity of biofilms of bacterial pathogens in complex biofluids such as sputum and otorrhea (ear-discharge fluid) \citep{Devaraj2019}. The current study provides a natural mechanism for such network templates to spontaneously emerge in these systems. Our observation of the power-law growth of $F$ further suggests that, in order to create extended furrow networks such as those observed in experiments, there must be mechanisms that limit the growth rate of cells to not exceed the rate at which the furrows are created. Unconstrained exponential growth or even linear growth of cells quickly obliterates the furrow network, leading to a dense colony (Fig.~S9; Movie $\S$8 \citep{SI}). Thereafter, the edge of the colony advances not by motile clusters, but by bulk motion of a densely-packed front. While this can lead to fingering patterns at the front \citep{Nagilla2018,giverso2015branching}, a furrow network cannot be sustained. Explosive cell lysis events have, in fact, been observed in furrow networks of \textit{P. aeruginosa} along with associated release of eDNA and other ``public goods'' required for biofilm formation \citep{Turnbull2016}. Beyond bacterial collectives, substrate interactions also occur during cancer metastasis, in which groups of migrating cancer cells exploit the plasticity of the extracellular matrix (ECM) and actively remodel the ECM fibers to enhance their migration speeds \cite{lee2017local,wisdom2018matrix, winkler2020concepts}. 
In conclusion, the cluster-and-conquer mechanism observed in our simulations may be crucial in biofilm formation by motile bacteria in plastic environments, and experimental evidence suggests that bacteria may have adaptive mechanisms to exploit the fractal-like spatial structure of the furrow network in building robust biofilms. The results here suggest future theoretical and experimental avenues for exploring the role played by mechanical stigmergy in these biological phenomena. \textit{Acknowledgments}: RC acknowledges the financial support from SERB, India via the grants SB/S2/RJN-051/2015 and ECR/2017/000744. We acknowledge financial support from IITB-Monash Research Academy and computational-time grants from the National Computational Infrastructure, Canberra, the MonArch facility at Monash University and the SpaceTime facility (IIT Bombay).
\section{Introduction} The present paper is concerned with the derivation and analysis of an asymptotic model for internal capillary-gravity waves. The model incorporates bi-directionality, and the physical regime under which the corresponding differential system is derived is compatible with that established by Benjamin, \cite{Benjamin1967,Benjamin1992,Benjamin1996}, and rigorously developed by Albert et al., \cite{ABR}, for the Benjamin equation. More specifically, the idealized model under study consists of two inviscid, homogeneous, irrotational, incompressible fluids of depths $d_{1}\ll d_{2}$ and different (constant) densities $\rho_{1}<\rho_{2}$; see Figure \ref{f0}. The upper layer satisfies a rigid lid assumption (that is, it is bounded above by an impenetrable, bounding surface) while the lower layer is bounded below by an impermeable, horizontal and flat bottom. The interest is in the description of the motion of the deviation of the interface between the fluids, which is affected by both gravity and capillary forces. \begin{figure}[htbp] \centering {\includegraphics[width=10cm,height=4cm]{internalw3.eps}} \caption{Idealized model of internal wave propagation in a two-layer interface.} \label{f0} \end{figure} As mentioned by Kalisch in \cite{Kalisch2007}, much of the literature on capillary-gravity interfacial waves with a rigid lid for the upper layer and an infinitely deep lower fluid concerns analytical or computational studies of solitary waves, starting from the ones with oscillatory decay admitted as solutions of the weakly nonlinear model derived by Benjamin, \cite{Benjamin1992}. The stability of these waves under small perturbations was computationally analyzed by Calvo and Akylas, \cite{CA}, from the full Euler equations. 
On the other hand, the computations of interfacial capillary-gravity waves by Laget and Dias, \cite{LagetD1997}, are also based on the numerical approximation of an integro-differential formulation of the full Euler equations (see also \cite{DiasMV1996} and the analytical study by Dias and Iooss, \cite{DiasI1996}). The numerical results are compared with the experimental results by Koop and Butler, \cite{KoopB1981}. Concerning the formulation of asymptotic models, an extension of the Benjamin equation which allows weak transverse variations is derived by Kim and Akylas, \cite{KimA2006}. This is used to study gravity-capillary lumps from the previous numerical results obtained in the first part of the work, \cite{KimA2005}. Finally, Kalisch, \cite{Kalisch2007}, proposes some one-dimensional systems for the propagation of interfacial waves subject to capillary forces. His derivation is based on formal asymptotic expansions of the velocity potentials associated with the layers from the one-dimensional Euler equations, combined with the hypotheses of the physical regime for the Benjamin equation described in \cite{Benjamin1992,ABR}. Indeed, this list of references is far from being exhaustive and can be additionally extended if the assumptions for the layers (concerning either the boundary conditions or the depths) are modified, \cite{HelfrichM}. In a different setting from that used in \cite{KimA2006} and \cite{Kalisch2007}, for the derivation of the model proposed in the present paper the approach developed by Bona et al. in \cite{BLS2008} is considered. This is based on several steps: first, a reformulation of the Euler system for internal waves is made by using two nonlocal operators linking the velocity potentials for the layers at the interface. 
Then suitable asymptotic expansions of these operators, in accordance with the physical regime under study for the layers, allow the derivation of the corresponding asymptotic model from the Euler system in a consistent, precisely defined way. The steps of this approach are adapted in the present paper as follows. We first consider the Euler system for internal waves including surface tension effects at the interface. In \cite{Lannes}, the derivation of the corresponding equations makes use of the Dirichlet-to-Neumann operators associated with the two fluid layers, leading to a system of two equations for the deviation of the interface and a suitable combination of the traces of the velocity potentials at the interface. As mentioned by Lannes, when $\rho_{1}\neq 0$ these are the canonical variables of the Hamiltonian formulation made by Benjamin and Bridges, \cite{BenjaminB}, while if $\rho_{1}=0$, the formulation reduces to that of the case of surface waves due to Zakharov, \cite{Zakharov}, and Craig and Sulem, \cite{CraigS}. In the present paper, the derivation will be made by using the nonlocal operators considered in \cite{BLS2008}. Then the formulation of the Benjamin asymptotic model from the resulting Euler system takes into account the physical regime that Benjamin assumed to obtain his uni-directional equation. The validity of this regime was specified by Albert et al. in \cite{ABR}. Their analysis is based on the approximation to the dispersion relation for the Euler equations. This approximation is good for suitable ranges of the parameters measuring dispersive, nonlinear and surface tension effects (specified below). They also give an idea about how the model may fit real situations. 
In the context of the approach of \cite{BLS2008} adopted here, the physical regime where the present paper introduces the proposed model is the so-called Benjamin-Ono (BO) regime, theoretically characterized, among other hypotheses, by a lower layer of infinite depth, but useful, as indicated by Kalisch, \cite{Kalisch2007}, to deal with situations where the depth is much larger than the wavelength of a typical wave. In addition, the presence of interfacial tension must be within the range of validity specified in \cite{ABR} for the unidirectional model. The asymptotic expansions of the nonlocal operators corresponding to the Benjamin-Ono regime lead to the BO system, introduced in \cite{BLS2008} and investigated in \cite{Xu2012,AnguloS2020,BonaDM2020}, among others. The inclusion of the influence of surface tension at the interface, in the regime specified in \cite{ABR}, leads to the derivation of the two-dimensional asymptotic model proposed in the present paper, which will be called the Benjamin system. This, consequently, becomes the BO system in the absence of surface tension. Finally, an argument similar to that considered in \cite{Kalisch2007} to recover the Benjamin equation, after an assumption of uni-directionality of the waves (see \cite{Whitham}), is applied here to the one-dimensional version of the Benjamin system. This leads to a one-parameter family of regularized versions of the Benjamin equation (containing the Benjamin equation as a particular case), in a way analogous to that leading to the regularized BO equation in \cite{KB,BLS2008}. The rest of the paper is devoted to the analysis of some mathematical properties of the two models introduced, the regularized Benjamin (rBenjamin) equation and the Benjamin system. The study is focused on well-posedness, conserved quantities and existence of solitary wave solutions. First, linear well-posedness is discussed. 
Furthermore, while the rBenjamin equation is shown to possess at least three functionals preserved by the evolution of smooth enough solutions which decay to zero at infinity, as well as a Hamiltonian structure, the Benjamin system only admits linear invariant quantities, but the evolution of candidates for momentum and energy is specified (cf. \cite{BonaDM2020}). Finally, the existence of solitary waves and some of their properties are analyzed in a computational study comparing the two models with each other and with the Benjamin equation. The paper is structured as follows. Section \ref{sec:sec2} is devoted to reformulating the Euler system for internal waves with capillary effects in terms of the nonlocal operators used in \cite{BLS2008}. Then the physical regime of validity is incorporated, combining the BO regime with the conditions on the parameter of interfacial tension required by the uni-directional Benjamin equation. The application of these hypotheses leads to the two-dimensional, bi-directional Benjamin system, whose linear well-posedness is studied. From its one-dimensional version, a one-parameter family of rBenjamin equations is derived. The family contains the usual Benjamin equation as a particular case. Linear well-posedness, conserved quantities and the Hamiltonian structure of the rBenjamin equation are also analyzed. Existence of solitary wave solutions for the three models and comparisons of the waves are discussed in a computational study in Section \ref{sec:sec3}. Some conclusions and perspectives are outlined in Section \ref{sec:sec4}. The following notation will be used throughout the paper. If $s\geq 0, d=1,2$, $H^{s}=H^{s}(\mathbb{R}^{d})$ will stand for the $L^{2}-$based Sobolev space of order $s$, with $H^{0}=L^{2}$. The corresponding norm in $H^{s}$ is denoted by $||\cdot||_{s}$. 
By ${\bf x}$ we will denote the horizontal variable, with ${\bf x}=x$ if $d=1$ and ${\bf x}=(x,y)$ if $d=2$, while $z$ will be used for the vertical variable. The symbol $\nabla_{{\bf x},z}$ (resp. $\Delta_{{\bf x},z}$) will denote the gradient operator (resp. the Laplace operator) with respect to the variables ${\bf x}$ and $z$, while $\nabla$ (resp. $\Delta$) will only denote the gradient (resp. the Laplacian) with respect to ${\bf x}$. On the other hand, the symbol $\widehat{g}$ will stand for the Fourier transform of $g$ on $\mathbb{R}^{d}$, with ${\bf k}$ as the variable in the Fourier space, with ${\bf k}=k$ if $d=1$ and ${\bf k}=(k_{x},k_{y})$ if $d=2$. Using the notation of \cite{BLS2008}, if $f$ is a function on $\mathbb{R}^{d}$, $f(D)$ will denote the operator defined by the Fourier symbol \begin{eqnarray*} \widehat{f(D)g}({\bf k})=f({\bf k})\widehat{g}({\bf k}). \end{eqnarray*} In this way, $|D|$ will stand for the operator defined as \begin{eqnarray*} \widehat{|D|g}({\bf k})=|{\bf k}|\widehat{g}({\bf k}), \end{eqnarray*} with $|{\bf k}|=|k|$ if $d=1$ and $|{\bf k}|=\sqrt{k_{x}^{2}+k_{y}^{2}}$ if $d=2$. In the first case, $|D|$ will sometimes be denoted by $\mathcal{H}=\partial_{x}H$, where $H$ is the Hilbert transform \begin{eqnarray} Hf(x)=\frac{1}{\pi}P.V.\int_{-\infty}^{\infty}\frac{f(\xi)}{x-\xi}d\xi,\label{hilbt} \end{eqnarray} with $P.V.$ standing for the Principal Value of the integral. \section{Derivation of the models} \label{sec:sec2} \subsection{Euler system for internal waves with surface tension} \label{sec21} In this section the full Euler equations for the two-layer interface problem, with nonnegligible surface tension effects at the interface, are reformulated by using the approach introduced in \cite{BLS2008}. Let $\zeta=\zeta({\bf x},t)$ denote the deviation of the interface with respect to the rest position at the level $z=-d_{1}$ (see Figure \ref{f0}). 
Assuming that the flows are irrotational, let $\Phi_{i}, i=1,2$ be the corresponding velocity potentials for the upper and lower layers, respectively. The incompressibility condition implies that the potentials satisfy \begin{eqnarray} \Delta_{{\bf x},z}\Phi_{i}=0,\; ({\bf x},z)\in\Omega_{t}^{i},\; i=1,2,\label{bens21} \end{eqnarray} where the scales are chosen so that the regions $\Omega_{t}^{i}, i=1,2$ occupied by the upper and lower layers, respectively, are described as \begin{eqnarray*} \Omega_{t}^{1}&=&\{({\bf x},z)/ -\infty<x,y<\infty, -d_{1}+\zeta({\bf x},t)<z<0\},\\ \Omega_{t}^{2}&=&\{({\bf x},z)/ -\infty<x,y<\infty, -d_{1}-d_{2}<z<-d_{1}+\zeta({\bf x},t)\}. \end{eqnarray*} Rigid conditions at the bottom and the lid result in the vanishing of the normal component of the velocity at the corresponding boundaries, that is, for $t>0$ \begin{eqnarray} \Phi_{1z}&=&0,\quad {\rm on}\quad \Gamma_{1}=\{({\bf x},z)/ -\infty<x,y<\infty, z=0\},\label{bens22a}\\ \Phi_{2z}&=&0,\quad {\rm on}\quad \Gamma_{2}=\{({\bf x},z)/ -\infty<x,y<\infty, z=-(d_{1}+d_{2})\}.\label{bens22b} \end{eqnarray} On the other hand, the conservation of momentum in each fluid is expressed by the Bernoulli equations \begin{eqnarray} \partial_{t}\Phi_{i}+\frac{1}{2}|\nabla_{{\bf x},z}\Phi_{i}|^{2}=-\frac{P_{i}}{\rho_{i}}-gz, \; ({\bf x},z)\in\Omega_{t}^{i},\; i=1,2,\; t>0,\label{bens24} \end{eqnarray} where $g$ denotes the acceleration of gravity and $P_{i}$ is the pressure inside the fluid $i, i=1,2$. 
Finally, the boundary conditions at the interface \begin{eqnarray*} \Gamma_{t}=\{({\bf x},z)/ -\infty<x,y<\infty, z=-d_{1}+\zeta({\bf x},t)\}, \end{eqnarray*} consist of the assumption that the fluids do not cross the interface (bounding surface condition) \begin{eqnarray} \partial_{t}\zeta-\sqrt{1+|\nabla\zeta|^{2}}\partial_{n}\Phi_{i}=0,\quad {\rm on}\quad \Gamma_{t}, \; t\geq 0, \; i=1,2,\label{bens23} \end{eqnarray} where $\partial_{n}\Phi_{i}:=\nabla\Phi_{i}\cdot n, i=1,2$, and $n$ is the unit upward normal vector to the interface. Equations (\ref{bens23}) imply that the normal component of the velocity is continuous at the interface, \cite{BLS2008,Lannes}. The boundary conditions are completed with the assumption of continuity of the stress tensor at the interface. In terms of the mean curvature of the deviation \begin{eqnarray*} \kappa(\zeta)=-\nabla\cdot\left(\frac{\nabla\zeta}{\sqrt{1+|\nabla\zeta|^{2}}}\right), \end{eqnarray*} the continuity condition reads \begin{eqnarray*} P_{2}-P_{1}=\sigma\kappa(\zeta),\quad {\rm on}\quad \Gamma_{t},\; t\geq 0, \end{eqnarray*} where $\sigma$ denotes the interfacial tension coefficient. We now proceed with the reformulation of (\ref{bens21})-(\ref{bens23}) by using the approach given in \cite{BLS2008}. The derivation is a simple adaptation of that given in the above reference and therefore we just recall the main points. We introduce the traces of the potentials at the interface \begin{eqnarray*} \psi_{i}({\bf x},t)=\Phi_{i}({\bf x},t,-d_{1}+\zeta({\bf x},t)),\; i=1,2. 
\end{eqnarray*} On the other hand, the nonlocal operators considered in the approach are the Dirichlet-to-Neumann (D-N) operator $G[\zeta]$ such that \begin{eqnarray} G[\zeta]\psi_{1}=\sqrt{1+|\nabla\zeta|^{2}}\partial_{n}\Phi_{1}\Big|_{z=-d_{1}+\zeta},\label{bens25} \end{eqnarray} and the operator $H[\zeta]$ connecting the traces in such a way that \begin{eqnarray} H[\zeta]\psi_{1}=\nabla\psi_{2}.\label{bens26} \end{eqnarray} The arguments in \cite{BLS2008}, adapted to (\ref{bens21})-(\ref{bens23}), yield the system for $\zeta$ and $\psi_{1}$ \begin{eqnarray} \partial_{t}\zeta-G[\zeta]\psi_{1}=0,\label{bens27}&&\\ \partial_{t}\left(H[\zeta]\psi_{1}-\gamma\nabla\psi_{1}\right)+g(1-\gamma)\nabla\zeta+\frac{1}{2}\nabla\left((H[\zeta]\psi_{1})^{2}-\gamma |\nabla\psi_{1}|^{2}\right)\nonumber &&\\ +\nabla\mathcal{N}(\zeta,\psi_{1})=-\frac{\sigma}{\rho_{2}}\nabla \kappa(\zeta),\label{bens28}&& \end{eqnarray} where \begin{eqnarray} \mathcal{N}(\zeta,\psi)=\frac{\gamma\left(G[\zeta]\psi+\nabla\zeta\cdot\nabla\psi\right)^{2}-\left(G[\zeta]\psi+\nabla\zeta\cdot H[\zeta]\psi\right)^{2}}{2\left(1+|\nabla\zeta|^{2}\right)}.\label{bens29} \end{eqnarray} The derivation of the Benjamin system will be made from a dimensionless version of (\ref{bens27})-(\ref{bens29}), where the hypotheses on the physical regime can be applied, and using the same variables as those of \cite{BLS2008}. 
Thus, if $a$ and $\lambda$ denote, respectively, a typical amplitude and wavelength, we define \begin{eqnarray*} \widetilde{{\bf x}}=\frac{{\bf x}}{\lambda},\; \widetilde{z}=\frac{z}{d_{1}},\; \widetilde{t}=\frac{t\sqrt{gd_{1}}}{\lambda},\; \widetilde{\zeta}=\frac{\zeta}{a},\; \widetilde{\psi_{1}}=\frac{1}{a\lambda}\sqrt{\frac{d_{1}}{g}}\psi_{1}, \end{eqnarray*} and the parameters \begin{eqnarray} \epsilon=\frac{a}{d_{1}},\; \mu=\frac{d_{1}^{2}}{\lambda^{2}},\label{bens210} \end{eqnarray} with $\gamma=\frac{\rho_{1}}{\rho_{2}}<1, \delta=\frac{d_{1}}{d_{2}}$ denoting the density and depth ratios, respectively. The parameters $\epsilon$ and $\mu$ in (\ref{bens210}) represent, respectively, nonlinear and dispersive effects with respect to the upper layer. The corresponding parameters with respect to the lower layer depend on the previous ones in the form \begin{eqnarray} \epsilon_{2}=\frac{a}{d_{2}}=\epsilon\delta,\; \mu_{2}=\frac{d_{2}^{2}}{\lambda^{2}}=\frac{\mu}{\delta^{2}}.\label{bens210b} \end{eqnarray} With the arguments used in \cite{BLS2008} the corresponding nondimensional version of (\ref{bens27})-(\ref{bens29}) is given by (tildes are dropped) \begin{eqnarray} &&\partial_{t}\zeta-\frac{1}{\mu}G^{\mu}[\epsilon\zeta]\psi_{1}=0,\label{bens211}\\ &&\partial_{t}\left(H^{\mu,\delta}[\epsilon\zeta]\psi_{1}-\gamma\nabla\psi_{1}\right)+(1-\gamma)\nabla\zeta +\frac{\epsilon}{2}\nabla\left((H^{\mu,\delta}[\epsilon\zeta]\psi_{1})^{2}-\gamma |\nabla\psi_{1}|^{2}\right)\nonumber\\ &&+\epsilon\nabla\mathcal{N}^{\mu,\delta}(\epsilon\zeta,\psi_{1})=-\frac{T}{\epsilon\sqrt{\mu}}\nabla \kappa(\epsilon\sqrt{\mu}\zeta),\label{bens212} \end{eqnarray} where \begin{eqnarray*} \mathcal{N}^{\mu,\delta}(\zeta,\psi)=\mu\frac{\gamma\left(\frac{1}{\mu}G^{\mu}[\zeta]\psi+\nabla\zeta\cdot\nabla\psi\right)^{2}-\left(\frac{1}{\mu}G^{\mu}[\zeta]\psi+\nabla\zeta\cdot H^{\mu,\delta}[\zeta]\psi\right)^{2}}{2\left(1+\mu|\nabla\zeta|^{2}\right)}, \end{eqnarray*} and \begin{eqnarray} 
T=\frac{\sigma}{g\rho_{2}\lambda^{2}}=\frac{\sigma\mu}{g\rho_{2}d_{1}^{2}},\label{bens213} \end{eqnarray} (note that $T$ is related to the Weber and Froude numbers, cf. \cite{CA,KimA2006}) and the nondimensional versions of the operators (\ref{bens25}), (\ref{bens26}) are defined in \cite{BLS2008} as \begin{eqnarray*} G^{\mu}[\epsilon\zeta]\psi_{1}&=&-\mu\epsilon\nabla\zeta\cdot\nabla\Phi_{1}\Big|_{z=-1+\epsilon\zeta}+\partial_{z}\Phi_{1}\Big|_{z=-1+\epsilon\zeta},\\ H^{\mu,\delta}[\epsilon\zeta]\psi_{1}&=&\nabla(\Phi_{2}\Big|_{z=-1+\epsilon\zeta}). \end{eqnarray*} See \cite{Lannes} for an alternative formulation, based on a system of two equations for $\zeta$ and a combination of the traces $\psi_{i}, i=1,2$. \subsection{The Benjamin system} \label{sec22} In this section we will derive from (\ref{bens211}), (\ref{bens212}) an asymptotic model which is compatible with the physical regime of validity of the Benjamin equation. Thus, we assume a shallow upper layer, deformations of small amplitude and an infinitely deep lower layer, within a Benjamin-Ono regime. In terms of the parameters (\ref{bens210}), (\ref{bens210b}) this means that \begin{eqnarray} \mu\sim\epsilon^{2}\ll 1,\; \mu_{2}=\infty.\label{bens214} \end{eqnarray} In addition, and according to the regime associated with the Benjamin model, \cite{ABR}, the parameter (\ref{bens213}) of surface tension at the interface satisfies \begin{eqnarray} T\sim\sqrt{\mu}.\label{bens215} \end{eqnarray} Conditions (\ref{bens214}) and (\ref{bens215}) determine the asymptotic regime for the Benjamin system. 
Defining the velocity variable \begin{eqnarray} {\bf v}=H^{\mu,\delta}[\epsilon\zeta]\psi_{1}-\gamma\nabla\psi_{1},\label{bens215b} \end{eqnarray} we have the asymptotic expansions, \cite{BLS2008}, \begin{eqnarray} \nabla\psi_{1}&=&-\frac{1}{\gamma}{\bf v}+\frac{\sqrt{\mu}}{\gamma^{2}}|D|{\bf v}+O(\mu),\label{bens216}\\ H^{\mu,\delta}[\epsilon\zeta]\psi_{1}&=&-\sqrt{\mu}|D|\nabla\psi_{1}+O(\mu)=\frac{\sqrt{\mu}}{\gamma}|D|{\bf v}+O(\mu),\label{bens217}\\ \frac{1}{\mu}G^{\mu}[\epsilon\zeta]\psi_{1}&=&\nabla\cdot\left((1-\epsilon\zeta)\left(-\frac{1}{\gamma}{\bf v}\right)+\frac{\sqrt{\mu}}{\gamma}|D|{\bf v}\right)+O(\mu),\label{bens218} \end{eqnarray} and \begin{eqnarray} \frac{T}{\epsilon\sqrt{\mu}}\nabla \kappa(\epsilon\sqrt{\mu}\zeta)=-T\nabla\left(\nabla\cdot\nabla\zeta\right)+O(\epsilon\mu).\label{bens219} \end{eqnarray} Using (\ref{bens211}), (\ref{bens218}) and (\ref{bens214}) in the form $\epsilon\sim\sqrt{\mu}$ we have \begin{eqnarray} \partial_{t}\zeta=-\frac{1}{\gamma}\nabla\cdot{\bf v}+O(\epsilon).\label{bens220} \end{eqnarray} Now, introducing the modelling parameter $\alpha\geq 0$ as in \cite{BLS2008} and using (\ref{bens220}) leads to \begin{eqnarray} \nabla\cdot{\bf v}=(1-\alpha)\nabla\cdot{\bf v}+\alpha\nabla\cdot{\bf v}=(1-\alpha)\nabla\cdot{\bf v}-\alpha\gamma \partial_{t}\zeta+O(\epsilon).\label{bens221} \end{eqnarray} Finally, applying (\ref{bens216})-(\ref{bens221}) to (\ref{bens211}), (\ref{bens212}) and dropping the $O(\epsilon^{2})$ terms we obtain the following system for the deviation of the interface $\zeta$ and the variable ${\bf v}$ in (\ref{bens215b}) \begin{eqnarray} \left(1+\frac{\alpha\sqrt{\mu}}{\gamma}|D|\right)\partial_{t}\zeta+\frac{1}{\gamma}\nabla\cdot\left((1-\epsilon\zeta){\bf v}\right)-(1-\alpha)\frac{\sqrt{\mu}}{\gamma^{2}}|D|\nabla\cdot{\bf v}=0,&&\label{bens222}\\ \partial_{t}{\bf v}+(1-\gamma)\nabla\zeta-\frac{\epsilon}{2\gamma}\nabla\left(|{\bf v}|^{2}\right)=T\nabla(\nabla\cdot\nabla\zeta),&&\label{bens223} \end{eqnarray} where $T$ is given 
by (\ref{bens213}). \begin{remark} When interfacial tension is negligible, then $T=0$ and (\ref{bens222}), (\ref{bens223}) is the BO system derived in \cite{BLS2008}. Furthermore, this fact and (\ref{bens219}) imply that the internal wave equations (\ref{bens211}), (\ref{bens212}) are consistent with the Benjamin system (\ref{bens222}), (\ref{bens223}), in the sense defined in \cite{BLS2008}, with a precision $O(\mu)$ (see Theorem 6 in that reference). \end{remark} We now analyze some elementary mathematical properties of (\ref{bens222}), (\ref{bens223}). Concerning linear well-posedness, we consider the associated linear problem \begin{eqnarray} \left(1+\frac{\alpha\sqrt{\mu}}{\gamma}|D|\right)\partial_{t}\zeta+\frac{1}{\gamma}\nabla\cdot{\bf v}-(1-\alpha)\frac{\sqrt{\mu}}{\gamma^{2}}|D|\nabla\cdot{\bf v}&=&0,\label{bens32a}\\ \partial_{t}{\bf v}+(1-\gamma)\nabla\zeta&=&T\nabla(\nabla\cdot\nabla\zeta).\label{bens32b} \end{eqnarray} Note that the operator \begin{eqnarray} J(\alpha)=1+\frac{\alpha\sqrt{\mu}}{\gamma}|D|,\label{joper} \end{eqnarray} has Fourier symbol \begin{eqnarray*} \widehat{J(\alpha)}({\bf k})=1+\frac{\alpha\sqrt{\mu}}{\gamma}|{\bf k}|. 
\end{eqnarray*} It is therefore invertible for $\alpha\geq 0$ and we can write (\ref{bens32a}), (\ref{bens32b}) in the form \begin{eqnarray} \partial_{t}\zeta+\frac{1}{\gamma}J(\alpha)^{-1}J(\alpha-1)\nabla\cdot{\bf v}&=&0,\label{bens33a}\\ \partial_{t}{\bf v}+\nabla\left((1-\gamma)-T\nabla\cdot\nabla\right)\zeta&=&0.\label{bens33b} \end{eqnarray} We now take the Fourier transform in (\ref{bens33a}), (\ref{bens33b}) with respect to the spatial variables to obtain, for ${\bf v}=(v_{1},v_{2})$, \begin{eqnarray} \frac{d}{dt}\begin{pmatrix}\widehat{\zeta}({\bf k},t)\\ \widehat{v_{1}}({\bf k},t)\\ \widehat{v_{2}}({\bf k},t)\end{pmatrix}+i|{\bf k}|\mathcal{A}({\bf k})\begin{pmatrix}\widehat{\zeta}({\bf k},t)\\ \widehat{v_{1}}({\bf k},t)\\ \widehat{v_{2}}({\bf k},t)\end{pmatrix}=0,\label{bens34} \end{eqnarray} where \begin{eqnarray*} \mathcal{A}({\bf k})=\begin{pmatrix}0&\frac{k_{x}}{\gamma|{\bf k}|}\left(\frac{\widehat{J(\alpha-1)}({\bf k})}{\widehat{J(\alpha)}({\bf k})}\right)&\frac{k_{y}}{\gamma|{\bf k}|}\left(\frac{\widehat{J(\alpha-1)}({\bf k})}{\widehat{J(\alpha)}({\bf k})}\right)\\ \frac{k_{x}}{|{\bf k}|}\left(1-\gamma+T|{\bf k}|^{2}\right)&0&0\\ \frac{k_{y}}{|{\bf k}|}\left(1-\gamma+T|{\bf k}|^{2}\right)&0&0\end{pmatrix}. \end{eqnarray*} The eigenvalues of $\mathcal{A}({\bf k})$ are $\{0,\pm\sigma({\bf k})\}$ with \begin{eqnarray*} \sigma({\bf k})=\left(\frac{\left(1-\gamma+T|{\bf k}|^{2}\right)}{\gamma}\left(\frac{\widehat{J(\alpha-1)}({\bf k})}{\widehat{J(\alpha)}({\bf k})}\right)\right)^{1/2}, \end{eqnarray*} which implies linear well-posedness of (\ref{bens222}), (\ref{bens223}) whenever $\alpha\geq 1$. In order to specify the Sobolev spaces in detail, we diagonalize the matrix $\mathcal{A}({\bf k})$ (see e.~g. 
\cite{DougalisMS2007}) \begin{eqnarray*} P({\bf k})^{-1}\mathcal{A}({\bf k})P({\bf k})=\begin{pmatrix} 0&0&0\\0&\sigma({\bf k})&0\\0&0&-\sigma({\bf k})\end{pmatrix} \end{eqnarray*} with \begin{eqnarray*} P({\bf k})=\begin{pmatrix} 0&\tau({\bf k})&-\tau({\bf k})\\-\frac{k_{y}}{\gamma|{\bf k}|}&\frac{k_{x}}{\gamma|{\bf k}|}&\frac{k_{x}}{\gamma|{\bf k}|}\\ \frac{k_{x}}{\gamma|{\bf k}|}&\frac{k_{y}}{\gamma|{\bf k}|}&\frac{k_{x}}{\gamma|{\bf k}|}\end{pmatrix},\; P({\bf k})^{-1}=\frac{1}{2\tau({\bf k})}\begin{pmatrix} 0&-2\tau({\bf k})\frac{k_{y}}{\gamma|{\bf k}|}&2\tau({\bf k})\frac{k_{x}}{\gamma|{\bf k}|}\\1&\tau({\bf k})\frac{k_{x}}{\gamma|{\bf k}|}&\tau({\bf k})\frac{k_{y}}{\gamma|{\bf k}|}\\ -1&\tau({\bf k})\frac{k_{x}}{\gamma|{\bf k}|}&\tau({\bf k})\frac{k_{y}}{\gamma|{\bf k}|}\end{pmatrix}, \end{eqnarray*} and \begin{eqnarray*} \tau({\bf k})=\frac{\sigma({\bf k})}{\left(1-\gamma+T|{\bf k}|^{2}\right)}. \end{eqnarray*} With the change of variables \begin{eqnarray} \begin{pmatrix} \widehat{\eta}\\\widehat{{\bf w}}\end{pmatrix}=P^{-1}\begin{pmatrix} \widehat{\zeta}\\\widehat{{\bf v}}\end{pmatrix},\; {\bf w}=(w_{1},w_{2}),\label{bens35} \end{eqnarray} the system (\ref{bens34}) is transformed into \begin{eqnarray*} \frac{d}{dt}\begin{pmatrix}\widehat{\eta}({\bf k},t)\\ \widehat{w_{1}}({\bf k},t)\\ \widehat{w_{2}}({\bf k},t)\end{pmatrix}+i|{\bf k}|\begin{pmatrix} 0&0&0\\0&\sigma({\bf k})&0\\0&0&-\sigma({\bf k})\end{pmatrix}\begin{pmatrix}\widehat{\eta}({\bf k},t)\\ \widehat{w_{1}}({\bf k},t)\\ \widehat{w_{2}}({\bf k},t)\end{pmatrix}=0, \end{eqnarray*} with solution \begin{eqnarray*} \widehat{\eta}({\bf k},t)=\widehat{\eta}({\bf k},0),\; \widehat{w_{1}}({\bf k},t)=e^{-i|{\bf k}|\sigma({\bf k})t}\widehat{w_{1}}({\bf k},0),\; \widehat{w_{2}}({\bf k},t)=e^{i|{\bf k}|\sigma({\bf k})t}\widehat{w_{2}}({\bf k},0). 
\end{eqnarray*} Now, since \begin{eqnarray} \widehat{\eta}({\bf k})&=&-\frac{k_{y}}{|{\bf k}|}\widehat{v_{1}}({\bf k})+\frac{k_{x}}{|{\bf k}|}\widehat{v_{2}}({\bf k}),\nonumber\\ \widehat{w_{1}}({\bf k})&=&\frac{1}{2\tau({\bf k})}\widehat{\zeta}({\bf k})+\frac{k_{x}}{2|{\bf k}|}\widehat{v_{1}}({\bf k})+\frac{k_{y}}{2|{\bf k}|}\widehat{v_{2}}({\bf k}),\nonumber\\ \widehat{w_{2}}({\bf k})&=&-\frac{1}{2\tau({\bf k})}\widehat{\zeta}({\bf k})+\frac{k_{x}}{2|{\bf k}|}\widehat{v_{1}}({\bf k})+\frac{k_{y}}{2|{\bf k}|}\widehat{v_{2}}({\bf k}),\label{bens35b} \end{eqnarray} and the symbol $\tau({\bf k})$ has order $-1$, then \begin{eqnarray*} (\zeta,v_{1},v_{2})\in H^{s+1}\times H^{s}\times H^{s}\Rightarrow (\eta,w_{1},w_{2})\in H^{s}\times H^{s}\times H^{s},\; s> 0, \end{eqnarray*} and therefore, when $\alpha\geq 1$ the system (\ref{bens222}), (\ref{bens223}) is linearly well-posed in $H^{s+1}\times H^{s}\times H^{s}, s>0$. Another point of interest is the existence of conserved quantities for the one-dimensional version (in the $x-$direction) of (\ref{bens222}), (\ref{bens223}) \begin{eqnarray} \left(1+\frac{\alpha\sqrt{\mu}}{\gamma}\mathcal{H}\right)\partial_{t}\zeta+\frac{1}{\gamma}\partial_{x}\left((1-\epsilon\zeta){u}\right)-(1-\alpha)\frac{\sqrt{\mu}}{\gamma^{2}}\mathcal{H}\partial_{x}{u}&=&0,\label{bens224}\\ \partial_{t}{u}+(1-\gamma)\partial_{x}\zeta-\frac{\epsilon}{2\gamma}\partial_{x}\left(u^{2}\right)&=&T\partial_{x}^{3}\zeta,\label{bens225} \end{eqnarray} where $\mathcal{H}=\partial_{x}H$, with $H$ the Hilbert transform (\ref{hilbt}), and $u$ is a horizontal velocity-like variable. The one-dimensional version (\ref{bens224}), (\ref{bens225}) trivially admits the linear functionals \begin{eqnarray*} I_{1}(\zeta,u)=\int_{-\infty}^{\infty}\zeta dx,\quad I_{2}(\zeta,u)=\int_{-\infty}^{\infty}u dx, \end{eqnarray*} as invariants of the time evolution of smooth enough solutions which decay, along with higher-order derivatives, to zero at infinity. 
As in the case of the BO system, \cite{BonaDM2020}, no other conserved quantities were found. It may be worth mentioning that the quantities \begin{eqnarray*} I&=&\int_{-\infty}^{\infty}\zeta u dx,\\ H&=&\frac{1-\gamma}{2}\int_{-\infty}^{\infty}\zeta^{2} dx-\frac{\epsilon}{2\gamma}\int_{-\infty}^{\infty}\zeta u^{2} dx\\ &&+\frac{1}{2\gamma}\int_{-\infty}^{\infty}u^{2} dx -(1-\alpha)\frac{\sqrt{\mu}}{2\gamma^{2}}\int_{-\infty}^{\infty}u\mathcal{H}u dx-\frac{T}{2}\int_{-\infty}^{\infty}\zeta_{x}^{2}dx, \end{eqnarray*} satisfy the evolution equations (cf. \cite{BonaDM2020}) \begin{eqnarray*} \frac{d}{dt}I=-\frac{\alpha\sqrt{\mu}}{\gamma}\int_{-\infty}^{\infty}u\mathcal{H}\zeta_{t} dx, \end{eqnarray*} \begin{eqnarray*} \frac{d}{dt}H=-\frac{\alpha\sqrt{\mu}}{\gamma}\int_{-\infty}^{\infty}\left((1-\gamma)\zeta-\frac{\epsilon}{2\gamma}u^{2}\right)\mathcal{H}\zeta_{t} dx. \end{eqnarray*} Thus they are preserved only in the case $\alpha=0$. \subsection{The regularized Benjamin equation} \label{sec23} The reduction of a two-way model like (\ref{bens224}), (\ref{bens225}) to corresponding unidirectional models can be formally made in the same way as in \cite{BonaDM2020} for the Benjamin-Ono and Intermediate Long Wave systems for internal waves, or as for the bidirectional model for interfacial capillary-gravity waves in deep water in \cite{Kalisch2007}. Summarizing (see \cite{Whitham}), when $O(\epsilon)=O(\sqrt{\mu})$ terms in (\ref{bens224}), (\ref{bens225}) are neglected, the system reduces to $\partial_{t}\zeta+\frac{1}{\gamma}\partial_{x}u=0$, $\partial_{t}u+(1-\gamma)\partial_{x}\zeta=0$, so that the deformation $\zeta$ satisfies the linear wave equation $\partial_{t}^{2}\zeta=c_{\gamma}^{2}\partial_{x}^{2}\zeta$ with speed \begin{eqnarray} c_{\gamma}=\sqrt{\frac{1-\gamma}{\gamma}}.\label{bens226} \end{eqnarray} Right-moving waves will then be of the form $\zeta=\zeta_{0}(x-c_{\gamma}t)$, with $u=\sqrt{\gamma(1-\gamma)}\zeta$. In order to find solutions of (\ref{bens224}), (\ref{bens225}) moving e.~g. 
to the right to order $O(\epsilon)=O(\sqrt{\mu})$ we assume $u$ of the form \begin{eqnarray} u=\sqrt{\gamma(1-\gamma)}\left(\zeta+A\epsilon+B\sqrt{\mu}\right),\label{bens227} \end{eqnarray} for some functions $A, B$ of $\zeta$. As in \cite{BonaDM2020}, we substitute (\ref{bens227}) into (\ref{bens224}), (\ref{bens225}) and retain only the $O(\epsilon)=O(\sqrt{\mu})$ terms. Then consistency of the two equations leads to \begin{eqnarray} A=\frac{1}{4}\zeta^{2},\; B=\frac{1}{2\gamma}\mathcal{H}\zeta-\frac{T}{2(1-\gamma)\sqrt{\mu}}\partial_{x}^{2}\zeta,\label{bens228} \end{eqnarray} and the one-parameter family of unidirectional equations for $\zeta$ \begin{eqnarray} \left(1+\frac{\alpha\sqrt{\mu}}{\gamma}\mathcal{H}\right)\partial_{t}\zeta+c_{\gamma}\partial_{x}\zeta-\frac{3\epsilon}{4}c_{\gamma}\partial_{x}\zeta^{2}-c_{\gamma}\frac{(1-2\alpha)}{2\gamma}\sqrt{\mu}\mathcal{H}\partial_{x}\zeta&&\nonumber\\ -\frac{T}{2\sqrt{\gamma(1-\gamma)}}\partial_{x}^{3}\zeta=0.&&\label{bens229} \end{eqnarray} In the absence of surface tension ($T=0$) equation (\ref{bens229}) reduces to the regularized Benjamin-Ono equation, \cite{KB,BLS2008}. According to this, (\ref{bens229}) will be called the regularized Benjamin (or rBenjamin) equation. The Benjamin equation corresponds to $\alpha=0$. Taking the Fourier transform in the linearization of (\ref{bens229}) leads to solutions of the form $\widehat{\zeta}(k,t)=e^{-ikm(k)t}\widehat{\zeta}(k,0)$ with \begin{eqnarray*} m(k)=\frac{c_{\gamma}\left(1-\frac{(1-2\alpha)}{2\gamma}\sqrt{\mu}|k|\right)+\widetilde{T}k^{2}}{1+\frac{\alpha}{\gamma}\sqrt{\mu}|k|},\; \widetilde{T}=\frac{T}{2\sqrt{\gamma(1-\gamma)}}, \end{eqnarray*} and to the linear dispersion relation $ \omega(k)=km(k)$, ensuring linear well-posedness of (\ref{bens229}). In analogy with the regularized BO equation, \cite{BonaDM2020}, (\ref{bens229}) admits at least three time-invariant functionals and a Hamiltonian structure. 
The conserved quantities are \begin{eqnarray*} && C(\zeta)=\int_{-\infty}^{\infty} \zeta dx,\qquad D(\zeta)=\frac{1}{2}\int_{-\infty}^{\infty} \left(\zeta^{2}+\sqrt{\mu}\frac{\alpha}{\gamma}\zeta\mathcal{H}\zeta\right) dx,\\ && E(\zeta)=\frac{c_{\gamma}}{2}\int_{-\infty}^{\infty} \left(\zeta^{2}-\sqrt{\mu}\frac{(1-2\alpha)}{2\gamma}\zeta\mathcal{H}\zeta-\frac{\epsilon}{2}\zeta^{3}\right) dx+\frac{\widetilde{T}}{2}\int_{-\infty}^{\infty}(\partial_{x}\zeta)^{2}dx. \end{eqnarray*} The last one provides (\ref{bens229}) with a Hamiltonian formulation \begin{eqnarray*} \partial_{t}\zeta=\mathcal{J}\frac{\delta}{\delta\zeta}E(\zeta), \end{eqnarray*} with structure operator $\mathcal{J}=-\partial_{x}J(\alpha)^{-1}$, where $J(\alpha)$ is given by (\ref{joper}) (in its one-dimensional version) and $\frac{\delta}{\delta\zeta}$ denotes the variational (Fr\'echet) derivative. When $T=0$ we recover the invariants and Hamiltonian structure of the regularized BO equation, \cite{BonaDM2020}. As in the particular case of the Benjamin equation ($\alpha=0$), the existence of these conserved quantities and the theory of Kenig et al., \cite{KenigPV1991,KenigPV1993}, might be used to obtain local and global well-posedness results for (\ref{bens229}), see \cite{Linares1999,LinaresS2005}. \section{A computational study of solitary wave solutions} \label{sec:sec3} \subsection{Preliminaries} \label{sec31} Another property typically studied in water wave models is the existence of special solutions. Of particular interest are solutions of solitary-wave type, due to their relevance in the general dynamics of some models, \cite{Bona1981}. The present section is concerned with this topic for the one-dimensional version of the Benjamin system (\ref{bens224}), (\ref{bens225}), and for the regularized Benjamin equation (\ref{bens229}). 
The purpose here, pursued by computational means, is twofold. The first goal is related to the existence of solitary-wave solutions, for which no theoretical results are available. The second is to compare the solitary-wave profiles of the two models with each other and with those of the Benjamin equation. In particular, and following the study developed for the system introduced in \cite{Kalisch2007}, the dynamics of solitary wave solutions of the Benjamin equation under the evolution given by both the Benjamin system and the rBenjamin equation is numerically investigated. We start with a description of the equations involved in the generation of solitary waves and of the numerical tools used for the computational study. For the case of the Benjamin system (\ref{bens224}), (\ref{bens225}), we look for solutions in the form of traveling waves $\zeta(x,t)=\zeta(x-c_{s}t), u(x,t)=u(x-c_{s}t)$, for some speed $c_{s}\neq 0$, with profiles $\zeta(X), u(X), X=x-c_{s}t$, which are smooth and decay to zero as $|X|\rightarrow\infty$. 
Substituting into (\ref{bens224}), (\ref{bens225}) and integrating once yields the system \begin{eqnarray} \begin{pmatrix}-c_{s}J(\alpha)&\frac{1}{\gamma}J(\alpha-1)\\(1-\gamma)-T\partial_{x}^{2}&-c_{s}\end{pmatrix}\begin{pmatrix}\zeta\\u\end{pmatrix}=\frac{\epsilon}{\gamma}\begin{pmatrix}\zeta u\\ \frac{u^{2}}{2}\end{pmatrix}.\label{bens41} \end{eqnarray} In the case of the rBenjamin equation (\ref{bens229}), solutions $\zeta(x,t)=\zeta(x-c_{s}t), c_{s}>0$, with smooth profiles $\zeta(X)\rightarrow 0, |X|\rightarrow\infty$, will satisfy the equation \begin{eqnarray} -c_{s}J(\alpha)\zeta+c_{\gamma}\left(\zeta-\frac{3\epsilon}{4}\zeta^{2}+\frac{2\alpha-1}{2\gamma}\sqrt{\mu}\mathcal{H}\zeta\right)-\widetilde{T}\zeta^{\prime\prime}=0.\label{bens42} \end{eqnarray} We will focus here on providing strong numerical evidence of the existence of solutions of (\ref{bens41}) and (\ref{bens42}) and on the properties of the solitary waves suggested by the computations, leaving the corresponding theoretical study for future research. The computational approach is based on the iterative resolution of the algebraic systems obtained from the Fourier representation of (\ref{bens41}) and (\ref{bens42}). 
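Both (\ref{bens41}) and (\ref{bens42}) only involve operators that are diagonal in Fourier space, which is what makes a spectral treatment natural. As a minimal illustration (NumPy, the grid size and the test functions are choices of this sketch, not taken from the text), the operator $\mathcal{H}=|D|$, with symbol $|k|$, can be applied on a $2\pi$-periodic grid as follows:

```python
import numpy as np

N = 256
x = 2.0 * np.pi * np.arange(N) / N        # 2*pi-periodic grid
k = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers

def calH(f):
    """Apply H = |D|, i.e. multiply each Fourier mode by |k|."""
    return np.real(np.fft.ifft(np.abs(k) * np.fft.fft(f)))

# H cos(nx) = n cos(nx), exact to rounding on smooth periodic data
out = calH(np.cos(3 * x))
```

The operators $J(r)$ and the full symbols appearing in the Fourier forms of (\ref{bens41}) and (\ref{bens42}) are handled in exactly the same way, by multiplication in Fourier space.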
In Fourier space, systems (\ref{bens41}) and (\ref{bens42}) read, respectively, \begin{eqnarray} \begin{pmatrix}-c_{s}\widehat{J(\alpha)}(k)&\frac{1}{\gamma}\widehat{J(\alpha-1)}(k)\\(1-\gamma)+Tk^{2}&-c_{s}\end{pmatrix}\begin{pmatrix}\widehat{\zeta}(k)\\\widehat{u}(k)\end{pmatrix}=\frac{\epsilon}{\gamma}\begin{pmatrix}\widehat{\zeta u}(k)\\ \widehat{\frac{u^{2}}{2}}(k)\end{pmatrix},\label{bens43} \end{eqnarray} for $k\in\mathbb{R}$, where $\widehat{J(r)}(k)=1+\frac{r}{\gamma}\sqrt{\mu}|k|$, and \begin{eqnarray} \left(-c_{s}\widehat{J(\alpha)}(k)+c_{\gamma}\left(1+\frac{2\alpha-1}{2\gamma}\sqrt{\mu}|k|\right)+\widetilde{T}k^{2}\right)\widehat{\zeta}(k)=\frac{3c_{\gamma}\epsilon}{4}\widehat{\zeta^{2}}(k).\label{bens44} \end{eqnarray} The numerical procedure is performed in the standard way: for each case (\ref{bens41}), (\ref{bens42}), the corresponding periodic problem on a long enough interval $(-L,L)$ is implemented via the Fourier representation based on the forms (\ref{bens43}) and (\ref{bens44}) respectively, where now $k\in\mathbb{Z}$ and $\widehat{\zeta}(k), \widehat{u}(k)$ represent the corresponding $k$th Fourier coefficients. The resulting algebraic equations for each $k$ are iteratively solved by using Petviashvili's method, \cite{Petv1976,pelinovskys}, taking advantage of the homogeneous character (of degree two) of the nonlinear term. The Petviashvili iteration is complemented with vector extrapolation techniques, \cite{sidi,sidifs,smithfs}, in order to accelerate the convergence, \cite{AlvarezD2015}. \subsection{A comparative study} The approximate solitary wave solutions of the Benjamin system (\ref{bens224}), (\ref{bens225}), the regularized Benjamin equation (\ref{bens229}) and the Benjamin equation (again (\ref{bens229}) with $\alpha=0$) are compared here in a series of numerical experiments. The dimensionless parameters for the computations are taken as $\epsilon=\sqrt{\mu}=0.1$, with $T=0.1$, while different values of $\gamma$ and $c_{s}$ are considered. 
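As an illustration of the procedure just described, a plain Petviashvili iteration for the profile equation of the rBenjamin equation can be sketched in a few lines. NumPy is assumed; $\gamma=0.6$, $c_{s}=0.75$, $\alpha=1.2$ correspond to one of the cases considered below, while the grid, the number of iterations and the $\operatorname{sech}^{2}$ initial guess are choices made for this sketch only (no extrapolation acceleration is included). The coefficient of $k^{2}$ is written with $\widetilde{T}=T/(2\sqrt{\gamma(1-\gamma)})$, the coefficient of the third-derivative term in (\ref{bens229}):

```python
import numpy as np

eps = smu = 0.1                                    # epsilon = sqrt(mu) = 0.1
T, gamma, alpha, cs = 0.1, 0.6, 1.2, 0.75
cg = np.sqrt((1.0 - gamma) / gamma)                # c_gamma
Ttil = T / (2.0 * np.sqrt(gamma * (1.0 - gamma)))

N, L = 1024, 128.0                                 # 2L-periodic interval
x = -L + (2.0 * L / N) * np.arange(N)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=2.0 * L / N)

Jhat = lambda r: 1.0 + (r / gamma) * smu * np.abs(k)   # symbol of J(r)
# Symbol of the linear operator acting on zeta in the profile equation
ell = (-cs * Jhat(alpha)
       + cg * (1.0 + (2.0 * alpha - 1.0) / (2.0 * gamma) * smu * np.abs(k))
       + Ttil * k ** 2)

zeta = 1.5 / np.cosh(0.4 * x) ** 2                 # localized initial guess
for _ in range(1000):
    g = (3.0 * cg * eps / 4.0) * np.fft.fft(zeta * zeta)
    zh = np.fft.fft(zeta)
    # Stabilizing factor; exponent 2 for a homogeneous quadratic nonlinearity
    s = np.real(np.sum(ell * np.abs(zh) ** 2)) / np.real(np.sum(np.conj(zh) * g))
    zeta = np.real(np.fft.ifft(s ** 2 * g / ell))
```

At convergence the stabilizing factor $s$ tends to $1$ and $\zeta$ approximates a solitary wave of elevation; the computations reported below use the larger interval $L=256$ with $4096$ modes and accelerate the iteration by extrapolation.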
The interval of approximation is determined by $L=256$, and $4096$ Fourier modes are typically used. \begin{figure}[!htbp] \centering {\includegraphics[width=\columnwidth]{Bensw_fig01.eps}} \caption{Approximate profiles $(\zeta,u)$ of the Benjamin system (\ref{bens41}) with $\gamma=0.8, c_s=0.49$. } \label{fig:bens_fig1} \end{figure} \begin{figure}[!htbp] \centering {\includegraphics[width=\columnwidth]{Bensw_fig02.eps}} \caption{Approximate profiles $(\zeta,u)$ of the Benjamin system (\ref{bens41}) with $\gamma=0.8, c_s=-0.49$. } \label{fig:bens_fig2} \end{figure} Note first that if $(c_{s},\zeta,u)$ is a solution of (\ref{bens41}) then $(-c_{s},\zeta,-u)$ is also a solution, with the same $\zeta$ profile traveling in the opposite direction. This is illustrated in Figures \ref{fig:bens_fig1} and \ref{fig:bens_fig2}, with the representation of the approximate $\zeta$ and $u$ solitary wave profiles corresponding to $\gamma=0.8$ and $c_{s}=\pm 0.49$. \begin{figure}[htbp] \centering \subfigure[] {\includegraphics[width=\columnwidth]{Bensw_fig1.eps}} \subfigure[] {\includegraphics[width=\columnwidth]{Bensw_fig3.eps}} \subfigure[] {\includegraphics[width=\columnwidth]{Bensw_fig5.eps}} \caption{Comparison of approximate $\zeta$ solitary wave profiles. (a) $\gamma=0.4, c_{s}=1.1$; (b) $\gamma=0.6, c_{s}=0.75$; (c) $\gamma=0.8, c_{s}=0.49$.} \label{fig:bens_fig3} \end{figure} \begin{figure}[htbp] \centering \subfigure[] {\includegraphics[width=\columnwidth]{Bensw_fig2.eps}} \subfigure[] {\includegraphics[width=\columnwidth]{Bensw_fig4.eps}} \subfigure[] {\includegraphics[width=\columnwidth]{Bensw_fig6.eps}} \caption{Phase portraits of the approximate $\zeta$ profiles of Figure \ref{fig:bens_fig3}.} \label{fig:bens_fig4} \end{figure} The second observation is that the computations generate solitary wave profiles when $|c_{s}|<c_{\gamma}$, where $c_{\gamma}$ is given by (\ref{bens226}), and the $\zeta$ profiles are of elevation. 
(This does not allow us, of course, to discard the existence of solitary wave solutions of depression, as occur in the BO system, \cite{BonaDM2020,AnguloS2020}.) A further observation is that the profiles are not positive: they exhibit an oscillatory decay, in accordance with the known behaviour of the solitary wave solutions of the Benjamin equation, \cite{ABR,Benjamin1992}. The waves are taller and display fewer oscillations as $|c_{s}|$ moves away from $c_{\gamma}$. These properties may be compared with those of the solitary wave solutions of the BO and ILW systems, studied theoretically in \cite{AnguloS2020} and computationally in \cite{BonaDM2020}, for which the solitary waves do not show oscillatory decay. \begin{figure}[htbp] \centering \subfigure[] {\includegraphics[width=\columnwidth]{Bensw_fig7.eps}} \subfigure[] {\includegraphics[width=\columnwidth]{Bensw_fig8.eps}} \subfigure[] {\includegraphics[width=\columnwidth]{Bensw_fig9.eps}} \caption{Speed-amplitude relation. (a) $\gamma=0.4$; (b) $\gamma=0.6$; (c) $\gamma=0.8$.} \label{fig:bens_fig5} \end{figure} The main influence of the parameter $\gamma$ seems then to occur through the apparent limiting speed $c_{\gamma}$. In Figure \ref{fig:bens_fig3} a comparison of the approximate $\zeta$ profiles of the Benjamin system, the rBenjamin equation and the Benjamin equation is made for different values of $\gamma=0.4, 0.6, 0.8$, for which $c_{\gamma}=1.2247, 0.8165, 0.5$, respectively. Note that as $\gamma$ grows, the profiles of the three models become closer and the amplitude decreases. This is also observed in Figure \ref{fig:bens_fig4}, which displays the corresponding phase portraits. Here we can also notice the oscillatory decay of the waves. \begin{figure}[!htbp] \centering {\includegraphics[width=\columnwidth]{Bensw_fig11.eps}} \caption{Approximate $\zeta$ solitary wave profiles of the rBenjamin equation for several values of $\alpha$ and $\gamma=0.6, c_s=0.75. 
} \label{fig:bens_fig6} \end{figure} As mentioned before, for a fixed value of $\gamma$, the amplitude of the waves is an increasing function of $c_{\gamma}-|c_{s}|$. This is illustrated in Figure \ref{fig:bens_fig5}, which corresponds to $\gamma=0.4, 0.6$ and $0.8$. In all the cases, the model providing the largest amplitudes is the rBenjamin equation but, as $\gamma\uparrow 1$ and for larger values of $c_{\gamma}-|c_{s}|$, the behaviours of the Benjamin equation and the rBenjamin equation seem to approach each other, while the amplitudes of the profiles of the Benjamin system tend to separate from the corresponding ones for the Benjamin equation. The results in Figures \ref{fig:bens_fig3} and \ref{fig:bens_fig4} correspond to taking $\alpha=1.2$ both in (\ref{bens229}) and in (\ref{bens224}), (\ref{bens225}). Figure \ref{fig:bens_fig6} displays the profiles of the rBenjamin equation for different values of $\alpha$ and $\gamma=0.6, c_{s}=0.75$. Observe that the amplitude of the profiles is an increasing function of $\alpha$. The similarities in the solitary waves of the three models can also be studied as follows. We generate an approximate solitary wave solution of the Benjamin equation. The profile is then taken as initial condition for two numerical methods that approximate the evolution of (\ref{bens229}) and (\ref{bens224}), (\ref{bens225}), respectively. In the case of the Benjamin system, the initial condition for the second component $u$ is given by (\ref{bens227}), (\ref{bens228}), where $\zeta$ denotes the computed solitary wave of the Benjamin equation. Then the evolution of the corresponding numerical approximation is monitored. The numerical schemes used for the simulations consist of the approximation of the corresponding periodic initial-value problem on a long enough interval with Fourier collocation discretization in space and a fourth-order, singly diagonally implicit Runge--Kutta composition method as time integrator. 
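In such Fourier collocation schemes the nonlocal operator $\mathcal{H}$ is applied spectrally, mode by mode. A minimal sketch (the sign convention is an assumption; with the common choice $\widehat{\mathcal{H}f}(k)=-i\,\mathrm{sgn}(k)\hat{f}(k)$ one has $\mathcal{H}\cos = \sin$):

```python
import numpy as np

def hilbert_periodic(f):
    """Periodic Hilbert transform applied via its Fourier symbol -i*sgn(k)."""
    n = len(f)
    k = np.fft.fftfreq(n) * n                      # integer wavenumbers
    return np.fft.ifft(-1j * np.sign(k) * np.fft.fft(f)).real

# Sanity check on a single Fourier mode: with this convention H[cos] = sin
N = 256
x = 2.0 * np.pi * np.arange(N) / N
err = np.max(np.abs(hilbert_periodic(np.cos(x)) - np.sin(x)))
```

On an interval $(-L,L)$ the wavenumbers rescale as $\pi k/L$, but $\mathrm{sgn}(k)$, and hence the symbol of $\mathcal{H}$, is unchanged.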
Both numerical strategies were shown to have a good performance in related problems, \cite{FrutosS1992,DDM2015,DDM2019}. \begin{figure}[htbp] \centering \subfigure[] {\includegraphics[width=12cm]{Bensw_fig14a.eps}} \subfigure[] {\includegraphics[width=12cm]{Bensw_fig14b.eps}} \subfigure[] {\includegraphics[width=12cm]{Bensw_fig14c.eps}} \subfigure[] {\includegraphics[width=12cm]{Bensw_fig14e.eps}} \caption{Numerical approximation to the rBenjamin equation from a solitary wave of the Benjamin equation as initial condition. $\gamma=0.4, c_s=1.1, \alpha=1.2$.} \label{fig:bens_fig7} \end{figure} \begin{figure}[htbp] \centering \subfigure[] {\includegraphics[width=12cm]{Bensw_fig15a.eps}} \subfigure[] {\includegraphics[width=12cm]{Bensw_fig15b.eps}} \subfigure[] {\includegraphics[width=12cm]{Bensw_fig15c.eps}} \subfigure[] {\includegraphics[width=12cm]{Bensw_fig15e.eps}} \caption{Numerical approximation to the Benjamin system from a solitary wave of the Benjamin equation as initial condition. $\gamma=0.4, c_s=1.1, \alpha=1.2$.} \label{fig:bens_fig8} \end{figure} Taking $\gamma=0.4$ and $c_{s}=1.1$, this evolution is illustrated in Figure \ref{fig:bens_fig7} (for the rBenjamin equation with $\alpha=1.2$) and \ref{fig:bens_fig8} (for the Benjamin system with $\alpha=1.2$). \begin{figure}[!htbp] \centering {\includegraphics[width=\columnwidth]{Bensw_fig14em.eps}} \caption{rBenjamin equation. Magnification of Figure \ref{fig:bens_fig7}(d) } \label{fig:bens_fig7m} \end{figure} \begin{figure}[!htbp] \centering {\includegraphics[width=\columnwidth]{Bensw_fig15em.eps}} \caption{Benjamin system. Magnification of Figure \ref{fig:bens_fig8}(d) } \label{fig:bens_fig8m} \end{figure} The two models show a similar qualitative behaviour. The initial condition evolves into an approximate solitary wave solution of the corresponding equations along with a dispersive tail traveling in front of this main wave. (There is also a much smaller tail trailing the solitary-wave profile.) 
Furthermore, the formation of a small solitary-wave-like structure cannot be ruled out, in a sort of resolution property. This is suggested by the magnifications in Figures \ref{fig:bens_fig7m} and \ref{fig:bens_fig8m}. In the context of stability, the experiments suggest that the initial solitary wave solution of the Benjamin equation behaves as a small perturbation of some close solitary wave solutions of the rBenjamin equation and the Benjamin system. \begin{figure}[!htbp] \centering {\includegraphics[width=\columnwidth]{Bensw_fig12.eps}} \caption{Function $\phi$ in (\ref{bens46}) with $\sqrt{\mu}=T=0.1$ and $\gamma=0.4, \alpha=1.2$.} \label{fig:bens_fig9} \end{figure} \begin{figure}[!htbp] \centering {\includegraphics[width=\columnwidth]{Bensw_fig13.eps}} \caption{Function $\phi$ in (\ref{bens49}) with $\sqrt{\mu}=T=0.1$ and $\gamma=0.4, \alpha=1.2$. } \label{fig:bens_fig10} \end{figure} The structure of the dispersive tails may be studied from the corresponding linearized equations. In the case of (\ref{bens229}) and in a frame moving with the speed $c_{s}$ of the solitary wave, the equation is \begin{eqnarray} \left(1+\frac{\alpha\sqrt{\mu}}{\gamma}\mathcal{H}\right)(\partial_{t}-c_{s}\partial_{y})\zeta+c_{\gamma}\partial_{y}\zeta-c_{\gamma}\frac{(1-2\alpha)}{2\gamma}\sqrt{\mu}\mathcal{H}\partial_{y}\zeta&&\nonumber\\ -\frac{T}{2\sqrt{\gamma(1-\gamma)}}\partial_{y}^{3}\zeta=0,&&\label{bens45} \end{eqnarray} where $y=x-c_{s}t$. Plane wave solutions $\zeta(y,t)=e^{i(ky-\omega(k)t)}$ of (\ref{bens45}) will satisfy the linear dispersion relation $\omega(k)=-kc_{s}+c_{\gamma}\phi(|k|)$ where $\phi:[0,\infty)\rightarrow\mathbb{R}$ is defined as \begin{eqnarray} \phi(x)=\frac{1+\frac{(2\alpha-1)}{2\gamma}\sqrt{\mu}x+\frac{T}{2(1-\gamma)}x^{2}}{1+\frac{\alpha}{\gamma}\sqrt{\mu}x},\;\; x\geq 0.\label{bens46} \end{eqnarray} Therefore, the local phase speed relative to the speed of the solitary wave is \begin{eqnarray*} v(k)=\frac{\omega(k)}{k}=-c_{s}+c_{\gamma}\phi(|k|). 
\end{eqnarray*} Some properties of the function $\phi$ can explain the behaviour of the phase speed. These are collected in the following lemma. \begin{lemma} \label{lemm1} The following properties of the function $\phi$ defined in (\ref{bens46}) hold: \begin{itemize} \item[(i)] $\phi(0)=1$ and $\displaystyle\lim_{x\rightarrow +\infty}\phi(x)=+\infty$. \item[(ii)] $\phi$ attains a minimum at \begin{eqnarray} x^{*}=\frac{1}{2}\left(-b+\sqrt{b^{2}+4c}\right),\; b=\frac{2\gamma}{\alpha\sqrt{\mu}},\; c=\frac{1-\gamma}{\alpha T}, \label{bens47} \end{eqnarray} which satisfies $x^{*}>0$ and $\phi(x^{*})>0$ for $\mu$ small enough. \end{itemize} \end{lemma} {\em Proof.} We write $\phi(x)=\frac{P(x)}{Q(x)}$ where \begin{eqnarray*} P(x)=1+\frac{(2\alpha-1)}{2\gamma}\sqrt{\mu}x+\frac{T}{2(1-\gamma)}x^{2},\; Q(x)=1+\frac{\alpha}{\gamma}\sqrt{\mu}x,\; x\geq 0. \end{eqnarray*} Then elementary calculus proves (i) and the existence of $x^{*}$ given by (\ref{bens47}) where $\phi$ attains a minimum. In order to prove the last property, note that since $x^{*}>0$ then $Q(x^{*})>0$. If $2\alpha-1\geq 0$, then $P(x^{*})>0$ and consequently $\phi(x^{*})>0$. If $2\alpha-1< 0$ we observe that $P$ attains a minimum at \begin{eqnarray*} x_{*}=\frac{c_{\gamma}^{2}}{2T}(1-2\alpha)\sqrt{\mu}, \end{eqnarray*} for which, after some computations, one finds that \begin{eqnarray*} P(x_{*})=\frac{8\gamma^{2}T-(1-2\alpha)^{2}(1-\gamma)\mu}{8\gamma^{2}T}, \end{eqnarray*} and since $T=O(\sqrt{\mu})$ then $P(x)\geq P(x_{*})>0$ for $\mu$ small enough and $x\geq 0$. In particular $P(x^{*})>0$ and thus $\phi(x^{*})>0$. $\Box$ A typical form of the function $\phi$ for the range of values of the parameters used in the numerical experiments is illustrated in Figure \ref{fig:bens_fig9}. Note that Lemma \ref{lemm1} implies that \begin{eqnarray*} v(k)>-c_{s}+c_{\gamma}\phi(x^{*}), \end{eqnarray*} and $v(k)>0$ for all wavenumbers $k$ if we take $c_{s}<c_{\gamma}\phi(x^{*})$. 
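Lemma \ref{lemm1} is easy to check numerically; a small sketch for the parameter values of Figure \ref{fig:bens_fig9} ($\sqrt{\mu}=T=0.1$, $\gamma=0.4$, $\alpha=1.2$):

```python
import numpy as np

# Parameters of Figure bens_fig9: sqrt(mu) = T = 0.1, gamma = 0.4, alpha = 1.2
smu, T, g, al = 0.1, 0.1, 0.4, 1.2

def phi(x):
    """phi(x) of (bens46), the dispersion factor of the rBenjamin equation."""
    P = 1.0 + (2*al - 1)/(2*g) * smu * x + T/(2*(1 - g)) * x**2
    Q = 1.0 + (al/g) * smu * x
    return P / Q

# Minimizer predicted by Lemma 1, Eq. (bens47)
b, c = 2*g/(al*smu), (1 - g)/(al*T)
xstar = 0.5 * (-b + np.sqrt(b**2 + 4*c))

h = 1e-6
dphi = (phi(xstar + h) - phi(xstar - h)) / (2*h)  # should vanish at the minimum
```

The checks confirm $\phi(0)=1$, $\phi'(x^{*})=0$ with $x^{*}>0$, $\phi(x^{*})>0$, and growth of $\phi$ for large argument.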
Even if this is not satisfied, we observe that, due to (i) of Lemma \ref{lemm1}, we have $\phi(|k|)>1$ from some value of $|k|$ on (see Figure \ref{fig:bens_fig9}) and then \begin{eqnarray*} v(k)>-c_{s}+c_{\gamma}. \end{eqnarray*} Therefore, if $c_{s}<c_{\gamma}$ then $v(k)>0$ from some value of $|k|$ on; this means that most of the solution components $\zeta(y,t)=e^{i(ky-\omega(k)t)}$ are leading the solitary pulse, cf. Figure \ref{fig:bens_fig7}. In the case of the Benjamin system (\ref{bens224}), (\ref{bens225}), the corresponding linearized equations are \begin{eqnarray} \left(1+\frac{\alpha\sqrt{\mu}}{\gamma}\mathcal{H}\right)(\partial_{t}-c_{s}\partial_{y})\zeta+\frac{1}{\gamma}\partial_{y}\left(1-(1-\alpha)\frac{\sqrt{\mu}}{\gamma^{2}}\mathcal{H}\right){u}&=&0,\label{bens48a}\\ (\partial_{t}-c_{s}\partial_{y}){u}+\partial_{y}\left((1-\gamma)-T\partial_{y}^{2}\right)\zeta&=&0.\label{bens48b} \end{eqnarray} System (\ref{bens48a}), (\ref{bens48b}) can be reduced to \begin{eqnarray*} \left(1+\frac{\alpha\sqrt{\mu}}{\gamma}\mathcal{H}\right)(\partial_{t}-c_{s}\partial_{y})^{2}\zeta-c_{\gamma}^{2}\partial_{y}^{2}\left(1-(1-\alpha)\frac{\sqrt{\mu}}{\gamma^{2}}\mathcal{H}\right)\zeta&&\\ +\frac{T}{\gamma}\partial_{y}^{4}\left(1-(1-\alpha)\frac{\sqrt{\mu}}{\gamma^{2}}\mathcal{H}\right)\zeta=0.&& \end{eqnarray*} Then the dispersion relation, relative to the speed $c_{s}$ of the solitary wave, has the form $\omega_{\pm}(k)=-kc_{s}\pm kc_{\gamma}\phi(|k|)$, where now $\phi:[0,\infty)\rightarrow\mathbb{R}$ is given by \begin{eqnarray} \phi(x)=\left(\frac{(1+\frac{(\alpha-1)}{\gamma}\sqrt{\mu}x)(1+\frac{T}{(1-\gamma)}x^{2})}{1+\frac{\alpha}{\gamma}\sqrt{\mu}x}\right)^{1/2},\;\; x\geq 0.\label{bens49} \end{eqnarray} We observe (see Figure \ref{fig:bens_fig10}) that the function $\phi$ in (\ref{bens49}) also satisfies (i) of Lemma \ref{lemm1}. 
If $|c_{s}|<c_{\gamma}$, then from some $|k|$ on it holds that \begin{eqnarray*} v_{+}(k)=\frac{\omega_{+}(k)}{k}=-c_{s}+c_{\gamma}\phi(|k|)>-c_{s}+c_{\gamma}>0, \end{eqnarray*} and \begin{eqnarray*} v_{-}(k)=\frac{\omega_{-}(k)}{k}=-c_{s}-c_{\gamma}\phi(|k|)<-c_{s}-c_{\gamma}<0. \end{eqnarray*} Thus, as before, most of the wave components of the dispersive tail travel ahead of the solitary wave, which moves with speed $c_{s}$. \section{Concluding remarks} \label{sec:sec4} The present paper introduces a two-dimensional asymptotic model for the propagation of internal waves in a two-layer system of fluids with a rigid-lid condition for the upper layer and a lower layer of infinite depth (or of depth much larger than that of the upper layer). Furthermore, the model takes into account both gravity and capillary effects at the interface. The derivation is carried out by first reformulating the corresponding Euler equations using the nonlocal operators considered in \cite{BLS2008}. Then asymptotic expansions of these operators, consistent with the physical regime of the Benjamin model, lead to a bi-directional system for the deviation of the interface and the velocity variables. In the one-dimensional, uni-directional case, the system can be reduced to a family of regularized Benjamin equations which contains, as a particular case, a version of the model derived by Benjamin, \cite{Benjamin1967,Benjamin1992}. Some mathematical properties of the new models are also discussed: linear well-posedness, existence of conserved quantities, and a computational comparison of solitary wave solutions of the one-dimensional Benjamin system, the regularized Benjamin equation and the usual Benjamin equation. The results obtained about these mathematical aspects will additionally serve as a starting point for a deeper study of the models, concerning topics of local and global well-posedness, existence of solitary waves and dynamics of the equations.
\section{Motivation and numerical setup} \label{s:intro} Various catastrophic collapse events have been proposed to explain the energies released in a gamma--ray burst (GRB) including compact binary system mergers \cite{Go86, MH93}, collapsars \cite{Wo93} and hypernovae \cite{Pa98}. These models all rely on a common engine, namely a stellar mass black hole (BH) which accretes several solar masses of matter from a disk (formed during a merger or by a non--spherical collapse). A fraction of the gravitational binding energy released by accretion is converted into a pair fireball. Provided the baryon load of the fireball is not too large, the baryons are accelerated together with the e$^+\,$e$^-$ pairs to ultra--relativistic speeds (Lorentz factors $> 10^2$; \cite{CR78}). The existence of such relativistic flows is supported by radio observations of GRB\,980425 \cite{KF98}. The dynamics of spherically symmetric relativistic fireballs has been studied by several authors by means of 1D Lagrangian hydrodynamic simulations ({\sl e.g.,}\, \cite{ML93}). It has been argued that the rapid temporal decay of several GRB afterglows is more consistent with the evolution of a relativistic jet after it slows down and spreads laterally than with a spherical blast wave \cite{KD99}. The lack of a significant radio afterglow in GRB\,990123 provides independent evidence for jet--like geometry \cite{KF99}. Motivated by these observations and by the collapsar model of \cite{MW99}, we have simulated the propagation of jets from collapsars using relativistic hydrodynamics. In \cite{MW99} the continued evolution of rotating helium stars, whose iron core collapse does not produce a successful outgoing shock but instead forms a BH surrounded by a compact accretion disk, has been explored. 
Assuming that the efficiency of energy deposition by $\nu \bar\nu$--annihilation or, {\sl e.g.,}\, magneto-hydrodynamic processes is higher in the polar regions, \cite{MW99} obtained relativistic jets along the rotation axis, which remained highly focused and capable of penetrating the star. However, as these simulations were performed with a Newtonian hydrodynamic code, appreciably superluminal speeds in the jet flow were obtained. We have performed axisymmetric relativistic simulations of jets from collapsars starting from Model\,14A of \cite{MW99}. The simulations have been performed with GENESIS, a multidimensional relativistic hydrodynamic code (based on Godunov-type schemes) developed by \cite{AI99}, using 2D spherical coordinates ($r, \theta$). GENESIS employs a third-order explicit Runge--Kutta method \cite{SO89} to advance in time the relativistic Euler equations written in conservation form. High spatial order is provided by a PPM reconstruction \cite{CW84} that sets up the values of the physical variables in order to solve linearized Riemann problems at every cell interface (using Marquina's flux formula \cite{DF98}). The innermost $2.03\,M_{\sun}$ representing the iron core were removed from the helium star model by introducing an inner boundary at a radius of $200\,$km. Once the central BH had acquired a mass of $3.762\,M_{\sun}$, we mapped the model to our computational grid. In the $r$--direction the computational grid consists of 200 zones spaced logarithmically between the inner boundary and the surface of the helium star at $R_{*} = 2.98\times 10^{10}\,$cm. 
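The logarithmic radial zoning can be sketched as follows (a minimal reconstruction: only the endpoints and the zone count come from the text; the exact interface placement in GENESIS is an assumption):

```python
import numpy as np

R_in, R_star = 2.0e7, 2.98e10   # inner boundary (200 km) and stellar surface [cm]
nr = 200                        # number of radial zones

# Zone interfaces, logarithmically spaced: constant ratio between neighbours
r = np.logspace(np.log10(R_in), np.log10(R_star), nr + 1)
ratio = (R_star / R_in) ** (1.0 / nr)   # common interface ratio, ~1.04
```

Logarithmic spacing keeps the relative zone width $\Delta r / r$ constant, concentrating resolution near the inner boundary where the flow gradients are steepest.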
Assuming equatorial--plane symmetry, we use four different zonings in the angular direction: 44, 90 and 180 uniform zones ({\sl i.e.,}\, $2^{\circ}, 1^{\circ}$ and $0.5^{\circ}$ angular resolution), and 100 nonuniform zones covering the polar region $0^{\circ} \le \theta \le 30^{\circ}$ with 60 equidistant zones ($0.5^{\circ}$ resolution) and the remaining 40 zones being logarithmically distributed between $30^{\circ} \le \theta \le 90^{\circ}$. The gravitational field of the BH is described by the static Schwarzschild metric, neglecting the effects due to self--gravity of the star. We used the EOS of \cite{WJ94}, which includes the contribution of non--relativistic nucleons treated as a mixture of Boltzmann gases, and radiation, as well as an approximate correction due to $e^+e^-$ pairs. Full ionization and non-degeneracy of the electrons are assumed. We advect ({\sl i.e.,}\, we do not solve additional Riemann problems for each component) nine non-reacting nuclear species which are present in the initial model. In a consistent collapsar model the jet will be launched by any physical process which gives rise to a local deposition of energy and/or momentum. We mimic this process by depositing energy at a constant rate, $\dot E$, within a $30^{\circ}$ cone around the rotation axis of the progenitor star. In the radial direction the deposition region extends from the inner boundary to a radius of $6\times10^7\,$cm. We consider two cases that bracket the expected $\dot E$ of the collapsar models: $10^{50}\,$erg/s and $10^{51}\,$erg/s. \section{Results} \label{s:results} {\bf Low energy deposition rate (Model A).} Using a constant $\dot E = 10^{50}\,$erg/s, a relativistic jet forms within a fraction of a second and starts to propagate along the rotation axis (Fig.\,1). 
The jet exhibits all the typical morphological elements \cite{BR74}: a terminal bow shock, a narrow cocoon, a contact discontinuity separating stellar and jet matter, and a hot spot. \begin{figure}[hb] \centerline{\epsfig{file=fig_1.ps,width=9.5cm}} \caption{\small Coloured contour maps of the logarithm of the rest--mass density (six top panels) and the Lorentz factor for model A at different evolution times. Note the change in the scale between left and right panels. \label{f:lorerho}} \end{figure} \clearpage The propagation of the jet is unsteady, because of density inhomogeneities in the star. The Lorentz factor of the jet, $W$, increases non--monotonically with time, while the density drops to $\sim 10^{-6}$\,g/cm$^3$. The density profile shows large variations (up to a factor of 100) due to internal shocks. The mean density in the jet is $\sim 10^{-2} - 1$ \,g/cm$^3$. Some of the internal shocks are biconical and recollimate the beam. These shocks develop during the jet's propagation and may provide the ``internal shocks'' proposed to explain the observed gamma--ray emission \cite{Ka94}. A particularly strong recollimation shock forms during the early stages of the evolution, followed by a strong rarefaction that causes the largest acceleration of the beam material, giving rise to a maximum in $W$. When the jet encounters a region along the axis where the density gradient is positive, the jet's head is decelerated, while a central channel in the beam is cleared by outflow into the cocoon through the head. This leads to an acceleration of the beam. The combination of both effects (deceleration of the head and beam acceleration) increases the strength of the internal shocks. The relativistic treatment of the hydrodynamics leads to an overall evolution qualitatively similar to that in \cite{MW99} (formation of a jet), which is, however, quantitatively very different. 
We find that the results strongly depend on the angular resolution, the minimum acceptable one being $0.5^{\circ}$ (at least near the axis). At this resolution we find $W_{\rm max} \sim 15-20$ (at shock break--out) at a radius $\sim 8\times10^9$\,cm. Within the uncertainties of the jet mass determination due to finite zoning and the lack of a precise numerical criterion to identify jet matter, the baryon load, $\eta$, seems to decrease with increasing resolution. In the highest resolution run we find $\eta \simeq 1.3 \pm 1.2 $ at shock break-out (see also Sect.\,4). {\bf High energy deposition rate (Model B).} Increasing $\dot E$ tenfold ($\dot E = 10^{51}$\,erg/s), the jet flow reaches larger values of $W_{\rm max}$. We observe transients during which $W_{\rm max}$ becomes as large as 40 ($W_{\rm max} =33.3$ at shock breakout). The jet propagates faster than in model A. The time required to reach the surface of the star is 2.27\,s instead of 3.35\,s. The opening angle of the jet at shock breakout is $\sim 10^{\circ}$, {\sl i.e.,}\, the jet is less collimated than in model A. The strong recollimation shock present in model A is not so evident here. Instead, several biconical shocks are observed, and $W$ near the head of the jet is larger ($\sim 22$ in the final model) because, due to the larger $\dot E$, the central funnel is evacuated faster, and because the mean density in the jet is 5 times smaller than in model A ($\eta$ being twice as large). \begin{figure}[hbt] \centerline{\psfig{file=fig_2.ps,width=6cm}} \caption{\small Evolution of the axial and lateral sizes of the jet cavity during the post--breakout epoch. Time is measured with respect to the breakout time for each model. 
\label{f:size}} \end{figure} {\bf Evolution after shock breakout.} After reaching the stellar surface, the relativistic jet propagates through a medium of decreasing density, continuously releasing energy into a medium whose pressure is negligible compared to that in the jet cavity, and whose density is (initially) of the same order as that of the jet. These jump conditions generate a strong blast wave. The external density gradient determines whether the shock will accelerate or decelerate with time (\cite{Sh79}). In order to satisfy the conditions for accelerating shocks (\cite{Sh79}), we have generated a Gaussian atmosphere matching an external uniform medium. We use models A and B to simulate the evolution after shock breakout. The computational domain is extended for this purpose to a radius of $R_t = 7.6\times10^{10}$\,cm. The jet reaches $R_t$ (from the stellar surface) after 1.8\,s in both models, {\sl i.e.,}\, the mean propagation velocity is $\sim 0.85c$ (almost three times larger than that inside the star). The evolution after shock breakout can be divided into three epochs (see Figs.\,\ref{f:lorerho} and \ref{f:size}), which are related to (i) the external thermodynamical gradients and (ii) the importance of the axial momentum flux relative to the pressure in the jet cavity. Both effects determine the --prolate-- shape of the expanding bubble (see Figs.\,\ref{f:lorerho} and \ref{f:size}) during the post--breakout evolution. However, when the jet reaches the uniform part of the circumstellar environment, the shape changes appreciably, because the sideways expansion is faster. We have not followed the evolution long enough to see what happens when most of the bubble has reached the uniform part of the environment. Nevertheless, we can infer from Fig.\,2 that the widening rate decreases with time in a way similar to what has happened to the axial expansion. 
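The quoted mean propagation velocities follow directly from the grid dimensions and the crossing times given above (inner-boundary radius $2\times10^{7}$\,cm; 3.35\,s to reach the surface in model A; 1.8\,s from the surface to $R_t$); a quick consistency check:

```python
c_light = 2.998e10                 # speed of light [cm/s]
R_in = 2.0e7                       # inner boundary [cm]
R_star, R_t = 2.98e10, 7.6e10      # stellar surface and outer grid radius [cm]

beta_out = (R_t - R_star) / 1.8 / c_light   # mean speed after breakout, in units of c
beta_in = (R_star - R_in) / 3.35 / c_light  # model A: inner boundary to surface
ratio = beta_out / beta_in                  # "almost three times larger"
```

The numbers reproduce the quoted $\sim 0.85c$ post-breakout speed and a ratio close to three between the outside and inside propagation speeds.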
At later times most of the bubble is inside the uniform medium, and the bubble will eventually be pressure driven. Hence an isotropic expansion is expected. After shock breakout there are transients in which $W_{\rm max}$ becomes almost 50 in some parts of the beam; $W_{\rm max}$ is again reached behind the strongest recollimation shock. The Lorentz factor near the boundary of the cavity blown by the jet grows from $\sim1$ (at shock breakout) to $\sim 3$ in both models, decreasing with latitude. At the end of the simulation $W_{\rm max}$ is 29.35 (44.17) for model A (B), which is still smaller than the values required for the fireball model (\cite{CR78}). However, our simulations have not been pushed far enough in time yet and, therefore, they can (at the present stage) neither account for the observational properties of GRBs nor for those of their afterglows. Instead, our set of numerical models can be regarded as simulations of a proto-GRB, because the scales treated in the simulations are still more than 100 times smaller than the typical distances at which the fireball eventually becomes optically thin ($\sim 10^{13}$\,cm).
\section{Introduction} Statistical mechanics is a powerful tool for understanding and constructing optimization algorithms. On one hand, disordered systems, such as spin glasses or polymers, prompted the development of new algorithms (simulated annealing \cite{Kirkpatrick671}, cluster algorithms \cite{wolff89}, hysteretic optimization \cite{zarand02}). On the other hand, existing optimization algorithms have often been fruitfully analyzed in the statistical physics framework, yielding knowledge about their behavior, phase transitions and possible improvements \cite{mezard02, franz02, hartmann03, lukasz17, montanari02}. In recent years, the vast class of machine learning algorithms \cite{jordan2015machine} has enjoyed a great deal of attention. Neural networks \cite{goodfellow2016deep, aggarwal2018neural} are nowadays used to predict protein folding \cite{jumper2021highly}, search for exotic particles in high-energy colliders \cite{baldi2014searching}, predict phase transitions \cite{wetzel2017phase}, and in many other fields \cite{carleo2019machine}. At the same time, reinforcement learning \cite{sutton2018reinforcement, mnih2015human} has proven to be a valuable tool for finding optimal jet grooming strategies \cite{carrazza2019jet}, in the pursuit of the conformal bootstrap program \cite{kantor2022conformal}, or in the engineering of smart active matter \cite{celani2017flow}. Nonetheless, numerous questions about the algorithms' functioning remain unanswered \cite{zdeborova2020understanding}. Great progress has been made in the study of neural networks: the analogy between their highly non-convex loss-function landscapes and the free-energy landscapes of disordered systems has been extensively studied \cite{gardner1988optimal,barkai1992broken,huang2014origin}. 
It has been shown how the stochastic gradient descent algorithm \cite{robbins1951stochastic, bottou2010large} is prone to lead the network's weights towards suboptimal, yet robust and well-generalizing regions \cite{baldassi2016unreasonable, feng2021inverse}. However, all the results above apply to supervised learning problems, which can be mapped to disordered systems by interpreting the loss function as a Hamiltonian. Despite their recent successes, reinforcement learning algorithms have not yet received such an analysis. This is perhaps due to the lack of a clear mapping between RL problems and disordered systems. We aim to bridge this gap by studying a subset of reinforcement learning algorithms named policy gradients (PG) \cite{williams1992simple,sutton2000policy}. PG are the most universal training methods for reward-driven learning: they can be applied without additional knowledge of the agent's surroundings. Their main disadvantage is their tendency to converge to local maxima, thus learning a peculiar behavior that depends heavily on the initial parameters. Nonetheless, PG-based algorithms have been applied with tremendous success in areas such as robotics \cite{andrychowicz2020learning}, natural language processing \cite{paulus2017deep}, and games \cite{berner2019dota}. A proper understanding of the reasons for this success is still an open question. We obtain a description of the learning process in a convex landscape in terms of drift-diffusion dynamics. By mapping a non-convex RL setting to a spin glass at a finite temperature, we are able to explain the effect of hyperparameters on the learning success thanks to a mean-field analysis. As it turns out, the learning rate is coupled to the temperature and, thus, its variation allows one to perform annealing. 
\section{The reinforcement learning framework} The typical reinforcement learning setting, the so-called \textit{Markov decision process} \cite{bellman57}, consists of an agent acting in an environment with the purpose of maximizing a given utility function. The agent bases its decisions on the environmental \textit{state} $s\in \mathcal{S}$, choosing an \textit{action} $a\in \mathcal{A}$ according to its \textit{policy} $\pi(a|s)$. Subsequently, it receives feedback from the environment in terms of a \textit{reward} $R \in \mathbb{R}$, and the state of the environment changes to a new one, $s\rightarrow s'$. The reward is generated from a distribution conditioned on the state and the chosen action, $q(r|s,a)$, and the transition between states is governed by the probability density $p(s'|s,a)$. From this new state, a new action can be taken, generating again a new reward and a new state transition. The sequence of rewards obtained through this iteration is the agent's maximization goal. The central evaluated quantity is the \textit{return}: $G = \sum_{t=0}^\infty R_t \gamma^{t}$, i.e. the sum of the obtained reward sequence discounted by a factor $\gamma$, $0 \leq \gamma < 1$, which tunes the importance of memory. Note that we use capital letters for $R$ and $G$ because they are, in general, stochastic variables. The utility function of the agent is the average return: $Q_\pi(s,a) = E_{\pi,p,q} [G | S_0=s,A_0=a]$. Denoting the distribution of initial states by $\rho_0(s)$, the expected return of the policy $\pi$ reads: \begin{equation} J_\pi = \sum_s \rho_0(s) \sum_a \pi(a|s) Q_\pi(s, a) . \label{eq:return} \end{equation} Reinforcement learning aims to efficiently find a policy $\pi$ that maximizes $J_\pi$. In general, the agent does not know the rules that govern the environment (e.g. $p$ and $q$), and it must build its strategy based on the information that it acquires while learning. 
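For a finite reward sequence, the return $G = \sum_t \gamma^t R_t$ can be accumulated backwards with a Horner-like recursion, $G_t = R_t + \gamma G_{t+1}$; a minimal illustration with made-up rewards:

```python
def discounted_return(rewards, gamma):
    """G = sum_t gamma^t * R_t for a finite reward sequence, evaluated backwards."""
    G = 0.0
    for r in reversed(rewards):
        G = r + gamma * G
    return G

# Made-up example: 1 + 0.5*0 + 0.5**2 * 2 = 1.5
G = discounted_return([1.0, 0.0, 2.0], gamma=0.5)
```

The backward recursion avoids computing the powers $\gamma^t$ explicitly.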
In this Letter we analyze the \textit{policy gradient} algorithm \cite{sutton2018reinforcement}. It exploits the well-known idea of gradient ascent to find the maximum of the return function (\ref{eq:return}). In this case the policy $\pi(a|s,\boldsymbol{\theta})$ is parametrized with a $d$-dimensional set of numbers $\boldsymbol{\theta} = \{ \theta_1, \ldots, \theta_d\}$. The gradient ascent consists in updating these parameters in the direction of the steepest ascent of the average return (\ref{eq:return}). For state $s$ and action $a$, the gradient can be proven to be $\partial_{\boldsymbol{\theta}} J(\boldsymbol{\theta}) = E_{\pi, q, p} \left[ Q_\pi(s, a) \partial_{\boldsymbol{\theta}}\log\pi(a|s,\boldsymbol{\theta}) \right]$. However, since the agent does not know how to compute this average (it knows neither $p$ and $q$ nor the utility function), it has to rely on an estimate of this gradient. One solution is to use the quantity $(G(s,a) - h(s)) \partial_{\boldsymbol{\theta}} \log\pi(a|s,\boldsymbol{\theta})$, where $G(s,a)$ is an estimate of the quality function, and $h(s)$ is an arbitrary action-independent function called the \textit{baseline}. At each time step $t$, the new parameters $\boldsymbol{\theta}_{(t+1)}$ are derived from the current ones $\boldsymbol{\theta}_{(t)}$ by adding the gradient, multiplied by a coefficient $\alpha$ called the \textit{learning rate}. 
To render the procedure invariant under the policy parametrization, one can fix the Kullback-Leibler divergence $D(\pi_{t+1} || \pi_t)$ at all steps, therefore obtaining the so-called \textit{natural policy gradient} \cite{kakade2001natural, bhatnagar2009natural}: \begin{equation} \begin{split} \boldsymbol{\theta}_{(t+1)} = \boldsymbol{\theta}_{(t)} + \alpha \; F_{(t)}^{-1} \; & \left( G(s_{(t)}, a_{(t)}) - h(s_{(t)}) \right) \\ &\times \partial_{\boldsymbol{\theta}}\log\pi(a_{(t)}|s_{(t)},\boldsymbol{\theta}_{(t)}), \end{split} \label{eq:natural_PG} \end{equation} where \begin{equation} (F)_{ij} = E_\pi \left[ \partial_{\theta_i} \log \pi(a|s,\boldsymbol{\theta}) \partial_{\theta_j} \log \pi(a|s,\boldsymbol{\theta}) \right]. \end{equation} The matrix $F$ is the \textit{Fisher information metric} of the policy for the parameters $\boldsymbol{\theta}$ \cite{amari2016information}. There are several ways to choose $G(s,a)$, defining different types of policy gradient algorithms. One straightforward possibility is to estimate the future return by sampling the rewards for the next step of the process at fixed policy. This procedure is called the \textit{reinforce policy gradient} \cite{williams1992simple}. \section{Diffusion approximation for one-dimensional k-armed bandit} We will begin our analysis by studying a case in which a single agent can use $k$ actions in an environment composed of only one state. Such a problem is known in the literature as the \textit{k-armed bandit} \cite{lattimore2020bandit}, since it is analogous to a slot machine with $k$ arms, for which the player must infer which arms give better rewards while trying to maximize their winnings. We will start with a scenario with only two possible actions: $\mathcal{A} = \{1,2\}$. Since the gradient is not affected by the particular parametrization choice, we will use the convenient softmax function: \begin{equation} \pi(1|\theta) = x(\theta) = \frac{1}{1+e^{-\theta}}, \quad \pi(2|\theta) = 1-x(\theta). 
\label{eq:param} \end{equation} At every step $t$, the agent will choose actions $1$ and $2$ with probabilities $x(t) \equiv x(\theta(t))$ and $1-x(t)$, respectively. This will yield the total average return (\ref{eq:return}) for $\gamma = 0$: \begin{equation} J(\theta(t)) = x(t) R_1 +(1-x(t))R_2, \end{equation} where $R_a$ represents the stochastic reward drawn from the corresponding distribution $R_a \sim q_a=\mathcal{N}(r_a,\sigma_a)$. The bandit setting allows us to choose a zero discount factor $\gamma=0$ without loss of generality, since the best policy is independent of it; we will keep this choice throughout the rest of this Letter. Our aim is to obtain an effective stochastic description of the temporal evolution of the learning process, i.e.\ of the trajectory of the policy $x(t)$. In supervised learning, the effective noise of stochastic gradient descent is often modeled by heavy-tailed distributions \cite{gurbuzbalaban2021heavy,xie2020diffusion}. In our case, since the stochasticity is induced by uncorrelated Gaussian fluctuations in the rewards, we can describe the process in terms of a Langevin equation: \begin{equation}\label{eq:lang} \frac{dx}{dt} = u(x) + \sqrt{2D(x)}\cdot \eta_t, \end{equation} % where $\eta_t$ is white Gaussian noise with zero mean and correlation $E_t[\eta_\tau\eta_{\tau'}] = \delta(\tau-\tau')$. To this end, we expand the policy for small $\alpha$ in a Taylor series: \begin{equation} d x(t)= \left.\frac{d x}{d\theta} \right|_{\theta=\theta_{(t)}} d\theta_{(t)}+ \left. \frac{1}{2}\frac{d^2 x}{d\theta^2}\right|_{\theta=\theta_{(t)}} d\theta_{(t)}^2 +o(\alpha^2). \end{equation} Substituting the parameter update (\ref{eq:natural_PG}) in this expression, and computing the derivatives of (\ref{eq:param}), we obtain the policy increments. The drift and the diffusion terms are given by the average and the variance of these increments, $u(x) = E_{t} [\dot{x}(t) | x(t)]$ and $D(x) = \text{Var}_{t} [\dot{x}(t) | x(t)]/2$.
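The conditional average defining $u(x)$ can also be estimated directly. The following sketch (illustrative only: the reward parameters, sample size and seed are arbitrary choices) performs one natural-gradient step many times at a fixed policy and compares the empirical mean increment of $x$ with the analytical drift derived below, $\alpha x(1-x)(r_1-r_2)+\frac{\alpha^2}{2}(1-2x)m$.

```python
import numpy as np

rng = np.random.default_rng(1)

def empirical_drift(x, r1, r2, sigma, alpha, h=0.0, n=200_000):
    """Monte Carlo estimate of E[dx | x] for one natural-PG step at fixed policy x."""
    theta = np.log(x / (1.0 - x))                  # logit such that pi(1) = x
    act1 = rng.random(n) < x                       # sampled actions (True = action 1)
    G = np.where(act1, rng.normal(r1, sigma, n), rng.normal(r2, sigma, n))
    grad_log = np.where(act1, 1.0 - x, -x)         # d log pi / d theta
    fisher = x * (1.0 - x)                         # scalar Fisher information
    d_theta = alpha * (G - h) * grad_log / fisher  # natural-gradient increment
    x_new = 1.0 / (1.0 + np.exp(-(theta + d_theta)))
    return (x_new - x).mean()

x, alpha, r1, r2, sigma = 0.3, 0.05, 1.0, -1.0, 1.0
drift = empirical_drift(x, r1, r2, sigma, alpha)
m = (1 - x) * (sigma**2 + r1**2) + x * (sigma**2 + r2**2)   # baseline h = 0
theory = alpha * x * (1 - x) * (r1 - r2) + 0.5 * alpha**2 * (1 - 2 * x) * m
print(drift, theory)
```

The two printed numbers differ only by higher-order terms in $\alpha$ and by Monte Carlo noise.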
We refer the reader to the Supplemental Material for a thorough derivation of these terms, while reporting here only their final form obtained by expanding up to the second order in $\alpha$: \begin{equation} \begin{aligned} u(x) =& \; \underbrace{ \alpha x (1-x) (r_1-r_2)}_{\text{Selection}} + \underbrace{ \frac{\alpha^2}{2} (1-2x) \; m }_{\text{Mutations}}, \\ D(x) = & \underbrace{\frac{\alpha^2}{2} x (1-x)d_1 +\frac{\alpha^4}{4}(1-2x)^2d_2}_{\text{Random genetic drift}} . \end{aligned} \label{eq:coeff_NPG_diff} \end{equation} The three coefficients $m$, $d_1$ and $d_2$ are positive and depend on the reward variances as well as the policy, the average rewards, and the baseline: \begin{equation} \begin{aligned} m=& (1-x) \left(\sigma_{1}^2 + l_1^2 \right) + x \left(\sigma_{2}^2 + l_2^2 \right), \\ d_1 =& (1-x) \sigma_{1}^2 + x \sigma_{2}^2 + \left[ (1-x)l_1 + x l_2 \right]^2, \\ d_2=& (1-x)^2\frac{(3-x)c_1^2-2l_1^4}{x} +x^2 \frac{(2+x)c_2^2-2l_2^4}{1-x} , \end{aligned} \label{eq:mut_drift_funct} \end{equation} where $c_a=\sigma_a^2+l_a^2$ and $l_a=r_a-h$. It is interesting to highlight the similarity with an evolving population of competing species/genotypes, described by the Kimura equation \cite{kimura1964diffusion,baake2000biological}: \begin{equation} \begin{aligned} & u_{K}(x) = \underbrace{ x (1-x) (f_1-f_2)}_{\text{Selection}} \underbrace{ - \mu_{12} x + \mu_{21}(1-x)}_{\text{Mutations}}, \\ & D_{K}(x) = \underbrace{\frac{1}{2 N} x (1-x)}_{\text{Random genetic drift}} , \end{aligned} \label{eq:kimura} \end{equation} where $f_i$ is the fitness of the genotype $i$, $\mu_{ij}$ is the mutation rate from genotype $i$ to $j$, and $N$ is the population size. The mapping is obtained by identifying the genotypes with the actions and the genotype frequencies with the corresponding action probabilities. In contrast to our expansion, the Kimura equation is obtained by manually adding the evolutionary forces: \textit{selection}, \textit{mutation} and \textit{random genetic drift}.
Our derivation can perhaps be considered more natural and clearly shows the symmetry between the deterministic and stochastic forces, adding a term proportional to $(1-2x)$ in the diffusion coefficient. \begin{figure} \includegraphics[width=1.0\columnwidth]{fig1.pdf} \caption{Top: $10^2$ lightly shaded trajectories of the action probability $x$ generated by a natural policy gradient for the 2-armed bandit, along with their mean, compared to the Langevin dynamics (\ref{eq:coeff_NPG_diff}). The rewards are distributed as $\mathcal{N}_{1(2)}(r=\pm1,\sigma=1)$, while the learning rate is $\alpha=0.01$, and the initial policy is close to the worst one $x_0=0.975$. Bottom: The contributions of mutation and selection on the average Langevin dynamics near the boundaries, compared to the natural policy gradient. Rewards are distributed as $\mathcal{N}_{1(2)}(r=\pm1,\sigma=9)$, $\alpha=0.01$, $x_0=0.5$.} \label{fig:exp_sim_slow_learning} \end{figure} It is now easy to grasp how these dynamics evolve and how they are affected by the algorithm's parameters. Figure \ref{fig:exp_sim_slow_learning} shows the effects of the drift coefficient on the gradient dynamics. The two terms correspond to natural selection and mutations, and can be tuned with the learning rate. For a large learning rate, the policy is pushed away from pure strategies, i.e.\ the vertices of the probability simplex. Conversely, for small learning rates, the policy tends to converge to the best action. The intrinsic stochasticity of the algorithm appears in the diffusion coefficient (\ref{eq:coeff_NPG_diff}): small learning rates confine stochasticity to the bulk of the strategy simplex ($x\approx 1/2$), while higher rates generate larger fluctuations in the vicinity of pure strategies, as shown in appendix II. These insights can be used to improve the convergence of the dynamics by treating the learning rate as a dynamical variable, which can be tuned according to a time schedule \cite{darken1992gradient}.
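A minimal Euler--Maruyama sketch of the Langevin dynamics (\ref{eq:coeff_NPG_diff}) illustrates this tuning; the reward parameters, the decaying schedule $\alpha(t)\propto 1/\sqrt{1+t}$, the number of trajectories and the clipping that keeps $x$ inside the simplex are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

r1, r2, s1, s2, h = 1.0, -1.0, 1.0, 1.0, 0.0   # reward means/stds and baseline
l1, l2 = r1 - h, r2 - h
c1, c2 = s1**2 + l1**2, s2**2 + l2**2

def drift_diffusion(x, alpha):
    """Coefficients u(x) and D(x) of the diffusion approximation."""
    m  = (1 - x) * (s1**2 + l1**2) + x * (s2**2 + l2**2)
    d1 = (1 - x) * s1**2 + x * s2**2 + ((1 - x) * l1 + x * l2)**2
    d2 = ((1 - x)**2 * ((3 - x) * c1**2 - 2 * l1**4) / x
          + x**2 * ((2 + x) * c2**2 - 2 * l2**4) / (1 - x))
    u = alpha * x * (1 - x) * (r1 - r2) + 0.5 * alpha**2 * (1 - 2 * x) * m
    D = 0.5 * alpha**2 * x * (1 - x) * d1 + 0.25 * alpha**4 * (1 - 2 * x)**2 * d2
    return u, D

# Euler-Maruyama with unit time step and a decaying learning-rate schedule.
n_traj, n_steps, alpha0 = 500, 5000, 0.05
x = np.full(n_traj, 0.2)
for t in range(n_steps):
    alpha = alpha0 / np.sqrt(1 + t)
    u, D = drift_diffusion(x, alpha)
    x += u + np.sqrt(2 * np.clip(D, 0.0, None)) * rng.normal(size=n_traj)
    x = np.clip(x, 1e-3, 1 - 1e-3)       # keep trajectories inside the simplex
print(x.mean())
```

As the schedule decays, the late-time fluctuations shrink and the ensemble concentrates on the better action.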
The approximation in terms of an It\^{o} stochastic equation allows us to use It\^{o}'s lemma to derive the optimal scheduling of the learning rate. This turns out to be $\alpha(t)\propto 1/\sqrt{t}$, which is consistent with the results for the so-called Exp3 algorithm \cite{lattimore2020bandit}; all details of the derivation can be found in appendix I. All the obtained results can be easily generalized to the case in which the agent has $k$ possible actions and their probabilities follow a $k$-dimensional drift-diffusion motion: \begin{equation} d\pi_a = u_a(\pi)dt + \sum_{b=1}^{k}\sigma_{ab}(\pi) dW_b \quad,\label{sup:multidim} \end{equation} expressed here in the It\^{o} form. The resulting coefficients for this motion are \begin{equation} \begin{aligned} u_a = & \alpha\pi_a \Bigg( r_a-\sum_b r_b\pi_b\Bigg) + \\ & \frac{\alpha^2}{2}\Bigg(\sigma_a^2 (1-\pi_a)(1-2\pi_a) -\sum_{b\neq a} \sigma^2_b(1-2\pi_b)\pi_a \Bigg),\\ &\\ D_{ab} = & \frac{\alpha^2}{8} \pi_a\pi_b\Bigg(\delta_{ab}\frac{\sigma_a^2}{\pi_a} + \sum_{c\neq a,b} \pi_c \sigma_c^2 \\ & \qquad \qquad -(1-\pi_a)\sigma_a^2 - (1-\pi_b)\sigma_b^2 \Bigg). \end{aligned} \label{eq:coef_simply} \end{equation} They drive the trajectory towards the best action by a so-called replicator dynamics \cite{schulster83} proportional to $\alpha$, and away from pure strategies by the mutation term proportional to $\alpha^2$. In addition, the diffusion term scatters the trajectory proportionally to the rewards' variances. A thorough derivation of these results is reported in the Supplemental Material. \section{p-dimensional k-armed bandit} The $k$-armed bandit can be viewed as a special case of a more general model in which the return is expressed as \begin{equation}\label{eq:pdim} J = \sum^K_{i_1,i_2,\ldots,i_p=1}R_{i_1 i_2 \ldots i_p} \pi_{i_1}\cdot\pi_{i_2}\cdot\ldots\cdot\pi_{i_p}, \end{equation} where $\sum_{i=1}^K \pi_i = 1, \; \pi_i\geq 0 \; \forall i\in\{i_1,\dots,i_p\}$.
Each probability distribution $\pi_i$ is defined over a distinct set of $K$ actions. All $p$ such sets are independent. This picture can be viewed simply as a factorization of the overall distribution $\pi=\prod_i \pi_i$. It arises naturally when one deals with an agent performing a set of actions at each time step and the task is to optimize the resulting overall behavior. For instance, robotics deals with a multitude of artificial joints flexed simultaneously \cite{andrychowicz2020learning, todorov18}, producing a highly non-convex cost landscape, as portrayed in Fig. \ref{fig:hand_landscape}. Furthermore, this model describes $p$ interacting agents, each independently performing its own set of $K$ actions \cite{littman1994markov}. The reward coefficients of each agent $R_{i_1\ldots i_p}$ could be different in this case, but when all agents share the same constant coefficients, this is a generalization of the random replicant model \cite{diederich89, opper92, biscari95}. Another useful interpretation arises when an agent is performing a sequence of actions in a state-changing environment, so that for each state $s_t$, $\pi_t$ is the policy over the set of its $K$ actions. The ordered set $(\pi_1,\pi_2,\dots,\pi_p)$ then corresponds to the sequence of policies undertaken. \begin{figure} \includegraphics[width=.9\columnwidth]{inferno_landscape.pdf} \caption{An example of the return (energy) landscape of a robotic hand bending two fingers. Each finger can bend to $11$ different angles; the return $J$ is a function of the overall configuration.} \label{fig:hand_landscape} \end{figure} What is remarkable about this model is that now we have a clear way to map a reinforcement learning problem to a disordered system.
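This mapping can be made tangible with a toy landscape. The sketch below (illustrative only; $p=2$, $K=11$ and the random seed are arbitrary choices echoing the two-finger configuration of Fig.~\ref{fig:hand_landscape}) draws a random mean-reward tensor with variance $\sim 1/K$, evaluates the return on pure configurations, where it reduces to the tensor itself, and counts the local maxima over neighbouring configurations:

```python
import numpy as np

rng = np.random.default_rng(3)

# p = 2 factors with K = 11 actions each, e.g. two fingers with 11 angles.
K = 11
R = rng.normal(0.0, 1.0 / np.sqrt(K), size=(K, K))  # mean rewards, var ~ 1/K

def J(pi1, pi2):
    """Return J = sum_ij R_ij pi1_i pi2_j for two independent distributions."""
    return pi1 @ R @ pi2

# On pure strategies (e_i, e_j) the return reduces to the entry R[i, j];
# count strict local maxima of this grid landscape over 4-neighbourhoods.
n_max = 0
for i in range(K):
    for j in range(K):
        nbrs = [R[a, b] for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                if 0 <= a < K and 0 <= b < K]
        n_max += all(R[i, j] > v for v in nbrs)
print(n_max, "local maxima among pure configurations")
```

Even this small instance is rugged, with several competing local maxima, which is the feature exploited by the disordered-system picture.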
This can be achieved by taking the instantaneous rewards to be normally distributed around their mean values $\mathcal{N}(\overline{R}_{i_1 i_2 \ldots i_p},\sigma_{i_1 i_2 \ldots i_p})$, and considering the system described by the Hamiltonian $H\equiv -\overline{J}$, obtained by substituting the mean rewards in (\ref{eq:pdim}). Its temperature $T(\sigma)$ is defined by the specific learning algorithm, and for a policy gradient is proportional to the diffusion coefficient of the Langevin dynamics (\ref{eq:lang}). The PG dynamics is described by a system of $p$ multidimensional Langevin equations, navigating through the rough landscape of (\ref{eq:pdim}). To evaluate the effect of the learning rate on this motion, we will shift our perspective from the probabilities $\pi$ to the parameters $\theta$. The latter form a basis defined by \begin{equation} d\boldsymbol{\theta} \propto \alpha \nabla\ln \pi = \nabla \ln \phi, \qquad \phi=\pi^\alpha. \end{equation} In other words, we move from a picture in which the learning rate affects the parameters' change to one in which the learning rate affects the slope of the probability manifold. We can define the following Hamiltonian for this new landscape, \begin{equation}\label{eq:newham} H=-\sum_{i_1,i_2,\ldots,i_p}^K \overline{R}_{i_1i_2\ldots i_p} \phi^{1/\alpha}_{i_1}\phi^{1/\alpha}_{i_2}\ldots \phi^{1/\alpha}_{i_p}. \end{equation} We take $K$ to be large and the mean rewards to be self-averaging, i.e.\ distributed as $\overline{R}\sim\mathcal{N}(0,\sigma)$ with $\sigma^2\sim 1/K$. This allows us to conveniently exploit methods of mean-field theory to analyze this landscape \cite{mezard1987spin}.
Its average partition function over the variables $\pi$ will look similar to the partition function of the spherical $p$-spin \cite{kirkpatrick1987p,crisanti1992spherical} with planar rather than spherical constraints: \begin{equation}\label{eq:part1} \begin{split} \langle Z\rangle &= \int_0^\infty\prod_{i=1}^K d^p\pi_i \, \delta^p(\sum_i\pi_i - K) \\ &\times \int_{-\infty}^{+\infty} \prod_{i_1,\ldots,i_p} d\overline{R}_{i_1\ldots i_p} \\ & \times \exp{\left[-\overline{R}^2_{i_1\ldots i_p}K^p +\beta \overline{R}_{i_1\ldots i_p}\pi_{i_1}\ldots\pi_{i_p}\right]}, \end{split} \end{equation} where $\beta=1/T$. This expression can be rendered tractable, in order to compute its mean value, by the replica trick $\langle\ln Z\rangle = \lim_{n\to 0} \frac{1}{n} \ln \langle Z^n\rangle$: \begin{equation}\label{eq:pspin} \langle Z^n\rangle = \int D\pi \exp{\left[\frac{\beta^2}{4K^{p-1}}\sum^n_{a,b}\left(\sum_i^K\pi^a_i\pi^b_i\right)^p\right]}, \end{equation} where $\int D\pi$ is a shorthand for the measure $\prod_{a=1}^n\prod_{i=1}^K d\pi_i^a \, \delta(\sum_j\pi_j^a - K)$. Introducing $Q_{ab}=\sum_i\pi_i^a\pi_i^b$ by inserting the identity $1=\int\delta(Q_{ab} - \sum_i\pi_i^a\pi_i^b)\,dQ_{ab}$, and changing to Fourier representations for all delta functions, we obtain \begin{equation} \begin{split} \langle Z^n\rangle & = \int \prod_{a,b}^{n}\prod_{i}^KdQ_{ab}d\Lambda_{ab}d\xi^a d\pi^a_i \cdot \\ & \cdot \exp\left[ \frac{\beta^2 K}{4} \sum_{ab}Q^p_{ab} + K\sum_{ab}Q_{ab}\Lambda_{ab} \right.\\ & \left. - \sum_i\sum_{ab}\Lambda_{ab}\pi^a_i\pi^b_i -\sum_{ia}\xi^a\pi^a_i + K\sum_a\xi_a \right]. \end{split} \end{equation} In the limit $K \rightarrow \infty$, the integral is dominated by the saddle point of the exponent's argument, and the free energy can be recovered by solving the corresponding system of saddle-point equations. In the neighborhood of a pure strategy (where $\pi_a\approx 1, \; \pi_b\approx0 \; \forall \; b\neq a$), the partition function for the Hamiltonian (\ref{eq:newham}) can be recovered from Eq.
(\ref{eq:pspin}) by substituting $p\rightarrow p/\alpha$. This will affect the saddle point equation containing the temperature \begin{equation} 0=\frac{p}{4T^2\alpha}Q_{ab}^{\frac{p}{\alpha}-1} + \Lambda_{ab} \end{equation} in a fundamental way: it amounts to the rescaling $T \rightarrow \sqrt{\alpha}\, T$. Thus, $\sqrt{\alpha}$ rescales the effective temperature and thereby the shape of the free energy landscape. \section{Discussion} Our analysis sheds light on the ability of policy gradient to overcome obstacles in complex reward landscapes. It appears that the dynamics of policies under PG follows a drift-diffusion motion with parameters strongly influenced by the learning rate. Higher values of the latter allow the policy to scatter and overcome obstacles. This picture is corroborated by our mean-field analysis of the free energy landscape for a complex reward scenario with multiple local minima. The learning rate appears to act as an effective temperature smoothing the free energy landscape. It follows that scheduling of this parameter is essential to ensure the convergence to high-value maxima. Furthermore, this scheduling corresponds to the physical process of annealing. This paves the road to a plethora of physics-inspired optimizations (as proposed, for instance, in \cite{zarand02, houdayer99, mobius97}) to PG algorithms. \begin{figure} \includegraphics[width=.49\columnwidth]{2p1.pdf} \includegraphics[width=.49\columnwidth]{2p2.pdf} \caption{Two average trajectories of the Natural Policy Gradient in a zero-sum game, corresponding to two different learning rates. Each point on the trajectories represents a pair $(\pi^1(t),\pi^2(t))$. The average rewards are $\overline{R}_1=((1,-1),(-1,1))$, $\overline{R}_2 = ((-1,1),(1,-1))$. The variance for all the rewards is equal to $\sigma=1$.
The starting point is $(0.75,0.75)$, while the Nash equilibrium is at $(1/2,1/2)$.}\label{fig:2p_NPG} \end{figure} The $p$-dimensional $k$-armed bandit introduced here serves as a handy model to unify the description of partitioned policies, multi-state environments, and multi-agent interactions, by mapping them to a disordered system at finite temperature. This can be particularly well illustrated in the case of $p=2$, which can be interpreted as a Matrix Game \cite{vonneumann2007,nash51,berg98,berg99,berg00} between two players, each having its own reward matrix $R_{1(2)}$. It has been shown \cite{galla07} that replicator dynamics with cooperation pressure $u$ does not converge to all Nash equilibria below a critical value of $u$, unless we deal with a zero-sum game, i.e.\ $R_1=-R_2^T$. On the other hand, the cooperation pressure acts in the replicator equation as the mutation term does in the Langevin approximation of PG. In the case of a zero-sum game, the replicator trajectories can only factorize into a number of converging spirals as shown in the left side of Fig. \ref{fig:2p_NPG}, since Nash equilibria for pure strategies are suppressed for $K\rightarrow\infty$. If, instead, $R_1\neq-R^T_2$, the dynamics can converge to pure strategies, but such equilibria have been shown to give rise to a spin glass phase for low values of $u$ \cite{galla07}. \begin{acknowledgments} We would like to thank Antonio Celani, Andrea Mazzolini and Enrico Malatesta for the thoughtful discussions and precious insights on the topic. \end{acknowledgments}
\section{Introduction} Our aim in this paper is to discuss when the set of those Lipschitz maps which strongly attain their norm is dense in the space of Lipschitz maps. Let us give the necessary definitions. A \emph{pointed metric space} is just a metric space $M$ in which we distinguish an element, called $0$. All along the paper, the metric spaces will be complete and the Banach spaces will be over the real scalars. Given a pointed metric space $M$ and a Banach space $Y$, we write ${\mathrm{Lip}}_0(M,Y)$ to denote the Banach space of all Lipschitz maps $F:M\longrightarrow Y$ which vanish at $0$, endowed with the Lipschitz norm defined by \begin{equation}\label{eq:def-norma-Lipschitz} \| F \|_L := \sup\left\{\frac{\|F(x)-F(y)\|}{d(x,y)} \colon x,y\in M,\, x \neq y \right\}. \end{equation} Let us point out that the choice of the distinguished element is not important, as the resulting spaces of Lipschitz maps are isometrically isomorphic. Following \cite{kms} and \cite{Godefroy-survey-2015}, we say that $F\in {\mathrm{Lip}}_0(M,Y)$ \emph{attains its norm in the strong sense} or that it \emph{strongly attains its norm} whenever the supremum in \eqref{eq:def-norma-Lipschitz} is actually a maximum, that is, whenever there are $x,y\in M$, $x\neq y$, such that $$ \frac{\|F(x)-F(y)\|}{d(x,y)}=\|F\|_L. $$ The subset of all Lipschitz maps in ${\mathrm{Lip}}_0(M,Y)$ which attain their norm in the strong sense is denoted by $\operatorname{SNA}(M,Y)$. As far as we know, the study of norm attaining Lipschitz maps was initiated independently in \cite{Godefroy-PAFA-2016} and \cite{kms}. Both papers deal with notions of norm attainment which are different from the strong one, and they focus on Lipschitz functionals (\cite{kms}) or on vector-valued Lipschitz maps (\cite{Godefroy-PAFA-2016}) defined on Banach spaces. The paper \cite{Godefroy-PAFA-2016} contains some negative results on the density of the set of Lipschitz maps which attain their norm in a very weak way.
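To illustrate the definition of strong norm attainment, let us record a standard example (included here only for the reader's convenience; it is not taken from the papers just cited) of a Lipschitz function whose norm is a supremum but not a maximum.

```latex
% Illustrative example: a Lipschitz function failing strong norm attainment.
Let $M=[0,1]$ with the usual metric and $0$ as distinguished point, and let
$f(t)=t-\frac{t^2}{2}$. For $x\neq y$ in $[0,1]$ we have
\[
  \frac{f(x)-f(y)}{x-y}=1-\frac{x+y}{2}<1,
\]
while letting $x,y\to 0$ with $x\neq y$ shows that $\|f\|_L=1$. Hence the
supremum in \eqref{eq:def-norma-Lipschitz} is not attained and
$f\notin\operatorname{SNA}(M,\mathbb{R})$.
```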
The paper \cite{kms} contains positive results on the density of the set of Lipschitz functionals which attain their norm ``directionally'', a notion which is weaker than the strong norm attainment. It also contains negative results: when $M$ is a Banach space, $\operatorname{SNA}(M,\mathbb{R})$ is not dense in ${\mathrm{Lip}}_0(M,\mathbb{R})$, and the same happens when $M=[0,1]$ or, more generally, when $M$ is metrically convex (or geodesic) (see Section \ref{sect:negative} for the definition). Our first aim in this paper will be to extend these negative results to more general metric spaces such as length spaces and subsets of $[0,1]$ with positive Lebesgue measure; see the details in Section \ref{sect:negative}. As a consequence of our results, we will obtain examples of metric spaces $M$ where $\operatorname{SNA}(M,\mathbb{R})$ is not dense in ${\mathrm{Lip}}_0(M,\mathbb{R})$ and, in contrast with the previously known results, no connectedness assumption is needed on $M$ (e.g.\ we can consider $M$ to be a ``fat'' Cantor set). On the other hand, the paper \cite{Godefroy-survey-2015} contains the first positive result on the density of strongly norm attaining Lipschitz functionals (and also some results for Lipschitz maps): this is the case when the little Lipschitz space over $M$ strongly separates $M$ (as is the case when $M$ is compact and countable \cite{dalet}, when $M$ is the middle third Cantor set, or when $M$ is a compact H\"older space \cite[Proposition~3.2.2]{wea5}). A slight generalisation can be found in \cite[Section 4]{gprstu}. These results have been recently extended in \cite[Proposition 7.4]{lppr}, but we need a little more background in order to state the result. Let $M$ be a pointed metric space. We denote by $\delta$ the canonical isometric embedding of $M$ into ${\mathrm{Lip}}_0(M,\mathbb{R})^*$, which is given by $\langle f, \delta(x) \rangle =f(x)$ for $x \in M$ and $f \in {\mathrm{Lip}}_0(M,\mathbb{R})$.
We denote by $\mathcal{F}(M)$ the norm-closed linear span of $\delta(M)$ in the dual space ${\mathrm{Lip}}_0(M,\mathbb{R})^*$, which is usually called the \textit{Lipschitz-free space over $M$}, see the papers \cite{Godefroy-survey-2015} and \cite{gk}, and the book \cite{wea5} (where it receives the name of Arens-Eells space) for background on this. It is well known that $\mathcal{F}(M)$ is an isometric predual of the space ${\mathrm{Lip}}_0(M,\mathbb{R})$ \cite[p.~91]{Godefroy-survey-2015}, indeed it is the unique isometric predual when $M$ is bounded or a geodesic space \cite{Weaver-2017}. Now, \cite[Proposition 7.4]{lppr} states that if $\mathcal{F}(M)$ has the Radon-Nikod\'{y}m property (RNP in short), then $\operatorname{SNA}(M,Y)$ is dense in ${\mathrm{Lip}}_0(M,Y)$ for every Banach space $Y$, extending by far the results of \cite{Godefroy-survey-2015}. At the beginning of Section \ref{sec:property_A} we will give a short exposition of why this result holds. Examples of metric spaces for which $\mathcal{F}(M)$ has the RNP are exhibited in Example \ref{ejernp}. There is a connection between the study of the density of norm attaining Lipschitz maps and the study of norm attaining linear operators, a research line which goes back to Lindenstrauss' seminal paper \cite{lindens} from 1963. Let us give a piece of notation for Banach spaces. Given a Banach space $X$, we will denote by $B_X$ and $S_X$ the closed unit ball and the unit sphere of $X$, respectively. We will also denote by $X^*$ the topological dual of $X$. If $Y$ is another Banach space, we write $\mathcal{L}(X,Y)$ to denote the Banach space of all bounded linear operators from $X$ to $Y$, endowed with the operator norm. We say that $T\in \mathcal{L}(X,Y)$ \emph{attains its norm}, and write $T\in \operatorname{NA}(X,Y)$, if there is $x\in X$ with $\|x\|=1$ such that $\|Tx\|=\|T\|$.
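As a reminder of why the density of norm attaining operators is a non-trivial issue even for functionals, we include the classical example (standard, stated here only for illustration) of a functional which does not attain its norm.

```latex
% Classical example: an operator norm which is a supremum but not a maximum.
Consider $X=c_0$ and the functional $T\in\mathcal{L}(c_0,\mathbb{R})$ given by
$T(x)=\sum_{n=1}^{\infty}2^{-n}x(n)$. Then $\|T\|=\sum_{n=1}^{\infty}2^{-n}=1$,
but every $x\in S_{c_0}$ satisfies $|x(n)|<1$ for $n$ large enough, so
$|T(x)|<1$. Hence $T\notin\operatorname{NA}(c_0,\mathbb{R})$, even though the
Bishop-Phelps theorem ensures that $\operatorname{NA}(c_0,\mathbb{R})$ is dense
in $c_0^*$.
```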
The study of the density of norm attaining linear operators has its roots in the classical Bishop-Phelps theorem, which states that $\operatorname{NA}(X,\mathbb{R})$ is dense in $X^*=\mathcal{L}(X,\mathbb{R})$ for every Banach space $X$. J.~Lindenstrauss extended this study to general linear operators, showed that such density does not always hold, and also gave positive results. If we say that a Banach space $X$ has \emph{(Lindenstrauss) property A} when $\overline{\operatorname{NA}(X,Y)}=\mathcal{L}(X,Y)$ for every Banach space $Y$, it is shown in \cite{lindens} that reflexive spaces have property A. This result was extended by J.~Bourgain \cite{bourgain1977}, who showed that Banach spaces $X$ with the RNP also have Lindenstrauss property A and who also provided a partial converse. We refer the interested reader to the survey paper \cite{AcostaRACSAM2006} for a detailed account on norm attaining linear operators. Coming back to Lipschitz maps, let us recall that when $M$ is a pointed metric space and $Y$ is a Banach space, it is well known that every Lipschitz map $f \colon M \longrightarrow Y$ can be isometrically identified with the continuous linear map $\widehat{f} \colon \mathcal{F}(M) \longrightarrow Y$ defined by $\widehat{f}(\delta(p))=f(p)$ for every $p \in M$. This mapping completely identifies the spaces ${\mathrm{Lip}}_0(M,Y)$ and $\mathcal{L}(\mathcal{F}(M),Y)$. Bearing this fact in mind, the set $\operatorname{SNA}(M,Y)$ is identified with the set of those elements of $\mathcal{L}(\mathcal{F}(M),Y)$ which attain their operator norm at elements of the form $\frac{\delta(x)-\delta(y)}{d(x,y)}$ for some $x,y\in M$, $x\neq y$. It then follows that if $\operatorname{SNA}(M,Y)$ is dense in ${\mathrm{Lip}}_0(M,Y)$, then, in particular, $\operatorname{NA}(\mathcal{F}(M),Y)$ is dense in $\mathcal{L}(\mathcal{F}(M),Y)$.
The converse result is not true as, for instance, $\operatorname{NA}(\mathcal{F}(M),\mathbb{R})$ is always dense by the Bishop-Phelps theorem but, as we have already mentioned, there are many metric spaces $M$ such that $\operatorname{SNA}(M,\mathbb{R})$ is not dense in ${\mathrm{Lip}}_0(M,\mathbb{R})$. Of course, if $\operatorname{SNA}(M,Y)$ is dense in ${\mathrm{Lip}}_0(M,Y)$ for every Banach space $Y$, then $\mathcal{F}(M)$ has Lindenstrauss property A. We do not know whether the converse result is true, but the appearance of the RNP of $\mathcal{F}(M)$ as a sufficient condition for the density of $\operatorname{SNA}(M,Y)$ in ${\mathrm{Lip}}_0(M,Y)$ for every $Y$ \cite[Proposition 7.4]{lppr} is now not very surprising. Actually, as far as we know, the RNP of $\mathcal{F}(M)$ could be a necessary condition for the density of $\operatorname{SNA}(M,Y)$ in ${\mathrm{Lip}}_0(M,Y)$ for every space $Y$. On the other hand, there are several geometric properties of a Banach space $X$ which imply Lindenstrauss property A; the most common ones, apart from $X$ having the RNP, are the properties $\alpha$ and quasi-$\alpha$ and the existence of a uniformly strongly exposed subset of $B_X$ whose closed convex hull is the whole $B_X$. In Section \ref{sec:property_A}, we analyse these properties for Lipschitz-free spaces and show that each of them actually forces $\operatorname{SNA}(M,Y)$ to be dense in ${\mathrm{Lip}}_0(M,Y)$ for every Banach space $Y$. We also provide characterisations of these properties for $\mathcal{F}(M)$ in terms of the metric space $M$ and study the relationship between them. To this end, one of the main tools will be the recent characterisations of strongly exposed points and denting points of the unit ball of Lipschitz-free spaces appearing in \cite{gpr} and \cite{lppr}, respectively, which we will include in Subsection \ref{subsec:geometry-free-spaces}.
The previous results make clear that the density of $\operatorname{SNA}(M,\mathbb{R})$ in ${\mathrm{Lip}}_0(M,\mathbb{R})$ is a strong requirement and there are not too many metric spaces having this property. A completely different situation holds when we deal with weak density: we show in Section \ref{sectidensidadebil} that $\operatorname{SNA}(M,\mathbb{R})$ is weakly sequentially dense in ${\mathrm{Lip}}_0(M,\mathbb{R})$ for every pointed metric space $M$, extending \cite[Theorem 2.6]{kms}, where the result was proved when $M$ is a Banach space or, more generally, when $M$ is a length space. The main tool to get the above result is an extension of a lemma from \cite{kms} which provides an easy criterion to get weak convergence of a sequence of Lipschitz maps, which we include in Subsection \ref{subsec:geometry-free-spaces}. Such a result produces a by-product of our study: the norm of the bidual of $\mathcal{F}(M)$ is octahedral when $M'$ is infinite or when $M$ is discrete but not uniformly discrete. Recall that the norm of a Banach space $X$ is said to be \emph{octahedral} if, given a finite-dimensional subspace $Y$ of $X$ and $\varepsilon>0$, we can find $x\in S_X$ such that the inequality $$\Vert y+\lambda x\Vert\geq (1-\varepsilon)(\Vert y\Vert +\vert\lambda\vert)$$ holds for every $y\in Y$ and $\lambda\in\mathbb R$. From an isomorphic point of view, it was proved in \cite{godefroyocta} that a Banach space $X$ can be equivalently renormed to have an octahedral norm if, and only if, the space $X$ contains an isomorphic copy of $\ell_1$, and it was left as an open problem whether any Banach space containing an isomorphic copy of $\ell_1$ can be equivalently renormed so that the bidual norm is octahedral.
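As an illustration of the definition (a standard fact, recorded here only for completeness), let us sketch why the norm of $\ell_1$ is octahedral.

```latex
% Standard example: the norm of \ell_1 is octahedral.
Given a finite-dimensional subspace $Y\subset\ell_1$ and $\varepsilon>0$, a
compactness argument provides $N\in\mathbb{N}$ such that
$\sum_{n>N}|y(n)|<\frac{\varepsilon}{2}\,\|y\|$ for every $y\in Y$. Taking
$x=e_{N+1}$, for every $y\in Y$ and $\lambda\in\mathbb{R}$ we get
\[
  \|y+\lambda x\|\geq\sum_{n\leq N}|y(n)|+|\lambda|-|y(N+1)|
  \geq(1-\varepsilon)\bigl(\|y\|+|\lambda|\bigr),
\]
as required.
```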
From an isometric point of view, it is proved in \cite[Theorem~1 and Proposition 3]{deville} that if a Banach space $X$ has an octahedral norm, then every convex combination of weak-star slices of $B_{X^*}$ has diameter two, and the converse has been recently proved in \cite{blroctajfa}. By using this characterisation, it was proved in \cite[Theorem 2.4]{blr} that if $M$ is not uniformly discrete and bounded, then $\mathcal F(M)$ has an octahedral norm. This result was pushed further in \cite{pr}, where the authors characterised all the Lipschitz-free Banach spaces $\mathcal F(M)$ whose norm is octahedral in terms of a geometric property of the underlying metric space $M$. Observe that the norm of $X^{**}$ is octahedral if and only if every convex combination of weak slices of $B_{X^*}$ has diameter two \cite[Corollary 2.2]{blroctajfa}. Thus, easy examples show that the norm of a Banach space $X$ can be octahedral without its bidual norm being octahedral (e.g.\ $X=\mathcal C([0,1])$ does the job). It is then a natural question to determine when the bidual norm of $\mathcal F(M)$ is octahedral. Particular examples of metric spaces $M$ satisfying that the norm of $\mathcal{F}(M)^{**}$ is octahedral are known (for instance when $M$ is a subset of an $\mathbb{R}$-tree, as a consequence of \cite[Theorem 4.2]{Godard} and \cite[Proposition 3.4]{Yagoub}). However, it is not known whether there exists a metric space $M$ such that the norm of $\mathcal{F}(M)$ is octahedral but that of $\mathcal{F}(M)^{**}$ is not. As we have already announced, we will prove that the norm of the bidual space of $\mathcal{F}(M)$ is octahedral when $M'$ is infinite or $M$ is a discrete but not uniformly discrete metric space. Besides, as a consequence of the techniques involved in the proof, we obtain a partial positive answer to \cite[Question 3.1]{blr}. Even though all the main results of the paper have been presented so far, we would like to include an outline of the paper.
We finish this introduction with a subsection containing the necessary notation and terminology on metric spaces, together with some new and previously known results on the geometry of Lipschitz-free spaces which will be relevant to our discussion. In Section \ref{sect:negative} we extend the negative examples of \cite{kms} to more general ones: we prove that $\operatorname{SNA}(M,\mathbb{R})$ is not dense in ${\mathrm{Lip}}_0(M,\mathbb{R})$ whenever $M$ is a length metric space and whenever $M$ is a subset of an $\mathbb R$-tree with positive measure; in particular, when $M$ is a subset of $\mathbb{R}$ with positive Lebesgue measure. We devote Section \ref{sec:property_A} to discussing some sufficient conditions for Lindenstrauss property A in the setting of Lipschitz-free spaces, showing that all of them actually imply the density of strongly norm attaining Lipschitz maps; we also give metric characterisations of some of them and discuss the relations between them. The main result of Section \ref{sectidensidadebil} is that $\operatorname{SNA}(M,\mathbb{R})$ is weakly sequentially dense in ${\mathrm{Lip}}_0(M,\mathbb{R})$ for every pointed metric space $M$. Finally, we show in Section \ref{sectiocta} that the norm of $\mathcal{F}(M)^{**}$ is octahedral when $M'$ is infinite or $M$ is discrete but not uniformly discrete. \subsection[Geometry of Lipschitz-free spaces]{New and old results on the geometry of Lipschitz-free spaces}\label{subsec:geometry-free-spaces} Let $X$ be a Banach space. A \emph{slice} of the unit ball $B_X$ is a non-empty intersection of an open half-space with $B_X$; all slices can be written in the form \[ S(B_X,f,\beta):=\{x\in B_X \colon f(x)>1-\beta\} \] where $f \in S_{X^*}$, $\beta>0$. The notations $\ext{B_X}$, $\preext{B_X}$, $\strexp{B_X}$ stand for the set of extreme points, preserved extreme points (i.e.\ extreme points which remain extreme in the bidual ball), and strongly exposed points of $B_X$, respectively.
A point $x \in B_X$ is said to be a \emph{denting point} of $B_X$ if there exist slices of $B_X$ containing $x$ of arbitrarily small diameter. We will denote by $\dent{B_X}$ the set of denting points of $B_X$. We always have that $$ \strexp{B_X}\subset \dent{B_X} \subset \preext{B_X} \subset \ext{B_X}. $$ Given a metric space $M$, $B(x,r)$ denotes the closed ball in $M$ centered at $x \in M$ with radius $r$. Given $x, y \in M$, we write $[x,y]$ to denote the \emph{metric segment} between $x$ and $y$, that is, \[ [x,y] := \{z \in M \colon d(x,z)+d(z,y)=d(x,y)\}. \] By a \emph{molecule} we mean an element of $\mathcal{F}(M)$ of the form \[ m_{x,y}:=\frac{\delta(x)-\delta(y)}{d(x,y)} \] for $x, y \in M$, $x \neq y$. We write $\Mol{M}$ to denote the set of all molecules of $M$. Note that, since $\Mol{M}$ is balanced and norming for ${\mathrm{Lip}}_0(M,\mathbb{R})$, a straightforward application of the Hahn-Banach theorem implies that $$ \overline{\co}(\Mol{M})=B_{\mathcal F(M)} $$ (the notation $\overline{\co}(A)$ denotes the closed convex hull of a set $A$). The following proposition summarises some known results about extremality in Lipschitz-free spaces which can be found in \cite[Corollary 2.5.4]{wea5}, \cite[Theorem 2.4]{lppr}, \cite[Theorem 5.4]{gpr}, and \cite[Proposition 2.9]{lppr}. We need some notation: given $x,y,z \in M$, the \emph{Gromov product of $x$ and $y$ at $z$} is defined as \[ (x,y)_z:=\frac12\bigl(d(x,z)+d(y,z)-d(x,y)\bigr)\geq 0, \] see e.g.~\cite{bh}. It corresponds to the distance from $z$ to the unique closest point $b$ on the unique geodesic between $x$ and $y$ in any $\mathbb R$-tree into which $\{x,y,z\}$ can be isometrically embedded (such a tree, a tripod really, always exists). Notice that $(x,z)_y+(y,z)_x=d(x,y)$ and that $(x,y)_z\leq d(x,z)$, facts which we will use without further comment. \begin{proposition}\label{prop:extremalidad} Let $M$ be a complete metric space.
Then: \begin{enumerate}[(a)] \item Every preserved extreme point of $B_{\mathcal F(M)}$ is both a molecule and a denting point of $B_{\mathcal F(M)}$, so $$\preext{B_{\mathcal F(M)}}=\dent{B_{\mathcal F(M)}}\subseteq \Mol{M}.$$ \item Given $x,y\in M$ with $x\neq y$, the following assertions are equivalent: \begin{enumerate}[(i)] \item $m_{x,y}$ is a strongly exposed point of $B_{\mathcal F(M)}$. \item There exists $\varepsilon_0>0$ such that the inequality $$ (x,y)_z\geq \varepsilon_0 \min\{d(x,z),d(y,z)\} $$ holds for every $z\in M\setminus \{x,y\}$ (in other words, the pair $(x,y)$ fails property $(Z)$ of \cite{ikw}). \end{enumerate} \item $\Mol{M}$ is closed in $\mathcal F(M)$. \end{enumerate} \end{proposition} According to \cite[p.\ 51]{wea5}, a metric space $M$ is said to be \emph{concave} if every molecule $m_{x,y}\in \Mol{M}$ is a preserved extreme point of $B_{\mathcal F(M)}$. Thanks to the characterisation of the preserved extreme points given in \cite{ag}, a metric space $M$ is concave if, and only if, for every $x,y\in M$ and every $\varepsilon>0$, there exists $\delta>0$ such that the inequality $$ d(x,z)+d(y,z)>d(x,y)+\delta $$ holds for every $z\in M$ such that $\min\{d(x,z),d(y,z)\}\geq \varepsilon$. It is known that every H\"older metric space is a concave metric space \cite[p.~51]{wea5}. Recall that a \emph{H\"older metric space} is $(M,d^\theta)$ where $(M,d)$ is a metric space and $0<\theta<1$. We refer the reader to \cite{Kalton04} and \cite{wea5} for background on H\"older metric spaces and the structure of their Lipschitz-free spaces. Moreover, \cite[Corollary 4.4]{ag} yields that a compact metric space $M$ is concave if and only if $d(x,z)+d(z,y)>d(x,y)$ for every $x,y,z$ distinct points in $M$, that is, if $[x,y]=\{x,y\}$ for every $x,y\in M$. In general, examples of Banach spaces with a rich extremal structure are those with the RNP. 
Because of this reason and its relation with strongly norm attainment (see Theorem \ref{teornpdensidad}), let us exhibit some known examples of Lipschitz-free spaces with the RNP. \begin{example}\label{ejernp} The space $\mathcal F(M)$ has the RNP in the following cases: \begin{enumerate}[(a)] \item $M$ is uniformly discrete (i.e.\ $\inf_{x\neq y}d(x,y)>0$); \cite[Proposition 4.4]{Kalton04}. \item $M$ is a countable compact metric space (since, in this case, $\mathcal F(M)$ is a separable dual Banach space); \cite[Theorem~2.1]{dalet}. \item $M$ is a compact H\"older metric space (since, in this case, $\mathcal F(M)$ is a separable dual Banach space); \cite[Corollary~3.3.4]{wea5}. \item $M$ is a closed subset of $\mathbb{R}$ with measure $0$ (since, in this case, $\mathcal F(M)$ is isometric to $\ell_1$); \cite{Godard}. \end{enumerate} \end{example} The following lemma, coming from \cite{LCthesis}, provides a useful estimate of the norm of differences of molecules. For completeness, we will include a proof of the result. \begin{lemma}\label{lemma:ineqmolec} Let $M$ be a metric space and $x,y,u,v\in M$, with $x\neq y$ and $u\neq v$. Then \[ \norm{m_{x,y}-m_{u,v}}\leq 2\frac{d(x,u)+d(y,v)}{\max\{d(x,y),d(u,v)\}}. \] If, moreover, $\norm{m_{x,y}-m_{u,v}}<1$, then \[ \frac{\max\{d(x,u), d(y,v)\}}{\min\{d(x,y), d(u,v)\}}\leq \norm{m_{x,y}-m_{u,v}}. \] \end{lemma} \begin{proof} The first inequality follows from the well-known one \[ \norm{\frac{z}{\norm{z}} - \frac{w}{\norm{w}} } \leq 2 \frac{\norm{z-w}}{\max\{\norm{z},\norm{w}\}}, \] which holds for $z, w\neq 0$ in any Banach space, applied to $z=\delta(x)-\delta(y)$ and $w=\delta(u)-\delta(v)$. To prove the second inequality, we assume that $\Vert m_{x,y}-m_{u,v}\Vert<1$ and take $r:=\min\{d(x,y),d(x,u)\}$. We define $f(t):=\max\{r-d(t,x),0\}$ for every $t\in M$ and $g:=f-f(0)$. 
It follows that $g\in {\mathrm{Lip}}_0(M,\mathbb{R})$ with $\Vert g\Vert_L\leq 1$, so we get that $$\Vert m_{x,y}-m_{u,v}\Vert\geq \vert \langle g,m_{x,y}-m_{u,v}\rangle\vert\geq \frac{r}{d(x,y)},$$ whence $r<d(x,y)$. This implies that $r=d(x,u)$ and $$\frac{d(x,u)}{d(x,y)}\leq \Vert m_{x,y}-m_{u,v}\Vert.$$ Interchanging the roles of the pairs $(x,y)$ and $(u,v)$, we finish the proof of the lemma. \end{proof} We also need the following result coming from \cite{kaufmannPreprint}, which is not included in the final version of that paper \cite{kaufmann}. Given a family $\{X_\gamma\colon \gamma\in\Gamma\}$ of Banach spaces, we will denote by $\left[\bigoplus\nolimits_{\gamma\in\Gamma}X_\gamma\right]_{\ell_1}$ the $\ell_1$-sum of the family. \begin{proposition}[Proposition 5.1 in \cite{kaufmannPreprint}]\label{prop:kaufmann} Suppose that $M = \bigcup_{\gamma\in \Gamma} M_\gamma$ is a metric space with metric $d$, and suppose that there exists $0 \in M$ satisfying \begin{enumerate} \item $M_\gamma\cap M_\eta = \{0\}$ if $\gamma\neq\eta$, and \item there exists $C\geq 1$ such that $d(x,0)+d(y,0)\leq Cd(x,y)$ for all $\gamma\neq \eta$, $x\in M_\gamma$ and $y\in M_\eta$. \end{enumerate} Then, $\mathcal F(M)$ is isomorphic to $\left[\bigoplus_{\gamma\in \Gamma} \mathcal F(M_\gamma)\right]_{\ell_1}$. If $C=1$, such an isomorphism can be chosen to be isometric. \end{proposition} The previous result motivates the following definition: if $M$ is a metric space which can be written as $M = \bigcup_{\gamma\in \Gamma} M_\gamma$ satisfying (1) and (2) in the statement of the previous proposition for $C=1$, we say that $M$ is the \emph{$\ell_1$-sum} of the family $\{M_\gamma\}_{\gamma\in \Gamma}$. The next lemma provides a criterion for weak convergence of sequences of Lipschitz functionals and maps, which is useful since the weak topology of ${\mathrm{Lip}}_0(M,Y)$ does not admit any easy description.
It is inspired by \cite[Lemma~2.4]{kms}, improves \cite[Corollary 2.5]{kms}, and will be the key to proving the main results of Sections \ref{sectidensidadebil} and \ref{sectiocta}. \begin{lemma}\label{lemmaweaknull} Let $M$ be a pointed metric space, let $Y$ be a Banach space, and let $\{f_n\}$ be a sequence of functions in the unit ball of ${\mathrm{Lip}}_0(M,Y)$. For each $n\in \mathbb N$, we write $U_n:=\{x\in M \colon f_n(x)\neq 0\}$ for the support of $f_n$. If $U_n\cap U_m = \emptyset$ for every $n\neq m$, then the sequence $\{f_n\}$ is weakly null. \end{lemma} \begin{proof} We will show that for every finite collection of reals $\{a_j\}_{j=1}^n$, we have \[ \norm{\sum_{j=1}^n a_j f_j}_L \leq 2\max_j |a_j|\] and so $Te_n:= f_n$ defines a bounded linear operator from $c_0$ to ${\mathrm{Lip}}_0(M,Y)$. To this end, denote $f= \sum_{j=1}^n a_j f_j$. Take $x, y\in M$ with $x\neq y$, and let us give an upper estimate for $\frac{\|f(x)-f(y)\|}{d(x,y)}$. Since the supports of the functions $\{f_n\}$ are pairwise disjoint, there are $j_1,j_2\in \{1,\ldots, n\}$ such that $\{x,y\}\cap U_j = \emptyset$ if $j\in\{1,\ldots,n\}\setminus\{j_1,j_2\}$. Therefore, \begin{align*} \frac{\|f(x)-f(y)\|}{d(x,y)} &= \frac{\|a_{j_1}(f_{j_1}(x)-f_{j_1}(y))+a_{j_2}(f_{j_2}(x)-f_{j_2}(y))\|}{d(x,y)} \\ &\leq |a_{j_1}|+|a_{j_2}|\leq 2\max_j |a_j|. \end{align*} This shows that the operator $T$ defined above is bounded, hence weak-to-weak continuous. Since $\{e_n\}$ is weakly null in $c_0$, it follows that $\{f_n\}=\{Te_n\}$ is weakly null. \end{proof} Finally, it is convenient to recall an important tool for constructing Lipschitz functions: McShane's classical extension theorem. It says that if $N \subseteq M$ and $f \colon N \longrightarrow \mathbb{R}$ is a Lipschitz function, then there is an extension to a Lipschitz function $F \colon M \longrightarrow \mathbb{R}$ with the same Lipschitz constant; see for example \cite[Theorem~1.5.6]{wea5}.
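Let us record, for the reader's convenience, the standard explicit formula behind McShane's theorem, a classical inf-convolution construction (see \cite[Theorem~1.5.6]{wea5}): if $f\colon N\longrightarrow \mathbb{R}$ is Lipschitz with constant $L$, then the function \[ F(x):=\inf_{n\in N}\bigl(f(n)+L\,d(x,n)\bigr) \qquad (x\in M) \] is an $L$-Lipschitz extension of $f$ to the whole of $M$; it is in fact the largest such extension, the smallest one being given by $x\longmapsto \sup_{n\in N}\bigl(f(n)-L\,d(x,n)\bigr)$.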
\section{New negative results}\label{sect:negative} In this section we will exhibit new examples of metric spaces $M$ such that $\operatorname{SNA}(M,\mathbb{R})$ is not dense in ${\mathrm{Lip}}_0(M,\mathbb{R})$. As we commented in the introduction, it was shown in \cite[Example 2.1]{kms} that $\operatorname{SNA}([0,1],\mathbb{R})$ is not dense in ${\mathrm{Lip}}_0([0,1],\mathbb{R})$, and this was extended in the same paper to all metrically convex pointed metric spaces \cite[Theorem 2.3]{kms}. Let us introduce some notation. A metric space $(M,d)$ is said to be a \textit{length space} if $d(x,y)$ is equal to the infimum of the lengths of the rectifiable curves joining $x$ and $y$ for every pair of points $x,y\in M$. In the case that such an infimum is actually a minimum, it is said that $M$ is \textit{geodesic} (or \emph{metrically convex}). It is clear that every geodesic space is a length space, but Example 2.4 in \cite{ikw} shows that the converse is not true. On the other hand, length spaces have been recently considered in \cite{gpr}, where it is proved that a metric space $M$ is a length space if, and only if, ${\mathrm{Lip}}_0(M,\mathbb{R})$ has the Daugavet property \cite[Theorem 3.5]{gpr}. Note in passing that, for a complete metric space $M$, being a length space is also equivalent to the fact that every Lipschitz function on $M$ approximates its Lipschitz norm at points which are arbitrarily close (that is, $M$ is \emph{local}), see \cite[Proposition~3.4]{gpr}. In this section we will consider two different generalisations of the fact from \cite{kms} that $\operatorname{SNA}(M,\mathbb{R})$ is not dense in ${\mathrm{Lip}}_0(M,\mathbb{R})$ when $M$ is metrically convex. Our first aim is to replace metric convexity in this result by the weaker assumption of being a length space. To this end, we will need the following technical lemma, which is a generalisation of Lemma~2.2 in \cite{kms}.
\begin{lemma}\label{lemlength} Let $M$ be a pointed metric space, let $f \in \operatorname{SNA}(M,\mathbb{R})$ which attains its norm at a pair $(p,q)$ of different elements of $M$, let $\varepsilon>0$, and let $\alpha_\varepsilon$ be a rectifiable curve in $M$ joining $p$, $q$ such that \[ {\mathrm{length}}(\alpha_\varepsilon)\leq d(p,q) + \varepsilon.\] Then, we have that \[ |f(z_1)-f(z_2)| \geq \lVert f \rVert_L (d(z_1,z_2)-\varepsilon) \quad \forall \, z_1, z_2 \in \alpha_\varepsilon.\] \end{lemma} \begin{proof} Fix $z_1$, $z_2 \in \alpha_\varepsilon$; without loss of generality, we may assume that $z_1$ precedes $z_2$ along the curve. By the definition of the length of a curve, we have that \[ d(p,q)\leq d(p,z_1)+d(z_1,z_2)+d(z_2,q)\leq {\mathrm{length}}(\alpha_\varepsilon)\leq d(p,q)+\varepsilon.\] Consequently, \begin{align*} |f(z_1)-f(z_2)|&=|(f(p)-f(q))- ((f(p)-f(z_1))+(f(z_2)-f(q)))|\\ &\geq |f(p)-f(q)|-|f(p)-f(z_1)|-|f(z_2)-f(q)| \\ &\geq |f(p)-f(q)|-\lVert f \rVert_L d(p,z_1) - \lVert f \rVert_L d(z_2,q)\\ &= \lVert f \rVert_L (d(p,q)-d(p,z_1)-d(z_2,q)) \geq \lVert f \rVert_L (d(z_1,z_2)-\varepsilon).\qedhere \end{align*} \end{proof} We are now ready to state the desired result. \begin{theorem}\label{teo:length} Let $M$ be a complete length pointed metric space. Then, $\operatorname{SNA}(M,\mathbb{R})$ is not dense in ${\mathrm{Lip}}_0(M,\mathbb{R})$. \end{theorem} \begin{proof} Fix $\delta>0$ and $x_0\in M\setminus\{0\}$. Since $M$ is a length space, we may consider a rectifiable curve $$\gamma_\delta \colon [0,(1+\delta)d(0,x_0)] \longrightarrow M$$ of length at most $(1+\delta)d(0,x_0)$ joining $0$ and $x_0$. Now, let us consider a Lipschitz function $u_0\colon \gamma_\delta([0,(1+\delta)d(0,x_0)]) \longrightarrow \mathbb{R}$ such that $u_0(0)=0$, $u_0(x_0)=1$. Since $\gamma_\delta([0,(1+\delta)d(0,x_0)])$ is compact and connected, we have that $u_0(\gamma_\delta([0,(1+\delta)d(0,x_0)]))$ is a compact connected subset of $\mathbb{R}$, i.e.\ $u_0(\gamma_\delta([0,(1+\delta)d(0,x_0)]))=[a_0,b_0]$ for certain $a_0$, $b_0 \in \mathbb{R}$.
We will write \[ a=\frac{a_0}{\|u_0\|_L}, \quad b=\frac{b_0}{\|u_0\|_L}, \quad \frac{u_0}{\|u_0\|_L} \colon \gamma_\delta([0,(1+\delta)d(0,x_0)])\longrightarrow[a,b]. \] We can apply McShane's extension theorem to $\frac{u_0}{\|u_0\|_L}$ (composing the extension with the $1$-Lipschitz retraction $t\longmapsto \max\{a,\min\{t,b\}\}$ of $\mathbb{R}$ onto $[a,b]$ if necessary) to get a surjective function $u\colon M\longrightarrow[a,b]$ verifying that $\|u\|_L=1$. Let $A\subseteq [a,b]$ be a nowhere dense closed set of positive Lebesgue measure. Consider $g \in {\mathrm{Lip}}_0([a,b],\mathbb{R})$ the function whose derivative equals $\chi_A$ (the characteristic function of $A$). We define $h=g\circ u\colon M\longrightarrow\mathbb{R}$. It is clear that $h(0)=g(u(0))=g(0)=0$ and $\|h\|_L=\|g\|_L=1$. Therefore, $h \in {\mathrm{Lip}}_0(M,\mathbb{R})$. Now, take $f \in \operatorname{SNA}(M,\mathbb{R})$. We will show that $\lVert h-f \rVert_L \geq\frac{1}{2}$. To this end, assume the contrary, that is, \[ \lVert f-h \rVert_L< \frac{1}{2}. \] In particular, note that $\lVert f \rVert_L >\frac{1}{2}$. We know that there exist $p,q \in M$ with $p\neq q$ such that \[ \lVert f \rVert_L = \frac{|f(p)-f(q)|}{d(p,q)}.\] Suppose that $u(p)=u(q)$; then $h(p)=h(q)$ and we have that \[\lVert h-f \rVert_L \geq \frac{|(h-f)(p)-(h-f)(q)|}{d(p,q)} =\frac{|f(p)-f(q)|}{d(p,q)}=\lVert f \rVert_L > \frac{1}{2}, \] a contradiction. Therefore, $u(p)\neq u(q)$. We can assume that $u(p)<u(q)$ without any loss of generality. By the construction of $g$ (and since $A$ is nowhere dense), there exist $c$, $d \in \mathbb{R}$ with $c<d$ such that the interval $[c,d]$ is contained in $ (u(p),u(q))$ and $g$ is constant on $[c,d]$. Take $\varepsilon_0>0$ verifying \begin{equation}\label{condiepslengthnodens} 0<\varepsilon_0 < |d-c|\frac{\lVert f \rVert_L - \lVert h-f \rVert_L}{\lVert f \rVert_L} \end{equation} and a rectifiable curve $\alpha_{\varepsilon_0}$ joining $p$ and $q$ such that \[ {\mathrm{length}}(\alpha_{\varepsilon_0})\leq d(p,q) +\varepsilon_0. \] Note that such a curve exists because $M$ is a length space.
Let us write $\Lambda=\alpha_{\varepsilon_0}([0,d(p,q)+\varepsilon_0])\subseteq M$ and observe that \[[c,d]\subseteq (u(p),u(q))\subseteq u(\Lambda),\] so there exist $\tilde{z}_1$, $\tilde{z}_2 \in \Lambda$ such that $c=u(\tilde{z}_1)$, $d=u(\tilde{z}_2)$. Moreover, we have \[ |d-c|=|u(\tilde{z}_2)-u(\tilde{z}_1)| \leq d(\tilde{z}_2,\tilde{z}_1).\] Hence, if $z_1$, $z_2$ are different points of $ \Lambda$, using Lemma \ref{lemlength} we get that \[ \begin{split} |h(z_1)-h(z_2)| & \geq |f(z_1)-f(z_2)|-\lVert h-f \rVert_L d(z_1,z_2)\\ & \geq \lVert f \rVert_L d(z_1,z_2) - \lVert f \rVert_L \varepsilon_0 - \lVert h-f \rVert_L d(z_1,z_2)\\ & = \left( \lVert f \rVert_L - \lVert h-f \rVert_L -\frac{\varepsilon_0 \lVert f \rVert_L}{d(z_1,z_2)}\right) d(z_1,z_2). \end{split}\] Taking $z_1=\tilde{z}_1$, $z_2=\tilde{z}_2$ and applying the above inequality, we have \[\begin{split} |h(\tilde{z}_1)-h(\tilde{z}_2)| & \geq \left( \lVert f \rVert_L - \lVert h-f \rVert_L - \frac{\varepsilon_0 \lVert f \rVert_L}{d(\tilde{z}_1,\tilde{z}_2)}\right)d(\tilde{z}_1,\tilde{z}_2)\\ & \mathop{>}\limits^{\mbox{\eqref{condiepslengthnodens}}} (\lVert f \rVert_L - \lVert h-f \rVert_L - (\lVert f \rVert_L - \lVert h-f \rVert_L)) d(\tilde{z}_1,\tilde{z}_2) =0. \end{split}\] This implies that $h(\tilde{z}_1)\neq h(\tilde{z}_2)$ and so $g(c)\neq g(d)$, a contradiction with the fact that $g$ is constant on $[c,d]$.\end{proof} Let us now consider another negative example, which can be seen as a generalisation of the fact that $\operatorname{SNA}([0,1],\mathbb{R})$ is not dense in ${\mathrm{Lip}}_0([0,1],\mathbb{R})$. This new generalisation will allow us to produce examples of metric spaces $M$ with very different geometric and topological properties for which $\operatorname{SNA}(M,\mathbb{R})$ is still not dense in ${\mathrm{Lip}}_0(M,\mathbb{R})$. In order to do that, we need to introduce a class of metric spaces, the so-called $\mathbb{R}$-trees.
An \emph{$\mathbb{R}$-tree} is a metric space $T$ satisfying: \begin{enumerate} \item for any points $x$, $y \in T$, there exists a unique isometry $\phi$ from the closed interval $[0,d(x,y)]$ into $T$ such that $\phi(0)=x$ and $\phi(d(x,y))=y$. Such isometry will be denoted by $\phi_{xy}$; \item any one-to-one continuous mapping $\varphi\colon [0,1]\longrightarrow T$ has the same range as the isometry $\phi$ associated to the points $x=\varphi(0)$ and $y=\varphi(1)$. \end{enumerate} Let us introduce some notation, coming from \cite{Godard}. Given points $x,y$ in an $\mathbb{R}$-tree $T$, it is usual to write $[x,y]$ to denote the range of $\phi_{xy}$, which is called a segment. We say that a subset $A$ of $T$ is \emph{measurable} whenever $\phi_{xy}^{-1}(A)$ is Lebesgue measurable for any $x$, $y \in T$. If $A$ is measurable and $S$ is a segment $[x,y]$, we write $\lambda_S(A)$ for $\lambda(\phi_{xy}^{-1}(A))$, where $\lambda$ is the Lebesgue measure on $\mathbb{R}$. We denote by $\mathcal{R}$ the set of those subsets of $T$ which can be written as a finite union of disjoint segments, and for $R=\bigcup_{k=1}^n S_k$ (with disjoint $S_k$) in $\mathcal{R}$, we put \[ \lambda_R(A)=\sum_{k=1}^n \lambda_{S_k}(A). \] Now, we can define the \emph{length measure} of a measurable subset $A$ of $T$ by \[ \lambda_T(A)= \sup_{R\in \mathcal{R}} \lambda_R(A). \] $\mathbb R$-trees were considered in \cite{Godard} in order to characterise those metric spaces $M$ for which $\mathcal F(M)$ is isometric to a subspace of $L_1$ as those which isometrically embed into an $\mathbb R$-tree. Here is the promised generalisation of the fact that $\operatorname{SNA}([0,1],\mathbb{R})$ is not dense in ${\mathrm{Lip}}_0([0,1],\mathbb{R})$. \begin{theorem}\label{nodensiR-trees} Let $T$ be a pointed $\mathbb{R}$-tree and let $M$ be a closed subset of $T$ containing the origin. 
If $M$ has positive length measure, then $\operatorname{SNA}(M,\mathbb{R})$ is not dense in ${\mathrm{Lip}}_0(M,\mathbb{R})$. \end{theorem} \begin{proof} Note that, as $M$ has positive length measure, we can find a segment $S=[x_0,y_0] \subseteq T$ such that $\lambda_T(M\cap S) >0$. We distinguish two cases:\\ First, assume that there exists a segment $[x_1,y_1]\subseteq M\cap S$. By Theorem 2.3 in \cite{kms} we know that there exists a function $f \in {\mathrm{Lip}}_0([x_1,y_1],\mathbb{R})$ such that $\| f \|_L =1 $ and $ \| f-g\|_L \geq \frac{1}{2}$ holds for all $g \in \operatorname{SNA}([x_1,y_1],\mathbb{R})$. Consider $\pi_1 \colon T \longrightarrow [x_1,y_1]$ the metric projection, which satisfies that \[d(x,y)=d(x,\pi_1(x))+d(\pi_1(x),y)\quad \forall \, x \in T, y \in [x_1,y_1]\] (cf.\ e.g.\ \cite[Chapter II.2]{bh}). Define the norm-one Lipschitz function $\tilde{f}\colon M \longrightarrow \mathbb{R}$ by $\tilde{f}(p)=[f\circ \pi_1](p) $ for every $ p \in M$, and suppose that there exists $g \in \operatorname{SNA}(M,\mathbb{R})$ such that $\| \tilde{f}-g \|_L < \frac{1}{2}$. If we take $x$, $y \in M$ satisfying that $x \neq y$ and $\langle g,m_{x,y}\rangle=\|g\|_L$, we get \[ \frac{1}{2} > \frac{|f(\pi_1(x))-f(\pi_1(y))- (g(x)-g(y))|}{d(x,y)}\geq \| g \|_L - \frac{|f(\pi_1(x))-f(\pi_1(y))|}{d(x,y)},\] so $\pi_1(x) \neq \pi_1(y)$. Using that $\langle g,m_{x,y} \rangle=\|g\|_L$, Lemma 2.2 in \cite{kms} gives that $\langle g, m_{\pi_1(x),\pi_1(y)} \rangle=\| g \|_L$. Hence, $\restr{g}{[x_1,y_1]} \in \operatorname{SNA}([x_1,y_1],\mathbb{R})$. It follows from this that \[ \| \tilde{f}-g \|_L \geq \| f- \restr{g}{[x_1,y_1]} \|_L \geq \frac{1}{2},\] a contradiction.\\ Now, assume that no segment is contained in $M\cap S$. Define the norm-one Lipschitz function $f \colon S \longrightarrow \mathbb{R}$ by \[f(t)=\int_{[x_0,t]} \chi_M (x)\, dx=\lambda_T([x_0,t]\cap M) \] for all $t \in [x_0,y_0]$.
As above, define $\tilde{f}\colon M \longrightarrow \mathbb{R}$ by $\tilde{f}(p)=[f\circ \pi_2](p)$ for every $p \in M$, where $\pi_2 \colon M \longrightarrow S$ is the metric projection onto $S$. Again, assume that there exists $g \in \operatorname{SNA}(M,\mathbb{R})$ such that $\|g-\tilde{f}\|_L <\frac{1}{2}$. Take $x, y \in M$ such that $x\neq y$ and $\langle g, m_{x,y} \rangle = \| g \|_L$. Then, using the same argument as above, we deduce that $\pi_2(x)\neq \pi_2(y)$. Now, since $[\pi_2(x),\pi_2(y)]\nsubseteq M\cap S$ by the assumption, we can find distinct points $x_2$, $y_2 \in M$ such that $]x_2,y_2[ \, \subseteq \, ]\pi_2(x),\pi_2(y)[ \, \setminus(M\cap S)$. Since $\langle g, m_{x,y} \rangle=\|g\|_L$, Lemma 2.2 in \cite{kms} implies that $\langle g, m_{x_2,y_2} \rangle=\|g\|_L$. On the other hand, note that \[ \tilde{f}(x_2)=f(x_2)=\lambda_T([x_0,x_2]\cap M)=\lambda_T([x_0, y_2] \cap M)=f(y_2)=\tilde{f}(y_2).\] Therefore, we obtain \[ \frac{1}{2} > \|g-\tilde{f}\|_L \geq \langle g-\tilde{f}, m_{x_2,y_2} \rangle =\langle g, m_{x_2,y_2}\rangle=\|g\|_L > \frac{1}{2},\] again reaching a contradiction. Consequently, the set $\operatorname{SNA}(M,\mathbb{R})$ is not dense in ${\mathrm{Lip}}_0(M,\mathbb{R})$, as desired.\end{proof} As a particular case, we obtain the following corollary. \begin{corollary}\label{cor[0,1]} Let $M$ be a closed pointed subset of $[0,1]$ whose Lebesgue measure is positive. Then $\operatorname{SNA}(M,\mathbb{R})$ is not dense in ${\mathrm{Lip}}_0(M,\mathbb{R})$. \end{corollary} \begin{remark}\label{remaejerarosRtree}{\slshape Notice that the examples of metric spaces $M$ such that $\operatorname{SNA}(M,\mathbb{R})$ is not dense in ${\mathrm{Lip}}_0(M,\mathbb{R})$ provided by Theorem \ref{teo:length} (and so by \cite[Theorem 2.3]{kms}) have very strong topological properties. For instance, it is clear that length metric spaces are arc-connected and, in particular, do not have isolated points.
Nevertheless, Corollary \ref{cor[0,1]} produces examples of a quite different kind. For example, let $M$ be any closed nowhere dense subset of $[0,1]$ whose Lebesgue measure is positive (e.g.\ any so-called ``fat'' Cantor set). Corollary \ref{cor[0,1]} implies that $\operatorname{SNA}(M,\mathbb{R})$ is not dense in ${\mathrm{Lip}}_0(M,\mathbb{R})$, and $M$ is totally disconnected.} \end{remark} As a consequence of Theorem \ref{nodensiR-trees}, we can characterise when $\operatorname{SNA}(M,\mathbb{R})$ is dense in ${\mathrm{Lip}}_0(M,\mathbb{R})$ for compact subsets of $\mathbb R$-trees. Indeed, if $M$ is a compact subset of an $\mathbb{R}$-tree such that $\lambda_T(M)=0$, then $\mathcal{F}(M)$ is isometric to a subspace of $\ell_1$ \cite[Proposition~8]{dkp}, so $\mathcal{F}(M)$ has the RNP and thus $\operatorname{SNA}(M,\mathbb{R})$ is dense in ${\mathrm{Lip}}_0(M,\mathbb{R})$ by \cite[Proposition 7.4]{lppr} (see Theorem \ref{teornpdensidad} below). Consequently, we obtain the following corollary. \begin{corollary}\label{caraR-tree} Let $T$ be a pointed $\mathbb{R}$-tree and let $M$ be a compact subset of $T$ containing $0$. Then, $\overline{\operatorname{SNA}(M,\mathbb{R})}={\mathrm{Lip}}_0(M,\mathbb{R})$ if, and only if, $\lambda_T(M)=0$. \end{corollary} \section{A discussion in Lipschitz-free spaces on sufficient conditions for Lindenstrauss property A}\label{sec:property_A} The starting point for this section is \cite[Proposition 7.4]{lppr}, which we present here with a short sketch of a proof slightly different to the one given in \cite{lppr}. In order to do that, we need a bit of notation. Let $X$ and $Y$ be Banach spaces. We say that an operator $T\in \mathcal{L}(X,Y)$ is \emph{absolutely strongly exposing} if there exists $x \in S_{X}$ such that every sequence $\{x_n\} \subset B_X$ with $\lim_{n} \|Tx_n\| = \|T\|$ admits a subsequence $\{x_{n_k}\}$ which converges to either $x$ or $-x$.
Clearly, if $T$ is an absolutely strongly exposing operator, then $T$ attains its norm at the point $x$ appearing in the definition; it is easy to show that such a point $x\in S_X$ is a strongly exposed point (indeed, let $y^*\in S_{Y^*}$ be such that $y^*(Tx)=\|T\|$ and consider $x^*\in S_{X^*}$ such that $\|T\|x^*=T^*(y^*)$; if $\{x_n\}$ is a sequence in $B_X$ such that $x^*(x_n)\longrightarrow 1=x^*(x)$, then $$\|T(x_n)\|\geq y^*(Tx_n)=\|T\|x^*(x_n)\longrightarrow \|T\|,$$ so there is a subsequence $\{x_{n_k}\}$ converging to $x$ (it cannot converge to $-x$), showing that $x$ is strongly exposed by $x^*$). A famous result of J.~Bourgain \cite[Theorem~5]{bourgain1977} says that if $X$ is a Banach space with the RNP and $Y$ is any Banach space, then the set of absolutely strongly exposing operators from $X$ to $Y$ is a $G_\delta$-dense subset of $\mathcal{L}(X,Y)$ (in particular, the space $X$ has Lindenstrauss property A). Now, let $M$ be a pointed metric space such that $\mathcal{F}(M)$ has the RNP and let $Y$ be a Banach space. As $\strexp{B_{\mathcal F(M)}}\subset \Mol{M}$ (see Proposition \ref{prop:extremalidad}), the discussion above shows that the set of those elements in $\mathcal{L}(\mathcal{F}(M),Y)$ which attain their norm at points of $\Mol{M}$ is dense; in other words, $\operatorname{SNA}(M,Y)$ is dense in ${\mathrm{Lip}}_0(M,Y)$. \begin{theorem}[\mbox{\cite[Proposition 7.4]{lppr}}]\label{teornpdensidad} Let $M$ be a complete pointed metric space such that $\mathcal F(M)$ has the RNP. Then $\operatorname{SNA}(M,Y)$ is dense in ${\mathrm{Lip}}_0(M,Y)$ for every Banach space $Y$. \end{theorem} Roughly speaking, the proof of the previous theorem shows how a property of Banach spaces (the RNP) which implies property A may actually imply the density of $\operatorname{SNA}(M,Y)$ in ${\mathrm{Lip}}_0(M,Y)$ by making strong use of the special behaviour of the extremal structure of Lipschitz-free spaces.
This fact motivates an analysis of the connections between certain linear properties of $\mathcal F(M)$ which imply property A and the fact that $\operatorname{SNA}(M,Y)$ is dense in ${\mathrm{Lip}}_0(M,Y)$ for every Banach space $Y$. The properties implying property A that we will discuss in the setting of Lipschitz-free spaces are the following ones, whose definitions can be found in the respective subsections: \begin{itemize} \item the existence of a set of uniformly strongly exposed points whose closed convex hull equals the unit ball, introduced by Lindenstrauss himself \cite{lindens} in 1963; \item property $\alpha$, introduced by W.~Schachermayer \cite{Schachermayer} in 1983, which implies the previous one and which satisfies that ``many'' Banach spaces (separable, reflexive, WCG\ldots) can be renormed so as to have it; \item property quasi-$\alpha$, which is weaker than property $\alpha$ but still implies property A, introduced by Y.~S.~Choi and H.~G.~Song \cite{ChoiSong} in 2008. \end{itemize} \begin{figure}[h] \centering \begin{tikzcd}[column sep=1.8cm] \lfbox{Property $\alpha$} \arrow[d, Rightarrow] \arrow[r, Rightarrow] & \lfbox{Property quasi-$\alpha$} \arrow[d, Rightarrow]& \\ \lfbox{$\begin{array}{c} B_X=\overline{\co}(S) \\ S \text{ unif. str. exp.}\end{array}$}\arrow[r, Rightarrow] & \lfbox{Property A} & \lfbox{RNP} \arrow[l, Rightarrow] \end{tikzcd} \caption{Relations between properties implying property A in general Banach spaces} \label{figure:general} \end{figure} In general, given a Banach space $X$, we have the implications given in Figure \ref{figure:general}. None of the implications reverses, and the RNP is not related to the other three properties implying property A. We will discuss the relations between these properties in the setting of Lipschitz-free spaces in subsection \ref{subsec:relations}. \subsection{Uniformly strongly exposed points} We start with the definition of the property. \begin{definition} Let $X$ be a Banach space.
A subset $S\subset S_X$ is said to be a \emph{set of uniformly strongly exposed points} if there is a family of functionals $\{h_x\}_{x\in S}$ with $\|h_x\|=h_x(x)=1$ for every $x\in S$ such that, given $\varepsilon>0$, there is $\delta>0$ satisfying that \[ \sup_{x\in S} \diam(S(B_X,h_x, \delta)) \leq \varepsilon; \] equivalently, if for every $\varepsilon>0$ there is $\delta'>0$ such that whenever $z\in B_X$ satisfies $h_x(z)>1-\delta'$ for some $x\in S$, then $\|x-z\|<\varepsilon$ (that is, all elements of $S$ are strongly exposed points with the same $\varepsilon$--$\delta$ relation). \end{definition} This concept appeared in the seminal paper by Lindenstrauss \cite{lindens} (see also \cite{Finet} for further applications of it) to give a sufficient condition for a Banach space $X$ to enjoy property A. Namely, if $X$ is a Banach space containing a set of uniformly strongly exposed points $S\subset S_X$ such that $B_X=\overline{\co}(S)$, then $X$ has property A \cite[Proposition 1]{lindens}. Moreover, having a look at the proof of the result, something more can be said. In fact, it is actually proved that, given a Banach space $Y$, the set $$ \left\{T\in \mathcal{L}(X,Y)\colon T\text{ attains its norm at a point of }\overline{S}\right\} $$ is dense in $\mathcal{L}(X,Y)$. Now, given a metric space $M$, if $\mathcal F(M)$ has a subset $S\subseteq S_{\mathcal F(M)}$ of uniformly strongly exposed points, then $S\subseteq \Mol{M}$ by Proposition \ref{prop:extremalidad}, since $S$ is made up of strongly exposed points. Then, as $\Mol{M}$ is norm-closed (use Proposition \ref{prop:extremalidad} again), the following result follows. \begin{proposition}\label{prop:unifstrexpsnadens} Let $M$ be a complete pointed metric space and assume that $B_{\mathcal F(M)}$ is the closed convex hull of a set of uniformly strongly exposed points. Then $\operatorname{SNA}(M,Y)$ is norm dense in ${\mathrm{Lip}}_0(M,Y)$ for every Banach space $Y$.
\end{proposition} In view of the previous proposition, we shall begin with a characterisation, inspired by \cite[Theorem 5.4]{gpr}, of the existence of such a set of uniformly strongly exposed points, which only depends on the metric space $M$. We first need a technical lemma. \begin{lemma}\label{lemma:failZuniformly} Let $M$ be a complete pointed metric space, and let $A = \{m_{x,y}\}_{(x,y)\in \Lambda}$ be a family of molecules in $\mathcal F(M)$. Assume that there is $\varepsilon_0>0$ such that \[ (x,y)_z > \varepsilon_0 \min\{d(x,z), d(y,z)\} \] whenever $m_{x,y}\in A$ and $z\in M\setminus\{x,y\}$. Then, there exists a family $B = \{h_{x,y}\}_{(x,y)\in \Lambda}$ in $S_{{\mathrm{Lip}}_0(M,\mathbb{R})}$ such that \begin{itemize} \item[(a)] $\langle h_{x,y}, m_{x,y}\rangle = 1$ for every $(x,y)\in \Lambda$, and \item[(b)] for every $\varepsilon>0$ there is $\delta = \delta(\varepsilon, \varepsilon_0)>0$ such that \begin{equation}\label{eq:lemmab} \langle h_{x,y}, m_{u,v}\rangle > 1-\delta \ \ \text{ implies } \ \ \norm{m_{x,y} - m_{u,v}}<\varepsilon \end{equation} for every $(x,y)\in \Lambda$ and every $u,v \in M$, $u\neq v$. \end{itemize} \end{lemma} \begin{proof} Fix $\varepsilon_1>0$ with $\frac{\varepsilon_1}{1-\varepsilon_1}<\frac{\varepsilon_0}{4}$. For $x,y\in M$ such that $m_{x,y}$ belongs to $A$, consider the Lipschitz function $g_{x,y}$ defined in~\cite[Proposition 2.8]{ikw}, namely \[ g_{x,y}(z):=\begin{cases} \max\left\{\frac{d(x,y)}{2}-(1-\varepsilon_1)d(z,x), 0\right\} & \text{if } d(z,y)\geq d(z,x),\\ & d(z,y)+(1-2\varepsilon_1)d(z,x)\geq d(x,y), \\ -\max\left\{\frac{d(x,y)}{2}-(1-\varepsilon_1)d(z,y), 0\right\} & \text{if } d(z,x)\geq d(z,y),\\ & d(z,x)+(1-2\varepsilon_1)d(z,y)\geq d(x,y).
\end{cases} \] It is well defined and satisfies that $\norm{g_{x,y}}_{L}=1$, $\langle g_{x,y}, m_{x,y}\rangle = 1$, and \begin{equation}\label{eq:gxy} \langle g_{x,y}, m_{u,v}\rangle >1-\varepsilon_1\ \ \text{ implies } \ \ \max\{d(x,u),d(y,v)\}<\frac{d(x,y)}{4} \end{equation} for any $u,v\in M$, $u\neq v$ (see the proof of Proposition 2.8 in~\cite{ikw}). Consider also the function defined by \[f_{x,y}(t):= \frac{d(x,y)}{2}\frac{d(t,y)-d(t,x)}{d(t,y)+d(t,x)}\] for every $t\in M$, and take $h_{x,y}=\frac{1}{2}(g_{x,y}+f_{x,y})$. Now, one can check that the family $B=\{h_{x,y}\}_{(x,y)\in \Lambda}$ does the job, following word for word the proof of \cite[Theorem 5.4]{gpr}. \end{proof} The previous lemma motivates us to consider the following property, related to the characterisation of the strongly exposed points of the unit ball of Lipschitz-free spaces given in Proposition \ref{prop:extremalidad}.b. \begin{definition} Let $M$ be a metric space and let $A\subset \Mol{M}$. We say that \emph{$A$ is uniformly Gromov rotund} if there is $\varepsilon>0$ such that \[ (x,y)_z > \varepsilon \min\{d(x,z), d(y,z)\} \] whenever $m_{x,y}\in A$ and $z\in M\setminus\{x,y\}$. \end{definition} Observe that, thanks to Proposition \ref{prop:extremalidad}.b, $A$ is uniformly Gromov rotund if and only if it fails property (Z) in a uniform way. We can now give a metric characterisation of when a set of molecules in $\mathcal F(M)$ is uniformly strongly exposed in the following sense. \begin{proposition}\label{prop:charunifstrexp} Let $M$ be a complete pointed metric space and let $A$ be a set of molecules in $\mathcal F(M)$. Then, the following statements are equivalent: \begin{enumerate}[(i)] \item $A$ is uniformly strongly exposed in $B_{\mathcal F(M)}$, \item $A$ is uniformly Gromov rotund. \end{enumerate} \end{proposition} In order to prove the proposition, we need the following lemma. \begin{lemma}\label{lemma:strexp} Let $M$ be a complete pointed metric space.
Let $x,y\in M$, $x\neq y$, and let $f\in {\mathrm{Lip}}_0(M,\mathbb{R})$ be such that $\norm{f}_L=1$ and $\langle f, m_{x,y}\rangle =1$. Then, for every $z\in M\setminus\{x,y\}$ we have that \[ \langle f, m_{x,z}\rangle \geq 1-2\frac{(x,y)_z}{d(x,z)} \quad \text{and} \quad \langle f, m_{z,y}\rangle \geq 1-2\frac{(x,y)_z}{d(y,z)}.\] \end{lemma} \begin{proof} Note that \begin{align*} 1 = \langle f, m_{x,y}\rangle & = \left\langle f, \frac{d(x,z)}{d(x,y)}m_{x,z} + \frac{d(z,y)}{d(x,y)}m_{z,y}\right\rangle \\ & = \frac{d(x,z)}{d(x,y)} \langle f, m_{x,z}\rangle + \frac{d(z,y)}{d(x,y)} \langle f, m_{z,y}\rangle . \end{align*} Thus, \begin{align*} d(x,z)+d(z,y)-2(x,y)_z &= d(x,y) = d(x,z)\langle f, m_{x,z} \rangle + d(z,y) \langle f, m_{z,y}\rangle \\ &\leq d(x,z)\langle f, m_{x,z}\rangle + d(z,y) \end{align*} and the conclusion follows. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:charunifstrexp}] (i)$\Rightarrow$(ii). Let $\{h_{x,y}\}_{m_{x,y}\in A}\subset S_{{\mathrm{Lip}}_0(M,\mathbb{R})}$ be a family which uniformly strongly exposes the family $A$. Take $\delta>0$ such that \[ \sup_{m_{x,y}\in A} \diam(S(B_{\mathcal{F}(M)}, h_{x,y}, \delta)) < \frac{1}{2}.\] Assume that $A$ is not uniformly Gromov rotund. Then, there are $x,y\in M$, $x\neq y$, such that $m_{x,y}\in A$, and $z\in M\setminus\{x,y\}$ such that \[ (x,y)_z < \frac{\delta}{2} \min\{d(x, z), d(y, z)\}.\] By interchanging the roles of $x$ and $y$ if needed, we may assume that $d(x, z)\leq d(y,z)$ and so, $d(y,z)\geq \frac{1}{2}d(x,y)$. Now, Lemma \ref{lemma:strexp} implies that \[ \langle h_{x,y}, m_{x,z}\rangle \geq 1-2\frac{(x,y)_{z}}{d(x,z)} > 1-\delta. \] From this and Lemma \ref{lemma:ineqmolec}, it follows that \[ \frac{1}{2}\leq\frac{d(y,z)}{d(x,y)} \leq \norm{m_{x,y}-m_{x,z}} < \frac{1}{2}\] which is a contradiction. \noindent (ii)$\Rightarrow$(i). 
By hypothesis, there is $\varepsilon_0>0$ such that \[ d(x,z)+d(z,y)>d(x,y)+\varepsilon_0 \min\{d(x,z),d(z,y)\} \] whenever $m_{x,y}\in A$ and $z\in M\setminus\{x,y\}$. Let $B=\{h_{x,y}\}_{(x,y)\in \Lambda}$ be the set provided by Lemma \ref{lemma:failZuniformly}. We claim that $B$ uniformly strongly exposes $A$. Indeed, given $\varepsilon>0$, take $0<\delta<\varepsilon$ such that \[ \langle h_{x,y}, m_{u,v}\rangle > 1-\delta\ \ \text{ implies }\ \ \norm{m_{x,y} - m_{u,v}}<\varepsilon \] for every $m_{x,y}\in A$ and every $u,v \in M$, $u\neq v$. Thus, \[ \diam\bigl(S(B_{\mathcal F(M)}, h_{x,y}, \delta)\cap \Mol{M}\bigr) \leq 2\varepsilon. \] Finally, note that \[ \diam\bigl(S(B_{\mathcal F(M)}, h_{x,y}, \delta^2)\bigr) \leq 2\diam\left(S(B_{\mathcal F(M)}, h_{x,y}, \delta)\cap \Mol{M}\right) + 4\delta \leq 8\varepsilon, \] see e.g.\ Lemma 2.7 in \cite{lppr}. \end{proof} As an immediate consequence of Propositions \ref{prop:unifstrexpsnadens} and \ref{prop:charunifstrexp}, we obtain the following corollary. \begin{corollary} Let $M$ be a complete pointed metric space. If there exists a uniformly Gromov rotund subset $A\subset \Mol{M}$ such that $B_{\mathcal{F}(M)}$ is the closed convex hull of $A$, then $\operatorname{SNA}(M,Y)$ is dense in ${\mathrm{Lip}}_0(M,Y)$ for every Banach space $Y$. \end{corollary} The space ${\mathrm{Lip}}_0(M,\mathbb{R})$ has the Daugavet property if and only if the complete pointed metric space $M$ has property (Z) (see \cite{ikw} for the compact case and the very recent paper \cite{AviMar} for the general case) and if and only if $M$ is a length space \cite[Theorem 3.5]{gpr}. Thus, the previous result shows that the failure of the Daugavet property in a very strong sense implies the density of $\operatorname{SNA}(M, Y)$ in ${\mathrm{Lip}}_0(M,Y)$ for every Banach space $Y$.
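For instance, the corollary applies to any infinite equilateral pointed metric space $M$, that is, one with $d(p,q)=1$ for all distinct $p,q\in M$: given $m_{x,y}\in \Mol{M}$ and $z\in M\setminus\{x,y\}$, one has \[ (x,y)_z=\tfrac{1}{2}\bigl(d(x,z)+d(y,z)-d(x,y)\bigr)=\tfrac{1}{2}=\tfrac{1}{2}\min\{d(x,z),d(y,z)\}>\tfrac{1}{4}\min\{d(x,z),d(y,z)\}, \] so $\Mol{M}$ is uniformly Gromov rotund (with $\varepsilon=\tfrac14$ in the definition). Since $B_{\mathcal F(M)}$ is always the closed convex hull of $\Mol{M}$, the corollary yields that $\operatorname{SNA}(M,Y)$ is dense in ${\mathrm{Lip}}_0(M,Y)$ for every Banach space $Y$.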
Compare with Theorem \ref{teo:length}, where it is shown that if ${\mathrm{Lip}}_0(M,\mathbb{R})$ has the Daugavet property, then $\operatorname{SNA}(M,\mathbb{R})$ is not dense in ${\mathrm{Lip}}_0(M,\mathbb{R})$. \subsection[Property alpha]{Property \texorpdfstring{$\alpha$}{alpha}} In the sequel we will consider a particular way in which a Banach space may contain a subset of uniformly strongly exposed points whose closed convex hull is the whole unit ball. It was introduced in \cite{Schachermayer} by W.~Schachermayer under the name of property $\alpha$ and its main interest is that ``many'' Banach spaces (e.g.\ separable, reflexive, WCG\ldots) can be equivalently renormed to have it. The prototype Banach space with property $\alpha$ is $\ell_1$. \begin{definition} A Banach space $X$ is said to have \emph{property $\alpha$} if there exist a balanced subset $\{x_\lambda\}_{\lambda \in \Lambda}$ of $X$ and a subset $\{x^*_\lambda\}_{\lambda \in \Lambda} \subseteq X^*$ such that \begin{enumerate}[(i)] \item $\lVert x_\lambda \rVert = \lVert x^*_\lambda\rVert = \lvert x^*_\lambda(x_\lambda)\rvert =1$ for all $\lambda\in \Lambda$. \item There exists $0\leq \rho <1$ such that \[ |x^*_\lambda(x_\mu)|\leq \rho \quad \forall \, x_\lambda \neq \pm x_\mu. \] \item $\overline{\co}\left(\{x_\lambda\}_{\lambda \in \Lambda}\right)= B_X$. \end{enumerate} \end{definition} For methodological reasons, we have slightly modified the original definition from \cite{Schachermayer} to an equivalent one in which we require the set $\{x_\lambda\}_{\lambda \in \Lambda}$ to be balanced. It is shown in \cite[Fact in p.~202]{Schachermayer} that if $X$ has property $\alpha$ witnessed by a set $\Gamma \subset S_X$, then $\Gamma$ is a set of uniformly strongly exposed points.
Therefore, if $M$ is a pointed metric space for which $\mathcal{F}(M)$ has property $\alpha$, then Proposition \ref{prop:unifstrexpsnadens} gives that $\operatorname{SNA}(M,Y)$ is dense in ${\mathrm{Lip}}_0(M,Y)$ for every Banach space $Y$. This can also be proved directly by adapting the proof of \cite[Proposition 1.3.a]{Schachermayer} to our case, as it is shown there that the set of operators from $\mathcal{F}(M)$ to $Y$ attaining their norms on points of $\Gamma$ is dense in $\mathcal{L}(\mathcal{F}(M),Y)$, and we just have to observe that $\Gamma\subset \strexp{B_{\mathcal F(M)}}\subset \Mol{M}$ (see Proposition \ref{prop:extremalidad}). \begin{corollary} Let $M$ be a complete pointed metric space for which $\mathcal{F}(M)$ has property $\alpha$. Then, the set $\operatorname{SNA}(M,Y)$ is dense in ${\mathrm{Lip}}_0(M,Y)$ for every Banach space $Y$. \end{corollary} As we have said, if $\Gamma\subset S_{\mathcal F(M)}$ witnesses that $\mathcal F(M)$ has property $\alpha$, then $\Gamma$ is made up of molecules. We can say something more. J.~P.~Moreno proved in \cite[Proposition~3.6]{Moreno} that if a Banach space $X$ has property $\alpha$ witnessed by $\Gamma\subset S_X$, then $\Gamma = \dent{B_X} = \strexp{B_X}$. Indeed, if $x\in S_X$ is a denting point, then the slices of $B_{X}$ containing $x$ are a neighbourhood basis of $x$ for the norm topology in $B_X$. Since $\overline{\co}(\Gamma)=B_X$, we have that every slice of $B_X$ intersects $\Gamma$. It follows that $x\in \overline{\Gamma}$. Finally, if $X$ has property $\alpha$, then the set $\Gamma$ is obviously uniformly discrete, hence closed. Thus, \[ \dent{B_X}\subset \Gamma \subset \strexp{B_X} \subset \dent{B_X}. \] From this and the fact that every preserved extreme point of $B_{\mathcal{F}(M)}$ is a denting point by Proposition~\ref{prop:extremalidad}, we get the following result.
\begin{proposition}\label{prop:alphafreestrexp} Let $M$ be a complete pointed metric space and assume that $\mathcal F(M)$ has property $\alpha$ witnessed by $\Gamma\subset S_{\mathcal F(M)}$. Then, $$\Gamma=\preext{B_{\mathcal F(M)}}=\strexp{B_{\mathcal F(M)}}.$$ \end{proposition} In the sequel, we will get a reformulation of property $\alpha$ in $\mathcal F(M)$. To this end, we need the following elementary characterisation of uniformly discrete subsets of molecules. \begin{lemma}\label{lemma:unifdiscrmolec} Let $M$ be a metric space and consider $A\subset \Mol{M}$. Then, $A$ is uniformly discrete if and only if there exists $\delta>0$ such that \begin{equation}\label{eq:unifdisc} d(x,u)+d(v,y) \geq \delta\, d(x,y) \end{equation} whenever $m_{x,y}$ and $m_{u,v}$ are distinct elements of $A$. \end{lemma} \begin{proof} If $A$ is uniformly discrete, then there is $\delta>0$ such that \[ 2\delta\leq \norm{m_{x,y}-m_{u,v}}\leq 2\frac{d(x,u)+d(y,v)}{d(x,y)},\] where the last inequality follows from Lemma \ref{lemma:ineqmolec}. Conversely, assume that the inequality \eqref{eq:unifdisc} holds for every $m_{x,y}, m_{u,v}\in A$ with $m_{x,y}\neq m_{u,v}$. If one has that $\norm{m_{x,y}-m_{u,v}}<1$ then, again by Lemma \ref{lemma:ineqmolec}, we get that \[\norm{m_{x,y}-m_{u,v}}\geq \frac{\max\{d(x,u),d(y,v)\}}{d(x,y)}\geq \frac{1}{2}\frac{d(x,u)+d(y,v)}{d(x,y)} \geq \frac{\delta}{2}.\] Thus, $\norm{m_{x,y}-m_{u,v}}\geq \min\{1,\delta/2\}$ for $m_{x,y}\neq m_{u,v}$ in $A$. \end{proof} The following proposition characterises the Lipschitz-free spaces with property $\alpha$ in terms of the existence of a norming subset of molecules satisfying certain metrical conditions. \begin{proposition}\label{prop:charalpha} Let $M$ be a complete pointed metric space. The following are equivalent: \begin{enumerate}[(i)] \item $\mathcal F(M)$ has property $\alpha$.
\item There exists $\Lambda\subset \{(p,q)\in M\times M\colon p\neq q\}$ such that, writing $A = \{m_{x,y}\colon (x,y)\in \Lambda\}\subset \Mol{M}$, one has that: \begin{itemize} \item there exists $\delta>0$ such that $d(x,u)+d(y,v)\geq \delta d(x,y)$ for all $(x,y),(u,v)\in \Lambda$ with $(x,y)\neq (u,v)$ (equivalently, $A$ is uniformly discrete); \item there is $\varepsilon>0$ such that \[ (x,y)_z > \varepsilon \min\{d(x,z), d(y,z)\} \] whenever $(x,y)\in \Lambda$ and $z\in M\setminus\{x,y\}$ (equivalently, $A$ is uniformly Gromov rotund); \item $\|f\|_L=\sup\left\{\frac{f(x)-f(y)}{d(x,y)}\colon (x,y)\in \Lambda\right\}$ for every $f\in {\mathrm{Lip}}_0(M,\mathbb{R})$ (equivalently, $B_{\mathcal F(M)} = \overline{\co}(A)$). \end{itemize} \end{enumerate} Moreover, in such a case, the set $A$ coincides with the whole set of strongly exposed points of $B_{\mathcal F(M)}$. \end{proposition} \begin{proof} (i)$\Rightarrow$(ii). Let $A\subset S_{\mathcal F(M)}$ be a set witnessing that $\mathcal F(M)$ has property $\alpha$. Then $B_{\mathcal F(M)} = \overline{\co}(A)$. Moreover, it is clear that $A$ is uniformly discrete and it is known that it is uniformly strongly exposed \cite[Fact in p.~202]{Schachermayer}, so Proposition~\ref{prop:charunifstrexp} and Lemma \ref{lemma:unifdiscrmolec} give the result. (ii)$\Rightarrow$(i). Let $A=\{m_{x,y}\}_{(x,y)\in \Lambda}$ be a set of molecules satisfying the properties in the statement. Let $B=\{h_{x,y}\}_{(x,y)\in \Lambda}\subset S_{{\mathrm{Lip}}_0(M,\mathbb{R})}$ be the family provided by Lemma~\ref{lemma:failZuniformly}. By Lemma~\ref{lemma:unifdiscrmolec}, \[ \varepsilon = \inf\{\norm{m_{x,y}-m_{u,v}}\colon m_{x,y}, m_{u,v}\in A,\, m_{x,y}\neq m_{u,v}\}>0.\] Take $\delta>0$ such that \eqref{eq:lemmab} in Lemma \ref{lemma:failZuniformly} holds for that $\varepsilon$. Then, $$\langle h_{x,y}, m_{u,v}\rangle \leq 1-\delta$$ whenever $m_{x,y}, m_{u,v}\in A$ and $m_{x,y}\neq \pm m_{u,v}$. Thus, $\mathcal F(M)$ has property $\alpha$.
The last assertion follows from Proposition~\ref{prop:alphafreestrexp}. \end{proof} We can provide an easier characterisation in the bounded and uniformly discrete case. \begin{proposition}\label{prop:caralphaudiscreto} Let $M$ be a pointed bounded and uniformly discrete metric space. The following are equivalent: \begin{enumerate}[(i)] \item $\mathcal F(M)$ has property $\alpha$. \item the set $\strexp{B_{\mathcal F(M)}}$ consists of uniformly strongly exposed points (equivalently, it is uniformly Gromov rotund). \item there is $\varepsilon >0$ such that for every $x,y\in M$ with $x\neq y$, \[\text{either}\quad \inf_{z\in M\setminus \{x,y\}} (x,y)_z = 0 \quad \text{or} \quad \inf_{z\in M\setminus\{x,y\}} (x,y)_z \geq \varepsilon.\] \end{enumerate} \end{proposition} \begin{proof} Denote $D = \sup\{d(x,y)\colon x\neq y\}<\infty$ and $\theta = \inf\{d(x,y)\colon x\neq y\}>0$. (i)$\Rightarrow$(ii) follows from Propositions \ref{prop:alphafreestrexp} and \ref{prop:charalpha}. Next, assume that (ii) holds. Then, there is $\varepsilon>0$ such that \[ (x,y)_z \geq \varepsilon \min\{d(x,z),d(z,y)\}\geq \varepsilon \theta\] whenever $m_{x,y}\in \strexp{B_{\mathcal F(M)}}$ and $z\in M\setminus\{x,y\}$. So, given $x, y\in M$, $x\neq y$, either $m_{x,y}$ is strongly exposed, and then $\inf_{z\in M\setminus\{x,y\}} (x,y)_z \geq \varepsilon\theta$, or $m_{x,y}$ is not strongly exposed, and then \[ \inf_{z\in M\setminus\{x,y\}} (x,y)_z \leq D\inf_{z\in M\setminus\{x,y\}} \frac{(x,y)_z}{\min\{d(x,z),d(y,z)\}} =0.\] This gives (iii). Finally, assume that (iii) holds and let $A=\strexp{B_{\mathcal F(M)}}$. Then, for every $m_{x,y}\in A$ we have that $\inf_{z\in M\setminus \{x,y\}} (x,y)_z > 0$ and so, for every $z\in M\setminus\{x,y\}$, \[(x,y)_z \geq \varepsilon\geq \frac{\varepsilon}{D}\min\{d(x,z),d(z,y)\}.\] That is, $A$ is uniformly Gromov rotund. Moreover, $B_{\mathcal F(M)}=\overline{\co}(A)$ since $\mathcal F(M)$ has the RNP (recall that the unit ball of a Banach space with the RNP is the closed convex hull of its strongly exposed points).
Finally, \[ d(x,u)+d(v,y)\geq \delta d(x,y)\] for all distinct pairs of points $(x,y), (u,v) \in \{(p,q)\in M\times M\colon p\neq q\}$, where $\delta = \theta/D$ (indeed, $d(x,u)+d(v,y)\geq \theta$ while $d(x,y)\leq D$). By Proposition~\ref{prop:charalpha}, $\mathcal F(M)$ has property $\alpha$, getting (i). \end{proof} Let us exhibit some examples of metric spaces such that $\mathcal F(M)$ has property $\alpha$. \begin{example}\label{ex:spacesAlpha} The space $\mathcal F(M)$ has property $\alpha$ in the following cases: \begin{enumerate}[(a)] \item $M$ is finite. \item $M$ is a compact subset of $\mathbb R$ with measure $0$. \item There exists a constant $1\leq D<2$ such that $$1\leq d(x,y)<D$$ holds for every pair of distinct points $x,y\in M$ (equivalently, up to rescaling, there are constants $C>0$ and $1\leq D < 2$ such that $C\leq d(x,y)<CD$ for all $x,y\in M$, $x\neq y$). \end{enumerate} \end{example} \begin{proof} (a). Given $m_{x,y}\in \strexp{B_{\mathcal F(M)}}$, consider a strongly exposing functional $g_{x,y}\in S_{{\mathrm{Lip}}_0(M,\mathbb{R})}$. Take $\rho$ to be the maximum of the set \[ \{|\langle g_{x,y}, m_{u,v}\rangle | \colon m_{x,y}\in \strexp{B_{\mathcal F(M)}}, m_{u,v}\in \Mol{M}, m_{x,y}\neq \pm m_{u,v}\}.\] Then, $\rho<1$ since $M$ is finite. Moreover, $\mathcal F(M)$ is finite dimensional and so $B_{\mathcal F(M)}$ is the closed convex hull of its strongly exposed points. Thus, $\mathcal F(M)$ has property $\alpha$. \noindent (b). $\mathcal F(M)$ is isometric to $\ell_1$ by \cite{Godard}, so it clearly has property $\alpha$. \noindent (c). Let $0<\varepsilon<\frac{2}{D}-1$. Observe that given three distinct points $x,y,z\in M$, we get \[\begin{split} \varepsilon \min\{d(x,z),d(y,z)\}& \leq \varepsilon D<2-D\leq d(x,z)+d(y,z)-D\\ & \leq d(x,z)+d(y,z)-d(x,y)=2(x,y)_z. \end{split}\] Consequently, if we define $\Lambda:=\{(p,q)\in M\times M\colon p\neq q\}$, then $\Lambda$ satisfies the condition (ii) in Proposition \ref{prop:charalpha}, and so $\mathcal F(M)$ has property $\alpha$.
\end{proof} The next result provides a characterisation of those concave metric spaces for which $\mathcal F(M)$ has property $\alpha$. \begin{theorem}\label{alpha+unifconc} Let $M$ be a complete pointed concave metric space. Then the following are equivalent: \begin{enumerate}[(i)] \item $\mathcal F(M)$ has property $\alpha$. \item $M$ is uniformly discrete and bounded, and there is $\varepsilon>0$ such that \[ d(x,z)+d(z,y)-d(x,y)\geq \varepsilon\] whenever $x,y,z$ are distinct points in $M$. \end{enumerate} \end{theorem} \begin{proof} Assume first that $\mathcal F(M)$ has property $\alpha$ with constant $\rho>0$. By Proposition~\ref{prop:alphafreestrexp}, the set $\Gamma\subset S_{\mathcal{F}(M)}$ witnessing property $\alpha$ coincides with $\preext{B_{\mathcal F(M)}}$, so $\Gamma=\Mol{M}$ as $M$ is concave. Now, take $m_{x,y},m_{u,y}\in \Mol{M}=\Gamma$ and let $g_{x,y}\in S_{{\mathrm{Lip}}_0(M,\mathbb{R})}$ be the functional associated to $m_{x,y}$. Then, by Lemma \ref{lemma:ineqmolec}, we have that \begin{equation*} 2\frac{d(x,u)}{d(x,y)}\geq \norm{m_{x,y}-m_{u,y}}\geq |g_{x,y}(m_{x,y}-m_{u,y})|\geq 1-\rho. \end{equation*} From here, given $x,u\in M$ we have that \[(1-\rho)\sup\limits_{y\in M} d(x,y)\leq 2d(x,u),\] from where it follows that $M$ is bounded. Moreover, the following estimate holds: \[(1-\rho)\diam(M)\leq 2(1-\rho)\sup\limits_{y\in M} d(x,y)\leq 4d(x,u).\] Since $x,u\in M$ were arbitrary we conclude that $M$ is uniformly discrete. Now, Proposition~\ref{prop:caralphaudiscreto} provides $\varepsilon>0$ such that $(x,y)_z\geq \varepsilon$ whenever $m_{x,y}\in \strexp{B_{\mathcal F(M)}}$ and $z\in M\setminus\{x,y\}$. Since every molecule is strongly exposed, the conclusion follows. Finally, the converse statement follows from Proposition~\ref{prop:caralphaudiscreto}. \end{proof} As an application of the previous theorem, we may show that $D=2$ is not possible in Example \ref{ex:spacesAlpha}.c. 
\begin{example} Let $M=\{0,x_n,y_n\colon n\geq 2\}\subseteq c_0$, where $x_n:=(2-\frac{1}{n}) e_n$ and $y_n:=e_n+(1+\frac{1}{n})e_1$ for every $n\geq 2$. It can be proved routinely that $M$ is concave by using the characterisation of the preserved extreme points given in \cite{ag}. On the other hand, it is clear that the inequality $$1\leq d(x,y)<2$$ holds for every $x,y\in M$ with $x\neq y$. Nevertheless, one has that $$d(0,y_n)+d(y_n,x_n)-d(0,x_n)=\frac{3}{n}$$ for every $n\geq 2$, so $\mathcal F(M)$ fails property $\alpha$ by Theorem \ref{alpha+unifconc}. \end{example} \subsection[Property quasi-alpha]{Property quasi-\texorpdfstring{$\alpha$}{alpha}} In \cite{ChoiSong}, a property is defined which, despite being weaker than property $\alpha$, still implies property A. As in the case of property $\alpha$, we have slightly modified the original definition to an equivalent one which requires the set $\{x_\lambda\}_{\lambda \in \Lambda} \subseteq X$ below to be balanced. \begin{definition} A Banach space $X$ is said to have \textit{property quasi-$\alpha$} if there exist a balanced subset $A:=\{x_\lambda\}_{\lambda \in \Lambda}$ of $X$, a subset $\{x_\lambda^*\}_{\lambda \in \Lambda}\subseteq X^*$, and $\rho \colon \Lambda \longrightarrow \mathbb{R}$ such that \begin{itemize} \item[a)] $\| x_\lambda \|= \| x^*_\lambda\| =\lvert x^*_\lambda (x_\lambda)\rvert=1 $ for all $\lambda \in \Lambda$. \item[b)] $|x^*_\lambda(x_\mu)| \leq \rho(\mu)<1$ for all $x_\lambda \neq \pm x_\mu$. \item[c)] For every $e \in \ext{B_{X^{**}}}$, there exists a subset $A_e \subseteq A$ such that either $e$ or $-e$ belongs to $\overline{A_e}^{\omega^*}$ and $r_e=\sup\{\rho(\mu)\colon x_\mu \in A_e\}<1$.
\end{itemize} \end{definition} It follows that if $\{x_\lambda\}_{\lambda \in \Lambda}$ witnesses that $X$ has property quasi-$\alpha$, then $$B_X=\overline{\co}(\{x_\lambda\colon \lambda \in \Lambda\}).$$ Moreover, the same argument as the one used for property $\alpha$ in \cite[Fact in p.~202]{Schachermayer} shows that for every $\lambda \in \Lambda$, $\varepsilon >0$, and $x \in B_X$, one has that \[ x^*_\lambda(x)> 1-\varepsilon(1-\rho(\lambda)) \Longrightarrow \| x-x_\lambda\| < 2 \varepsilon;\] so each $x_\lambda$ is strongly exposed in $B_X$ by $x^*_\lambda$. But now, as $\sup_{\lambda\in \Lambda}\rho(\lambda)$ may be equal to one, we do not get that $\{x_\lambda\}_{\lambda \in \Lambda}$ is a set of uniformly strongly exposed points. Nevertheless, the proof of Proposition 2.1 in \cite{ChoiSong} shows that if $\mathcal{F}(M)$ has property quasi-$\alpha$ then the set \[ \mathcal{A}:=\left\{T \in \mathcal{L}(\mathcal{F}(M), Y) \colon \| T \|=\|T(x_\lambda)\| \text{ for some } \lambda \in \Lambda \right\} \] is norm-dense in $\mathcal{L}(\mathcal{F}(M), Y) \equiv {\mathrm{Lip}}_0(M,Y)$. Now, every $x_\lambda$ is a strongly exposed point of $B_{\mathcal{F}(M)}$, and so, a molecule by Proposition \ref{prop:extremalidad}. Thus, $\mathcal{A}\subseteq \operatorname{SNA}(M,Y)$. We have proved the following. \begin{proposition}\label{prop:qalphaimplidensity} Let $M$ be a complete pointed metric space and assume that $\mathcal{F}(M)$ has property quasi-$\alpha$. Then $\operatorname{SNA}(M,Y)$ is norm-dense in ${\mathrm{Lip}}_0(M,Y)$ for every Banach space $Y$. \end{proposition} An analogous argument to the one given in the proof of Proposition~\ref{prop:alphafreestrexp} shows the following: \begin{proposition}\label{prop:qalphafreestrexp} Let $M$ be a complete pointed metric space and assume that $\mathcal F(M)$ has property quasi-$\alpha$ witnessed by a set $\Gamma\subset S_{\mathcal F(M)}$.
Then, $$\preext{B_{\mathcal F(M)}}\subset \overline{\Gamma}.$$ \end{proposition} As a consequence, we obtain the following result in the case when $M$ is concave. \begin{proposition}\label{qalpha+unifconc} Let $M$ be a concave complete pointed metric space. If $\mathcal F(M)$ has property quasi-$\alpha$, then the set of isolated points of $M$ is dense in $M$. \end{proposition} \begin{proof} Assume that $\mathcal F(M)$ has property quasi-$\alpha$ witnessed by the sets $\Gamma\subset S_{\mathcal F(M)}$, $\Gamma^*\subset S_{{\mathrm{Lip}}_0(M,\mathbb{R})}$, and the function $\rho\colon \Gamma\longrightarrow \mathbb R$. Take $m_{x,y}\in \Gamma$ and let $g_{x,y}\in \Gamma^*$ be its associated functional. Then, \begin{equation}\label{eq:qalpha} \norm{m_{x,y}-m_{u,v}}\geq |g_{x,y}(m_{x,y}-m_{u,v})|\geq 1-\rho(m_{x,y}) \end{equation} for every $m_{u,v}\in \Gamma$ with $m_{u,v}\neq m_{x,y}$. By Proposition~\ref{prop:qalphafreestrexp}, $$\Mol{M}= \preext{B_{\mathcal F(M)}}\subset \overline{\Gamma}$$ and so \eqref{eq:qalpha} holds also for every $m_{u,v}\in \Mol{M}\setminus \{m_{x,y}\}$. Thus, by Lemma \ref{lemma:ineqmolec}, we have that \[ 2\frac{d(x,u)}{d(x,y)}\geq \norm{m_{x,y}-m_{u,y}} \geq 1-\rho(m_{x,y})\] whenever $m_{x,y}\in \Gamma$ and $u\in M\setminus\{x,y\}$. In particular, the open ball centred at $x$ of radius $\frac{1-\rho(m_{x,y})}{2}d(x,y)$ is a singleton whenever $m_{x,y}\in \Gamma$. This means that the set \[ A = \{x\in M\colon m_{x,y}\in \Gamma \text{ for some } y\in M\setminus\{x\} \}\] is made up of isolated points. In order to prove that $A$ is dense in $M$, consider the Lipschitz function $f(t) = d(t,A)-d(0,A)$ for every $t\in M$, which belongs to ${\mathrm{Lip}}_0(M,\mathbb{R})$, and consider its canonical linear extension $\hat{f}$ from $\mathcal F(M)$ to $\mathbb{R}$. Then, $\hat{f}$ vanishes on the norming set $\Gamma$, so $\hat{f}=0$. Thus, $f=0$, which yields that $\overline{A}=M$. 
\end{proof} \subsection{Relationship between the properties for Lipschitz-free spaces}\label{subsec:relations} In the context of Lipschitz-free spaces over complete metric spaces, Figure~\ref{figure:Lipfree} contains the implications between the properties of the previous subsections. \begin{figure}[h] \centering \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.5cm,main node/.style={rectangle, draw=black, text width=6em, minimum height=2em, text centered}] \node[main node] (alpha) {Property $\alpha$}; \node[main node] (qalpha) at (4.65,-1.2) {Property quasi-$\alpha$}; \node[main node, text width=8em] (use) at (0,-6) {$\begin{array}{c} B_{\mathcal F(M)}=\overline{\co}(S) \\ S \text{ unif.~str.~exp.}\end{array}$}; \node[main node, double, thick, text width=8em] (sna) at (4.65,-3.5) {$\operatorname{SNA}(M, Y) $ dense for all $Y$}; \node[main node] (a) at (4.65, -5.2) {Property A}; \node[main node] (finite) at (9.3,0) { $\begin{array}{c} \text{finite} \\ \text{dimensional}\end{array}$ }; \node[main node] (reflexive) at (9.3, -2.7) {Reflexive}; \node[main node] (rnp) at (9.3, -6) {RNP}; \draw[-implies,double equal sign distance] (alpha) -- (qalpha) node[midway,swap,sloped]{(9)}; \draw[-implies,double equal sign distance] (alpha) -- (use) node[midway]{(5)}; \draw[implies-implies,double equal sign distance] (finite) -- (reflexive) node [midway]{(1)}; \draw[-implies,double equal sign distance,transform canvas={xshift=-0.4em}] (reflexive) -- (rnp); \draw[-implies,double equal sign distance] (rnp) -- (sna) node[midway, sloped]{(4)}; \draw[-implies,double equal sign distance] (use) -- (sna) node[midway, sloped]{(6)}; \draw[-implies,double equal sign distance] (sna) -- (a) node[midway]{(8)}; \draw[-implies,double equal sign distance, transform canvas={yshift=-0.4em}] (finite) -- (alpha) node[midway]{(12)}; \draw[-implies,double equal sign distance] (qalpha) -- (sna) node[midway]{(10)}; \draw[-implies,double equal sign distance, neg] (use) -- (qalpha) 
node[pos=0.65, sloped]{(11)}; \draw[-implies,double equal sign distance, neg] (rnp) -- (qalpha) node[pos=0.65,sloped,swap]{(3)}; \draw[-implies,double equal sign distance, neg, transform canvas={xshift=0.4em}] (rnp) -- (reflexive) node[pos=0.4,swap]{(2)}; \draw[-implies,double equal sign distance, neg, transform canvas={yshift=0.4em}] (alpha) -- (finite) node[midway]{(13)}; \draw[-implies,double equal sign distance, neg] (rnp) -- (use) node[pos=0.55]{(7)}; \end{tikzpicture} \caption{Relations between the sufficient conditions for property A in Lipschitz-free spaces} \label{figure:Lipfree} \end{figure} Let us discuss why the numbered implications and non-implications hold. \noindent (1). It follows since every infinite-dimensional Lipschitz-free space contains an isomorphic copy of $\ell_1$ (this is folklore, but see \cite{cdw} where more is proved). \noindent (2). $\mathcal F(\mathbb N)=\ell_1$. \noindent (3). It follows from the following example. \begin{example}\label{ex:RNPnotQalpha} Let $M=([0,1],|\cdot|^\theta)$, where $0<\theta<1$. Then $\mathcal F(M)$ has the RNP (see Example \ref{ejernp}). Moreover, $M$ is concave \cite[p.~51]{wea5}. By Proposition~\ref{qalpha+unifconc}, $\mathcal F(M)$ does not have property quasi-$\alpha$. \end{example} \noindent (4). It follows from Theorem \ref{teornpdensidad}, whose proof is based on the proof of Bourgain that asserts that RNP implies property A \cite[Theorem~5]{bourgain1977}. \noindent (5). It follows from the definition, introduced in \cite{Schachermayer}. \noindent (6). It follows from Proposition \ref{prop:unifstrexpsnadens}, whose proof is based on \cite[Proposition~1]{lindens}, where it is proved that the existence of such a set $S$ implies property A. \noindent (7). It follows from the following example. 
\begin{example}\label{RNPnocufe} For every $n\in \mathbb{N}$, consider $M_n = \{0,x_n,y_n\}$, where $$d(0,x_n)=d(0,y_n)=1+1/n\qquad \text{and}\qquad d(x_n,y_n)=2,$$ and let $M$ be its $\ell_1$-sum. Then, $\mathcal{F}(M)$ has the RNP, but $B_{\mathcal F(M)}$ is not the closed convex hull of any set of uniformly strongly exposed points. \end{example} \begin{proof} First, $\mathcal{F}(M)$ has the RNP as it is the $\ell_1$-sum of finite-dimensional Banach spaces by Proposition \ref{prop:kaufmann}. Suppose that $B_{\mathcal F(M)} = \overline{\co}(A)$ for some set $A$ of uniformly strongly exposed points. We claim that $m_{x_n,y_n}\in A\cup(-A)$ for every $n\in \mathbb N$. Indeed, assume that $m_{x_n,y_n}\notin A\cup(-A)$. Consider $f\colon M\longrightarrow \mathbb R$ given by $$f(0)=f(x_m)=f(y_m)=0\ \text{ if $m\neq n$},\ \ f(x_n)=-1,\ \text{ and } \ f(y_n)=1.$$ Clearly, $\norm{f}_L=1$. Moreover, we have that $$|\langle f, m_{x,y}\rangle | \leq (1+1/n)^{-1}\qquad \text{for every $m_{x,y}\in \Mol{M}\setminus\{m_{x_n,y_n}, m_{y_n,x_n}\}$.}$$ Thus, $A\cup(-A)$ is not norming, a contradiction. Now, note that \[ d(x_n,0)+d(y_n,0)-d(x_n,y_n) = \frac{2}{n}\] goes to $0$ as $n$ goes to $\infty$, and so $A$ is not uniformly Gromov rotund, which contradicts Proposition \ref{prop:charunifstrexp}. \end{proof} \noindent (8). It is obvious. \noindent (9). It is obvious from the very definitions. \noindent (10). It follows from Proposition \ref{prop:qalphaimplidensity}, whose proof is based on that of \cite[Proposition 2.10]{ChoiSong}, where it is proved that property quasi-$\alpha$ implies property A. \noindent (11). It follows from the following example. \begin{example} Let $M=([0,1],| \cdot |^\theta)$, where $0<\theta<1$. Then, $\Mol{M}$ is a set of uniformly strongly exposed points, $B_{\mathcal{F}(M)}=\overline{\co}(\Mol{M})$, but $\mathcal F(M)$ fails to have property quasi-$\alpha$. \end{example} \begin{proof} In view of Proposition~\ref{qalpha+unifconc}, we just have to show that $\Mol{M}$ is uniformly Gromov rotund.
To this end, define the map $f \colon (0,1)\longrightarrow \mathbb{R}$ given by \[f(\lambda)=\frac{(1-\lambda)^\theta + \lambda^\theta -1}{\min\{(1-\lambda)^\theta, \lambda^\theta\}}. \] It is easy to see that $0<\varepsilon:=\inf\{ f(\lambda) \colon \lambda \in (0,1)\}\leq 1$. Take distinct points $x,y,z \in [0,1]$ such that $x<y$. First, if we assume that $z < x$, then \[ \frac{(x,y)_z}{\min\{|x-z|^\theta, |y-z|^\theta\}} = \frac{|x-z|^\theta+|z-y|^\theta-|x-y|^\theta}{2|x-z|^\theta}\geq \frac{|x-z|^\theta}{2|x-z|^\theta}=\frac{1}{2},\] and the same happens in the case of $y<z$. On the other hand, if $z=\lambda x + (1-\lambda)y$ for some $\lambda \in (0,1)$, then \begin{align*} \frac{(x,y)_z}{\min\{|x-z|^\theta, |y-z|^\theta\}} & = \frac{|x-y|^\theta(1-\lambda)^\theta + |x-y|^\theta \lambda^\theta -|x-y|^\theta}{2|x-y|^\theta \min \{(1-\lambda)^\theta, \lambda^\theta\}} \\ &=\frac{(1-\lambda)^\theta+\lambda^\theta -1}{2\min\{(1-\lambda)^\theta, \lambda^\theta\}}\geq \frac{\varepsilon}{2}.\qedhere \end{align*} \end{proof} \noindent (12). See Example~\ref{ex:spacesAlpha}. \noindent (13). $\mathcal F(\mathbb N)=\ell_1$ has property $\alpha$. All the reverse implications not considered in our diagram (or which do not obviously follow from the ones given in it) are not known in the context of Lipschitz-free spaces. Particularly interesting are the cases of whether the converses of (4) and (8) hold. \section{Weak density of $\operatorname{SNA}(M,\mathbb{R})$}\label{sectidensidadebil} We have seen in the previous sections that the fact that $\operatorname{SNA}(M,\mathbb{R})$ is norm-dense in ${\mathrm{Lip}}_0(M,\mathbb{R})$ imposes severe restrictions on the metric space $M$ (cf.\ e.g.\ Theorem \ref{teo:length} or Corollary \ref{caraR-tree}), and even more so the known sufficient conditions from Section \ref{sec:property_A}. However, that is not the case if we replace norm density with weak density, as the following theorem shows.
\begin{theorem}\label{th:weakdenseSA} Let $M$ be a complete pointed metric space. Then, $\operatorname{SNA}(M,\mathbb{R})$ is weakly sequentially dense in ${\mathrm{Lip}}_0(M,\mathbb{R})$. Moreover, for every $g\in {\mathrm{Lip}}_0(M,\mathbb{R})$ there is a sequence $\{g_n\}\subset\operatorname{SNA}(M,\mathbb{R})$ such that $g_n\stackrel{w}{\longrightarrow}g$, $\norm{g_n}_L\to\norm{g}_L$, and $g_n\to g$ uniformly on bounded sets. \end{theorem} This result extends \cite[Theorem 2.6]{kms}, where it was proved under the assumption of $M$ being a local metric space (equivalently, $M$ being a length space). In order to prove our result we need a pair of lemmata. To begin with, the following result is implicitly proved in \cite[Theorem 2.6]{kms} under the assumption of $M$ being a length space, but thanks to Lemma \ref{lemmaweaknull}, we can show that the same argument works in a much more general setting. \begin{lemma}\label{lemma:sufcond} Let $M$ be a pointed metric space. Assume that there exists a sequence $\{B(x_n,r_n)\}_{n\in \mathbb{N}}$ of disjoint balls of $M$ and a sequence $\{y_n\}_{n\in \mathbb{N}}$ of points of $M$ such that $0< d(x_n,y_n)/r_n \to 0$ and $r_n\to 0$. Then for every $g\in {\mathrm{Lip}}_0(M,\mathbb{R})$ there is a sequence $\{g_n\}\subset\operatorname{SNA}(M,\mathbb{R})$ such that $g_n\stackrel{w}{\longrightarrow}g$, $\norm{g_n}_L\to\norm{g}_L$, and $g_n\to g$ uniformly. \end{lemma} \begin{proof} Given $g\in S_{{\mathrm{Lip}}_0(M,\mathbb{R})}$, just follow the proof of Theorem 2.6 in \cite{kms} to construct a sequence $\{g_n\}$ in $\operatorname{SNA}(M,\mathbb{R})$ with $\operatorname{supp}(g_n-g)\subset B(x_n,r_n)$, $g_n(y_n)=g(y_n)$ and $\norm{g_n}_L =1+2\frac{d(x_n,y_n)}{r_n}\to 1$. Then, $\{g_n\}\stackrel{w}{\longrightarrow} g$ by Lemma \ref{lemmaweaknull}. 
Moreover, \begin{align*} |g_n(x)-g(x)| &\leq |g_n(x)-g_n(y_n)|+|g(y_n)-g(x)| \leq (\norm{g_n}_L+\norm{g}_L) d(y_n, x)\\ &\leq \left(2+2\frac{d(x_n,y_n)}{r_n}\right)(r_n+d(x_n,y_n)) \end{align*} whenever $x\in B(x_n,r_n)$. Since $r_n\to0$, $d(x_n,y_n)/r_n\to0$ and $\operatorname{supp}(g_n-g)\subset B(x_n,r_n)$, it follows that $g_n\to g$ uniformly. \end{proof} The following technical lemma will allow us to apply Lemma \ref{lemma:sufcond} in the case of $M$ being discrete but not uniformly discrete. \begin{lemma}\label{lemadiscre} Let $M$ be a complete metric space. Assume that $M$ is discrete but not uniformly discrete. Then, for every $k\geq 2$ and every $\varepsilon>0$, there exist $x, y\in M$ such that $0<d(x,y)\leq\varepsilon$ and the set $M\setminus B(x,k\,d(x,y))$ is not uniformly discrete. \end{lemma} \begin{proof} Assume that there exist $k\geq 2$ and $\varepsilon>0$ such that \[ \alpha(x,y) := \inf\{d(u,v)\colon u,v\in M\setminus B(x,kd(x,y)),\, u\neq v\}>0\] whenever $0<d(x,y)\leq \varepsilon$. Since $M$ is not uniformly discrete, one can construct inductively two sequences $\{x_n\}$ and $\{y_n\}$ in $M$ such that $0<d(x_1,y_1)\leq \varepsilon$ and $0<d(x_{n+1},y_{n+1})\leq \min\{\alpha(x_n,y_n), 2^{-n-1}\varepsilon\}$ for every $n\in \mathbb N$. It follows that either $x_{n+1}\in B(x_n, kd(x_n, y_n))$ or $y_{n+1}\in B(x_n, kd(x_n,y_n))$. In any case, \[x_{n+1} \in B(x_n, kd(x_n,y_n)+d(x_{n+1},y_{n+1}))\subset B(x_n, 2^{-n}\varepsilon (k+1/2)).\] Thus, $\{x_n\}$ is Cauchy and so it has a limit in $M$, say $x$. Moreover, it is clear that $\{y_n\}$ also converges to $x$. Since $M$ is discrete, we conclude the existence of $n\in \mathbb N$ such that $x_n=y_n$, a contradiction. \end{proof} \begin{proof}[Proof of Theorem \ref{th:weakdenseSA}] We distinguish several cases depending on the properties of the set of cluster points $M'$. If $M'$ is infinite, then Lemma~\ref{lemma:sufcond} applies and so $\operatorname{SNA}(M,\mathbb{R})$ is weakly sequentially dense. 
Indeed, in such a case it is not difficult to construct an infinite sequence of disjoint balls centered at (different) cluster points; as the centers are cluster points, we may also get the sequence $\{y_n\}_{n\in \mathbb{N}}$. If $M'$ is empty, then we distinguish two more cases: \begin{itemize} \item If $M$ is uniformly discrete, then $\mathcal F(M)$ has the RNP (see Example \ref{ejernp}), and so $\operatorname{SNA}(M,\mathbb{R})$ is indeed norm-dense in ${\mathrm{Lip}}_0(M,\mathbb{R})$ by Theorem \ref{teornpdensidad}. Note that if $\norm{g_n-g}_L\to 0$ then $g_n\to g$ uniformly on bounded sets. \item If $M$ is discrete but not uniformly discrete, then we can inductively apply Lemma~\ref{lemadiscre} to find sequences $\{x_n\}$, $\{y_n\}$ in $M$ such that, for every $n\in \mathbb N$, the space $M\setminus \bigcup_{m=1}^n B(x_m, 2md(x_m,y_m))$ is discrete but not uniformly discrete, $x_{n+1},y_{n+1}\in M\setminus \bigcup_{m=1}^n B(x_m, 2md(x_m,y_m))$ and $$d(x_{n+1},y_{n+1})\leq\min\left\{\frac{n}{n+1}d(x_n, y_n), n^{-2}\right\}.$$ It is easy to check that the balls $\{B(x_n, nd(x_n,y_n))\}$ are pairwise disjoint and satisfy the requirement of Lemma~\ref{lemma:sufcond}. The conclusion follows. \end{itemize} It remains to consider the case when $M'$ is non-empty and finite, say $M'=\{a_1,\ldots, a_k\}$. Moreover, we may assume that $a_1=0$. Given $\varepsilon>0$, we denote $E_\varepsilon :=\bigcap_{i=1}^k \bigl[M\setminus B(a_i,\varepsilon)\bigr]$. If $E_\varepsilon$ is finite for every $\varepsilon>0$, then $M$ is compact and countable. Then, $\mathcal F(M)$ has the RNP (see Example \ref{ejernp}) and the conclusion follows. Thus, we may and do assume that there is $0<\varepsilon_0<\frac{1}{4}\min_{i\neq j}\{d(a_i,a_j)\}$ such that $E_{\varepsilon_0}$ is infinite. Moreover, note that $E_\varepsilon$ is discrete for every $\varepsilon>0$. 
If there is $0<\varepsilon\leq \varepsilon_0$ such that $E_\varepsilon$ is not uniformly discrete in $M$, then the same argument as above provides a sequence of disjoint balls such that Lemma~\ref{lemma:sufcond} applies. Thus, we may also suppose that $E_\varepsilon$ is infinite and uniformly discrete in $M$ for every $0<\varepsilon\leq \varepsilon_0$. By rescaling the metric space, we may assume that $\varepsilon_0=2^{-1}$. For $n\in \mathbb{N}$ and $i\in\{1,\ldots, k\}$, let us denote $C_n^i := E_{(n+1)^{-1}}\cap B(a_i, n^{-1})$ and \[ \alpha_n^i:= \inf\{d(x,M\setminus\{x\})\colon x\in C_n^i\},\] with the convention that $\inf\emptyset=+\infty$. Note, in passing, that $$M= E_{2^{-1}} \cup \{a_1,\ldots,a_k\} \cup \bigcup_{n,i} C_n^i.$$ Now, we distinguish two cases. \emph{Case 1}: assume that there is $i\in \{1,\ldots, k\}$ such that $\liminf_{n\to\infty} n\alpha_n^i=0$. Then we claim that it is possible to find a sequence $\{j_n\}$ in $\mathbb N$, and sequences $\{x_n\}$ and $\{y_n\}$ in $M$, such that: \begin{enumerate} \item $3nd(x_n, y_n)< (j_n+1)^{-1}$ for every $n$; \item $4j_{n+1}^{-1} < (j_n+1)^{-1} -3nd(x_n,y_n)$ for every $n$; \item $x_n\in C_{j_{n}}^i$. \end{enumerate} Indeed, take $j_1\geq 1$ such that $6j_1\alpha_{j_1}^i< 1$. Then there is $x_1\in C_{j_1}^i$ such that $$3d(x_1, M\setminus \{x_1\}) < 2^{-1}j_1^{-1}\leq (j_1+1)^{-1}.$$ Thus, there is $y_1\in M$ with $3d(x_1,y_1)<(j_1+1)^{-1}$. Now, assume that we have defined $x_{n}$, $y_{n}$ and $j_n$, and let us define $x_{n+1}$, $y_{n+1}$ and $j_{n+1}$. By condition (1), we can take $j_{n+1}\in \mathbb N$ such that $$4j_{n+1}^{-1}<(j_n+1)^{-1} -3nd(x_n,y_n) \quad \text{ and } \quad 6nj_{n+1}\alpha_{j_{n+1}}^i < 1.$$ Then, there are $x_{n+1}\in C_{j_{n+1}}^i$ and $y_{n+1}\in M$ such that \[ 3nd(x_{n+1}, y_{n+1})< 2^{-1}j_{n+1}^{-1}\leq (j_{n+1}+1)^{-1}.\] This completes the construction of the sequences $\{x_n\}$, $\{y_n\}$ and $\{j_n\}$. 
Now, we claim that $$B(x_n, 3nd(x_n,y_n))\cap B(x_m, 3md(x_m, y_m)) = \emptyset$$ whenever $n\neq m$. Indeed, assume that $n<m$. It follows from (2) and (3) that \[ B(x_n, 3nd(x_n,y_n))\cap B(a_i, 4j_{n+1}^{-1}) = \emptyset, \] since every $z\in B(x_n, 3nd(x_n,y_n))$ satisfies $d(z,a_i)\geq d(x_n,a_i)-3nd(x_n,y_n)\geq (j_n+1)^{-1}-3nd(x_n,y_n)>4j_{n+1}^{-1}$. Moreover, from (1) and (3) it follows that \begin{align*} B(x_m, 3md(x_m, y_m)) &\subset B(x_m, 3(j_m+1)^{-1}) \\ &\subset B(a_i, 3(j_m+1)^{-1} + j_m^{-1}) \subset B(a_i, 4j_m^{-1}). \end{align*} Finally, note that $4j_{m}^{-1}\leq 4j_{n+1}^{-1}$. Thus $B(x_m, 3md(x_m, y_m))$ is contained in $B(a_i, 4j_{n+1}^{-1})$ and so, it does not intersect $B(x_n, 3nd(x_n,y_n))$. Therefore, we can apply Lemma \ref{lemma:sufcond} to get that $\operatorname{SNA}(M,\mathbb{R})$ is weakly sequentially dense. This completes the proof in the first case. \emph{Case 2}: assume now that there is a constant $C>0$ such that $C\leq n\alpha_n^i$ for every $n\in \mathbb N$ and $i\in\{1,\ldots, k\}$. We will show that in this case $\mathcal F(M)$ has the RNP. To this end, we will apply Proposition \ref{prop:kaufmann} several times in order to decompose $\mathcal F(M)$ as an $\ell_1$-sum of spaces with the RNP. Let us denote $E = E_{2^{-1}} \cup \{0\}$ and $N = \bigcup_{i=1}^k B(a_i, 1/2)$. We claim that $\mathcal F(M)$ is isomorphic to $\mathcal F(E) \oplus_1 \mathcal F(N)$. Note that $N$ is bounded and so, $R=\sup\{d(x,0)\colon x\in N\}<+\infty$. Moreover, note also that $E$ is uniformly discrete in $M$ and so, $$\alpha:=\inf\{d(x,y)\colon x\in E, y\in N, x\neq y\}>0.$$ Thus, given $x\in E$ and $y\in N$, we have that \[ d(x,0)+d(y,0) \leq d(x,y)+2d(y,0)\leq \left(1+2\frac{R}{\alpha}\right)d(x,y).\] By applying Proposition \ref{prop:kaufmann}, we get the claim. Now, for $i\in\{1,\ldots,k\}$, denote $\tilde{C}_0^i=\{0,a_i\}$, $\tilde{C}_n^i:= C_n^i\cup\{a_i\}$ if $n\geq 1$ and $\tilde{C}^i:=\bigcup_{n=0}^\infty \tilde{C}_n^i$. Note that $N= \bigcup_{i=1}^k \tilde{C}^i$ and $\tilde{C}^i\cap \tilde{C}^j=\{0\}$ if $i\neq j$. 
We claim that there is a constant $L>0$ such that \[ d(x,0)+d(y,0)\leq L\, d(x,y) \] whenever $x\in \tilde{C}^i$ and $y\in \tilde{C}^j$ with $i\neq j$. Take such an $x$ and $y$. Note that \[\begin{split} d(x,y)&\geq d(a_i, a_j)-d(x,a_i)-d(y,a_j) \geq \frac{\min_{i\neq j}d(a_i,a_j)}{2}\\ & \geq \frac{\min_{i\neq j}d(a_i,a_j)}{4\diam(N)}(d(x,0)+d(y,0)). \end{split}\] Therefore, $L = \frac{4\diam(N)}{\min_{i\neq j}d(a_i,a_j)}$ does the job. This shows that \[ \mathcal F(M) \approx \mathcal F(E)\oplus_1\mathcal F(N) \approx \mathcal F(E) \oplus_1 \mathcal F(\tilde{C}^1) \oplus_1 \cdots \oplus_1\mathcal F(\tilde{C}^k). \] Finally, we will show that $\mathcal F(\tilde{C}^i)\approx \left[\bigoplus_{n=0}^\infty \mathcal F(\tilde{C}_{n}^i)\right]_{\ell_1}$ for every $i\in \{1,\ldots, k\}$. To this end, consider $a_i$ as the distinguished point in $\tilde{C}^i$ and notice that $\tilde{C}_n^i\cap \tilde{C}_m^i = \{a_i\}$ if $n\neq m$. Fix $n, m\in \mathbb N\cup\{0\}$ with $n<m$, take $x\in \tilde{C}_n^i$ and $y\in \tilde{C}_m^i$ with $x\neq y$. Then \[ d(y,a_i) \leq \frac{1}{m} \leq \frac{\alpha_m^i}{C} \leq \frac{d(x,y)}{C} \] by definition of $\alpha_m^i$, and so \[ d(x, a_i) + d(y,a_i) \leq d(x,y) + 2d(y,a_i) \leq (1+2C^{-1})d(x,y).\] Thus, we can apply Proposition \ref{prop:kaufmann} to get that $\mathcal F(\tilde{C}^i)\approx \left[\bigoplus_{n=0}^\infty \mathcal F(\tilde{C}_{n}^i)\right]_{\ell_1}$. Therefore, \[ \mathcal F(M) \approx \mathcal F(E) \oplus_1 \left[\bigoplus_{n,i} \mathcal F(\tilde{C}_{n}^i)\right]_{\ell_1} \] where each of the summands has the RNP, as it is the Lipschitz-free space of a uniformly discrete metric space (see Example \ref{ejernp}). \end{proof} Let us finish the section with some observations. \begin{remark}{\slshape It follows from Theorem \ref{th:weakdenseSA} that the linear span of $\operatorname{SNA}(M,\mathbb{R})$ is norm dense in ${\mathrm{Lip}}_0(M,\mathbb{R})$ for every metric space $M$. 
When $\mathcal{F}(M)$ has the RNP, one actually has that } $$\operatorname{SNA}(M,\mathbb{R})-\operatorname{SNA}(M,\mathbb{R})={\mathrm{Lip}}_0(M,\mathbb{R}).$$ Indeed, it follows from \cite[Theorem 8]{bourgain1977} that in such a case $\operatorname{SNA}(M,\mathbb{R})$ contains a dense $G_\delta$-subset of ${\mathrm{Lip}}_0(M,\mathbb{R})$, so $\operatorname{SNA}(M,\mathbb{R})$ is residual. It then follows that $\operatorname{SNA}(M,\mathbb{R})-\operatorname{SNA}(M,\mathbb{R})={\mathrm{Lip}}_0(M,\mathbb{R})$ from the Baire Category Theorem (indeed, an easy argument is given in \cite[Proposition 5.5]{KLMW2018}: just observe that for every $f\in {\mathrm{Lip}}_0(M,\mathbb{R})$, $[f+\operatorname{SNA}(M,\mathbb{R})]\cap \operatorname{SNA}(M,\mathbb{R})$ is not empty since, otherwise, the second category set $f+\operatorname{SNA}(M,\mathbb{R})$ would be contained in the first category set ${\mathrm{Lip}}_0(M,\mathbb{R})\setminus \operatorname{SNA}(M,\mathbb{R})$, which is impossible). \end{remark} Next, we observe that viewing ${\mathrm{Lip}}_0(M,\mathbb{R})\equiv \mathcal{L}(\mathcal{F}(M),\mathbb{R})\equiv \mathcal{F}(M)^*$, the Bishop-Phelps theorem gives that the set of those elements in ${\mathrm{Lip}}_0(M,\mathbb{R})$ which attain their norm \emph{as elements of the dual of $\mathcal{F}(M)$} is always norm dense. On the other hand, $\operatorname{SNA}(M,\mathbb{R})$ is the set of elements in $\mathcal{F}(M)^*$ which attain their norm at a molecule. As the unit ball of $\mathcal{F}(M)$ is the closed convex hull of $\Mol{M}$, one may wonder whether Theorem \ref{th:weakdenseSA} actually follows from these facts, that is, if whenever a subset $A$ of a Banach space $X$ satisfies that $B_X=\overline{\co}(A)$, then the set of elements of $X^*$ which attain their norms at a point of $A$ is weakly dense in $X^*$. This is not true in general, as the following example shows. 
\begin{example} Let $X=c_0 \ensuremath{\widehat{\otimes}_\pi} Y$ be the projective tensor product of $c_0$ and $Y$, where $Y$ is an equivalent renorming of $\ell_1$ such that $Y^*$ is strictly convex (see e.g.\ \cite[Theorem~II.2.6]{dgz}). We consider the subset of $B_X$ given by $$A:=\{x\otimes y\colon x\in B_{c_0},\, y\in B_{Y}\}$$ which satisfies that $B_X = \overline{\co}(A)$ (see e.g.\ \cite[Proposition~2.2]{ryan}). Next, observe that if an element of $X^* \equiv \mathcal L(c_0, Y^{*})$ attains its norm at a point of $A$ then, in particular, it attains its norm as an operator from $c_0$ to $Y^*$ (actually more), that is, the set of elements of $X^*$ attaining their norms at points of $A$ is contained in $\operatorname{NA}(c_0,Y^*)$. But this set is not weakly dense since it is contained in the space of compact operators $\mathcal K(c_0, Y^{*})$ by \cite[Proposition 4]{lindens} and there are non-compact operators from $c_0$ to $Y^*$. \end{example} \section{Octahedrality of the bidual norm of Lipschitz-free spaces}\label{sectiocta} As we have pointed out in the Introduction, it is proved in \cite[Theorem 2.4]{blr} that $\mathcal F(M)$ has an octahedral norm whenever $M$ is not uniformly discrete and bounded. The idea of the proof in \cite{blr} is to show that, under this hypothesis, every convex combination of weak-star slices of the unit ball of ${\mathrm{Lip}}_0(M,\mathbb{R})$ has diameter two, and then use \cite[Theorem 2.1]{blroctajfa} to get the octahedrality of the predual $\mathcal{F}(M)$ of ${\mathrm{Lip}}_0(M,\mathbb{R})$. The proof strongly relies on the fact that on bounded subsets of ${\mathrm{Lip}}_0(M,\mathbb{R})$ there is a good characterisation of the weak-star convergence: it agrees with the pointwise convergence. 
If one wants to prove the octahedrality of the norm of the bidual space of $\mathcal{F}(M)$ for some $M$, the analogous way is to show that every convex combination of weak slices of the unit ball of ${\mathrm{Lip}}_0(M,\mathbb{R})$ has diameter two and then use \cite[Corollary 2.2]{blroctajfa} to get from this fact the octahedrality of the norm of $\mathcal{F}(M)^{**}={\mathrm{Lip}}_0(M,\mathbb{R})^*$. The main difficulty for this is that, to the best of our knowledge, the weak topology on bounded sets of ${\mathrm{Lip}}_0(M,\mathbb{R})$ is not well understood. Nevertheless, our Lemma \ref{lemmaweaknull} (jointly with \cite[Corollary 2.2]{blroctajfa}) allows us to provide the following result. \begin{theorem}\label{teoctabiduallibre} Let $M$ be a complete metric space. If $M'$ is infinite or $M$ is discrete but not uniformly discrete, then the norm of $\mathcal F(M)^{**}$ is octahedral. \end{theorem} \begin{remark}{\slshape Note that the previous result is not sharp. For instance, it is well-known that the bidual norm of $\mathcal F(\mathbb N)=\ell_1$ is octahedral because every convex combination of slices of $B_{\ell_\infty}$ has diameter two \cite[Theorem 4.2]{aln}, but this result is not covered by the assumptions of our theorem.} \end{remark} As we announced above, in order to prove Theorem \ref{teoctabiduallibre}, we will focus on proving that ${\mathrm{Lip}}_0(M,\mathbb{R})$ has the so-called SD2P. Recall that a Banach space $X$ has the \emph{SD2P} if every convex combination of weak slices of $B_X$ has diameter two. In fact, we will consider the following stronger notion, introduced in the recent paper \cite{anp}. 
\begin{definition} A Banach space $X$ has the \textit{symmetric strong diameter two property} (\emph{SSD2P} in short) if for every $n\in\mathbb N$, all slices $S_1,\ldots, S_n$ of $B_X$, and every $\varepsilon>0$, there are $x_i\in S_i$ for every $i\in\{1,\ldots, n\}$ and there exists $\varphi\in B_X$ with $\Vert \varphi\Vert>1-\varepsilon$ such that $x_i\pm \varphi\in S_i$ for every $i\in\{1,\ldots, n\}$. \end{definition} If $X$ is a dual Banach space, the weak-star version of the previous property (the \emph{weak-star-SSD2P}), defined in the natural way, was considered in \cite[Definition~5.1]{hlln}. It is easy to prove (see e.g.\ \cite[Lemma 4.1]{aln}) that the SSD2P implies the SD2P, but the converse result is not true \cite[Remark 3.2]{hlln}. Our next result is an abstract condition to get the SSD2P in certain spaces of Lipschitz functions. \begin{lemma}\label{lematecnissd2p} Let $M$ be a pointed metric space and assume that there exists a pair of sequences $\{x_n\},\{y_n\}$ in $M$ satisfying that $nd(x_n,y_n)\longrightarrow 0$ and that the balls $B(x_n, nd(x_n,y_n))$ are pairwise disjoint. Then, ${\mathrm{Lip}}_0(M,\mathbb{R})$ has the SSD2P. \end{lemma} \begin{proof} By \cite[Theorem~2.1, (d)]{hlln}, it is enough to prove that given $N\in \mathbb{N}$ and norm-one Lipschitz functions $f_1,\ldots,f_N$, there are a weakly null sequence $\{h_n\}_{n\in \mathbb{N}}$ of norm-one Lipschitz functions and sequences $\{g_i^n\}_{n\in\mathbb{N}}$ such that $\|g_i^n\|\longrightarrow 1$ and that $g_i^n\longrightarrow f_i$ weakly for every $i=1,\ldots,N$, satisfying that $\|g_i^n \pm h_n\|\longrightarrow 1$ for every $i=1,\ldots,N$. Let us construct the sequences. 
Given $i\in\{1,\ldots, N\}$ and $n\in\mathbb N$, we define $$g_i^n\colon [M\setminus B(x_n,nd(x_n,y_n))]\cup B(x_n, n^\frac{2}{3}d(x_n,y_n))\longrightarrow \mathbb R$$ by the equation $$g_i^n(x):=\left\{\begin{array}{cc} f_i(x_n) & x\in B\left(x_n,n^\frac{2}{3}d(x_n,y_n)\right),\\ f_i(x) & x\notin B(x_n, nd(x_n,y_n)). \end{array} \right.$$ It follows that $\Vert g_i^n\Vert_L\longrightarrow 1$. Indeed, pick $$x\in B(x_n,n^\frac{2}{3}d(x_n,y_n))\quad \text{and}\quad y\notin B(x_n, nd(x_n,y_n))$$ (the remaining cases are immediate). Then, \begin{align*} \frac{\vert g_i^n(x)-g_i^n(y)\vert}{d(x,y)}&=\frac{\vert f_i(x_n)-f_i(y)\vert}{d(x,y)}\leq \frac{d(x_n,y)}{d(x,y)}\leq \frac{d(x,y)+d(x,x_n)}{d(x,y)}\\ &=1+\frac{d(x,x_n)}{d(x,y)} \leq 1+\frac{n^\frac{2}{3}d(x_n,y_n)}{nd(x_n,y_n)}=1+\frac{1}{n^\frac{1}{3}}\longrightarrow 1. \end{align*} By McShane's extension theorem, we can consider that the functions $g_i^n$ are defined on the whole of $M$. Now, observe that $\operatorname{supp}(g_i^n-f_i)\subseteq B(x_n,nd(x_n,y_n))$, and these balls are pairwise disjoint by the assumption. Consequently, $g_i^n\stackrel{w}{\longrightarrow} f_i$ for every $i=1,\ldots,N$ by Lemma \ref{lemmaweaknull}. Now, again the assumptions on the balls imply the existence of a sequence $\{h_n\}\subseteq {\mathrm{Lip}}_0(M,\mathbb{R})$ such that, for every $n\in\mathbb N$, we have that $\Vert h_n\Vert_L=1$, that $\operatorname{supp}(h_n)\subseteq B(x_n,n^\frac{1}{3}d(x_n,y_n))$ and that $h_n(x_n)=0$. Again from the disjointness of the supports, we conclude that $\{h_n\}$ is weakly null from Lemma \ref{lemmaweaknull}. Let us prove that $\Vert g_i^n\pm h_n\Vert_L\longrightarrow 1$. To this end, pick $i\in\{1,\ldots, N\}$ and $n\in\mathbb N$. Given $x,y\in M$ with $x\neq y$, we have that $$ \frac{\vert (g_i^n\pm h_n)(x)-(g_i^n\pm h_n)(y)\vert}{d(x,y)}\leq \underbrace{\frac{\vert g_i^n(x)-g_i^n(y)\vert}{d(x,y)}}_A+\underbrace{\frac{\vert h_n(x)-h_n(y)\vert}{d(x,y)}}_B=:C. 
$$ Let us obtain an upper bound for $C$: \begin{itemize} \item If $B=0$ then $C\leq \Vert g_i^n\Vert_L$. \item If $B\neq 0$, then either $x$ or $y$ belongs to $B(x_n,n^\frac{1}{3}d(x_n,y_n))$, so let us assume that such a point is $x$. In this case, notice that $g_i^n(x)=f_i(x_n)$. Now, if $y\in B(x_n,n^\frac{2}{3}d(x_n,y_n))$ then $g_i^n(y)=f_i(x_n)$, whence $A=0$ and $C\leq 1$ in this case. Otherwise, if $y\notin B(x_n,n^\frac{2}{3} d(x_n,y_n))$, then $h_n(y)=0$. Furthermore, $$ d(x,y)\geq (n^\frac{2}{3}-n^\frac{1}{3})d(x_n,y_n). $$ So, taking $n \geq 2$, we get $$ B\leq \frac{d(x_n,x)}{d(x,y)}\leq \frac{n^\frac{1}{3}d(x_n,y_n)}{(n^\frac{2}{3}-n^\frac{1}{3})d(x_n,y_n)} =\frac{1}{n^\frac{1}{3}-1}, $$ and then $C\leq \Vert g_i^n\Vert_L+\frac{1}{n^\frac{1}{3}-1}$. \end{itemize} Taking supremum in $x,y\in M$ with $x\neq y$, we get that $$ \Vert g_i^n\pm h_n\Vert_L\leq \Vert g_i^n\Vert_L +\frac{1}{n^\frac{1}{3}-1} $$ for every $n \geq 2$. Since it is not difficult to prove that $\Vert g_i^n\pm h_n\Vert_L\geq 1$ (it is enough to consider points of $B(x_n, n^\frac{1}{3}d(x_n,y_n))$, using the assumption that $\Vert h_n\Vert_L=1$), we get that $\Vert g_i^n\pm h_n\Vert_L\longrightarrow 1$. \end{proof} As a consequence of the previous result we get the promised result about the SSD2P. \begin{theorem}\label{teogenessd2pesc} Let $M$ be a complete pointed metric space. If $M'$ is infinite or $M$ is discrete but not uniformly discrete, then ${\mathrm{Lip}}_0(M,\mathbb{R})$ has the SSD2P. \end{theorem} Note that, as announced, Theorem \ref{teoctabiduallibre} follows from this result by \cite[Corollary 2.2]{blroctajfa}. \begin{proof}[Proof of Theorem \ref{teogenessd2pesc}] If $M'$ is infinite, it is not difficult to construct an infinite sequence of disjoint balls centered at (different) cluster points and use the fact that the centers of the balls are cluster points to get a sequence $\{y_n\}_{n\in \mathbb{N}}$ that allows us to use Lemma~\ref{lematecnissd2p}. 
On the other hand, if $M$ is discrete and not uniformly discrete, we can construct by Lemma~\ref{lemadiscre} a sequence of pairs $(x_n,y_n)$ such that, for every $n\in\mathbb N$, $0<d(x_n,y_n)<\frac{1}{n^2}$ and such that the balls $B(x_n,nd(x_n,y_n))$ are pairwise disjoint, so again Lemma~\ref{lematecnissd2p} applies. \end{proof} The following comment is pertinent. \begin{remark}{\slshape Note that Theorem \ref{teogenessd2pesc} improves, under its hypotheses, several known results about the big slice phenomena in spaces of Lipschitz functions. More precisely, given a metric space $M$ such that $M'$ is infinite or $M$ is discrete but not uniformly discrete, Theorem \ref{teogenessd2pesc} improves the consequences obtained in \cite{blr} (respectively, \cite{hlln}, \cite{ivak}) in ${\mathrm{Lip}}_0(M,\mathbb{R})$, namely, that ${\mathrm{Lip}}_0(M,\mathbb{R})$ has the weak-star-SD2P (respectively, the weak-star-SSD2P, the property that every slice of its unit ball has diameter two).} \end{remark} In the compact case, we get the following optimal result. \begin{corollary} Let $M$ be an infinite compact metric space. Then the norm of $\mathcal F(M)^{**}$ is octahedral. \end{corollary} \begin{proof} The case of $M'$ being infinite follows from Theorem \ref{teoctabiduallibre}. Otherwise, $M'$ is finite and then, $M$ is a countable compact metric space. Therefore, by \cite[Theorem~2.1]{dalet} there is a Banach space $Z$ such that $Z^{**}={\mathrm{Lip}}_0(M,\mathbb{R})$. Actually, $Z$ can be considered to be the so-called little Lipschitz space ${\mathrm{lip}}_0(M,\mathbb{R})$, see \cite[Definition 3.1.1]{wea5} for background. Thus, $Z$ is a non-reflexive $M$-embedded Banach space by \cite[Theorem~6.6]{Kalton04}. 
As a consequence, both $Z$ and $Z^{**}={\mathrm{Lip}}_0(M,\mathbb{R})$ satisfy that every convex combination of weak slices of their unit ball has diameter two by \cite[Theorem~4.10]{aln}, and so the norm of $\mathcal F(M)^{**}={\mathrm{Lip}}_0(M,\mathbb{R})^*$ is octahedral by \cite[Corollary 2.2]{blroctajfa}. \end{proof} Let us now briefly discuss a possible version of Theorem \ref{teogenessd2pesc} for vector-valued Lipschitz maps. We recall that given a metric space $M$ and a Banach space $Y$, it is said that the pair $(M,Y)$ satisfies the \emph{contraction-extension property} (\emph{CEP} in short) if McShane's extension theorem holds for $Y$-valued Lipschitz maps from subsets of $M$, that is, given $N\subseteq M$ and a Lipschitz function $f\colon N\longrightarrow Y$, there exists a Lipschitz function $F\colon M\longrightarrow Y$ which extends $f$ and satisfies that $$\Vert F\Vert_{{\mathrm{Lip}}_0(M,Y)}=\Vert f\Vert_{{\mathrm{Lip}}_0(N,Y)}.$$ On the one hand, in the particular case where $M$ is a Banach space, the definition given above agrees with the one given in \cite{beli}. On the other hand, let us give some examples of pairs which have the CEP. First of all, given a metric space $M$, the pair $(M,\mathbb R)$ has the CEP by McShane's extension theorem. Actually, the pair $(M,\ell_\infty(\Gamma))$ has the CEP for every set $\Gamma$. Another example coming from \cite[Chapter 2]{beli} is the fact that the pair $(H,H)$ has the CEP whenever $H$ is any Hilbert space. Anyway, the CEP is a restrictive property as, for instance, if $Y$ is a strictly convex Banach space such that there exists a Banach space $X$ with $\dim(X)\geq 2$ for which the pair $(X,Y)$ has the CEP, then $Y$ is a Hilbert space \cite[Theorem~2.11]{beli}. See also \cite{Gode-NWEJM} for the relation between the extension of certain vector-valued Lipschitz maps and the approximation property of the Lipschitz-free spaces in the context of compact metric spaces. 
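As a side remark on the scalar case just mentioned, the extension behind McShane's theorem can be written down explicitly. The following display is a standard sketch, included only for illustration and not taken from the results of this paper, of why the pair $(M,\mathbb R)$ always has the CEP.

```latex
% Explicit McShane extension (standard fact; included as an illustration).
% Given N \subseteq M and f : N \to \mathbb{R} with
% L := \|f\|_{\mathrm{Lip}_0(N,\mathbb{R})}, the infimal convolution
\[
  F(x) := \inf_{y\in N}\bigl\{ f(y) + L\, d(x,y) \bigr\}, \qquad x\in M,
\]
% is an L-Lipschitz extension of f: for x \in N one has F(x) \leq f(x)
% (take y = x), while f(x) \leq f(y) + L\, d(x,y) for every y \in N gives
% F(x) \geq f(x); moreover, F is L-Lipschitz as an infimum of the
% L-Lipschitz functions x \mapsto f(y) + L\, d(x,y).
```

No such explicit formula is available in the vector-valued setting, which is one way to see why the CEP is a genuine restriction on the pair $(M,Y)$.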
Note that, following the proof of Lemma \ref{lematecnissd2p} word by word, using the CEP instead of McShane's extension theorem, we can get the following vector-valued version of Theorem \ref{teogenessd2pesc}. \begin{theorem}\label{teogenessd2pvect} Let $M$ be a pointed metric space and let $Y$ be a Banach space such that the pair $(M,Y)$ has the CEP. If $M'$ is infinite or $M$ is discrete but not uniformly discrete, then ${\mathrm{Lip}}_0(M,Y)$ has the SSD2P. \end{theorem} The previous theorem provides a partial positive answer to \cite[Question~3.1]{blr}, where the authors asked whether ${\mathrm{Lip}}_0(M,X^*)$ has the SD2P whenever the pair $(M,X^*)$ has the CEP and $M$ is not uniformly discrete and bounded. \noindent \textbf{Acknowledgment:\ } The authors are very grateful to Vladimir Kadets, Colin Petitjean, and Anton\'{\i}n Proch\'{a}zka for many comments which have improved the final version of this paper. They also thank the anonymous referee for the valuable suggestions, which have also improved the exposition of the paper.
\section{I\MakeLowercase{o}T Application Scenario} \label{sec:iot_example_application} In Figure~\ref{fig:iot-overview-berlin}, we present an integrated public transport system of Berlin as a representative IoT application scenario. The components in this scenario are either stationary or mobile. Vehicles (red and yellow boxes), i.e., taxis, buses, subways, and trains, move around the city and carry a set of sensors and a simple processing unit. Each unit collects vehicle data (e.g., routing, maintenance information, and occupancy/usage) as well as data from the environment (e.g., traffic, road conditions, and weather). The base stations, processing nodes, and dispatch station are stationary components. Base stations (green triangles) are distributed across the city and consist of antennas, network routers, and compute and storage capacity. Processing nodes (green circles) are distributed within the city to gather data from several base stations and apply more complex processing. The centralized dispatch station represents the endpoint for all data and merges data from the fog and the cloud with stored and external data. Users manage public transport through the dispatch station. This IoT scenario requires a massively distributed system with continuous data producers as well as transient and permanent, distributed compute and storage capabilities. The environment in this scenario differs fundamentally from current cloud-based data processing architectures. In particular, vehicles move within the city and interact with multiple antennas, which transmit data to base stations. Due to this dynamic nature, vehicles may encounter temporary connection losses or outages (red vehicle), e.g., when they are outside of transmission ranges. Furthermore, all vehicles move at different speeds, on different roads/tracks, and are potentially equipped with different hardware. 
User queries addressing only a subset of the vehicles do not require collecting all sensor data from all vehicles at every transmission interval. Exploiting this selectivity, i.e., reading sensors only when a query actually demands their data, is crucial for enabling large-scale IoT applications. As a result, a fog requires continuous adaptation to a dynamic environment with respect to faults and changes in the availability, amount, type, capacity, and location of data and compute nodes. Furthermore, on the sensor level, a system has to continuously adapt the sensor reads depending on a dynamic query workload. Despite the distributed nature, it must be possible to manage the system through a centralized, global view and execute continuous as well as ad-hoc data analytics. This includes the entire data analysis pipeline, from information extraction to integration and model building using machine learning, signal processing, and other advanced analytics. From a user perspective, this system may assist the public transport dispatcher in scheduling new vehicles or rerouting vehicles in case of outages or increased passenger demand. This results in a feedback loop that may change the physical fog architecture. Furthermore, this architecture allows for enriching real-time data with external sources, e.g., air pollution measurements, event calendars, area crowdedness, or knowledge bases. The characteristics of this application are representative of many IoT scenarios including Industry 4.0, smart homes, smart grids, smart cities, or participatory sensing applications. \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{figures/new-berlin-example-compact.pdf} \vspace{-0.6cm} \caption{IoT application scenario.} \vspace{-0.6cm} \label{fig:iot-overview-berlin} \end{figure} \section{Conclusion} \label{sec:conclusion} In this paper, we introduced NebulaStream, a general-purpose, end-to-end data management system for the IoT. We showed that current systems are not yet ready for the upcoming challenges of the IoT era. 
We highlighted the system design of the NebulaStream platform and its underlying design principles. The goal of our envisioned design is to handle the heterogeneity, unreliability, and elasticity of a unified sensor-fog-cloud environment. Furthermore, we revealed upcoming research challenges and outlined possible solutions. Finally, we presented first results that motivate the need for a new system design for upcoming IoT applications. With our \mbox{NebulaStream Platform}, we aim to enable emerging IoT applications in different domains. \section{Introduction} \label{sec:intro} Over the last decade, the amount of produced data has reached unseen magnitudes. Recently, the International Data Corporation~\cite{dataAge} estimated that by 2025 the global amount of data will reach 175ZB and that 30\% of these data will be gathered in real time. In particular, the number of IoT devices is expected to grow to as many as 20 billion connected devices by 2025~\cite{gartner}. At the same time, devices such as embedded computers or mobile phones continuously increase their processing capabilities. This trend enables the exploitation of their computing and communication capabilities, as they become objects of common use. As a result, the IoT is one of the fastest emerging trends in the area of information and communication technology~\cite{IOTVison}. \begin{figure}[t] \includegraphics[scale=.2]{figures/throughput-latency.png} \vspace{-5mm} \caption{IoT application using a cloud-centric SPE.} \label{fig:throughput-latency} \vspace{-7mm} \end{figure} The explosion in the number of connected devices triggers the emergence of novel data-driven applications. These applications require low latency, location awareness, widespread geographical distribution, and real-time data processing on potentially millions of distributed data sources. To enable these applications, a data management system needs to leverage the capabilities of IoT devices. 
However, today's data management systems are not yet ready for these applications as they embrace either the cloud or the fog computing paradigm. Systems based on the cloud paradigm, e.g., Flink~\cite{Stratosphere}, Spark~\cite{zaharia2016apache}, and Kafka Streams~\cite{kstreams2018}, do not exploit the full capabilities of IoT devices. To implement IoT applications, these systems require the collection of sensor data centrally in a data center prior to applying processing. This centralized processing paradigm presents a bottleneck for upcoming IoT applications, which need to process data from millions of distributed sensors. In Figure~\ref{fig:throughput-latency}, we showcase the impact of this bottleneck by executing an IoT application scenario using a cloud-based approach and reporting the average processing latency. To this end, we scale the number of IoT data producers from 1 to 80. Each producer generates data at a constant speed of 50K records/sec. Producers send their data over a gateway to a Kafka cluster with five nodes. Inside the same cloud environment, we set up a Flink cluster with eight nodes (cloud nodes are connected through a 1 Gbit Ethernet connection). Our Flink job reads data from Kafka and executes a 10-second tumbling window aggregation to count distinct events. We let the experiment run for 10 minutes and measure the end-to-end processing latency following the methodology introduced by Karimov et al.~\cite{KarimovRKSHM18}. Our experiment shows that latency increases as we increase the number of producers. Our cloud-based IoT application scenario can sustain up to 20 producers with constant latency. Beyond this point, our application saturates and latency increases gradually. This effect intensifies for more IoT producers and results in a continuously increasing backlog within Kafka. Overall, our experiment shows that a centralized cloud approach does not scale for IoT applications and thus future IoT applications require a new system.
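The workload of this experiment reduces to a single windowed operator. As a minimal sketch of its semantics (not the actual Flink job), the 10-second tumbling window counting distinct events can be expressed as follows; timestamps and event ids are hypothetical:

```python
from collections import defaultdict

def tumbling_distinct_count(events, window_size_ms=10_000):
    """Count distinct event ids per tumbling window.

    `events` is an iterable of (timestamp_ms, event_id) pairs, mirroring
    the windowed aggregation the Flink job applies to the Kafka stream.
    """
    windows = defaultdict(set)
    for ts, event_id in events:
        # Assign each event to the window covering its timestamp.
        window_start = (ts // window_size_ms) * window_size_ms
        windows[window_start].add(event_id)
    return {start: len(ids) for start, ids in sorted(windows.items())}

# Two windows: [0s, 10s) holds {a, b}, [10s, 20s) holds {a, c}.
events = [(1_000, "a"), (2_000, "b"), (9_999, "a"), (10_000, "a"), (15_000, "c")]
# tumbling_distinct_count(events) -> {0: 2, 10000: 2}
```

In the cloud-centric setup, every raw record must cross the gateway before this aggregation runs, which is exactly the traffic the bottleneck measurement exposes.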
In contrast, systems based on the fog computing paradigm, e.g., Frontier~\cite{Frontier} and CSA~\cite{Streaming_IOT_Survey}, exploit the processing capabilities of edge devices, i.e., devices that are physically closer to the data sources. These devices apply data reduction techniques, e.g., pre-selection or pre-aggregation, to reduce data volume as early as possible in the processing pipeline, i.e., close to the sensor. However, fog computing systems only scale within the fog and do not exploit the virtually unlimited resources of modern cloud infrastructures (e.g., Amazon Web Services or Microsoft Azure). Data management systems for wireless sensor networks (WSNs), e.g., TinyDB~\cite{tinyDB}, exploit small battery-powered sensors to create a network of nodes to capture physical phenomena, such as earthquakes or volcanic eruptions. These systems apply acquisitional query processing techniques to optimize the execution for battery lifetimes and deploy a small set of specialized queries to capture the physical phenomena. However, WSN systems only scale within the sensor networks and do not exploit the resources of the attached cloud and fog environments. In particular, they do not consider offloading computation to external nodes and do not provide general-purpose query execution capabilities. Overall, there is no general-purpose, end-to-end data management system for a unified sensor-fog-cloud environment with functionality similar to production-ready systems such as Flink or Spark. To enable future IoT applications, a data management system for the IoT has to combine the cloud, the fog, and the sensors in a single unified platform to leverage their individual advantages and enable cross-paradigm optimizations (e.g., fusing, splitting, or operator reordering). From a system point of view, this unified environment imposes three unique characteristics that are not supported by state-of-the-art data management systems. 
\textbf{Heterogeneity:} A unified environment consists of a high\-ly heterogeneous hardware landscape. The processing nodes range from low-end battery-powered sensors (e.g., Mica \linebreak Motes) over system-on-a-chip devices (e.g., Raspberry PIs) to high-end rack-scale servers. In particular, cloud infrastructures consist of homogeneous node setups, whereas the fog contains heterogeneous, low-end computing devices. Furthermore, WSNs consist of highly specialized battery-\linebreak powered sensors. To exploit the individual capacities of each node, an IoT data management system has to take their individual capabilities into account, especially their resource restrictions. However, current data management systems abstract from the underlying hardware with virtual machines and managed runtimes. These abstractions hinder the exploitation of specialized instructions and processing units and prevent important optimizations. \textbf{Unreliability:} A unified environment has to handle different levels of runtime dynamics. The fog introduces a highly dynamic runtime environment with unreliable nodes that might change their geo-spatial position, i.e., resulting in many transient errors or changes in latency/throughput. WSNs exacerbate this highly dynamic runtime even further by turning off sensors temporarily to save energy and allowing reads only following a dedicated read schedule. In contrast, a cloud infrastructure is a relatively stable environment where node failures are rare. However, current approaches for load balancing, fault-tolerance, and correctness only concentrate on one particular environment. Thus, these approaches miss out on important cross-paradigm optimization potential. \textbf{Elasticity:} In a unified environment, data move from the sensors via intermediate nodes to the cloud, and finally to the consumer, e.g., a user device or another system.
The fog topology is commonly built as a tree-like network topology~\cite{bonomi2012fog, hong2013mobile} with several dataflow paths. Data processing in the fog topology has to be network-aware because only nodes on the path from the sensors to the cloud can participate. Furthermore, in a WSN, all sensors send their data to the next sensor in range until all data end up at the root of the network. In contrast, in the cloud, every node has access to all data, e.g., via a distributed file system such as HDFS. However, current approaches allow optimizations, scaling, and load balancing only within nodes of the same environment and thus miss out on important cross-paradigm optimization potential. Overall, a unified environment introduces an unprecedented, unique combination of characteristics, i.e., hardware heterogeneity, unreliable nodes, and changing network topologies. This new set of characteristics enables new cross-paradigm optimizations, which are crucial to support upcoming IoT applications over millions of sensors. In this paper, we propose \textit{NebulaStream} (NES), a novel data processing platform that addresses the above-mentioned heterogeneity, unreliability, and elasticity challenges and enables effective and efficient data management for the IoT. In particular, NES copes with these unique characteristics as follows. First, NES copes with heterogeneity by maximizing \textit{sharing of results} and \textit{efficiency of computing} to significantly reduce the amount of data transferred and to exploit hardware capabilities efficiently. Second, NES addresses unreliability by applying \textit{dynamic decisions} and \textit{incremental optimizations} during runtime to be as flexible as possible. Third, NES enables elasticity by designing each node to react \textit{autonomously} to a wide range of situations during runtime.
With NES, we enable future IoT applications by unifying sensors, fog, and cloud in one general-purpose, end-to-end data management platform. Our early experiments show that NES reduces the amount of data and sensor reads by up to 90\%, increases node throughput and decreases energy consumption on low-end devices by up to two orders of magnitude, and processes queries with low latency even in the presence of many node failures. The remainder of the paper is structured as follows. We show a typical IoT application scenario in Section~\ref{sec:iot_example_application}. In Section~\ref{sec:neb_overview}, we describe the NebulaStream platform, discuss its design principles, and provide initial performance results. Finally, we survey related work in Section~\ref{sec:sota} and conclude in Section~\ref{sec:conclusion}. \section{NebulaStream Platform} \label{sec:neb_overview} In this section, we present the NebulaStream (NES) platform. First, we describe the common topology of IoT application scenarios and highlight its novelty (Section~\ref{sub:nes_topology}). After that, we identify key design principles for an IoT data management system (Section~\ref{sub:design_principles}) and later describe how NES implements them (Section~\ref{sub:arch_overview}). Finally, we discuss challenges for an IoT data management system and how NES addresses them (Section~\ref{subsec:nes_challenges}). \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/fog-topology-compact.pdf} \vspace{-0.6cm} \caption{Multi-layer NES Topology.} \vspace{-0.6cm} \label{fig:overview} \end{figure} \subsection{NES Topology} \label{sub:nes_topology} In Figure~\ref{fig:overview}, we present a multi-layer NES Topology that is common in today's IoT infrastructures~\cite{bonomi2012fog}. This figure presents the dataflow from the sensors to the cloud. The basic assumptions in this topology are three-fold. First, all data might reach the \textit{Cloud Layer}.
Second, devices on the path from the sensors to the cloud are able to apply processing. Third, the Cloud Layer is able to apply remaining processing, i.e., it represents a fall-back mechanism. In contrast, all other nodes can only access data if the data are routed through them, and their storage and processing capabilities determine the operations they can apply. The data are routed among the three layers as follows. On the \textit{Sensor Layer}, millions of sensors produce data without processing them. However, NES is able to schedule the sensor reads depending on the query, e.g., increasing read frequency or omitting reads. Sensors provide two data access patterns: pull-based and push-based. Each sensor is connected to at least one low-end node in the \textit{Fog Layer}, which is responsible for this sensor (so-called \textit{Entry Node}). In the \textit{Fog Layer}, NES processes data as they flow from Entry Nodes to Exit Nodes. During processing, nodes may change their geo-spatial position. The data transfer is orchestrated by \textit{Routing Nodes}, such as routers or switches. The data processing capabilities on Routing Nodes are restricted and the provided functionality is highly vendor-dependent~\cite{DPI,lerner}. In general, the storage and processing capabilities of nodes increase significantly in the NES Topology with each hop towards the Cloud Layer. After leaving the Fog Layer through an \textit{Exit Node}, data enter the Cloud Layer. The Cloud Layer provides virtually unlimited scaling of compute and storage. In IoT application scenarios, this layer will perform the remaining computation and output the data to the user. An alternative approach to this centralized design would allow each node in the fog to function as a potential sink. Thus, users would submit their queries directly through their device and each device would represent an exit node in the topology.
In this decentralized design, each device will be responsible for answering the submitted user query. This design naturally supports geo-spatial query processing as most users are potentially only interested in data produced nearby. Exploring the design space of a centralized vs. a decentralized design is one major future challenge. The NES Topology introduced in Figure~\ref{fig:overview} represents a fundamentally new and unique set of characteristics and requirements compared to common cloud infrastructures. First, query processing and operator placement have to be network-aware. The main query optimization goal is to find an efficient route through the Fog Layer that reduces data volumes as early as possible without violating any Service-Level Agreement (SLA) while fulfilling Quality of Ser-\linebreak vice (QoS) constraints. Second, the NES Topology is highly heterogeneous and many nodes have only limited processing capabilities. In particular, nodes in the lower parts of the Fog Layer are restricted in storage and processing capabilities. Furthermore, processing has to trade off between energy consumption and performance. Third, the Fog Layer is highly unreliable compared to the homogeneous and relatively stable Cloud Layer. To support mobility and related aspects, the system has to take the characteristics of each individual environment into account. Fourth, the volume and velocity of sensor data represent an external factor. As a result, the entire system has to evolve around sensor data that is injected by the outside world. With \mbox{NES}, we build a platform that creates a federation of sensors, fog, and cloud, which enables big data acquisition and analysis.
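To make the network-aware dataflow concrete, the following sketch models a toy NES Topology as parent links and derives the path a sensor's data takes towards the cloud. All node names are invented for illustration and do not correspond to NES's internal data structures:

```python
# Hypothetical topology: each node points to its parent towards the cloud.
PARENT = {
    "sensor-1": "entry-A",
    "entry-A": "router-1",
    "router-1": "exit-X",
    "exit-X": "cloud",
    "cloud": None,  # root of the tree-like topology
}

def dataflow_path(source):
    """Return the path a tuple takes from a source up to the Cloud Layer.

    Mirrors the tree-like NES Topology: only nodes on the path from a
    sensor to the cloud can participate in processing its data.
    """
    path, node = [], source
    while node is not None:
        path.append(node)
        node = PARENT[node]
    return path

# dataflow_path("sensor-1")
# -> ["sensor-1", "entry-A", "router-1", "exit-X", "cloud"]
```

An operator placement strategy would then choose, among the nodes on this path, where filters and aggregations run so that data volume shrinks as early as possible.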
\begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{figures/software-architecture-compact.pdf} \vspace{-0.4cm} \caption{NES architecture overview.} \vspace{-0.5cm} \label{fig:sw_overview} \end{figure} \subsection{NES Design Principles} \label{sub:design_principles} NES is a platform for future IoT applications that copes with the unique set of characteristics of a unified environment. For individual layers, different approaches have been proposed over the last decades. However, combining all of them into a single system is the major challenge that we address with NES. To handle millions of sensors and thousands of queries, we base the system design of NES on the following design principles: \begin{enumerate}[topsep=4pt, itemsep=-4pt, leftmargin=*] \item \textbf{Dynamic Decisions:} NebulaStream never expects static behavior or conditions in any component. \item \textbf{Autonomous Processing:} NebulaStream equips compute nodes with all logic necessary to act as autonomously as possible. \item \textbf{Incremental Optimizations:} NebulaStream optimizes a network of active queries in incremental steps rather than traditional query optimization or batched changes. \item \textbf{Maximize Sharing:} NebulaStream shares data and processing wherever possible, i.e., on windows (stream slicing), among queries (multi-query optimization), on sensor data (acquisitional query processing), and on operator level (code optimization). \item \textbf{Maximize Efficiency:} NebulaStream applies hardware-tailored code generation to exploit the underlying hardware efficiently. \item \textbf{SLA Centric Processing:} NebulaStream's primary goal is to match user-provided SLAs and QoS constraints with available resources. \item \textbf{Ease of Use:} NebulaStream enables users to choose their preferred programming environments and models, without worrying about system-internals and performance implications.
\end{enumerate} \subsection{NES Architecture} \label{sub:arch_overview} In Figure~\ref{fig:sw_overview}, we present the architecture of \mbox{NebulaStream}. In general, we design NES with a centralized deployment process and a decentralized run-time re-optimization. In particular, we envision a \textit{logically} centralized deployment process in which one central instance has control over the deployment. However, this logically centralized instance can be distributed among multiple region coordinators to form a hierarchy of coordinators. In the future, we envision moving towards a decentralized deployment process that enables every device to submit queries and receive results in a timely manner. In the current design, users interact with NES through one of the provided APIs to send queries to the \textit{NES Coordinator}~\makecircled{1}. Our current APIs allow specifying dataflow programs, similarly to the APIs of streaming systems like Flink, Spark, and Storm. The NES Coordinator consists of several components that orchestrate query processing. The \textit{NES Query Manager} is responsible for creating logical query plans from user requests~\makecircled{2}. Additionally, this component maintains \textit{logical streams} that represent logical views over sensors, e.g., a logical stream \textit{cars} could combine sensor inputs from multiple cars into one consistent stream. The \textit{NES Topology Manager} orchestrates the NES Topology, which consists of workers and sensors. During startup, each device registers itself and provides information, such as resource capabilities and network topology information. However, to reduce the complexity of optimization decisions, NES follows the idea of introducing \textit{zones} that aggregate a sub-tree or geo-spatial region of the topology into one node. Thus, the optimizer treats a zone as one node which transparently abstracts from the dynamic behavior inside the zone.
As a result, a topology may consist of a hierarchy of zones, which simplifies the global optimization process. The efficient assembly of zones is one future research challenge for NES. The \textit{NES Optimizer} provides the assignment of a logical query plan (created by the NES Query Manager) to the current NES Topology plan~\makecircled{3} (maintained by the NES Topology Manager). This assignment defines the \textit{NES Execution Plan (NES-EP)}. The assignment process introduces a large optimization search space, e.g., operators can be assigned top-down, bottom-up, or by other assignment strategies. The \textit{NES Deployment Manager} takes the NES-EP~\makecircled{4}, disassembles it into Node Execution Plans (Node-EPs), deploys them to the nodes in the NES Topology, i.e., into either the Fog or the Cloud Layer, and sets up the sensors~\makecircled{5}. This deployment is performed incrementally and requires rerouting data on different dataflow paths. Note that this deployment process has to handle a gap between optimization and deployment time. Thus, optimization is based on a snapshot of the topology, while deployment has to take the current topology into account. Therefore, the deployment process in this highly dynamic execution environment introduces many interesting research challenges, such as the partial deployment of plans and the partial re-optimization of sub-plans. The \textit{NES Monitor} constantly collects feedback from the NES Topology~\makecircled{6} and maintains statistics and current resource utilization for the NES Topology Manager~\makecircled{7}. To improve operator placement, the NES Optimizer requests these statistics and current resource utilization from the \textit{NES Monitor}~\makecircled{7}. However, maintaining a centralized, coherent view over a large and highly dynamic topology is a major research challenge. 
First, the NES Optimizer has to be aware that the topology data is potentially outdated and thus has to optimize accordingly, e.g., by providing a set of alternative plans. Second, the collection of monitoring data and the maintenance of statistics has to take the current system load into account and thus must be prioritized lower than data transfers to answer user queries. Third, we envision a decentralized run-time re-optimization process that is triggered by the nodes themselves. To this end, NES nodes first attempt to address a change locally, then communicate with their neighboring nodes, and finally request support from a central coordinator. In Figure~\ref{fig:node_overview}, we show the components of the node engine, which is deployed on all devices of the NES Topology. The \textit{NES Node Engine} is responsible for communicating with the NES Coordinator, accepting Node-EPs and control messages, as well as setting up the input sources, output sinks, and other components. The incoming queries are Node-EPs, which contain a partial subtree of the overall NES-EP. The Node-EP is compiled by the local query compiler and later injected into the processing tasks. As input, the NES Node Engine receives data from the network, e.g., from another node, or directly from an attached sensor. As output, the NES Node Engine either sends data over the network or triggers an action on an attached device, e.g., controlling an actuator such as a light switch. The \textit{Execution Engine} orchestrates the processing inside each NES Node Engine. The central unit of work is one task that combines $n$ input buffers, $m$ output buffers, and the execution of the specified operators~\cite{QTM}. The processing in NES is \textit{source-driven} and applies the following sequence of steps on each incoming buffer. First, the engine assembles the tasks by embedding the executable and allocates all required input, intermediate, and output buffers.
After that, the engine enqueues the tasks in one of the processing queues. Finally, each thread in the \textit{Thread Pool} dequeues one task, processes it, and either enqueues the result buffer into an output queue or triggers an action. This highly dynamic design enables high resource utilization but also introduces a dynamic execution order, which poses new challenges for the system design. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/node-architecture-compact.pdf} \vspace{-0.5cm} \caption{NES Node Engine.} \vspace{-0.5cm} \label{fig:node_overview} \end{figure} In addition to processing components, each NES Node Engine contains dedicated components for local and neighboring optimizations, windows, routing, sensors, state, and run-time re-optimization. As a result, we drastically reduce the complexity of the query compiler and increase maintainability and separation of concerns in NES. In particular, NES compiles only the \textit{hot} code fragments and links other functionalities as pre-compiled components (following Neumann et al.~\cite{neumann2011efficiently}). Overall, it is a design decision in NES to equip the NES Node Engine with all necessary components to enable it to be as autonomous as possible. In particular, we assign all means to the node to enable it to make as many decisions as possible decentrally and independently. This design follows the Borealis design~\cite{borealis} and tries to handle transient changes locally and permanent changes globally. In NES, we envision a system design with autonomous nodes and a simple coordinator to mitigate potential bottlenecks in large-scale environments. \subsection{NES Solutions for IoT Challenges} \label{subsec:nes_challenges} Based on the unique characteristics highlighted in Section~\ref{sec:intro} and IoT application scenarios presented in Section~\ref{sec:iot_example_application}, we outline five main challenges for an IoT data management system.
In the following, we discuss the challenges and propose our solutions. \subsubsection{C1 - Heterogeneity, Distribution, and Volume of Data At-Rest and Data In-Motion} NebulaStream's goal is to scale to thousands of queries and millions of sensors. In the IoT, data are generated by many distributed sources such as sensors or streams of other systems. A particular challenge originates from handling the sheer amount of diverse data sources, potentially numbering in the millions. These sources differ in their characteristics, ranging from millions of small sensor streams to a few large streams from sources such as click-streams or auctions. The accessibility of sources under security and privacy constraints, as well as efficient access paths, requires solutions completely different from what today's big data processing systems provide. For example, an IoT infrastructure enables new solutions for security and privacy as it allows local pre-processing of data next to the generation, e.g., inside a house or building. This enables a scenario where only authorized or anonymized data are sent to the central cloud. As a result, we can enable users to have full control of their own data. Overall, these characteristics imply research questions with respect to scalability, efficiency, integration, security, privacy, and interoperability. To support this extreme diversity in NES, we follow the \textit{Maximize Sharing} design principle (Section~\ref{sub:design_principles}) and apply data sharing techniques on three different levels. First, on the query level, NES exploits data sharing among multiple streaming queries as proposed by Karimov et al.~\cite{AStream}. Second, on the operator level, NES slices data streams and exploits data sharing on stream aggregations as proposed by Traub et al.~\cite{GeneralStreamSlicing}.
Third, on the sensor level, NES applies \textit{Acquisitional Query Processing} (ACQP)~\cite{tinyDB} and \textit{On-Demand Scheduling} of sensor reads and data transmissions~\cite{OnDemandDataAcc}. These techniques limit data acquisition to data points which are required for answering user queries. By combining the introduced techniques in NES, we attempt to drastically reduce the amount of acquired, transferred, and processed data; thus, enabling IoT applications with thousands of queries over millions of sensors. \begin{figure}[t]% \centering% \includegraphics[scale=.21]{figures/sense_example.png}% \vspace{-0.2cm} \caption{NES data reduction on the sensor level.}% \vspace{-0.5cm} \label{fig:sense_example}% \end{figure} Figure~\ref{fig:sense_example} presents an initial experiment that demonstrates the potential savings of data reduction techniques in NES on the sensor level. We use the New York taxis data set~\cite{nyt}, derive routes for each taxi trip, and replay the routes of all taxis on Raspberry Pis, which represent sensor nodes located in taxis. As a baseline, we use a common IoT setup where sensor nodes stream current values to a central SPE in the cloud, without any knowledge about the executed queries. In contrast to this cloud-centric IoT setup, NES combines cloud and fog nodes as well as sensor nodes in taxis in one system to allow for holistic optimizations. We show three example queries in an SQL-like notation. 
The queries include an outlier detection (Query~\ref{lst:q1}), an airport attendance monitoring (Query~\ref{lst:q2}), and a top three query for the longest ongoing trips (Query~\ref{lst:q3}).\\ \definecolor{codegreen}{rgb}{0,0.6,0} \definecolor{codegray}{rgb}{0.5,0.5,0.5} \definecolor{codepurple}{HTML}{C42043} \definecolor{backcolour}{HTML}{F2F2F2} \definecolor{bookColor}{cmyk}{0,0,0,0.90} \color{bookColor} \lstdefinestyle{mystyle}{ backgroundcolor=\color{backcolour}, commentstyle=\color{codegreen}, keywordstyle=\color{codepurple}, numberstyle={}, stringstyle=\color{codepurple}, basicstyle=\footnotesize\ttfamily, breakatwhitespace=false, breaklines=true, captionpos=b, keepspaces=true, numbers=left, numbersep=2pt, showspaces=false, showstringspaces=false, showtabs=false, } \lstset{style=mystyle} \lstset{emph={AHEADLIMIT, DELAYLIMIT}, emphstyle={\color{codepurple}\bfseries}} \renewcommand{\lstlistingname}{Query} \begin{lstlisting}[ language=sql, numbers=none, caption={Journeys leaving the New York area and journeys without passengers. Checked every 2 seconds.}, label=lst:q1, captionpos=b] SELECT ts, medallion, trip_id, latitude, longitude, distance, passenger_count FROM stream(taxis, 2000) WHERE journey_flag=TRUE && (latitude<40.249448 || latitude>41.381560 || longitude<-74.820611 || longitude>-71.848319 || distance=0 || passenger_count=0); -- NY area \end{lstlisting} \begin{lstlisting}[ language=sql, numbers=none, caption={Returning the number of passengers in the airport zone. Updated every 5 seconds.}, label=lst:q2, captionpos=b] SELECT ts, sum(passenger_count) FROM stream(taxis, 5000) WHERE (40.536532<latitude AND latitude<40.745906) && (-73.946390<longitude AND longitude<-73.609759) --airport GROUP BY ts AHEADLIMIT 100 DELAYLIMIT 100; \end{lstlisting} \begin{lstlisting}[ language=sql, numbers=none, caption={Returning the top three longest ongoing trips.
Updated every second.}, label=lst:q3, captionpos=b] SELECT ts, latitude, longitude, trip_distance FROM stream(taxis, 1000) WHERE journey_flag = TRUE ORDER BY trip_distance DESC LIMIT 3; \end{lstlisting} \color{black} We modify the data acquisition process for all three queries such that only required data are sampled and transmitted. In particular, we can interleave data gathering operations (i.e., sensor reads) with data processing (e.g., filters)~\cite{tinyDB}. Theoretically, the system has to read all sensors specified in the \textit{select} clause at the frequency specified in the \textit{from} clause. However, the filter predicates in the \textit{where} clause allow for preventing sensor reads and data transmissions for tuples that are filtered out. For instance, in Query~\ref{lst:q1} and Query~\ref{lst:q3}, we first check the journey flag. If the value is \texttt{false}, we do not read any other sensor. Another important optimization is to adjust sampling \linebreak rates continuously and to prevent data transmissions based on the observed sensor values~\cite{OnDemandDataAcc}. For example, in Query~\ref{lst:q1} and Query~\ref{lst:q2}, we can use the current position of the taxi to calculate the earliest time when the taxi could leave New York or enter the airport area. Thus, we know upfront that no tuple will pass the filter for that time span and do not have to read or evaluate sensor values for that time. In addition, in Query~\ref{lst:q2}, we specify a tolerance for sensor read times (\textit{ahead} and \textit{delay} limit), which saves data transmissions when multiple queries request values from the same sensor. We apply user-defined sampling functions to adjust sampling rates continuously, apply read time tolerances, and schedule sensor reads~\cite{OnDemandDataAcc}. In Figure~\ref{fig:sense_example}, we show that the saved traffic between the fog and the cloud layer is significant for all queries using these optimizations.
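The acquisitional short-circuiting described above can be sketched as follows. The snippet illustrates the idea behind Query~\ref{lst:q1} and is not NES code: `read` stands for a (costly) per-sensor read, the `SensorReader` stub is invented for the example, and the bounding-box constants are taken from the query:

```python
class SensorReader:
    """Illustrative stub: serves fixed values and counts sensor reads."""
    def __init__(self, values):
        self.values = values
        self.reads = 0

    def __call__(self, name):
        self.reads += 1
        return self.values[name]

def should_transmit(read):
    """Evaluate Query 1's predicate acquisitionally: read the cheap
    journey flag first and skip all remaining sensor reads if it is False."""
    if not read("journey_flag"):
        return False  # no further reads, no transmission
    lat, lon = read("latitude"), read("longitude")
    inside_ny = (40.249448 <= lat <= 41.381560
                 and -74.820611 <= lon <= -71.848319)
    if not inside_ny:
        return True  # taxi left the NY area: tuple qualifies
    # Only now read the remaining sensors, with short-circuiting.
    return read("distance") == 0 or read("passenger_count") == 0

# A parked taxi triggers one sensor read instead of five, and no transmission.
parked = SensorReader({"journey_flag": False})
should_transmit(parked)  # False; parked.reads == 1
```

For a taxi driving inside New York with passengers aboard, all five sensors are read but the tuple is still filtered out locally, so it never leaves the sensor node.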
\subsubsection{C2 - Heterogeneity, Distribution, and Volume of Compute} NebulaStream's goal is to exploit the hardware resources of millions of heterogeneous devices efficiently. A particular challenge originates from the potentially millions of compute devices that are found in a fog topology. These devices have a diverse set of capabilities, with respect to storage, processing, and interconnect. The devices range from small battery-powered sensors with no compute capabilities (beyond simple filtering) and an unreliable temporary connection to a large compute cluster with huge storage, InfiniBand interconnect, and thousands of compute cores. These characteristics imply challenges with respect to security, permission management, and efficient and effective resource utilization. \begin{figure}[t] \centering \begin{subfigure}[c]{0.4\linewidth} \centering \includegraphics[scale=.21]{figures/compiler_evaluation.png}% \subcaption{Throughput.} \label{fig:pi_eval_throughput}% \end{subfigure} \hfill \begin{subfigure}[c]{0.5\linewidth} \centering \includegraphics[scale=.18]{figures/compiler_energy_evaluation.png}% \vspace{-1mm} \subcaption{Energy Efficiency.} \label{fig:pi_eval_energy}% \end{subfigure} \vspace{-0.3cm} \caption{YSB on RaspberryPi 3B+.}% \vspace{-0.3cm} \label{fig:pi_eval}% \end{figure} To support this heterogeneity in NES, we follow the \textit{Maximize Efficiency} design principle (Section~\ref{sub:design_principles}). In particular, we apply two techniques. First, we use query compilation, the leading paradigm for achieving high resource utilization in data-at-rest processing \cite{neumann2011efficiently}. In NES, we transfer this approach to the special semantics of fog and stream processing. In particular, NES generates specialized code depending on the actual query, hardware, and data characteristics~\cite{zeuch2019analyzing}. Second, NES distributes query optimization and code generation between the central coordinator and the local node engine.
On the coordinator, NES performs global query optimizations (e.g., operator reordering) and splits the query into segments for individual devices. On the node engine, the query compiler produces hardware-tailored code to exploit the available capabilities most efficiently. Our experiment in Figure~\ref{fig:pi_eval} evaluates the throughput and energy efficiency of the Yahoo Streaming Benchmark (YSB) on a RaspberryPi 3B+ using Python, Flink, a hand-opti\-mized Java program, and NES, respectively. The YSB simulates a real-world stream processing task and consists of a filter and a windowed aggregation~\cite{yahooB}. We implement the YSB with a one-second tumbling window and 10000 campaigns based on the codebase provided by Grier et al.~\cite{grier2016extending}. Our results show that hardware-tailored code generation is essential to efficiently utilize resources, especially for low-end devices. In Figure~\ref{fig:pi_eval_throughput}, we present the maximal throughput of the four different YSB implementations. NES outperforms all other systems by at least 10x and is the only system that is able to reach a throughput of more than 10 million tuples per second. All other systems suffer from the high overhead of the underlying managed runtime. This overhead is significant on low-end devices like the RaspberryPi. Furthermore, through code generation, NES reduces the energy consumption per device and thus requires less energy to achieve the same performance. In Figure~\ref{fig:pi_eval_energy}, we evaluate the energy efficiency of the four different YSB implementations. To this end, we define energy efficiency as the required energy in millijoule per processed record. Our results show that NES requires around 0.0003 millijoule per tuple, which is an 80x improvement compared to the Python implementation. In the future, we will further investigate the trade-off between energy consumption and performance as one major research question for NES.
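The YSB pipeline used in this experiment, a filter followed by a windowed aggregation, can be sketched as follows. The event schema is simplified, and the power figure in the usage example is illustrative, not a measurement:

```python
from collections import defaultdict

def ysb(events, window_ms=1000):
    """Filter 'view' events, then count them per (campaign, window).
    Each event is a (timestamp_ms, campaign_id, event_type) tuple."""
    counts = defaultdict(int)
    for ts, campaign, etype in events:
        if etype != "view":              # filter step
            continue
        window = ts // window_ms         # one-second tumbling window
        counts[(campaign, window)] += 1
    return dict(counts)

def millijoule_per_tuple(power_watts, tuples_per_second):
    """Energy efficiency as defined in the text: energy per processed record."""
    return power_watts / tuples_per_second * 1000.0
```

For instance, a hypothetical device drawing 3 W while sustaining 10 million tuples per second would spend 0.0003 millijoule per tuple, the order of magnitude reported for NES above.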
Especially for battery-powered sensors, code generation enables a longer operation time and thus reduced maintenance and replacement costs. \begin{figure}[t] \begin{center}% \includegraphics[scale=.2]{figures/performance-figure.png} \vspace{-2.5mm} \caption{Gathering coherent snapshots from sensor nodes.}% \vspace{-9mm} \label{fig:scalability_example}% \end{center}% \end{figure} As a second technique, we utilize in-network processing inside the Fog Layer to reduce the computation required at the Cloud Layer. In Figure~\ref{fig:scalability_example}, we present an example query that gathers values from up to 1000 nodes and joins them to coherent snapshots. A snapshot is coherent if all sensor values contained in the snapshot have been read at the same time. In practice, snapshots are often incoherent because the times of sensor reads are not perfectly aligned among all distributed nodes. In addition, clock deviations among sensor nodes lead to undetected incoherence, which potentially causes application failures such as false correlations. We use the techniques introduced in the \linebreak SENSE system~\cite{traub2019sense} to ensure scalability and to mitigate incoherence. SENSE arranges sensor nodes in data gathering pipe\-lines, which join tuples incrementally (decentralized join) and ensure coherence. In contrast to a centralized join, the Cloud Layer only joins the results of the pipelines instead of all individual sensor measurements. This prevents a central bottleneck at the Cloud Layer and ensures high throughput when gathering values from a large number of sensors. As shown in Figure~\ref{fig:scalability_example}, a centralized join causes a drastic throughput decay when the number of nodes increases. In contrast, by utilizing the available computing resources on the path from the sensors to the Cloud Layer, NES achieves almost constant throughput and addresses coherence issues.
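The decentralized join can be sketched as follows (a strong simplification of SENSE, with hypothetical sensor identifiers): each pipeline pre-joins its readings into partial snapshots, and the Cloud Layer merges only these partial results, keeping a snapshot only if it is coherent:

```python
def pipeline_join(readings):
    """Incrementally merge the readings of one pipeline into partial
    snapshots keyed by read time. 'readings' is a list of
    (sensor_id, read_time, value) tuples from the pipeline's nodes."""
    snapshot = {}
    for sensor_id, t, value in readings:
        snapshot.setdefault(t, {})[sensor_id] = value
    return snapshot

def cloud_join(partial_snapshots, expected_sensors):
    """The cloud joins only the pre-joined pipeline results and keeps
    a snapshot only if it is coherent, i.e., every expected sensor
    contributed a value read at the same time t."""
    merged = {}
    for part in partial_snapshots:
        for t, values in part.items():
            merged.setdefault(t, {}).update(values)
    return {t: v for t, v in merged.items()
            if set(v) == set(expected_sensors)}
```

With two pipelines covering sensors s1/s2 and s3, a reading of s1 at time 1 without matching reads from s2 and s3 is discarded as incoherent, while the complete snapshot at time 0 survives.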
By applying hardware-tailored code generation and in-network processing, NES exploits the available compute resources most efficiently and allows for balancing computational demands and energy consumption. \subsubsection{C3 - Spontaneous, Potentially Unreliable \\ Connectivity between Data and Compute} NebulaStream's goal is to detect and compensate potentially unreliable nodes in the Fog and Sensor Layer without impacting consistency and availability. A particular challenge originates from the need to manage data and compute together, as most applications will consist of ad-hoc or standing streaming queries. Furthermore, some compute units may be connected via Wifi, mobile, or satellite networks with intermittent connectivity and unreliable connections. In contrast to a homogeneous and relatively stable cloud environment, a heterogeneous and volatile fog environment has to handle frequent transient failures. Furthermore, WSNs are even more prone to transient failures due to their battery-powered low-end devices and vulnerable radio transmission. Failures in the fog and in WSNs occur due to numerous reasons, most notably hardware errors, software errors, congestion that results in back-pressure (straggler nodes), inadequate resource allocation, and transient connection loss. Furthermore, devices continuously refresh their connections while moving and create ad-hoc connections that result in an unpredictable communication pattern \cite{IOTVison}. This requires special solutions to deal with the intermittent availability of resources, both with respect to data and code management. The resulting challenges require changes in areas such as adaptivity, synchronization across devices, consistency, transaction management, recovery, and fault-tolerance. 
\begin{figure}[t]% \centering \includegraphics[scale=.2]{figures/figure_ft.png} \vspace{-3mm} \caption{Evaluation of fault tolerance mechanisms.} \vspace{-7.5mm} \label{fig:failure_example}% \end{figure} Common cloud-centric SPEs handle node failures using a stop-the-world recovery protocol~\cite{carbone2017flink,DBLP:reference/db/BalazinskaHS18}. When an error occurs, the system stops the entire processing and redeploys a new query plan. In contrast, NES adopts a fine-grained recovery protocol, i.e., NES restarts only the operator instances involved in a failure. To assess the performance of both protocols, we implement them in Flink and run the comparison on a simulated IoT environment. This environment comprises 8 servers, which are equipped with Intel Xeon E5620 CPUs, 32 GB of RAM, and a 1~Gbit/s network. In Figure~\ref{fig:failure_example}, we show the end-to-end processing latency of both protocols while randomly terminating compute nodes (indicated by the black vertical lines). As shown, the stop-the-world protocol cannot recover from high transient error rates as the latency constantly increases. In contrast, the fine-grained recovery protocol restarts failed operators without halting the entire query. To achieve reliability in an unreliable environment, we apply the \textit{Dynamic Decisions}, \textit{Autonomous Processing}, and \textit{Incremental Optimizations} design principles in NES. Because a central component cannot keep up with the pace of failures in a dynamic environment, we apply a diverse set of techniques in NES. On each layer of the NES Topology, we apply different failure recovery approaches, thus providing different guarantees. On the Sensor Layer, NES substitutes missing sensor values from broken sensors with values from nearby sensors, if applicable, or buffers the values during transient connection loss \cite{tinyDB}.
On the Fog Layer, NES extends the Frontier approach~\cite{Frontier}, which sends data through multiple network paths to achieve fault-tolerance. Furthermore, data are buffered by upstream operators and replayed in case of an error. On the Cloud Layer, NES extends existing fault-tolerance approaches, e.g., global checkpointing and message broker with fine-grained operator reconfiguration~\cite{VenturaPhDWork}. By extending and combining existing approaches on different levels of the NES Topology into a unified fault-tolerant solution, we attempt to handle spontaneous, potentially unreliable connectivity of IoT infrastructures. \subsubsection{C4 - Diversity in Programming and \\ Management Environments} NebulaStream's goal is to support a diverse set of data processing workloads specified in different query languages and following different processing models (e.g., relational, linear, or graph algebra). A particular challenge originates from IoT applications that require a combination of different data-oriented programming paradigms. Possible workloads range over the entire data management pipeline, from information extraction over information integration to model building and inference. In particular, running AI/ML/Data Science algorithms in the fog enables direct feedback loops between the digital and the physical world. These workloads include potentially iterative algorithms mixing relational, linear, and graph algebra, and may run on top of continuous data streams or finite data sets. This diversity presents challenges with respect to 1) holistic, optimizable, intermediate representations, 2) efficient and scalable physical operators across all paradigms that can be mixed and matched, and 3) the combination of domain-specific and generic query languages that offers a sufficiently powerful yet optimizable interface to a data engineer. 
Furthermore, the programming and reasoning about sensors and actuators in such a distributed, diverse setting entails a huge challenge with respect to both scalability and ease of use. To support diverse workloads in NES and create a large community with diverse users from different fields, we envision an \textit{easy-to-use} interface. In particular, we attempt to allow users to choose their preferred programming environments and models without the need to take system internals and performance implications into account. To enable this diversity, we build on top of existing frameworks, such as Weld~\cite{palkar2017weld}, Arc~\cite{kroll2019arc}, Emma~\cite{Alexandrov2016}, and LARA~\cite{kunft2019optend} to represent diverse queries in a unified intermediate representation, our so-called \textit{Nebular-IR}. The Nebular-IR allows us to perform optimizations across operators, processing models, and language boundaries. The optimizations range from high-level optimizations on the operator plan level (e.g., placement, ordering, fusion \cite{hirzel2014acatalog}) to low-level optimizations on the instruction level (e.g., branch conversion across operators). One particular challenge for the Nebular-IR is to handle and optimize UDFs. In particular, most data processing systems treat UDFs as black boxes and thus provide only basic optimizations to plans containing them. However, in NES we first analyze UDFs to perform high-level optimizations on the IR (e.g., operator reordering~\cite{hueske2012opening}). After that, we fuse operators across UDF boundaries and generate compact machine code. This allows NES to achieve high code efficiency among different UDFs. From a management point of view, centrally managing the system in a heterogeneous distributed setup introduces challenges from areas such as data collection, response time, and fault-tolerance.
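The fusion of operators across UDF boundaries can be illustrated with the following sketch (plain Python standing in for generated code): a black-box execution materializes the intermediate result between the two UDFs, whereas the fused variant corresponds to the single loop body a code generator would emit:

```python
def run_chained(stream, udf_filter, udf_map):
    """Black-box execution: each operator materializes its output."""
    filtered = [t for t in stream if udf_filter(t)]   # intermediate buffer
    return [udf_map(t) for t in filtered]

def fuse(udf_filter, udf_map):
    """Fuse both UDFs into one function, analogous to emitting a
    single loop with no intermediate buffer."""
    def fused(stream):
        out = []
        for t in stream:                  # one pass over the data
            if udf_filter(t):
                out.append(udf_map(t))
        return out
    return fused
```

Both variants produce the same result, but the fused one avoids the intermediate materialization and gives the compiler a single loop to optimize.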
To this end, NES provides a management view with a centralized, homogeneous interface, automatic distribution and parallelization, and means to adaptively detect and react to changes in the environment. Although the management is performed centrally, parts of the system require a decentralized design. By providing a central management view as well as an intermediate representation in NES, we support a diverse set of data processing workloads specified in different query languages and following different processing models. \subsubsection{C5 - Constant Evolution under \\ Continuous Operation} \label{sub:c5_constant_evolution_under_continuous_operation} NebulaStream's goal is to support continuous operations while the topology and user workloads change constantly. A particular challenge originates from a changing topology where new devices join the fog/WSN and existing devices get phased out or change their geo-spatial position. Additionally, the workloads continuously change as users submit, update, or delete queries. Furthermore, to enable time-sensitive processing, nodes must behave dynamically and autonomously during runtime, to capture and react to changes in velocity, volume, and variety. Managing and reacting to changes in a robust way while the system is in continuous operation presents drastic challenges to the software architecture and fabric of an IoT data management system. To support such a highly dynamic environment in NES, we apply the \textit{Autonomous Processing}, \textit{Dynamic Decisions}, and \textit{Incremental Optimizations} design principles. First, NES equips the compute nodes with all necessary components to autonomously react to a wide range of situations. We enrich the Node-EPs with several alternative routes and different options. As a result, if a node detects changes in velocity, volume, or variety, it reacts dynamically at runtime. 
To this end, nodes require mechanisms to cope with a highly dynamic environment either locally, by interacting with nodes in the neighborhood, or by reaching out to a global coordinator. The possible design space for these changes includes reduction of the sampling rate, dropping of packets, changes in the operator order or algorithm, or rerouting of data streams. Second, each software component in NES is designed to cope with the ever-changing network topology and query workloads and to handle some degree of bounded staleness. We expect that this dynamicity will result in a complete redesign of many components and will require new algorithms and protocols. In particular, we plan to incorporate the actor model \cite{DBLP:journals/pacmpl/BernsteinBBCFKK17} to capture the dynamic behavior between moving devices. In this model, each device represents either a client, worker, source, or coordinator actor. Using the actor model, we make sure that each device is always in a valid state and that each device can react to a wide range of events autonomously, e.g., lost connection or coordinator change. We plan to use the actor model for coordination between actors, e.g., sending queries or reacting to node failures. Due to the high message overhead of the actor model, we plan to offload data transfer to a more lightweight mechanism, e.g., ZMQ\footnotemark or RabbitMQ\footnotemark. \footnotetext[1]{https://zeromq.org/} \footnotetext[2]{https://www.rabbitmq.com/} Third, we apply incremental optimizations such that NES modifies a stateful execution plan of a running query in incremental steps rather than in one large change. With each incoming or modified query as well as with each change in data velocity, volume, or variety, NES converges to the optimal NES-EP. Furthermore, we introduce continuous feedback loops between the NES Coordinator and the NES Node Engines in different layers to enable central management in a heterogeneous distributed setup.
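A minimal, single-threaded sketch of this actor idea is shown below (illustrative only; a real deployment would rely on an actor framework and offload data transfer as discussed above). Each node keeps a mailbox and reacts autonomously to events such as a coordinator change or a lost connection:

```python
from collections import deque

class NodeActor:
    """Each device reacts autonomously to events from its mailbox,
    staying in a valid state after every message."""
    def __init__(self, name):
        self.name = name
        self.mailbox = deque()
        self.coordinator = None
        self.log = []

    def send(self, msg):
        self.mailbox.append(msg)

    def run(self):
        while self.mailbox:
            kind, payload = self.mailbox.popleft()
            if kind == "coordinator_change":
                self.coordinator = payload            # adopt new coordinator
                self.log.append(f"new coordinator {payload}")
            elif kind == "lost_connection":
                self.log.append("buffering locally")  # autonomous reaction
            elif kind == "query":
                self.log.append(f"deploying {payload}")
```

The message kinds and the buffering reaction are hypothetical placeholders for the client, worker, source, and coordinator roles described above.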
In addition, NES re-optimizes the query execution based on dynamic changes in the workload and environment in an asynchronous process. The trade-off between a centralized orchestration in a coordinator and decentralized decisions in the nodes remains an open research question for the future. By defining feedback loops between its components and by performing changes incrementally and autonomously, we attempt to make NES resilient against constantly changing user workloads and network topologies. Overall, NebulaStream addresses all challenges of an IoT data management system presented in Section~\ref{subsec:nes_challenges} by combining existing approaches with new solutions. To this end, NebulaStream's goals are to handle heterogeneous and distributed data sources and formats, to utilize available resources efficiently, to cope with unstable network topologies, and to provide multiple query and processing models. We envision that NES's unique features make it an attractive platform for future IoT application scenarios. \section{Acknowledgments} This work was funded by the EU projects E2Data (780245), DFG Priority Program “Scalable Data Management for Future Hardware” (MA4662-5), FogGuru (Horizon 2020 under Marie Skłodowska-Curie grant agreement No 765452), the German Ministry for Education and Research as BBDC~II (01IS18025A), and by the German Federal Ministry for Economic Affairs and Energy as Project ExDra (01MD19002B).
Bonaventura Del Monte is partially funded by the German Ministry for Education and Research as Software Campus 2.0 (01IS17052). We thank Julius Hülsmann for his support with the experiments on decentralized joins and Vianney de Cibeins for his support with the experiments on data reduction techniques. Furthermore, we thank Eleni Tzirita Zacharatou and Xenofon Chatziliadis for the valuable input and discussions. \bibliographystyle{abbrv} \balance \section{State-of-the-art Systems} \label{sec:sota} In this section, we group existing approaches and outline how they address IoT data management challenges. \vspace{1mm} \subsection{Cloud-centric IoT data processing} \label{subsec:cloud_rel_work} The first group of approaches relies on the cloud to process IoT data centrally. Mobile Cloud Computing (MCC) outsources data storage and processing from devices to the cloud. In this scenario, a pool of sensors gathers and sends data directly to a cloud infrastructure for further processing ~\cite{aws_iot_analytics,azure_iot_hub}. Example applications following this approach are camera surveillance~\cite{nest,netatmo}, wearable cognitive assistance~\cite{glass, hololens}, and smart city monitoring~\cite{fogConsortium2017visual, fogConsortium2017smartcity}. As soon as data reach the cloud, common SPEs, such as Apache Flink~\cite{yang2017flink,piasecki2018Flink,sfikas2019Flink} and Apache Pulsar~\cite{kjerrumgaard2018pulsar2,bock2018pulsar1} process the incoming streams. Based on this infrastructure, cloud providers offer services to deploy and manage data streams. The cloud-centric processing of sensor data enables elastic scaling of compute and storage resources once data reach the cloud. However, this neglects the resources provided by sensors and intermediate nodes (\textbf{C1},\textbf{C2}). Although these systems offer fault-tolerance and dynamic scaling (addressing \textbf{C3},\textbf{C5}) in the cloud, they do not provide them across a unified sensor-fog-cloud environment. 
In NES, we extend existing work in the area of stream processing to incorporate IoT-specific requirements. In particular, we enable cross-paradigm optimization, in-network processing, and hardware-tailored code generation. \vspace{1mm} \subsection{Edge-Aware IoT data processing} \label{subsec:edge_rel_work} With the concept of Mobile-Edge Computing (MEC), cloud providers address the limitations of cloud-centric approaches by implementing \textit{hub devices} to extend their IoT services \cite{Streaming_IOT_Survey,amazon_greengrass,azure_iot}. Hub devices are placed at the edge of the fog topology and act as local control centers close to the sensors. They gather data from attached sensors, perform simple processing steps, and do not require a stable connection to a cloud infrastructure. Although MEC and MCC improve scalability with respect to the number of sensors (addressing \textbf{C1}), they do not focus on efficient resource utilization across heterogeneous devices (\textbf{C2}). In particular, hub devices do not enable cooperative processing across the whole topology. Furthermore, these approaches offer fault-tolerance only between hub devices and the cloud but still require a stable connection between sensors and the hub device (partially addressing \textbf{C3}). Additionally, these approaches do not address dynamic changes in the topology (\textbf{C5}). Ryden et al.~\cite{ryden2014nebula} introduce a distributed data and resource management framework. They leverage distributed in-situ data and computing resources on edge nodes only for batch processing. Their system supports the combination of dedicated and voluntary resources under a unified infrastructure while ensuring high availability (addressing \textbf{C5}, partially addressing \textbf{C1}). However, their framework neither exploits hardware heterogeneity for efficient computation nor supports a multi-programming environment (\textbf{C2, C4}).
In NES, we support streaming queries in a unified sensor-fog-cloud environment that is able to exploit fog devices and sensors to optimize query execution in a holistic way. \subsection{Fog-aware IoT data processing} Two data processing systems utilize the fog as the underlying infrastructure. O'Keeffe et al.~\cite{Frontier} propose Frontier, a distributed and resilient data processing system for fog devices. Frontier aims to handle a large number of sensors and to achieve reliability. To this end, it exploits the processing capability of the fog by distributing queries over a topology (addressing \textbf{C1}). It replicates operators to neighboring nodes to recompute intermediate results and to cope with device failures (addressing \textbf{C3}). However, Frontier does not address the efficient utilization of heterogeneous devices, diversity in programming environments, and adaptability to the constant evolution of the fog (\textbf{C2},\textbf{C4},\textbf{C5}). Finally, it does not consider the exploitation of cloud resources. Zhitao et al.~\cite{Streaming_IOT_Survey} extend Cisco's Connected Streaming Analytics platform (CSA) for IoT processing. CSA utilizes Cisco network hardware to enable in-network processing (partially addressing \textbf{C1},\textbf{C2}). However, CSA does not address potentially unreliable connections, the dynamic evolution of the fog, and provides only an SQL-like interface (\textbf{C3},\textbf{C5},\textbf{C4}). In NES, we build on top of these approaches and combine the possible compute and storage capacities of the fog and the cloud. Besides Frontier and CSA, additional research has been conducted on individual challenges in fog computing, which we will leverage in NES. Janssen et al.~\cite{janssen2018scheduling} propose operator placement techniques to partition queries across a fog topology (addressing \textbf{C1}). 
Park et al.~\cite{park2018StreamBoxTz} exploit special capabilities of IoT hardware to improve efficiency and security~(addressing \textbf{C2}). Kang et al.~\cite{kang2017neurosurgeon} and Grulich et al.~\cite{grulich2018collaborative} propose solutions to partition the inference of deep neural networks across fog topologies to improve scalability (addressing \textbf{C1}). \subsection{Data Processing in Sensor Networks} Sensor networks (SNs) target a particular sub-area of the IoT~\cite{tinyDB,Cougar}. In particular, these systems focus on distributed processing in a wireless sensor network~\cite{WSN_SURVEY}. A major goal is resilience to intermittent and changing network connectivities. To this end, sensor nodes form a network to transfer sensor values through multiple hops to a root node and perform in-network data processing. Approaches in this area tackle efficiency (addressing \textbf{C2}) by optimizing the computation for battery lifetimes and enable filtering and aggregation queries over sensor data \cite{tinyDB}. Moreover, they provide support for a dynamic execution environment (addressing \textbf{C5}). However, these approaches do not support more complex and general workloads, which combine multiple queries, languages, and algebras (\textbf{C4}). In addition, they do not provide strong fault-tolerance and correctness guarantees (\textbf{C3}). In NES, we leverage concepts from sensor networks and integrate them seamlessly across the Sensor, Fog, and Cloud Layers, resulting in a unified environment.
\section{Introduction} The relation between the fluctuations occurring in a system at equilibrium and dissipation effects dates back to Einstein \cite{einstein} and his theory on Brownian motion. After that, Nyquist \cite{nyquist} derived a relation between the electrical resistance and voltage fluctuations in linear electrical systems. It was realized then by Callen and Welton \cite{callen} that such a relation could be proven for general linear dissipative systems using quantum mechanics. At that moment, the intuition of the authors, as described in the last paragraph of their Introduction, was that the relationship between equilibrium fluctuations and irreversibility would provide a method for a general approach to a theory of irreversibility and, indeed, this was the way pursued by Kubo \cite{kubo1} to achieve the theory of linear response. It is well established now that linear response theory gives a general proof of the Fluctuation-Dissipation Theorem (FDT) which states that the linear response of a given system to an external perturbation is expressed in terms of the fluctuation properties of the system in thermal equilibrium. Because of this deep relation between the FDT and linear response theory, it is worth noting that the response, as formulated by that theory, is given for any equilibrium ensemble. In other words, the response function can, in principle, be known not only when the system is initially in thermal equilibrium but also in another equilibrium state such as, for example, the microcanonical one. Therefore, the theory is quite general in the sense that the linear response of a system and its equilibrium fluctuations could be related to each other for any kind of equilibrium conditions. Indeed, fluctuation-response relations have been derived even in the context of stochastic systems \cite{deker,hanggi} and non-Hamiltonian deterministic systems \cite{marconi} using linear theory. 
Perhaps the very first work concerning equilibrium conditions other than the thermal one in Hamiltonian systems is Ref.\cite{bishop}, where the author shows that Kubo's formula can also be derived in the classical microcanonical ensemble as long as the thermodynamic limit is considered. However, for many different reasons, much more attention was given to statistical mechanics in the canonical ensemble than in the microcanonical one, and the generality of linear response theory concerning different equilibrium conditions was not much explored. Of course, one could argue that the equivalence of the ensembles in the thermodynamic limit would be the reason for focusing just on the canonical ensemble, but recent developments have shown that there are indeed strong motivations to consider different equilibrium situations. For example, a path integral representation for the quantum microcanonical ensemble \cite{lawson} presented a few years ago was motivated by situations where the microcanonical approach may be more appropriate, as in the description of systems at low temperatures or with a finite number of particles. The microcanonical ensemble has also been considered in relations between fluctuation and dissipation in systems far from equilibrium, such as the Crooks relation, whose microcanonical version helps to understand the connection among various fluctuation theorems \cite{broeck}. In Ref.\cite{talkner}, a derivation of a microcanonical quantum fluctuation theorem was presented. Considering the work performed by a classical force on a quantum system when it is initially prepared in the microcanonical state, the authors provide a relation that could be accessible experimentally to measure entropies. In the context of nanosystems, where the number of degrees of freedom constituting the environment is not always large enough, the microcanonical ensemble has also been considered.
In Ref.\cite{esposito}, a quantum master equation was derived describing the dynamics of a subsystem weakly coupled to an environment of finite heat capacity and initially described by a microcanonical distribution. Finally, an analysis in the microcanonical state has also contributed to the recent debate about the foundations of the canonical formalism \cite{reimann}. The microcanonical ensemble implies a description of an isolated system. Therefore, one might ask how a relation between fluctuations and dissipation can be possible in a situation where no energy can be dissipated. In the present work, our goal is to explore the relation between fluctuations and response in microcanonical equilibrium conditions through the framework of linear response theory. As will be explained later, mainly after the development of linear response theory, the name Fluctuation-Dissipation Theorem was associated with relations that are analogs of the results presented here in the context of the microcanonical ensemble. That is why we take the liberty of also calling them a FDT, even in a situation where there is no physical mechanism for dissipation. The paper is organized as follows. In Sec. II the derivation of a FDT using linear response theory is presented and its validity is verified in a simple example. In Sec. III different dispersion relations and sum rules are derived in analogy with the usual Kramers-Kronig ones and their meaning is discussed. They differ because they are not derived in frequency space, as the usual ones are. Conclusions are finally presented in Sec. IV. \section{Derivation of the Fluctuation-Dissipation Theorem} We start by considering a system whose dynamics is given by a Hamiltonian $\hat{H}$. An external force $K(t)$ is applied to this system such that $\hat{H}$ is now perturbed by an external potential given by $-\hat{A}K(t)$.
Following \cite{kubo1}, the response function of the system due to the external force measured through an observable $\hat{B}$ is given, in linear response, by \begin{eqnarray} \phi_{BA}(\lambda,t-t')&=&\mathrm{Tr}\left(\hat{\rho}_e(\lambda) \frac{1}{i\hbar}\left[\hat{A}(0),\hat{B}(t-t')\right]\right) \nonumber\\ &=&\mathrm{Tr}\left(\hat{\rho}_e(\lambda) \frac{1}{i\hbar}\left[\hat{A}(t'),\hat{B}(t)\right]\right), \label{eq.1} \end{eqnarray} where $[\,,\,]$ is the commutator and $\hat{\rho}_e(\lambda)$ is the equilibrium density operator as a function of a macroscopic parameter $\lambda$. One can also define the following correlation function between $\hat{A}$ and $\hat{B}$ \begin{eqnarray} C_{BA}(\lambda,t-t')&=&\mathrm{Tr}\left(\hat{\rho}_e(\lambda) \frac{1}{2}\{\hat{A}(0),\hat{B}(t-t')\}\right) \nonumber\\ &=&\mathrm{Tr}\left(\hat{\rho}_e(\lambda) \frac{1}{2}\{\hat{A}(t'),\hat{B}(t)\}\right), \label{eq.2} \end{eqnarray} where $\{\,,\,\}$ is the anticommutator. This function gives the spectrum of equilibrium fluctuations when the system is unperturbed. For the canonical ensemble, $\hat{\rho}_e(\lambda)=\hat{\rho}_e(\beta)=e^{-\beta\hat{H}}/Z(\beta)$, where $\beta=(k_{B}T)^{-1}$, and the FDT establishes a relation between the spectra of $\phi_{BA}$ and $C_{BA}$. That means a relation between an equilibrium and a nonequilibrium quantity. Our goal here is to show that there is also a relation between $\phi_{BA}$ and $C_{BA}$ in the microcanonical ensemble. First of all, let us start with the expression for the microcanonical density operator $\hat{\rho}_e(\lambda=E)$. Following \cite{lawson}, we take it as \begin{eqnarray} \hat{\rho}_e(E)=\frac{\delta(E-\hat{H})}{Z(E)}, \label{eq.3} \end{eqnarray} where $Z(E)=\mathrm{Tr}\,\delta(E-\hat{H})$.
To derive the FDT, it is necessary to introduce an appropriate representation of $\delta(E-\hat{H})$ like, for example \cite{lawson}, \begin{equation} \delta(E-\hat{H})=\frac{1}{2\pi i}\int_{\gamma-i\infty}^{\gamma+i\infty} dz\exp{\left[(E-\hat{H})z\right]}. \label{eq.4} \end{equation} Expressions (\ref{eq.1}) and (\ref{eq.2}) can be written now in the following way \begin{eqnarray} \phi_{BA}(E,t-t')=\frac{1}{Z(E)}\mathrm{Tr}\left( \frac{1}{2\pi i}\int_{\gamma-i\infty}^{\gamma+i\infty} dz\,e^{(E-\hat{H})z}\,\frac{\left[\hat{A}(t'),\hat{B}(t)\right]}{i\hbar}\right), \label{eq.5}\\ C_{BA}(E,t-t')=\frac{1}{Z(E)}\mathrm{Tr}\left( \frac{1}{2\pi i}\int_{\gamma-i\infty}^{\gamma+i\infty} dz\,e^{(E-\hat{H})z}\,\frac{\{\hat{A}(t'),\hat{B}(t)\}}{2} \right). \label{eq.6} \end{eqnarray} It is important to note that, since the integrals in the complex plane are always convergent, the trace and integral signs can be interchanged. Doing that, it is convenient to define the following new quantities: $\varphi_{BA}(E,t-t')=Z(E)\phi_{BA}(E,t-t')$ and $\mathcal{C}_{BA}(E,t-t')=Z(E)C_{BA}(E,t-t')$ to obtain \begin{eqnarray} \varphi_{BA}(E,t-t')=\frac{1}{2\pi i}\int_{\gamma-i\infty}^{\gamma+i\infty} dz e^{Ez}\chi_{BA}(z,t-t'), \label{eq.7}\\ \mathcal{C}_{BA}(E,t-t')=\frac{1}{2\pi i}\int_{\gamma-i\infty}^{\gamma+i\infty} dz e^{Ez}F_{BA}(z,t-t'), \label{eq.8} \end{eqnarray} where \begin{eqnarray} \chi_{BA}(z,t-t')=\mathrm{Tr}\left(e^{-\hat{H}z}\frac{\left[\hat{A}(t'),\hat{B}(t)\right]}{i\hbar}\right), \label{eq.9}\\ F_{BA}(z,t-t')=\mathrm{Tr}\left(e^{-\hat{H}z}\frac{\{\hat{A}(t'),\hat{B}(t)\}}{2} \right). 
\label{eq.10} \end{eqnarray} Since $\varphi_{BA}$ and $\mathcal{C}_{BA}$ are given as inverse Laplace transforms of $\chi_{BA}$ and $F_{BA}$, they also satisfy the following relations \begin{eqnarray} \chi_{BA}(z,\tau)=\int_{0}^{\infty}dE\,e^{-Ez} \varphi_{BA}(E,\tau), \label{eq.11}\\ F_{BA}(z,\tau)=\int_{0}^{\infty}dE\,e^{-Ez} \mathcal{C}_{BA}(E,\tau), \label{eq.12} \end{eqnarray} where $\tau=t-t'$. We introduce now the Fourier transform of $\chi_{BA}$ and $F_{BA}$, \begin{eqnarray} \tilde{\chi}_{BA}(z,\omega)=\frac{1}{2\pi}\int_{-\infty}^{\infty}d\tau e^{-i\omega\tau}\chi_{BA}(z,\tau), \label{eq.13}\\ \tilde{F}_{BA}(z,\omega)=\frac{1}{2\pi}\int_{-\infty}^{\infty}d\tau e^{-i\omega\tau} F_{BA}(z,\tau), \label{eq.14} \end{eqnarray} and also the auxiliary function \begin{equation} S_{AB}(z,\tau)=\mathrm{Tr}\left(e^{-\hat{H}z}\hat{A}(t')\hat{B}(t)\right). \label{eq.16} \end{equation} Noticing that $e^{-\hat{H}z}\hat{A}(t')=\hat{A}(t'+iz\hbar)e^{-\hat{H}z}$ and using the cyclic property of the trace, we obtain \begin{eqnarray} \mathrm{Tr}\left[e^{-\hat{H}z}\hat{B}(t)\hat{A}(t'+iz\hbar)\right]= \mathrm{Tr}\left[e^{-\hat{H}z}\hat{A}(t')\hat{B}(t)\right]. \label{eq.17} \end{eqnarray} Using \begin{eqnarray} \frac{1}{2\pi}\int_{-\infty}^{\infty}d\tau\,e^{-i\omega\tau}\,\mathrm{Tr} \left[e^{-\hat{H}z}\hat{B}(t)\hat{A}(t'+iz\hbar) \right]=\frac{1}{2\pi}\int_{-\infty}^{\infty}d\tau'\,e^{-i\omega\tau'}\, \mathrm{Tr}\left[e^{-\hat{H}z}\hat{B}(t)\hat{A}(t'')\right]e^{z\hbar\omega}, \label{eq.18} \end{eqnarray} where $t''=t'+iz\hbar$ and $\tau'=t-t''$, one obtains from (\ref{eq.16}) and (\ref{eq.17}) \begin{eqnarray} \tilde{S}_{AB}(z,\omega)=\tilde{S}_{BA}(z,\omega)e^{z\hbar\omega}, \label{eq.19} \end{eqnarray} where \begin{eqnarray} \tilde{S}_{BA}(z,\omega)=\frac{1}{2\pi}\int_{-\infty}^{\infty}d\tau'\,e^{-i\omega\tau'}\, \mathrm{Tr}\left[e^{-\hat{H}z}\hat{B}(t)\hat{A}(t')\right]. 
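The operator identity $e^{-\hat{H}z}\hat{A}(t')=\hat{A}(t'+iz\hbar)e^{-\hat{H}z}$ and the resulting relation (\ref{eq.19}) can be checked on a finite-dimensional toy model: in the energy eigenbasis, the spectral weight of $S_{AB}$ at the Bohr frequency $\omega_{nm}=(E_{n}-E_{m})/\hbar$ is $e^{-E_{m}z}A_{mn}B_{nm}$, while that of $S_{BA}$ at the same frequency is $e^{-E_{n}z}A_{mn}B_{nm}$, so their ratio is $e^{z\hbar\omega_{nm}}$. The following minimal sketch (random Hermitian $H$, $A$, $B$ are illustrative choices, not from the text; $\hbar=1$) verifies this weight-by-weight:

```python
import numpy as np

rng = np.random.default_rng(0)
N, z = 5, 0.7  # Hilbert-space dimension and the Laplace variable z

def random_hermitian(n):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

H = random_hermitian(N)
E, U = np.linalg.eigh(H)                    # spectrum and eigenbasis of H
A = U.conj().T @ random_hermitian(N) @ U    # matrix elements A_{mn}
B = U.conj().T @ random_hermitian(N) @ U    # matrix elements B_{mn}

# Spectral weights at the Bohr frequency omega_{nm} = E_n - E_m (hbar = 1):
w_AB = np.exp(-E[:, None] * z) * A * B.T    # e^{-E_m z} A_{mn} B_{nm}
w_BA = np.exp(-E[None, :] * z) * A * B.T    # e^{-E_n z} A_{mn} B_{nm}
shift = np.exp(z * (E[None, :] - E[:, None]))  # e^{z hbar omega_{nm}}

# Eq. (19): S_AB(z, omega) = S_BA(z, omega) e^{z hbar omega}
assert np.allclose(w_AB, w_BA * shift)
```

The check is exact for any dimension because it only uses $e^{-E_{m}z}=e^{-E_{n}z}e^{z(E_{n}-E_{m})}$.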
\label{eq.20} \end{eqnarray} Using (\ref{eq.19}) in the Fourier transforms of (\ref{eq.13}) and (\ref{eq.14}) yields \begin{eqnarray} \tilde{\chi}_{BA}(z,\omega)&=&\frac{1}{i\hbar}\left[\tilde{S}_{AB}(z,\omega)-\tilde{S}_{BA}(z,\omega)\right]= \tilde{S}_{BA}(z,\omega)\frac{\left(e^{z\hbar\omega}-1\right)}{i\hbar}, \label{eq.21}\\ \tilde{F}_{BA}(z,\omega)&=&\frac{1}{2}\left[\tilde{S}_{AB}(z,\omega)+\tilde{S}_{BA}(z,\omega)\right]= \tilde{S}_{BA}(z,\omega)\frac{\left(e^{z\hbar\omega}+1\right)}{2}. \label{eq.22} \end{eqnarray} Finally, from (\ref{eq.21}) and (\ref{eq.22}), we obtain \begin{equation} \tilde{F}_{BA}(z,\omega)=i\frac{\hbar}{2}\coth{\left(\frac{z\hbar\omega}{2}\right)}\tilde{\chi}_{BA}(z,\omega), \label{eq.23} \end{equation} which is our quantum FDT. In the classical limit $\hbar\rightarrow 0$, we obtain \begin{eqnarray} \tilde{F}_{BA}(z,\omega)=\frac{i}{z\omega}\tilde{\chi}_{BA}(z,\omega), \label{eq.24} \end{eqnarray} which is our classical FDT. One easily realizes from (\ref{eq.23}) and (\ref{eq.24}) that the replacement of $z$ by $\beta$ in those equations leads precisely to the quantum and classical versions of the FDT in the canonical ensemble. However, the physical nature of (\ref{eq.23}) and (\ref{eq.24}) can be quite different from that in the canonical case. Let us consider, for example, in the classical regime an ergodic and small system, small in the sense that it is not in the thermodynamic limit. Then the microcanonical ensemble averages in (\ref{eq.1}) and (\ref{eq.2}) can be replaced by time averages whose behaviors are given by the dynamics of the system. Therefore, the fluctuations in this case happen due to the dynamics of the concerned system itself and not due to the coupling to a thermostat as in the canonical ensemble. From this point of view, it is surprising that there is a simple relation between the FDT in the canonical and microcanonical ensembles. 
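Equation (\ref{eq.23}) follows from (\ref{eq.21}) and (\ref{eq.22}) by eliminating $\tilde{S}_{BA}$: with $x=z\hbar\omega$, the ratio of the two prefactors is $\frac{(e^{x}+1)/2}{(e^{x}-1)/(i\hbar)}=i\frac{\hbar}{2}\coth(x/2)$, which reduces to $i/(z\omega)$ as $\hbar\to0$, reproducing (\ref{eq.24}). A minimal numerical sketch of this algebraic step (the parameter values are illustrative only):

```python
import numpy as np

hbar, z, omega = 0.3, 1.7, 2.5
x = z * hbar * omega

# Ratio F/chi obtained by eliminating S_BA from eqs. (21)-(22)
ratio = ((np.exp(x) + 1) / 2) / ((np.exp(x) - 1) / (1j * hbar))
fdt   = 1j * (hbar / 2) / np.tanh(x / 2)   # right-hand side of eq. (23)
assert np.allclose(ratio, fdt)

# Classical limit hbar -> 0 reproduces eq. (24): F = (i / (z omega)) chi
hbar_small = 1e-8
x_s = z * hbar_small * omega
ratio_cl = ((np.exp(x_s) + 1) / 2) / ((np.exp(x_s) - 1) / (1j * hbar_small))
assert np.allclose(ratio_cl, 1j / (z * omega), rtol=1e-6)
```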
Indeed, if one wants to compare both cases, the inverse Laplace transform in $z$ should be performed on (\ref{eq.23}) and (\ref{eq.24}) since the canonical FDT consists of a relation between $\tilde{\phi}_{BA}(\beta,\omega)$ and $\tilde{C}_{BA}(\beta,\omega)$, keeping the original macroscopic parameter $\beta$. For the classical case, this can be easily done using (\ref{eq.24}), leading to \begin{equation} \tilde{\mathcal{C}}_{BA}(E,\omega)=\frac{i}{\omega}\int_{0}^{E}dE'\,\tilde{\varphi}_{BA}(E',\omega). \label{eq.24b} \end{equation} For the quantum regime, the inverse Laplace transform should be performed on (\ref{eq.23}), and it is not hard to imagine that the result will likewise differ from the canonical case. In addition to the pure meaning of the relation between response and fluctuations, one may wonder whether (\ref{eq.23}) and (\ref{eq.24}) can be useful or not. We would say they can be useful in situations where the microcanonical ensemble can be applied and the thermodynamic limit is not satisfied. However, what we mean by usefulness is the possibility of applying the FDT in a context very different from the ones considered so far to obtain response functions from correlation functions and vice versa. If by useful one meant to go further and speak about, e.g., transport coefficients, then one would have to discuss the linear response theory in the microcanonical ensemble more carefully, especially because van Kampen's objections \cite{kampen} can be even trickier in this case. The first objection, concerning the validity of the linearization, could still be answered as usual, we believe, by the argument of the stability of the distribution functions \cite{marconi,kubo2}. The second objection, concerning the origin of the decay of correlation functions, which leads to finite transport coefficients, cannot be answered as is sometimes done in the context of the canonical ensemble by coupling to an environment \cite{vliet1,vliet2}. 
The reason is simple: to use the microcanonical ensemble one assumes an isolated system. A possible answer in this case would be the instability of the dynamics \cite{ruelle,gaspard}. However, the question of what ``dissipation'' would mean in the present context of the microcanonical ensemble would remain. This is because, originally, the name Fluctuation-Dissipation Theorem comes from the fact that part of the Fourier transform of the response function is related to the power dissipated by the system when a time-periodic perturbation is applied to it. But for an isolated system there will be no dissipated power. On the other hand, the Fluctuation-Dissipation Theorem, mainly after linear response theory was developed, has been associated with an equation relating the frequency spectra of the response function and of the corresponding symmetric correlation function. In this sense, (\ref{eq.23}) and (\ref{eq.24}) are analogous to Eq. (6.16) of Ref.\cite{kubo1} for the microcanonical ensemble and therefore we took the liberty of calling them Fluctuation-Dissipation Theorems as well. Although beyond the scope of the present work, a general and deep discussion of the subtle points mentioned above as well as of the linear response theory for the microcanonical ensemble would be of great interest and value. \subsection{Example: the Harmonic Oscillator} As an example, we would like to check (\ref{eq.23}) and (\ref{eq.24}) for a simple system whose response and correlation functions are known directly. In order to do that, we choose a simple harmonic oscillator. We consider the case $\hat{A}=\hat{B}=\hat{X}$ where $\hat{X}$ is the position operator. 
To perform first the calculation in the classical regime, we define the classical analogs of (\ref{eq.5}) and (\ref{eq.6}) as \begin{eqnarray} \varphi(E,t-t')=\int\,dx_{o}dp_{o}\,\delta\left(E-H(x_{o},p_{o})\right)\,\{x(t'),x(t)\}_{o}, \label{eq.25}\\ \mathcal{C}(E,t-t')=\int\,dx_{o}dp_{o}\,\delta\left(E-H(x_{o},p_{o})\right)\,x(t)x(t'), \label{eq.26} \end{eqnarray} where $\{\,,\,\}_{o}$ is the Poisson bracket with respect to the initial conditions $(x_{o},p_{o})$ and $x(t)$ is the solution of the classical equations of motion for the position. The averages above can be easily performed, leading to \begin{eqnarray} \varphi(E,\tau)=\frac{2\pi}{m\omega_{o}^{2}}\sin{\left(\omega_{o}\tau\right)}, \label{eq.27}\\ \mathcal{C}(E,\tau)=\frac{2\pi E}{m\omega_{o}^{3}}\cos{\left(\omega_{o}\tau\right)}. \label{eq.28} \end{eqnarray} We can now calculate \begin{eqnarray} \tilde{\chi}(z,\omega)=\frac{1}{2\pi}\int_{-\infty}^{\infty}d\tau\,e^{-i\omega\tau}\int_{0}^{\infty} dE\,e^{-Ez}\,\varphi(E,\tau), \label{eq.29}\\ \tilde{F}(z,\omega)=\frac{1}{2\pi}\int_{-\infty}^{\infty}d\tau\,e^{-i\omega\tau}\int_{0}^{\infty} dE\,e^{-Ez}\,\mathcal{C}(E,\tau). \label{eq.30} \end{eqnarray} The results are \begin{eqnarray} \tilde{\chi}(z,\omega)=-i\omega z \frac{2\pi}{m\omega_{o}^{3}}\tilde{g}(\omega)\int_{0}^{\infty}dE\,e^{-Ez}\,E \label{eq.31}\\ \tilde{F}(z,\omega)=\frac{2\pi}{m\omega_{o}^{3}}\tilde{g}(\omega)\int_{0}^{\infty}dE\,e^{-Ez}\,E, \label{eq.32} \end{eqnarray} where $\tilde{g}(\omega)=(1/2\pi)\int_{-\infty}^{\infty}d\tau\,e^{-i\omega\tau}\cos{\left(\omega_{o}\tau\right)}$. Therefore, \begin{eqnarray} \tilde{F}(z,\omega)=\frac{i}{z\omega}\tilde{\chi}(z,\omega), \label{eq.33} \end{eqnarray} which agrees with (\ref{eq.24}). 
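The averages (\ref{eq.27}) and (\ref{eq.28}) follow from parametrizing the energy shell as $x(t)=\sqrt{2E/(m\omega_{o}^{2})}\cos(\omega_{o}t+\phi)$ with $\phi$ uniform, and noting that $\int dx_{o}dp_{o}\,\delta(E-H)=2\pi/\omega_{o}$ for the oscillator. A quadrature sketch of the check for (\ref{eq.28}) (parameter values are illustrative only):

```python
import numpy as np

m, w0, E, tau = 1.3, 2.0, 0.8, 0.6
phi = np.linspace(0, 2 * np.pi, 20001)[:-1]  # uniform phase on the energy shell
Aamp = np.sqrt(2 * E / (m * w0**2))          # orbit amplitude at energy E
x_t  = Aamp * np.cos(w0 * tau + phi)         # x(t)  with t' = 0
x_t0 = Aamp * np.cos(phi)                    # x(t')

shell = 2 * np.pi / w0                       # \int dx dp delta(E - H)
C_num = shell * np.mean(x_t * x_t0)          # phase average of x(t) x(t')
C_ana = 2 * np.pi * E * np.cos(w0 * tau) / (m * w0**3)   # eq. (28)
assert np.allclose(C_num, C_ana, rtol=1e-3)
```

The phase average of $\cos(\omega_{o}\tau+\phi)\cos\phi$ equals $\frac{1}{2}\cos(\omega_{o}\tau)$, which is exactly what the quadrature reproduces.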
Quantum mechanically, we can calculate directly (\ref{eq.9}) and (\ref{eq.10}) for the harmonic oscillator using the energy eigenbasis \begin{eqnarray} \chi(z,\tau)=\sum_{n}e^{-E_{n}z} \frac{\sin{\left(\omega_{o}\tau\right)}}{m\omega_{o}}, \label{eq.34}\\ F(z,\tau)=\sum_{n}e^{-E_{n}z}E_{n}\frac{\cos{\left(\omega_{o}\tau\right)}}{m\omega_{o}^{2}}, \label{eq.35} \end{eqnarray} where $E_{n}$ are the energy eigenvalues. Therefore, for the Fourier transform $\tilde{\chi}(z,\omega)$ we obtain \begin{eqnarray} \tilde{\chi}(z,\omega)=\sum_{n}e^{-E_{n}z}\frac{i}{2m\omega_{o}} \left[\delta(\omega_{o}+\omega)-\delta(\omega_{o}-\omega)\right]. \label{eq.36} \end{eqnarray} Using (\ref{eq.23}) and (\ref{eq.36}), we obtain an expression for $\tilde{F}(z,\omega)$. Inverting the Fourier transform, we get \begin{eqnarray} F(z,\tau)=\sum_{n}e^{-E_{n}z}\frac{\hbar}{2}\coth{\left(\frac{z\hbar\omega_{o}}{2}\right)} \frac{\cos{\left(\omega_{o}\tau\right)}}{m\omega_{o}}. \label{eq.37} \end{eqnarray} Since \begin{eqnarray} \sum_{n}e^{-E_{n}z}E_{n}=\frac{\hbar\omega_{o}}{2}\coth{\left(\frac{z\hbar\omega_{o}}{2}\right)} \frac{1}{2\sinh{\left(z\hbar\omega_{o}/2\right)}}, \label{eq.38} \end{eqnarray} Equation (\ref{eq.37}) can be written as \begin{eqnarray} F(z,\tau)=\sum_{n}e^{-E_{n}z}E_{n}\frac{\cos{\left(\omega_{o}\tau\right)}}{m\omega_{o}^{2}}, \label{eq.39} \end{eqnarray} which agrees with (\ref{eq.35}). This verification of (\ref{eq.23}) for the quantum harmonic oscillator is the same as in the canonical ensemble case if $z$ is replaced by $\beta$. However, here (\ref{eq.23}) still has to be transformed back to energy. \section{Dispersion Relations and Sum Rules} In the canonical ensemble, it is possible to derive relations between the real and imaginary parts of the Fourier transform of the response function \cite{kubo2,kubo3}. Those are the so-called Kramers-Kronig relations and mainly they express a causality property contained in the response function. 
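The identity (\ref{eq.38}), which turns (\ref{eq.37}) into (\ref{eq.39}), can be summed numerically using $E_{n}=\hbar\omega_{o}(n+\frac{1}{2})$. A minimal sketch (the parameter values are illustrative; the sum is truncated, which is harmless since the terms decay geometrically):

```python
import numpy as np

hbar, w0, z = 1.0, 1.3, 0.9
n = np.arange(2000)                 # truncation of the oscillator spectrum
En = hbar * w0 * (n + 0.5)          # E_n = hbar w0 (n + 1/2)

lhs = np.sum(np.exp(-En * z) * En)  # left-hand side of eq. (38)
a = z * hbar * w0 / 2
rhs = (hbar * w0 / 2) * (1 / np.tanh(a)) * (1 / (2 * np.sinh(a)))
assert np.allclose(lhs, rhs)
```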
In the present case, dispersion relations also hold in the $z$-space because $\phi_{BA}$ and $C_{BA}$ are defined for positive values of energy. Equations (\ref{eq.11}) and (\ref{eq.12}) imply that $\chi_{BA}(z,\tau)$ and $F_{BA}(z,\tau)$ are analytic functions in the half plane $\mathrm{Re}(z)\geq\gamma$, where $\gamma$ is positive. Therefore, in this region \begin{eqnarray} \chi_{BA}(z_{o},\tau)=\frac{1}{2\pi i}\oint dz\,\frac{\chi_{BA}(z,\tau)}{z-z_{o}}. \label{eq.40} \end{eqnarray} Since $\lim_{\vert z\vert \to \infty}\vert\chi_{BA}(z,\tau)\vert=0$, we can close the integration contour with a semicircle in the half plane where $\chi_{BA}(z,\tau)$ is analytic and a line along $\mathrm{Re}(z)=\gamma$ and send the radius to infinity to obtain from (\ref{eq.40}) the relation \begin{eqnarray} \chi_{BA}(y_{o},\tau)=\frac{1}{\pi i}\mathrm{P}\int_{-\infty}^{\infty}dy\frac{\chi_{BA}(y,\tau)}{y-y_{o}}, \label{eq.41} \end{eqnarray} where the choices $z=\gamma+iy$ and $z_{o}=\gamma+iy_{o}$ were made and $\mathrm{P}$ denotes the principal value of the integral. Writing $\chi_{BA}$ in terms of its real and imaginary parts, $\chi_{BA}=\chi_{BA}^{\prime}+i\chi_{BA}^{\prime\prime}$, Eq.(\ref{eq.41}) leads to the following dispersion relations \begin{eqnarray} \chi_{BA}^{\prime}(y_{o},\tau)=\frac{1}{\pi}\mathrm{P}\int_{-\infty}^{\infty}dy \frac{\chi_{BA}^{\prime\prime}(y,\tau)}{y-y_{o}}, \label{eq.42}\\ \chi_{BA}^{\prime\prime}(y_{o},\tau)=-\frac{1}{\pi}\mathrm{P}\int_{-\infty}^{\infty}dy \frac{\chi_{BA}^{\prime}(y,\tau)}{y-y_{o}}. \label{eq.43} \end{eqnarray} As is usually done \cite{kubo3}, from the two relations above it is possible to derive the moment sum rules, which, in this case, are related to the energy dependence instead of the frequency spectrum. The derivation of such sum rules is sketched in the Appendix. 
The results for the first three moments are shown below, where the subscript $BA$ was dropped for convenience: \begin{eqnarray} \varphi(0,\tau)&=&\frac{1}{\pi}\int_{-\infty}^{\infty}dy\,\chi^{\prime}(y,\tau), \label{eq.44}\\ \varphi^{(1)}(0,\tau)&=&-\frac{1}{\pi}\int_{-\infty}^{\infty}dy\,y\, \left[\chi^{\prime\prime}(y,\tau)+\frac{\varphi(0,\tau)}{y}\right], \label{eq.45}\\ \varphi^{(2)}(0,\tau)&=&-\frac{1}{\pi}\int_{-\infty}^{\infty}dy\,y^{2}\, \left[\chi^{\prime}(y,\tau)+\frac{\varphi^{(1)}(0,\tau)}{y}\right], \label{eq.46} \end{eqnarray} where \begin{eqnarray} \varphi^{(n)}(0,\tau)=\left(\frac{\partial^{n}}{\partial E^{n}}\varphi(E,\tau)\right)\bigg\vert_{E=0}. \label{eq.47} \end{eqnarray} The moment sum rules above are related to the asymptotic expansion of $\chi_{BA}$ for large values of $z$ (which probes the low-energy behavior). For small values of $z$ (i.e., the high-energy behavior), one obtains the following sum rules: \begin{eqnarray} \varphi^{(-1)}(0,\tau)&=&-\frac{1}{\pi}\int_{-\infty}^{\infty}dy\frac{\chi^{\prime\prime}(0,\tau)}{y}, \label{eq.48}\\ \varphi^{(-2)}(0,\tau)&=&\frac{1}{\pi}\int_{-\infty}^{\infty}dy\frac{1}{y^{2}} \left[\chi^{\prime}(0,\tau)+\varphi^{(-1)}(0,\tau)\right], \label{eq.49}\\ \varphi^{(-3)}(0,\tau)&=&\frac{1}{\pi}\int_{-\infty}^{\infty}dy\frac{1}{y^{3}} \left[\chi^{\prime\prime}(0,\tau)+y\varphi^{(-2)}(0,\tau)\right], \label{eq.50} \end{eqnarray} where \begin{eqnarray} \varphi^{(-n)}(0,\tau)=\int_{0}^{\infty}dE_{1}\int_{E_{1}}^{\infty}dE_{2}\cdots \int_{E_{n-1}}^{\infty}dE_{n}\,\varphi(E_{n},\tau). \label{eq.51} \end{eqnarray} The procedure shown in the Appendix can be repeated as long as the derivatives $\varphi^{(n)}$ and the integrals $\varphi^{(-n)}$ exist to derive higher-order moment sum rules. As for the sum rules in the frequency space, those above can be used to correct phenomenological expressions for $\varphi(E,\tau)$. 
For example, if one assumes a functional form for the response function with some free parameters with respect to the energy dependence, one could determine them by imposing the sum rules for high- or low-energy behavior. The way to do that in the frequency space is shown, for example, in \cite{kubo2,kubo3}. Since the relations above are valid for any value of $\tau$, one could also have dropped the $\tau$ dependence by setting $\tau=0$. Then, it is easier to understand the meaning and the importance of the sum rules: the $z$ spectrum of $\chi$ is given in terms of static quantities like $\varphi^{(n)}(0,\tau=0)$ and $\varphi^{(-n)}(0,\tau=0)$, which could be calculated quantum mechanically in terms of the commutation relations between $\hat{A}$ and $\hat{B}$ (see, for example, \cite{kubo1}). \section{Conclusions} Using linear response theory, we presented a derivation of the Fluctuation-Dissipation Theorem in the microcanonical ensemble in both quantum and classical regimes. The theorem is stated as a relation between the Laplace-Fourier transforms of the response and symmetric correlation functions. Although this relation is very similar to the one derived in the canonical ensemble context, it is valid, for example, in a situation where the fluctuations are very different from thermal ones, namely, fluctuations of an isolated system that is not in the thermodynamic limit. Therefore, the Fluctuation-Dissipation Theorem can be considered as a much more general relation and not constrained just to the context of the canonical ensemble. We believe this result can be very useful to calculate correlation functions from response functions (and vice versa) for systems in the microcanonical ensemble when they are not in the thermodynamic limit. In this sense, as mentioned in \cite{lawson,esposito} (see also the references in \cite{esposito}), the present work can be considered as an additional effort to apply statistical physics to small systems. 
Moment sum rules were also presented for the energy dependence and they could be useful to correct phenomenological expressions for the response functions.
\section*{Appendix \thesection\protect\indent \parbox[t]{11.15cm}{#1}} \addcontentsline{toc}{section}{Appendix \thesection\ \ \ #1}} \jot=7pt \def\,\nabla\kern -0.7em\raise0.2ex\hbox{/}\,\,{\,\nabla\kern -0.7em\raise0.2ex\hbox{/}\,\,} \def\,y\kern -0.47em /{\,y\kern -0.47em /} \def\,a\kern -0.49em /{\,a\kern -0.49em /} \def\,\partial\kern -0.55em /\,\,{\,\partial\kern -0.55em /\,\,} \def{\vphantom{5pt}}{{\vphantom{5pt}}} \newcommand{\mathbb{J}}{\mathbb{J}} \newcommand{\mathbb{P}}{\mathbb{P}} \newcommand{\mathbb{X}}{\mathbb{X}} \def{\widetilde{P}}{{\widetilde{P}}} \newcommand{\eq}[1]{Eq.~(\ref{#1})} \def\begin{equation}{\begin{equation}} \def\end{equation}{\end{equation}} \def\begin{eqnarray}{\begin{eqnarray}} \def\end{eqnarray}{\end{eqnarray}} \def{\scriptscriptstyle (a)}{{\scriptscriptstyle (a)}} \def{\scriptscriptstyle (aa)}{{\scriptscriptstyle (aa)}} \def{\scriptscriptstyle (ab)}{{\scriptscriptstyle (ab)}} \def{\scriptscriptstyle (b)}{{\scriptscriptstyle (b)}} \def{\scriptscriptstyle (a+1)}{{\scriptscriptstyle (a+1)}} \def{\scriptscriptstyle (a+2)}{{\scriptscriptstyle (a+2)}} \def{\scriptscriptstyle (n)}{{\scriptscriptstyle (n)}} \def{\scriptscriptstyle (1)}{{\scriptscriptstyle (1)}} \def{\scriptscriptstyle (2)}{{\scriptscriptstyle (2)}} \def{\scriptscriptstyle (3)}{{\scriptscriptstyle (3)}} \def{\scriptscriptstyle (4)}{{\scriptscriptstyle (4)}} \def{\scriptscriptstyle (12)}{{\scriptscriptstyle (12)}} \def{\scriptscriptstyle (23)}{{\scriptscriptstyle (23)}} \def{\scriptscriptstyle (31)}{{\scriptscriptstyle (31)}} \def{\scriptscriptstyle (11)}{{\scriptscriptstyle (11)}} \def{\scriptscriptstyle (22)}{{\scriptscriptstyle (22)}} \def{\scriptscriptstyle (33)}{{\scriptscriptstyle (33)}} \def{\scriptscriptstyle (aa+1)}{{\scriptscriptstyle (aa+1)}} \def{\scriptscriptstyle (a+1a+2)}{{\scriptscriptstyle (a+1a+2)}} \def{\scriptscriptstyle (a+2a)}{{\scriptscriptstyle (a+2a)}} \def{\scriptscriptstyle [2]}{{\scriptscriptstyle [2]}} \def{\scriptscriptstyle 
[3]}{{\scriptscriptstyle [3]}} \def{\scriptscriptstyle [4]}{{\scriptscriptstyle [4]}} \def{\scriptscriptstyle [n]}{{\scriptscriptstyle [n]}} \def{\bf j}{{\bf j}} \def{\bf s}{{\bf s}} \def{\bf J}{{\bf J}} \def{\bf L}{{\bf L}} \def{\bf M}{{\bf M}} \def{\bf P}{{\bf P}} \def{\bf S}{{\bf S}} \def{\bf X}{{\bf X}} \def{\boldsymbol{\beta}}{{\boldsymbol{\beta}}} \def{\boldsymbol{\zeta}}{{\boldsymbol{\zeta}}} \def{\cal B}{{\cal B}} \def{\cal I}{{\cal I}} \def{\cal L}{{\cal L}} \def{\cal N}{{\cal N}} \def{\cal P}{{\cal P}} \def{\cal Z}{{\cal Z}} \def{\sf N}{{\sf N}} \def\nu{\nu} \def\widehat{k}{\widehat{k}} \def\tilde{\phi}{\tilde{\phi}} \def\widetilde{\phi}{\widetilde{\phi}} \def\tilde{G}{\tilde{G}} \def\widetilde{G}{\widetilde{G}} \def|0\rangle{|0\rangle} \def{\bar{x}}{{\bar{x}}} \def{\rm m}{{\rm m}} \def{\cal L}{{\cal L}} \def{\cal X}{{\cal X}} \begin{document} \begin{flushright} FIAN/TD/19-05 \\ hep-th/0512342 \end{flushright} \vspace{1cm} \begin{center} {\Large \bf Cubic interaction vertices for massive and massless \medskip higher spin fields} \vspace{2.5cm} R.R. Metsaev\footnote{ E-mail: metsaev@lpi.ru } \vspace{1cm} {\it Department of Theoretical Physics, P.N. Lebedev Physical Institute, \\ Leninsky prospect 53, Moscow 119991, Russia } \vspace{3cm} {\bf Abstract} \end{center} Using the light-cone formulation of relativistic dynamics, we develop various methods for constructing cubic interaction vertices and apply these methods to the study of higher spin fields propagating in flat space of dimension greater than or equal to four. Generating functions of parity invariant cubic interaction vertices for massive and massless higher spin fields of arbitrary symmetry are obtained. We derive restrictions on the allowed values of spins and the number of derivatives, which provide a classification of cubic interaction vertices for totally symmetric fields. 
As an example of application of the light-cone formalism, we obtain simple expressions for the minimal Yang-Mills and gravitational interactions of massive totally symmetric arbitrary spin fields. We give the complete list of parity invariant and parity violating cubic interaction vertices that can be constructed for massless fields in five- and six-dimensional spaces. \vspace{3cm} Keywords: Light-cone formalism, interaction vertices, higher spin fields. \bigskip PACS-2006: 11.10.-z; 11.30-j; 11.30.Cp \newpage \renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{footnote}{0} \section{Introduction} The light-cone formalism \cite{Dirac:1949cp}-\cite{Goddard:1973qh} offers conceptual and technical simplifications of approaches to various problems of modern quantum field and string theories. This formalism hides some of the symmetries and makes the notation somewhat cumbersome but eventually turns out to be rather effective. A number of important problems have been solved in the framework of this formalism. For example, we mention the solution to the light-cone gauge string field theory \cite{Kaku:1974zz}-\cite{Green:1984fu} and the construction of a superfield formulation for some versions of supersymmetric theories \cite{Brink:1982pd}-\cite{Brink:1983pf}. Theories formulated within this formalism may sometimes be a good starting point for deriving a Lorentz covariant formulation \cite{Hata:1986jd}-\cite{Siegel:1988yz}. Another attractive application of the light-cone formalism is the construction of interaction vertices in the theory of massless higher spin fields \cite{Bengtsson:1983pd}-\cite{Fradkin:1991iy}. Some interesting applications of the light-cone formalism to field theory such as QCD are reviewed in \cite{Brodsky:1997de}. Discussions of super $p$-branes and string bit models in the light-cone gauge are given in \cite{deWit:1988ig,Bergshoeff:1988hw} and \cite{Bergman:1995wh} respectively. 
In this paper, we apply the light-cone formalism to study interaction vertices for higher spin fields. Considerable progress has been achieved in the problem of constructing the theory describing the interaction of massless higher spin fields with gravity. In Ref.\cite{Fradkin:1987ks}, cubic interaction vertices for massless higher spin fields propagating in $AdS_4$ space were constructed; in Ref.\cite{Vasiliev:1990en}, nonlinear equations of motion to all orders in the coupling constant for massless higher spin fields in $AdS_4$ were found. Nonlinear equations of motion for massless totally symmetric higher spin fields in $AdS_d$ space ($d\geq4$) were found in Ref.\cite{Vasiliev:2003ev} (see \cite{Vasiliev:2004cp},\cite{Bekaert:2005vh} for a recent review). It now becomes apparent that constructing a self-consistent theory of massless higher spin fields interacting with gravity requires formulating the theory in $AdS$ space. Unfortunately, despite the efforts, an action that leads to the above-mentioned nonlinear equations of motion has not yet been obtained. To quantize these theories and investigate their ultraviolet behavior, it would be important to find an appropriate action. Since the massless higher spin field theories correspond quantum mechanically to non-local point particles in a space of certain auxiliary variables, it is conjectured that such theories may be ultraviolet finite \cite{Vasiliev:1991rj}. We believe that the light-cone formulation may be helpful in understanding these theories better. The situation here may be analogous to that in string theory; a covariant formulation of the closed string field theories is non-polynomial and is not useful for practical calculations, while the light-cone formulation restricts the string action to the cubic order in string fields. 
In this paper, keeping these extremely important applications in mind, we develop various methods for constructing cubic interaction vertices and use these methods to find cubic vertices for massive and massless arbitrary spin fields propagating in flat space. We believe that most of our approach to massless higher spin fields can be relatively straightforwardly generalized to the case of massless higher spin fields in $AdS$ space. The light-cone gauge approach to dynamics of free fields in $AdS$ space was developed in \cite{Metsaev:1999ui} (see also \cite{Metsaev:2002vr},\cite{Metsaev:2003cu}). Although the light-cone approach in $AdS$ space is complicated compared to that in flat space, it turns out that these approaches share many properties. We therefore believe that methods developed in flat space might be helpful in analyzing dynamics of interacting massless higher spin fields in $AdS$ space. As regards our study of massive fields, we note that our interest in light-cone gauge vertices for massive higher spin fields in flat space is motivated, among other things, by the potential of our approach for in-depth studies of the interaction vertices of the light-cone gauge (super)string field theory. At present, a wide class of cubic interaction vertices for fields propagating in flat space is known. In particular, the self-interaction cubic vertices for the massless spin 3 field were found in \cite{Berends:1984wp}-\cite{Boulanger:2005br} and the higher-derivative cubic vertex for massless spin 2 and spin 4 fields was studied in \cite{Deser:1990bk}. More general examples of the cubic interaction vertices for massless higher spin fields were discovered in \cite{Bengtsson:1983pd,Bengtsson:1983pg,Berends:1985xx} and the full list of cubic interaction vertices for massless higher spin fields was given in \cite{Bengtsson:1986kh}. 
A wide list of cubic interaction vertices for massive higher spin fields was obtained in \cite{Weinberg:1969di} (see also \cite{Weinberg:1964cn},\cite{Weinberg:1964ev}). With the exception of Refs.\cite{Bekaert:2005jf,Boulanger:2005br} (devoted to spin 3 field self-interactions) all the above-mentioned works were devoted to the analysis of interaction vertices for higher spin fields in $4d$ flat space. In view of possible applications of the higher spin field theory to string theory, it is instructive to study cubic interaction vertices for higher spin fields in space of dimension $d \geq 4$. We do this in the present paper. This paper is organized as follows. In Section \ref{freesec}, we introduce the notation and describe the standard manifestly $so(d-2)$ covariant light-cone formulation of free massless and massive fields. In Section \ref{GENVERsec}, we discuss arbitrary $n$-point interaction vertices and find restrictions imposed by kinematical symmetries of the Poincar\'e algebra on these vertices. In Section \ref{CUBversec}, we study restrictions imposed by kinematical and dynamical symmetries of the Poincar\'e algebra on cubic interaction vertices for massless and massive fields. We find various forms of closed equations on cubic interaction vertices. In Section \ref{Solcubintversec}, we present solution to equations for parity invariant cubic interaction vertices of massless fields. Section \ref{secMMO} is devoted to parity invariant cubic interaction vertices for massless and massive fields. We apply our general results to derive the minimal Yang-Mills and gravitational interactions of massive arbitrary spin fields. Our approach allows us to obtain simple expressions for vertices of these interactions. In Section \ref{secMMM}, we discuss parity invariant cubic interaction vertices for massive fields. 
In Sections \ref{Solcubintversec}-\ref{secMMM}, we also derive restrictions on the allowed values of spins and the number of derivatives for cubic interaction vertices of the totally symmetric fields. In Section \ref{sod-4sec}, we develop the light-cone formalism with manifestly realized $so(d-4)$ symmetries that allows us to study both the parity invariant and parity violating interaction vertices on an equal footing. To illustrate this formalism, we construct cubic interaction vertices for massless totally symmetric fields in $5d$ space and for massless totally symmetric and mixed-symmetry fields in $6d$ space. We present the complete list of cubic interaction vertices that can be constructed for massless fields in $d=5,6$ dimensions. Section \ref{CONsec} summarizes our conclusions and suggests directions for future research. Appendices contain some mathematical details and useful formulas. \newsection{Free light-cone gauge massive and massless fields}\label{freesec} The method suggested in Ref.\cite{Dirac:1949cp} reduces the problem of finding a new (light-cone gauge) dynamical system to the problem of finding a new solution of defining symmetry algebra commutators% \footnote{This method is the Hamiltonian version of the Noether method for finding a new dynamical system. An interesting up-to-date discussion of the Noether method may be found in \cite{Hurth:1998nq}.}. Since in our case the defining symmetries are generated by the Poincar\'e algebra, we begin with a discussion of the realization of the Poincar\'e algebra on the space of massive and massless fields. We focus on free fields in this section. The Poincar\'e algebra of $d$-dimensional Minkowski space is spanned by translation generators $P^A$ and rotation generators $J^{AB}$ (the latter span the $so(d-1,1)$ Lorentz algebra). 
The Lorentz covariant form of the non-trivial Poincar\'e algebra commutators is \begin{equation} \label{pj1} {} [P^A,\,J^{BC}]=\eta^{AB}P^C - \eta^{AC} P^B\,, \qquad {} [J^{AB},\,J^{CD}] =\eta^{BC}J^{AD} + 3\hbox{ terms}\,, \end{equation} where $\eta^{AB}$ stands for the mostly positive flat metric tensor. The generators $P^A$ are chosen to be hermitian, and the $J^{AB}$ to be antihermitian. To develop the light-cone formulation, in place of the Lorentz basis coordinates $x^A$ we introduce the light-cone basis coordinates $x^\pm$, $x^I$ defined by\! \footnote{ $A,B,C,D = 0,1,\ldots,d-1$ are $so(d-1,1)$ vector indices; `transverse' indices $I,J,K=1,\ldots,d-2$ are $so(d-2)$ vector indices; $i,j=1,\ldots,d-4$ are $so(d-4)$ vector indices.} \begin{equation} x^\pm \equiv \frac{1}{\sqrt{2}}(x^{d-1} \pm x^0)\,,\qquad x^I\,,\quad I=1,\ldots, d-2\,, \end{equation} and treat $x^{+}$ as an evolution parameter. In this notation, the Lorentz basis vector $X^A$ is decomposed as $(X^+,X^-,X^I)$ and a scalar product of two vectors is then decomposed as \begin{equation} \eta_{AB}X^A Y^B = X^+Y^- + X^-Y^+ +X^IY^I\,, \end{equation} where the covariant and contravariant components of vectors are related as $X^+=X_-$, $X^-=X_+$, $X^I=X_I$. Here and henceforth, a summation over repeated transverse indices is understood. In the light-cone formalism, the Poincar\'e algebra generators can be separated into two groups: \begin{eqnarray} && \label{kingen} P^+,\quad P^I,\quad J^{+I},\quad J^{+-},\quad J^{IJ}, \qquad \hbox{ kinematical generators}\,; \\ && \label{dyngen} P^-,\quad J^{-I}\,, \hspace{4.5cm} \hbox{ dynamical generators}\,. 
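The light-cone decomposition of the scalar product can be sanity-checked numerically: with the mostly plus metric $\eta=\mathrm{diag}(-1,+1,\ldots,+1)$ and $x^{\pm}=(x^{d-1}\pm x^{0})/\sqrt{2}$, one has $\eta_{AB}X^{A}Y^{B}=X^{+}Y^{-}+X^{-}Y^{+}+X^{I}Y^{I}$. A minimal sketch with random vectors (the dimension $d=6$ is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
X, Y = rng.standard_normal(d), rng.standard_normal(d)

eta = np.diag([-1.0] + [1.0] * (d - 1))  # mostly plus flat metric
cov = X @ eta @ Y                        # eta_{AB} X^A Y^B

def lc(v):
    """Light-cone components (v^+, v^-, v^I) of a Lorentz vector."""
    vp = (v[d - 1] + v[0]) / np.sqrt(2)
    vm = (v[d - 1] - v[0]) / np.sqrt(2)
    return vp, vm, v[1:d - 1]

Xp, Xm, XI = lc(X)
Yp, Ym, YI = lc(Y)
lightcone = Xp * Ym + Xm * Yp + XI @ YI
assert np.allclose(cov, lightcone)
```

The check is exact because $X^{+}Y^{-}+X^{-}Y^{+}=X^{d-1}Y^{d-1}-X^{0}Y^{0}$ identically.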
\end{eqnarray} For $x^+=0$, the kinematical generators in the field realization are quadratic in the physical fields% \footnote{Namely, for $x^+\neq 0$ they have a structure $G= G_1 + x^+ G_2$, where $G_1$ is quadratic in fields, while $G_2$ contains higher order terms in fields.}, while the dynamical generators receive higher-order interaction-dependent corrections. Commutators of the Poincar\'e algebra in the light-cone basis can be obtained from \rf{pj1} by using the light-cone metric having the following nonvanishing elements: $\eta^{+-}=\eta^{-+}=1$, $\eta^{IJ}=\delta^{IJ}$. Hermitian conjugation rules of the Poincar\'e algebra generators in the light-cone basis take the form \begin{equation} P^{\pm \dagger}=P^\pm, \quad P^{I\dagger} = P^I, \quad J^{IJ\dagger}=-J^{IJ}\,,\quad J^{+-\dagger}=-J^{+-}, \quad J^{\pm I\dagger} = -J^{\pm I}\,. \end{equation} To find a realization of the Poincar\'e algebra on the space of massive and massless fields we use the light-cone gauge description of those fields. We discuss massive and massless fields in turn. {\it Mixed-symmetry massive fields}. In order to obtain the light-cone gauge description of a massive mixed-symmetry field in an easy-to-use form, let us introduce a finite set of the creation and annihilation operators $\alpha_n^I$, $\alpha_n$ and $\bar{\alpha}_n^I$, $\bar{\alpha}_n$ ($n=1,2,\ldots, \nu $) defined by the relations \begin{equation}\label{intver15} [\bar\alpha_n^I,\,\alpha_m^J]=\delta_{nm}\delta^{IJ}\,,\qquad [\bar{\alpha}_n,\,\alpha_m]=\delta_{nm}\,, \end{equation} \begin{equation} \bar\alpha_n^I|0\rangle=0\,,\qquad\bar{\alpha}_n|0\rangle=0\,.\end{equation} The oscillators $\alpha_n^I$, $\bar\alpha_n^I$ and $\alpha_n$, $\bar\alpha_n$ transform in the respective vector and scalar representations of the $so(d-2)$ algebra. In $d$-dimensional Minkowski space, the massive arbitrary spin field is labeled by the mass parameter ${\rm m}$ and spin labels $s_1,\ldots,s_\nu $, $\nu = [\frac{d-1}{2}]$\!
\footnote{ In $4d$ flat space, a massive arbitrary spin field is labeled by the mass parameter ${\rm m}$ and by one spin label $s=s_1$. Appearance of the spin labels $s_1,\ldots,s_\nu $ is related to the fact that physical spin D.o.F of a massive field in $d$-dimensional flat space are described by the $so(d-1)$ algebra irreps labeled by $[\frac{d-1}{2}]$ Gelfand-Zetlin (or Dynkin) labels.}. Physical D.o.F of the massive field labeled by spin labels $s_1,\ldots,s_\nu $ can be collected into a ket-vector defined by \begin{equation} \label{intver16n1} |\phi_{s_1 \ldots s_\nu }(p,\alpha)\rangle \equiv \prod_{n=1}^\nu \sum_{t_n=0}^{s_n} \alpha_n^{I_1^n}\ldots\alpha_n^{I_{s_n-t_n}^n} \alpha_n^{t_n}\, \phi_{s_1\ldots s_\nu}^{I_1^1\ldots I_{s_1-t_1}^1\ldots I_1^\nu \ldots I_{s_\nu - t_\nu}^\nu }(p) |0\rangle\,. \end{equation} We note that the superscripts like $I_{s_n-t_n}^n$ in \rf{intver16n1} denote the transverse indices, while $t_n$ is the power of the oscillator $\alpha_n$. In \rf{intver16n1} and the subsequent expressions, $\alpha$ occurring in the argument of ket-vectors $|\phi(p,\alpha)\rangle$ denotes a set of the oscillators $\{\alpha_n^I\,,\alpha_n\}$, while $p$ occurring in the argument of ket-vectors $|\phi(p,\alpha)\rangle$ and $\delta$-functions denotes a set of the momenta $\{p^I\,,\beta\equiv p^+\}$. Also, we do not explicitly show the dependence of the ket-vectors $|\phi(p,\alpha)\rangle$ on the evolution parameter $x^+$. The ket-vector \rf{intver16n1} is a degree-$s_n$ homogeneous polynomial in the oscillators $\alpha_n^I$, $\alpha_n$: \begin{equation} \label{intver16nn1} \left(\alpha_n^I\bar\alpha_n^I + \alpha_n\bar\alpha_n- s_n\right)|\phi_{s_1 \ldots s_\nu }(p,\alpha)\rangle=0\,,\qquad n=1,\ldots, \nu\,.\end{equation} As noted above, physical D.o.F of a massive field in $d$-dimensional Minkowski space are described by irreps of the $so(d-1)$ algebra.
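As a consistency check of this packaging (an illustrative count, not part of the construction above), one can verify that summing the symmetric rank-$(s_n-t_n)$ transverse tensors over $t_n=0,\ldots,s_n$ reproduces the component count of a single symmetric (traceful) rank-$s_n$ tensor of the $so(d-1)$ algebra, as expected for massive D.o.F before tracelessness constraints are imposed. A sketch for one oscillator family ($\nu=1$):

```python
from math import comb

def sym_dim(rank, n):
    """Number of independent components of a symmetric rank-`rank` tensor in n dimensions."""
    return comb(rank + n - 1, n - 1)

def ket_components(s, d):
    """Components packaged in |phi_s> via the expansion over t (one oscillator family, nu=1)."""
    N = d - 2  # number of transverse directions
    return sum(sym_dim(s - t, N) for t in range(s + 1))

# summing over t reproduces a symmetric (traceful) rank-s tensor of so(d-1)
for d in range(4, 9):
    for s in range(0, 6):
        assert ket_components(s, d) == sym_dim(s, d - 1)
```

The identity behind the check is $\sum_{t=0}^{s}\binom{s-t+N-1}{N-1}=\binom{s+N}{N}$ with $N=d-2$.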
For the ket-vector \rf{intver16n1} to be a carrier of $so(d-1)$ algebra irreps, some constraints must be imposed on the ket-vector \rf{intver16n1}. But to avoid unnecessary complications, we do not impose on the tensor fields \rf{intver16n1} the constraints that single out irreps of the $so(d-1)$ algebra from these fields% \footnote{ For even $d$, these constraints are: a) $ (\alpha_m^I \bar\alpha_n^I + \alpha_m\bar\alpha_n -s_n\delta_{mn})|\phi_{s_1 \ldots s_\nu }\rangle=0$, $m\leq n$; b) $ ( \bar\alpha_m^I \bar\alpha_n^I + \bar\alpha_m \bar\alpha_n )|\phi_{s_1 \ldots s_\nu }\rangle=0$; c) $s_1 \geq \ldots \geq s_{\nu-1} \geq s_\nu \geq 0$, $\nu = [(d-1)/2]$. For odd $d$ and $s_\nu=0$, one can use the constraints a),b),c), while for $s_\nu\ne 0$ the label $s_\nu$ in a),c) should be replaced by $|s_\nu|$ and constraints a),b),c) should be supplemented by appropriate self-duality constraints. After imposing the constraints a),b),c) the labels $s_1,\ldots,s_\nu $ become Gelfand-Zetlin labels. In sections 2-7 we assume $s_\nu \geq0$ and the constraints \rf{intver16nn1}.}. This implies that the ket-vector \rf{intver16n1} actually describes a finite set of massive fields. To develop the light-cone gauge description of massive arbitrary spin fields on an equal footing, we use a ket-vector defined by \begin{equation} \label{intver16} |\phi(p,\alpha)\rangle \equiv \sum_{s_1,\ldots, s_\nu = 0}^{\infty}\,\,|\phi_{s_1 \ldots s_\nu }(p,\alpha)\rangle\,. \end{equation} {\it Mixed-symmetry massless fields}. The light-cone gauge description of a massless mixed-sym\-metry field can be realized by using a finite set of the creation and annihilation operators $\alpha_n^I$ and $\bar{\alpha}_n^I$ ($n=1,2,\ldots, \nu$). In $d$-dimensional Minkowski space, a massless arbitrary spin field is labeled by spin labels $s_1,\ldots,s_\nu $, $\nu = [\frac{d-2}{2}]$\! \footnote{In $4d$ flat space, a massless arbitrary spin field is labeled by one spin label $s=s_1$.
Appearance of the spin labels $s_1,\ldots,s_\nu $ is related to the fact that physical D.o.F of a massless field in $d$-dimensional flat space are described by $so(d-2)$ algebra irreps labeled by $[\frac{d-2}{2}]$ Gelfand-Zetlin (or Dynkin) labels.}. Physical D.o.F of the massless field labeled by spin labels $s_1,\ldots,s_\nu $ can be collected into a ket-vector defined by \begin{equation} \label{intver16n2} |\phi_{s_1 \ldots s_\nu }^{{\rm m}=0} (p,\alpha)\rangle \equiv \prod_{n=1}^\nu \alpha_n^{I_1^n}\ldots\alpha_n^{I_{s_n}^n}\, \phi_{s_1\ldots s_\nu}^{I_1^1\ldots I_{s_1}^1\ldots I_1^\nu \ldots I_{s_\nu }^\nu }(p)|0\rangle\,, \end{equation} which is a degree-$s_n$ homogeneous polynomial in the oscillators $\alpha_n^I$: \begin{equation} \label{intver16n3} \left(\alpha_n^I\bar\alpha_n^I - s_n\right)|\phi_{s_1 \ldots s_\nu }^{{\rm m}=0} (p,\alpha)\rangle=0\,,\qquad n=1,\ldots, \nu \,.\end{equation} In $d$-dimensional Minkowski space, physical D.o.F of a massless field are described by irreps of the $so(d-2)$ algebra. For the ket-vector \rf{intver16n2} to be a carrier of $so(d-2)$ algebra irreps, some additional constraints must be imposed on this ket-vector\! \footnote{ For odd $d$, these constraints are: a) $ (\alpha_m^I \bar\alpha_n^I - s_n \delta_{mn} )|\phi_{s_1 \ldots s_\nu }\rangle=0$, $m\leq n$; b) $\bar\alpha_m^I \bar\alpha_n^I |\phi_{s_1 \ldots s_\nu }\rangle=0$; c) $s_1 \geq \ldots \geq s_{\nu-1} \geq s_\nu \geq 0$, $\nu = [(d-2)/2]$. For even $d$ and $s_\nu=0$, one can use the constraints a),b),c), while for $s_\nu\ne 0$ the label $s_\nu$ in a),c) should be replaced by $|s_\nu|$ and constraints a),b),c) should be supplemented by appropriate self-duality constraints. In sections 2-7 we assume $s_\nu \geq0$ and the constraints \rf{intver16n3}.}.
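For orientation (our own illustrative count, using the standard dimension formula for traceless symmetric $so(n)$ tensors), the tracelessness constraints reduce the totally symmetric rank-$s$ case to the familiar physical D.o.F counts, e.g. two helicity states for every massless spin $s\geq 1$ field in $d=4$:

```python
from math import comb

def sym_dim(rank, n):
    """Components of a symmetric rank-`rank` tensor in n dimensions."""
    return comb(rank + n - 1, n - 1)

def traceless_sym_dim(s, n):
    """Components of a traceless symmetric rank-s tensor of so(n) (standard formula)."""
    if s < 2:
        return sym_dim(s, n)
    return sym_dim(s, n) - sym_dim(s - 2, n)

# d = 4: every massless spin s >= 1 field carries 2 physical D.o.F (two helicities)
assert all(traceless_sym_dim(s, 2) == 2 for s in range(1, 10))
# d = 5 graviton (s = 2, traceless symmetric tensor of so(3)): 5 D.o.F
assert traceless_sym_dim(2, 3) == 5
```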
But, as in the case of a massive field, to avoid unnecessary complications we do not impose any constraints on the tensor fields \rf{intver16n2}; therefore the ket-vector \rf{intver16n2} describes a finite set of massless fields. By analogy with \rf{intver16}, the ket-vectors of massless fields \rf{intver16n2} can be collected into a ket-vector $|\phi^{{\rm m}=0}(p,\alpha)\rangle$. We note that in \rf{intver16n2} and the subsequent expressions, the letter $\alpha$ occurring in the argument of the ket-vectors of massless fields $|\phi^{{\rm m}=0}(p,\alpha)\rangle$ denotes a set of the oscillators $\alpha_n^I$\,. Below, unless otherwise specified, we keep the integer $\nu$ to be arbitrary for flexibility. \medskip {\it Totally symmetric massive and massless fields}. Totally symmetric fields are popular in various studies because these fields, being simpler than the mixed-symmetry fields, allow illustrating many characteristic features of higher spin fields in a relatively straightforward way. In order to obtain a description of massive and massless totally symmetric fields it is sufficient to introduce one sort of oscillators, i.e. we set $\nu = 1$ in \rf{intver16n1} and \rf{intver16n2} respectively. This is to say that physical D.o.F. of massive and massless totally symmetric spin $s$ fields can be collected into the respective ket-vectors \begin{eqnarray} && \label{intver16n4} |\phi_s(p,\alpha)\rangle = \sum_{t=0}^s \alpha^{I_1} \ldots \alpha^{I_{s-t}} \alpha^t \, \phi^{I_1\ldots I_{s-t}}(p)|0\rangle\,, \\[5pt] && \label{intver16n5} |\phi_s^{{\rm m}=0}(p,\alpha)\rangle = \alpha^{I_1} \ldots \alpha^{I_s}\, \phi^{I_1 \ldots I_s}(p) |0\rangle\,. 
\end{eqnarray} The ket-vector of the massive field \rf{intver16n4} is a degree-$s$ homogeneous polynomial in the oscillators $\alpha^I$, $\alpha$, while the ket-vector of the massless field \rf{intver16n5} is a degree-$s$ homogeneous polynomial in the oscillator $\alpha^I$: \begin{eqnarray} \label{intver16n6} && \left(\alpha^I\bar\alpha^I + \alpha\bar\alpha - s \right)|\phi_s(p,\alpha)\rangle =0\,, \\[5pt] \label{intver16n7} && \left(\alpha^I\bar\alpha^I - s \right)|\phi_s^{{\rm m}=0}(p,\alpha)\rangle =0 \,.\end{eqnarray} As was said, in $d$-dimensional Minkowski space, physical D.o.F of massive and massless fields are described by irreps of the $so(d-1)$ and $so(d-2)$ algebras respectively. In order for the fields \rf{intver16n4} and \rf{intver16n5} to realize irreps of the $so(d-1)$ and $so(d-2)$ algebras respectively, we should impose the respective tracelessness constraints \begin{equation} \label{intver16n8} \left(\bar\alpha^I\bar\alpha^I + \bar\alpha\bar\alpha\right) |\phi_s(p,\alpha)\rangle =0\,, \qquad \bar\alpha^I\bar\alpha^I |\phi_s^{{\rm m}=0}(p,\alpha)\rangle =0 \,.\end{equation} As in the case of mixed-symmetry fields, in order to treat the totally symmetric arbitrary spin fields on an equal footing, it is convenient to introduce ket-vectors for the respective towers of massive and massless fields \begin{eqnarray} \label{intver16n9} && |\phi(p,\alpha)\rangle \equiv \sum_{s=0}^{\infty}\,\,|\phi_s(p,\alpha)\rangle\,, \\ \label{intver16n10} && |\phi^{{\rm m}=0}(p,\alpha)\rangle \equiv \sum_{s=0}^{\infty}\,\,|\phi_s^{{\rm m}=0}(p,\alpha)\rangle\,. \end{eqnarray} We proceed with the discussion of a realization of the Poincar\'e algebra on the space of massive and massless fields. A representation of the kinematical generators in terms of differential operators acting on the ket-vector $|\phi\rangle$ is given by\!
\footnote{Throughout this paper, without loss of generality, we analyze generators of the Poincar\'e algebra and their commutators for $x^+=0$.} \begin{eqnarray} \label{intver6}&& P^I=p^I\,, \qquad \qquad\quad P^+=\beta\,, \\[3pt] \label{intver9}&& J^{+I}=\partial_{p^I} \beta\,, \qquad \quad \ J^{+-}=\partial_\beta \beta\,, \\[3pt] \label{intver11}&& J^{IJ}=p^I\partial_{p^J}-p^J\partial_{p^I}+M^{IJ}\,, \end{eqnarray} where a spin operator $M^{IJ}$ satisfies commutators of the $so(d-2)$ algebra \begin{equation}\label{intver13} [M^{IJ},M^{KL}] = \delta^{JK}M^{IL} + 3\hbox{ terms }\,,\end{equation} and we use the notation \begin{equation} \beta\equiv p^+\,,\qquad \partial_\beta\equiv \partial/\partial \beta\,, \quad \partial_{p^I}\equiv \partial/\partial p^I\,. \end{equation} The representation of the dynamical generators in terms of differential operators acting on the ket-vector $|\phi\rangle$ is given by \begin{eqnarray} \label{intver8}&& P^-= p^-\,,\qquad p^- \equiv -\frac{p^Ip^I + {\rm m}^2}{2\beta}\,, \\[3pt] \label{intver12}&& J^{-I}=-\partial_{\beta}p^I + \partial_{p^I}P^- +\frac{1}{\beta}(M^{IJ}p^J + {\rm m} M^I)\,, \end{eqnarray} where ${\rm m}$ is the mass parameter and $M^I$ is a spin operator transforming in the vector representation of the $so(d-2)$ algebra. This operator satisfies the commutators \begin{equation}\label{intver14} [M^I,M^{JK}] = \delta^{IJ}M^K -\delta^{IK}M^J \,, \qquad [M^I,M^J ] = -M^{IJ}\,. \end{equation} The spin operators $M^{IJ}$ and $M^I$ form commutators of the $so(d-1)$ algebra (as it should be for the case of massive fields). The particular form of $M^{IJ}$ and $M^I$ depends on the choice of the realization of spin D.o.F of physical fields. 
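The scalar case ($M^{IJ}=0$, $M^I=0$) already allows a direct computer-algebra verification of the dynamical part of the algebra, $[P^-,J^{-I}]=0$ and $[J^{-I},J^{-J}]=0$. The following sketch (ours, not part of the original text; it takes $d=4$, so $I=1,2$) realizes $P^-$ and $J^{-I}$ from \rf{intver8}, \rf{intver12} as operators on a wave function:

```python
import sympy as sp

p1, p2, b, m = sp.symbols('p1 p2 beta m', positive=True)
f = sp.Function('f')(p1, p2, b)   # generic wave function of (p^I, beta)
p = [p1, p2]
pminus = -(p1**2 + p2**2 + m**2) / (2*b)   # P^- = -(p^I p^I + m^2)/(2 beta)

def Pm(g):
    """Dynamical generator P^- acting on g."""
    return pminus * g

def Jm(i, g):
    """J^{-I} = -d_beta p^I + d_{p^I} P^- for a scalar (spin operators set to zero)."""
    return -p[i]*sp.diff(g, b) + sp.diff(pminus*g, p[i])

comm_PJ = sp.simplify(Pm(Jm(0, f)) - Jm(0, Pm(f)))        # [P^-, J^{-1}]
comm_JJ = sp.simplify(Jm(0, Jm(1, f)) - Jm(1, Jm(0, f)))  # [J^{-1}, J^{-2}]
assert comm_PJ == 0 and comm_JJ == 0
```

For nonzero spin the closure of $[J^{-I},J^{-J}]=0$ is what forces the commutators \rf{intver13}, \rf{intver14} on $M^{IJ}$, $M^I$.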
For example, a representation of the spin operators $M^{IJ}$ and $M^I$ for the realization of the physical fields given in \rf{intver16} takes the form \begin{eqnarray} && M^{IJ}=\sum_{n=1}^\nu (\alpha_n^I\bar{\alpha}_n^J- \alpha_n^J\bar{\alpha}_n^I)\,,\qquad M^I=\sum_{n=1}^\nu (\alpha_n^I\bar{\alpha}_n-\alpha_n\bar{\alpha}_n^I)\,.\end{eqnarray} As seen from \rf{intver12}, in the limit as ${\rm m} \rightarrow 0$, the Poincar\'e algebra generators are independent of the spin operator $M^I$, i.e. the free light-cone gauge dynamics of massive fields have a smooth limit, given by the dynamics of massless fields. The above expressions provide a realization of the Poincar\'e algebra in terms of differential operators acting on the physical field $|\phi\rangle$. We now write a field theoretical realization of this algebra in terms of the physical field $|\phi\rangle$. As mentioned above the kinematical generators $G^{kin}$ are realized quadratically in $|\phi\rangle$, while the dynamical generators $G^{dyn}$ are realized non-linearly. At the quadratic level, both $G^{kin}$ and $G^{dyn}$ admit the representation \begin{equation} \label{fierep} G_{\scriptscriptstyle [2]}=\int \beta d^{d-1}p\, \langle\phi(-p)| G |\phi(p)\rangle\,, \qquad d^{d-1}p \equiv d\beta d^{d-2}p\,,\end{equation} where $G$ are the differential operators given in \rf{intver6}-\rf{intver11}, \rf{intver8}, \rf{intver12} and the notation $G_{\scriptscriptstyle [2]}$ is used for the field theoretical free generators. 
The field $|\phi\rangle$ satisfies the Poisson-Dirac commutator \begin{equation}\label{bascomrel} [\,|\phi(p,\alpha)\rangle\,,\,|\phi(p^\prime\,,\alpha^\prime)\rangle\,] \bigl|_{equal\, x^+}=\bigr.\frac{\delta^{d-1}(p+p^\prime)}{2\beta} |\rangle |\rangle'\,, \end{equation} \begin{equation} |\rangle |\rangle' \equiv \exp\bigl(\sum_{n=1}^\nu (\alpha_n^I\alpha_n^{\prime I}+ \alpha_n\alpha_n^\prime)\bigr)|0\rangle|0^\prime\rangle\,.\end{equation} With these definitions, we have the standard commutator \begin{equation} \label{phig} [ |\phi\rangle,G_{\scriptscriptstyle [2]}\,] = G|\phi\rangle\,. \end{equation} In the framework of the Lagrangian approach, the light-cone gauge action takes the standard form \begin{equation} \label{lcact} S=\int dx^+ d^{d-1} p\,\, \langle \phi(-p)|{\rm i}\, \beta \partial^- |\phi(p)\rangle +\int dx^+ P^-\,, \end{equation} where $P^-$ is the Hamiltonian. This representation for the light-cone action is valid both for the free and for the interacting theory. The free theory Hamiltonian can be obtained from relations \rf{intver8}, \rf{fierep}. Incorporation of the internal symmetry into the theory under consideration resembles the Chan--Paton method in string theory \cite{Paton:1969je}, and could be performed as in \cite{Metsaev:1991nb}. \newsection{General structure of $n$-point interaction vertices and light-cone dynamical principle}\label{GENVERsec} We begin with discussing the general structure of the Poincar\'e algebra dynamical generators \rf{dyngen}. In theories of interacting fields, the dynamical generators receive corrections involving higher powers of physical fields, and we have the following expansion for them: \begin{equation}\label{GDYN01} G^{dyn}=\sum_{n=2}^\infty G^{dyn}_{\scriptscriptstyle [n]}\,, \end{equation} where $G_{\scriptscriptstyle [n]}^{dyn}$ stands for the $n$-point contribution (i.e. the functional that has $n$ powers of physical fields) to the dynamical generator $G^{dyn}$.
The generators $G^{dyn}$ of classical supersymmetric Yang-Mills theories do not receive corrections of order higher than four in fields \cite{Brink:1982pd,Mandelstam:1982cb,Brink:1984ry}, while the generators $G_{\scriptscriptstyle [n]}^{dyn}$ of (super)gravity theories are nontrivial for all $n\geq 2$ \cite{Goroff:1983hc,Hori:1985qy,Aragone:1989py}\! \footnote{Generators of the closed string field theories, which involve the graviton field, terminate at the cubic correction $G_{\scriptscriptstyle [3]}^{dyn}$ \cite{Green:1983hw,Green:1984fu}. It is natural to expect that generators of a generally covariant theory should involve all powers of the graviton field $h_{\mu\nu}$. The fact that the closed string field theories do not involve vertices of order higher than three in $h_{\mu\nu}$ implies that the general covariance in these theories is realized in a nontrivial way. In string theory, the general covariance manifests itself upon integrating over massive string modes and going to the low energy expansion (see \cite{Tseytlin:1986eq} for a discussion of this theme).}. The `free' generators $G_{\scriptscriptstyle [2]}^{dyn}$ \rf{GDYN01}, which are quadratic in the fields, were discussed in Section 2. Here we discuss the general structure of the `interacting' dynamical generators $G_{\scriptscriptstyle [n]}^{dyn}$, $n\geq 3$. Namely, we describe those properties of the dynamical generators $G_{\scriptscriptstyle [n]}^{dyn}$, $n\geq 3$, that can be obtained from commutators between $G^{kin}$ and $G^{dyn}$. In other words, we find restrictions imposed by kinematical symmetries on the dynamical `interacting' generators. We proceed in the following way. ({\bf i}) {} We first consider restrictions imposed by kinematical symmetries on the dynamical generator $P^-$. As seen from \rf{pj1}, the kinematical generators $P^I$, $P^+$, $J^{+I}$ have the following commutators with $P^-$: $[P^-,G_{\scriptscriptstyle [2]}^{kin}]=G_{\scriptscriptstyle [2]}^{kin}$.
Since $G_{\scriptscriptstyle [2]}^{kin}$ are quadratic in the fields, these commutators imply \begin{equation} \label{gdynngkinr} [P_{\scriptscriptstyle [n]}^-,G_{\scriptscriptstyle [2]}^{kin}]=0\,, \qquad n\geq 3 \,. \end{equation} Commutators \rf{gdynngkinr} for $G_{\scriptscriptstyle [2]}^{kin}=(P^I,P^+)$ lead to the representation for $P_{\scriptscriptstyle [n]}^-$ as \begin{eqnarray} \label{pm1} && P_{\scriptscriptstyle [n]}^- = \int d\Gamma_n \langle \Phi_{\scriptscriptstyle [n]}| p_{\scriptscriptstyle [n]}^-\rangle \,, \qquad n\geq 3 \,,\end{eqnarray} where we use the notation \begin{equation} \label{pm1NN1} \langle \Phi_{\scriptscriptstyle [n]}| \equiv \prod_{a=1}^n \langle \phi(p_a,\alpha_a)|\,,\qquad\qquad |p_{\scriptscriptstyle [n]}^-\rangle \equiv p_{\scriptscriptstyle [n]}^- \prod_{a=1}^n |0\rangle_a \,, \end{equation} \begin{equation} \label{delfun01} d\Gamma_n \equiv (2\pi)^{d-1} \delta^{d-1}(\sum_{a=1}^np_a) \prod_{a=1}^n \frac{d^{d-1} p_a}{(2\pi)^{(d-1)/2}} \,. \end{equation} Here and below, the indices $a,b=1,\ldots,n$ label $n$ interacting fields and the $\delta$- functions in $d\Gamma_n$ \rf{delfun01} respect conservation laws for the transverse momenta $p_a^I$ and light-cone momenta $\beta_a$. Generic densities $p_{\scriptscriptstyle [n]}^-$ \rf{pm1NN1} depend on the momenta $p_a^I$, $\beta_a$, and variables related to the spin D.o.F, which we denote by $\alpha$: \begin{equation} \label{pmpmN1} p_{\scriptscriptstyle [n]}^- = p_{\scriptscriptstyle [n]}^- (p_a,\beta_a;\, \alpha)\,. \end{equation} ({\bf ii}) Commutators \rf{gdynngkinr} for $G_{\scriptscriptstyle [2]}^{kin}=J^{+I}$ tell us that the generic densities $p_{\scriptscriptstyle [n]}^-$ in \rf{pm1NN1} depend on the momenta $p_a^I$ through the new momentum variables $\mathbb{P}_{ab}^I$ defined by \begin{equation} \label{pablab} {\mathbb{P} }_{ab}^I\equiv p_a^I\beta_b-p_b^I\beta_a\,, \end{equation} i.e. 
the densities $p_{\scriptscriptstyle [n]}^-$ turn out to be functions of ${\mathbb{P} }_{ab}^I$ in place of $p_a^I$\! \footnote{We note that due to the momentum conservation laws not all ${\mathbb{P} }_{ab}^I$ are independent. It is easy to check that the $n$-point vertex involves $n-2$ independent momenta $\mathbb{P}_{ab}^I$.}: \begin{eqnarray} \label{pmpm} && p_{\scriptscriptstyle [n]}^- =p_{\scriptscriptstyle [n]}^-({\mathbb{P} }_{ab},\beta_a;\, \alpha)\,. \end{eqnarray} ({\bf iii}) Commutators between $P^-$ and the remaining kinematical generators $J^{IJ}$, $J^{+-}$ have the form $[P^-,J^{IJ}]=0$, $[P^-,J^{+-}]=P^-$. Since $J^{IJ}$, $J^{+-}$ are quadratic in physical fields, these commutators lead to \begin{equation} \label{gdynngkin} [P_{\scriptscriptstyle [n]}^-,J^{IJ}]= 0\,, \qquad [P_{\scriptscriptstyle [n]}^-,J^{+-}]= P_{\scriptscriptstyle [n]}^-\,, \quad \qquad n\geq 3\,. \end{equation} It is straightforward to check that commutators \rf{gdynngkin} lead to the respective equations for the generic densities $p_{\scriptscriptstyle [n]}^- = p_{\scriptscriptstyle [n]}^- (p_a,\beta_a;\, \alpha)$ in \rf{pmpmN1}: \begin{eqnarray} \label{JIJeq01} && \sum_{a=1}^n \left(p_a^I\partial_{p_a^J} -p_a^J\partial_{p_a^I} + M^{{\scriptscriptstyle (a)} IJ}\right) |p_{\scriptscriptstyle [n]}^-\rangle =0\,, \\ \label{jmpgn2} && \sum_{a=1}^n \beta_a\partial_{\beta_a} |p_{\scriptscriptstyle [n]}^-\rangle = 0\,.
\end{eqnarray} Using \rf{pablab}, we rewrite Eqs.\rf{JIJeq01}, \rf{jmpgn2} in terms of $p_{\scriptscriptstyle [n]}^-= p_{\scriptscriptstyle [n]}^-(\mathbb{P}_{ab},\beta_a;\, \alpha)$ in \rf{pmpm} as \begin{eqnarray} \label{JIJeq01nn} && \Bigl(\sum_{\{ a b \}} \mathbb{P}_{ab}^I\partial_{\mathbb{P}_{ab}^J} - \mathbb{P}_{ab}^J\partial_{\mathbb{P}_{ab}^I} + \sum_{a=1}^n M^{{\scriptscriptstyle (a)} IJ} \Bigr) |p_{\scriptscriptstyle [n]}^-\rangle =0\,, \\ \label{jmpgn2nn} && \Bigl(\sum_{\{ a b \}} \mathbb{P}_{ab}^I\partial_{\mathbb{P}_{ab}^I} + \sum_{a=1}^n \beta_a\partial_{\beta_a}\Bigr) |p_{\scriptscriptstyle [n]}^-\rangle =0\,, \end{eqnarray} where the notation $\{ab\}$ is used to label the $n-2$ independent momenta $\mathbb{P}_{ab}^I$. ({\bf iv}) To complete the description of the dynamical generators, we consider the dynamical generator $J^{-I}$. Using commutators of $J^{-I}$ with the kinematical generators, we obtain the representation for $J_{\scriptscriptstyle [n]}^{-I}$, $n\geq 3$ as \begin{equation} \label{npoi4} J_{\scriptscriptstyle [n]}^{-I} =\int d\Gamma_n\,\Bigl(\langle \Phi_{\scriptscriptstyle [n]} | j_{\scriptscriptstyle [n]}^{-I}\rangle +\frac{1}{n} \Bigl(\sum_{a=1}^n \partial_{p_a^I} \langle\Phi_{\scriptscriptstyle [n]}|\Bigr)|p_{\scriptscriptstyle [n]}^-\rangle\Bigr)\,, \end{equation} where we introduce new densities $j_{\scriptscriptstyle [n]}^{-I}$. 
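The origin of the variables $\mathbb{P}_{ab}^I$ can be made concrete: in our reading of \rf{intver9}, exponentiating $J^{+I}$ shifts all transverse momenta as $p_a^I \rightarrow p_a^I + \epsilon\,\beta_a$ with a common parameter $\epsilon$, and $\mathbb{P}_{ab}^I$ is precisely the bilinear combination invariant under this shift. A quick symbolic check (an illustrative sketch, with one transverse component and $n=4$):

```python
import sympy as sp

n = 4  # number of interacting fields; any n works the same way
eps = sp.Symbol('epsilon')
p = sp.symbols(f'p1:{n+1}')       # transverse momenta p_a (one component suffices)
beta = sp.symbols(f'beta1:{n+1}')

def P(a, b_, pa):
    """The momenta P_ab^I = p_a^I beta_b - p_b^I beta_a of (pablab)."""
    return pa[a]*beta[b_] - pa[b_]*beta[a]

# the shift generated by J^{+I}: p_a -> p_a + eps*beta_a
shifted = [p[a] + eps*beta[a] for a in range(n)]
for a in range(n):
    for b_ in range(n):
        assert sp.expand(P(a, b_, shifted) - P(a, b_, list(p))) == 0
```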
From the commutators of $J^{-I}$ with the kinematical generators, we learn that the densities $j_{\scriptscriptstyle [n]}^{-I}$ depend on the momenta $p_a^I$ through the momenta ${\mathbb{P} }_{ab}^I$ in \rf{pablab} and satisfy the equations \begin{eqnarray} \label{neweqqqq01} && \Bigl(\sum_{\{ a b \}} \mathbb{P}_{ab}^I\partial_{\mathbb{P}_{ab}^J} - \mathbb{P}_{ab}^J\partial_{\mathbb{P}_{ab}^I} + \sum_{a=1}^n M^{{\scriptscriptstyle (a)} IJ} \Bigr) |j_{\scriptscriptstyle [n]}^{-K}\rangle + \delta^{IK} |j_{\scriptscriptstyle [n]}^{-J}\rangle - \delta^{JK} |j_{\scriptscriptstyle [n]}^{-I}\rangle =0\,, \\ \label{neweqqqq02} && \Bigl(\sum_{\{ a b \}} \mathbb{P}_{ab}^I\partial_{\mathbb{P}_{ab}^I} + \sum_{a=1}^n \beta_a\partial_{\beta_a}\Bigr) |j_{\scriptscriptstyle [n]}^{-K}\rangle =0\,. \end{eqnarray} To summarize, the commutators between the kinematical and dynamical generators yield the expressions for the dynamical generators \rf{pm1}, \rf{npoi4}, where the densities $p_{\scriptscriptstyle [n]}^-$, $j_{\scriptscriptstyle [n]}^{-I}$ depend on ${\mathbb{P} }_{ab}^I$, $\beta_a$, and spin variables $\alpha$ and satisfy Eqs.\rf{JIJeq01nn}, \rf{jmpgn2nn}, \rf{neweqqqq01},\rf{neweqqqq02}. To find the densities $p_{\scriptscriptstyle [n]}^-$, $j_{\scriptscriptstyle [n]}^{-I}$, we consider commutators between the respective dynamical generators; the general strategy of finding these densities consists basically of the following three steps, to be referred to as the {\it light-cone dynamical principle}: \noindent {\bf a}) Find restrictions imposed by commutators of the Poincar\'e algebra between the dynamical generators. Using these commutators shows that the densities $j_{\scriptscriptstyle [n]}^{-I}$ are expressible in terms of the densities $p_{\scriptscriptstyle [n]}^-$. \noindent {\bf b}) Require the densities $p_{\scriptscriptstyle [n]}^-$, $j_{\scriptscriptstyle [n]}^{-I}$ to be polynomials in the momenta $\mathbb{P}_{ab}^I$. 
We refer to this requirement as the light-cone locality condition. \noindent {\bf c}) Find those densities $p_{\scriptscriptstyle [n]}^-$ that cannot be removed by field redefinitions. In what follows, we apply the light-cone dynamical principle to study the density $p_{\scriptscriptstyle [3]}^-$, which we refer to as the cubic interaction vertex. \newsection{ Equations for cubic interaction vertices}\label{CUBversec} Although many examples of cubic interaction vertices are known in the literature, constructing cubic interaction vertices for concrete field theoretical models is still a challenging procedure. General methods essentially simplifying the procedure of obtaining cubic interaction vertices were discovered in \cite{Metsaev:1993gx,Metsaev:1993mj,Metsaev:1993ap}. In this section we develop the approach in Ref.\cite{Metsaev:1993ap} and demonstrate how our approach allows constructing cubic interaction vertices systematically and relatively straightforwardly. As was explained above (see \rf{pmpm}), the vertex $p_{{\scriptscriptstyle [3]}}^-$ depends on the momenta $\mathbb{P}_{ab}^I$, where $a,b=1,2,3$ label three interacting fields in the cubic interaction vertex. But the momenta ${\mathbb{P} }_{12}^I$, ${\mathbb{P} }_{23}^I$, ${\mathbb{P} }_{31}^I$ are not independent. 
That is, using the momentum conservation laws for $p_a^I$ and $\beta_a$, \begin{equation} p_1^I + p_2^I + p_3^I = 0\,, \qquad \quad \label{betaconlaw} \beta_1 +\beta_2 +\beta_3 =0 \,,\end{equation} it is easy to check that ${\mathbb{P} }_{12}^I$, ${\mathbb{P} }_{23}^I$, ${\mathbb{P} }_{31}^I$ can be expressed in terms of a new momentum ${\mathbb{P} }^I$ as \begin{equation}\label{po122331} {\mathbb{P} }_{12}^I ={\mathbb{P} }_{23}^I ={\mathbb{P} }_{31}^I={\mathbb{P} }^I \,, \end{equation} where the new momentum $\mathbb{P}^I$ is defined by \begin{equation} \label{defpi} {\mathbb{P} }^I \equiv \frac{1}{3}\sum_{a=1}^3\check{\beta}_a p_a^I\,, \qquad \check{\beta}_a\equiv \beta_{a+1}-\beta_{a+2}\,, \quad \beta_a\equiv \beta_{a+3}\,. \end{equation} The use of ${\mathbb{P} }^I$ is advantageous since ${\mathbb{P} }^I$ is manifestly invariant under cyclic permutations of the external line indices $1,2,3$. Therefore, the vertex $p_{\scriptscriptstyle [3]}^-$ is eventually a function of ${\mathbb{P} }^I$: \begin{equation} \label{p2v} p_{\scriptscriptstyle [3]}^- = p_{\scriptscriptstyle [3]}^-({\mathbb{P} },\beta_a;\, \alpha)\,. \end{equation} Before discussing the restrictions imposed by the light-cone dynamical principle, we note that the kinematical symmetry equations \rf{JIJeq01nn}, \rf{jmpgn2nn} take the following form in terms of vertex $p_{\scriptscriptstyle [3]}^-$ \rf{p2v}: \begin{eqnarray} \label{kinsod} && {\bf J}^{IJ} |p_{\scriptscriptstyle [3]}^-\rangle =0\,, \\ \label{honcon04} && (\mathbb{P}^I\partial_{\mathbb{P}^I} + \sum_{a=1}^3\beta_a\partial_{\beta_a}) |p_{\scriptscriptstyle [3]}^-\rangle =0\,,\end{eqnarray} where we use the notation \begin{eqnarray} \label{JIJp3} && {\bf J}^{IJ} \equiv {\bf L}^{IJ}(\mathbb{P}) + {\bf M}^{IJ}\,, \\[4pt] \label{LIJ01} && {\bf L}^{IJ}(\mathbb{P}) \equiv \mathbb{P}^I \partial_{\mathbb{P}^J} - \mathbb{P}^J \partial_{\mathbb{P}^I} \,,\qquad \ \ \ \ {\bf M}^{IJ}\equiv \sum_{a=1}^3 M^{{\scriptscriptstyle (a)} IJ}\,.
\end{eqnarray} We now proceed with discussing the restrictions imposed by the light-cone dynamical principle on vertex $p_{\scriptscriptstyle [3]}^-$ \rf{p2v}. Following the procedure in the previous section, we first find the restrictions imposed by the Poincar\'e algebra commutators between the dynamical generators. All that is required is to consider the commutators \begin{equation} \label{cubeq01} [\,P^-\,,\,J^{-I}\,]=0\,,\qquad\quad[\,J^{-I}\,,\,J^{-J}\,]=0\,, \end{equation} which in the cubic approximation take the form \begin{eqnarray} \label{cubeq02} && {} [\, P_{\scriptscriptstyle [2]}^- \,,\,J_{\scriptscriptstyle [3]}^{-I}\,] + [\,P_{\scriptscriptstyle [3]}^-\,,\,J_{\scriptscriptstyle [2]}^{-I}\,]=0\,, \\[7pt] \label{cubeq03} && [\,J_{\scriptscriptstyle [2]}^{-I}\,,\,J_{\scriptscriptstyle [3]}^{-J}\,] + [\,J_{\scriptscriptstyle [3]}^{-I}\,,\,J_{\scriptscriptstyle [2]}^{-J}\,]=0\,. \end{eqnarray} Equation \rf{cubeq02} leads to the equation for the densities $|p_{\scriptscriptstyle [3]}^-(\mathbb{P},\beta_a;\alpha)\rangle$ and $|j_{\scriptscriptstyle [3]}^{-I}(\mathbb{P},\beta_a;\alpha)\rangle$, \begin{equation}\label{cubver3} {\bf P}^- |j_{\scriptscriptstyle [3]}^{-I}\rangle = {\bf J}^{-I\dagger} |p_{\scriptscriptstyle [3]}^-\rangle\,, \end{equation} where we use the notation \begin{eqnarray} && \label{cubeq05} {\bf P}^- \equiv \sum_{a=1}^3 p_a^-\,,\qquad {\bf J}^{-I\dagger} \equiv \sum_{a=1}^3 J_a^{-I\dagger} \,, \\[5pt] && p_a^- \equiv - \frac{p_a^Ip_a^I + {\rm m}_a^2}{2\beta_a}\,, \\ \label{cubver4} && J_a^{-I\dagger} \equiv p_a^I\partial_{\beta_a} -p_a^-\partial_{p_a^I} -\frac{1}{\beta_a}(M^{{\scriptscriptstyle (a)} IJ}p_a^J + {\rm m}_a M^{{\scriptscriptstyle (a)} I})\,. 
\end{eqnarray} ${\bf P}^-$ and the differential operator ${\bf J}^{-I\dagger}$ in \rf{cubeq05} can be expressed in terms of the momentum $\mathbb{P}^I$ (see Appendix A): \begin{eqnarray} \label{cubver15}&&{\bf P}^- \equiv \frac{\mathbb{P}^I\mathbb{P}^I}{2\hat{\beta}} -\sum_{a=1}^3 \frac{{\rm m}_a^2}{2\beta_a}\,, \\ && \label{cubeq04} {\bf J}^{-I\dagger}|p_{\scriptscriptstyle [3]}^-\rangle =-\frac{1}{3\hat\beta}\, {\cal X}^I |p_{\scriptscriptstyle [3]}^-\rangle \,,\end{eqnarray} where we use the notation \begin{eqnarray} && \label{cubeq06} {\cal X}^I \equiv X^{IJ} \mathbb{P}^J + X^I + X\partial_{\mathbb{P}^I}\,, \\[3pt] \label{harver02} && X^{IJ} \equiv \sum_{a=1}^3 \check\beta_a( \beta_a\partial_{\beta_a} \delta^{IJ} - M^{{\scriptscriptstyle (a)} IJ})\,, \\ \label{harver03} && X^I \equiv \sum_{a=1}^3 \frac{3\hat\beta {\rm m}_a}{\beta_a} M^{{\scriptscriptstyle (a)} I}\,, \\ \label{harver04} && X \equiv -\sum_{a=1}^3 \frac{\hat{\beta} \check{\beta}_a {\rm m}_a^2}{2\beta_a}\,, \\ \label{cubver16}&& \hat{\beta} \equiv \beta_1\beta_2\beta_3\,. \end{eqnarray} Taking \rf{cubeq04} into account we can rewrite Eq.\rf{cubver3} as \begin{equation}\label{cubver13} |j_{\scriptscriptstyle [3]}^{-I}\rangle = - \frac{1}{3\hat\beta{\bf P^-}} {\cal X}^I |p_{\scriptscriptstyle [3]}^-\rangle\,, \end{equation} which tells us that the density $j_{\scriptscriptstyle [3]}^{-I}$ is not an independent quantity but is expressible in terms of vertex $p_{\scriptscriptstyle [3]}^-$ \rf{p2v}. By substituting $j_{\scriptscriptstyle [3]}^{-I}$ \rf{cubver13} into Eq.\rf{cubeq03}, we can verify that Eq.\rf{cubeq03} is fulfilled. Thus, we exhaust all commutators of the Poincar\'e algebra in the cubic approximation. Equations \rf{kinsod}, \rf{honcon04} supplemented by relation \rf{cubver13} provide the complete list of restrictions imposed by commutators of the Poincar\'e algebra on the densities $p_{\scriptscriptstyle [3]}^-$, $j_{\scriptscriptstyle [3]}^{-I}$. 
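Both \rf{po122331} and \rf{cubver15} are straightforward consequences of the conservation laws and can be checked symbolically; the sketch below (ours, illustrative) uses a single transverse component, which suffices because both identities hold componentwise in $I$:

```python
import sympy as sp

p1, p2, b1, b2 = sp.symbols('p1 p2 beta1 beta2')
m1, m2, m3 = sp.symbols('m1 m2 m3')
p3, b3 = -p1 - p2, -b1 - b2          # conservation: p1+p2+p3 = 0, beta1+beta2+beta3 = 0
p, b, m = [p1, p2, p3], [b1, b2, b3], [m1, m2, m3]

# P_12 = P_23 = P_31 = P, with P = (1/3) sum_a (beta_{a+1} - beta_{a+2}) p_a
P12 = p[0]*b[1] - p[1]*b[0]
P23 = p[1]*b[2] - p[2]*b[1]
P31 = p[2]*b[0] - p[0]*b[2]
P = sp.Rational(1, 3)*sum((b[(a+1) % 3] - b[(a+2) % 3])*p[a] for a in range(3))
assert sp.expand(P12 - P) == 0 and sp.expand(P23 - P) == 0 and sp.expand(P31 - P) == 0

# sum_a p_a^- equals P^2/(2 beta1 beta2 beta3) - sum_a m_a^2/(2 beta_a)
lhs = sum(-(p[a]**2 + m[a]**2)/(2*b[a]) for a in range(3))
rhs = P**2/(2*b1*b2*b3) - sum(m[a]**2/(2*b[a]) for a in range(3))
assert sp.simplify(lhs - rhs) == 0
```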
We see that the restrictions imposed by commutators of the Poincar\'e algebra by themselves are not sufficient to fix the vertex $p_{\scriptscriptstyle [3]}^-$ uniquely. To choose the physically relevant densities $p_{\scriptscriptstyle [3]}^-$, $j_{\scriptscriptstyle [3]}^{-I}$, i.e. to fix them uniquely, we impose the light-cone locality condition: we require the densities $p_{\scriptscriptstyle [3]}^-$, $j_{\scriptscriptstyle [3]}^{-I}$ to be polynomials in ${\mathbb{P} }^I$. As regards the vertex $p_{\scriptscriptstyle [3]}^-$, we require this vertex to be local (i.e. polynomial in ${\mathbb{P} }^I$) from the very beginning. However it is clear from relation \rf{cubver13} that a local $p_{\scriptscriptstyle [3]}^-$ does not lead automatically to a local density $j_{\scriptscriptstyle [3]}^{-I}$. From \rf{cubver13}, we see that the light-cone locality condition for $j_{\scriptscriptstyle [3]}^{-I}$ amounts to the equation \begin{equation}\label{loccon01} {\cal X}{}^I |p_{\scriptscriptstyle [3]}^-\rangle = {\bf P}^- |V^I\rangle\,, \end{equation} where a vertex $|V^I\rangle$ is restricted to be polynomial in $\mathbb{P}^I$. In fact, imposing the light-cone locality condition amounts to requiring that the generators of the Poincar\'e algebra be local functionals of the physical fields with respect to transverse directions. The last requirement we impose on the cubic interaction vertex is related to field redefinitions. We note that by using local (i.e. polynomial in the transverse momenta) field redefinitions, we can remove the terms in the vertex $p_{\scriptscriptstyle [3]}^-$ that are proportional to ${\bf P}^-$ (see Appendix B). Since we are interested in the vertex that cannot be removed by field redefinitions, we impose the equation \begin{equation}\label{cubver17} |p_{\scriptscriptstyle [3]}^-\rangle \ne {\bf P}^- |V\rangle\,,\end{equation} where a vertex $|V\rangle$ is restricted to be polynomial in $\mathbb{P}^I$. 
We note that Eqs.\rf{loccon01}, \rf{cubver17} amount to the light-cone dynamical principle. If we restrict ourselves to low spin $s=1,2$ field theories, i.e. Yang-Mills and Einstein theories, it can then be verified that the light-cone dynamical principle and Eqs.\rf{kinsod}, \rf{honcon04} allow fixing the cubic interaction vertices unambiguously (up to several coupling constants). It then seems reasonable to use the light-cone dynamical principle and Eqs.\rf{kinsod}, \rf{honcon04} for studying the cubic interaction vertices of higher spin fields. To summarize the discussion in this section, we collect equations imposed by the kinematical symmetries and the light-cone dynamical principle on vertex $p_{\scriptscriptstyle [3]}^-$ \rf{p2v}: \begin{eqnarray} \label{basequ0001} && {\bf J}^{IJ} |p_{\scriptscriptstyle [3]}^-\rangle = 0\,, \hspace{4.5cm} so(d-2) \hbox{ invariance } \\ \label{basequ0002} && (\mathbb{P}^I\partial_{\mathbb{P}^I} + \sum_{a=1}^3\beta_a\partial_{\beta_a}) | p_{\scriptscriptstyle [3]}^- \rangle =0\,,\qquad \ \ \ \ \ \ \ \beta-\hbox{ homogeneity } \\ && \label{basequ0003} {\cal X}{}^I |p_{\scriptscriptstyle [3]}^-\rangle = {\bf P}^- |V^I \rangle\,, \qquad \hspace{2.4cm} \hbox{ light-cone locality condition } \\[5pt] && \label{basequ0004} |p_{\scriptscriptstyle [3]}^-\rangle \ne {\bf P}^- |V\rangle\,,\end{eqnarray} where the vertices $|V\rangle$ and $|V^I\rangle$ are restricted to be polynomials in $\mathbb{P}^I$. Solving light-cone locality condition \rf{basequ0003} leads to the representation for the density $|j_{\scriptscriptstyle [3]}^{-I}\rangle$ \rf{cubver13} as \begin{equation}\label{cubver13nn} |j_{\scriptscriptstyle [3]}^{-I}\rangle = - \frac{1}{3\hat\beta} |V^I\rangle\,. \end{equation} Equations \rf{basequ0001}-\rf{basequ0004} constitute a complete system of equations on vertex $p_{\scriptscriptstyle [3]}^-$ \rf{p2v}. 
Equation \rf{basequ0001} reflects the invariance of the vertex $|p_{\scriptscriptstyle [3]}^-\rangle$ under $so(d-2)$ rotations, and Eq.\rf{basequ0002} tells us that $|p_{\scriptscriptstyle [3]}^-\rangle$ is a degree-zero homogeneous function in $\mathbb{P}^I$ and $\beta_a$. Equations \rf{basequ0003}, \rf{basequ0004} and the representation for the density $|j_{\scriptscriptstyle [3]}^{-I}\rangle$ \rf{cubver13nn} are obtainable from the light-cone dynamical principle. \subsection{Equations for cubic interaction vertices in the harmonic scheme for an arbitrary realization of the spin degrees of freedom }\label{equharmschem} Kinematical symmetry equations \rf{basequ0001}, \rf{basequ0002} present no difficulties. For example, the solution of Eq.\rf{basequ0001} can be written simply as $p_{\scriptscriptstyle [3]}^- = p_{\scriptscriptstyle [3]}^-({\cal I})$, where ${\cal I}$ is the complete set of the $so(d-2)$ algebra invariants, which can be constructed using $\mathbb{P}^I$ and the variables describing spin degrees of freedom. A real difficulty is then to choose a $p_{\scriptscriptstyle [3]}^-({\cal I})$ that satisfies the light-cone locality condition \rf{basequ0003} and Eq.\rf{basequ0004}. Since Eqs.\rf{basequ0003}, \rf{basequ0004} are inconvenient to use, it is preferable to recast them as explicit differential equations. It turns out that this becomes possible by using a special scheme, the so-called harmonic scheme. Our purpose now is therefore to formulate equations for the cubic interaction vertex in the harmonic scheme. We begin by defining the harmonic scheme. By definition, the vertex $|p_{\scriptscriptstyle [3]}^-\rangle$ is a polynomial in the momentum $\mathbb{P}^I$. It is well known that an {\it arbitrary polynomial} in $\mathbb{P}^I$ can be made a {\it harmonic polynomial} in $\mathbb{P}^I$ by adding a suitable polynomial proportional to $\mathbb{P}^I\mathbb{P}^I$.
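This decomposition is easily illustrated. The following {\tt sympy} sketch (ours, not part of the derivation; the transverse dimension $N=d-2=4$ is an arbitrary illustrative choice) checks that subtracting a suitable multiple of $\mathbb{P}^I\mathbb{P}^I$ renders a simple monomial harmonic:

```python
import sympy as sp

N = 4  # transverse dimension d-2; arbitrary illustrative choice
P = sp.symbols(f'P1:{N+1}')  # components of the momentum \mathbb{P}^I

def laplacian(f):
    # \partial_{\mathbb{P}^I} \partial_{\mathbb{P}^I} f
    return sp.expand(sum(sp.diff(f, p, 2) for p in P))

Psq = sum(p**2 for p in P)   # \mathbb{P}^I \mathbb{P}^I
poly = P[0]**2               # (P^1)^2 is not harmonic ...
harmonic = poly - Psq/N      # ... but subtracting P^2/N makes it so
assert laplacian(poly) == 2
assert laplacian(harmonic) == 0
```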
We also recall that a polynomial proportional to $\mathbb{P}^I\mathbb{P}^I$ can be generated using field redefinitions. In other words, via field redefinitions, the vertex $|p_{\scriptscriptstyle [3]}^-\rangle$ can be made a harmonic polynomial in $\mathbb{P}^I$ (see Appendix B). {\it The scheme in which the vertex $|p_{\scriptscriptstyle [3]}^-\rangle$ satisfies the harmonic equation \begin{equation} \label{harmcon01} \partial_{\mathbb{P}^I}^{}\partial_{\mathbb{P}^I}^{}|p_{\scriptscriptstyle [3]}^-\rangle=0 \end{equation} is referred to as the harmonic scheme}. We proceed with the discussion of Eqs.\rf{basequ0003}, \rf{basequ0004} in the harmonic scheme. In the harmonic scheme, Eq.\rf{basequ0004} is satisfied automatically because a harmonic polynomial in $\mathbb{P}^I$ cannot be represented as ${\bf P}^- |V\rangle$, where $|V\rangle$ is a polynomial in $\mathbb{P}^I$. It then remains to analyze the light-cone locality condition \rf{basequ0003} in the harmonic scheme. It turns out that this condition leads to the differential equations for vertex $p_{\scriptscriptstyle [3]}^-$ \rf{p2v} (see Appendix C): \begin{eqnarray} \label{harver01} && \Bigl( X^{IJ} {\cal P}^J + X^I + \Bigl(X\delta^{IJ} + \frac{\hat\beta}{2\widehat{k} + N}\sum_{a=1}^3 \frac{{\rm m}_a^2}{\beta_a}X^{IJ}\Bigr) \partial_{\mathbb{P}^J} \Bigr) |p_{\scriptscriptstyle [3]}^-\rangle = 0\,, \end{eqnarray} where $X^{IJ}$, $X^I$, $X$ are defined in \rf{harver02}-\rf{harver04} and we use the notation \begin{equation} \label{khdef} {\cal P}^I \equiv \mathbb{P}^I - \mathbb{P}^J\mathbb{P}^J \frac{1}{2\widehat{k} + N}\partial_{\mathbb{P}^I}\,,\qquad \widehat{k} \equiv \mathbb{P}^I\partial_{\mathbb{P}^I}\,,\qquad N\equiv d-2\,.\ \end{equation} In what follows, we refer to Eqs.\rf{harver01} as locality equations. 
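As a cross-check of the structures entering \rf{harver01}, \rf{khdef}, one can verify that the operator ${\cal P}^I$ maps harmonic polynomials to harmonic polynomials, which is consistent with working in the harmonic scheme. A {\tt sympy} sketch (ours; $N=4$ is an illustrative choice, and we use that, acting on a degree-$n$ homogeneous polynomial, the operator $(2\widehat{k}+N)^{-1}$ reduces to the number $1/(2n+N)$):

```python
import sympy as sp

N = 4  # transverse dimension d-2; illustrative choice
P = sp.symbols(f'P1:{N+1}')
Psq = sum(p**2 for p in P)

def laplacian(f):
    return sp.expand(sum(sp.diff(f, p, 2) for p in P))

def calP(f, I):
    # {\cal P}^I f = P^I f - P^2 (2k+N)^{-1} \partial_{P^I} f ; on a
    # homogeneous f of degree n, \partial_{P^I} f has degree n-1, so
    # the inverse operator is just the number 1/(2(n-1)+N)
    n = sp.Poly(f, *P).total_degree()
    return sp.expand(P[I]*f - Psq*sp.diff(f, P[I])/(2*(n - 1) + N))

H = P[0]*P[1]                       # harmonic: laplacian(P1*P2) = 0
assert laplacian(H) == 0
assert laplacian(calP(H, 0)) == 0   # {\cal P}^1 H is again harmonic
```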
A remarkable property of the harmonic scheme is that it allows writing a closed expression for the density $|j_{\scriptscriptstyle [3]}^{-I}\rangle$ in terms of vertex $|p_{\scriptscriptstyle [3]}^-\rangle$ \rf{p2v} without specifying the spin operators $M^{IJ}$, $M^I$: \begin{equation} \label{clorepj3}|j_{\scriptscriptstyle [3]}^{-I}\rangle = -\frac{2}{3(2 \widehat{k} + N)} X^{IJ} \partial_{\mathbb{P}^J}|p_{\scriptscriptstyle [3]}^-\rangle\,. \end{equation} To summarize, the complete set of equations to be solved in the harmonic scheme is given by \rf{basequ0001}, \rf{basequ0002}, \rf{harmcon01}, \rf{harver01}. These equations are used in Section \ref{sod-4sec} to develop the so-called $so(d-4)$ light-cone approach. \subsection{Equations for parity invariant cubic interaction vertices in the minimal scheme and for the oscillator realization of spin degrees of freedom} In this section, we develop an alternative approach to the analysis of the equations for the cubic interaction vertex \rf{basequ0001}-\rf{basequ0004} based on a scheme we refer to as the minimal scheme. This scheme, to be defined below, turns out to be convenient for the classification of cubic interaction vertices. To proceed, we use the oscillator realization of physical fields in terms of the ket-vectors in \rf{intver16n1}, \rf{intver16}. In this case, the spin arguments of vertex $p_{\scriptscriptstyle [3]}^-$ \rf{p2v} denoted by $\alpha$ become oscillators $\alpha_n^{{\scriptscriptstyle (a)} I}$, $\alpha_n^{\scriptscriptstyle (a)}$, $a=1,2,3$, and vertex $p_{\scriptscriptstyle [3]}^-$ \rf{p2v} takes the form% \footnote{Throughout this paper, unless otherwise specified, the subscripts $m,n,q$ take the values $1,\ldots,\nu$.
The short notation like $p_{\scriptscriptstyle [3]}^-(x^{\scriptscriptstyle (a)})$ is used to indicate the dependence of $p_{\scriptscriptstyle [3]}^-$ on $x^{\scriptscriptstyle (1)}$, $x^{\scriptscriptstyle (2)}$, $x^{\scriptscriptstyle (3)}$.}: \begin{equation} \label{varrep5} |p_{\scriptscriptstyle [3]}^-\rangle = p_{\scriptscriptstyle [3]}^- (\mathbb{P}^I,\beta_a ;\, \alpha_n^{{\scriptscriptstyle (a)} I}, \, \alpha_n^{\scriptscriptstyle (a)} )|0\rangle_1|0\rangle_2|0\rangle_3\,. \end{equation} We now analyze Eqs.\rf{basequ0001}-\rf{basequ0004} in turn. {\bf i}) We first analyze the restrictions imposed by the $so(d-2)$ invariance equations \rf{basequ0001}. These equations tell us that vertex $p_{\scriptscriptstyle [3]}^-$ \rf{varrep5} depends on the $so(d-2)$ algebra invariants that can be constructed using $\mathbb{P}^I$ and the oscillators $\alpha_n^{{\scriptscriptstyle (a)} I}$. It is clear that the following $so(d-2)$ invariants can be constructed: \begin{equation} \mathbb{P}^I\mathbb{P}^I\,,\qquad \alpha_n^{{\scriptscriptstyle (a)} I}\mathbb{P}^I\,,\qquad \alpha_n^{{\scriptscriptstyle (a)} I}\alpha_m^{{\scriptscriptstyle (b)} I}\,.\end{equation} We note that there are additional invariants constructed using the antisymmetric Levi-Civita symbol $\epsilon^{I_1\ldots I_{d-2}}$. {\it The vertices not involving the antisymmetric Levi-Civita symbol are said to be parity invariant vertices}% \footnote{ The methods of manifest covariantization of light-cone vertices \cite{Hata:1986jd}-\cite{Siegel:1988yz} are most suitable for studying the parity invariant vertices. Thus, we expect that all our parity invariant vertices could be covariantized in a relatively straightforward way.}, and {\it vertices involving one antisymmetric Levi-Civita symbol are said to be parity violating vertices}. We focus on the parity invariant vertices (below we shall discuss the cases where the parity invariant vertices provide a complete list of vertices). 
Thus, we restrict our attention to the vertex \begin{equation}\label{varrep6} p_{\scriptscriptstyle [3]}^- = p_{\scriptscriptstyle [3]}^-(\mathbb{P}^I\mathbb{P}^I,\, \beta_a\,;\, \alpha_n^{{\scriptscriptstyle (a)} I}\mathbb{P}^I,\, \alpha_n^{\scriptscriptstyle (a)}\,;\, \alpha_{mn}^{\scriptscriptstyle (aa+1)},\,\, \alpha_{mn}^{{\scriptscriptstyle (aa)}} )\,, \end{equation} where% \begin{equation}\label{amnabdef} \alpha_{mn}^{{\scriptscriptstyle (ab)}}\equiv \alpha_m^{{\scriptscriptstyle (a)} I} \alpha_n^{{\scriptscriptstyle (b)} I}\,.\end{equation} \bigskip {\bf ii}) The second step is to analyze the restrictions imposed by Eq.\rf{basequ0004}. Using field redefinitions, we can remove the terms in \rf{varrep6} that are proportional to $\mathbb{P}^I\mathbb{P}^I$. Thus, we can drop the dependence on $\mathbb{P}^I\mathbb{P}^I$ in $p_{\scriptscriptstyle [3]}^-$ \rf{varrep6}. {\it The scheme in which the vertex $p_{\scriptscriptstyle [3]}^-$ is independent of $\mathbb{P}^I\mathbb{P}^I$ is said to be the minimal scheme}. Obviously, in the minimal scheme, vertex $p_{\scriptscriptstyle [3]}^-$ \rf{varrep6}, being polynomial in $\mathbb{P}^I$, satisfies Eq.\rf{basequ0004} automatically.
Before analyzing the light-cone locality condition, in place of the variables used in \rf{varrep6}, \begin{equation}\label{var01} \beta_a\,, \quad \alpha_n^{{\scriptscriptstyle (a)} I}\mathbb{P}^I\,,\quad \alpha_n^{\scriptscriptstyle (a)}\,,\quad \alpha_{mn}^{\scriptscriptstyle (aa+1)}, \quad \alpha_{mn}^{\scriptscriptstyle (aa)}\,, \end{equation} we introduce the new variables \begin{equation} \label{var02} {}~ \ \ \ \beta_a\,,\quad \ \ B_n^{\scriptscriptstyle (a)}, \qquad \alpha_n^{\scriptscriptstyle (a)}\,,\quad \alpha_{mn}^{\scriptscriptstyle (aa+1)}\,, \quad Q_{mn}^{\scriptscriptstyle (aa)}\,, \ \ \end{equation} where the new `improved' $so(d-2)$ invariants $B_m^{\scriptscriptstyle (a)}$, $Q_{mn}^{\scriptscriptstyle (aa)}$ are defined by \begin{eqnarray} \label{var02N1} && B_n^{\scriptscriptstyle (a)} \equiv \frac{\alpha_n^{{\scriptscriptstyle (a)} I}\mathbb{P}^I}{\beta_a}+ \frac{\check{\beta}_a}{2\beta_a}{\rm m}_a \alpha_n^{\scriptscriptstyle (a)}\,, \\[5pt] \label{Qmnaadef} && Q_{mn}^{\scriptscriptstyle (aa)} \equiv \alpha_{mn}^{\scriptscriptstyle (aa)} + \alpha_m^{\scriptscriptstyle (a)} \alpha_n^{\scriptscriptstyle (a)}\,. \end{eqnarray} The use of the variables $Q_{mn}^{\scriptscriptstyle (aa)}$ \rf{var02} instead of $\alpha_{mn}^{\scriptscriptstyle (aa)}$ \rf{var01} is preferable because the variables $Q_{mn}^{\scriptscriptstyle (aa)}$ commute with the spin operators of the $so(d-1)$ algebra \begin{eqnarray} \label{MaIJdef} && M^{{\scriptscriptstyle (a)} IJ} = \sum_{n=1}^\nu (\alpha_n^{{\scriptscriptstyle (a)} I} \bar{\alpha}_n^{{\scriptscriptstyle (a)} J} - \alpha_n^{{\scriptscriptstyle (a)} J}\bar{\alpha}_n^{{\scriptscriptstyle (a)} I})\,,\qquad M^{{\scriptscriptstyle (a)} I} = \sum_{n=1}^\nu ( \alpha_n^{{\scriptscriptstyle (a)} I}\bar{\alpha}_n^{\scriptscriptstyle (a)} - \alpha_n^{\scriptscriptstyle (a)}\bar{\alpha}_n^{{\scriptscriptstyle (a)} I})\,, \ \ \ \end{eqnarray} i.e. $Q_{mn}^{\scriptscriptstyle (aa)}$ are invariants of the $so(d-1)$ algebra. 
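To make this invariance explicit, assume the standard oscillator commutators $[\bar\alpha_m^{{\scriptscriptstyle (a)} I},\alpha_n^{{\scriptscriptstyle (b)} J}]=\delta^{ab}\delta_{mn}\delta^{IJ}$, $[\bar\alpha_m^{\scriptscriptstyle (a)},\alpha_n^{\scriptscriptstyle (b)}]=\delta^{ab}\delta_{mn}$ (this normalization is our assumption about the conventions). Then \rf{MaIJdef} gives $[M^{{\scriptscriptstyle (a)} I},\alpha_m^{{\scriptscriptstyle (a)} J}] = -\delta^{IJ}\alpha_m^{\scriptscriptstyle (a)}$ and $[M^{{\scriptscriptstyle (a)} I},\alpha_m^{\scriptscriptstyle (a)}] = \alpha_m^{{\scriptscriptstyle (a)} I}$, so that \begin{equation} [M^{{\scriptscriptstyle (a)} I}, Q_{mn}^{\scriptscriptstyle (aa)}] = -\alpha_m^{\scriptscriptstyle (a)}\alpha_n^{{\scriptscriptstyle (a)} I} - \alpha_m^{{\scriptscriptstyle (a)} I}\alpha_n^{\scriptscriptstyle (a)} + \alpha_m^{{\scriptscriptstyle (a)} I}\alpha_n^{\scriptscriptstyle (a)} + \alpha_m^{\scriptscriptstyle (a)}\alpha_n^{{\scriptscriptstyle (a)} I} = 0\,, \end{equation} while $[M^{{\scriptscriptstyle (a)} IJ}, Q_{mn}^{\scriptscriptstyle (aa)}]=0$ holds because $Q_{mn}^{\scriptscriptstyle (aa)}$ is built from $so(d-2)$ scalar contractions.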
The advantages of the variables $B_n^{\scriptscriptstyle (a)}$ \rf{var02} over the variables $\alpha_n^{{\scriptscriptstyle (a)} I} \mathbb{P}^I$ \rf{var01} are to be explained shortly in the course of the analysis of the light-cone locality condition. Thus, we use the `improved' representation for the vertex \begin{equation}\label{varrep7} p_{\scriptscriptstyle [3]}^- = p_{\scriptscriptstyle [3]}^-( \beta_a\,,\, B_n^{\scriptscriptstyle (a)},\, \alpha_n^{\scriptscriptstyle (a)} ;\, \alpha_{mn}^{\scriptscriptstyle (aa+1)},\, Q_{mn}^{\scriptscriptstyle (aa)})\,. \end{equation} {\bf iii}) We next analyze the restrictions imposed by the light-cone locality condition \rf{basequ0003}. For this, we derive the following formula for action of the operator ${\cal X}^I$ \rf{cubeq06} on vertex $p_{\scriptscriptstyle [3]}^-$ \rf{varrep7}: \begin{equation}\label{basrel1} \frac{1}{3\hat{\beta}}{\cal X}^I | p_{\scriptscriptstyle [3]}^-\rangle = -{\bf P}^- \sum_{a=1,2,3\atop n=1,\dots ,\nu} \frac{2\check{\beta}_a}{3\beta_a} \alpha_n^{{\scriptscriptstyle (a)} I}\partial_{B_n^{\scriptscriptstyle (a)}} |p_{\scriptscriptstyle [3]}^-\rangle + \sum_{a=1,2,3\atop n=1,\dots ,\nu} \frac{1}{\beta_a}\alpha_n^{{\scriptscriptstyle (a)} I} G_{a n} + \mathbb{P}^I E\,,\end{equation} where we use the notation \begin{eqnarray} \label{Gandef} G_{a n} & \equiv & \Bigl\{ -\frac{1}{2}({\rm m}_{a+1}^2- {\rm m}_{a+2}^2)\partial_{B_n^{\scriptscriptstyle (a)}} + {\rm m}_a \partial_{\alpha_n^{\scriptscriptstyle (a)}}\Bigr. \nonumber\\ \Bigl. 
& + &\sum_{m=1}^\nu (B_m^{\scriptscriptstyle (a+1)} + \frac{1}{2} {\rm m}_{a+1} \alpha_m^{\scriptscriptstyle (a+1)} ) \partial_{\alpha_{nm}^{\scriptscriptstyle (aa+1)} }-(B_m^{\scriptscriptstyle (a+2)} - \frac{1}{2} {\rm m}_{a+2} \alpha_m^{\scriptscriptstyle (a+2)} ) \partial_{ \alpha_{mn}^{\scriptscriptstyle (a+2a)}} \Bigr\} |p_{\scriptscriptstyle [3]}^-\rangle\,,\ \ \ \ \ \ \\ E & \equiv & \frac{1}{3\hat{\beta}} \sum_{a=1}^3 \check{\beta}_a \beta_a \partial_{\beta_a} |p_{\scriptscriptstyle [3]}^-\rangle\,. \end{eqnarray} It follows from relation \rf{basrel1} that the light-cone locality condition \rf{basequ0003} amounts to the equations \begin{eqnarray} \label{loc1}&& G_{a n} =0\,,\qquad a=1,2,3; \qquad n=1,\ldots,\nu; \\[6pt] \label{loc2}&& E=0\,. \end{eqnarray} We note that in deriving relation \rf{basrel1} we use the fact that the operator ${\cal X}^I$ does not act on the variables $Q_{mn}^{\scriptscriptstyle (aa)}$ because these variables commute with the spin operators \rf{MaIJdef}. The use of the variables $B_n^{\scriptscriptstyle (a)}$ is advantageous because $B_n^{\scriptscriptstyle (a)}$ satisfy the equations \begin{eqnarray} \label{Bhelequ1} && (\mathbb{P}^I\partial_{\mathbb{P}^I} + \sum_{a=1}^3\beta_a\partial_{\beta_a}) B_n^{\scriptscriptstyle (b)}=0\,, \\[6pt] \label{Bhelequ2} && \sum_{a=1}^3 \Bigl\{ \check{\beta}_a \Bigr( \mathbb{P}^I \beta_a \partial_{\beta_a} + \sum_{n=1}^\nu \mathbb{P}^J \alpha_n^{{\scriptscriptstyle (a)} J} \bar{\alpha}_n^{{\scriptscriptstyle (a)} I}\Bigl)- \frac{3\hat{\beta} {\rm m}_a}{\beta_a} \sum_{n=1}^\nu \alpha_n^{\scriptscriptstyle (a)} \bar{\alpha}_n^{{\scriptscriptstyle (a)} I}\Bigr\} B_m^{\scriptscriptstyle (b)}|0\rangle =0\,. \end{eqnarray} Equations \rf{Bhelequ1} and \rf{Bhelequ2} are very helpful for solving the $\beta$-homogeneity equation \rf{basequ0002} and deriving formula \rf{basrel1} respectively. 
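Equation \rf{Bhelequ1} is easy to check by computer algebra. The sketch below (ours) keeps a single transverse component, writes $\alpha_n^{{\scriptscriptstyle (a)} I}\mathbb{P}^I \to a_1 p$, and uses $\check\beta_1 = \beta_2-\beta_3$ (taken here as the convention behind $\check\beta_a$; only its degree-one homogeneity in the $\beta$'s actually matters for the check):

```python
import sympy as sp

p, a1, al, m = sp.symbols('p a1 al m')   # p ~ \mathbb{P}, a1 ~ alpha^I, al ~ alpha
b1, b2, b3 = sp.symbols('b1 b2 b3')      # the three beta_a

# B^{(1)} of Eq. (var02N1) with one transverse component kept
B = a1*p/b1 + (b2 - b3)/(2*b1)*m*al

# Euler operator of Eq. (Bhelequ1): P^I d_{P^I} + sum_a beta_a d_{beta_a}
euler = p*sp.diff(B, p) + sum(b*sp.diff(B, b) for b in (b1, b2, b3))
assert sp.simplify(euler) == 0   # B has total degree zero
```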
\bigskip {\bf iv}) We finally analyze the restrictions imposed by the $\beta$--homogeneity equation \rf{basequ0002} and Eq.\rf{loc2}. In terms of vertex $p_{\scriptscriptstyle [3]}^-$ \rf{varrep7}, Eqs.\rf{basequ0002}, \rf{loc2} become \begin{equation} \label{betdep1} \sum_{a=1}^3 \beta_a \partial_{\beta_a} p_{\scriptscriptstyle [3]}^-=0\,,\qquad\quad \sum_{a=1}^3 \check{\beta}_a\beta_a\partial_{\beta_a} p_{\scriptscriptstyle [3]}^-=0\,. \end{equation} Equations \rf{betdep1} imply that vertex $p_{\scriptscriptstyle [3]}^-$ \rf{varrep7} is independent of $\beta_a$, $a=1,2,3$: \begin{equation}\label{varrep8} p_{\scriptscriptstyle [3]}^- = p_{\scriptscriptstyle [3]}^-(B_n^{\scriptscriptstyle (a)}, \alpha_n^{\scriptscriptstyle (a)}\,, \alpha_{mn}^{\scriptscriptstyle (aa+1)}, Q_{mn}^{\scriptscriptstyle (aa)} )\,, \end{equation} while Eqs.\rf{cubver13}, \rf{basrel1}, \rf{loc1}, \rf{loc2} lead to the following expression for $|j_{\scriptscriptstyle [3]}^{-I}\rangle$: \begin{equation} \label{varrep8N1} |j_{\scriptscriptstyle [3]}^{-I}\rangle = \sum_{a=1,2,3\atop n=1,\dots ,\nu} \frac{2\check{\beta}_a}{3\beta_a} \alpha_n^{{\scriptscriptstyle (a)} I}\partial_{B_n^{\scriptscriptstyle (a)}} |p_{\scriptscriptstyle [3]}^-\rangle\,. \end{equation} {\bf Summary}: the result of the analysis of Eqs.\rf{basequ0001}-\rf{basequ0004} in the minimal scheme is given in formula \rf{varrep8}, with the equations still to be solved given in \rf{loc1}. Up to this point, our treatment has applied to vertices for massive as well as massless fields. From now on, we separately consider vertices for the massless fields, vertices involving both massless and massive fields, and vertices for the massive fields. The vertices in Sections \ref{Solcubintversec}-\ref{secMMM} constitute the complete lists of parity invariant cubic interaction vertices for massless and massive fields in $d\geq 4$ dimensions.
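The way \rf{betdep1} enforces $\beta$-independence can be seen on a monomial ansatz (a short check; we use $\check\beta_a = \beta_{a+1}-\beta_{a+2}$, which we take to be the convention behind $\check\beta_a$): for $p_{\scriptscriptstyle [3]}^- = \beta_1^{n_1}\beta_2^{n_2}\beta_3^{n_3}$, the first equation gives $n_1+n_2+n_3=0$, while the second requires \begin{equation} \sum_{a=1}^3 \check\beta_a n_a = \beta_1(n_3-n_2) + \beta_2(n_1-n_3) + \beta_3(n_2-n_1) = 0 \end{equation} identically in the $\beta_a$, i.e. $n_1=n_2=n_3$; together these conditions force $n_a=0$.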
\newsection{Parity invariant cubic interaction vertices for massless fields } \label{Solcubintversec} We begin by discussing the parity invariant cubic interaction vertex for the massless mixed-symmetry fields. We consider vertex \rf{varrep8} for three massless fields: \begin{equation}\label{0001} {\rm m}_1 = {\rm m}_2 = {\rm m}_3=0\,.\end{equation} Equations for the vertex involving three massless fields are obtainable from Eqs.\rf{loc1} by letting ${\rm m}_a \rightarrow 0$, $a=1,2,3$. The general solution for vertex \rf{varrep8} then takes the form (see Appendix D) \begin{equation}\label{0002} p_{\scriptscriptstyle [3]}^- = p_{\scriptscriptstyle [3]}^-(B_n^{\scriptscriptstyle (a)}; \alpha_{mn}^{{\scriptscriptstyle (aa)}};\, Z_{mnq})\,,\end{equation} where we use the notation \begin{equation} \label{0003} B_n^{\scriptscriptstyle (a)} \equiv \frac{\alpha_n^{{\scriptscriptstyle (a)} I}\mathbb{P}^I}{\beta_a}\,,\qquad\quad Z_{mnq}\equiv B_m^{\scriptscriptstyle (1)} \alpha_{nq}^{\scriptscriptstyle (23)} + B_n^{\scriptscriptstyle (2)} \alpha_{qm}^{\scriptscriptstyle (31)} + B_q^{\scriptscriptstyle (3)} \alpha_{mn}^{\scriptscriptstyle (12)} \,, \end{equation} and $\alpha_{mn}^{{\scriptscriptstyle (ab)}}$ is defined in \rf{amnabdef}. This solution provides the complete list of parity invariant cubic interaction vertices for both totally symmetric and mixed-symmetry fields. We now comment on the solution obtained. Vertex $p_{\scriptscriptstyle [3]}^-$ \rf{0002} depends on $B_n^{\scriptscriptstyle (a)}$, $\alpha_{mn}^{\scriptscriptstyle (aa)}$ and $Z_{mnq}$, which are the respective degree 1, 2, and 3 homogeneous polynomials in oscillators. Henceforth, degree 1, 2, and 3 homogeneous polynomials in oscillators are referred to as linear, quadratic, and cubic forms respectively.
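That functions of the forms above indeed solve the locality equations \rf{loc1} in the massless case can be verified mechanically. The {\tt sympy} sketch below (ours) sets ${\rm m}_a=0$, $\nu=1$, treats the forms' arguments as independent symbols, and checks that $G_{an}$ annihilates representative functions of $B^{\scriptscriptstyle (a)}$ and $Z$:

```python
import sympy as sp

B1, B2, B3, a12, a23, a31 = sp.symbols('B1 B2 B3 a12 a23 a31')
Z = B1*a23 + B2*a31 + B3*a12   # the cubic form Z for nu = 1

def G(p, a):
    # massless (m_a = 0), one-oscillator (nu = 1) form of Eq. (loc1):
    # G_a p = B^{(a+1)} d_{alpha^{(a,a+1)}} p - B^{(a+2)} d_{alpha^{(a+2,a)}} p
    Bs = {1: B1, 2: B2, 3: B3}
    pair = {(1, 2): a12, (2, 3): a23, (3, 1): a31}
    nxt = lambda x: x % 3 + 1  # cyclic shift a -> a+1
    return (Bs[nxt(a)]*sp.diff(p, pair[(a, nxt(a))])
            - Bs[nxt(nxt(a))]*sp.diff(p, pair[(nxt(nxt(a)), a)]))

for vertex in (Z, Z**2, B1*B2*B3, B3*Z):   # entries of Table I
    for a in (1, 2, 3):
        assert sp.simplify(G(vertex, a)) == 0
```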
We emphasize, however, that the contribution of the $\alpha_{mn}^{\scriptscriptstyle (aa)}$-terms to the Hamiltonian $P_{\scriptscriptstyle [3]}^-$ vanishes when the ket-vector \rf{intver16n2} is subjected to the tracelessness constraint. That is, the physical massless fields, being irreps of the $so(d-2)$ algebra, satisfy the tracelessness constraints \begin{equation}\label{traceless01} \bar\alpha_m^{{\scriptscriptstyle (a)} I}\bar\alpha_n^{{\scriptscriptstyle (a)} I}|\phi_a^{{\rm m}_a=0}\rangle = 0\,,\qquad a=1,2,3\,.\end{equation} It is then clear that the $\alpha_{mn}^{\scriptscriptstyle (aa)}$-terms in \rf{0002} do not contribute to the Hamiltonian $P_{\scriptscriptstyle [3]}^-$ \rf{pm1}% \footnote{ We keep $\alpha_{mn}^{\scriptscriptstyle (aa)}$-terms in the general solution \rf{0002} because in certain applications it is convenient to deal with ket-vectors $|\phi^{{\rm m}=0}\rangle$, which are not subjected to the tracelessness constraint \rf{traceless01}. It is clear that such ket-vectors describe a collection of massless fields.}. Thus, in the case of massless fields belonging to irreps of the $so(d-2)$ algebra, vertex \rf{0002} is governed by the linear forms $B_n^{\scriptscriptstyle (a)}$ and by the cubic forms $Z_{mnq}$. To understand the remaining important properties of solution \rf{0002}, we consider cubic vertices for massless totally symmetric fields. \subsection{ Cubic interaction vertices for massless totally symmetric fields}\label{SolcubintversecN1} In this section we restrict attention to the parity invariant cubic interaction vertices for massless totally symmetric fields. To consider the totally symmetric fields it suffices to use one sort of oscillators, i.e. to set $\nu = 1$ in \rf{0002}, \rf{0003}. To simplify formulas we drop the oscillator subscript $n=1$ and use the simplified notation $\alpha^I\equiv \alpha_1^I$.
The cubic interaction vertex can then be obtained from the general solution \rf{0002} by using the identifications \begin{equation}\label{simnot01} \alpha^{{\scriptscriptstyle (a)} I} \equiv \alpha_1^{{\scriptscriptstyle (a)} I}\,,\qquad a=1,2,3\,,\end{equation} in \rf{0002} and ignoring the contribution of oscillators carrying a subscript $n>1$. Adopting the simplified notation \rf{simnot01} for linear forms $B^{\scriptscriptstyle (a)} \equiv B_1^{\scriptscriptstyle (a)}$ \rf{0003}, quadratic forms $\alpha^{\scriptscriptstyle (ab)} \equiv \alpha_{11}^{\scriptscriptstyle (ab)}$ \rf{amnabdef}, and cubic form $Z\equiv Z_{111}$ \rf{0003}, we obtain \begin{equation}\label{0008} B^{\scriptscriptstyle (a)} \equiv \frac{\alpha^{{\scriptscriptstyle (a)} I} \mathbb{P}^I}{\beta_a}\,,\qquad \alpha^{\scriptscriptstyle (ab)} \equiv \alpha^{{\scriptscriptstyle (a)} I}\alpha^{{\scriptscriptstyle (b)} I}\,,\qquad Z\equiv \sum_{a=1}^3 B^{\scriptscriptstyle (a)} \alpha^{\scriptscriptstyle (a+1a+2)} \,, \end{equation} and vertex \rf{0002} becomes \begin{equation} \label{sec05nn1} p_{\scriptscriptstyle [3]}^- = p_{\scriptscriptstyle [3]}^-(B^{\scriptscriptstyle (a)};\, \alpha^{\scriptscriptstyle (aa)};\, Z )\,.\end{equation} Vertex \rf{sec05nn1} describes the interaction of towers of massless fields \rf{intver16n10}. We now obtain the vertex for massless totally symmetric spin $s^{\scriptscriptstyle (1)}$, $s^{\scriptscriptstyle (2)}$, $s^{\scriptscriptstyle (3)}$ fields. The massless totally symmetric spin $s^{\scriptscriptstyle (a)}$ fields are described by the respective ket-vectors $|\phi_{s^{\scriptscriptstyle (a)} }^{{\rm m}_a=0}\rangle$. The ket-vectors of massless fields $|\phi_{s^{\scriptscriptstyle (a)} }^{{\rm m}_a=0}\rangle$, $a=1,2,3$, are obtainable from \rf{intver16n5} by the replacements $s\rightarrow s^{\scriptscriptstyle (a)} $, $\alpha^I\rightarrow \alpha^{{\scriptscriptstyle (a)} I}$.
Taking into account that the ket-vectors $|\phi_{s^{\scriptscriptstyle (a)}}^{{\rm m}_a=0}\rangle$ are the respective degree $s^{\scriptscriptstyle (a)} $ homogeneous polynomials in the oscillators $\alpha^{{\scriptscriptstyle (a)} I}$ (see \rf{intver16n7}), it is easy to see that the vertex we are interested in must satisfy the equations \begin{equation} \label{sec05nn2} (\alpha^{{\scriptscriptstyle (a)} I}\bar\alpha^{{\scriptscriptstyle (a)} I} - s^{\scriptscriptstyle (a)} )|p_{\scriptscriptstyle [3]}^-\rangle = 0\,,\qquad a=1,2,3, \end{equation} which tell us that the vertex $p_{\scriptscriptstyle [3]}^-$ should be a degree $s^{\scriptscriptstyle (a)} $ homogeneous polynomial in the oscillators $\alpha^{{\scriptscriptstyle (a)} I}$. Taking into account that the forms $B^{\scriptscriptstyle (a)}$ and $Z$ \rf{0008} are the respective degree 1 and 3 homogeneous polynomials in oscillators, we find the general solution of Eq.\rf{sec05nn2} \begin{equation}\label{0006} p_{\scriptscriptstyle [3]}^-(s^{\scriptscriptstyle (1)},s^{\scriptscriptstyle (2)},s^{\scriptscriptstyle (3)};k) = Z^{\frac{1}{2}({\bf s} - k)} \prod_{a=1}^3 (B^{\scriptscriptstyle (a)})^{s^{\scriptscriptstyle (a)} + \frac{1}{2}(k - {\bf s}) }\,, \end{equation} where we use the notation% \footnote{ We ignore the contribution of $\alpha^{\scriptscriptstyle (aa)}$-terms \rf{sec05nn1} to vertex \rf{0006}. Because of the tracelessness constraint (see the second relation in \rf{intver16n8}) the contribution of these terms to the Hamiltonian $P_{\scriptscriptstyle [3]}^-$ \rf{pm1} vanishes.} \begin{equation}\label{0007} {\bf s} \equiv \sum_{a=1}^3 s^{\scriptscriptstyle (a)} \,, \end{equation} and the integer $k$ is a free parameter of the solution. The integer $k$ labels all possible cubic vertices that can be built for massless spin $s^{\scriptscriptstyle (1)}$, $s^{\scriptscriptstyle (2)}$, $s^{\scriptscriptstyle (3)}$ fields and has a clear physical interpretation.
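The power counting in \rf{0006} and the resulting restrictions on $s^{\scriptscriptstyle (a)}$ and $k$ (the powers of $B^{\scriptscriptstyle (a)}$ and $Z$ must be non-negative integers) are easily automated. A short script (ours, a sketch rather than part of the derivation):

```python
def vertex_powers(s1, s2, s3, k):
    # Exponents in p(s1,s2,s3;k) = Z^{(s-k)/2} * prod_a B_a^{s_a + (k-s)/2},
    # cf. formula (0006); returns None when a power fails to be a
    # non-negative integer
    s = s1 + s2 + s3
    if (s - k) % 2:
        return None
    z_pow = (s - k) // 2
    b_pows = tuple(sa + (k - s) // 2 for sa in (s1, s2, s3))
    if z_pow < 0 or min(b_pows) < 0:
        return None
    return z_pow, b_pows

def allowed_k(s1, s2, s3):
    return [k for k in range(s1 + s2 + s3 + 1)
            if vertex_powers(s1, s2, s3, k) is not None]

# Yang-Mills and Einstein cubic vertices: Z and Z^2
assert vertex_powers(1, 1, 1, 1) == (1, (0, 0, 0))
assert vertex_powers(2, 2, 2, 2) == (2, (0, 0, 0))
# for d > 4 the number of vertices is s_min + 1, here 3 for spins (2,2,2)
assert allowed_k(2, 2, 2) == [2, 4, 6]
# no two-derivative (gravitational) vertex for spins (s, s, 2) with s > 2
assert 2 not in allowed_k(3, 3, 2)
```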
Taking into account that the forms $B^{\scriptscriptstyle (a)}$ and $Z$ \rf{0008} are degree 1 homogeneous polynomials in the momentum $\mathbb{P}^I$,% \footnote{ It is this property of the forms $B^{\scriptscriptstyle (a)}$ and $Z$ that allows us to introduce a vertex that is a homogeneous polynomial in $\mathbb{P}^I$. A completely different situation occurs in the case of massive fields, whose cubic interaction vertices depend on forms that are non-homogeneous polynomials in the $\mathbb{P}^I$.} it is easy to see that vertex \rf{0006} is a degree $k$ homogeneous polynomial in $\mathbb{P}^I$. To summarize, the vertex $p_{\scriptscriptstyle [3]}^-(s^{\scriptscriptstyle (1)},s^{\scriptscriptstyle (2)},s^{\scriptscriptstyle (3)};k)$ describes the interaction of three massless spin $s^{\scriptscriptstyle (1)}$, $s^{\scriptscriptstyle (2)}$, $s^{\scriptscriptstyle (3)}$ fields having $k$ powers of the momentum $\mathbb{P}^I$. In the Lorentz covariant approach, the integer $k$ is equal to the number of derivatives with respect to space-time coordinates. \medskip We now discuss the restrictions to be imposed on the spin values $s^{\scriptscriptstyle (1)}$, $s^{\scriptscriptstyle (2)}$, $s^{\scriptscriptstyle (3)}$ and the integer $k$. The powers of the forms $B^{\scriptscriptstyle (a)}$ and $Z$ in \rf{0006} must be non-negative integers. For this to be the case, it is necessary to impose the following restrictions on the allowed spin values $s^{\scriptscriptstyle (1)}$, $s^{\scriptscriptstyle (2)}$, $s^{\scriptscriptstyle (3)}$ and the number of powers of the momentum $\mathbb{P}^I$ (the number of the derivatives): \begin{eqnarray} \label{0009} && k\leq {\bf s} \,,\qquad\quad {\bf s} - k \leq 2 s^{\scriptscriptstyle (a)} \,, \qquad a=1,2,3\,, \\[9pt] \label{00011} && {\bf s} - k \qquad\qquad \qquad \hbox{ even integer}\,.
\end{eqnarray} Restrictions \rf{0009} can be rewritten equivalently as \begin{equation}\label{00012} {\bf s} - 2s_{min} \leq k \leq {\bf s} \,, \quad \qquad s_{min}\equiv \min_{a=1,2,3} s^{\scriptscriptstyle (a)}\,. \end{equation} \bigskip A few remarks are in order. \bigskip {\bf i}) The restriction $k\leq {\bf s}$ in \rf{00012} expresses the fact that in the minimal scheme, which does not admit $\mathbb{P}^I\mathbb{P}^I$-terms, the total number of the transverse indices of fields that enter the cubic Hamiltonian $P_{\scriptscriptstyle [3]}^-$ cannot be less than the number of powers of the momentum $\mathbb{P}^I$ in the vertex. \bigskip {\bf ii}) If $k=2$, then the restriction ${\bf s} - 2s_{min} \leq 2$ in \rf{00012} is precisely the restriction that leaves no place for the gravitational interaction of massless higher spin fields ($s>2$). Indeed, the gravitational interaction of a massless spin $s$ field could be described by the vertex $p_{\scriptscriptstyle [3]}^-(s^{\scriptscriptstyle (1)},s^{\scriptscriptstyle (2)},s^{\scriptscriptstyle (3)};k)$ with $s^{\scriptscriptstyle (1)}=s^{\scriptscriptstyle (2)}=s>2$, $s^{\scriptscriptstyle (3)}=2$, $k=2$. For these values of $s^{\scriptscriptstyle (a)}$, we obtain $s_{min} = 2$, ${\bf s} = 2s+2$ and therefore restrictions \rf{00012} take the form \begin{equation} 2s - 2 \leq k \leq 2s+2 \,. \end{equation} On the one hand, these restrictions tell us that for $s>2$, the gravitational interaction, i.e. the case $k=2$, is not allowed. On the other hand, we see that all allowed cubic interaction vertices for the graviton and higher spin $s>2$ fields involve higher derivatives, $k>2$. \bigskip {\bf iii}) Restrictions \rf{00011}, \rf{00012} lead to a surprisingly simple result for the number of allowed parity invariant cubic interaction vertices $p_{\scriptscriptstyle [3]}^-(s^{\scriptscriptstyle (1)},s^{\scriptscriptstyle (2)},s^{\scriptscriptstyle (3)};k)$.
Indeed, we see from \rf{00011} and \rf{00012} that for spin values $s^{\scriptscriptstyle (1)}$, $s^{\scriptscriptstyle (2)}$, $s^{\scriptscriptstyle (3)}$, the integer $k$ takes the values \begin{equation}\label{kval01} k = {\bf s},\, {\bf s} - 2,\, {\bf s} - 4,\, \ldots\, , {\bf s} - 2s_{min}\,,\qquad \hbox{ for } \ \ d>4\,. \end{equation} This implies that, given spin values $s^{\scriptscriptstyle (1)}$, $s^{\scriptscriptstyle (2)}$, $s^{\scriptscriptstyle (3)}$, the number of parity invariant cubic interaction vertices $p_{\scriptscriptstyle [3]}^-(s^{\scriptscriptstyle (1)},s^{\scriptscriptstyle (2)},s^{\scriptscriptstyle (3)};k)$ that can be constructed is given by \begin{equation}\label{number01} {\sf N} (s^{\scriptscriptstyle (1)},s^{\scriptscriptstyle (2)},s^{\scriptscriptstyle (3)}) = s_{min} + 1\,, \qquad \hbox{ for } \ \ d>4 \,. \end{equation} \bigskip {\bf iv}) Vertices \rf{0006}, with $k$ in \rf{kval01}, constitute the complete list of vertices for $d>4$. For $d=4$, the number of allowed vertices is decreased. That is, if $d=4$, then for spin values $s^{\scriptscriptstyle (1)}$, $s^{\scriptscriptstyle (2)}$, $s^{\scriptscriptstyle (3)}$, the integer $k$ takes the values% \footnote{ For $d=4$, the vertices \rf{0006} with ${\bf s} > k > {\bf s} -2 s_{min}$ (see \rf{kval01}) are proportional to $\mathbb{P}^2$ (and can be removed by field redefinitions) or to $\alpha_a^I\alpha_a^I$ (and do not contribute to the Hamiltonian in view of tracelessness constraint \rf{intver16n8}). This can be demonstrated straightforwardly using the helicity formalism of Ref.\cite{Bengtsson:1983pd}.} \begin{equation} \label{kval02} k = {\bf s},\,\, {\bf s} - 2s_{min}\,,\qquad \hbox{ for } \ \ d=4.
\end{equation} This implies that for spin values $s^{\scriptscriptstyle (1)}$, $s^{\scriptscriptstyle (2)}$, $s^{\scriptscriptstyle (3)}$, the number of parity invariant cubic vertices that can be built for massless fields in four dimensions is equal to one if $s_{min}=0$ and two if $s_{min}\ne 0$% \footnote{ Values of $k$ \rf{kval02} explain the vanishing of the vertices $p_{\scriptscriptstyle [3]}^-(2,2,2;4)$ (see Table I) and $p_{\scriptscriptstyle [3]}^-(3,3,3;5)$ (see Table II) in $d=4$. The vanishing of the covariant counterpart of our light-cone vertex $p_{\scriptscriptstyle [3]}^-(3,3,3;5)$ in $d=4$ was discussed in Ref.\cite{Bekaert:2005jf}.}. \medskip {\bf v}) We comment on the relation of our vertices to the vertices for higher spin fields in $AdS$ space \cite{Fradkin:1987ks}-\cite{Vasiliev:2003ev}. Clearly, direct comparison of our vertices with AdS vertices is not possible for two reasons: 1) Our vertices are defined for fields in {\it flat space}; 2) Cubic vertices in $AdS$ are given in terms of some explicit {\it generating function}, but vertices for three fields with arbitrary (but fixed) spin values are still not available in the literature. Nevertheless, it seems likely that: 1) All our vertices \rf{0006},\rf{kval01} (and \rf{kval02} for $d=4$) allow a smooth extension to $AdS$ space and these vertices, being supplemented by appropriate cosmological constant dependent corrections, coincide with some $AdS$ vertices; 2) The remaining AdS vertices are singular in the flat space limit and do not have flat space counterparts. Formula \rf{0006} not only provides a surprisingly simple form for the vertices of massless higher spin fields but also gives a simple form for the vertices of the well-studied massless low spin $s=0,1,2$ fields. By way of example, we consider cubic vertices that describe the self-interaction of a spin $s$ field having $k=s$ powers of $\mathbb{P}^I$.
In the literature, such cubic vertices are referred to as Yang-Mills-like interaction vertices% \footnote{ Such vertices for spin $s>2$ fields in $4d$ flat space were built by using the light-cone approach in \cite{Bengtsson:1983pd}. Our formula \rf{YMHSver} gives an alternative simple expression for these vertices in $d=4$ and provides an extension to arbitrary $d>4$ dimensions on an equal footing.}. We consider vertices with $s^{\scriptscriptstyle (1)} = s^{\scriptscriptstyle (2)} = s^{\scriptscriptstyle (3)} = s$ and $k=s$ and formula \rf{0006} leads to \begin{equation} \label{YMHSver} p_{\scriptscriptstyle [3]}^-(s,s,s;s)=Z^s\,.\end{equation} For spin $s=1$ and $s=2$ fields, these vertices describe the respective cubic interaction vertices of Yang-Mills and Einstein theories, \begin{eqnarray} \label{YMcovL} && p_{\scriptscriptstyle [3]}^-(1,1,1;1)=Z \sim (F_{\mu\nu}F^{\mu\nu})_{\scriptscriptstyle [3]}, \\ \label{EcovL} && p_{\scriptscriptstyle [3]}^-(2,2,2;2)=Z^2 \sim (\sqrt{g}R)_{\scriptscriptstyle [3]}\,,\end{eqnarray} where the subscript $[3]$ of Yang-Mills and Einstein covariant Lagrangians is used to indicate the cubic vertices of these theories. Taking into account the complicated structure of the cubic vertices of Yang-Mills and Einstein theories in covariant approaches, we see that the light-cone gauge approach gives a simpler representation for such vertices. Another attractive property of the light-cone approach is that it allows treating the interaction vertices of Yang-Mills and Einstein theories on an equal footing. Formula \rf{0006} provides a convenient representation for other well-known parity invariant cubic interaction vertices of massless low spin fields. These vertices and their Lorentz covariant counterparts are collected in Table I. In Table II, we present light-cone vertices \rf{0006} involving higher spin fields whose Lorentz covariant descriptions are available in the literature. \noindent{\sf Table I.
Parity invariant cubic vertices for massless low spin $s=0,1,2$ fields. \small In the 3rd column, $\phi$ stands for the scalar field, $F_{\mu\nu}$ and $D_\mu$ stand for the respective Yang-Mills field strength and covariant derivative $D_\mu = \partial_\mu +A_\mu$, and $R_{\mu\nu\rho\sigma}$ stands for the Riemann tensor}% \footnote{ $\omega_\mu{}^{\nu\rho}$ and $R_{\mu\nu\rho\sigma}$ in covariant vertices (1,2,2;3), (1,2,2;5) stand for the linearized Lorentz connection, $\omega_\mu{}^{\nu\rho}= - \omega_\mu{}^{\rho\nu}$, and the Riemann tensor of the {\it charged} (w.r.t. the Yang-Mills gauge group) spin 2 field. These covariant vertices and the vertex (0,1,2;3) are invariant under linearized on-shell gauge transformations.} {\small \begin{center} \begin{tabular}{|l|c|c|} \hline & & \\ [-3mm] \ Spin values and & \ \ \ Light-cone \ \ \ & Covariant \\ number of derivatives & vertex & Lagrangian \\ \ \ \ $ s^{\scriptscriptstyle (1)} , s^{\scriptscriptstyle (2)} , s^{\scriptscriptstyle (3)} ;\, k $ & $p_{\scriptscriptstyle [3]}^-(s^{\scriptscriptstyle (1)},s^{\scriptscriptstyle (2)},s^{\scriptscriptstyle (3)};k)$ & \\ [2mm]\hline && \\[-3mm] \ \ \ \ \ \ $ 0,0,0;\, 0$ & $ 1 $ & $ \phi^3 $ \\[2mm]\hline && \\[-3mm] \ \ \ \ \ \ $ 0,0,1;\,1 $ & $ B^{\scriptscriptstyle (3)} $ & $( D_\mu\phi D^\mu \phi)_{\scriptscriptstyle [3]}$ \\[2mm]\hline && \\[-3mm] \ \ \ \ \ \ $ 0,0,2;\,2$ & $ (B^{\scriptscriptstyle (3)})^2 $ & $(\sqrt{g}g^{\mu\nu} \partial_\mu\phi\partial_\nu\phi)_{\scriptscriptstyle [3]}$ \\[2mm]\hline && \\[-3mm] \ \ \ \ \ \ $0,1,1;\, 2$ & $ B^{\scriptscriptstyle (2)} B^{\scriptscriptstyle (3)} $ & $(\phi F_{\mu\nu} F^{\mu\nu})_{\scriptscriptstyle [3]}$ \\[2mm]\hline && \\[-3mm] \ \ \ \ \ \ $ 0,1,2;\,3$ & $ B^{\scriptscriptstyle (2)} (B^{\scriptscriptstyle (3)})^2 $ & $(\partial^\mu \phi F_{\nu\rho}\omega_\mu{}^{\nu\rho})_{\scriptscriptstyle [3]}$ \\[2mm]\hline && \\[-3mm] \ \ \ \ \ \ $ 0,2,2;\,4$ & $ (B^{\scriptscriptstyle (2)} B^{\scriptscriptstyle (3)})^2 $ & 
$(\sqrt{g} \phi R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma})_{\scriptscriptstyle [3]}$ \\[2mm]\hline && \\[-3mm] \ \ \ \ \ \ $ 1,1,1;\,1$ & $ Z $ & $(F_{\mu\nu} F^{\mu\nu})_{\scriptscriptstyle [3]}$ \\[2mm]\hline && \\[-3mm] \ \ \ \ \ \ $ 1,1,1;\,3$ & $B^{\scriptscriptstyle (1)} B^{\scriptscriptstyle (2)} B^{\scriptscriptstyle (3)} $ & $(F_{\mu\nu} F^{\nu\rho} F_\rho{}^\mu)_{\scriptscriptstyle [3]}$ \\[2mm]\hline && \\[-3mm] \ \ \ \ \ \ $ 1,1,2;\,2 $ & $B^{\scriptscriptstyle (3)} Z$ & $(\sqrt{g}g^{\mu\rho}g^{\nu\sigma}F_{\mu\nu}F_{\rho\sigma})_{\scriptscriptstyle [3]}$ \\[2mm]\hline && \\[-3mm] \ \ \ \ \ \ $ 1,1,2;\,4 $ & $ B^{\scriptscriptstyle (1)} B^{\scriptscriptstyle (2)} (B^{\scriptscriptstyle (3)})^2$ & $(\sqrt{g}F_{\mu\nu}F_{\rho\sigma}R^{\mu\nu\rho\sigma})_{\scriptscriptstyle [3]}$ \\[2mm]\hline && \\[-3mm] \ \ \ \ \ \ $ 1,2,2;\,3 $ & $ B^{\scriptscriptstyle (2)} B^{\scriptscriptstyle (3)} Z $ & $F_{\mu\nu} (\omega^{\mu,\rho\sigma} \omega^\nu{}_{\rho\sigma} - \omega^{\rho,\sigma\mu} \omega_{\rho,\sigma}{}^\nu)_{\scriptscriptstyle [3]}$ \\[2mm]\hline && \\[-3mm] \ \ \ \ \ \ $ 1,2,2;\,5$ & $ B^{\scriptscriptstyle (1)} (B^{\scriptscriptstyle (2)})^2 (B^{\scriptscriptstyle (3)})^2$ & $(F^{\mu\nu} R_\mu{}^{\rho\sigma\lambda}R_{\nu\rho\sigma\lambda})_{\scriptscriptstyle [3]}$ \\[2mm]\hline && \\[-3mm] \ \ \ \ \ \ $ 2,2,2;\,2$ & $ Z^2 $ & $(\sqrt{g}R)_{\scriptscriptstyle [3]}$ \\[2mm]\hline && \\[-3mm] \ \ \ \ \ \ $ 2,2,2;\,4$ & $ B^{\scriptscriptstyle (1)} B^{\scriptscriptstyle (2)} B^{\scriptscriptstyle (3)} Z $ & $(\sqrt{g}R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma})_{\scriptscriptstyle [3]}$ \\[2mm]\hline && \\[-3mm] \ \ \ \ \ \ $ 2,2,2;\,6$ & $(B^{\scriptscriptstyle (1)} B^{\scriptscriptstyle (2)} B^{\scriptscriptstyle (3)})^2$ & $(\sqrt{g}R^{\mu\nu}_{\rho\sigma} R^{\rho\sigma}_{\lambda\tau} R^{\lambda\tau}_{\mu\nu})_{\scriptscriptstyle [3]}$ \\[2mm]\hline \end{tabular} \end{center} } \medskip We note that vertices with $k={\bf s}$ correspond to gauge theory 
cubic interaction vertices built entirely in terms of gauge field strengths% \footnote{Our result for the vertex $p_{\scriptscriptstyle [3]}^-(2,2,2;6)$ implies that there is only one Lorentz covariant $R_{....}^3$ vertex ($R_{....}$ is the Riemann tensor) that gives a non-trivial contribution to the 3-point scattering amplitude. In Ref.\cite{Metsaev:1986yb}, it was demonstrated that this is indeed the case.}. The vertices with $k<{\bf s}$ cannot be built entirely in terms of gauge field strengths. It is the vertices with $k<{\bf s}$ that are difficult to construct in Lorentz covariant approaches. The light-cone approach treats all vertices on an equal footing. We finish with a discussion of the completeness of solution \rf{0006}. Our solution \rf{0006} provides the complete list of parity invariant cubic vertices for the massless totally symmetric fields in $d \geq 4$ dimensions. In $d>6$ dimensions, $so(d-2)$-invariants constructed out of the antisymmetric Levi-Civita symbol $\epsilon^{I_1\ldots I_{d-2}}$, the oscillators $\alpha^{{\scriptscriptstyle (a)} I}$, and the momentum $\mathbb{P}^I$ are equal to zero, and therefore there are no parity violating cubic vertices for the massless totally symmetric fields. For $d=4,5,6$, there are nontrivial $so(d-2)$ invariants constructed out of the antisymmetric Levi-Civita symbol. It turns out that for $d=5$, these invariants allow building parity violating cubic vertices for the massless totally symmetric fields% \footnote{ The complete list of cubic vertices for massless fields in $4d$ was obtained in \cite{Bengtsson:1986kh}, and we do not discuss this case.}, while for $d=6$, solution \rf{0006} provides the complete list of cubic vertices for the massless totally symmetric fields (i.e. there are no parity violating cubic vertices for the massless totally symmetric fields in $d=6$).
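As an aside, the power counting behind Tables I and II can be checked mechanically: each linear form $B^{\scriptscriptstyle (a)}$ carries one power of $\mathbb{P}^I$ and one oscillator of sort $a$, while $Z$ carries one power of $\mathbb{P}^I$ and one oscillator of each sort. The following sketch (our illustration, not part of the formalism) rechecks the spin labels and derivative counts $k$ for the rows of Table I, and confirms that ${\bf s}-k$ equals twice the power of $Z$, so $k={\bf s}$ holds exactly for the $Z$-independent ("field strength") monomials.

```python
# Each light-cone monomial is encoded by the exponents (b1, b2, b3, z)
# of the forms B^(1), B^(2), B^(3) and Z.  B^(a) is degree 1 in P and in
# the sort-a oscillators; Z is degree 1 in P and in every oscillator sort.
def spins_and_k(b1, b2, b3, z):
    spins = (b1 + z, b2 + z, b3 + z)   # oscillator degrees s^(1..3)
    k = b1 + b2 + b3 + z               # total power of the momentum P
    return spins, k

# Rows of Table I: ((s1, s2, s3, k), (b1, b2, b3, z)).
table_I = [
    ((0, 0, 0, 0), (0, 0, 0, 0)),
    ((0, 0, 1, 1), (0, 0, 1, 0)),
    ((0, 0, 2, 2), (0, 0, 2, 0)),
    ((0, 1, 1, 2), (0, 1, 1, 0)),
    ((0, 1, 2, 3), (0, 1, 2, 0)),
    ((0, 2, 2, 4), (0, 2, 2, 0)),
    ((1, 1, 1, 1), (0, 0, 0, 1)),
    ((1, 1, 1, 3), (1, 1, 1, 0)),
    ((1, 1, 2, 2), (0, 0, 1, 1)),
    ((1, 1, 2, 4), (1, 1, 2, 0)),
    ((1, 2, 2, 3), (0, 1, 1, 1)),
    ((1, 2, 2, 5), (1, 2, 2, 0)),
    ((2, 2, 2, 2), (0, 0, 0, 2)),
    ((2, 2, 2, 4), (1, 1, 1, 1)),
    ((2, 2, 2, 6), (2, 2, 2, 0)),
]

for (s1, s2, s3, k), expo in table_I:
    spins, k_check = spins_and_k(*expo)
    assert spins == (s1, s2, s3) and k_check == k

# s1+s2+s3 - k = 2z, so k equals the total spin exactly when z = 0.
assert all((e[3] == 0) == (r[3] == sum(r[:3])) for r, e in table_I)
```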
The complete list of cubic vertices for massless fields in $d=5,6$ will be obtained in the respective Sections \ref{5dtheor} and \ref{6dtheor}% \footnote{ The complete list of cubic vertices for massless fields in $d=5,6$ was announced in Refs.\cite{Metsaev:1993gx,Metsaev:1993mj}. In Refs.\cite{Metsaev:1993gx,Metsaev:1993mj} we mistakenly thought that our solution \rf{0006} provides the complete list of cubic vertices for massless fields in $d=5$.}. \medskip \noindent{\sf Table II. Parity invariant cubic interaction vertices for massless higher spin fields.} {\small \begin{center} \begin{tabular}{|l|c|c|} \hline & & \\ [-3mm]\ Spin values and & \ \ \ Light-cone \ \ \ & Covariant \\ number of derivatives & vertex & Lagrangian \\ \ \ \ $ s^{\scriptscriptstyle (1)},s^{\scriptscriptstyle (2)},s^{\scriptscriptstyle (3)};\,k$ & $p_{\scriptscriptstyle [3]}^-(s^{\scriptscriptstyle (1)},s^{\scriptscriptstyle (2)},s^{\scriptscriptstyle (3)};k)$ & \\[1mm] \hline && \\[-3mm] \ \ \ \ \ \ $ 2,2,4;\,4 $ & $ (B^{\scriptscriptstyle (3)})^2 Z^2 $ & ${\cal L}(\hbox{see Ref.\cite{Deser:1990bk}})$ \\[2mm]\hline && \\[-3mm] \ \ \ \ \ \ $ 3,3,3;\,3 $ & $Z^3 $ & ${\cal L}(\hbox{see Ref}.\cite{Berends:1984wp})$ \\[2mm]\hline && \\[-3mm] \ \ \ \ \ \ $ 3,3,3;\,5 $ & $B^{\scriptscriptstyle (1)} B^{\scriptscriptstyle (2)} B^{\scriptscriptstyle (3)} Z^2 $ & ${\cal L}(\hbox{see Ref}.\cite{Bekaert:2005jf})$ \\[2mm]\hline && \\[-3mm] \ \ \ \ \ \ $ s,s,s';\,k' $ & $(B^{\scriptscriptstyle (1)} B^{\scriptscriptstyle (2)})^{s-s_{min}} (B^{\scriptscriptstyle (3)})^{s'-s_{min}} Z^{s_{min}} $ & ${\cal L}(\hbox{see Refs.\cite{Berends:1984rq,Berends:1985xx}})$ \\[1mm] && \\[-3mm] $ k'= 2s+s' -2s_{min} $ & & \\ && \\[-3mm] $ s_{min} \equiv \min (s,s') $ & & \\[2mm]\hline \end{tabular} \end{center} } \newsection{Parity invariant cubic interaction vertices for massless and massive fields}\label{secMMO} We now study cubic interaction vertices for massless and massive fields.
We consider cubic vertices for one massive field and two massless fields and cubic vertices for one massless field and two massive fields. In other words, we consider vertices for fields with the following mass values: \begin{eqnarray} && {\rm m}_1 = {\rm m}_2 = 0,\qquad {\rm m}_3 \ne 0\,; \\ && {\rm m}_1 = {\rm m}_2 \equiv {\rm m} \ne 0 ,\qquad {\rm m}_3 = 0\,; \\ && {\rm m}_1 \ne 0 ,\qquad {\rm m}_2\ne 0,\qquad {\rm m}_1 \ne {\rm m}_2, \qquad {\rm m}_3= 0\,. \end{eqnarray} We study these cases in turn. \subsection{ Cubic interaction vertices for two massless and one massive fields} We start with the cubic interaction vertex \rf{varrep8} for three fields with the mass values \begin{equation}\label{00m1} {\rm m}_1 = {\rm m}_2 = 0,\qquad {\rm m}_3 \ne 0\,, \end{equation} i.e. the {\it massless} fields carry external line indices $a=1,2$, while the {\it massive} field corresponds to $a=3$. Equations for the vertex involving two massless fields can be obtained from Eqs.\rf{loc1} in the limit as ${\rm m}_1 \rightarrow 0 $, ${\rm m}_2 \rightarrow 0 $. 
The general solution for vertex \rf{varrep8} is then found to be (see Appendix D) \begin{equation}\label{00m2} p_{\scriptscriptstyle [3]}^- = p_{\scriptscriptstyle [3]}^-( B_n^{\scriptscriptstyle (3)};\, Q_{mn}^{\scriptscriptstyle (aa+1)},\, \alpha_{mn}^{\scriptscriptstyle (11)},\, \alpha_{mn}^{\scriptscriptstyle (22)},\, Q_{mn}^{\scriptscriptstyle (33)})\,, \end{equation} where we use the notation \footnote{ We recall that the short notation like $p_{\scriptscriptstyle [3]}^-(Q^{\scriptscriptstyle (aa+1)})$ is used to indicate a dependence of $p_{\scriptscriptstyle [3]}^-$ on $Q^{\scriptscriptstyle (12)}$, $Q^{\scriptscriptstyle (23)}$, $Q^{\scriptscriptstyle (31)}$.} \begin{eqnarray} && \label{00m6} B_n^{\scriptscriptstyle (a)} \equiv \frac{\alpha_n^{{\scriptscriptstyle (a)} I} \mathbb{P}^I}{\beta_a}\,,\quad a=1,2;\qquad \ \ \ B_n^{\scriptscriptstyle (3)}\equiv \frac{\alpha_n^{{\scriptscriptstyle (3)} I}\mathbb{P}^I}{\beta_3}+ \frac{\check{\beta}_3}{2\beta_3} {\rm m}_3 \alpha_n^{\scriptscriptstyle (3)}\,,\end{eqnarray} \begin{eqnarray} \label{00m3} && Q_{mn}^{\scriptscriptstyle (12)} \equiv \alpha_{mn}^{\scriptscriptstyle (12)} - \frac{2}{{\rm m}_3^2} B_m^{\scriptscriptstyle (1)} B_n^{\scriptscriptstyle (2)} \,, \\ \label{00m4}&& Q_{mn}^{\scriptscriptstyle (23)} \equiv \alpha_{mn}^{\scriptscriptstyle (23)} +\frac{\alpha_n^{\scriptscriptstyle (3)}}{{\rm m}_3} B_m^{\scriptscriptstyle (2)} + \frac{2}{{\rm m}_3^2} B_m^{\scriptscriptstyle (2)} B_n^{\scriptscriptstyle (3)}\,, \\ \label{00m5}&& Q_{mn}^{\scriptscriptstyle (31)} \equiv \alpha_{mn}^{\scriptscriptstyle (31)} - \frac{\alpha_m^{\scriptscriptstyle (3)}}{{\rm m}_3} B_n^{\scriptscriptstyle (1)} + \frac{2}{{\rm m}_3^2} B_m^{\scriptscriptstyle (3)} B_n^{\scriptscriptstyle (1)}\,, \end{eqnarray} and $\alpha_{mn}^{\scriptscriptstyle (ab)}$, $Q_{mn}^{\scriptscriptstyle (aa)}$ are defined in \rf{amnabdef}, \rf{Qmnaadef}. 
This solution describes cubic interaction vertices for both totally symmetric and mixed-symmetry fields. We note that all forms in \rf{00m2} that depend on $\mathbb{P}^I$ (the linear forms $B_m^{\scriptscriptstyle (3)}$ and the quadratic forms $Q_{mn}^{\scriptscriptstyle (aa+1)}$) are non-homogeneous polynomials in $\mathbb{P}^I$. Therefore, as seen from \rf{00m2}-\rf{00m5}, it is not possible to construct a cubic vertex that is a homogeneous polynomial in $\mathbb{P}^I$. In other words, the dependence on the linear forms $B_m^{\scriptscriptstyle (3)}$ and the quadratic forms $Q_{mn}^{\scriptscriptstyle (aa+1)}$ leads to cubic vertices that are non-homogeneous polynomials in $\mathbb{P}^I$. The appearance of massive field interaction vertices involving different powers of derivatives is a well-known fact (see e.g. \cite{GR,Ferrara:1992yc}). Thus, we see that the light-cone formalism gives a very simple explanation of this phenomenon by means of the linear forms $B_m^{\scriptscriptstyle (3)}$ and the quadratic forms $Q_{mn}^{\scriptscriptstyle (aa+1)}$. To understand the remaining characteristic properties of solution \rf{00m2}, we consider vertices for the totally symmetric fields. \subsubsection{ Cubic interaction vertices for totally symmetric fields} In this section, we restrict ourselves to cubic interaction vertices for two massless totally symmetric fields and one massive totally symmetric field. To consider the totally symmetric fields, it is sufficient to use one sort of oscillators, and we set $\nu = 1$ in \rf{00m2}-\rf{00m5}. To simplify the formulas, we drop the oscillator's subscript $n=1$ and use the simplified notation for oscillators: $\alpha^I \equiv \alpha_1^I$, $\alpha \equiv \alpha_1$.
The cubic interaction vertex for the totally symmetric fields under consideration can then be obtained from the general solution \rf{00m2} by making the identifications \begin{eqnarray} \label{00m7} && \alpha^{{\scriptscriptstyle (a)} I} \equiv \alpha_1^{{\scriptscriptstyle (a)} I}, \quad \ a =1,2\,; \qquad \alpha^{{\scriptscriptstyle (3)} I} \equiv \alpha_1^{{\scriptscriptstyle (3)} I}\,, \qquad \alpha^{\scriptscriptstyle (3)} \equiv \alpha_1^{\scriptscriptstyle (3)}\,, \end{eqnarray} in \rf{00m2}-\rf{00m5} and ignoring the contribution of oscillators carrying a subscript $n>1$. Adopting the simplified notation \rf{00m7} for forms \rf{00m6}-\rf{00m5}: \begin{equation} \label{00m9} B^{\scriptscriptstyle (a)} \equiv B_1^{\scriptscriptstyle (a)}\,, \qquad Q^{\scriptscriptstyle (ab)} \equiv Q_{11}^{\scriptscriptstyle (ab)}\,, \qquad \alpha^{\scriptscriptstyle (ab)} \equiv \alpha_{11}^{\scriptscriptstyle (ab)}\,, \end{equation} we see that vertex \rf{00m2} takes the form \begin{equation} \label{00m10} p_{\scriptscriptstyle [3]}^- = p_{\scriptscriptstyle [3]}^-(B^{\scriptscriptstyle (3)};\, Q^{\scriptscriptstyle (aa+1)},\, \alpha^{\scriptscriptstyle (11)},\, \alpha^{\scriptscriptstyle (22)},\, Q^{\scriptscriptstyle (33)})\,.\end{equation} Vertex \rf{00m10} describes the interaction of the towers of massive and massless fields \rf{intver16n9}, \rf{intver16n10}. We now obtain the vertex for two massless totally symmetric spin $s^{\scriptscriptstyle (1)}$, $s^{\scriptscriptstyle (2)}$ fields and one massive totally symmetric spin $s^{\scriptscriptstyle (3)}$ field. The massless totally symmetric spin $s^{\scriptscriptstyle (1)}$ and $s^{\scriptscriptstyle (2)}$ fields are described by the respective ket-vectors $|\phi_{s^{\scriptscriptstyle (1)} }^{{\rm m}_1=0}\rangle$ and $|\phi_{s^{\scriptscriptstyle (2)} }^{{\rm m}_2=0}\rangle$, while the massive totally symmetric spin $s^{\scriptscriptstyle (3)}$ field is described by a ket-vector $|\phi_{s^{\scriptscriptstyle (3)} }\rangle$.
The ket-vectors of massless fields $|\phi_{s^{\scriptscriptstyle (a)} }^{{\rm m}_a=0}\rangle$, $a=1,2$, can be obtained from \rf{intver16n5} by the replacement $s\rightarrow s^{\scriptscriptstyle (a)} $, $\alpha^I\rightarrow \alpha^{{\scriptscriptstyle (a)} I}$, $a=1,2$, in \rf{intver16n5}, while the ket-vector of the massive field $|\phi_{s^{\scriptscriptstyle (3)} }\rangle$ can be obtained from \rf{intver16n4} by the replacement $s\rightarrow s^{\scriptscriptstyle (3)} $, $\alpha^I\rightarrow \alpha^{{\scriptscriptstyle (3)} I}$, $\alpha\rightarrow \alpha^{\scriptscriptstyle (3)}$ in \rf{intver16n4}. Taking into account that the ket-vectors $|\phi_{s^{\scriptscriptstyle (a)} }^{{\rm m}_a=0}\rangle$, $a=1,2$, are the respective degree $s^{\scriptscriptstyle (a)} $ homogeneous polynomials in the oscillators $\alpha^{{\scriptscriptstyle (a)} I}$ (see \rf{intver16n7}), while the ket-vector $|\phi_{s^{\scriptscriptstyle (3)} }\rangle$ is a degree $s^{\scriptscriptstyle (3)}$ homogeneous polynomial in the oscillators $\alpha^{{\scriptscriptstyle (3)} I}$, $\alpha^{\scriptscriptstyle (3)}$ (see \rf{intver16n6}) it is easy to understand that the vertex we are interested in must satisfy the equations \begin{eqnarray} && \label{00m11} (\alpha^{{\scriptscriptstyle (a)} I}\bar\alpha^{{\scriptscriptstyle (a)} I} - s^{\scriptscriptstyle (a)} ) |p_{\scriptscriptstyle [3]}^-\rangle = 0\,,\qquad a=1,2, \\[3pt] && \label{00m12} (\alpha^{{\scriptscriptstyle (3)} I} \bar\alpha^{{\scriptscriptstyle (3)} I} + \alpha^{\scriptscriptstyle (3)}\bar\alpha^{\scriptscriptstyle (3)} - s^{\scriptscriptstyle (3)} ) |p_{\scriptscriptstyle [3]}^-\rangle = 0\,. \end{eqnarray} These equations tell us that the vertex must be a degree $s^{\scriptscriptstyle (a)} $ homogeneous polynomial in the respective oscillators. 
Taking into account that the forms $B^{\scriptscriptstyle (3)}$ and $Q^{\scriptscriptstyle (aa+1)}$ are the respective degree 1 and 2 homogeneous polynomials in the oscillators, we find the general solution of Eqs.\rf{00m11}, \rf{00m12} as% \footnote{ We ignore the contribution of $\alpha^{\scriptscriptstyle (11)}$-, $\alpha^{\scriptscriptstyle (22)}$-, $Q^{\scriptscriptstyle (33)}$-terms of \rf{00m10} to vertex \rf{intver30}. Because of the tracelessness constraints \rf{intver16n8}, the contribution of these terms to the Hamiltonian $P_{\scriptscriptstyle [3]}^-$ \rf{pm1} vanishes.} \begin{equation} \label{intver30} p_{\scriptscriptstyle [3]}^-(s^{\scriptscriptstyle (1)},s^{\scriptscriptstyle (2)},s^{\scriptscriptstyle (3)};x) = (B^{\scriptscriptstyle (3)})^x \prod_{a=1}^3 (Q^{\scriptscriptstyle (aa+1)})^{y^{\scriptscriptstyle (a+2)} } \,, \end{equation} where the integers $y^{\scriptscriptstyle (a)} $ are expressible in terms of the $s^{\scriptscriptstyle (a)} $ and an integer $x$ by the relations \begin{eqnarray} \label{yexp01}&& y^{\scriptscriptstyle (a)} = \frac{{\bf s} - x}{2} -s^{\scriptscriptstyle (a)} \,,\qquad a=1,2\,, \\ \label{yexp03} && y^{\scriptscriptstyle (3)} = \frac{{\bf s} + x}{2} -s^{\scriptscriptstyle (3)} \,. \end{eqnarray} The integer $x$ expresses the freedom of the solution and labels all possible cubic interaction vertices that can be constructed for the fields under consideration. For vertex \rf{intver30} to be sensible, we impose the restrictions \begin{eqnarray} \label{restr01}&& x\geq 0\,,\qquad y^{\scriptscriptstyle (a)} \geq 0\,, \quad a=1,2,3; \\ \label{restr03} && {\bf s} - x \qquad \hbox{ even integer}\,,\end{eqnarray} which amount to the requirement that the powers of all forms in \rf{intver30} be non-negative integers.
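The restrictions \rf{restr01}, \rf{restr03} are easy to scan by brute force. A small sketch (our illustration, using only \rf{yexp01}, \rf{yexp03}) lists the allowed values of $x$ for given spins: for example, two massless spin 1 fields and a massive spin 2 field admit the two vertices $x=0,2$, while two massless scalars and a massive spin 3 field admit a unique vertex with $x=3$.

```python
def allowed_x(s1, s2, s3):
    """Allowed values of x in the vertex (B3)^x (Q12)^{y3} (Q23)^{y1}
    (Q31)^{y2}: y1 = (S-x)/2 - s1, y2 = (S-x)/2 - s2,
    y3 = (S+x)/2 - s3 with S = s1+s2+s3; all powers must be
    non-negative integers, which forces S - x to be even."""
    S = s1 + s2 + s3
    xs = []
    for x in range(S + 1):
        if (S - x) % 2:
            continue                       # S - x must be even
        y1 = (S - x) // 2 - s1
        y2 = (S - x) // 2 - s2
        y3 = (S + x) // 2 - s3
        if min(y1, y2, y3) >= 0:
            xs.append(x)
    return xs

assert allowed_x(1, 1, 2) == [0, 2]   # two allowed vertices
assert allowed_x(0, 0, 3) == [3]      # unique vertex, x = S = 3
```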
We note that using relations \rf{yexp01}, \rf{yexp03} allows rewriting the restrictions \rf{restr01} as% \footnote{ If $x=0$, then restrictions \rf{restr03NNN1} become the restrictions well known from angular momentum theory: $ |s^{\scriptscriptstyle (1)} - s^{\scriptscriptstyle (2)}| \leq s^{\scriptscriptstyle (3)} \leq s^{\scriptscriptstyle (1)} + s^{\scriptscriptstyle (2)}$.} \begin{equation} \label{restr03NNN1} \max(0, s^{\scriptscriptstyle (3)} - s^{\scriptscriptstyle (1)} - s^{\scriptscriptstyle (2)}) \leq x \leq s^{\scriptscriptstyle (3)} - |s^{\scriptscriptstyle (1)} - s^{\scriptscriptstyle (2)}|\,. \end{equation} Compared to the vertex for three massless fields \rf{0006}, vertex \rf{intver30} is a non-homogeneous polynomial in $\mathbb{P}^I$. An interesting property of vertex \rf{intver30} is that the maximal number of powers of the momentum $\mathbb{P}^I$, denoted by $k_{max}$, is independent of $x$ and is determined only by ${\bf s}$,% \footnote{ Expressions for $B^{\scriptscriptstyle (3)}$ and $Q^{\scriptscriptstyle (aa+1)}$ \rf{00m6}-\rf{00m5} imply that $k_{max} = x + 2\sum_{a=1}^3 y^{\scriptscriptstyle (a)}$. Taking expressions for $y^{\scriptscriptstyle (a)} $ \rf{yexp01}, \rf{yexp03} into account, we find \rf{kmaxN1}.} \begin{equation} \label{kmaxN1} k_{max}= {\bf s}\,.\end{equation} \subsection{ Cubic interaction vertices for one massless and two massive fields with the same mass values}\label{equalmasses} The case under consideration is especially interesting because it involves the minimal Yang-Mills and gravitational interactions of massive arbitrary spin fields as particular cases. We now consider the cubic interaction vertex \rf{varrep8} for one massless field and two massive fields with the same mass values, \begin{equation}\label{eqmas00007NN1} {\rm m}_1 = {\rm m}_2 \equiv {\rm m} \ne 0 ,\qquad {\rm m}_3 = 0\,, \end{equation} i.e. the {\it massive} fields carry external line indices $a=1,2$, while the {\it massless} field corresponds to $a=3$.
The analysis of equations for the vertex is straightforward and the general solution is found to be (see Appendix D) \begin{equation} \label{intvereqmas01} p_{\scriptscriptstyle [3]}^- = p_{\scriptscriptstyle [3]}^-(L_n^{\scriptscriptstyle (1)}, L_n^{\scriptscriptstyle (2)}, B_n^{\scriptscriptstyle (3)};\, Q_{mn}^{\scriptscriptstyle (12)}, Q_{mn}^{\scriptscriptstyle (11)}, Q_{mn}^{\scriptscriptstyle (22)}, \alpha_{mn}^{\scriptscriptstyle (33)}\,;\, Z_{mnq})\,, \end{equation} where we use the notation \begin{eqnarray} \label{eqmas00007} && L_n^{\scriptscriptstyle (1)} \equiv B_n^{\scriptscriptstyle (1)} + \frac{1}{2}{\rm m} \alpha_n^{\scriptscriptstyle (1)}\,,\qquad\quad L_n^{\scriptscriptstyle (2)} \equiv B_n^{\scriptscriptstyle (2)} - \frac{1}{2}{\rm m} \alpha_n^{\scriptscriptstyle (2)}\,, \\ \label{eqmas00009} && B_n^{\scriptscriptstyle (a)} \equiv \frac{\alpha_n^{{\scriptscriptstyle (a)} I}\mathbb{P}^I}{\beta_a}+ \frac{\check{\beta}_a}{2\beta_a} {\rm m} \alpha_n^{\scriptscriptstyle (a)}\,,\qquad a=1,2; \\ \label{eqmas00010} && B_n^{\scriptscriptstyle (3)}\equiv \frac{\alpha_n^{{\scriptscriptstyle (3)} I}\mathbb{P}^I}{\beta_3}\,, \\ \label{eqmas00006} && Q_{mn}^{\scriptscriptstyle (12)} \equiv \alpha_{mn}^{\scriptscriptstyle (12)} + \frac{\alpha_n^{\scriptscriptstyle (2)}}{{\rm m}} B_m^{\scriptscriptstyle (1)} - \frac{\alpha_m^{\scriptscriptstyle (1)}}{{\rm m}} B_n^{\scriptscriptstyle (2)}\,, \\[5pt] \label{eqmas00005} && Z_{mnq} \equiv L_m^{\scriptscriptstyle (1)} \alpha_{nq}^{\scriptscriptstyle (23)} + L_n^{\scriptscriptstyle (2)} \alpha_{qm}^{\scriptscriptstyle (31)} + B_q^{\scriptscriptstyle (3)} ( \alpha_{mn}^{\scriptscriptstyle (12)} - \alpha_m^{\scriptscriptstyle (1)}\alpha_n^{\scriptscriptstyle (2)} )\,, \end{eqnarray} and $\alpha_{mn}^{\scriptscriptstyle (ab)}$, $Q_{mn}^{\scriptscriptstyle (aa)}$ are defined in \rf{amnabdef}, \rf{Qmnaadef}. 
Thus, we see that vertex \rf{intvereqmas01} depends, among other things, on linear forms $B_n^{\scriptscriptstyle (3)}$ \rf{eqmas00010}, which are degree 1 {\it homogeneous} polynomials in the momentum $\mathbb{P}^I$. This implies that cubic interaction vertices that are homogeneous polynomials in $\mathbb{P}^I$ can be constructed for certain fields. This also implies that the minimal number of powers of $\mathbb{P}^I$ in \rf{intvereqmas01} is not equal to zero in general (for example, the dependence on $B_m^{\scriptscriptstyle (3)}$ leads to an increasing number of powers of the momentum $\mathbb{P}^I$). All the remaining forms that depend on the momentum $\mathbb{P}^I$ and enter the cubic vertex (the linear forms $L_n^{\scriptscriptstyle (1)}$, $L_n^{\scriptscriptstyle (2)}$ and the quadratic forms $Q_{mn}^{\scriptscriptstyle (12)}$) are non-homogeneous polynomials in $\mathbb{P}^I$. To discuss the remaining important properties of solution \rf{intvereqmas01} we restrict attention to cubic vertices for the totally symmetric fields. \subsubsection{Cubic interaction vertices for totally symmetric fields}\label{tsymMM0} In this section, we restrict ourselves to cubic interaction vertices for the totally symmetric fields with mass values given in \rf{eqmas00007NN1}. As usual, we use one sort of oscillators, i.e. we set $\nu = 1$ in \rf{intvereqmas01}-\rf{eqmas00005} and simplify formulas by dropping the oscillator's subscript $n=1$: $\alpha^I \equiv \alpha_1^I$, $\alpha \equiv \alpha_1$. 
The cubic interaction vertex for the totally symmetric fields under consideration can be obtained from the general solution \rf{intvereqmas01} by making the identifications \begin{equation} \label{mm014nn} \alpha^{{\scriptscriptstyle (a)} I} \equiv \alpha_1^{{\scriptscriptstyle (a)} I}, \qquad \alpha^{\scriptscriptstyle (a)} \equiv \alpha_1^{\scriptscriptstyle (a)}\,, \quad a =1,2; \qquad \alpha^{{\scriptscriptstyle (3)} I} \equiv \alpha_1^{{\scriptscriptstyle (3)} I} \,, \end{equation} in \rf{intvereqmas01}-\rf{eqmas00005} and ignoring the contribution of oscillators carrying a subscript $n>1$. Adopting the simplified notation \rf{mm014nn} for forms \rf{eqmas00007}-\rf{eqmas00005}, \begin{equation} \label{mm016nn} L^{\scriptscriptstyle (a)} \equiv L_1^{\scriptscriptstyle (a)}\,, \qquad B^{\scriptscriptstyle (a)} \equiv B_1^{\scriptscriptstyle (a)}\,, \qquad \alpha^{\scriptscriptstyle (ab)} \equiv \alpha_{11}^{\scriptscriptstyle (ab)}\,,\qquad Q^{\scriptscriptstyle (ab)} \equiv Q_{11}^{\scriptscriptstyle (ab)}\,, \qquad Z \equiv Z_{111}\,, \end{equation} we see that vertex \rf{intvereqmas01} takes the form \begin{equation} \label{intvereqmas01N1} p_{\scriptscriptstyle [3]}^- = p_{\scriptscriptstyle [3]}^-(L^{\scriptscriptstyle (1)}, L^{\scriptscriptstyle (2)}, B^{\scriptscriptstyle (3)};\, Q^{\scriptscriptstyle (12)}, Q^{\scriptscriptstyle (11)}, Q^{\scriptscriptstyle (22)}, \alpha^{\scriptscriptstyle (33)}\,;\, Z)\,. \end{equation} Vertex \rf{intvereqmas01N1} describes the interaction of the towers of massive and massless fields \rf{intver16n9}, \rf{intver16n10}. We next obtain the vertex for two massive totally symmetric spin $s^{\scriptscriptstyle (1)}$, $s^{\scriptscriptstyle (2)}$ fields and one massless totally symmetric spin $s^{\scriptscriptstyle (3)}$ field. 
Two massive totally symmetric spin $s^{\scriptscriptstyle (1)}$ and $s^{\scriptscriptstyle (2)}$ fields are described by the respective ket-vectors $|\phi_{s^{\scriptscriptstyle (1)} }\rangle$ and $|\phi_{s^{\scriptscriptstyle (2)} }\rangle$, while one massless totally symmetric spin $s^{\scriptscriptstyle (3)}$ field is described by a ket-vector $|\phi_{s^{\scriptscriptstyle (3)} }^{{\rm m}_3=0}\rangle$. The ket-vectors of massive fields $|\phi_{s^{\scriptscriptstyle (a)} }\rangle$, $a=1,2$, can be obtained from \rf{intver16n4} by the replacement $s\rightarrow s^{\scriptscriptstyle (a)} $, $\alpha^I\rightarrow \alpha^{{\scriptscriptstyle (a)} I}$, $\alpha\rightarrow \alpha^{\scriptscriptstyle (a)}$, $a=1,2$, in \rf{intver16n4}, while the ket-vector of massless field $|\phi_{s^{\scriptscriptstyle (3)} }^{{\rm m}_3=0}\rangle$ can be obtained from \rf{intver16n5} by the replacement $s\rightarrow s^{\scriptscriptstyle (3)} $, $\alpha^I\rightarrow \alpha^{{\scriptscriptstyle (3)} I}$ in \rf{intver16n5}. 
Taking into account that the ket-vectors $|\phi_{s^{\scriptscriptstyle (a)} }\rangle$, $a=1,2$, are the respective degree $s^{\scriptscriptstyle (a)} $ homogeneous polynomials in the oscillators $\alpha^{{\scriptscriptstyle (a)} I}$, $\alpha^{\scriptscriptstyle (a)} $ (see \rf{intver16n6}), while the ket-vector $|\phi_{s^{\scriptscriptstyle (3)} }^{{\rm m}_3=0}\rangle$ is a degree $s^{\scriptscriptstyle (3)}$ homogeneous polynomial in the oscillator $\alpha^{{\scriptscriptstyle (3)} I}$ (see \rf{intver16n7}) it is easy to understand that the vertex we are interested in must satisfy the equations \begin{eqnarray} && \label{mm018} (\alpha^{{\scriptscriptstyle (a)} I}\bar\alpha^{{\scriptscriptstyle (a)} I} +\alpha^{\scriptscriptstyle (a)} \bar\alpha^{\scriptscriptstyle (a)} - s^{\scriptscriptstyle (a)} ) |p_{\scriptscriptstyle [3]}^-\rangle = 0\,,\qquad a=1,2\,, \\[3pt] && \label{mm019} (\alpha^{{\scriptscriptstyle (3)} I} \bar\alpha^{{\scriptscriptstyle (3)} I} - s^{\scriptscriptstyle (3)} ) |p_{\scriptscriptstyle [3]}^-\rangle = 0\,. \end{eqnarray} These equations tell us that the vertex must be a degree $s^{\scriptscriptstyle (a)} $ homogeneous polynomial in the respective oscillators. Taking into account that the forms $L^{\scriptscriptstyle (1)}$, $L^{\scriptscriptstyle (2)}$, $B^{\scriptscriptstyle (3)}$ \rf{mm016nn} are degree 1 homogeneous polynomials in the oscillators, while the forms $Q^{\scriptscriptstyle (12)}$ and $Z$ \rf{mm016nn} are respective degree 2 and 3 homogeneous polynomials in the oscillators we find the general solution of Eqs.\rf{mm018}, \rf{mm019} as% \footnote{ We ignore the contribution of $Q^{\scriptscriptstyle (11)}$-, $Q^{\scriptscriptstyle (22)}$-, $\alpha^{\scriptscriptstyle (33)}$-terms of \rf{intvereqmas01N1} to vertex \rf{intvereqmass01}. 
Because of the tracelessness constraints \rf{intver16n8}, the contribution of these terms to the Hamiltonian $P_{\scriptscriptstyle [3]}^-$ \rf{pm1} vanishes.} \begin{equation} \label{intvereqmass01} p_{\scriptscriptstyle [3]}^-(s^{\scriptscriptstyle (1)},s^{\scriptscriptstyle (2)},s^{\scriptscriptstyle (3)}\,;\,k_{min},k_{max}) = (L^{\scriptscriptstyle (1)})^{x^{\scriptscriptstyle (1)} } (L^{\scriptscriptstyle (2)})^{x^{\scriptscriptstyle (2)} } (B^{\scriptscriptstyle (3)})^{x^{\scriptscriptstyle (3)} } (Q^{\scriptscriptstyle (12)})^{y^{\scriptscriptstyle (3)} } Z^y\,, \end{equation} where the parameters $x^{\scriptscriptstyle (1)} $, $x^{\scriptscriptstyle (2)} $, $x^{\scriptscriptstyle (3)} $, $y^{\scriptscriptstyle (3)} $, $y$ are given by \begin{eqnarray} \label{xadefmm0} && x^{\scriptscriptstyle (1)} = k_{max} - k_{min} - s^{\scriptscriptstyle (2)} \,, \\ && x^{\scriptscriptstyle (2)} = k_{max} - k_{min} - s^{\scriptscriptstyle (1)} \,, \\ && x^{\scriptscriptstyle (3)} = k_{min}\,, \\ && y^{\scriptscriptstyle (3)} = {\bf s} - 2s^{\scriptscriptstyle (3)} - k_{max} + 2 k_{min}\,, \\ \label{y3def} && y= s^{\scriptscriptstyle (3)} -k_{min}\,, \end{eqnarray} and ${\bf s}$ is defined in \rf{0007}. The new integers $k_{min}$ and $k_{max}$ in \rf{intvereqmass01}-\rf{y3def} parametrize the freedom of our solution. In general, vertex \rf{intvereqmass01} is a non-homogeneous polynomial in the momentum $\mathbb{P}^I$, and the integers $k_{min}$ and $k_{max}$ are the respective minimal and maximal numbers of powers of the momentum $\mathbb{P}^I$ in \rf{intvereqmass01}\! \footnote{ This can be checked by taking into account that the forms $L^{\scriptscriptstyle (1)}$, $L^{\scriptscriptstyle (2)}$, $Q^{\scriptscriptstyle (12)}$ and $Z$ are degree 1 polynomials in $\mathbb{P}^I$, while the form $B^{\scriptscriptstyle (3)}$ is a degree 1 homogeneous polynomial in $\mathbb{P}^I$ (see \rf{eqmas00007}-\rf{eqmas00005}).}.
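These statements about the spins and about $k_{min}$, $k_{max}$ amount to elementary bookkeeping: $L^{\scriptscriptstyle (1)}$, $L^{\scriptscriptstyle (2)}$, $Q^{\scriptscriptstyle (12)}$, $Z$ are degree 1 in $\mathbb{P}^I$ with nonvanishing $\mathbb{P}^I$-independent parts, while $B^{\scriptscriptstyle (3)}$ is homogeneous of degree 1. A small sketch (our cross-check, not part of the derivation) verifies that the exponents \rf{xadefmm0}-\rf{y3def} reproduce the prescribed oscillator degrees $s^{\scriptscriptstyle (a)}$ and the powers $k_{min}$, $k_{max}$.

```python
def exponents(s1, s2, s3, kmin, kmax):
    """Exponents (x1, x2, x3, y3, y) of L1, L2, B3, Q12, Z in
    eqs. (xadefmm0)-(y3def); here S = s1 + s2 + s3."""
    S = s1 + s2 + s3
    return (kmax - kmin - s2,              # x1
            kmax - kmin - s1,              # x2
            kmin,                          # x3
            S - 2 * s3 - kmax + 2 * kmin,  # y3
            s3 - kmin)                     # y

# Oscillator content: L1 -> sort 1; L2 -> sort 2; B3 -> sort 3;
# Q12 -> sorts 1 and 2; Z -> one oscillator of each sort.
for (s1, s2, s3) in [(2, 2, 1), (3, 1, 2), (5, 5, 2)]:
    for kmin in range(s3 + 1):
        for kmax in range(kmin + max(s1, s2),
                          s1 + s2 - s3 + 2 * kmin + 1):  # restrict04
            x1, x2, x3, y3, y = exponents(s1, s2, s3, kmin, kmax)
            assert min(x1, x2, x3, y3, y) >= 0
            assert x1 + y3 + y == s1 and x2 + y3 + y == s2  # spins 1, 2
            assert x3 + y == s3                             # spin 3
            assert x3 == kmin                       # minimal power of P
            assert x1 + x2 + x3 + y3 + y == kmax    # maximal power of P
```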
As noted above, the minimal number of powers of the momentum $\mathbb{P}^I$ is not equal to zero in general. For vertex \rf{intvereqmass01} to be sensible, we should impose the restrictions \begin{eqnarray} \label{restrict01}&& x^{\scriptscriptstyle (a)} \geq 0\,, \qquad a=1,2,3; \\[5pt] \label{restrict03} && y^{\scriptscriptstyle (3)} \geq 0\,,\qquad y \geq 0\,, \end{eqnarray} which amount to requiring the powers of all forms in \rf{intvereqmass01} to be non-negative integers. With \rf{xadefmm0}-\rf{y3def}, restrictions \rf{restrict01}, \rf{restrict03} can be rewritten in a more convenient form as \begin{equation} \label{restrict04} k_{min} + \max_{a=1,2} s^{\scriptscriptstyle (a)} \leq k_{max} \leq {\bf s}- 2s^{\scriptscriptstyle (3)} + 2 k_{min}\,, \end{equation} \begin{equation} \label{restrict05} 0\leq k_{min} \leq s^{\scriptscriptstyle (3)} \,.\end{equation} \subsubsection{ Minimal Yang-Mills interaction of massive totally symmetric arbitrary spin field} We now apply our results in Section \ref{tsymMM0} to the discussion of the minimal Yang-Mills interaction of the massive totally symmetric arbitrary spin field. We first present the list of {\it all} cubic vertices for the massive totally symmetric spin $s$ field interacting with the massless spin 1 field (Yang-Mills field). That is, we consider the vertices \rf{intvereqmass01} with the spin values \begin{equation} \label{intvereqmass02N1} s^{\scriptscriptstyle (1)} = s^{\scriptscriptstyle (2)} = s\,,\qquad s^{\scriptscriptstyle (3)} =1\,.\end{equation} Restriction \rf{restrict05} leads to two allowed values of $k_{min}$: $k_{min} =0,1$. Substituting these values of $k_{min}$ into \rf{restrict04}, we obtain two families of vertices \begin{eqnarray} \label{xxxN1N1} && k_{min} = 1 \,, \qquad s + 1 \leq \ k_{max} \leq \ 2s+1 \,,\qquad \ \ \ \quad s\geq 0\,; \\[3pt] \label{yyyN1N1} && k_{min} = 0 \,, \qquad \ \ \ s \ \ \leq \ \ k_{max} \leq \ \ \ 2s-1 \,,\qquad\quad \ \ s\geq 1\,.
\end{eqnarray} We now discuss those vertices from the list in \rf{xxxN1N1}, \rf{yyyN1N1} that correspond to the minimal Yang-Mills interaction of the massive arbitrary spin field. We consider various spin fields in turn. \noindent {\bf a}) Spin $s=0$ field. The vertices for the spin $s=0$ field fall in the family of vertices given in \rf{xxxN1N1}. Plugging $s=0$ in \rf{xxxN1N1} we obtain $k_{min}=k_{max}=1$ and therefore the cubic vertex of the minimal Yang-Mills interaction of the massive scalar field is a degree 1 homogeneous polynomial in derivatives. Relations \rf{intvereqmass01}-\rf{y3def} lead to the minimal Yang-Mills interaction of the massive scalar field \begin{equation}\label{minint001} p_{\scriptscriptstyle [3]}^-(0,0,1; 1, 1 ) = B^{\scriptscriptstyle (3)}\,.\end{equation} \noindent {\bf b}) Spin $s\geq 1$ field. All vertices given in \rf{xxxN1N1}, \rf{yyyN1N1} are candidates for the minimal Yang-Mills interaction of the spin $s\geq 1$ field. We therefore impose an additional requirement, which allows us to choose one suitable vertex: given spin $s$, we look for the vertex with the minimal value of $k_{max}$. It can be seen that such a vertex is given by \rf{yyyN1N1} with $k_{max} = s$. The choice of the vertex from \rf{yyyN1N1} implies $k_{min}=0$ and we obtain from \rf{intvereqmass01}-\rf{y3def} the minimal Yang-Mills interaction of the massive spin $s\geq 1$ field% \footnote{ A gauge invariant description of the electromagnetic interaction of the massive spin $s=2$ field was obtained in \cite{Klishevich:1997pd}. The application of the approach in \cite{Klishevich:1997pd} to the massive arbitrary spin $s$ field can be found in \cite{Klishevich:1998ng}. The derivation of the electromagnetic interaction of massive spin $s=2,3$ fields from string theory is given in \cite{Argyres:1989cu,Klishevich:1998sr}. 
In these references, the electromagnetic field is treated as an external (non-dynamical) field.}, \begin{equation} \label{minintss1} p_{\scriptscriptstyle [3]}^-(s,s,1; 0, s ) = (Q^{\scriptscriptstyle (12)})^{s-1} Z\,,\qquad s\geq 1\,.\end{equation} A few remarks are in order. i) The forms $B^{\scriptscriptstyle (3)}$ \rf{eqmas00007} and $Z$ \rf{eqmas00005} have a smooth massless limit (${\rm m} \rightarrow 0$). Therefore, the minimal Yang-Mills interactions of the massive low spin $s=0,1$ fields given in \rf{minint001}, \rf{minintss1} have a smooth massless limit, as they should. These interactions in the massless limit coincide with the respective interactions of the massless spin $s=0,1$ fields in Table I. ii) The form $Q^{\scriptscriptstyle (12)}$ \rf{eqmas00006} does not have a smooth massless limit (${\rm m} \rightarrow 0$). This implies that the minimal Yang-Mills interaction of the massive spin $s > 1$ field \rf{minintss1} does not admit a sensible massless limit; in the light-cone approach, it is the contribution of $Q^{\scriptscriptstyle (12)}$ that explains why the minimal Yang-Mills interaction of the massive spin $s>1$ field does not admit the massless limit. As expected, the minimal Yang-Mills interaction of the massive spin $s>1$ field \rf{minintss1} involves higher derivatives. The appearance of the higher derivatives in \rf{minintss1} can be seen from the expression for $Q^{\scriptscriptstyle (12)}$ \rf{eqmas00006}. \subsubsection{ Gravitational interaction of massive totally symmetric arbitrary spin field} We proceed with the discussion of the gravitational interaction of the massive totally symmetric arbitrary spin field. We first present the list of {\it all} cubic vertices for the massive totally symmetric spin $s$ field interacting with the massless spin 2 field.
That is, we consider vertices \rf{intvereqmass01} with the spin values \begin{equation} \label{intvereqmass02} s^{\scriptscriptstyle (1)} = s^{\scriptscriptstyle (2)} = s\,,\qquad s^{\scriptscriptstyle (3)} =2\,.\end{equation} Restrictions \rf{restrict05} lead to three allowed values of $k_{min}$: $k_{min} =0,1,2$. Plugging these values of $k_{min}$ in restrictions \rf{restrict04}, we obtain three families of vertices \begin{eqnarray} \label{intvereqmass03} && k_{min} = 2 \,, \qquad s + 2 \leq \ k_{max} \leq \ 2s+2 \,,\qquad \ \quad s\geq 0\,; \\[3pt] \label{intvereqmass04} && k_{min} = 1 \,, \qquad s+1 \leq \ k_{max} \leq \ \ \ 2s \,,\qquad\qquad \ \ s\geq 1\,; \\[3pt] \label{intvereqmass05} && k_{min} = 0 \,, \qquad \ \ \ s \ \ \ \ \leq \ k_{max} \ \leq \ 2s- 2 \,,\qquad \ \ \ \ s\geq 2\,. \end{eqnarray} We now discuss those vertices from the list given in \rf{intvereqmass03}-\rf{intvereqmass05} that correspond to the gravitational interaction of the massive arbitrary spin field. We consider various spin fields in turn. \noindent {\bf a}) Spin $s=0$ field. The gravitational interaction of the massive scalar field is given by \rf{intvereqmass03}. Plugging $s=0$ in \rf{intvereqmass03}, we obtain the well-known relation $k_{min}=k_{max}=2$, which tells us that the cubic vertex of the gravitational interaction of the massive scalar field is a degree 2 homogeneous polynomial in the derivatives. Formulas \rf{intvereqmass01}-\rf{y3def} lead to the gravitational interaction of the massive scalar field, \begin{equation} \label{intvereqmass06} p_{\scriptscriptstyle [3]}^-(0,0,2; 2, 2 ) = (B^{\scriptscriptstyle (3)})^2\,.\end{equation} \noindent {\bf b}) Spin $s=1$ field. The obvious candidates for the gravitational interaction vertices of the massive vector field are given in \rf{intvereqmass03}, \rf{intvereqmass04}. If $s=1$, then restrictions \rf{intvereqmass03} lead to $3\leq k_{max}\leq 4$, and therefore vertices \rf{intvereqmass03} involve higher derivatives.
But from the covariant approach, it is well known that the gravitational interaction of the massive vector field does not involve higher derivatives. We therefore restrict attention to the vertices given in \rf{intvereqmass04}. Plugging $s=1$ in \rf{intvereqmass04}, we obtain $k_{max}=2$. Formulas \rf{intvereqmass01}-\rf{y3def} then lead to the gravitational interaction of the massive vector field \begin{equation} \label{intvereqmass07} p_{\scriptscriptstyle [3]}^-(1,1,2; 1, 2 ) = B^{\scriptscriptstyle (3)} Z\,.\end{equation} \noindent {\bf c}) Spin $s\geq 2$ field. All vertices given in \rf{intvereqmass03}-\rf{intvereqmass05} are candidates for the gravitational interaction of the spin $s\geq 2$ field. We should impose some additional requirement that would allow us to choose one suitable vertex. Our additional requirement is that given a spin $s$, we look for the vertex with the minimal value of $k_{max}$. It can be seen that such a vertex is given by \rf{intvereqmass05} with $k_{max} = s$. We note that $k_{min}=0$ and relations \rf{intvereqmass01}-\rf{y3def} lead to the gravitational interaction of the massive spin $s\geq 2$ field, \begin{equation} \label{intvereqmass08} p_{\scriptscriptstyle [3]}^-(s,s,2; 0, s ) = (Q^{\scriptscriptstyle (12)})^{s-2} Z^2\,,\qquad s\geq 2\,.\end{equation} A few remarks are in order. i) Since the forms $B^{\scriptscriptstyle (3)}$ \rf{eqmas00007} and $Z$ \rf{eqmas00005} have a smooth massless limit (${\rm m} \rightarrow 0$), the gravitational interactions of the massive low spin $s=0,1,2$ fields \rf{intvereqmass06}-\rf{intvereqmass08} have a smooth massless limit, as they should. These gravitational interactions in the massless limit reduce to the corresponding interactions of the massless spin $s=0,1,2$ fields given in Table I.
ii) Since the form $Q^{\scriptscriptstyle (12)}$ \rf{eqmas00006} does not have a smooth massless limit (${\rm m} \rightarrow 0$), the gravitational interaction of the massive higher spin $s > 2$ field \rf{intvereqmass08} does not admit a sensible massless limit; it is the form $Q^{\scriptscriptstyle (12)}$ that explains why the gravitational interaction of the massive higher spin field does not admit the massless limit. Higher derivatives in the gravitational interaction of the massive higher spin field are related to the contribution of $Q^{\scriptscriptstyle (12)}$ \rf{eqmas00006}% \footnote{ Gauge invariant formulations of the gravitational interaction of massive fields are studied e.g. in \cite{Cucchieri:1994tx,Klishevich:1998wr}. An interesting discussion of various aspects of the massive spin 2 field in a gravitational background may be found in \cite{Buchbinder:1999ar,Buchbinder:2000fy}.}. \subsection{ Cubic interaction vertices for one massless and two massive fields with\\ different mass values} We now consider the cubic interaction vertex \rf{varrep8} for fields with the following mass values: \begin{equation} \label{mm08} {\rm m}_1 \ne 0 ,\qquad {\rm m}_2\ne 0,\qquad {\rm m}_1 \ne {\rm m}_2, \qquad {\rm m}_3= 0, \end{equation} i.e. the {\it massive} fields carry external line indices $a=1,2$, while the {\it massless} field corresponds to $a=3$. Equations for the vertex involving one massless field can be obtained from Eqs.\rf{loc1} in the limit as ${\rm m}_3 \rightarrow 0 $.
The general solution for vertex \rf{varrep8} then takes the form (see Appendix D) \begin{equation} \label{mm09ex01} p_{\scriptscriptstyle [3]}^- = p_{\scriptscriptstyle [3]}^-(L_n^{\scriptscriptstyle (1)}, L_n^{\scriptscriptstyle (2)};\, Q_{mn}^{\scriptscriptstyle (aa+1)},\, Q_{mn}^{\scriptscriptstyle (11)},\,Q_{mn}^{\scriptscriptstyle (22)},\,\alpha_{mn}^{\scriptscriptstyle (33)} )\,, \end{equation} where we use the notation \begin{equation} \label{mm012} L_n^{\scriptscriptstyle (1)} \equiv B_n^{\scriptscriptstyle (1)} + \frac{{\rm m}_2^2}{2{\rm m}_1}\alpha_n^{\scriptscriptstyle (1)}\,,\qquad L_n^{\scriptscriptstyle (2)} \equiv B_n^{\scriptscriptstyle (2)} - \frac{{\rm m}_1^2}{2{\rm m}_2}\alpha_n^{\scriptscriptstyle (2)}\,, \end{equation} \begin{eqnarray} && B_n^{\scriptscriptstyle (a)} \equiv \frac{\alpha_n^{{\scriptscriptstyle (a)} I}\mathbb{P}^I}{\beta_a}+ \frac{\check{\beta}_a}{2\beta_a} {\rm m}_a \alpha_n^{\scriptscriptstyle (a)}\,,\qquad a=1,2; \\ \label{mm013ex01}&& B_n^{\scriptscriptstyle (3)}\equiv \frac{\alpha_n^{{\scriptscriptstyle (3)} I}\mathbb{P}^I}{\beta_3}\,,\end{eqnarray} \vspace{-0.5cm} \begin{eqnarray} \label{mm09}&& Q_{mn}^{\scriptscriptstyle (12)} \equiv \alpha_{mn}^{\scriptscriptstyle (12)} + \frac{\alpha_n^{\scriptscriptstyle (2)}}{{\rm m}_2} B_m^{\scriptscriptstyle (1)} -\frac{\alpha_m^{\scriptscriptstyle (1)}}{{\rm m}_1} B_n^{\scriptscriptstyle (2)}\,, \\ \label{mm010}&& Q_{mn}^{\scriptscriptstyle (23)} \equiv \alpha_{mn}^{\scriptscriptstyle (23)} +\frac{{\rm m}_2\alpha_m^{\scriptscriptstyle (2)}}{{\rm m}_1^2 - {\rm m}_2^2} B_n^{\scriptscriptstyle (3)} - \frac{2}{{\rm m}_1^2 - {\rm m}_2^2} B_m^{\scriptscriptstyle (2)} B_n^{\scriptscriptstyle (3)}\,, \\ \label{mm011}&& Q_{mn}^{\scriptscriptstyle (31)} \equiv \alpha_{mn}^{\scriptscriptstyle (31)} +\frac{{\rm m}_1\alpha_n^{\scriptscriptstyle (1)}}{{\rm m}_1^2 - {\rm m}_2^2} B_m^{\scriptscriptstyle (3)} + \frac{2}{{\rm m}_1^2 - {\rm m}_2^2} B_m^{\scriptscriptstyle (3)} B_n^{\scriptscriptstyle 
(1)}\,, \end{eqnarray} and $\alpha_{mn}^{\scriptscriptstyle (ab)}$, $Q_{mn}^{\scriptscriptstyle (aa)}$ are defined in \rf{amnabdef}, \rf{Qmnaadef}. An interesting property of the solution obtained is the appearance of expressions like ${\rm m}_1^2 - {\rm m}_2^2$ in the denominators of the quadratic forms $Q^{\scriptscriptstyle (23)}$ \rf{mm010} and $Q^{\scriptscriptstyle (31)}$ \rf{mm011}; the forms $Q^{\scriptscriptstyle (23)}$, $Q^{\scriptscriptstyle (31)}$ are therefore singular as ${\rm m}_1\rightarrow {\rm m}_2$. For this reason, we considered the case of ${\rm m}_1 = {\rm m}_2$ separately in Section \ref{equalmasses}. As can be seen from \rf{mm09ex01}-\rf{mm011}, it is impossible to construct a cubic vertex that would be a homogeneous polynomial in the momentum $\mathbb{P}^I$. All forms that depend on $\mathbb{P}^I$ and enter the vertex (i.e. $L_n^{\scriptscriptstyle (1)}$, $L_n^{\scriptscriptstyle (2)}$, and $Q_{mn}^{\scriptscriptstyle (aa+1)}$) are non-homogeneous polynomials in $\mathbb{P}^I$. This implies that the cubic vertex is a non-homogeneous polynomial in $\mathbb{P}^I$ in general. To understand the remaining characteristic properties of solution \rf{mm09ex01}, we consider the vertices for the totally symmetric fields. \subsubsection{ Cubic interaction vertices for totally symmetric fields} \label{tsymMM0noneqmas} The discussion of cubic interaction vertices for two massive totally symmetric fields with different mass values and one massless totally symmetric field largely follows that in Section \ref{tsymMM0}. The cubic vertex we are interested in can be obtained from the general solution \rf{mm09ex01} by making identifications \rf{mm014nn} in \rf{mm09ex01}-\rf{mm011} and ignoring the contribution of oscillators carrying a subscript $n>1$. 
From \rf{mm09ex01}, adopting the simplified notation for forms \rf{mm012}-\rf{mm011}: \begin{equation} L^{\scriptscriptstyle (a)} \equiv L_1^{\scriptscriptstyle (a)}\,, \qquad B^{\scriptscriptstyle (a)} \equiv B_1^{\scriptscriptstyle (a)}\,, \qquad Q^{\scriptscriptstyle (ab)} \equiv Q_{11}^{\scriptscriptstyle (ab)}\,, \qquad \alpha^{\scriptscriptstyle (ab)} \equiv \alpha_{11}^{\scriptscriptstyle (ab)}\,, \end{equation} we obtain the vertex that describes the interaction of towers of massive and massless totally symmetric fields \begin{equation} \label{xxxnnn1} p_{\scriptscriptstyle [3]}^- = p_{\scriptscriptstyle [3]}^-(L^{\scriptscriptstyle (1)}, L^{\scriptscriptstyle (2)};\, Q^{\scriptscriptstyle (aa+1)},\, Q^{\scriptscriptstyle (11)},\,Q^{\scriptscriptstyle (22)},\,\alpha^{\scriptscriptstyle (33)} )\,. \end{equation} The vertices for two massive totally symmetric spin $s^{\scriptscriptstyle (1)}$, $s^{\scriptscriptstyle (2)}$ fields $|\phi_{s^{\scriptscriptstyle (1)} }\rangle$, $|\phi_{s^{\scriptscriptstyle (2)} }\rangle$ with different mass values and one massless totally symmetric spin $s^{\scriptscriptstyle (3)}$ field $|\phi_{s^{\scriptscriptstyle (3)} }^{{\rm m}_3=0}\rangle$ can be obtained by solving Eqs.\rf{mm018}, \rf{mm019} with $p_{\scriptscriptstyle [3]}^-$ given in \rf{xxxnnn1}. We then obtain the cubic vertex% \footnote{ We ignore the contribution of $Q^{\scriptscriptstyle (11)}$-, $Q^{\scriptscriptstyle (22)}$-, $\alpha^{\scriptscriptstyle (33)}$-terms of \rf{xxxnnn1} to vertex \rf{pint40}. 
Because of the tracelessness constraints \rf{intver16n8}, the contribution of these terms to the Hamiltonian $P_{\scriptscriptstyle [3]}^-$ \rf{pm1} vanishes.} \begin{equation}\label{pint40} p_{\scriptscriptstyle [3]}^-(s^{\scriptscriptstyle (1)},s^{\scriptscriptstyle (2)},s^{\scriptscriptstyle (3)}\,;\,x^{\scriptscriptstyle (1)} ,x^{\scriptscriptstyle (2)} ) = (L^{\scriptscriptstyle (1)})^{x^{\scriptscriptstyle (1)} } (L^{\scriptscriptstyle (2)})^{x^{\scriptscriptstyle (2)} } (Q^{\scriptscriptstyle (12)})^{y^{\scriptscriptstyle (3)} }(Q^{\scriptscriptstyle (23)})^{y^{\scriptscriptstyle (1)} }(Q^{\scriptscriptstyle (31)})^{y^{\scriptscriptstyle (2)} }\,, \end{equation} where the parameters $y^{\scriptscriptstyle (a)} $ are given by \begin{eqnarray} \label{mm014}&& y^{\scriptscriptstyle (1)} = \frac{1}{2}(s^{\scriptscriptstyle (2)} + s^{\scriptscriptstyle (3)} -s^{\scriptscriptstyle (1)} + x^{\scriptscriptstyle (1)} -x^{\scriptscriptstyle (2)} )\,, \\ \label{mm015}&& y^{\scriptscriptstyle (2)} = \frac{1}{2}(s^{\scriptscriptstyle (1)} + s^{\scriptscriptstyle (3)} -s^{\scriptscriptstyle (2)} - x^{\scriptscriptstyle (1)} + x^{\scriptscriptstyle (2)} )\,, \\ \label{mm016}&& y^{\scriptscriptstyle (3)} = \frac{1}{2}(s^{\scriptscriptstyle (1)} + s^{\scriptscriptstyle (2)} -s^{\scriptscriptstyle (3)} - x^{\scriptscriptstyle (1)} -x^{\scriptscriptstyle (2)} )\,. \end{eqnarray} Two integers $x^{\scriptscriptstyle (1)} $, $x^{\scriptscriptstyle (2)} $ are the freedom of our solution. For fixed spin values $s^{\scriptscriptstyle (1)}$, $s^{\scriptscriptstyle (2)}$, $s^{\scriptscriptstyle (3)}$, these integers label all possible cubic interaction vertices that can be built for the fields under consideration. 
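The counting encoded in \rf{mm014}-\rf{mm016} can be made explicit with a short enumeration. The following Python sketch is an illustration of ours (not part of the original text; function names are ad hoc): given the three spins, it lists the admissible pairs $(x^{\scriptscriptstyle (1)},x^{\scriptscriptstyle (2)})$, the only input being that all exponents be non-negative integers.

```python
# Illustration (ours): exponents y^(a) of Q^(12), Q^(23), Q^(31) in the cubic
# vertex for two massive fields with different masses and one massless field,
# and enumeration of the admissible pairs (x1, x2) for given spins.
from fractions import Fraction

def y_exponents(s1, s2, s3, x1, x2):
    y1 = Fraction(s2 + s3 - s1 + x1 - x2, 2)
    y2 = Fraction(s1 + s3 - s2 - x1 + x2, 2)
    y3 = Fraction(s1 + s2 - s3 - x1 - x2, 2)
    return y1, y2, y3

def k_max(s1, s2, s3, x1, x2):
    # maximal power of the momentum P carried by the vertex
    return Fraction(s1 + s2 + 3 * s3 + x1 + x2, 2)

def admissible_vertices(s1, s2, s3):
    """All (x1, x2) such that every exponent is a non-negative integer."""
    out = []
    for x1 in range(s1 + s2 + s3 + 1):
        for x2 in range(s1 + s2 + s3 + 1):
            ys = y_exponents(s1, s2, s3, x1, x2)
            if all(y >= 0 and y.denominator == 1 for y in ys):
                out.append((x1, x2))
    return out

# Example: s1 = s2 = 1, s3 = 2 admits a single vertex, x1 = x2 = 0,
# with (y1, y2, y3) = (1, 1, 0).
print(admissible_vertices(1, 1, 2))      # -> [(0, 0)]
print(k_max(1, 1, 2, 0, 0))              # -> 4
```

Note that integrality of $y^{\scriptscriptstyle (3)}$ automatically enforces the parity condition on ${\bf s} - x^{\scriptscriptstyle (1)} - x^{\scriptscriptstyle (2)}$, since the two differ by the even number $2s^{\scriptscriptstyle (3)}$.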
For vertex \rf{pint40} to be sensible we impose the restrictions \begin{eqnarray} \label{restr0101}&& y^{\scriptscriptstyle (a)} \geq 0\,,\qquad a=1,2,3; \\[4pt] \label{restr0202} && x^{\scriptscriptstyle (1)} \geq 0 \,,\qquad x^{\scriptscriptstyle (2)} \geq 0\,, \\[4pt] \label{restr0303} &&{\bf s} - x^{\scriptscriptstyle (1)} - x^{\scriptscriptstyle (2)} \qquad \hbox{ even integer}\,,\end{eqnarray} which amount to the requirement that the powers of all forms in \rf{pint40} be non--negative integers. The maximal number of powers of $\mathbb{P}^I$ in \rf{pint40}, which is denoted by $k_{max}$, is given by% \footnote{ Expressions for $L^{\scriptscriptstyle (a)}$ and $Q^{\scriptscriptstyle (aa+1)}$ \rf{mm012}-\rf{mm011} imply that $k_{max} = x^{\scriptscriptstyle (1)} + x^{\scriptscriptstyle (2)} +2y^{\scriptscriptstyle (1)} + 2 y^{\scriptscriptstyle (2)} + y^{\scriptscriptstyle (3)}$. Relations for $y^{\scriptscriptstyle (a)}$ \rf{mm014}-\rf{mm016} then lead to \rf{kmaxmm0N1}.} \begin{equation}\label{kmaxmm0N1} k_{max} = \frac{1}{2}(s^{\scriptscriptstyle (1)} + s^{\scriptscriptstyle (2)} + 3s^{\scriptscriptstyle (3)} + x^{\scriptscriptstyle (1)} + x^{\scriptscriptstyle (2)} )\,. 
\end{equation} We note that using \rf{mm014}-\rf{mm016} allows rewriting restrictions \rf{restr0101} in the equivalent form% \footnote{ If $x^{\scriptscriptstyle (1)} = x^{\scriptscriptstyle (2)}=0$, then restrictions \rf{mm017} become the restrictions well known in the angular momentum theory: $ |s^{\scriptscriptstyle (1)} - s^{\scriptscriptstyle (2)}| \leq s^{\scriptscriptstyle (3)} \leq s^{\scriptscriptstyle (1)} + s^{\scriptscriptstyle (2)}$.} \begin{equation} \label{mm017} |s^{\scriptscriptstyle (1)} -s^{\scriptscriptstyle (2)} -x^{\scriptscriptstyle (1)} +x^{\scriptscriptstyle (2)} | \leq s^{\scriptscriptstyle (3)} \leq s^{\scriptscriptstyle (1)} + s^{\scriptscriptstyle (2)} -x^{\scriptscriptstyle (1)} -x^{\scriptscriptstyle (2)} \,.\end{equation} \newsection{ Parity invariant cubic interaction vertices for massive fields }\label{secMMM} We finally consider the cubic interaction vertex \rf{varrep8} for three massive fields: \begin{equation} {\rm m}_1 \ne 0 ,\qquad {\rm m}_2 \ne 0,\qquad {\rm m}_3\ne 0. 
\end{equation} The general solution for vertex \rf{varrep8} is found to be (see Appendix D) \begin{equation}\label{mmm1} p_{\scriptscriptstyle [3]}^- = p_{\scriptscriptstyle [3]}^-(L_n^{\scriptscriptstyle (a)} ;\, Q_{mn}^{\scriptscriptstyle (aa+1)}, Q_{mn}^{\scriptscriptstyle (aa)}\,)\,, \end{equation} where we use the notation \begin{eqnarray} \label{mmm3} && L_n^{\scriptscriptstyle (a)}\equiv B_n^{\scriptscriptstyle (a)} + \frac{{\rm m}_{a+1}^2 - {\rm m}_{a+2}^2}{2 {\rm m}_a}\alpha_n^{\scriptscriptstyle (a)}\,,\qquad B_n^{\scriptscriptstyle (a)}\equiv \frac{\alpha_n^{{\scriptscriptstyle (a)} I}\mathbb{P}^I}{\beta_a}+ \frac{\check{\beta}_a}{2\beta_a}{\rm m}_a \alpha_n^{\scriptscriptstyle (a)}\,, \\[6pt] \label{mmm2} && Q_{mn}^{\scriptscriptstyle (aa+1)} \equiv \alpha_{mn}^{\scriptscriptstyle (aa+1)} + \frac{\alpha_n^{\scriptscriptstyle (a+1)} }{{\rm m}_{a+1}} B_m^{\scriptscriptstyle (a)} - \frac{\alpha_m^{\scriptscriptstyle (a)}}{{\rm m}_a} B_n^{\scriptscriptstyle (a+1)} - \frac{{\rm m}_{a+2}^2}{2{\rm m}_a {\rm m}_{a+1}}\alpha_m^{\scriptscriptstyle (a)} \alpha_n^{\scriptscriptstyle (a+1)}\,, \end{eqnarray} and $\alpha_{mn}^{\scriptscriptstyle (ab)}$, $Q_{mn}^{\scriptscriptstyle (aa)}$ are defined in \rf{amnabdef}, \rf{Qmnaadef}. From the expressions for the quadratic forms $Q_{mn}^{\scriptscriptstyle (aa+1)}$ \rf{mmm2}, it follows that the cubic vertex for massive fields is singular as ${\rm m}_a\rightarrow 0$, $a=1,2,3$. The remaining quadratic forms $Q_{mn}^{\scriptscriptstyle (aa)}$ do not contribute to the Hamiltonian when the ket-vectors $|\phi_a\rangle$ are restricted to be traceless. We note, however, that it is sometimes convenient to formulate interacting field theories in terms of ket-vectors that are not subjected to the tracelessness constraint. For example, the ket-vectors of the light-cone gauge string field theories are not subjected to the tracelessness constraint. We now restrict attention to vertices for the totally symmetric fields.
\subsection{ Cubic interaction vertices for totally symmetric fields} To obtain the cubic interaction vertex for the massive totally symmetric fields, we simply set $\nu = 1$ in relations \rf{mmm1}-\rf{mmm2}. To simplify the formulas, we drop the oscillator subscript $n=1$ and use the simplified notation $\alpha^I =\alpha_1^I$, $\alpha =\alpha_1$. The expression for the cubic vertex can then be obtained from the general solution \rf{mmm1} by using the identifications \begin{equation}\label{simnot01N1} \alpha^{{\scriptscriptstyle (a)} I} \equiv \alpha_1^{{\scriptscriptstyle (a)} I}\,, \qquad \alpha^{\scriptscriptstyle (a)} \equiv \alpha_1^{\scriptscriptstyle (a)}\,,\qquad a=1,2,3\,,\end{equation} in \rf{mmm1} and ignoring the contribution of oscillators carrying a subscript $n>1$. Adopting the simplified notation \rf{simnot01N1} for the linear forms $L^{\scriptscriptstyle (a)} \equiv L_1^{\scriptscriptstyle (a)}$, $B^{\scriptscriptstyle (a)} \equiv B_1^{\scriptscriptstyle (a)}$ \rf{mmm3}, and the quadratic forms $Q^{\scriptscriptstyle (ab)} \equiv Q_{11}^{\scriptscriptstyle (ab)} $ \rf{mmm2}, we see that vertex \rf{mmm1} takes the form \begin{equation}\label{mmmN2} p_{\scriptscriptstyle [3]}^- = p_{\scriptscriptstyle [3]}^-(L^{\scriptscriptstyle (a)} ;\, Q^{\scriptscriptstyle (aa+1)}, Q^{\scriptscriptstyle (aa)}\,)\,. \end{equation} Vertex \rf{mmmN2} describes the interaction of the towers of massive totally symmetric fields \rf{intver16n9}. We next obtain the vertex for massive totally symmetric spin $s^{\scriptscriptstyle (1)}$, $s^{\scriptscriptstyle (2)}$, $s^{\scriptscriptstyle (3)}$ fields. The massive totally symmetric spin $s^{\scriptscriptstyle (a)}$ fields are described by the respective ket-vectors $|\phi_{s^{\scriptscriptstyle (a)} }\rangle$.
The ket-vectors of massive fields $|\phi_{s^{\scriptscriptstyle (a)} }\rangle$, $a=1,2,3$, can be obtained from \rf{intver16n4} by replacement $s\rightarrow s^{\scriptscriptstyle (a)} $, $\alpha^I\rightarrow \alpha^{{\scriptscriptstyle (a)} I}$, $\alpha\rightarrow \alpha^{\scriptscriptstyle (a)}$ in \rf{intver16n4}. Because $|\phi_{s^{\scriptscriptstyle (a)} }\rangle$ are respective degree $s^{\scriptscriptstyle (a)} $ homogeneous polynomials in $\alpha^{{\scriptscriptstyle (a)} I}$, $\alpha^{\scriptscriptstyle (a)}$ (see \rf{intver16n6}), it is obvious that the vertex we are interested in must satisfy the equations \begin{equation} \label{mmmN3} (\alpha^{{\scriptscriptstyle (a)} I}\bar\alpha^{{\scriptscriptstyle (a)} I} + \alpha^{\scriptscriptstyle (a)}\bar\alpha^{\scriptscriptstyle (a)} - s^{\scriptscriptstyle (a)} )|p_{\scriptscriptstyle [3]}^-\rangle = 0\,,\qquad a=1,2,3, \end{equation} which tell us that the vertex $p_{\scriptscriptstyle [3]}^-$ must be a degree $s^{\scriptscriptstyle (a)} $ homogeneous polynomial in the oscillators $\alpha^{{\scriptscriptstyle (a)} I}$, $\alpha^{\scriptscriptstyle (a)}$. Taking into account that the forms $L^{\scriptscriptstyle (a)}$ and $Q^{\scriptscriptstyle (aa+1)}$ are respective degree 1 and 2 homogeneous polynomials in oscillators we obtain the general solution of Eqs.\rf{mmmN3} as% \footnote{ We ignore the contribution of $Q^{\scriptscriptstyle (aa)}$-terms of \rf{mmmN2} to vertex \rf{intvermmm01}. 
Because of the tracelessness constraint (see the first relation in \rf{intver16n8}) the contribution of these terms to the Hamiltonian $P_{\scriptscriptstyle [3]}^-$ \rf{pm1} vanishes.} \begin{equation}\label{intvermmm01} p_{\scriptscriptstyle [3]}^-(s^{\scriptscriptstyle (1)},s^{\scriptscriptstyle (2)},s^{\scriptscriptstyle (3)};x^{\scriptscriptstyle (1)},x^{\scriptscriptstyle (2)},x^{\scriptscriptstyle (3)}) = \prod_{a=1}^3 (L^{\scriptscriptstyle (a)})^{x^{\scriptscriptstyle (a)} }(Q^{\scriptscriptstyle (aa+1)})^{y^{\scriptscriptstyle (a+2)}}\,, \end{equation} where integers $y^{\scriptscriptstyle (a)} $ are expressible in terms of $s^{\scriptscriptstyle (a)} $ and three integers $x^{\scriptscriptstyle (a)}$ labeling the freedom of our solution, \begin{equation}\label{mmm12} y^{\scriptscriptstyle (a)} = \frac{1}{2}({\bf s} + x^{\scriptscriptstyle (a)} - x^{\scriptscriptstyle (a+1)} - x^{\scriptscriptstyle (a+2)}) -s^{\scriptscriptstyle (a)} \,, \qquad a=1,2,3\,,\end{equation} and ${\bf s}$ is given in \rf{0007}. The maximal number of powers of $\mathbb{P}^I$ in \rf{intvermmm01}, denoted by $k_{max}$, is given by% \footnote{ Expressions for $L^{\scriptscriptstyle (a)}$ and $Q^{\scriptscriptstyle (aa+1)}$ \rf{mmm3}, \rf{mmm2} imply that $k_{max} =\sum_{a=1}^3 (x^{\scriptscriptstyle (a)} + y^{\scriptscriptstyle (a)} )$. Taking $y^{\scriptscriptstyle (a)}$ \rf{mmm12} into account we then find \rf{mmmkmax}.} \begin{equation} \label{mmmkmax} k_{max} = \frac{1}{2}\bigl({\bf s} + \sum_{a=1}^3 x^{\scriptscriptstyle (a)}\bigr)\,.\end{equation} Requiring the powers of the forms $L^{\scriptscriptstyle (a)}$ and $Q^{\scriptscriptstyle (aa+1)}$ in \rf{intvermmm01} to be non--negative integers gives the restrictions \begin{eqnarray} \label{mmm13} && x^{\scriptscriptstyle (a)} \geq 0 \,,\qquad y^{\scriptscriptstyle (a)} \geq 0\,, \qquad a =1,2,3\,; \\ \label{mmm14} && {\bf s} + \sum_{a=1}^3 x^{\scriptscriptstyle (a)} \quad \hbox{ even integer}\,. 
\end{eqnarray} Using relations \rf{mmm12} allows rewriting restrictions \rf{mmm13} as% \footnote{ If $x^{\scriptscriptstyle (a)}=0$, $a=1,2,3$, then restrictions \rf{mmm15} become the restrictions well known in the angular momentum theory: $ |s^{\scriptscriptstyle (1)} - s^{\scriptscriptstyle (2)}| \leq s^{\scriptscriptstyle (3)} \leq s^{\scriptscriptstyle (1)} + s^{\scriptscriptstyle (2)}$.} \begin{equation} \label{mmm15} s^{\scriptscriptstyle (3)} - s^{\scriptscriptstyle (1)} - s^{\scriptscriptstyle (2)} + x^{\scriptscriptstyle (1)} + x^{\scriptscriptstyle (2)} \leq x^{\scriptscriptstyle (3)} \leq s^{\scriptscriptstyle (3)} - | s^{\scriptscriptstyle (1)} - s^{\scriptscriptstyle (2)} - x^{\scriptscriptstyle (1)} + x^{\scriptscriptstyle (2)} |\,. \end{equation} \newsection{ ${\bf so(d-4)}$ light-cone formalism}\label{sod-4sec} In the preceding sections we constructed parity invariant cubic interaction vertices for massive and massless higher spin fields. We studied cubic vertices for both the mixed-symmetry and totally symmetric fields. For totally symmetric fields in Minkowski space with dimension $d>6$, the antisymmetric Levi-Civita symbol does not give a contribution to cubic vertices, and therefore the parity invariant vertices we obtained constitute the complete list of cubic vertices. For totally symmetric fields in $d=4,5,6$ dimensions and mixed-symmetry fields in $d\geq 6$ dimensions, the antisymmetric Levi-Civita symbol admits new invariants, and we should therefore develop a method for deriving the complete lists of cubic vertices in a systematic way. In the theories of higher spin fields, it is important to know the complete lists of cubic vertices. This is related to the fact that one needs all interaction vertices in order to construct theories of higher spin fields that are complete to all orders in the coupling constant. We also note that vertices involving the antisymmetric Levi-Civita symbol are unavoidable in supersymmetric theories.
The ${\cal N}=4$, $4d$ supersymmetric Yang-Mills theory in light-cone superspace and most supergravity theories are important examples of such theories. Another very important example of a dynamical system whose cubic vertices involve the antisymmetric Levi-Civita symbol is provided by superstring field theories. Cubic vertices of the superstring field theories take the form $A\exp B$, where the factor $A$ involves the antisymmetric Levi-Civita symbol. To summarize, with the prospects of potentially interesting applications to supersymmetric Yang-Mills theories, supergravity, superstring theory, and supersymmetric higher spin field theories, it is desirable to develop a method for constructing cubic vertices that allows analyzing all possible cubic vertices on an equal footing. In this section we develop such a method. Because one of the characteristic features of our method is reducing the manifest transverse $so(d-2)$ symmetry to the $so(d-4)$ symmetry, we call it the $so(d-4)$ light-cone approach% \footnote{ In the preceding studies \cite{Green:1984fu}, reducing the manifest $so(d-2)$ symmetry to the $so(d-4)$ symmetry was used to formulate a superfield theory of $IIA$ superstrings. In \cite{Green:1984fu}, the reduction was motivated by the desire to obtain an unconstrained superfield formulation. In our study, the main motivation for the reduction is the desire to obtain the most general solution to cubic vertices for arbitrary spin fields in a Poincar\'e invariant theory. It is worth noticing that our method is especially convenient for studying the interaction vertices of supersymmetric theories whose unconstrained superfield formulation is based on reducing the manifest $so(d-2)$ symmetry to the $so(d-4)$ symmetry. The application of our method to the study of $11d$ supergravity can be found in \cite{Metsaev:2004wv}.}. To develop the $so(d-4)$ light-cone approach, we use equations for cubic interaction vertices in the harmonic scheme (see Section \ref{equharmschem}).
To keep the discussion from becoming unwieldy, we restrict our attention to the case of massless fields. All that is then required is to solve the equations given in \rf{basequ0001}, \rf{basequ0002}, \rf{harmcon01}, \rf{harver01}: \begin{eqnarray} \label{d43}&& {\bf J}^{IJ}|p_{\scriptscriptstyle [3]}^-\rangle=0\,, \\ \label{d44}&& (\mathbb{P}^I\partial_{\mathbb{P}^I} + \sum_{a=1}^3 \beta_a\partial_{\beta_a} )|p_{\scriptscriptstyle [3]}^-\rangle=0\,, \\ \label{d44N1} && \partial_{\mathbb{P}^I} \partial_{\mathbb{P}^I} |p_{\scriptscriptstyle [3]}^-\rangle = 0\,, \\[4pt] \label{m0basequ01} && X^{IJ} {\cal P}^J |p_{\scriptscriptstyle [3]}^-\rangle = 0\,, \\[3pt] && \label{p2vN1} |p_{\scriptscriptstyle [3]}^-\rangle \equiv p_{\scriptscriptstyle [3]}^-({\mathbb{P} },\beta_a;\, \alpha)|0\rangle_1|0\rangle_2|0\rangle_3\,, \end{eqnarray} where the angular momentum ${\bf J}^{IJ}$ is defined in \rf{JIJp3}, and Eqs.\rf{m0basequ01} are obtainable from Eqs.\rf{harver01} by setting ${\rm m}_a=0$, $a=1,2,3$. To proceed, we decompose the momentum $\mathbb{P}^I$, which is an $so(d-2)$ vector, as \begin{equation}\label{d46} \mathbb{P}^I \ \ \rightarrow \ \ \mathbb{P}^i\,,\qquad \mathbb{P}^R\,,\qquad \mathbb{P}^L \,, \qquad i=1,\ldots,d-4\,,\end{equation} where the momentum $\mathbb{P}^i$ is an $so(d-4)$ vector and complex-valued momenta $\mathbb{P}^R$, $\mathbb{P}^L $ are defined by \begin{equation}\label{d47} \mathbb{P}^R =\frac{1}{\sqrt{2}}(\mathbb{P}^{d-2}+{\rm i}\mathbb{P}^{d-3}), \qquad \mathbb{P}^L =\frac{1}{\sqrt{2}}(\mathbb{P}^{d-2} - {\rm i}\mathbb{P}^{d-3})\,. 
\end{equation} In what follows, in place of the momenta $\mathbb{P}^i$, $\mathbb{P}^R $, $\mathbb{P}^L $, we prefer to use a dimensionful momentum ${\mathbb{P} }^L$ and dimensionless momentum variables $q^i$, $\rho$ defined by \begin{equation} \label{newvar} q^i\equiv \frac{{\mathbb{P} }^i}{{\mathbb{P} }^L}\,, \qquad \rho\equiv \frac{{\mathbb{P} }^i{\mathbb{P} }^i+2{\mathbb{P} }^R{\mathbb{P} }^L} {2({\mathbb{P} }^L)^2}\,, \qquad \frac{{\mathbb{P} }^R}{{\mathbb{P} }^L}= \rho - \frac{q^2}{2}\,, \end{equation} where $q^2\equiv q^iq^i$. In terms of the new momenta, vertex \rf{p2vN1} takes the form \begin{equation} \label{p3v} p_{\scriptscriptstyle [3]}^-=({\mathbb{P} }^L)^k V(q\,,\rho\,,\beta_a\,;\,\alpha)\,, \end{equation} which implies that the vertex $p_{\scriptscriptstyle [3]}^-$ is a degree $k$ monomial in ${\mathbb{P} }^L$. In terms of the momenta \rf{newvar}, various components of the orbital momentum \rf{LIJ01} take the form \begin{eqnarray} \label{Lrl} && {\bf L}^{RL} =q^i\partial_{q^i} + 2\rho\partial_\rho -{\mathbb{P} }^{L} \partial_{{\mathbb{P} }^L}\,,\qquad\quad \\ \label{Lij} && {\bf L}^{ij} =q^i\partial_{q^j}-q^j\partial_{q^i}\,, \\ \label{Lli} && {\bf L}^{Li} =\partial_{q^i}\,, \\ \label{Lri} && {\bf L}^{Ri} =(\rho-\frac{q^2}{2})\partial_{q^i} +q^i (q^j \partial_{q^j} + 2\rho\partial_\rho -{\mathbb{P} }^L\partial_{{\mathbb{P} }^L})\,. \end{eqnarray} To demonstrate the main idea of introducing the variable $q^i$, we focus on the $Li$-part of Eqs.\rf{d43}. Plugging vertex $p_{\scriptscriptstyle [3]}^-$ \rf{p3v} and ${\bf L}^{Li}$ \rf{Lli} in the $Li$-part of Eqs.\rf{d43}, we obtain the equation \begin{equation} \label{Li2} (\partial_{q^i} + {\bf M}^{Li})V(q\,,\rho\,,\beta_a\,;\, \alpha)=0\,. \end{equation} The solution of Eq.\rf{Li2} is easily found to be \begin{equation} \label{d416} V(q\,,\rho\,,\beta_a\,;\, \alpha) = \widehat{E}_q\, \widetilde{V}(\rho\,, \beta_a\,;\, \alpha)\,, \qquad\quad \widehat{E}_q\equiv \exp(-q^i {\bf M}^{L i})\,.
\end{equation} Collecting the above expressions, we obtain the following representation for the vertex $p_{\scriptscriptstyle [3]}^-$: \begin{equation} \label{p3int} p_{\scriptscriptstyle [3]}^- = ({\mathbb{P} }^L)^k \widehat{E}_q \widetilde{V}(\rho\,, \beta_a\,;\, \alpha)\,, \end{equation} and note that in terms of the vertex $\widetilde{V}$ the $\beta$-homogeneity equation \rf{d44} becomes \begin{equation}\label{d420} (\sum_{a=1}^3 \beta_a\partial_{\beta_a}+k)\widetilde{V}(\rho,\beta_a\,;\, \alpha)=0\,. \end{equation} The next step is to find the dependence on the momentum $\rho$. For this, we use the $RL$, $Ri$ and $ij$ parts of Eqs.\rf{d43}: \begin{equation} \label{d417} {\bf J}^{RL}p_{\scriptscriptstyle [3]}^- = 0\,,\qquad {\bf J}^{Ri}p_{\scriptscriptstyle [3]}^- = 0\,,\qquad {\bf J}^{ij}p_{\scriptscriptstyle [3]}^- =0\,. \end{equation} It turns out that the kinematical equations \rf{d417} allow us to determine the dependence on $\rho$. We thus obtain the following representation for the vertex $\widetilde{V}$ (see Appendix E): \begin{eqnarray} && \label{d422} \widetilde{V}(\rho,\beta_a\,;\, \alpha) =\widehat{E}_\rho\widetilde{V}_0(\beta_a\,;\, \alpha)\,, \\[6pt] && \label{d423} \widehat{E}_\rho\equiv\sum_{n=0}^{k} (-\rho)^n\frac{\Gamma(\frac{d-4}{2}+k-n)}{2^n n!\Gamma(\frac{d-4}{2}+k)} ({\bf M}^{L i}{\bf M}^{L i})^n\,. \end{eqnarray} In addition, the kinematical equations \rf{d417} lead to equations for the new vertex $\widetilde{V}_0(\beta_a\,;\, \alpha)$ \rf{d422}, \begin{eqnarray} \label{d424} && ({\bf M}^{RL}-k)\widetilde{V}_0(\beta_a\,;\, \alpha)=0\,, \\[3pt] \label{d425}&& {\bf M}^{Ri}\widetilde{V}_0(\beta_a\,;\, \alpha)=0\,, \\[3pt] \label{d426}&& {\bf M}^{ij}\widetilde{V}_0(\beta_a\,;\, \alpha)=0\,, \end{eqnarray} while the $\beta$-homogeneity equation \rf{d420} takes the form \begin{equation}\label{d426ex} (\sum_{a=1}^3 \beta_a\partial_{\beta_a}+k)\widetilde{V}_0(\beta_a\,;\, \alpha)=0\,.
\end{equation} The dependence on the transverse space momentum ${\mathbb{P} }^I$ is thus found explicitly and we obtain the following representation for the cubic interaction vertex: \begin{equation} \label{p3v0} p_{\scriptscriptstyle [3]}^-({\mathbb{P} }, \beta_a\,;\, \alpha) =({\mathbb{P} }^L)^k \widehat{E}_q \widehat{E}_\rho \widetilde{V}_0(\beta_a\,;\, \alpha)\,, \end{equation} where $\widetilde{V}_0$ satisfies Eqs.\rf{d424}-\rf{d426ex}. One can verify that vertex \rf{p3v0} satisfies the harmonic equation \rf{d44N1} (see Appendix E). We now proceed to the last step of our method, which is to find the dependence of the vertex $\widetilde{V}_0(\beta_a\,;\, \alpha)$ on the three light-cone momenta $\beta_1$, $\beta_2$, $\beta_3$. Finding the dependence of $\widetilde{V}_0(\beta_a\,;\, \alpha)$ on the momenta $\beta_a$ is the most difficult point in the framework of the light-cone approach because the vertices $p_{\scriptscriptstyle [3]}^-$ and $\widetilde{V}_0(\beta_a\,;\, \alpha)$ are not polynomials in the light-cone momenta $\beta_a$ in general, i.e. there is no locality condition with respect to the light-cone coordinate $x^-$. But our approach, which is algebraic in nature, allows us to find a simple representation of the dependence on $\beta_a$. We proceed as follows. Because of the second relation in \rf{betaconlaw}, the vertex $\widetilde{V}_0$ depends on two independent light-cone momenta. Therefore, we need two equations to find $\widetilde{V}_0$. One of these equations is given in \rf{d426ex}. Our basic observation is that the second equation for $\widetilde{V}_0$ can be obtained from locality equations \rf{m0basequ01}.
It is easy to see that if the $so(d-2)$ invariance equations \rf{d43} are satisfied, then in order to respect all locality equations \rf{m0basequ01} it is sufficient to solve the $L$-part of locality equations \rf{m0basequ01}, which in the $so(d-4)$ notation takes the form \begin{equation}\label{Llocequ01} (X^{LR} {\cal P}^L + X^{Li}{\cal P}^i) |p_{\scriptscriptstyle [3]}^-\rangle = 0\,.\end{equation} Using the representation for $p_{\scriptscriptstyle [3]}^-$ given in \rf{p3v0}, we can prove that locality equation \rf{Llocequ01} amounts to the requirement that the vertex $\widetilde{V}_0$ satisfies the equation (see Appendix E) \begin{equation} \label{d427} \sum_{a=1}^3 \check{\beta}_a\left( \beta_a \partial_{\beta_a}^{\vphantom{5pt}} + M^{{\scriptscriptstyle (a)} RL} \right)\widetilde{V}_0 (\beta_a\,;\, \alpha) = 0\,.\end{equation} We note that the consistency requirement for Eqs.\rf{d425}, \rf{d427} leads to the equation \begin{equation} \label{d428} \sum_{a=1}^3 \check{\beta}_a M^{{\scriptscriptstyle (a)} Ri} \widetilde{V}_0(\beta_a\,;\, \alpha) = 0\,. \end{equation} It is easy to see that Eqs.\rf{d424}, \rf{d426ex}, \rf{d427} allow us to determine the dependence of $\widetilde{V}_0$ on the light-cone momenta $\beta_a$ completely: \begin{equation} \label{d429} \widetilde{V}_0(\beta_a\,;\, \alpha)= \widehat{E}_\beta\bar{V}_0(\alpha)\,,\qquad\quad \widehat{E}_\beta\equiv \prod_{a=1}^3 \beta_a^{- M^{{\scriptscriptstyle (a)} RL}}\,.
\end{equation} Using \rf{d429}, it can be shown that Eqs.\rf{d425}, \rf{d428} amount to the following equations for $\bar{V}_0(\alpha)$: \begin{equation}\label{d431} M^{{\scriptscriptstyle (1)} Ri} \bar{V}_0(\alpha) = M^{{\scriptscriptstyle (2)} Ri} \bar{V}_0(\alpha) = M^{{\scriptscriptstyle (3)} Ri} \bar{V}_0(\alpha)\,,\end{equation} while Eqs.\rf{d424}, \rf{d426} lead to the equations \begin{eqnarray} && \label{d430} ( {\bf M}^{RL} - k )\,\bar{V}_0(\alpha)=0\,, \\[3pt] && \label{d432} {\bf M}^{ij}\,\bar{V}_0(\alpha)=0\,.\end{eqnarray} Collecting all the steps above, we obtain the following representation for the cubic vertex: \begin{equation} \label{p3v001} p_{\scriptscriptstyle [3]}^-({\mathbb{P} }, \beta_a\,;\, \alpha) =({\mathbb{P} }^L)^k \widehat{E}_q \widehat{E}_\rho\widehat{E}_\beta \bar{V}_0(\alpha)\,, \end{equation} where the vertex $\bar{V}_0(\alpha)$ depends only on the spin degrees of freedom, denoted by $\alpha$, and satisfies Eqs.\rf{d431}-\rf{d432}. The dependence on the transverse space momentum ${\mathbb{P} }^I$ and the light-cone momenta $\beta_a$ is thus fixed explicitly. An attractive feature of the representation \rf{p3v001} for the vertex is that it is valid for an arbitrary realization of spin degrees of freedom. Because we used the general form of the angular momentum ${\bf J}^{IJ}$ in deriving \rf{p3v001}, the solution for $p_{\scriptscriptstyle [3]}^-$ \rf{p3v001} is universal and is valid for an arbitrary Poincar\'e invariant theory. Various theories differ by: (i) the spin operators $M^{IJ}$; (ii) the vertex $\bar{V}_0(\alpha)$ that depends only on spin variables $\alpha$. We now demonstrate that the remaining Eqs.\rf{d431}-\rf{d432} can be recast into a form that admits a purely group theoretical interpretation. For this, we use the well-known fact that each irrep of the $so(d-2)$ algebra can be realized as a representation induced from the $so(d-4) \oplus so(2)$ subalgebra.
Let $M^{IJ}$ be the $so(d-2)$ algebra generators realized in the $so(d-2)$ algebra irreps labeled by Gelfand-Zetlin labels $s_1,s_2,\ldots,s_\nu$. Then the generators $M^{IJ}$ obtained via the method of induced representations take the form \begin{eqnarray} && \label{d433} M^{Ri} = \bar{\zeta}^i\,, \\[3pt] && \label{d434} M^{RL} = -\zeta\bar{\zeta} + s_1\,, \\[3pt] && \label{d435} M^{ij} = \zeta^i \bar{\zeta}^j -\zeta^j \bar{\zeta}^i+S^{ij}\,, \\[3pt] && \label{d436} M^{L i} = -\frac{1}{2}\zeta^2 \bar{\zeta}^i + \zeta^i \zeta\bar{\zeta} +S^{ij}\zeta^j - s_1 \zeta^i\,, \end{eqnarray} where $\zeta\bar\zeta\equiv \zeta^i\bar\zeta^i$, $\zeta^2 \equiv \zeta^i\zeta^i$ and $S^{ij}$ stands for the $so(d-4)$ algebra generators. The generators $S^{ij}$ are realized in the $so(d-4)$ algebra irreps labeled by the Gelfand-Zetlin labels $s_2,\ldots, s_\nu$. The oscillators $\zeta^i$, $\bar{\zeta}^i$, being vectors of the $so(d-4)$ algebra, satisfy the commutator $[\bar{\zeta}^i,\zeta^j]=\delta^{ij}$\! \footnote{ Alternative convenient realization of $\zeta^i$ and $\bar{\zeta}^i$ is to treat $\zeta^i$ as complex-valued vector and $\bar{\zeta}^i$ as derivative in $\zeta^i$: $\bar{\zeta}^i \equiv \partial_{\zeta^i}$.}. To apply the spin operators $M^{IJ}$ \rf{d433}-\rf{d436} to the analysis of cubic vertices, we attach external line index to all quantities given in \rf{d433}-\rf{d436}, i.e. we introduce $M^{{\scriptscriptstyle (a)} IJ}$, $\zeta^{{\scriptscriptstyle (a)} i} $, $S^{{\scriptscriptstyle (a)} ij}$, $s_1^{\scriptscriptstyle (a)}$, $a=1,2,3$. Using the representation for the spin operators $M^{{\scriptscriptstyle (a)} Ri}$ \rf{d433} we then obtain the solution of Eqs.\rf{d431} \begin{equation} \label{d438} \bar{V}_0(\alpha) = G({\boldsymbol{\zeta}}, \alpha_S)\,, \qquad \ \ \ \ {\boldsymbol{\zeta}}^i\equiv \sum_{a=1}^3 \zeta^{{\scriptscriptstyle (a)} i}\,, \end{equation} where $\alpha_S$ stands for spin variables related to the $so(d-4)$ symmetries. 
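As a brief check of \rf{d438}, note that, in the realization of the footnote above where $\bar{\zeta}^{{\scriptscriptstyle (a)} i} = \partial_{\zeta^{{\scriptscriptstyle (a)} i}}$, each spin operator $M^{{\scriptscriptstyle (a)} Ri}$ \rf{d433} acts on a function of the total variable ${\boldsymbol{\zeta}}$ in the same way, \begin{equation} M^{{\scriptscriptstyle (a)} Ri}\, G({\boldsymbol{\zeta}},\alpha_S) = \partial_{\zeta^{{\scriptscriptstyle (a)} i}}\, G({\boldsymbol{\zeta}},\alpha_S) = \frac{\partial G}{\partial {\boldsymbol{\zeta}}^i}\,, \qquad a=1,2,3\,, \end{equation} so that Eqs.\rf{d431} are indeed satisfied for any function $G$ of ${\boldsymbol{\zeta}}^i$.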
Plugging $\bar{V}_0(\alpha)$ \rf{d438} into Eqs.\rf{d430}, \rf{d432} we find the equations for the vertex $G$, \begin{eqnarray} \label{d440} &&({\bf L}^{ij}({\boldsymbol{\zeta}})+ {\bf S}^{ij})G=0\,,\qquad {\bf S}^{ij} \equiv \sum_{a=1}^3 S^{{\scriptscriptstyle (a)} ij}\,, \\ \label{d441}&&({\boldsymbol{\zeta}}^i \partial_{{\boldsymbol{\zeta}}^i} + k- \sum_{a=1}^3 s_1^{\scriptscriptstyle (a)} )G = 0\,, \end{eqnarray} where ${\bf L}^{ij}({\boldsymbol{\zeta}})$ is defined similarly to \rf{LIJ01}. Equations \rf{d440} for the vertex $G$ are purely group theoretical equations. In the group theoretical language, $G$ is the generating function of Clebsch--Gordan coefficients connecting one vector representation ${\boldsymbol{\zeta}}^i$ and three representations whose spin matrices are given by $S^{{\scriptscriptstyle (a)} ij}$, $a=1,2,3$. Thus, the final expression for the vertex $p_{\scriptscriptstyle [3]}^-$ is given by \begin{equation}\label{d443} p_{\scriptscriptstyle [3]}^-= (\mathbb{P}^L)^k\widehat{E}_\rho V_0 \,,\qquad V_0 \equiv \widehat{E}_q \widehat{E}_\beta G({\boldsymbol{\zeta}}, \alpha_S)\,. \end{equation} In this formula, the operators $\widehat{E}_q$, $\widehat{E}_\rho$, $\widehat{E}_\beta$ are given in \rf{d416}, \rf{d423}, \rf{d429}, while the spin operators $M^{IJ}$ are given in \rf{d433}-\rf{d436}. The vertex $G$ is fixed by Eqs.\rf{d440}, \rf{d441}. We next demonstrate how the general solution \rf{d443} can be used in concrete applications. \subsection{ Cubic interaction vertices for massless totally symmetric fields in $5d$ Minkowski space}\label{5dtheor} $5d$ flat space is the simplest case where the advantages of the $so(d-4)$ light-cone approach can be demonstrated. In $5d$ flat space all physical massless fields are classified by irreps of the $so(3)$ algebra.
Since irreps of the $so(3)$ algebra are labeled by one label, all massless fields in $5d$ flat space can be described by totally symmetric tensor fields of the $so(3)$ algebra; we therefore restrict our attention to the study of cubic interaction vertices for the totally symmetric fields. For the $5d$ space, the indices $i,j$ that label $d-4$ directions take one value: $i,j=1$. To simplify our expressions, we use the short notation for the dimensionless momentum $q^i$ \rf{newvar} and the spin variable $\zeta^i$: \begin{equation} q\equiv q^1\,,\qquad \zeta \equiv \zeta^1 \,. \end{equation} The spin operators $M^{IJ}$ \rf{d433}-\rf{d436} then take the form \begin{equation} \label{d444} M^{RL}=-\zeta\bar{\zeta} + s \,,\qquad M^{R 1}= \bar\zeta\,,\qquad M^{L 1} =\frac{1}{2}\zeta^2\bar{\zeta}- s \zeta\,, \end{equation} where $s\equiv s_1$. In considering cubic vertices, the quantities $M^{IJ}$, $\zeta$, and $s$ should be equipped with an external line index to become $M^{{\scriptscriptstyle (a)} IJ}$, $\zeta^{\scriptscriptstyle (a)}$, and $s^{\scriptscriptstyle (a)}$, $a=1,2,3$.
With this convention, the solution of Eq.\rf{d441} takes the form \begin{equation}\label{d445} G= {\boldsymbol{\zeta}}^{{\bf s} - k}\,, \qquad {\boldsymbol{\zeta}} \equiv \sum_{a=1}^3 \zeta^{\scriptscriptstyle (a)}\,,\qquad {\bf s} \equiv \sum_{a=1}^3 s^{\scriptscriptstyle (a)}\,.\end{equation} Using relations for action of the spin operators $M^{{\scriptscriptstyle (a)} RL}$, $M^{{\scriptscriptstyle (a)} L1}$: \begin{eqnarray} \label{d446} && \prod_{a=1}^3 \beta_a^{-M^{{\scriptscriptstyle (a)} RL}}f({\boldsymbol{\zeta}}) =\prod_{a=1}^3 \beta_a^{-s^{\scriptscriptstyle (a)}}f( {\boldsymbol{\beta}}\cdot{\boldsymbol{\zeta}})\,, \qquad {\boldsymbol{\beta}} \cdot {\boldsymbol{\zeta}} \equiv \sum_{a=1}^3 \beta_a\zeta^{\scriptscriptstyle (a)} \,, \\ \label{d447} && e^{-q(\zeta^2\bar{\zeta}+b\zeta)}f(\zeta) =(1+q\zeta)^{-b}f\Bigl(\frac{\zeta}{1+q\zeta}\Bigr)\,, \end{eqnarray} we obtain the following representation for vertex $V_0$ \rf{d443}: \begin{equation}\label{d448} V_0(s^{\scriptscriptstyle (1)},s^{\scriptscriptstyle (2)},s^{\scriptscriptstyle (3)};\,k) = {\cal Z}^{\,{\bf s} - k} \prod_{a=1}^3 ({\cal B}^{\scriptscriptstyle (a)})^{s^{\scriptscriptstyle (a)}} \,, \end{equation} where we use the notation \begin{equation} \label{d448N1} {\cal B}^{\scriptscriptstyle (a)} \equiv \frac{1}{\beta_a}(1 + \frac{1}{2} q \zeta^{\scriptscriptstyle (a)} )^2\,, \quad \qquad {\cal Z} \equiv \sum_{a=1}^3\frac{\beta_a\zeta^{\scriptscriptstyle (a)} }{1 + \frac{1}{2}q \zeta^{\scriptscriptstyle (a)} }\,. \end{equation} For vertex $V_0$ \rf{d448} to be sensible it should be polynomial with respect to the spin variables $\zeta^{\scriptscriptstyle (a)}$. We therefore impose the restrictions on spin values $s^{\scriptscriptstyle (a)}$ and the number of derivatives $k$: \begin{equation} \label{0009N1} {\bf s} - k \geq 0\,, \qquad 2s^{\scriptscriptstyle (a)} \geq {\bf s} -k\,, \qquad a=1,2,3\,,\end{equation} which coincide with those in \rf{0009}. 
We rewrite restrictions \rf{0009N1} as \begin{equation}\label{d449} {\bf s} - 2 s_{min} \leq k \leq {\bf s}\,,\qquad s_{min}\equiv \min_{a=1,2,3} s^{\scriptscriptstyle (a)}\,. \end{equation} Comparing with restrictions \rf{0009}, \rf{00011}, we see that {\it the number ${\bf s} - k$ is not restricted to be an even integer} in the case under consideration. That is, for fixed spin values $s^{\scriptscriptstyle (1)}$, $s^{\scriptscriptstyle (2)}$, $s^{\scriptscriptstyle (3)}$, the integer $k$ takes the values (see \rf{d449}) \begin{equation} k = {\bf s},\, {\bf s}-1,\, {\bf s} - 2,\, \ldots\, , {\bf s} - 2s_{min}\,. \end{equation} This implies that for fixed spin values $s^{\scriptscriptstyle (1)}$, $s^{\scriptscriptstyle (2)}$, $s^{\scriptscriptstyle (3)}$, the number of allowed vertices $p_{\scriptscriptstyle [3]}^-(s^{\scriptscriptstyle (1)},s^{\scriptscriptstyle (2)},s^{\scriptscriptstyle (3)};k)$ that can be constructed is given by \begin{equation}\label{Nallow1} {\sf N}(s^{\scriptscriptstyle (1)},s^{\scriptscriptstyle (2)},s^{\scriptscriptstyle (3)}) = 2 s_{min} + 1\,. \end{equation} Comparing \rf{Nallow1} with \rf{number01}, we conclude that the $so(d-4)$ formalism gives $s_{min}$ additional vertices compared with those obtained in the $so(d-2)$ formalism of Section \ref{SolcubintversecN1} without using the antisymmetric Levi-Civita symbol. It is these additional $s_{min}$ vertices that could be built using the antisymmetric Levi-Civita symbol $\epsilon^{IJK}$ in the $so(d-2)$ ($so(3)$ for $d=5$) formalism% \footnote{ We recall that vertices not involving the antisymmetric Levi-Civita symbol are referred to as parity invariant vertices, while vertices involving the antisymmetric Levi-Civita symbol are referred to as parity violating vertices.}.
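As a simple illustration of the counting \rf{d449}, \rf{Nallow1}, consider three spin $1$ fields, $s^{\scriptscriptstyle (1)}=s^{\scriptscriptstyle (2)}=s^{\scriptscriptstyle (3)}=1$, so that ${\bf s}=3$ and $s_{min}=1$. Restrictions \rf{d449} then give \begin{equation} 1 \leq k \leq 3\,, \qquad\quad {\sf N}(1,1,1) = 2 s_{min} + 1 = 3\,, \end{equation} i.e. there are three allowed vertices, with $k=3,2,1$ derivatives; the vertex with $k=2$ is the parity violating vertex $V_0(1,1,1;2)$ listed in the first line of Table III below.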
To summarize, {\it the complete list of cubic vertices for the massless spin $s^{\scriptscriptstyle (1)},s^{\scriptscriptstyle (2)},s^{\scriptscriptstyle (3)}$ fields in $5d$ space involves $s_{min}+1$ parity invariant vertices $p_{\scriptscriptstyle [3]}^-(s^{\scriptscriptstyle (1)},s^{\scriptscriptstyle (2)},s^{\scriptscriptstyle (3)};k)$} with the number of derivatives given by \begin{equation} k = {\bf s},\, \, {\bf s}-2,\, \ldots\, , {\bf s} - 2s_{min} \,, \hspace{1cm}\qquad \hbox{ for parity invariant vertices}\,, \ \ \ \end{equation} {\it and $s_{min}$ parity violating vertices $p_{\scriptscriptstyle [3]}^-(s^{\scriptscriptstyle (1)},s^{\scriptscriptstyle (2)},s^{\scriptscriptstyle (3)};k)$} with the number of derivatives given by \begin{equation} k = {\bf s} - 1,\, {\bf s} - 3,\, \ldots\, , {\bf s} - 2s_{min} + 1\,,\qquad \hbox{ for parity violating vertices}\,. \end{equation} A remarkable property of the $so(d-4)$ formalism is that it allows constructing both the parity invariant and parity violating vertices on an equal footing. This is especially important in applications to supersymmetric theories that involve vertices of both these types. In Table III we present those {\it parity violating} cubic vertices whose Lorentz covariant counterparts are available in the literature% \footnote{ In the literature, we have not found the parity violating covariant Lagrangian for low spin $s=1,2$ fields that corresponds to our light-cone vertex $V_0(1,1,2;3)$. The covariant Lagrangian in the 4th line of Table III is invariant only under linearized gauge transformations.}. The {parity invariant} cubic vertices for arbitrary $d\geq 4$ were given in Tables I, II. \bigskip \noindent {\sf Table III. Parity violating cubic interaction vertices for massless totally symmetric fields in $5d$ space. 
\small In the 3rd column, $A_\mu$ stands for the Abelian spin 1 field, the matrices $R_{\mu\nu}$ and $\omega_\mu$ stand for the Riemann tensor $R_{\mu\nu}^{AB}$ and the Lorentz connection $\omega_\mu^{AB}$, and Tr denotes trace over Lorentz indices $A,B$}. {\small \begin{center} \begin{tabular}{|l|c|c|} \hline & & \\ [-3mm]\ Spin values and \ & \ \ \ Light-cone \ \ \ & Covariant \\ number of derivatives & vertex & Lagrangian \\ \ \ $ s^{\scriptscriptstyle (1)},s^{\scriptscriptstyle (2)},s^{\scriptscriptstyle (3)};\,k $ & $V_0(s^{\scriptscriptstyle (1)},s^{\scriptscriptstyle (2)},s^{\scriptscriptstyle (3)};k)$ & \\ \hline && \\[-3mm] \ \ \ \ \ \ \ $ 1,1,1;\,2$ & ${\cal B}^{\scriptscriptstyle (1)}{\cal B}^{\scriptscriptstyle (2)}{\cal B}^{\scriptscriptstyle (3)} {\cal Z} $ & $ \epsilon^{\mu\nu\rho\sigma\lambda}F_{\mu\nu}F_{\rho\sigma}A_\lambda$ \\[2mm]\hline && \\[-3mm] \ \ \ \ \ \ \ $ 1,2,2;\,4$ & $ {\cal B}^{\scriptscriptstyle (1)}({\cal B}^{\scriptscriptstyle (2)}{\cal B}^{\scriptscriptstyle (3)})^2 {\cal Z} $ & $\epsilon^{\mu\nu\rho\sigma\lambda} \hbox{ Tr } (R_{\mu\nu} R_{\rho\sigma}) A_\lambda $ \\[2mm]\hline && \\[-3mm] \ \ \ \ \ \ \ $ 2,2,2;\,3 $ & $ ({\cal B}^{\scriptscriptstyle (1)}{\cal B}^{\scriptscriptstyle (2)}{\cal B}^{\scriptscriptstyle (3)})^2 {\cal Z}^3 $ & ${\cal L}(\hbox{see Ref}.\cite{Boulanger:2000ni})$ \\[2mm]\hline && \\[-3mm] \ \ \ \ \ \ \ $ 2,2,2;\,5 $ & $ ({\cal B}^{\scriptscriptstyle (1)}{\cal B}^{\scriptscriptstyle (2)}{\cal B}^{\scriptscriptstyle (3)})^2 {\cal Z} $ & $\epsilon^{\mu\nu\rho\sigma\lambda} \hbox{Tr } ( R_{\mu\nu} R_{\rho\sigma}\omega_\lambda) $ \\[2mm]\hline && \\[-3mm] \ \ \ \ \ \ \ $ 3,3,3;\,4$ & $ ({\cal B}^{\scriptscriptstyle (1)}{\cal B}^{\scriptscriptstyle (2)}{\cal B}^{\scriptscriptstyle (3)})^3 {\cal Z}^5 $ & ${\cal L}(\hbox{see Ref}.\cite{Boulanger:2005br}) $ \\[2mm]\hline \end{tabular} \end{center} } \subsection{Cubic interaction vertices for massless totally symmetric and mixed-symmetry fields in $6d$ Minkowski
space}\label{6dtheor} In $6d$ flat space all physical massless fields are classified by irreps of the $so(4)$ algebra. Since irreps of the $so(4)$ algebra are labeled by two Gelfand-Zetlin labels $s_1$, $s_2$, $s_1\geq |s_2|$, massless fields in $6d$ flat space are described by $so(4)$ totally symmetric tensor fields ($s_1 \geq 0$, $s_2=0$) and $so(4)$ mixed-symmetry tensor fields ($s_2\ne 0$). For the massless {\it mixed-symmetry} fields, $6d$ flat space is the simplest case where advantages of the $so(d-4)$ light-cone approach can be demonstrated. Our approach allows us to build cubic vertices for all massless fields on an equal footing. Since the indices $i,j$ labeling $d-4$ directions take two values for $d=6$, $i,j=1,2$, we prefer to use the complex coordinates \begin{equation} x = \frac{1}{\sqrt{2}}(x^1+{\rm i}x^2)\,, \qquad {\bar{x}} = \frac{1}{\sqrt{2}}(x^1 - {\rm i}x^2)\,,\end{equation} in place of $d-4$ coordinates $x^1$, $x^2$. In the complex coordinates, the indices $i,j$ range over $x$ and $\bar{x}$ and the dimensionless momentum $q^i$ \rf{newvar} and the spin variable $\zeta^i$ are decomposed as \begin{equation} q^i = q^x,\,\, q^{\bar{x}} \,; \qquad \zeta^i = \, \zeta^x,\,\, \zeta^{\bar{x}} \,;\end{equation} while the generator of the $so(2)$ algebra $S^{ij}$ is expressed in terms of Gelfand-Zetlin label $s_2$, \begin{equation} S^{x{\bar{x}}} = s_2\,. \end{equation} This implies the following representation for the spin operators $M^{Li}$ \rf{d436}: \begin{equation} \label{MLxb} M^{L {\bar{x}} } =(\zeta^{\bar{x}})^2\bar{\zeta}^x - (s_1 + s_2) \zeta^{\bar{x}}\,, \qquad M^{L x}=(\zeta^x)^2 \bar{\zeta}^{\bar{x}} - ( s_1 - s_2 ) \zeta^x\,. 
\end{equation} We note that the Gelfand-Zetlin labels $s_1$ and $s_2$ of the $so(4)$ algebra irreps can be related to labels $j_1,j_2$ of irreps of $so(3)_1$ and $so(3)_2$ algebras that enter the decomposition $so(4)=so(3)_1 \oplus so(3)_2$: \begin{equation}\label{j1j2def} j_1 = \frac{1}{2}(s_1 + s_2)\,,\qquad j_{2}=\frac{1}{2}(s_1 - s_2)\,. \end{equation} To study cubic vertices the quantities $j_1$, $j_2$, $\zeta^i$ should be equipped with external line index $a=1,2,3$, i.e. we should introduce $j_1^{\scriptscriptstyle (a)}$, $j_2^{\scriptscriptstyle (a)}$, $\zeta^{{\scriptscriptstyle (a)} i}$. With this convention, the solution of Eqs.\rf{d440}, \rf{d441} takes the form \begin{equation} G=({\boldsymbol{\zeta}}^{\bar{x}})^{{\bf j}_1 - \frac{k}{2}} ({\boldsymbol{\zeta}}^x)^{{\bf j}_2 - \frac{k}{2} }\,, \qquad {\boldsymbol{\zeta}}^i \equiv \sum_{a=1}^3 \zeta^{{\scriptscriptstyle (a)} i}\,,\qquad \quad {\bf j}_\sigma\equiv\sum_{a=1}^3 j{}_\sigma^{\scriptscriptstyle (a)} \,.\end{equation} Using spin operators \rf{MLxb} and relations \rf{d446}, \rf{d447}, we obtain the cubic vertex \begin{equation} \label{Vod6} V_0(j_\sigma^{\scriptscriptstyle (a)};\,k)= \prod_{\sigma=1,2} {\cal Z}_\sigma^{{\bf j}_\sigma -\frac{k}{2}} \prod_{a=1,2,3\atop \sigma=1,2} ({\cal B}_\sigma^{\scriptscriptstyle (a)})^{j_\sigma^{\scriptscriptstyle (a)}}\,,\end{equation} where we use the notation \begin{equation} {\cal B}_\sigma^{\scriptscriptstyle (a)} \equiv \frac{1}{\beta_a} (1+q_\sigma \zeta{}_\sigma^{\scriptscriptstyle (a)} )^2 \,,\qquad {\cal Z}_\sigma\equiv \sum_{a=1}^3 \frac{\beta_a\zeta{}_\sigma^{\scriptscriptstyle (a)} }{1 + q_\sigma\zeta{}_\sigma^{\scriptscriptstyle (a)} }\,, \end{equation} \begin{equation} q_1\equiv q^x\,,\qquad q_2\equiv q^{\bar{x}}\,,\qquad \zeta_1^{\scriptscriptstyle (a)}\equiv\zeta^{{\scriptscriptstyle (a)} \bar{x}}\,,\qquad \zeta_2^{\scriptscriptstyle (a)} \equiv\zeta^{{\scriptscriptstyle (a)} x}\,. 
\end{equation} For vertex $V_0$ \rf{Vod6} to be polynomial in the spin variables $\zeta_\sigma^{\scriptscriptstyle (a)}$, we impose the restrictions \begin{eqnarray} \label{mixsymineq2} && 2({\bf j}_\sigma - 2j{}_\sigma^{\scriptscriptstyle (a)} )\leq k \leq 2 {\bf j}_\sigma\,,\qquad a=1,2,3; \quad \sigma=1,2\,; \\ \label{mixsymineq3} && 2{\bf j}_\sigma - k \quad \hbox{ even integers}\,, \qquad \sigma = 1,2\,. \end{eqnarray} Restrictions \rf{mixsymineq2} amount to the restrictions \begin{equation} \label{mixsymineq4} 2\max_{\sigma=1,2}({\bf j}_\sigma - 2 \min_{a=1,2,3} j{}_\sigma^{\scriptscriptstyle (a)}) \leq k \leq 2\min_{\sigma = 1,2} {\bf j}_\sigma\,. \end{equation} From \rf{mixsymineq3}, \rf{mixsymineq4}, we see that for fixed spin values $j_\sigma^{\scriptscriptstyle (a)}$, the number of cubic interaction vertices $p_{\scriptscriptstyle [3]}^-(j_\sigma^{\scriptscriptstyle (a)};k)$ that can be constructed is given by \begin{equation}\label{Nd6} {\sf N}(j_\sigma^{\scriptscriptstyle (a)}) = \min_{\sigma = 1,2} {\bf j}_\sigma -\max_{\sigma=1,2}({\bf j}_\sigma - 2 \min_{a=1,2,3} j{}_\sigma^{\scriptscriptstyle (a)}) +1 \,. \end{equation} To summarize, {\it for fixed spin values $j_\sigma^{\scriptscriptstyle (a)}$, restrictions \rf{mixsymineq3}, \rf{mixsymineq4} define all possible (parity invariant and parity violating) cubic vertices} that can be built for the massless totally symmetric and mixed-symmetry fields in $6d$ flat space. The number of these vertices is given by \rf{Nd6}. We now restrict our attention to the totally symmetric fields and compare the complete list of vertices obtained using the $so(d-4)$ method in this section and the list of parity invariant vertices obtained by the $so(d-2)$ method in Section \ref{SolcubintversecN1}. All that is required is to compare the restrictions \rf{mixsymineq3}, \rf{mixsymineq4} for the totally symmetric fields and restrictions \rf{00011}, \rf{00012}. 
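As an illustration of \rf{Nd6}, consider three totally symmetric spin $2$ fields, for which $s_1^{\scriptscriptstyle (a)}=2$, $s_2^{\scriptscriptstyle (a)}=0$ and hence, by \rf{j1j2def}, $j_1^{\scriptscriptstyle (a)}=j_2^{\scriptscriptstyle (a)}=1$, ${\bf j}_1={\bf j}_2=3$. Restrictions \rf{mixsymineq3}, \rf{mixsymineq4} then give \begin{equation} 2 \leq k \leq 6\,, \qquad k \ \hbox{ even}\,, \end{equation} i.e. the three vertices with $k=2,4,6$ derivatives, in agreement with ${\sf N}=3$ obtained from \rf{Nd6}.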
To adapt restrictions \rf{mixsymineq3}, \rf{mixsymineq4} to the totally symmetric fields we set $s_2^{\scriptscriptstyle (a)}=0$, $a=1,2,3$. Relations \rf{j1j2def} then give $j_1^{\scriptscriptstyle (a)}=j_2^{\scriptscriptstyle (a)} =s_1^{\scriptscriptstyle (a)}/2$. Using the identification $s_1^{\scriptscriptstyle (a)} \equiv s^{\scriptscriptstyle (a)}$, we see that restrictions \rf{mixsymineq3}, \rf{mixsymineq4} for the totally symmetric fields coincide with those of the $so(d-2)$ approach, \rf{00011}, \rf{00012}. This implies that for massless totally symmetric fields in $6d$, the complete list of cubic vertices obtained using the $so(d-4)$ method in this section coincides with the list of parity invariant cubic vertices obtained using the $so(d-2)$ method in Section \ref{SolcubintversecN1}, i.e. {\it all cubic vertices for the massless totally symmetric fields in $6d$ are parity invariant}. Thus, in contrast to the $5d$ case, the antisymmetric Levi-Civita symbol does not lead to new cubic vertices for the massless totally symmetric fields in $6d$. In a manifestly Lorentz covariant formulation, the massless mixed-symmetry fields in $6d$ flat space are described by a set of tensor fields whose $SO(5,1)$ space-time tensor indices have the structure of a Young tableau with two rows. The study of interaction vertices for massless mixed-symmetry fields for arbitrary $d$ in the framework of the covariant approach can be found in \cite{Bekaert:2004dz}. \newsection{Conclusions}\label{CONsec} Using the light-cone formalism we have developed various methods for constructing cubic interaction vertices for higher spin fields propagating in flat space. We applied these methods to construct a wide class of cubic interaction vertices for massless and massive arbitrary spin fields. For mixed-symmetry fields in space of arbitrary dimension, we obtained the generating function of the parity invariant cubic vertices.
We believe that this generating function involves all possible parity invariant vertices. To classify these vertices (i.e. to find restrictions on the powers of derivatives in a vertex for three fields carrying various values of spins) it is necessary to single out irreducible components of the reducible sets of fields, as was done in the case of totally symmetric fields. It seems likely that the easiest way to do that is to apply the $so(d-4)$ method to the generating solution for vertices. We hope to study this issue elsewhere. We emphasize that the generating form of the cubic vertices is very convenient for studying higher order interaction corrections in the theories of higher spin fields. We believe that the study of these corrections will allow us to find the generating function explicitly. Our results suggest a number of interesting applications and generalizations. i) We studied interaction vertices for bosonic fields. It would be interesting to extend the methods developed in this paper to the case of fermionic fields and apply these results to interaction vertices of the closed superstring field theory. ii) The light-cone gauge formulation of free fields in $AdS$ space was developed in \cite{Metsaev:1999ui}-\cite{Metsaev:2003cu}. It would be interesting to extend the methods in this paper to study cubic interaction vertices for fields propagating in $AdS$ space. This should be relatively straightforward because the light-cone gauge formulation of the field dynamics in $AdS$ space provides certain simplifications. iii) Another interesting application is related to certain massless (nonsupersymmetric) triplets in $d=11$, the dimension of M-theory.
It was found in \cite{Pengpan:1998qn} that some irreps of the $so(9)$ algebra naturally group together into triplets, referred to as Euler triplets, in which bosonic and fermionic degrees of freedom match up in the same way as in $11d$ supergravity (see also \cite{Brink:2002zq}-\cite{Brink:1999te}). It was later conjectured that these triplets might be organized into a relativistic theory that would presumably be finite. The methods we developed in this paper and those we used for studying $11d$ supergravity \cite{Metsaev:2004wv} admit a straightforward generalization to higher spin Euler triplets. We hope to return to these problems in future publications. \medskip {\bf Acknowledgments}. We thank N. Boulanger for informative communications and A. Semikhatov for useful comments. This work was supported by the INTAS project 03-51-6346, by the RFBR Grant No.05-02-17654, RFBR Grant for Leading Scientific Schools, Grant No. LSS-4401.2006.2 and Russian Science Support Foundation. \setcounter{section}{0} \setcounter{subsection}{0}
\section{Introduction} Given a positive integer $d$ and a pair of Young diagrams $\lambda,\mu \vdash d,$ we write $\lambda \succeq \mu$ if, for any $r \geq 1,$ the number of cells in the first $r$ rows of $\lambda$ is at least as large as the number of cells in the first $r$ rows of $\mu.$ This relation --- which is known as \emph{majorization} --- defines a partial order on the set of Young diagrams with $d$ cells. Given a Young diagram $\lambda$ and a positive integer $N,$ a Young tableau of shape $\lambda$ and rank $N$ is a function $T$ on the cells of $\lambda$ which takes values in $\{1,\dots,N\}.$ We say that $T$ is \emph{semistandard} if it is weakly increasing along rows and strictly increasing along columns. If $\lambda \vdash d \leq N,$ then the set $\mathrm{SSYT}(\lambda,N)$ of semistandard Young tableaux of shape $\lambda$ and rank $N$ is nonempty, and the \emph{normalized} Schur polynomials \begin{equation*} S_\lambda(x_1,\dots,x_N) = \frac{1}{|\mathrm{SSYT}(\lambda,N)|} \sum_{T \in \mathrm{SSYT}(\lambda,N)} \prod_{j=1}^N x_j^{|T^{-1}(j)|}, \qquad \lambda \vdash d, \end{equation*} \noindent form a basis of the space of homogeneous symmetric polynomials of degree $d$ in $N$ variables. The starting point of this paper is a result of Cuttler--Greene--Skandera \cite{CGS} and Sra \cite{Sra},\footnote{Ait-Haddou and Mazure have also given a different proof of this theorem via the theory of blossoms \cite{AH-M}.} which states that the polynomials $S_\lambda$ characterize the majorization order in the following sense. \begin{theorem} \label{thm:CGSS} For any $\lambda,\mu \vdash d \leq N,$ we have $\lambda \succeq \mu$ if and only if \begin{equation*} S_\lambda(x_1,\dots,x_N) \geq S_\mu(x_1,\dots,x_N) \quad \textrm{for all } \, x_1,\dots,x_N \in \mathbb{R}_{\geq 0}. 
\end{equation*} \end{theorem} Quite recently, Khare and Tao \cite{KhTao, KhTao2} have provided an analytic extension of Theorem \ref{thm:CGSS} by showing that the functions $S_\lambda$ can also be used to characterize \emph{weak} majorization, i.e.~the preorder obtained by dropping the condition $|\lambda|=|\mu|$ in the definition of majorization. The present paper provides a geometric generalization of Theorem \ref{thm:CGSS}: we consider a new notion of majorization associated to the Weyl group of a given root system\footnote{In this paper all root systems are assumed to be crystallographic, but not necessarily reduced.} $\Phi$, and show that it is characterized by a pointwise inequality for spherical functions on any Riemannian symmetric space with restricted root system $\Phi$. The paper is organized as follows. In Section \ref{sec:majorization}, we introduce the notion of Weyl group majorization and prove our main result in the special case of spherical functions on the Lie algebra ${\mathfrak g}} \def\tot{{\mathfrak t}$ of a compact group $G$ (Theorem \ref{thm:H-W-convexity}). We treat this case separately because spherical functions reduce to Harish-Chandra orbital integrals in this setting, leading to a very natural and direct generalization of Theorem \ref{thm:CGSS}. Moreover, the proofs in the Lie algebra case do not require a discussion of symmetric spaces, and are thus somewhat more elementary. In Section \ref{sec:symspaces}, we treat the general case of spherical functions on a Riemannian symmetric space of non-compact type, and prove our main result (Theorem \ref{thm:spherical-majorization}). We then use this theorem to deduce majorization-characterizing inequalities for spherical functions on symmetric spaces of Euclidean type (Proposition \ref{prop:euclidean-majorization}) and compact type (Proposition \ref{prop:compact-majorization}). 
Furthermore, we discuss how the result for the compact case implies inequalities for various families of orthogonal polynomials, such as Schur polynomials. In Section \ref{sec:conjecture}, we develop an even more general framework based on Heckman--Opdam hypergeometric functions \cite{RSHF1}. We state a conjectural characterization of majorization in this context, and prove one direction of this conjecture. Moreover, we show that the full conjecture holds in rank one, where it reduces to an inequality for the classical Gauss hypergeometric function. \section{Lie Algebras} \label{sec:majorization} Let $G$ be a connected, compact Lie group with Lie algebra ${\mathfrak g}$. Let $T \subset G$ be a maximal torus, $\tot = \mathrm{Lie}(T) \subset {\mathfrak g}$ the corresponding Cartan subalgebra, and $\Phi$ the root system of ${\mathfrak g}$ with respect to $\tot$. Fix a choice of positive roots $\Phi^+$, and let $W$ be the Weyl group generated by reflections in the root hyperplanes in $\tot$. \begin{definition} \label{def:W-majorization} \normalfont Let $V$ be any vector space on which $W$ acts by reflections, and let $\lambda, \mu \in V$. We say that $\lambda$ $W$-{\it majorizes} $\mu$, written $\lambda \succeq \mu$, if $\mu$ lies in the convex hull of the Weyl orbit of $\lambda$. A function $F: V \to \R$ is said to be $W$-{\it convex} if $F(\lambda) \ge F(\mu)$ whenever $\lambda \succeq \mu$. \end{definition} The relation $\succeq$ defines a preorder on $\tot$ and a partial order on each Weyl chamber.\footnote{The $W$-majorization preorder is not the same as the height partial order on $\tot$. If $\lambda, \mu \in \tot$, we say that $\lambda$ is {\it higher} than $\mu$ if $\lambda - \mu$ can be written as a linear combination of positive roots with nonnegative coefficients.
The resulting order coincides with $W$-majorization when $\lambda$ and $\mu$ are both dominant, but the height partial order is not $W$-invariant.} When $G = \mathrm{U}(N),$ so that $W \cong S_N$ and $\tot \cong \R^N$, $W$-majorization coincides with the usual majorization order on vectors, and $W$-convexity coincides with the widely studied notion of Schur-convexity. In particular, we recover the ordering on Young diagrams by regarding them as vectors with weakly decreasing integer coordinates. Generalized majorization orders associated to group actions have been studied since the 1960's; see \cite{AOA,EP, GW, Mudh1}. In this section, we prove that $W$-majorization may be characterized by comparing the pointwise behavior of Laplace transforms of invariant probability measures on coadjoint orbits of $G$. Concretely, choose an invariant inner product $\langle \cdot,\cdot \rangle$ identifying ${\mathfrak g} \cong {\mathfrak g}^*$, and for $\lambda \in {\mathfrak g}$ let $\mathcal{O}_\lambda = \{ \mathrm{Ad}_g \lambda \ | \ g \in G \}$ denote its (co)adjoint orbit. Define \begin{equation} \label{eqn:L-def} L_{\lambda}(x) = \int_{\mathcal{O}_\lambda} e^{\langle y,x \rangle} dy = \int_G e^{\langle \mathrm{Ad}_g \lambda, x \rangle} \, dg, \qquad \lambda, x \in \tot_\C, \end{equation} where $dy$ is the unique invariant probability measure on $\mathcal{O}_\lambda$, $dg$ is the Haar probability measure on $G$, and $\tot_\C \cong \tot \oplus i \tot$ is a Cartan subalgebra of the complexified Lie algebra ${\mathfrak g}_\C$. The functions $L_{\lambda}(x)$ are ubiquitous objects that arise in many different areas of mathematics and physics. They were originally studied by Harish-Chandra in the context of harmonic analysis on Lie algebras \cite{HC}, and they play an important role in the orbit method in representation theory \cite{AK}.
When $G = \U(N)$, the transform $L_\lambda(x)$ is known as the Harish-Chandra--Itzykson--Zuber (HCIZ) integral; it has been widely studied in theoretical physics and random matrix theory since the 1980's \cite{IZ}. More recently, the HCIZ integral has become an important object in combinatorics \cite{GGN,Novak:New} and probability \cite{BGH,Novak:JSP}. For further background on these functions and their diverse applications, we refer the reader to \cite{McS1, McS2}. The main result of this section is the following characterization theorem. \begin{theorem} \label{thm:H-W-convexity} For any $\lambda, \mu \in \tot$, the following are equivalent: \begin{enumerate} \item $\lambda \succeq \mu$, \item $L_\lambda(x) \ge L_\mu(x)$ for all $x \in \tot$. \end{enumerate} \end{theorem} \begin{proof} We first show (ii) implies (i), by proving the contrapositive. The {\it discriminant} of ${\mathfrak g}$ is the polynomial $\Delta_{\mathfrak g}(x) = \prod_{\alpha \in \Phi^+} \langle \alpha, x \rangle$, i.e.~the product of the positive roots.\footnote{Here we take the roots to be real-valued linear functionals in $\tot^*$, which we identify with $\tot$ via the inner product. As a result, our roots differ by a factor of $i$ from those typically used in the setting of complex semisimple Lie algebras.} Let $x \in \tot$ with $\Delta_{\mathfrak g}(x) \not = 0$. We assume for now that $\Delta_{\mathfrak g}(\lambda), \Delta_{\mathfrak g}(\mu) \not = 0$ as well; later we will remove this assumption.
The Laplace transform (\ref{eqn:L-def}) admits an exact expression, due to Harish-Chandra \cite{HC}: \begin{equation} \label{eqn:hc} L_\lambda(x) = \frac{\Delta_{\mathfrak g}(\rho)}{\Delta_{\mathfrak g}(\lambda) \Delta_{\mathfrak g}(x)} \sum_{w \in W} \epsilon(w) e^{\langle w(\lambda), x \rangle}, \end{equation} where $\rho = \frac{1}{2} \sum_{\alpha \in \Phi^+} \alpha$ and $\epsilon(w)$ is the sign of $w \in W$. Taking $t > 0$ and using (\ref{eqn:hc}), we can write \begin{equation} \label{eqn:first-H-diff} L_\mu(tx) - L_\lambda(tx) = \frac{\Delta_{\mathfrak g}(\rho)}{\Delta_{\mathfrak g}(tx)} \sum_{w \in W} \epsilon(w) \left( \frac{e^{t \langle w(\mu), x \rangle}}{\Delta_{\mathfrak g}(\mu)} - \frac{e^{t \langle w(\lambda), x \rangle}}{\Delta_{\mathfrak g}(\lambda)} \right). \end{equation} This expression is manifestly $W$-invariant in $\lambda$, $\mu$ and $x$, so we may assume without loss of generality that all three are dominant. Then as $t \to +\infty$ we have: \begin{equation} \label{eqn:H-asymp} L_\mu(tx) - L_\lambda(tx) = \frac{\Delta_{\mathfrak g}(\rho)}{\Delta_{\mathfrak g}(tx)} \left( \frac{e^{t \langle \mu, x \rangle}}{\Delta_{\mathfrak g}(\mu)} - \frac{e^{t \langle \lambda, x \rangle}}{\Delta_{\mathfrak g}(\lambda)} \right) + (\textrm{lower-order terms}). \end{equation} Now suppose $\lambda \not \succeq \mu$. Then $\mu$ lies outside the convex hull of the $W$-orbit of $\lambda$, so by the hyperplane separation theorem there is some $x_0 \in \tot$ and $C > 0$ such that $\langle \mu, x_0 \rangle > C$ while $\langle w(\lambda), x_0 \rangle < C$ for all $w \in W$. By making a small perturbation to $x_0$ if necessary, we can ensure that $\Delta_{\mathfrak g}(x_0) \not = 0$.
Take $x$ in (\ref{eqn:H-asymp}) to be the dominant representative of the Weyl orbit of $x_0$. Then we still have $\langle \mu, x \rangle > C$, $\langle \lambda, x \rangle < C$, and from (\ref{eqn:H-asymp}) we find: $$ \lim_{t \to \infty} \Big[ L_\mu(tx) - L_\lambda(tx) \Big] \ge \lim_{t \to \infty} \frac{\Delta_{\mathfrak g}(\rho)}{\Delta_{\mathfrak g}(tx)} \left( \frac{e^{t \langle \mu, x \rangle}}{\Delta_{\mathfrak g}(\mu)} - \frac{e^{t C}}{\Delta_{\mathfrak g}(\lambda)} \right) = \infty, $$ which implies $L_\mu(tx) > L_\lambda(tx)$ for some $t > 0$, so that (ii) cannot hold. Now we remove the assumption that $\Delta_{\mathfrak g}(\lambda), \Delta_{\mathfrak g}(\mu) \not = 0$. In this case the expression (\ref{eqn:first-H-diff}) may be singular, so we instead take a limit: \begin{equation} \label{eqn:singular-H-diff} L_\mu(tx) - L_\lambda(tx) = \lim_{\eta \to 0} \frac{\Delta_{\mathfrak g}(\rho)}{\Delta_{\mathfrak g}(tx)} \sum_{w \in W} \epsilon(w) \left( \frac{e^{t \langle w(\mu + \eta \rho), x \rangle}}{\Delta_{\mathfrak g}(\mu + \eta \rho)} - \frac{e^{t \langle w(\lambda + \eta \rho), x \rangle}}{\Delta_{\mathfrak g}(\lambda + \eta \rho)} \right). \end{equation} To evaluate this limit, we apply l'H\^opital's rule as many times as needed, treating the $\lambda$ and $\mu$ terms separately.
After $j$ applications to the $\lambda$ terms and $k$ applications to the $\mu$ terms for some $j, k \ge 0$, in place of (\ref{eqn:H-asymp}) we find: \begin{equation} \label{eqn:singular-H-asymp} L_\mu(tx) - L_\lambda(tx) = \frac{\Delta_{\mathfrak g}(\rho)}{\Delta_{\mathfrak g}(tx)} \left( \frac{t^k \langle \rho, x \rangle^k e^{t \langle \mu, x \rangle}}{\partial_\rho^k \Delta_{\mathfrak g}(\mu)} - \frac{t^j \langle \rho, x \rangle^j e^{t \langle \lambda, x \rangle}}{\partial_\rho^j \Delta_{\mathfrak g}(\lambda)} \right) +\ (\textrm{lower-order terms}), \end{equation} where $\partial_\rho^k \Delta_{\mathfrak g}(\mu) = \frac{d^k}{d \eta^k} \Delta_{\mathfrak g}(\mu + \eta \rho) \big |_{\eta = 0}$. The remainder of the argument then goes through as before, and we conclude that (ii) implies (i). The other direction of the proof, (i) implies (ii), amounts to showing that for all $x \in \tot$, the function \mbox{$\lambda \mapsto L_\lambda(x)$} is $W$-convex. This function is clearly $W$-invariant, and by \mbox{\cite[Theorem 1]{GW},} a $W$-invariant, convex function is $W$-convex. It therefore remains only to show that $L_\lambda(x)$ is convex in $\lambda$, and for this it is sufficient to show midpoint convexity. For $u, v \in \tot$ we have \begin{multline*} L_{\frac{1}{2}(u+v)}(x) = \int_{G} e^{ \langle \mathrm{Ad}_g (u + v)/2, x \rangle} dg = \int_{G} \sqrt{e^{\langle \mathrm{Ad}_g u, x \rangle} e^{\langle \mathrm{Ad}_g v, x \rangle} } \,dg \\ \le \int_{G} \frac{1}{2}\big( e^{\langle \mathrm{Ad}_g u, x \rangle} + e^{\langle \mathrm{Ad}_g v, x \rangle} \big) \, dg = \frac{1}{2}L_u(x) + \frac{1}{2}L_v(x), \end{multline*} where in the final line we have applied the inequality of arithmetic and geometric means. This proves the theorem.
\end{proof} \begin{remark} \normalfont It is easily verified from the definition (\ref{eqn:L-def}) that $L_\lambda(x) = L_x(\lambda)$, so condition (ii) in Theorem \ref{thm:H-W-convexity} could equivalently be written: $$L_x(\lambda) \ge L_x(\mu) \textrm{ for all } x \in \tot.$$ \end{remark} \section{Symmetric spaces} \label{sec:symspaces} This section contains our main results, which are majorization inequalities for spherical functions on Riemannian symmetric spaces. After introducing some background on symmetric spaces and spherical functions, we state and prove separate inequalities for each of the three types of irreducible symmetric space. In each case the theorem takes the form of an inequality between the pointwise values of any two spherical functions, which reflects the $W$-majorization order on the space of vectors that index the spherical functions. As we explain below, these results imply Theorem \ref{thm:H-W-convexity} and discretizations thereof, such as the Schur function inequality studied in \cite{CGS,Sra}. \subsection{Background on symmetric spaces and spherical functions} \label{sec:symspace-background} Here we introduce only the minimum background required to state and prove the theorems. We refer the reader to \cite[Appendices B and C]{OlPe} for a concise introduction to these topics, and to \cite{DS, GGA} for detailed references. The definitions below mostly follow \cite[ch.~4]{GGA}. \begin{definition} \label{def:symspace} \normalfont Let $G$ be a connected Lie group, and $K \subset G$ a compact subgroup. We say that $(G, K)$ is a {\it symmetric pair} if $K$ is the fixed-point set of an involutive automorphism $\sigma: G \to G$. For our purposes, a {\it Riemannian symmetric space} is a quotient $X = G/K$, where $(G, K)$ is a symmetric pair. When $G$ is non-compact and semisimple with finite center, and $K$ is a maximal compact subgroup, we say that $X$ is of {\it non-compact type}.
\end{definition} Below, the term ``symmetric space'' always means a Riemannian symmetric space as defined above. \begin{definition} \label{def:spherical-funcs} \normalfont Let $X = G/K$ be a Riemannian symmetric space, and write $[g] \in X$ for the image of $g \in G$ under the quotient map $G \to G/K$. Let $\mathcal{D}(X)$ be the algebra of differential operators on $X$ that are invariant under all translations $[x] \mapsto [gx]$, $g \in G$. A complex-valued function $\phi \in C^\infty(X)$ is called a {\it spherical function} if all of the following hold: \begin{enumerate} \item $\phi([\mathrm{id}]) = 1$, \item $\phi([kx]) = \phi([x])$ for all $k \in K$, \item $D \phi = \gamma_D \phi$ for each $D \in \mathcal{D}(X)$, where $\gamma_D$ is some complex eigenvalue. \end{enumerate} \end{definition} Spherical functions play a central role in the theory of harmonic analysis on symmetric spaces, and many important families of special functions can be realized as spherical functions on some symmetric space. \begin{example} \label{ex:cpt-gp-symspace} \normalfont Let $G$ be a compact connected Lie group, and $K \subset G \times G$ the diagonal subgroup. Then $(G \times G, K)$ is a symmetric pair, and we can identify $(G \times G)/K \cong G$ via $(g_1, g_2) K \mapsto g_1 g_2^{-1}$. The spherical functions on $G$ are precisely the functions of the form $$\phi_\lambda(g) = \frac{\chi_\lambda(g)}{\dim V_\lambda},$$ where $V_\lambda$ is the irreducible representation of $G$ with highest weight $\lambda$, and $\chi_\lambda$ is its character. \end{example} \begin{example} \label{ex:alg-spherical-funcs} \normalfont Let $G$ again be a compact connected Lie group. If we regard its Lie algebra ${\mathfrak g}$ as an abelian Lie group, we can form the semidirect product $G \ltimes {\mathfrak g}$ with multiplication $(g_1, x_1) \cdot (g_2, x_2) = (g_1 g_2, \, \mathrm{Ad}_{g_1} x_2 + x_1)$.
Then $(G \ltimes {\mathfrak g}, G)$ is a symmetric pair, and we can identify $(G \ltimes {\mathfrak g})/G \cong {\mathfrak g}$ via $(g, x) \mapsto \mathrm{Ad}_g x$. Thus ${\mathfrak g}$ is a symmetric space, and the spherical functions on ${\mathfrak g}$ reduce to the Laplace transforms studied in Section \ref{sec:majorization}, $$L_{\lambda}(x) = \int_G e^{\langle \mathrm{Ad}_g \lambda, x \rangle} \, dg, \qquad x \in {\mathfrak g}, \quad \lambda \in \tot_\C,$$ where $\tot_\C$ is the complexification of a Cartan subalgebra $\tot \subset {\mathfrak g}$. \end{example} If $X$ is a symmetric space then its universal cover $\tilde X$ is also symmetric, and the spherical functions on $X$ may be identified with spherical functions on $\tilde X$ that are constant on the fibers of the covering map. In this sense the spherical functions on $\tilde X$ subsume those on $X$, so that in what follows we may assume without loss of generality that $X$ is simply connected. We say that $X$ is {\it irreducible} if it cannot be written as a nontrivial product of symmetric spaces. A simply connected, irreducible symmetric space is always: \renewcommand{\labelenumi}{(\arabic{enumi})} \begin{enumerate} \item of non-compact type; or, \item a Euclidean space; or, \item compact. \end{enumerate} \renewcommand{\labelenumi}{(\roman{enumi})} These three types correspond respectively to the cases in which $X$ is negatively curved, flat, or positively curved. There is a well-known correspondence between the three types, which we now describe.
If $X^- = G/K$ is a symmetric space of non-compact type, $\sigma : G \to G$ is the associated involution fixing $K$, and ${\mathfrak g} = \mathrm{Lie}(G)$, then we have the Cartan decomposition ${\mathfrak g} = \mathfrak{k} + \mathfrak{p}$, where $\mathfrak{k} = \mathrm{Lie}(K)$ is the fixed-point set of $d \sigma$. From these data we can construct both a Euclidean symmetric space and a compact symmetric space. First define ${\mathfrak g}^+ = \mathfrak{k} + i \mathfrak{p} \subset {\mathfrak g}_\C$, which is the Lie algebra of the compact real form $G^+$ of $G$. The symmetric space $X^+ = G^+/K$ is obviously compact. Next define the algebra ${\mathfrak g}^0$, which is equal to ${\mathfrak g}$ as a vector space but is endowed with a different Lie bracket $[ \cdot, \cdot ]_0$ defined by $$[x, y]_0 = \begin{cases}0, & x, y \in \mathfrak{p}, \\ [x,y], & \textrm{otherwise.} \end{cases}$$ Then the group $G^0 = \exp({\mathfrak g}^0) \cong K \ltimes \mathfrak{p}$ acts on $\mathfrak{p}$ by affine transformations, $(k, p) \cdot x = \mathrm{Ad}_k x + p$, and $X^0 = G^0 / K \cong \mathfrak{p}$ is a Euclidean symmetric space. Thus we have constructed a triple of symmetric spaces $(X^-, X^0, X^+)$ that belong respectively to the three types listed above. Moreover, every simply connected, irreducible symmetric space occurs in such a triple. In the following subsections, we study spherical functions on the spaces $X^-$, $X^0$, and $X^+$. \subsection{Symmetric spaces of non-compact type} When $X^- = G/K$ is a symmetric space of non-compact type, the spherical functions admit a convenient integral representation, due to Harish-Chandra \cite{HC2, HC3}. The version that we use here is proved in \cite[ch.~4, Theorem 4.3]{GGA}.
Let $G = NAK$, ${\mathfrak g} = \mathfrak{n} + \mathfrak{a} + \mathfrak{k}$ be the Iwasawa decompositions of $G$ and ${\mathfrak g}$. For $g \in G$, let $a(g)$ be the unique element of $\mathfrak{a}$ such that $g \in N e^{a(g)} K$. The Killing form $\langle \cdot, \cdot \rangle$ on ${\mathfrak g}$ restricts to an inner product on $\mathfrak{a}$. For $\alpha \in \mathfrak{a}$, define $${\mathfrak g}_\alpha = \{ \ x \in {\mathfrak g} \ | \ [h, x] = \langle \alpha, h \rangle x \ \textrm{for all} \ h \in \mathfrak{a} \ \}.$$ The {\it restricted root system} $\Phi$ of $X^-$ consists of all nonzero $\alpha \in \mathfrak{a}$ for which ${\mathfrak g}_\alpha$ is nontrivial. Fix a choice $\Phi^+$ of positive roots, and let $W$ be the Weyl group generated by reflections in the root hyperplanes. For $\lambda, \mu \in \mathfrak{a}$, we again write $\lambda \succeq \mu$ to indicate that $\lambda$ $W$-majorizes $\mu$. For $\alpha \in \Phi$, define $m_\alpha = \dim {\mathfrak g}_\alpha$, and set $\rho = \frac{1}{2} \sum_{\alpha \in \Phi^+} m_\alpha \alpha$. Write $dk$ for the normalized Haar measure on $K$. \begin{theorem}[Harish-Chandra] \label{thm:hc-integral-spherical} The spherical functions on $X^-$ are exactly the functions of the form \begin{equation} \label{eqn:hc-integral-spherical} \phi^-_\lambda([g]) = \int_K e^{\langle i \lambda + \rho, a(kg) \rangle} dk, \qquad g \in G, \end{equation} as $\lambda$ ranges over $\mathfrak{a}_\C$. Moreover, two such functions $\phi^-_\lambda$ and $\phi^-_\mu$ are identical if and only if $\mu = w(\lambda)$ for some $w \in W$. \end{theorem} The following theorem is the main result of this paper. \begin{theorem} \label{thm:spherical-majorization} Let $X^- = G/K$ be a Riemannian symmetric space of non-compact type.
For any $\lambda, \mu \in \mathfrak{a}$, the following are equivalent: \begin{enumerate} \item $\lambda \succeq \mu$, \item $\phi^-_{i \lambda}(x) \ge \phi^-_{i \mu}(x)$ for all $x \in X$. \end{enumerate} \end{theorem} \begin{proof} The argument generalizes the proof of Theorem \ref{thm:H-W-convexity}. We first show that (ii) implies (i) by proving the contrapositive, and then that (i) implies (ii) using the integral representation (\ref{eqn:hc-integral-spherical}) for the spherical functions. Suppose $\lambda \not \succeq \mu$. Since the map $\lambda \mapsto -\lambda$ is an isometry of $\mathfrak{a}$, we have $\lambda \succeq \mu$ if and only if $- \lambda \succeq - \mu$. Accordingly, to prove that (ii) implies (i), it suffices to show that $\phi^-_{- i \mu}(x) > \phi^-_{- i \lambda}(x)$ for some $x \in X$. By hyperplane separation, we can obtain $y \in \mathfrak{a}$ and $C_1 > 0$ such that $\langle \mu, y \rangle > C_1$ and $\langle w(\lambda), y \rangle < C_1$ for all $w \in W$. Clearly both of these inequalities still hold if we replace $y$ with the dominant representative of its Weyl orbit, and by Theorem \ref{thm:hc-integral-spherical} we have $\phi^-_{i \lambda} = \phi^-_{i w(\lambda)}$ for $w \in W$. Therefore without loss of generality we may take all three of $\lambda$, $\mu$ and $y$ to be dominant. With these assumptions, we will study the asymptotic behavior of the spherical functions $\phi^-_{- i \lambda}$ and $\phi^-_{- i \mu}$ at infinity. This topic is well understood; see e.g.~\cite{Duis-asymp}. 
In particular we have the following sharp estimate as $t \to +\infty$, which also follows directly from (\ref{eqn:symspace-HGF}) and (\ref{eqn:HGF-asymp}) below: \begin{equation} \label{eqn:spherical-asymp} \phi^-_{- i \lambda}([e^{ty}]) \ \asymp \ e^{t \langle \lambda - \rho, \, y \rangle} \prod_{\alpha \in \Phi^+_\lambda} (1 + 2 t \langle \alpha, y \rangle ), \end{equation} where \begin{equation} \label{eqn:phi-plus-lambda} \Phi^+_\lambda = \Big \{ \ \alpha \in \Phi^+ \ \Big | \ \frac{1}{2} \alpha \not \in \Phi^+, \ \langle \alpha, \lambda \rangle = 0 \ \Big \}. \end{equation} Here the notation $f(t) \asymp g(t)$ means that there exist constants $c, C, T > 0$ such that $c g(t) < f(t) < Cg(t)$ for all $t > T$. Thus the estimate (\ref{eqn:spherical-asymp}) implies that $$\phi^-_{- i \mu}([e^{ty}]) - \phi^-_{- i \lambda}([e^{ty}]) \ > \ e^{-t \langle \rho, y \rangle} \bigg(C_2 \, e^{t \langle \mu, y \rangle} - C_3 \, e^{tC_1} \prod_{\alpha \in \Phi^+_\lambda} (1 + 2 t \langle \alpha, y \rangle ) \bigg)$$ for large $t$ and some constants $C_2, C_3 > 0$. For $t$ sufficiently large, the quantity on the right-hand side above is positive, proving that $\phi^-_{- i \mu}(x) > \phi^-_{- i \lambda}(x)$ for some $x \in X$, as desired. We next prove that (i) implies (ii). It suffices to show that the function $f_x(\lambda) = \phi^-_{i \lambda}(x)$ is $W$-convex. As in the proof of Theorem \ref{thm:H-W-convexity}, we use the result of \cite[Theorem 1]{GW}, which states that a $W$-invariant, convex function is $W$-convex. By Theorem \ref{thm:hc-integral-spherical}, $f_x$ is $W$-invariant, so we need only prove that $f_x$ is convex, for which it suffices to check midpoint convexity. Write $x = [g]$ for some $g \in G$. 
Using the integral representation (\ref{eqn:hc-integral-spherical}) and the inequality of arithmetic and geometric means, we find: \begin{align*} f_{x} \bigg(\frac{1}{2}(\lambda + \mu) \bigg) &= \int_K e^{\langle \rho - (\lambda + \mu)/2, \, a(kg) \rangle} dk \\ &= \int_K e^{\langle \rho, a(kg) \rangle} \sqrt{ e^{- \langle \lambda, a(kg) \rangle} e^{- \langle \mu, a(kg) \rangle} } \, dk \\ & \ge \frac{1}{2} \int_K e^{\langle \rho, a(kg) \rangle} ( e^{- \langle \lambda, a(kg) \rangle} + e^{- \langle \mu, a(kg) \rangle} ) \, dk \\ &= \frac{1}{2} \big( f_x(\lambda) + f_x(\mu) \big), \end{align*} which shows that $f_x$ is convex, completing the proof. \end{proof} \begin{remark} \label{rem:bounded-phi} \normalfont There are parallels between Theorem \ref{thm:spherical-majorization} and a celebrated result of Helgason and Johnson \cite{HJ}, which classifies the bounded spherical functions on $X^-$. Write $\mathrm{Re} \lambda,\, \mathrm{Im} \lambda \in \mathfrak{a}$ for the real and imaginary parts of $\lambda \in \mathfrak{a}_\C$. Although Helgason and Johnson do not use the terminology of $W$-majorization, their theorem states that $\phi^-_\lambda$ is bounded if and only if $\rho \succeq \mathrm{Im} \lambda.$ More recently, this result has been generalized to a classification of bounded Heckman--Opdam hypergeometric functions \cite{NPP}. Theorem \ref{thm:spherical-majorization} and the Helgason--Johnson theorem are independent but related. 
The boundedness of $\phi^-_\lambda$ when $\rho \succeq \mathrm{Im} \lambda$ follows from Theorem \ref{thm:spherical-majorization} once one observes from the integral formula (\ref{eqn:hc-integral-spherical}) that $|\phi^-_\lambda| \le \phi^-_{i \, \mathrm{Im} \lambda}$ and that $\phi^-_{i \rho} \equiv 1.$ On the other hand, Theorem \ref{thm:spherical-majorization} by itself does not imply that all other $\phi^-_\lambda$ are unbounded (though this can be deduced from the estimate (\ref{eqn:spherical-asymp})), while the Helgason--Johnson theorem does not tell us that $W$-majorization actually induces a partial order on {\it all} $\phi^-_\lambda$ with $\lambda \in i \mathfrak{a}$. \end{remark} \subsection{Euclidean symmetric spaces} The spherical functions on the Euclidean symmetric space $X^0 \cong \mathfrak{p}$ are precisely the functions \begin{equation} \label{eqn:spherical-typeI} \phi^0_\lambda(x) = \lim_{\varepsilon \to 0} \phi^-_{\lambda / \varepsilon}([e^{\varepsilon x}]) = \int_K e^{i \langle \lambda, \mathrm{Ad}_k x \rangle} dk, \qquad x \in \mathfrak{p}, \end{equation} as $\lambda$ ranges over $\mathfrak{a}_\C$; see \cite[ch.~4, Proposition 4.8]{GGA}. Taking the limit (\ref{eqn:spherical-typeI}) in the proof of Theorem \ref{thm:spherical-majorization}, we obtain the following. \begin{proposition} \label{prop:euclidean-majorization} Let $X^0$ be a Euclidean symmetric space. For any $\lambda, \mu \in \mathfrak{a}$, the following are equivalent: \begin{enumerate} \item $\lambda \succeq \mu$, \item $\phi^0_{i \lambda}(x) \ge \phi^0_{i \mu}(x)$ for all $x \in X$. \end{enumerate} \end{proposition} In particular, Theorem \ref{thm:H-W-convexity} is a special case of Proposition \ref{prop:euclidean-majorization}, corresponding to the Euclidean symmetric space described in Example \ref{ex:alg-spherical-funcs}. \subsection{Compact symmetric spaces} \label{sec:cpt-sym} We now consider the compact symmetric space $X^+ = G^+/K$. 
Let $V_\lambda$ be the irreducible $G^+$-representation with highest weight $\lambda$, and $\chi_\lambda$ its character. If $V_\lambda$ contains a nontrivial $K$-fixed vector, we say that $V_{\lambda}$ is a {\it spherical} representation and $\lambda$ is a spherical highest weight. By \cite[ch.~4, Theorem 4.2]{GGA}, the spherical functions on $X^+$ are precisely the functions \begin{equation} \label{eqn:cpt-spherical} \phi^+_\lambda([g]) = \int_K \chi_\lambda(g^{-1}k) \, dk, \qquad g \in G^+, \end{equation} where $\chi_\lambda$ is the character of an irreducible spherical representation of $G^+$. Here we depart in two ways from the conventions used above in Section \ref{sec:majorization}. First, we now use the notation $\langle \cdot, \cdot \rangle$ to indicate the Killing form, which restricts to a {\it negative}-definite form on ${\mathfrak g}^+$ rather than an inner product. Second, we now regard the roots and weights of $G^+$ as {\it imaginary}-valued linear functionals on a Cartan subalgebra $\tot \subset {\mathfrak g}^+$ with $i \mathfrak{a} \subset \tot$. We then use the Killing form to identify the weights and roots with elements of $i \tot$. With these conventions, the spherical highest weights of $G^+$ correspond to certain lattice points in $\mathfrak{a} \subset i \tot$; see \cite[ch.~5 \textsection 4]{GGA}. By \cite[ch.~5, Theorem 4.1 and Corollary 4.2]{GGA}, if $G^+$ is simply connected and semisimple then the spherical highest weights are exactly those $\lambda \in \mathfrak{a}$ satisfying \begin{equation} \label{eqn:lambda-reqs} \frac{\langle \lambda, \alpha \rangle}{\langle \alpha, \alpha \rangle} \in {\mathbb Z}_{\ge 0} \quad \textrm{for all } \alpha \in \Phi^+, \end{equation} where $\Phi^+$ are the positive restricted roots of $X^-$.
Here as well, given $\lambda, \mu \in \mathfrak{a}$, we write $\lambda \succeq \mu$ to indicate that $\lambda$ $W$-majorizes $\mu$, where $W$ is the Weyl group of the {\it restricted} root system of $X^-$ (rather than the root system of $G^+$). The function $\phi^+_\lambda$ can be analytically continued to the complexification $G_\C$, so that we may evaluate $\phi^+_\lambda([e^x])$ for any $x \in \tot_\C$. We then have the following majorization inequality. \begin{proposition} \label{prop:compact-majorization} Let $\lambda, \mu \in \mathfrak{a}$ be two spherical highest weights of $G^+$. The following are equivalent: \begin{enumerate} \item $\lambda \succeq \mu$, \item $\phi^+_\lambda([e^{ix}]) \ge \phi^+_\mu([e^{ix}])$ for all $x \in \tot$. \end{enumerate} \end{proposition} \begin{proof} Consider the spherical function $\phi^-_{-i(\lambda - \rho)}$ on $X^-$, regarded as a function on the non-compact group $G$. When $\lambda$ is a spherical highest weight, this function also admits an analytic continuation to $G_\C$, which coincides with $\phi^+_\lambda$; see \cite[\textsection 4]{LV}. Since $\tot = \tot \cap \mathfrak{k} + i \mathfrak{a}$ and $[e^{y + x}] = [e^x] \in X^+$ for $y \in \tot \cap \mathfrak{k}$, we can take $x \in i \mathfrak{a}$, so that $e^{ix} \in G$. The desired result is then immediate from Theorem \ref{thm:spherical-majorization}. \end{proof} \subsection{Application to orthogonal polynomials} Let us explain how Proposition \ref{prop:compact-majorization} implies the results of \cite{CGS,Sra} relating majorization and Schur polynomials, which we stated above in Theorem \ref{thm:CGSS}. Young diagrams with $N$ rows index the irreducible polynomial representations of $\U(N)$. 
If $\lambda$ is a Young diagram and $V_\lambda$ is the corresponding irreducible representation, we can identify $\R^N$ with a Cartan subalgebra in $\mathfrak{u}(N)$ such that \begin{equation} \label{eqn:schur-char} S_\lambda(e^{iy_1}, \hdots, e^{iy_N}) = \frac{\chi_\lambda(e^y)}{\dim V_\lambda}, \qquad y = (y_1, \hdots, y_N) \in \R^N. \end{equation} If we then regard the group $\U(N)$ as a compact symmetric space as in Example \ref{ex:cpt-gp-symspace}, we find \begin{equation} \label{eqn:spherical-schur} \phi^+_\lambda([e^{iy}]) = S_\lambda(e^{y_1}, \hdots, e^{y_N}), \end{equation} that is, in an appropriate choice of coordinates, the spherical functions are normalized Schur polynomials. Writing $x_i = e^{y_i}$, Proposition \ref{prop:compact-majorization} then yields Theorem \ref{thm:CGSS} under the stricter assumption that all $x_1, \hdots, x_N > 0$. Since Schur polynomials are continuous, we can relax this to $x_1, \hdots, x_N \ge 0$, which proves Theorem \ref{thm:CGSS} in full. A related characterization of \emph{weak majorization} in terms of Schur polynomials was recently obtained by Khare and Tao in \cite{KhTao}. Schur polynomials are not unique in this regard. Many families of orthogonal polynomials, such as the real and quaternionic zonal polynomials, can be realized as spherical functions on compact symmetric spaces. In fact, Vretare \cite{LV} showed that {\it every} compact symmetric space gives rise to an associated family of orthogonal polynomials, which express the spherical functions via a relation similar to (\ref{eqn:spherical-schur}). In all such cases, Proposition \ref{prop:compact-majorization} gives an inequality for the orthogonal polynomials that is analogous to Theorem \ref{thm:CGSS}. 
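The Schur polynomial case of these inequalities is easy to probe numerically. The following sketch (our own illustration, not part of the cited works) evaluates the normalized Schur polynomials via the classical bialternant formula $s_\lambda(x) = \det(x_i^{\lambda_j + N - j})/\det(x_i^{N-j})$, normalizes by $|\mathrm{SSYT}(\lambda,N)| = s_\lambda(1,\dots,1)$ computed from the Weyl dimension formula, and checks the inequality of Theorem \ref{thm:CGSS} on random positive inputs for the majorization pair $\lambda = (3,1,0) \succeq \mu = (2,2,0)$; all function names are ours.

```python
import itertools
import random

def det(m):
    # Leibniz-formula determinant; adequate for the tiny matrices used here.
    n = len(m)
    total = 0.0
    for perm in itertools.permutations(range(n)):
        inversions = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        prod = 1.0
        for i in range(n):
            prod *= m[i][perm[i]]
        total += (-1) ** inversions * prod
    return total

def normalized_schur(lam, x):
    """S_lam(x) = s_lam(x) / |SSYT(lam, N)| via the bialternant formula.

    The normalization |SSYT(lam, N)| = s_lam(1, ..., 1) is computed with the
    Weyl dimension formula; the x_i are assumed positive and distinct."""
    N = len(x)
    num = det([[xi ** (lam[j] + N - 1 - j) for j in range(N)] for xi in x])
    den = det([[xi ** (N - 1 - j) for j in range(N)] for xi in x])
    dim = 1.0
    for i in range(N):
        for j in range(i + 1, N):
            dim *= (lam[i] - lam[j] + j - i) / (j - i)
    return num / den / dim

def majorizes(lam, mu):
    # lam >= mu in the majorization order on diagrams with equally many cells.
    return sum(lam) == sum(mu) and \
        all(sum(lam[:r]) >= sum(mu[:r]) for r in range(1, len(lam) + 1))

# lambda = (3,1,0) majorizes mu = (2,2,0), so S_lambda >= S_mu pointwise
# on positive inputs; check on random samples (small tolerance for rounding).
lam, mu = (3, 1, 0), (2, 2, 0)
assert majorizes(lam, mu)
random.seed(0)
for _ in range(500):
    x = [random.uniform(0.1, 5.0) for _ in range(3)]
    assert normalized_schur(lam, x) >= normalized_schur(mu, x) - 1e-8
```

The same check applies verbatim to any pair of diagrams with the same number of cells, and swapping the roles of $\lambda$ and $\mu$ produces violations for suitable inputs, as the theorem predicts.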
\begin{remark} \normalfont One can also deduce Theorem \ref{thm:CGSS} directly from Theorem \ref{thm:H-W-convexity} by using the Kirillov character formula for compact groups \cite[ch.~5]{AK} to write the Schur polynomials in terms of integrals of the form (\ref{eqn:L-def}). Since the arguments in (\ref{eqn:L-def}) are not constrained to lie in the weight lattice, in the case $G=\U(N)$ Theorem \ref{thm:H-W-convexity} in fact entails an extension of Theorem \ref{thm:CGSS} to generalized Schur polynomials with non-integer exponents, which was previously observed by Khare and Tao \cite{KhTao2}. Corresponding statements for the characters of other compact groups are also immediate from Theorem \ref{thm:H-W-convexity}. \end{remark} \section{Hypergeometric functions} \label{sec:conjecture} The Heckman--Opdam hypergeometric functions are a family of special functions associated to root systems, which generalize the classical Gauss hypergeometric function to higher dimensions. They are eigenfunctions of the hyperbolic quantum Calogero--Sutherland Hamiltonian and were introduced in the paper \cite{RSHF1} in order to prove the complete integrability of quantum Calogero--Sutherland models. Many special functions of interest can be expressed via limits or specializations of Heckman--Opdam hypergeometric functions, including the spherical functions on all Riemannian symmetric spaces of non-compact type. Also in \cite{RSHF1}, Heckman and Opdam defined the multivariable Jacobi polynomials, now known as Heckman--Opdam polynomials. These are closely related to hypergeometric functions and generalize numerous widely studied families of orthogonal polynomials, such as Schur and Jack polynomials. In this final section, we conjecture that Heckman--Opdam hypergeometric functions satisfy a fundamental monotonicity property with respect to $W$-majorization. If true, this conjecture would unify and generalize all of the majorization results discussed in this paper. 
We prove one of the two directions of implication that comprise the conjecture, and we show that the full conjecture holds in rank one. Just as Heckman--Opdam hypergeometric functions generalize the spherical functions $\phi^-_{\lambda}$ on different symmetric spaces of non-compact type, the Heckman--Opdam polynomials generalize the functions $\phi^+_\lambda$, up to some differences in normalization. Similarly, another related class of functions, the generalized Bessel functions, interpolate between the functions $\phi^0_\lambda$ on different Euclidean symmetric spaces. If the conjecture is true, then analogous results hold for both generalized Bessel functions and Heckman--Opdam polynomials. In particular, we show below that the conjecture would immediately imply an analogue for Heckman--Opdam polynomials of the Schur polynomial inequality proved in \cite{CGS,Sra}. To define the Heckman--Opdam hypergeometric functions, we first must fix some preliminary data. Here we largely follow the conventions of Anker \cite{AnkerDunklNotes} and Heckman and Schlichtkrull \cite{HS}. Let $V \cong \R^r$ be a Euclidean space with inner product $\langle \cdot, \cdot \rangle$, $\Phi \subset V$ a (crystallographic) root system spanning $V$, and $W$ the Weyl group acting on $V$ by reflections in the root hyperplanes. The Heckman--Opdam hypergeometric function $F_{k,\lambda}$ depends on a point $\lambda$ in the complexification $V_\C$, as well as on a {\it multiplicity parameter}, which is a function $k: \Phi \to \C$ such that $k_{w \cdot \alpha} = k_\alpha$ for $w \in W$. Unless stated otherwise, we assume in what follows that $\lambda \in V$ and that all $k_\alpha$ are nonnegative real numbers. We now define $F_{k,\lambda}$ in terms of solutions to certain differential-difference equations. Fix a choice of positive roots $\Phi^+$.
For $\alpha \in \Phi^+$ let $s_\alpha$ be the reflection through the hyperplane $\{x \in V \ | \ \langle \alpha, x \rangle = 0 \},$ and define $\rho^{(k)} = \frac{1}{2} \sum_{\alpha \in \Phi^+} k_\alpha \alpha.$ \begin{definition} \label{def:cherednik-op} \normalfont For $y \in V$, the \emph{Cherednik operator} $D_{k,y}$ is the differential-difference operator \begin{equation} D_{k,y} = \partial_y + \sum_{\alpha \in \Phi^+} \langle y, \alpha \rangle k_\alpha \frac{1}{1 - e^{-\alpha}} (1 - s_\alpha) - \langle y, \rho^{(k)} \rangle. \end{equation} \end{definition} The Cherednik operators were originally defined and studied in \cite{Ch1, Ch2}. For details of their properties, we refer the reader to these papers as well as to \cite[\textsection 4]{AnkerDunklNotes} and \cite[\textsection 2]{Op}. Here we need only the following fact: when all $k_\alpha$ are nonnegative, for any $\lambda \in V_\C$ there is a unique smooth function $G_{k,\lambda}$ on $V$ satisfying the system of differential-difference equations \begin{equation} \label{eqn:opdam-HG-eqn} D_{k,y} G_{k,\lambda} = \langle y, \lambda \rangle G_{k,\lambda} \quad \textrm{for all } y \in V \end{equation} and normalized so that $G_{k,\lambda}(0) = 1$. \begin{definition} \label{def:hgf} \normalfont For any nonnegative real multiplicity parameter $k$ and any $\lambda \in V_\C$, the {\it Heckman--Opdam hypergeometric function} $F_{k, \lambda}$ is defined as \begin{equation} \label{eqn:hgf-def} F_{k, \lambda}(x) = \frac{1}{|W|} \sum_{w \in W} G_{k,\lambda}(w(x)). \end{equation} \end{definition} The functions $F_{k, \lambda}$ unify and interpolate between many widely studied special functions, as illustrated in the following examples. \begin{example} \label{ex:rank1} \normalfont In the 1-dimensional case where $V \cong \R$, the root system $\Phi$ can be either $A_1$ or $BC_1$. 
For $BC_1$ there are two Weyl orbits $\{ \pm 1 \}$, $\{ \pm 2 \} \subset \R$, and the Heckman--Opdam hypergeometric function reduces to the Gauss hypergeometric function: \begin{equation} \label{eqn:HO-to-Gauss} F_{k, \lambda}(x) = {}_{2}F_1 \Big( \frac{k_1}{2} + k_2 + \lambda, \, \frac{k_1}{2} + k_2 - \lambda ; \, k_1 + k_2 + \frac{1}{2} ; \, - \sinh^2 \frac{x}{2} \Big). \end{equation} The Heckman--Opdam hypergeometric function for $A_1$ corresponds to the special case \mbox{$k_2 = 0$}. \end{example} \begin{example} \label{ex:symspace-HGF} \normalfont Suppose $\Psi$ is the restricted root system of a symmetric space $X = G/K$ of non-compact type, and $m_\alpha = \dim \mathfrak{g}_\alpha$ for each $\alpha \in \Psi$. Take $V = \mathfrak{a}$, $\Phi = \{ 2 \alpha \ | \ \alpha \in \Psi \}$ and $k_{2\alpha} = \frac{1}{2} m_{\alpha}$. Then \begin{equation} \label{eqn:symspace-HGF} \phi^-_{\lambda}([e^x]) = F_{k, i \lambda }(x), \qquad x \in \mathfrak{a}, \quad \lambda \in \mathfrak{a}_\C. \end{equation} See \cite[\textsection 4]{AnkerDunklNotes}. \end{example} \begin{example} \label{ex:gen-bessel} \normalfont The generalized Bessel function $J_{k, \lambda}$ on $V$ can be obtained as the {\it rational limit} of $F_{k,\lambda}$: \begin{equation} J_{k, \lambda}(x) = \lim_{\varepsilon \to 0} F_{k, \lambda / \varepsilon}(\varepsilon x). \end{equation} From the previous example and the relation (\ref{eqn:spherical-typeI}), it is clear that $J_{k, \lambda}$ generalizes the spherical functions on Euclidean symmetric spaces in the same way that $F_{k, \lambda}$ generalizes the spherical functions on symmetric spaces of non-compact type. See \cite[\textsection 3 and \textsection 4.4]{AnkerDunklNotes} for details on generalized Bessel functions and the rational limit. 
\end{example} For any multiplicity parameter $k$, we define the function \begin{equation} \label{eqn:delta-k-def} \delta_k(x) = \prod_{\alpha \in \Phi^+} (e^{\langle \alpha, x \rangle/2} - e^{- \langle \alpha,x \rangle/2})^{k_\alpha}. \end{equation} \begin{example} \label{ex:HGF-to-L} \normalfont When $\Phi$ is reduced, we can identify $V$ with a Cartan subalgebra $\tot$ of a compact semisimple Lie algebra ${\mathfrak g}$ with root system $\Phi$. We write $k = \vec 1$ for the multiplicity parameter with $k_\alpha = 1$ for all $\alpha \in \Phi$. Then \begin{equation} \label{eqn:HGF-to-L} F_{\vec 1, \lambda}(x) = \frac{\Delta_{\mathfrak g}(x)}{\delta_{\vec 1}(x)} L_{\lambda}(x), \qquad x \in \tot. \end{equation} \end{example} The notion of $W$-majorization is defined in this setting just as in Definition \ref{def:W-majorization}. We conjecture the following monotonicity property for Heckman--Opdam hypergeometric functions with respect to $W$-majorization. \begin{conjecture} \label{conj:HGF-majorization} Let $\lambda, \mu \in V$ and let $k$ be a nonnegative real multiplicity parameter. The following are equivalent: \begin{enumerate} \item $\lambda \succeq \mu$, \item $F_{k,\lambda}(x) \ge F_{k,\mu}(x)$ for all $x \in V$. \end{enumerate} \end{conjecture} Here we show one half of the conjecture, namely that (ii) implies (i), using known sharp asymptotics for $F_{k,\lambda}$. We then give an elementary proof of the conjecture in rank one, where it amounts to an inequality for the Gauss hypergeometric function. \begin{proposition} \label{prop:HGF-majorization-onlyif} Take $\lambda, \mu$ and $k$ as in Conjecture \ref{conj:HGF-majorization}. If $F_{k,\lambda}(x) \ge F_{k,\mu}(x)$ for all $x \in V$, then $\lambda \succeq \mu$. \end{proposition} \begin{proof} Again we show the contrapositive.
Suppose $\lambda \not \succeq \mu$, and again use hyperplane separation to obtain a $y \in V$ and $C_1 > 0$ such that $\langle \mu, y \rangle > C_1$ and $\langle w(\lambda), y \rangle < C_1$ for all $w \in W$. Without loss of generality, we take $\lambda$, $\mu$ and $y$ to be dominant. We have the following sharp asymptotic estimate, due to Schapira \cite[Remark 3.1]{SchapiraHGF} and Narayanan--Pasquale--Pusti \cite[Theorem 3.4]{NPP}: \begin{equation} \label{eqn:HGF-asymp} F_{k,\lambda}(ty) \ \asymp \ e^{t \langle \lambda - \rho^{(k)}, \, y \rangle} \prod_{\alpha \in \Phi^+_\lambda} (1 + t \langle \alpha, y \rangle ) \end{equation} as $t \to +\infty,$ with $\Phi^+_\lambda$ as defined above in (\ref{eqn:phi-plus-lambda}) and $\asymp$ as defined immediately after. We thus find that $$F_{k, \mu}(ty) - F_{k, \lambda}(ty) \ > \ e^{-t \langle \rho^{(k)}, \, y \rangle} \bigg( C_2 \, e^{t \langle \mu, y \rangle} - C_3 \, e^{t C_1} \prod_{\alpha \in \Phi^+_\lambda} (1 + t \langle \alpha, y \rangle ) \bigg)$$ for large $t$ and some constants $C_2, C_3 > 0$. For $t$ sufficiently large, the quantity on the right-hand side above is positive, which implies that $F_{k,\lambda}(x) < F_{k,\mu}(x)$ for some $x \in V$, completing the proof. \end{proof} \begin{remark} \label{rem:HO-integral-formulae} \normalfont As in the proof of Theorem \ref{thm:spherical-majorization}, to complete the proof of Conjecture \ref{conj:HGF-majorization} it suffices to check that the function $\lambda \mapsto F_{k,\lambda}(x)$ is midpoint-convex. However, integral representations analogous to (\ref{eqn:hc-integral-spherical}) for general Heckman--Opdam hypergeometric functions are not known, so we cannot directly apply the same technique. 
Although there are known integral expressions in certain cases where the multiplicity parameter does not correspond to a symmetric space (see e.g.~\cite{RoslerVoit, Sun}), these are more complicated than (\ref{eqn:hc-integral-spherical}) and have so far resisted a similar analysis. A more promising approach to a general proof of Conjecture \ref{conj:HGF-majorization} might be to use hypergeometric differential equations, as illustrated in the following proposition. Another approach could be to analyze a series expansion for the hypergeometric function as in \cite{NPP}. \end{remark} \begin{proposition} \label{prop:HGF-rank1} When $\dim V = 1$, Conjecture \ref{conj:HGF-majorization} is true. \end{proposition} \begin{proof} In light of Proposition \ref{prop:HGF-majorization-onlyif}, we need only show that (i) implies (ii) in Conjecture \ref{conj:HGF-majorization}. Following the discussion in Example \ref{ex:rank1}, it is sufficient to consider the case $\Phi = BC_1$. It is a classical result that the Gauss hypergeometric function $F(z) = {}_2 F_1(a, b; c; z)$ satisfies Euler's hypergeometric equation: $$ z(1-z) \frac{d^2F}{dz^2} + [c - (a+b+1)z] \frac{dF}{dz} - ab \, F = 0.$$ Comparing to (\ref{eqn:HO-to-Gauss}), we find: \begin{equation} \label{eqn:HGDE-rank1} F''_{k,\lambda}(x) + \Big( k_1 \coth \frac{x}{2} + 2 k_2 \coth x \Big) F'_{k, \lambda}(x) + \Big[ \Big(\frac{k_1}{2} + k_2 \Big)^2 - \lambda^2 \Big] F_{k,\lambda}(x) = 0, \end{equation} for $\lambda, x \in \R$. The function $F_{k,\lambda}$ is determined by (\ref{eqn:HGDE-rank1}) and by the initial conditions \begin{equation} \label{eqn:rank1-ICs} F'_{k,\lambda}(0) = 0, \qquad F_{k, \lambda}(0) = 1. \end{equation} The above initial conditions follow directly from the definition (\ref{eqn:hgf-def}) in rank one: we must have $F'_{k,\lambda}(0) = 0$ because $F_{k, \lambda}$ is $W$-invariant (i.e.~even), and we must have $F_{k, \lambda}(0) = 1$ due to the stipulation that $G_{k, \lambda}(0) = 1$. 
Suppose $\lambda \succeq \mu$, which in dimension one just means that $|\lambda| \ge |\mu|$. Since the equation (\ref{eqn:HGDE-rank1}) depends only on $\lambda^2$ and not on the sign of $\lambda$, we find $F_{k, -\lambda} = F_{k, \lambda}$; in particular, if $|\lambda| = |\mu|$ then $F_{k, \lambda} = F_{k, \mu}$ and there is nothing to prove, so we may assume $|\lambda| > |\mu|$. We will show that $F_{k, \lambda}(x) \ge F_{k, \mu}(x)$ for all $x \in \R$, with equality only at $x = 0$. Since $F_{k, \lambda}$ is even, it suffices to consider $x \ge 0$. From (\ref{eqn:HGDE-rank1}) and the initial conditions (\ref{eqn:rank1-ICs}), we have: $$F_{k, \lambda}(0) = F_{k, \mu}(0) = 1,$$ $$F'_{k,\lambda}(0) = F'_{k,\mu}(0) = 0,$$ $$F''_{k,\lambda}(0) = \lambda^2 - \Big(\frac{k_1}{2} + k_2 \Big)^2 > \mu^2 - \Big(\frac{k_1}{2} + k_2 \Big)^2 = F''_{k,\mu}(0).$$ Therefore there is some $\varepsilon > 0$ such that for all $x \in (0, \varepsilon)$, $$F_{k, \lambda}(x) > F_{k, \mu}(x), \qquad F'_{k, \lambda}(x) > F'_{k, \mu}(x), \qquad F''_{k, \lambda}(x) > F''_{k, \mu}(x).$$ Consider the set $E = \{ x > 0 \ | \ F_{k, \lambda}(x) = F_{k, \mu}(x) \}.$ If $E$ is empty, then $F_{k, \lambda}(x) > F_{k, \mu}(x)$ for all $x > 0$, and there is nothing to prove. Assume for the sake of contradiction that $E$ is not empty, and let $x_0 = \inf E.$ Clearly $x_0 > \varepsilon.$ Since $F_{k, \lambda}$ and $F_{k, \mu}$ are continuous, we have $F_{k, \lambda}(x_0) = F_{k, \mu}(x_0)$, and \begin{equation} \label{eqn:F-x0} F_{k, \lambda}(x) > F_{k, \mu}(x) \textrm{ for all } x \in (0, x_0). \end{equation} However, since $$F_{k, \lambda}(x_0) = 1 + \int_{0}^{x_0} F'_{k, \lambda}(\tau) \, d\tau, \qquad F_{k, \mu}(x_0) = 1 + \int_{0}^{x_0} F'_{k, \mu}(\tau) \, d\tau,$$ and $F'_{k, \lambda} > F'_{k, \mu}$ on $(0, \varepsilon)$, in order to have $F_{k, \lambda}(x_0) = F_{k, \mu}(x_0)$ there must be some $x_1 \in (\varepsilon, x_0)$ such that $F'_{k, \lambda}(x_1) < F'_{k, \mu}(x_1)$.
By the intermediate value theorem and by (\ref{eqn:F-x0}), there must then be some $x_2 \in (\varepsilon, x_1)$ such that all of the following hold: \[ \textrm{(a) } F'_{k, \lambda}(x_2) = F'_{k, \mu}(x_2), \qquad \textrm{(b) } F''_{k, \lambda}(x_2) < F''_{k, \mu}(x_2), \qquad \textrm{(c) } F_{k, \lambda}(x_2) > F_{k, \mu}(x_2). \] Subtracting (\ref{eqn:HGDE-rank1}) from the corresponding equation for $F_{k, \mu}$ and using condition (a) above to cancel terms, we find $$ F''_{k,\mu}(x_2) - F''_{k,\lambda}(x_2) + \Big[ \Big(\frac{k_1}{2} + k_2 \Big)^2 - \mu^2 \Big] F_{k,\mu}(x_2) - \Big[ \Big(\frac{k_1}{2} + k_2 \Big)^2 - \lambda^2 \Big] F_{k,\lambda}(x_2) = 0.$$ But by (b), (c), and the assumption that $|\lambda| > |\mu|$, the left-hand side above must be positive, yielding a contradiction and completing the proof. \end{proof} We next turn our attention from hypergeometric functions to the closely related family of Heckman--Opdam polynomials, which we now define. For $\alpha \in \Phi$, write $$\alpha^\vee = \frac{2 \alpha}{\langle \alpha, \alpha \rangle}.$$ The fundamental weights $\omega_1, \hdots, \omega_r$ are defined by $\langle \omega_i, \alpha_j^\vee \rangle = \delta_{ij},$ where $\alpha_1, \hdots, \alpha_r$ are the simple roots. They span the {\it weight lattice} $P \subset V$. The {\it dominant integral weights} are the lattice points $P^+ \subset P$ that lie in the dominant Weyl chamber. The Heckman--Opdam polynomials $P_{k,\lambda}$ depend on a nonnegative multiplicity parameter $k$ and a dominant integral weight $\lambda \in P^+$. They are elements of $\R[P]$, the group algebra of the weight lattice, and are therefore polynomials in an abstract-algebraic sense. However, it is typical to identify $\R[P]$ with the algebra spanned by the functions $e^{\langle \lambda, x \rangle}$, $\lambda \in P$, so that as functions on $V$ the Heckman--Opdam polynomials are actually {\it exponential} polynomials. 
We write an element $f \in \R[P]$ as $f = \sum_{\lambda \in P} f_\lambda e^\lambda$, where only finitely many $f_\lambda$ are nonzero, and set $$\bar f = \sum_{\lambda \in P} f_{-\lambda} e^\lambda.$$ Define a bilinear form $(\cdot, \cdot)_k$ on $\R[P]$ by $$( f, g )_k = (f \bar g \delta_k \bar \delta_k)_0,$$ which extracts the constant term (i.e.~the coefficient of $e^0 = 1$) in $f \bar g \delta_k \bar \delta_k$, where $\delta_k$ is the function defined in (\ref{eqn:delta-k-def}). This bilinear form is symmetric and positive definite, and therefore defines an inner product on $\R[P]$. For $\lambda \in P^+$, let $$M_\lambda = \frac{|W \cdot \lambda|}{|W|} \sum_{w \in W} e^{w(\lambda)}$$ be the monomial $W$-invariant (exponential) polynomial, and define $\mathrm{low}(\lambda)$ as the set of $\mu \in P^+$ such that $\lambda - \mu$ can be written as a linear combination of simple roots with non-negative integer coefficients. \begin{definition} \label{def:HO-polys} \normalfont For $\lambda \in P^+$, the {\it Heckman--Opdam polynomial} $P_{k, \lambda}$ is defined by \begin{equation} \label{eqn:HO-poly1} P_{k, \lambda} = \sum_{\mu \in \mathrm{low}(\lambda)} c_{\lambda \mu} M_\mu, \qquad c_{\lambda \lambda} = 1, \end{equation} and by the orthogonality relations \begin{equation} \label{eqn:HO-orth} (P_{k, \lambda} , \, M_\mu)_k = 0, \qquad \mu \in \mathrm{low}(\lambda), \ \mu \not = \lambda. \end{equation} Note that (\ref{eqn:HO-poly1}) and (\ref{eqn:HO-orth}) together determine the coefficients $c_{\lambda \mu} \in \R$ for $\mu \not = \lambda.$ \end{definition} As $\lambda$ ranges over $P^+$ with $k$ fixed, the $P_{k,\lambda}$ form an $\R$-basis of the algebra $\R[P]^W$ of $W$-invariant elements in $\R[P]$. Moreover this basis is orthogonal with respect to the inner product $(\cdot, \cdot)_k$. 
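For $\Phi = A_1$ the construction in Definition \ref{def:HO-polys} can be carried out concretely: $\R[P]$ is the algebra of Laurent polynomials in $t = e^{\omega_1}$, with $W = \{\pm 1\}$ acting by $t \mapsto t^{-1}$. The following Python sketch (the data representation and helper names are ours; we take $k_\alpha = k$ a nonnegative integer so that $\delta_k \bar\delta_k \in \R[P]$) builds $P_{k,\lambda}$ by Gram--Schmidt orthogonalization with respect to $(\cdot, \cdot)_k$ and, for $k = \vec 1$, recovers the $\mathfrak{su}(2)$ character $t^2 + 1 + t^{-2}$ as $P_{\vec 1, 2}$, consistent with Example \ref{ex:HO-poly-examples} below.

```python
from fractions import Fraction

def mul(f, g):
    # Product of Laurent polynomials, stored as {exponent: coefficient} dicts.
    h = {}
    for a, ca in f.items():
        for b, cb in g.items():
            h[a + b] = h.get(a + b, Fraction(0)) + ca * cb
    return {a: c for a, c in h.items() if c}

def bar(f):
    # The involution sending e^lam to e^{-lam}.
    return {-a: c for a, c in f.items()}

def ip(f, g, dd):
    # (f, g)_k = constant term of f * bar(g) * delta_k * bar(delta_k);
    # dd holds the precomputed product delta_k * bar(delta_k).
    return mul(mul(f, bar(g)), dd).get(0, Fraction(0))

def M(n):
    # Monomial W-invariant polynomial for A_1 (W = {+-1}), in t = e^{omega_1}.
    return {n: Fraction(1), -n: Fraction(1)} if n > 0 else {0: Fraction(1)}

def ho_poly(n, dd):
    # P_{k,n} = M_n + lower-order terms, orthogonal to all M_mu with
    # mu in low(n) \ {n} = {n-2, n-4, ...}; built by Gram-Schmidt.
    P = dict(M(n))
    for m in range(n - 2, -1, -2):
        Q = ho_poly(m, dd)
        coef = ip(M(n), Q, dd) / ip(Q, Q, dd)
        for a, c in Q.items():
            P[a] = P.get(a, Fraction(0)) - coef * c
    return {a: c for a, c in P.items() if c}

# For k = 1: delta_1 * bar(delta_1) = -(t - t^-1)^2 = 2 - t^2 - t^-2.
dd1 = {0: Fraction(2), 2: Fraction(-1), -2: Fraction(-1)}
# P_{1,2} should be the su(2) character of highest weight 2: t^2 + 1 + t^-2.
assert ho_poly(2, dd1) == {2: Fraction(1), 0: Fraction(1), -2: Fraction(1)}
```

Projecting against the lower $P_\mu$ rather than the $M_\mu$ is equivalent here, since the two families span the same flag of subspaces by the triangularity in (\ref{eqn:HO-poly1}).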
\begin{example} \label{ex:HO-poly-examples} \normalfont When all $k_\alpha$ are 0, $P_{0,\lambda} = M_\lambda.$ Let ${\mathfrak g}$ be a compact semisimple Lie algebra with root system $\Phi$, and identify $V$ with a Cartan subalgebra $\tot \subset {\mathfrak g}$. Let $G$ be the connected, simply connected Lie group with $\mathrm{Lie}(G) = {\mathfrak g}$. When $k = \vec 1$, $P_{\vec 1, \lambda}(ix) = \chi_\lambda(e^x)$ is the character of the irreducible $G$-representation with highest weight $\lambda$. When $\Phi = A_{N-1}$, there is a natural injective homomorphism from $\R[P]^W$ to the usual ring of symmetric polynomials in $N$ variables. This homomorphism is obtained by identifying $M_\lambda$ with the monomial symmetric polynomial $$m_\lambda(x_1, \hdots, x_N) = \frac{|S_N \cdot \lambda |}{N!} \sum_{\sigma \in S_N} \prod_{i=1}^N x_i^{\lambda_{\sigma(i)}}.$$ Under this identification, the Heckman--Opdam polynomials are Jack polynomials. In particular, for $k = \vec 1$, they are Schur polynomials. In light of Proposition \ref{prop:HO-majorization} below, Conjecture \ref{conj:HGF-majorization} therefore subsumes both the Schur polynomial inequality of \cite{CGS,Sra} and the classical Muirhead inequality for monomial symmetric polynomials \cite{Muirhead}. \end{example} Up to a normalizing factor, the Heckman--Opdam polynomials turn out to be specializations of the Heckman--Opdam hypergeometric function. In particular, for $\lambda \in V$, define \begin{equation} \label{eqn:tilde-c-def} \tilde c(\lambda, k) = \prod_{\alpha \in \Phi^+} \frac{\Gamma(\langle \lambda, \alpha^\vee \rangle + \frac{1}{2} k_{\frac{1}{2} \alpha})}{\Gamma(\langle \lambda, \alpha^\vee \rangle + \frac{1}{2} k_{\frac{1}{2} \alpha} + k_\alpha)}, \end{equation} where $k_{\frac{1}{2} \alpha} = 0$ if $\frac{1}{2} \alpha \not \in \Phi$.
Observe that if $\Phi$ is the root system of a compact Lie algebra ${\mathfrak g}$, we have $\tilde c(\lambda, \vec 1) = \Delta_{\mathfrak g}(\lambda)^{-1}.$ Set $$c(\lambda, k) = \frac{\tilde c(\lambda, k)}{\tilde c(\rho^{(k)}, k)}.$$ We then have the following relation between Heckman--Opdam polynomials and hypergeometric functions \cite[eq.~4.4.10]{HS}: \begin{equation} \label{eqn:P-to-F} F_{k,\lambda + \rho^{(k)}}(x) = c(\lambda + \rho^{(k)}, k) \, P_{k,\lambda}(x), \qquad x \in V, \ \lambda \in P^+. \end{equation} This relation generalizes the relation between the spherical functions $\phi^-_{-i(\lambda - \rho)}$ and $\phi^+_\lambda$ discussed in Section \ref{sec:cpt-sym}. It immediately yields the following proposition. \begin{proposition} \label{prop:HO-majorization} Let $\lambda, \mu \in P^+$. If Conjecture \ref{conj:HGF-majorization} holds, then the following are equivalent: \begin{enumerate} \item $\lambda \succeq \mu$, \item $\tilde c(\lambda + \rho^{(k)}, k) P_{k,\lambda}(x) \ge \tilde c(\mu + \rho^{(k)}, k) P_{k,\mu}(x)$ for all $x \in V$. \end{enumerate} \end{proposition} In other words, Conjecture \ref{conj:HGF-majorization} would imply a generalization of Proposition \ref{prop:compact-majorization} to the case of Heckman--Opdam polynomials. A similar generalization of Proposition \ref{prop:euclidean-majorization} to generalized Bessel functions would also follow immediately via the rational limit in Example \ref{ex:gen-bessel}. It is well known that Heckman--Opdam polynomials can be realized as a limit of Macdonald polynomials \cite{Mac}. It is interesting to speculate about whether Conjecture \ref{conj:HGF-majorization}, if it holds, is itself a manifestation of an even more general monotonicity property of Macdonald polynomials with respect to $W$-majorization.
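In rank one everything above is completely explicit, so Conjecture \ref{conj:HGF-majorization} can also be probed numerically. The sketch below (the parameter values $k_1 = 1.2$, $k_2 = 0.8$, $x = 1$ are arbitrary test choices, and \texttt{hyp2f1} is a naive truncated Gauss series, adequate here since $|{-\sinh^2(x/2)}| < 1$) evaluates $F_{k,\lambda}$ through (\ref{eqn:HO-to-Gauss}), observes the monotonicity in $|\lambda|$ established in Proposition \ref{prop:HGF-rank1}, and checks the differential equation (\ref{eqn:HGDE-rank1}) by central finite differences.

```python
import math

def hyp2f1(a, b, c, z, terms=300):
    # Naive truncated Gauss series for 2F1; adequate here since |z| < 1.
    s = t = 1.0
    for n in range(terms):
        t *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        s += t
    return s

def F(k1, k2, lam, x):
    # Rank-one (BC_1) hypergeometric function, via the Gauss reduction:
    # F_{k,lam}(x) = 2F1(k1/2 + k2 + lam, k1/2 + k2 - lam; k1 + k2 + 1/2; -sinh^2(x/2)).
    return hyp2f1(k1 / 2 + k2 + lam, k1 / 2 + k2 - lam,
                  k1 + k2 + 0.5, -math.sinh(x / 2) ** 2)

k1, k2, x = 1.2, 0.8, 1.0

# Monotonicity in |lambda|, as in the conjecture:
vals = [F(k1, k2, lam, x) for lam in (0.0, 0.9, 1.7)]
assert vals[0] < vals[1] < vals[2]

# Residual of the hypergeometric ODE, checked by central differences:
# F'' + (k1 coth(x/2) + 2 k2 coth x) F' + ((k1/2 + k2)^2 - lam^2) F = 0.
lam, h = 1.7, 1e-4
coth = lambda t: math.cosh(t) / math.sinh(t)
F0, Fp, Fm = F(k1, k2, lam, x), F(k1, k2, lam, x + h), F(k1, k2, lam, x - h)
residual = ((Fp - 2 * F0 + Fm) / h ** 2
            + (k1 * coth(x / 2) + 2 * k2 * coth(x)) * (Fp - Fm) / (2 * h)
            + ((k1 / 2 + k2) ** 2 - lam ** 2) * F0)
assert abs(residual) < 1e-4
```

The same evaluation routine, combined with the relation (\ref{eqn:P-to-F}), could equally well be used to test the Heckman--Opdam polynomial inequality of Proposition \ref{prop:HO-majorization} in rank one.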
\section*{Acknowledgements} Colin McSwiggen would like to thank Patrick McSwiggen for helpful discussions during the preparation of this manuscript. The work of Colin McSwiggen is partially supported by the National Science Foundation under Grant No.~DMS 1714187. The work of Jonathan Novak is partially supported by a Lattimer Fellowship, as well as by the National Science Foundation under Grant No.~DMS 1812288. Both authors would like to thank the anonymous referee for helpful comments. \bibliographystyle{alpha}